Migration Audit.
20 hours. $5,000.
Reusable scaffold.
⬢ Three engagement patterns
OpenAI → Anthropic Migration
You're hitting GPT rate-limit, cost, or reliability ceilings and need to move workloads to Claude. We benchmark your top 5 prompt patterns head-to-head, deliver an annotated diff of behavioral deltas, and ship Terraform + adapter scaffolding. Most teams take 4-6 weeks to do this in-house. We deliver in 5 business days.
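The adapter scaffolding boils down to papering over request-shape deltas between the two APIs. A minimal sketch (illustrative, not the shipped module): OpenAI's chat-completions body carries the system prompt as a `role: "system"` message, while Anthropic's Messages API takes it as a top-level `system` field and requires `max_tokens`.

```python
def openai_to_anthropic(request: dict) -> dict:
    """Convert an OpenAI chat-completions request body into the shape
    Anthropic's Messages API expects. Behavioral deltas handled here:
    - system prompt moves from a message to a top-level `system` field
    - `max_tokens` is required by Anthropic, so default it if absent
    Model-name remapping is assumed to happen upstream."""
    system_parts = [m["content"] for m in request["messages"]
                    if m["role"] == "system"]
    chat = [m for m in request["messages"] if m["role"] != "system"]
    out = {
        "model": request["model"],
        "messages": chat,
        "max_tokens": request.get("max_tokens", 1024),  # illustrative default
    }
    if system_parts:
        out["system"] = "\n\n".join(system_parts)
    if "temperature" in request:
        out["temperature"] = request["temperature"]
    return out
```

The audit's annotated diff covers the behavioral deltas this kind of shim can't hide (sampling defaults, stop-sequence handling, refusal styles); the scaffold covers the mechanical ones.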
Single → Multi-Provider Routing
You're locked to one provider and want cost / reliability hedging. We run your top workloads across 4 frontier-model providers, surface where each wins by metric (cost · latency · quality on YOUR rubric), and ship a routing config + observability dashboard. Same scaffolding we use for our own Parliament.
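The routing config operationalizes the benchmark: pick the cheapest or fastest provider that clears your quality bar. A minimal sketch of that decision, with hypothetical provider names and numbers standing in for the figures an audit actually produces:

```python
# Illustrative per-provider benchmark figures (NOT real results) of the
# kind a cross-provider audit yields on the client's own rubric.
PROVIDERS = {
    "anthropic":  {"cost_per_1k": 0.015, "p50_latency_s": 1.2, "quality": 0.92},
    "openai":     {"cost_per_1k": 0.020, "p50_latency_s": 0.9, "quality": 0.90},
    "groq":       {"cost_per_1k": 0.004, "p50_latency_s": 0.3, "quality": 0.81},
    "openrouter": {"cost_per_1k": 0.010, "p50_latency_s": 1.5, "quality": 0.88},
}

def route(min_quality: float, cost_weight: float, latency_weight: float) -> str:
    """Return the provider minimizing a weighted cost/latency score,
    restricted to providers that clear the quality threshold."""
    eligible = {name: m for name, m in PROVIDERS.items()
                if m["quality"] >= min_quality}
    return min(eligible,
               key=lambda n: cost_weight * eligible[n]["cost_per_1k"]
                           + latency_weight * eligible[n]["p50_latency_s"])
```

The same weights become per-workload routing rules: a latency-sensitive chat flow and a cost-sensitive batch job can land on different providers from the same table.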
Frontier Model Eval on Your Workload
You can't decide between Claude 4.6 / GPT-5 / Gemini 3.1 / Llama 405B / DeepSeek V3.2 for production. We run your representative workloads through our 22-seat Parliament + Council annotation, deliver an outlier-failure report, and recommend the routing pattern that maximizes your specific value function.
⬢ What you get vs. what you don't
In scope (20 hours · 5 business days)
- 30-min scoping call · NDA signed if needed
- Workload analysis: 5 representative prompts/flows from your stack
- Cross-provider benchmark (top 4 frontier models on your workload)
- Annotated comparison report (cost · latency · quality deltas)
- Working scaffold: Terraform module + provider adapter + integration test
- Recommendation memo with go/no-go decision tree
- One review session at the end · 60 min · follow-up notes within 24h
Out of scope (engage separately)
- Full production migration execution (we deliver the scaffold · your team ships it)
- Custom model fine-tuning
- RAG architecture review (separate engagement)
- Compliance / SOC 2 / GDPR audit (separate engagement · we partner with specialists)
- On-call SLA / incident response (subscription tier)
- White-label resale of the scaffold (license separately)
⬢ Tiered packages
Strategy Call
- Architecture review of your current LLM stack
- Cost-benchmark estimate (where you're overpaying)
- Provider-rotation feasibility assessment
- Written recommendations memo within 48h
- Credited against any Migration Audit booked within 30 days
Migration Audit
- One pattern of your choice (OpenAI→Anthropic / Multi-Provider / Frontier Eval)
- 5 workload patterns benchmarked across 4 providers
- Working Terraform + adapter + integration test
- Annotated comparison report
- Recommendation memo with go/no-go decision tree
- 60-min review + 24h follow-up
Implementation Support
- Everything in the Migration Audit above
- Pair-programming for actual production migration
- Incident-response on-call for first 7 days post-launch
- Three follow-up reviews (week 2 · week 4 · week 8)
- Capped at 50 hours · overage billed at $200/hr
⬢ Why us
Production receipts, not slides. The team running this consultancy ships and operates the live MirzaTech Parliament — 22 frontier-model seats across 4 provider lanes (NVIDIA NIM · Groq · OpenRouter · Novita). The same routing logic, fallback patterns, and trace capture you'd want in YOUR stack are running in ours, in production, today.
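The fallback pattern itself is simple and worth seeing: try provider lanes in order, capture a trace entry for every attempt, and fall through on error. A minimal sketch under assumed names (the production version adds retries, timeouts, and structured trace export):

```python
def call_with_fallback(lanes, request):
    """Try each (name, callable) provider lane in order. On failure,
    record the error in the trace and fall through to the next lane.
    Returns (result, trace); raises only if every lane fails."""
    trace = []
    for name, call in lanes:
        try:
            result = call(request)
            trace.append((name, "ok"))
            return result, trace
        except Exception as exc:
            trace.append((name, f"error: {exc}"))
    raise RuntimeError(f"all lanes failed: {trace}")
```

The trace is what makes the pattern auditable: every production response carries a record of which lanes were tried and why the winner won.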
Reusable artifacts forced by the engagement model. Every audit produces a Terraform module + adapter + integration test committed to a git repo you own. Not a slide deck. Not a PDF. Code that runs.
Hard time cap is the feature. 20 hours / 5 business days / fixed fee. We over-deliver when scaffolding is reusable across clients (you benefit · we amortize). We exit when the cap hits. No scope creep. No surprise invoices.