We cut your cost per token by 62%
Smart routing sends simple prompts to cheap models and complex ones to the most capable models. One API, 13+ providers, automatic fallbacks. Your AI stack, finally optimized.
Open source. MIT Licensed. No credit card required.






Works with every major provider
Everything you need to
ship AI at scale
Route, trace, evaluate, and distill — one platform, every provider.
One endpoint.
Every LLM provider.
One OpenAI-compatible API that routes to every major provider — OpenAI, Anthropic, Google, Mistral, Groq, and more. Swap providers in one line. Automatic fallbacks keep you online.
import opentracy as ot

# Call any model in one line
response = ot.completion(
    model="openai/gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello!"}],
    fallbacks=["anthropic/claude-3-haiku"],
)

print(response.choices[0].message.content)
print(f"Cost: ${response._cost:.6f}")

Route smarter.
Pay less per request.
Automatically send simple prompts to fast, cheap models and route complex reasoning to the most capable one — across any provider, no code changes needed.
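To make the idea concrete, here is a toy sketch of cost-aware routing: score a prompt's complexity with a simple heuristic, then pick a model tier. The model names, hints, and thresholds are illustrative assumptions, not opentracy's actual routing logic.

```python
# Illustrative sketch only: a toy complexity heuristic for cost-aware
# routing. Model names and thresholds are hypothetical examples, not
# opentracy's real routing implementation.

CHEAP_MODEL = "openai/gpt-4o-mini"        # fast, low cost per token
STRONG_MODEL = "anthropic/claude-3-opus"  # capable, higher cost

REASONING_HINTS = ("prove", "step by step", "analyze", "compare", "why")

def pick_model(prompt: str) -> str:
    """Route short, simple prompts to a cheap model; escalate the rest."""
    long_prompt = len(prompt.split()) > 200
    needs_reasoning = any(h in prompt.lower() for h in REASONING_HINTS)
    return STRONG_MODEL if (long_prompt or needs_reasoning) else CHEAP_MODEL

print(pick_model("Hello!"))                              # cheap tier
print(pick_model("Analyze this contract step by step"))  # strong tier
```

A production router would replace the keyword heuristic with a learned classifier, but the shape of the decision is the same: estimate difficulty, then spend accordingly.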

Know exactly where
every dollar goes.
Per-token pricing on 70+ models, broken down by model, user, or feature. Set budget alerts and hard caps — no more end-of-month surprises.
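Per-token accounting itself is simple arithmetic. The sketch below shows the calculation with example price figures (USD per million tokens); the price table and function are illustrative, not opentracy's internal schema.

```python
# Illustrative per-token cost accounting. Prices are example figures
# for two public models, not opentracy's actual price table.

PRICES_PER_1M = {  # USD per 1M tokens: (input, output)
    "openai/gpt-4o-mini": (0.15, 0.60),
    "anthropic/claude-3-haiku": (0.25, 1.25),
}

def request_cost(model: str, in_tokens: int, out_tokens: int) -> float:
    """Cost of one request: tokens times the per-million-token rate."""
    p_in, p_out = PRICES_PER_1M[model]
    return (in_tokens * p_in + out_tokens * p_out) / 1_000_000

cost = request_cost("openai/gpt-4o-mini", in_tokens=1200, out_tokens=400)
print(f"${cost:.6f}")  # $0.000420
```

Summing these per-request costs by model, user, or feature tag is what makes budget alerts and hard caps possible.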



Complete visibility
into every request.
Every request logged with full input, output, cost, latency, and model metadata. AI-powered scanning detects hallucinations before your users do.
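A trace record along these lines might carry the fields listed above. The field names and shape below are illustrative assumptions, not opentracy's actual log schema.

```python
# Sketch of a per-request trace record with the fields the page lists
# (model, input, output, cost, latency). Field names are hypothetical,
# not opentracy's real export format.
import json
from dataclasses import asdict, dataclass

@dataclass
class Trace:
    model: str
    input: str
    output: str
    cost_usd: float
    latency_ms: int

t = Trace(model="openai/gpt-4o-mini", input="Hello!",
          output="Hi there!", cost_usd=0.000021, latency_ms=312)
print(json.dumps(asdict(t)))  # one JSON line per request, ready to index
```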

Train your own model
from production data.
Turn production traces into fine-tuning datasets automatically. Get frontier-model quality from a model you own — at a fraction of the inference cost.
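The core transformation is traces in, training examples out. Here is a minimal sketch that filters logged traces by a quality score and emits examples in the common `{"messages": [...]}` chat fine-tuning format; the trace shape and score field are assumptions, not opentracy's export schema.

```python
# Sketch: turning logged traces into a chat fine-tuning dataset
# (JSONL, one {"messages": [...]} object per line). The trace fields
# below are hypothetical, not opentracy's actual schema.
import json

traces = [
    {"input": "Summarize: LLM routing cuts cost.",
     "output": "Routing sends easy prompts to cheap models.",
     "score": 0.95},
    {"input": "Translate 'hello' to French.",
     "output": "Bonjour", "score": 0.40},  # low score: filtered out
]

def to_dataset(traces, min_score=0.8):
    """Keep high-scoring traces and emit fine-tuning examples."""
    for t in traces:
        if t["score"] >= min_score:
            yield {"messages": [
                {"role": "user", "content": t["input"]},
                {"role": "assistant", "content": t["output"]},
            ]}

lines = [json.dumps(ex) for ex in to_dataset(traces)]
print(len(lines))  # 1
```

Filtering on a quality score before training is the key step: distilling from unfiltered traffic would teach the smaller model your worst outputs along with your best.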



Catch quality drops
before users do.
Continuous evaluations on production traffic detect regressions and hallucinations automatically.
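One simple way to detect a regression on live traffic is a rolling average of per-request quality scores compared against a baseline. The sketch below is a naive illustration; the window size, baseline, and thresholds are made-up numbers, not opentracy's evaluation pipeline.

```python
# Sketch: a naive regression detector over a stream of per-request
# quality scores. Window size and thresholds are illustrative only,
# not opentracy's evaluation logic.
from collections import deque

class RegressionDetector:
    def __init__(self, window=100, baseline=0.90, max_drop=0.05):
        self.scores = deque(maxlen=window)
        self.baseline = baseline
        self.max_drop = max_drop

    def observe(self, score: float) -> bool:
        """Record a score; return True once quality has regressed."""
        self.scores.append(score)
        avg = sum(self.scores) / len(self.scores)
        return avg < self.baseline - self.max_drop

det = RegressionDetector(window=5)
flags = [det.observe(s) for s in [0.95, 0.94, 0.60, 0.55, 0.50]]
print(flags)  # [False, False, True, True, True]
```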
Simple, predictable pricing
Start free. Scale when you need to. No surprises.
Free
For developers exploring LLM routing and observability.
- ✓ Up to 10,000 requests/month
- ✓ 3 distillation runs/month
- ✓ All 13+ providers
- ✓ Full trace logging
- ✓ Community support
Starter
For teams running LLMs in production.
- ✓ Unlimited requests
- ✓ Unlimited distillation
- ✓ Advanced evaluations
- ✓ AI quality scanning
- ✓ Priority support
- ✓ Team collaboration
- ✓ Self-host option
- ✓ Custom routing rules
Enterprise
For organizations that want a turnkey AI infrastructure with hands-on guidance.
- ✓ Everything in Starter
- ✓ 24/7 dedicated support
- ✓ Full setup done for you
- ✓ Implementation consulting
- ✓ Dedicated onboarding
- ✓ Custom API architecture review
- ✓ VPC deployment
- ✓ SSO / SAML
- ✓ Audit logs
- ✓ Custom SLAs
- ✓ On-premise option
- ✓ BYOK encryption
Join the community
Open source, open development. Build with us.
Open source. Self-host or cloud.
Run on your own infrastructure with full control, or use our managed cloud. MIT licensed, no vendor lock-in.
Free tier available. No credit card required.