The shortest path to adopting OpenTracy is to install nothing new in your app: just change base_url on your existing OpenAI client.

What this buys you

  • Every request your app already makes becomes an OpenTracy trace.
  • The engine can fan out to 13 providers — keep calling model="gpt-4o", or switch to model="anthropic/claude-sonnet-4-6" without touching your auth code.
  • Routing aliases become usable: later, point model="smart" at a distilled student without the app knowing.

The change

Python (OpenAI SDK)

from openai import OpenAI

# Before:
# client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

# After:
client = OpenAI(
    base_url="http://localhost:8080/v1",   # OpenTracy engine
    api_key="any",                         # engine holds provider keys
)

resp = client.chat.completions.create(
    model="openai/gpt-4o-mini",
    messages=[{"role": "user", "content": "hello"}],
)

That’s it. Your app makes the same API calls and gets the same response shape, but every call is traced in ClickHouse, and the engine handles routing, fallback, retry, and cost tracking.

TypeScript (OpenAI SDK)

import OpenAI from "openai";

const client = new OpenAI({
  baseURL: "http://localhost:8080/v1",
  apiKey: "any",
});

const resp = await client.chat.completions.create({
  model: "openai/gpt-4o-mini",
  messages: [{ role: "user", content: "hello" }],
});

curl

curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "openai/gpt-4o-mini", "messages": [{"role": "user", "content": "hello"}]}'

Where do provider API keys live?

On the engine, not the client. Three ways:
  1. ~/.opentracy/secrets.json on the host running the engine:
    {
      "openai_api_key": "sk-...",
      "anthropic_api_key": "sk-ant-...",
      "groq_api_key": "gsk_..."
    }
    
  2. Environment variables (OPENAI_API_KEY, ANTHROPIC_API_KEY, etc.) read by the engine process.
  3. UI: Settings → Integrations in the self-hosted UI. Stored encrypted in ClickHouse.

Changing models across providers

Because the engine speaks all 13 provider APIs, switching models is a string change:
# OpenAI
resp = client.chat.completions.create(model="openai/gpt-4o-mini", ...)

# Anthropic — no new SDK, no new auth code
resp = client.chat.completions.create(model="anthropic/claude-sonnet-4-6", ...)

# Groq
resp = client.chat.completions.create(model="groq/llama-3.1-8b-instant", ...)

The provider/model format is the only convention to learn.

Using a routing alias

An alias is a logical name you define once in the engine, then call by name:
# In the engine config, alias "smart" → gpt-4o with claude-sonnet-4-6 fallback
# Your app:
resp = client.chat.completions.create(
    model="smart",    # alias, resolved by the engine
    messages=[{"role": "user", "content": "..."}],
)

Later, when you’ve distilled a student, point "smart" at the student. The app code doesn’t change; the model upgrade is a config change on the engine side.
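
To see which concrete model served an alias on any given call, you can inspect the _routing extra described below. A minimal sketch (hypothetical helper, assuming the {"alias": ..., "selected_model": ...} shape shown in this doc):

```python
def describe_routing(routing: dict) -> str:
    """Summarize the engine's _routing extra as a one-line log message.

    Assumes the shape shown in this doc: {"alias": ..., "selected_model": ...};
    a direct (non-alias) call may omit "alias".
    """
    alias = routing.get("alias")
    model = routing.get("selected_model")
    if alias:
        return f"alias {alias!r} -> {model}"
    return f"direct call to {model}"
```

Typical use would be describe_routing(resp._routing) after a call through the engine, e.g. when debugging an alias rollout.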

What you get in the response

Standard OpenAI fields plus OpenTracy extras:
resp.choices[0].message.content          # the answer
resp.usage.prompt_tokens, resp.usage.completion_tokens
# OpenTracy extras, not present in upstream OpenAI responses:
resp._cost                               # USD for this call
resp._latency_ms                         # total latency including provider
resp._routing                            # {"alias": "smart", "selected_model": "gpt-4o", ...}

The extras are under single-underscore names so they don’t collide with any future OpenAI SDK field.
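
If you want to aggregate the per-call _cost extra, a minimal sketch (a hypothetical accumulator, assuming the _cost field described above) could look like:

```python
class CostMeter:
    """Accumulate OpenTracy's per-call _cost extras into a running total."""

    def __init__(self):
        self.total_usd = 0.0
        self.calls = 0

    def record(self, resp) -> None:
        # _cost is the OpenTracy extra described above; treat it as 0 if absent
        self.total_usd += getattr(resp, "_cost", 0.0) or 0.0
        self.calls += 1
```

Call meter.record(resp) after each completion; note the engine already tracks cost centrally, so this is only useful for in-process budgets.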

Streaming

Streaming works unmodified. The engine translates upstream streaming formats (Anthropic SSE, Bedrock event-stream, etc.) into OpenAI’s SSE shape, so your client code doesn’t need per-provider logic:
stream = client.chat.completions.create(
    model="anthropic/claude-sonnet-4-6",
    messages=[{"role": "user", "content": "count to 5"}],
    stream=True,
)
for chunk in stream:
    print(chunk.choices[0].delta.content or "", end="", flush=True)
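
The loop above guards against deltas with content=None (the final chunk of a stream usually has none). If you need the full text after streaming, a small helper (hypothetical, operating on the OpenAI-shaped chunks the engine emits) can join the deltas:

```python
def collect_stream(chunks) -> str:
    """Join the delta fragments of an OpenAI-shaped stream into the full text."""
    parts = []
    for chunk in chunks:
        delta = chunk.choices[0].delta.content
        if delta:  # final chunk typically carries content=None
            parts.append(delta)
    return "".join(parts)
```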

Tool calls

Tool / function calls translate across providers. You pass OpenAI-shaped tools and tool_choice, and the engine adapts them to Anthropic’s tools, Gemini’s function declarations, etc.:
resp = client.chat.completions.create(
    model="anthropic/claude-sonnet-4-6",
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=[{
        "type": "function",
        "function": {
            "name": "get_weather",
            "parameters": {"type": "object", "properties": {"city": {"type": "string"}}},
        },
    }],
)
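
When the model decides to call a tool, the response message carries OpenAI-shaped tool_calls regardless of which provider served it, so one dispatcher covers every provider. A minimal sketch (hypothetical helper and handler names, assuming the standard OpenAI tool-call response shape):

```python
import json


def dispatch_tool_call(tool_call, handlers: dict):
    """Run the local handler named by an OpenAI-shaped tool call.

    tool_call.function.name / .arguments follow the OpenAI response shape,
    which the engine returns regardless of the upstream provider.
    """
    fn = handlers[tool_call.function.name]
    args = json.loads(tool_call.function.arguments or "{}")
    return fn(**args)
```

In an app you would loop over resp.choices[0].message.tool_calls, dispatch each one, and append the results as tool-role messages before the next completion call.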

Caveats

  • The engine has to be reachable from your app. For production, run the engine in the same VPC / network as your app, or expose it on a trusted internal hostname. Don’t put the engine on the public internet without auth.
  • By default every request is traced with full prompt and response text. If you handle PII, set OPENTRACY_TRACE_REDACT=true or OPENTRACY_TRACE_CONTENT=false on the engine (see Traces → Privacy).

Next

Self-host

Run engine + ClickHouse + UI with Docker Compose.

Python SDK

If you’re starting fresh (not adapting an OpenAI app), use opentracy directly.