OpenRouter Tracing
Trace OpenRouter API calls for LLM observability using OpenInference auto-instrumentation over OpenRouter's OpenAI-compatible endpoint.
OpenInference provides auto-instrumentation for OpenRouter through the OpenAI Python library, since OpenRouter exposes a fully OpenAI-compatible API endpoint. This lets you use the same instrumentation and monitoring capabilities you would with OpenAI.
Note: OpenRouter exposes a /v1 endpoint that mirrors OpenAI's schema, making it fully compatible with OpenAI SDKs and OpenInference auto-instrumentation.
Prerequisites
OpenRouter account and API key
Arize account with Space ID and API Key
Why OpenRouter Works with OpenInference
Arize's OpenInference auto-instrumentation works with OpenRouter because:
OpenAI-API-compatible endpoint - OpenRouter's /v1 endpoint mirrors OpenAI's schema
Reuse of official OpenAI SDKs - Point the OpenAI client's base_url to OpenRouter
Automatic instrumentation - OpenInference hooks into OpenAI SDK calls seamlessly
Install
pip install openinference-instrumentation-openai openai arize-otel
Setup
Set your OpenRouter API key:
export OPENAI_API_KEY='your_openrouter_api_key'
Initialize Arize and instrument OpenAI:
from arize.otel import register
from openinference.instrumentation.openai import OpenAIInstrumentor

tracer_provider = register(
    space_id="your-space-id",
    api_key="your-api-key",
    project_name="your-project-name",
)
OpenAIInstrumentor().instrument(tracer_provider=tracer_provider)
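If you prefer not to hardcode credentials, the same registration can read them from the environment. A minimal sketch, assuming illustrative variable names of our own choosing (ARIZE_SPACE_ID, ARIZE_API_KEY, and ARIZE_PROJECT_NAME are not names the SDK requires):

import os

from arize.otel import register
from openinference.instrumentation.openai import OpenAIInstrumentor

# Same registration as above, with credentials pulled from the environment.
# The variable names here are illustrative, not mandated by arize-otel.
tracer_provider = register(
    space_id=os.environ["ARIZE_SPACE_ID"],
    api_key=os.environ["ARIZE_API_KEY"],
    project_name=os.environ.get("ARIZE_PROJECT_NAME", "openrouter-tracing"),
)
OpenAIInstrumentor().instrument(tracer_provider=tracer_provider)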
Configure OpenAI client for OpenRouter:
import openai

client = openai.OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key="your_openrouter_api_key",
)
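Because the Setup step exports your OpenRouter key as OPENAI_API_KEY, you can also let the SDK pick it up from the environment rather than passing it inline. A small sketch of that variant:

import openai

# The OpenAI SDK falls back to the OPENAI_API_KEY environment variable
# (set to your OpenRouter key above) when api_key is omitted, so only
# base_url needs to be overridden.
client = openai.OpenAI(base_url="https://openrouter.ai/api/v1")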
Make traced calls:
response = client.chat.completions.create(
    model="meta-llama/llama-3.1-8b-instruct:free",
    messages=[{"role": "user", "content": "Write a haiku about observability."}],
)
print(response.choices[0].message.content)
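Streaming calls go through the same instrumented code path. A minimal sketch, using the same client and model as above:

# Streaming variant of the same call; the instrumentation observes the
# stream and records the response on the span as it completes.
stream = client.chat.completions.create(
    model="meta-llama/llama-3.1-8b-instruct:free",
    messages=[{"role": "user", "content": "Write a haiku about observability."}],
    stream=True,
)
for chunk in stream:
    # Some chunks (for example, a trailing usage chunk) may carry no choices.
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="")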
What Gets Traced
All OpenRouter model calls are automatically traced and include:
Request/response data and timing
Model name and provider information
Token usage and cost data (when supported)
Error handling and debugging information (see the sketch after this list)
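Failed calls surface in traces as well. A minimal sketch: a request with a deliberately invalid model name raises an API error, which the instrumentation records on the span before the exception propagates:

import openai

try:
    client.chat.completions.create(
        model="not-a-real-model",  # deliberately invalid to trigger an error
        messages=[{"role": "user", "content": "hello"}],
    )
except openai.APIError as exc:
    # The same error is attached to the exported span for debugging.
    print(f"OpenRouter returned an error: {exc}")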
JavaScript/TypeScript Support
OpenInference also provides instrumentation for the OpenAI JS/TS SDK, which works with OpenRouter. For setup and examples, please refer to the OpenInference JS examples for OpenAI.
Common Issues
API Key: Use your OpenRouter API key, not OpenAI's
Model Names: Use exact model names from OpenRouter's documentation
Rate Limits: Check your OpenRouter dashboard for usage limits (a retry sketch follows below)
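For rate limits specifically, the OpenAI SDK raises openai.RateLimitError on HTTP 429 responses, which you can retry with backoff. A minimal sketch; the retry count and sleep times are illustrative:

import time

import openai

for attempt in range(3):
    try:
        response = client.chat.completions.create(
            model="meta-llama/llama-3.1-8b-instruct:free",
            messages=[{"role": "user", "content": "ping"}],
        )
        break
    except openai.RateLimitError:
        if attempt == 2:
            raise  # out of retries; let the error propagate (and be traced)
        time.sleep(2 ** attempt)  # exponential backoff: 1s, then 2s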