OpenInference provides auto-instrumentation for OpenRouter through the OpenAI Python library, because OpenRouter exposes a /v1 endpoint that mirrors OpenAI’s schema and is therefore fully compatible with the OpenAI SDKs. This lets you use the same instrumentation and monitoring capabilities with OpenRouter that you would use with OpenAI.

Prerequisites

  • OpenRouter account and API key
  • Arize AX account with Space ID and API Key

Why OpenRouter Works with OpenInference

Arize’s OpenInference auto-instrumentation works with OpenRouter because:
  1. OpenRouter provides a fully OpenAI-compatible endpoint - its /v1 endpoint mirrors OpenAI’s schema
  2. The official OpenAI SDKs can be reused - just point the OpenAI client’s base_url at OpenRouter
  3. Instrumentation is automatic - OpenInference hooks into OpenAI SDK calls seamlessly
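
The compatibility above can be illustrated without making a request: because OpenRouter mirrors OpenAI’s path layout, only the base URL (and key) differ, while the route the SDK calls stays the same. A minimal sketch, assuming only that both providers serve the standard chat-completions route under their base URL:

```python
# Because OpenRouter mirrors OpenAI's /v1 schema, switching providers is
# just a matter of pointing the same path layout at a different base URL.
OPENAI_BASE = "https://api.openai.com/v1"
OPENROUTER_BASE = "https://openrouter.ai/api/v1"

def chat_completions_url(base_url: str) -> str:
    """Build the chat-completions endpoint path used by the OpenAI SDK."""
    return f"{base_url.rstrip('/')}/chat/completions"

print(chat_completions_url(OPENROUTER_BASE))
# → https://openrouter.ai/api/v1/chat/completions
# The path suffix is identical for both providers; only the host changes.
```

This is why the instrumentation needs no OpenRouter-specific code: from the SDK’s (and OpenInference’s) point of view, the request shape is unchanged.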

Install

pip install openinference-instrumentation-openai openai arize-otel

Setup

  1. Set your OpenRouter API key:
export OPENAI_API_KEY='your_openrouter_api_key'
  2. Initialize Arize and instrument OpenAI:
from arize.otel import register
from openinference.instrumentation.openai import OpenAIInstrumentor

tracer_provider = register(
    space_id="your-space-id",
    api_key="your-api-key",
    project_name="your-project-name",
)

OpenAIInstrumentor().instrument(tracer_provider=tracer_provider)
  3. Configure the OpenAI client for OpenRouter:
import openai

client = openai.OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key="your_openrouter_api_key"
)
  4. Make traced calls:
response = client.chat.completions.create(
    model="meta-llama/llama-3.1-8b-instruct:free",
    messages=[{"role": "user", "content": "Write a haiku about observability."}],
)
print(response.choices[0].message.content)

What Gets Traced

All OpenRouter model calls are automatically traced and include:
  • Request/response data and timing
  • Model name and provider information
  • Token usage and cost data (when supported)
  • Error handling and debugging information
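
If you want a rough cost figure alongside the traced token counts, one can be derived from the usage data on a response (prompt and completion token counts). The per-million-token prices below are placeholders for illustration, not real OpenRouter pricing — a sketch only:

```python
# Hypothetical per-million-token USD prices (placeholders, NOT real pricing;
# check OpenRouter's model pages for actual rates).
PRICES = {
    "meta-llama/llama-3.1-8b-instruct:free": {"prompt": 0.0, "completion": 0.0},
    "openai/gpt-4o": {"prompt": 2.50, "completion": 10.00},
}

def estimate_cost(model: str, prompt_tokens: int, completion_tokens: int) -> float:
    """Estimate USD cost from the token counts reported in traced usage data."""
    p = PRICES[model]
    return (prompt_tokens * p["prompt"] + completion_tokens * p["completion"]) / 1_000_000

print(estimate_cost("openai/gpt-4o", 1000, 500))  # 0.0075
```

In practice you would feed in `response.usage.prompt_tokens` and `response.usage.completion_tokens` from a real call.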

JavaScript/TypeScript Support

OpenInference also provides instrumentation for the OpenAI JS/TS SDK, which works with OpenRouter. For setup and examples, please refer to the OpenInference JS examples for OpenAI.

Common Issues

  • API Key: Use your OpenRouter API key, not OpenAI’s
  • Model Names: Use exact model names from OpenRouter’s documentation
  • Rate Limits: Check your OpenRouter dashboard for usage limits
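
Model-name typos are easy to catch early: OpenRouter ids follow a `provider/model` pattern, optionally with a `:variant` suffix such as `:free`. A hypothetical sanity-check helper (not part of any SDK) might look like:

```python
def looks_like_openrouter_model(model_id: str) -> bool:
    """Heuristic check for the provider/model[:variant] id format."""
    base = model_id.split(":", 1)[0]  # drop an optional variant like ":free"
    provider, sep, name = base.partition("/")
    return bool(provider) and bool(sep) and bool(name)

print(looks_like_openrouter_model("meta-llama/llama-3.1-8b-instruct:free"))  # True
print(looks_like_openrouter_model("gpt-4o"))  # False: missing provider prefix
```

This only validates the shape of the id; the authoritative list of valid names is OpenRouter’s model documentation.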

Additional Resources

  • OpenRouter Documentation
  • OpenInference OpenAI Instrumentation