LiteLLM allows developers to call all LLM APIs using the OpenAI format. LiteLLM Proxy is a proxy server for calling 100+ LLMs in the OpenAI format. Both are supported by this OpenInference auto-instrumentation, and traces can be viewed in Arize.
Any calls made to the following functions will be automatically captured by this integration:
- completion()
- acompletion()
- completion_with_retries()
- embedding()
- aembedding()
- image_generation()
- aimage_generation()
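For example, once the instrumentor from the Setup section below is active, an embedding call is captured just like a chat completion. A minimal sketch, assuming an OpenAI embedding model and that OPENAI_API_KEY is set:
import litellm
# Traced automatically once LiteLLMInstrumentor is active (see Setup below)
embedding_response = litellm.embedding(
    model="text-embedding-ada-002",  # assumed model; any LiteLLM-supported embedding model works
    input=["What's the capital of China?"],
)
print(embedding_response)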
Launch Arize
To get started, sign up for a free Arize account and get your Space ID and API Key.
Install
pip install openinference-instrumentation-litellm litellm arize-otel
Setup
Use the register function to connect your application to Arize and instrument LiteLLM.
# Import open-telemetry dependencies
from arize.otel import register
# Setup OTel via our convenience function
tracer_provider = register(
    space_id="your-space-id",  # found in the app space settings page
    api_key="your-api-key",  # found in the app space settings page
    project_name="your-project-name",  # name this whatever you would like
)
# Import the instrumentor from OpenInference
from openinference.instrumentation.litellm import LiteLLMInstrumentor
# Instrument LiteLLM
LiteLLMInstrumentor().instrument(tracer_provider=tracer_provider)
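If you ever need to detach the instrumentation, for example during test teardown, the instrumentor also exposes an uninstrument method:
# Remove the LiteLLM instrumentation when tracing is no longer needed
LiteLLMInstrumentor().uninstrument()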
Add any API keys needed by the models you are using with LiteLLM. For example, if using an OpenAI model:
import os
os.environ["OPENAI_API_KEY"] = "PASTE_YOUR_API_KEY_HERE"
Run LiteLLM
You can now use LiteLLM as usual, and calls will be traced in Arize.
import litellm
# Ensure the required API key (e.g., OPENAI_API_KEY) is set in your environment
completion_response = litellm.completion(
    model="gpt-3.5-turbo",
    messages=[{"content": "What's the capital of China?", "role": "user"}]
)
print(completion_response)
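The async variants are traced the same way; a minimal sketch using acompletion() under the same environment:
import asyncio
import litellm

async def main():
    # Async calls are captured by the same instrumentation
    response = await litellm.acompletion(
        model="gpt-3.5-turbo",
        messages=[{"content": "What's the capital of China?", "role": "user"}]
    )
    print(response)

asyncio.run(main())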
Observe
Traces should now be visible in your Arize account!