Add your OpenAI API key as an environment variable:
```shell
export OPENAI_API_KEY='your_openai_api_key'
```
For Azure OpenAI, set `AZURE_OPENAI_API_KEY`, `AZURE_OPENAI_ENDPOINT`, and any other variables Azure requires.

Use the `register` function to connect your application to Arize and instrument the OpenAI client:
```python
from arize.otel import register
from openinference.instrumentation.openai import OpenAIInstrumentor

tracer_provider = register(
    space_id="your-space-id",          # in app space settings page
    api_key="your-api-key",            # in app space settings page
    project_name="your-project-name",
)

OpenAIInstrumentor().instrument(tracer_provider=tracer_provider)
```
The OpenAI client will automatically use the OPENAI_API_KEY (or Azure-specific variables) from your environment.
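As a minimal sketch of that behavior, you can mirror the client's environment lookup yourself to fail fast with a clear message when the key is missing (the check itself is illustrative, not part of the SDK):

```python
import os

# The OpenAI client resolves its key from OPENAI_API_KEY; checking it up front
# surfaces a missing key before any request is attempted.
api_key = os.environ.get("OPENAI_API_KEY")
status = "set" if api_key else "missing"
print(f"OPENAI_API_KEY is {status}")
```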
```python
import openai

client = openai.OpenAI()
response = client.chat.completions.create(
    model="gpt-4o",  # or your chosen model, e.g., a gpt-3.5-turbo variant
    messages=[{"role": "user", "content": "Write a haiku about observability."}],
)
print(response.choices[0].message.content)
```
Now that you have tracing set up, all invocations of the instrumented OpenAI client (completions, chat completions, embeddings) will be streamed to your Arize account for observability and evaluation.
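For instance, embeddings calls made through the same instrumented client are traced just like chat completions. A hedged sketch (the helper name and model are illustrative, not from the Arize or OpenAI docs):

```python
def embed(text: str):
    """Sketch: an embeddings call that the active instrumentor also traces.

    Assumes the `openai` package is installed and OPENAI_API_KEY is set;
    the model name below is an example, substitute one you have access to.
    """
    import openai  # imported here so defining the sketch has no side effects

    client = openai.OpenAI()
    resp = client.embeddings.create(
        model="text-embedding-3-small",  # assumption: any embedding model works
        input=text,
    )
    return resp.data[0].embedding
```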
OpenInference also provides instrumentation for the OpenAI JS/TS SDK. For setup and examples, please refer to the OpenInference JS examples for OpenAI.