Get Started: Tracing

Now that you have Phoenix up and running, the next step is to start sending traces from your Python application. Traces let you see what’s happening inside your system, including function calls, LLM requests, tool calls, and other operations.

1. Launch Phoenix

Before sending traces, make sure Phoenix is running. For step-by-step launch instructions, check out the Get Started guide.

Log in, create a space, navigate to the Settings page in your space, and create your API keys.

In your code, set your environment variables.

import os
os.environ["PHOENIX_API_KEY"] = "ADD YOUR PHOENIX API KEY"
os.environ["PHOENIX_COLLECTOR_ENDPOINT"] = "ADD YOUR PHOENIX COLLECTOR ENDPOINT"

To find your collector endpoint, launch your space, navigate to Settings, and copy your hostname.

Your collector endpoint is https://app.phoenix.arize.com/s/ followed by your space name.
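If you prefer configuring the environment outside Python, the same variables can be exported in your shell before starting the app. The values below are placeholders; substitute your own API key and space name:

```shell
# Placeholder values: replace with your own API key and space hostname
export PHOENIX_API_KEY="your-api-key"
export PHOENIX_COLLECTOR_ENDPOINT="https://app.phoenix.arize.com/s/your-space-name"
```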

2. Install the Phoenix OTEL Package

To collect traces from your application, you must configure an OpenTelemetry TracerProvider to export traces to Phoenix. The arize-phoenix-otel package handles this configuration for you.

pip install arize-phoenix-otel

3. Set Up Tracing

There are two ways to trace your application: manually, or automatically with an auto-instrumentor. OpenInference provides the auto-instrumentor option through ready-to-use integrations with popular frameworks, so you can capture traces without adding manual logging code.

Phoenix can capture all calls made to supported libraries automatically. Just install the instrumentation package for the library you use; for OpenAI:

pip install openinference-instrumentation-openai

4. Register a Tracer

In your Python code, register Phoenix as the trace provider. This connects your application to Phoenix, creates a project in the UI once you send your first trace, and optionally enables auto-instrumentation (automatic tracing for supported libraries like OpenAI).

from phoenix.otel import register

tracer_provider = register(
    project_name="my-llm-app",
    auto_instrument=True,
)

tracer = tracer_provider.get_tracer(__name__)

5. Start Your Application

Now that you have set up tracing and your project in Phoenix, it's time to actually invoke your LLM, agent, or application.

First, add your OpenAI API key, then invoke the model.

import os
from getpass import getpass

if not (openai_api_key := os.getenv("OPENAI_API_KEY")):
    openai_api_key = getpass("🔑 Enter your OpenAI API key: ")

os.environ["OPENAI_API_KEY"] = openai_api_key
# Invoke the model; with auto_instrument=True, this call is traced automatically
import openai

client = openai.OpenAI()
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Why is the sky blue?"}],
)
print(response.choices[0].message.content)

6. View your Traces in Phoenix

Run your application, then open your project in the Phoenix UI. You should now see your traces!

