Get Started: Tracing

Now that you have Phoenix up and running, the next step is to start sending traces from your Python application. Traces let you see what’s happening inside your system, including function calls, LLM requests, tool calls, and other operations.

1. Launch Phoenix

Before sending traces, make sure Phoenix is running. For more step-by-step instructions, check out this Get Started guide.

Log in, create a space, navigate to the settings page in your space, and create your API keys.

In your shell, set your environment variables:

export PHOENIX_API_KEY="ADD YOUR PHOENIX API KEY"
export PHOENIX_COLLECTOR_ENDPOINT="ADD YOUR PHOENIX COLLECTOR ENDPOINT"
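
If you are working in a notebook or prefer to stay in Python, the same variables can be set from code; a minimal sketch, using the same placeholder values as above (set them before register() is called in the later steps):

import os

# Same values as the shell exports above; set before phoenix.otel.register() runs
os.environ["PHOENIX_API_KEY"] = "ADD YOUR PHOENIX API KEY"
os.environ["PHOENIX_COLLECTOR_ENDPOINT"] = "ADD YOUR PHOENIX COLLECTOR ENDPOINT"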

You can find your collector endpoint by launching your space, navigating to Settings, and copying your hostname.

Your collector endpoint is https://app.phoenix.arize.com/s/ followed by your space name (for example, a space named my-space would use https://app.phoenix.arize.com/s/my-space).

2. Install the Phoenix OTEL Package

To collect traces from your application, configure an OpenTelemetry TracerProvider that exports to Phoenix.

pip install arize-phoenix-otel
3. Set Up Tracing

There are two ways to trace your application: manually, or automatically with an auto-instrumentor. OpenInference provides the auto-instrumentor option through ready-to-use integrations with popular frameworks, so you can capture traces without adding manual logging code.

Phoenix can automatically capture all calls made to supported libraries; just install the associated instrumentation package.

pip install openinference-instrumentation-openai
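
If you prefer not to rely on the auto_instrument=True flag shown in the next step, the instrumentor can also be attached explicitly. A minimal sketch, assuming the package installed above and the register function introduced below:

from phoenix.otel import register
from openinference.instrumentation.openai import OpenAIInstrumentor

# Register without auto-instrumentation, then instrument the OpenAI library explicitly
tracer_provider = register(project_name="my-llm-app")
OpenAIInstrumentor().instrument(tracer_provider=tracer_provider)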
4. Register a Tracer

In your Python code, register Phoenix as the trace provider. This connects your application to Phoenix, creating a project in the UI once you send your first trace, and optionally enables auto-instrumentation (automatic tracing for supported libraries like OpenAI).

from phoenix.otel import register

tracer_provider = register(
    project_name="my-llm-app",
    auto_instrument=True,
)

tracer = tracer_provider.get_tracer(__name__)
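
The tracer also lets you create manual spans around your own code using the standard OpenTelemetry API. A minimal sketch, where the span name and attribute values are illustrative (input.value and output.value follow the OpenInference semantic conventions):

# Wrap any application logic in a span to see it in Phoenix
with tracer.start_as_current_span("generate-answer") as span:
    span.set_attribute("input.value", "Why is the sky blue?")
    answer = "Rayleigh scattering."  # stand-in for your application logic
    span.set_attribute("output.value", answer)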

If you are using TypeScript instead, add the equivalent setup in a new file called instrumentation.ts (or .js if applicable):

// instrumentation.ts
import { register } from "@arizeai/phoenix-otel";

const provider = register({
  projectName: "my-llm-app", // Sets the project name in Phoenix UI
});

The register function automatically:

  • Reads PHOENIX_COLLECTOR_ENDPOINT and PHOENIX_API_KEY from environment variables

  • Configures the collector endpoint (defaults to http://localhost:6006)

  • Sets up batch span processing for production use

  • Registers the provider globally


Environment Variables:

  • PHOENIX_COLLECTOR_ENDPOINT - The URL to your Phoenix instance (e.g., https://app.phoenix.arize.com)

  • PHOENIX_API_KEY - Your Phoenix API key for authentication


Now, import this file at the top of your main program entrypoint, or invoke it with the Node CLI's --require flag:

  • Import Method:

    • In main.ts or similar: import "./instrumentation.ts"

    • In your CLI, script, Dockerfile, etc.: node main.ts

  • --require Method:

    • In your CLI, script, Dockerfile, etc.: node --require ./instrumentation.ts main.ts

5. Start Your Application

Now that you have set up tracing and your project in Phoenix, it's time to actually invoke your LLM, agent, or application.

First, add your OpenAI API key, then invoke the model.

import os
from getpass import getpass

# Add your OpenAI API key
if not (openai_api_key := os.getenv("OPENAI_API_KEY")):
    openai_api_key = getpass("🔑 Enter your OpenAI API key: ")

os.environ["OPENAI_API_KEY"] = openai_api_key

import openai

client = openai.OpenAI()
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Why is the sky blue?"}],
)
print(response.choices[0].message.content)
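
Because register() sets up batch span processing, a short-lived script can exit before buffered spans are exported. If your traces don't appear, flushing explicitly before exit should help; force_flush is the standard OpenTelemetry SDK method on the tracer provider:

# Export any buffered spans before the process exits
tracer_provider.force_flush()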
6. View Your Traces in Phoenix

You should now see traces in Phoenix!
