Setup using Phoenix OTEL
Phoenix OTEL provides a lightweight, vendor-agnostic way to instrument your services using OpenTelemetry and stream traces directly into Phoenix for debugging, monitoring, and model evaluation. This guide walks you through configuring your application to emit OTEL-compatible spans, enabling end-to-end visibility into requests, model calls, and system behavior with minimal overhead.
Getting Started
To begin sending traces to Phoenix using OpenTelemetry, install the Phoenix OTEL packages for your environment and configure a basic tracer. Below are quick-start examples for both Python and TypeScript users.
phoenix.otel is a lightweight wrapper around OpenTelemetry primitives with Phoenix-aware defaults.
pip install arize-phoenix-otel

These defaults are aware of environment variables you may have set to configure Phoenix:
- PHOENIX_COLLECTOR_ENDPOINT
- PHOENIX_PROJECT_NAME
- PHOENIX_CLIENT_HEADERS
- PHOENIX_API_KEY
- PHOENIX_GRPC_PORT
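For example, these variables can be exported in your shell before starting your application. The values below are placeholders for illustration; substitute your own endpoint and key:

```shell
# Placeholder values -- point these at your Phoenix instance
export PHOENIX_COLLECTOR_ENDPOINT="http://localhost:6006"
export PHOENIX_PROJECT_NAME="my-app"
export PHOENIX_API_KEY="your-api-key"
```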
Install OpenTelemetry API packages:
# npm, pnpm, yarn, etc
npm install @opentelemetry/semantic-conventions @opentelemetry/api @opentelemetry/instrumentation @opentelemetry/resources @opentelemetry/sdk-trace-base @opentelemetry/sdk-trace-node @opentelemetry/exporter-trace-otlp-proto

Install OpenInference instrumentation packages. Below is an example of adding instrumentation for OpenAI as well as the semantic conventions for OpenInference.
# npm, pnpm, yarn, etc
npm install openai @arizeai/openinference-instrumentation-openai @arizeai/openinference-semantic-conventions

Configure your Environment
There are two ways to configure the collector endpoint:
- Using environment variables
- Using the endpoint keyword argument
If you're using Phoenix Cloud, you'll need your API key and your space endpoint. If you're running Phoenix locally, you'll just need your endpoint (e.g. localhost:6006).
Create your OTEL tracer
Once the Phoenix OTEL packages are installed, initialize a tracer so your spans can be collected and viewed in Phoenix.
The phoenix.otel module provides a high-level register function that configures OpenTelemetry tracing by setting a global TracerProvider. The register function can also configure headers and whether spans are processed one by one or in batches.
from phoenix.otel import register
tracer_provider = register(
project_name="default", # sets a project name for spans
batch=True, # uses a batch span processor
auto_instrument=True, # uses all installed OpenInference instrumentors
# optional: if you want to configure custom endpoint
# endpoint="http://localhost:6006/v1/traces"
# protocol="grpc", # use "http/protobuf" for http transport
)

If the PHOENIX_API_KEY environment variable is set, register will automatically add an authorization header to each span payload.
If you're setting the PHOENIX_COLLECTOR_ENDPOINT environment variable, register will automatically try to send spans to your Phoenix server using gRPC.
register can be configured with different keyword arguments:
- project_name: The Phoenix project name (or use the PHOENIX_PROJECT_NAME env var)
- headers: Headers to send along with each span payload (or use the PHOENIX_CLIENT_HEADERS env var)
- batch: Whether or not to process spans in batches
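As a sketch of the headers argument, you might build an authorization header yourself rather than relying on PHOENIX_CLIENT_HEADERS. The header name, project name, and commented-out register call below are illustrative, not prescribed by Phoenix:

```python
import os

# Illustrative only: read an API key from the environment (with a
# placeholder fallback) and build a headers dict for register().
api_key = os.environ.get("PHOENIX_API_KEY", "placeholder-key")
headers = {"Authorization": f"Bearer {api_key}"}

# With arize-phoenix-otel installed, you would then pass it along:
# from phoenix.otel import register
# tracer_provider = register(
#     project_name="my-app",
#     headers=headers,
#     batch=True,
# )
```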
// instrumentation.ts
import { diag, DiagConsoleLogger, DiagLogLevel } from "@opentelemetry/api";
import { OTLPTraceExporter } from "@opentelemetry/exporter-trace-otlp-proto";
import { resourceFromAttributes } from "@opentelemetry/resources";
import { BatchSpanProcessor } from "@opentelemetry/sdk-trace-base";
import { NodeTracerProvider } from "@opentelemetry/sdk-trace-node";
// For troubleshooting, set the log level to DiagLogLevel.DEBUG
diag.setLogger(new DiagConsoleLogger(), DiagLogLevel.DEBUG);
const tracerProvider = new NodeTracerProvider({
spanProcessors: [
// BatchSpanProcessor will flush spans in batches after some time,
// this is recommended in production. For development or testing purposes
// you may try SimpleSpanProcessor for instant span flushing to the Phoenix UI.
new BatchSpanProcessor(
new OTLPTraceExporter({
url: `http://localhost:6006/v1/traces`,
// (optional) if connecting to Phoenix Cloud
// headers: { "api_key": process.env.PHOENIX_API_KEY },
// (optional) if connecting to self-hosted Phoenix with Authentication enabled
// headers: { "Authorization": `Bearer ${process.env.PHOENIX_API_KEY}` }
})
),
],
});
tracerProvider.register();
console.log("👀 OpenInference initialized");

Configuring the collector endpoint
By default, the local endpoint will be localhost:6006, or your space's hostname when using Phoenix Cloud. When passing in the endpoint argument, you must specify the fully qualified endpoint. If the PHOENIX_GRPC_PORT environment variable is set, it will override the default gRPC port.
The HTTP transport protocol is inferred from the endpoint
from phoenix.otel import register
tracer_provider = register(endpoint="http://localhost:6006/v1/traces")

The gRPC transport protocol is inferred from the endpoint
from phoenix.otel import register
tracer_provider = register(endpoint="http://localhost:4317")

Additionally, the protocol argument can be used to enforce the OTLP transport protocol regardless of the endpoint. This might be useful in cases such as when the gRPC endpoint is bound to a different port than the default (4317). The valid protocols are "http/protobuf" and "grpc".
from phoenix.otel import register
tracer_provider = register(
endpoint="http://localhost:9999",
protocol="grpc", # use "http/protobuf" for http transport
)

Instrumentation
Once you've connected your application to your Phoenix instance using phoenix.otel.register, you need to instrument your application. You have a few options to do this:
- Using OpenInference auto-instrumentors. If you've used the auto_instrument flag above, then any instrumentor packages in your environment will be called automatically. For a full list of OpenInference packages, see https://arize.com/docs/phoenix/integrations
- Using Phoenix Decorators.
- Using Base OTEL.