Why Translation is Needed
Semantic conventions are standardized attribute names and values that ensure consistent tracing across different LLM providers, models, and frameworks. Different instrumentation standards use different semantic conventions to describe LLM operations. Phoenix uses OpenInference semantic conventions as its standard format. To ensure all traces are displayed consistently in Phoenix, traces from other libraries must be translated to the OpenInference format using span processors.

How Translation Works - Span Processors
Span processors are components that process spans before they are exported, allowing them to be modified, filtered, or transformed. These processors (see the sketch after this list):
- Map attribute names from source conventions to OpenInference conventions
- Transform attribute values to match expected formats
- Preserve all data while normalizing the structure
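Conceptually, the translation is a key rename plus value normalization. The following minimal sketch illustrates the idea only; the attribute names are a small illustrative subset, and the real, complete mappings ship inside the OpenInference packages:

```python
# Illustrative subset of a source-to-OpenInference attribute mapping.
# The real mappings live in the OpenInference translation packages.
SOURCE_TO_OPENINFERENCE = {
    "gen_ai.request.model": "llm.model_name",
    "gen_ai.usage.input_tokens": "llm.token_count.prompt",
    "gen_ai.usage.output_tokens": "llm.token_count.completion",
}

def translate(attributes: dict) -> dict:
    """Rename known keys to their OpenInference equivalents; pass the rest through unchanged."""
    return {SOURCE_TO_OPENINFERENCE.get(key, key): value for key, value in attributes.items()}
```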
View OpenLIT Traces in Phoenix
Convert OpenLIT traces to OpenInference format using the OpenInferenceSpanProcessor from the openinference-instrumentation-openlit package.
OpenInference OpenLIT Instrumentation (view on PyPI)
1. Install the necessary packages:
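A typical install for this walkthrough might look like the following; openinference-instrumentation-openlit is the package this guide names, and the others reflect the example stack (OpenLIT, Semantic Kernel, Phoenix, and the OTLP exporter):

```bash
pip install openinference-instrumentation-openlit openlit semantic-kernel arize-phoenix \
    opentelemetry-sdk opentelemetry-exporter-otlp
```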
2. Start Phoenix in the background as a collector. By default, it listens on http://localhost:6006. You can visit the app via a browser at the same address.
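One way to start it locally, assuming the arize-phoenix package is installed:

```bash
phoenix serve
```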
3. Configure the tracer provider and add the span processors. The OpenInferenceSpanProcessor converts OpenLIT traces to OpenInference format, and the BatchSpanProcessor exports them to Phoenix via the OTLP gRPC endpoint:
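A sketch of the setup, assuming the processor is importable from openinference.instrumentation.openlit and that Phoenix is listening on its default OTLP gRPC port (4317):

```python
from openinference.instrumentation.openlit import OpenInferenceSpanProcessor
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor

tracer_provider = TracerProvider()
# Translate OpenLIT attributes to OpenInference before export.
tracer_provider.add_span_processor(OpenInferenceSpanProcessor())
# Ship the translated spans to Phoenix over OTLP gRPC.
tracer_provider.add_span_processor(
    BatchSpanProcessor(OTLPSpanExporter(endpoint="http://localhost:4317"))
)
```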
4. Initialize OpenLIT with the tracer and set up Semantic Kernel:
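A sketch, assuming an OpenAI-backed Semantic Kernel setup; the service and model IDs here are illustrative, and OPENAI_API_KEY is assumed to be set in the environment:

```python
import openlit
from semantic_kernel import Kernel
from semantic_kernel.connectors.ai.open_ai import OpenAIChatCompletion

# Hand OpenLIT a tracer from the provider configured above so its spans
# flow through the OpenInference translation pipeline.
openlit.init(tracer=tracer_provider.get_tracer(__name__))

kernel = Kernel()
kernel.add_service(OpenAIChatCompletion(service_id="chat", ai_model_id="gpt-4o-mini"))
```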
5. Invoke your model and view the converted traces in Phoenix.
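For example, a minimal prompt invocation (Semantic Kernel's API is async, hence the asyncio wrapper):

```python
import asyncio

async def main():
    # The underlying model call is captured by OpenLIT and translated on export.
    response = await kernel.invoke_prompt("Why is the sky blue?")
    print(response)

asyncio.run(main())
```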
The traces will be visible in the Phoenix UI at http://localhost:6006.

View OpenLLMetry Traces in Phoenix
Convert OpenLLMetry traces to OpenInference format using the OpenInferenceSpanProcessor from the openinference-instrumentation-openllmetry package.
OpenInference OpenLLMetry Instrumentation (view on PyPI)
1. Install the necessary packages:
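A typical install for this walkthrough; openinference-instrumentation-openllmetry is the package this guide names, and the others reflect the example stack (OpenLLMetry's OpenAI instrumentor, the OpenAI client, Phoenix, and the OTLP exporter):

```bash
pip install openinference-instrumentation-openllmetry opentelemetry-instrumentation-openai \
    openai arize-phoenix opentelemetry-sdk opentelemetry-exporter-otlp
```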
2. Start Phoenix in the background as a collector. By default, it listens on http://localhost:6006. You can visit the app via a browser at the same address. (Phoenix does not send data over the internet. It only operates locally on your machine.)
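As in the previous section, one way to start it locally:

```bash
phoenix serve
```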
3. Configure the tracer provider and add the span processors. The OpenInferenceSpanProcessor converts OpenLLMetry traces to OpenInference format, and the BatchSpanProcessor exports them to Phoenix via the OTLP gRPC endpoint:
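A sketch mirroring the OpenLIT setup, assuming the processor is importable from openinference.instrumentation.openllmetry and Phoenix's default OTLP gRPC port (4317):

```python
from openinference.instrumentation.openllmetry import OpenInferenceSpanProcessor
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor

tracer_provider = TracerProvider()
# Translate OpenLLMetry attributes to OpenInference before export.
tracer_provider.add_span_processor(OpenInferenceSpanProcessor())
# Ship the translated spans to Phoenix over OTLP gRPC.
tracer_provider.add_span_processor(
    BatchSpanProcessor(OTLPSpanExporter(endpoint="http://localhost:4317"))
)
```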
4. Initialize the OpenAI instrumentor with the tracer provider to generate OpenLLMetry traces:
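A sketch, assuming OpenLLMetry's OpenAI instrumentor from the opentelemetry-instrumentation-openai package:

```python
from opentelemetry.instrumentation.openai import OpenAIInstrumentor

# Route the instrumentor's spans through the tracer provider configured above.
OpenAIInstrumentor().instrument(tracer_provider=tracer_provider)
```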
5. Invoke your model and view the converted traces in Phoenix.
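For example, a minimal chat completion (the model name is illustrative, and OPENAI_API_KEY is assumed to be set in the environment):

```python
from openai import OpenAI

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Write a haiku about tracing."}],
)
print(response.choices[0].message.content)
```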
The traces will be visible in the Phoenix UI at http://localhost:6006.

View OpenTelemetry GenAI Traces in Phoenix
Convert OpenTelemetry GenAI span attributes to OpenInference format using the @arizeai/openinference-genai package for TypeScript/JavaScript applications.
This example:
- Creates a custom TraceExporter that converts OpenTelemetry GenAI spans to OpenInference spans
- Uses the custom exporter in a SpanProcessor
- Exports traces to Phoenix
OpenInference GenAI Package (view on npm)
1. Install the necessary packages:
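A typical install; @arizeai/openinference-genai is the package this guide names, and the OpenTelemetry packages reflect the example setup below:

```bash
npm install @arizeai/openinference-genai @opentelemetry/sdk-trace-node \
    @opentelemetry/sdk-trace-base @opentelemetry/exporter-trace-otlp-proto @opentelemetry/core
```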
2. Start Phoenix in the background as a collector. By default, it listens on http://localhost:6006. You can visit the app via a browser at the same address.
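As in the previous sections, one way to start it locally:

```bash
phoenix serve
```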
3. Create a custom TraceExporter file (e.g., openinferenceOTLPTraceExporter.ts) that converts the OpenTelemetry GenAI attributes to OpenInference attributes:
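A sketch of the shape such an exporter takes. The conversion helper imported below is a placeholder for whatever @arizeai/openinference-genai actually exports; check the package's README for the exact name and signature:

```typescript
// openinferenceOTLPTraceExporter.ts
import { OTLPTraceExporter } from "@opentelemetry/exporter-trace-otlp-proto";
import { ExportResult } from "@opentelemetry/core";
import { ReadableSpan } from "@opentelemetry/sdk-trace-base";
// Placeholder import: substitute the conversion utility that
// @arizeai/openinference-genai actually exports.
import { convertGenAISpanToOpenInference } from "@arizeai/openinference-genai";

export class OpenInferenceOTLPTraceExporter extends OTLPTraceExporter {
  export(spans: ReadableSpan[], resultCallback: (result: ExportResult) => void): void {
    // Rewrite GenAI attributes to OpenInference before handing off to OTLP export.
    const converted = spans.map((span) => convertGenAISpanToOpenInference(span));
    super.export(converted, resultCallback);
  }
}
```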
4. Use the custom exporter in a SpanProcessor and configure the tracer provider. Set the COLLECTOR_ENDPOINT environment variable to your Phoenix endpoint (e.g., http://localhost:6006 for local Phoenix):
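A sketch of the wiring, assuming Phoenix's OTLP/HTTP traces route at /v1/traces; the constructor-level spanProcessors option shown here is the OpenTelemetry JS 2.x style:

```typescript
// instrumentation.ts
import { NodeTracerProvider } from "@opentelemetry/sdk-trace-node";
import { BatchSpanProcessor } from "@opentelemetry/sdk-trace-base";
import { OpenInferenceOTLPTraceExporter } from "./openinferenceOTLPTraceExporter";

const exporter = new OpenInferenceOTLPTraceExporter({
  // Phoenix accepts OTLP/HTTP traces under /v1/traces.
  url: `${process.env.COLLECTOR_ENDPOINT ?? "http://localhost:6006"}/v1/traces`,
});

const provider = new NodeTracerProvider({
  spanProcessors: [new BatchSpanProcessor(exporter)],
});
provider.register();
```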
5. Once your application is running and generating traces, the converted OpenTelemetry GenAI traces will be visible in the Phoenix UI. The custom exporter automatically converts GenAI span attributes to OpenInference format before exporting to Phoenix.

