Configure OTEL tracer
When the register function does not offer enough customization for your needs, you can use the OpenTelemetry SDK directly to control how you send traces.
This snippet contains a few OTel concepts:
The header is an environment variable for authentication to send data.
A resource represents an origin (e.g., a particular service, or in this case, a project) from which your spans are emitted.
Span processors filter, batch, and perform operations on your spans prior to export. You can set multiple locations for exporting data, such as the console.
Your tracer provides a handle for you to create spans and add attributes in your application code.
The collector (e.g., Phoenix) receives the spans exported by your application.
The SimpleSpanProcessor is synchronous and blocking. Use the BatchSpanProcessor for non-blocking production application instrumentation.
Here is sample code to set up instrumentation using OpenTelemetry libraries before starting the OpenAI auto-instrumentor from OpenInference.
Install the libraries
pip install opentelemetry-api opentelemetry-sdk opentelemetry-exporter-otlp-proto-grpc openinference-semantic-conventions
Customize your tracing below
# Import open-telemetry dependencies
import os

from opentelemetry import trace as trace_api
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk import trace as trace_sdk
from opentelemetry.sdk.trace.export import ConsoleSpanExporter, BatchSpanProcessor
from opentelemetry.sdk.resources import Resource

# Import the automatic instrumentor from OpenInference
from openinference.instrumentation.openai import OpenAIInstrumentor

# Set the Space and API keys as headers for authentication
headers = f"space_id={ARIZE_SPACE_ID},api_key={ARIZE_API_KEY}"
os.environ['OTEL_EXPORTER_OTLP_TRACES_HEADERS'] = headers

# Set resource attributes for the name and version of your application
trace_attributes = {
    "model_id": "your model name",  # This is how your model will show up in Arize
    "model_version": "v1",  # You can filter your spans by model version in Arize
}

# Define the desired endpoint URL to send traces
endpoint = "https://otlp.arize.com/v1"

# Set the tracer provider
tracer_provider = trace_sdk.TracerProvider(
    resource=Resource(attributes=trace_attributes)
)
tracer_provider.add_span_processor(BatchSpanProcessor(OTLPSpanExporter(endpoint)))
tracer_provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
trace_api.set_tracer_provider(tracer_provider=tracer_provider)

# To get your tracer
tracer = trace_api.get_tracer(__name__)

# Finish automatic instrumentation
OpenAIInstrumentor().instrument()
Now start asking questions to your LLM app and watch the traces being collected by Arize. For more examples of editing your OTEL tracer, check our openinference-instrumentation-openai examples.
Install the following libraries via npm.
npm install @arizeai/openinference-instrumentation-openai @opentelemetry/api @opentelemetry/instrumentation @opentelemetry/resources @opentelemetry/sdk-trace-base @opentelemetry/sdk-trace-node @opentelemetry/exporter-trace-otlp-grpc @grpc/grpc-js
instrumentation.ts should be implemented as below.
/* instrumentation.ts */
import { registerInstrumentations } from "@opentelemetry/instrumentation";
import {
  OpenAIInstrumentation,
} from "@arizeai/openinference-instrumentation-openai";
import { ConsoleSpanExporter } from "@opentelemetry/sdk-trace-base";
import {
  NodeTracerProvider,
  BatchSpanProcessor,
} from "@opentelemetry/sdk-trace-node";
import { Resource } from "@opentelemetry/resources";
import {
  OTLPTraceExporter as GrpcOTLPTraceExporter,
} from "@opentelemetry/exporter-trace-otlp-grpc"; // Arize specific
import { diag, DiagConsoleLogger, DiagLogLevel } from "@opentelemetry/api";
import { Metadata } from "@grpc/grpc-js";

// For troubleshooting, set the log level to DiagLogLevel.DEBUG
diag.setLogger(new DiagConsoleLogger(), DiagLogLevel.DEBUG);

// Arize specific - Create metadata and add your headers
const metadata = new Metadata();
// Your Arize Space and API Keys, which can be found in the UI
metadata.set("space_id", "your-space-id");
metadata.set("api_key", "your-api-key");

const provider = new NodeTracerProvider({
  resource: new Resource({
    // Arize specific - The name of a new or preexisting model you
    // want to export spans to
    "model_id": "your-model-id",
    "model_version": "your-model-version",
  }),
});

provider.addSpanProcessor(new BatchSpanProcessor(new ConsoleSpanExporter()));
provider.addSpanProcessor(
  new BatchSpanProcessor(
    new GrpcOTLPTraceExporter({
      url: "https://otlp.arize.com/v1",
      metadata,
    }),
  ),
);

registerInstrumentations({
  instrumentations: [new OpenAIInstrumentation({})],
});

provider.register();
Get the current span context and tracer
Accessing the current span at a given point in time allows you to enrich it with additional information.
from opentelemetry import trace

current_span = trace.get_current_span()
# enrich 'current_span' with some information, for example
# (the attribute name here is illustrative):
current_span.set_attribute("user_id", "user-123")
Get Active Context
Example to grab the active context:
import { context, createContextKey, trace } from '@opentelemetry/api';

// Function to demonstrate context usage
function demonstrateActiveContext() {
  // Get the active context
  const activeContext = context.active();

  // Example of setting and getting a value; setValue returns a
  // new immutable context rather than mutating the active one
  const key = createContextKey('key');
  const ctxWithValue = activeContext.setValue(key, 'value');
  const value = ctxWithValue.getValue(key); // 'value'

  // Accessing the current span if tracing is set up
  const currentSpan = trace.getSpan(activeContext);
}
Get the current span
Sometimes it's helpful to do something with the current/active span at a particular point in program execution.
const activeSpan = opentelemetry.trace.getActiveSpan();
// do something with the active span, optionally ending it if that is appropriate for your use case.
Get a span from context
It can also be helpful to get the span from a given context that isn't necessarily the active span.
const ctx = context.active();
const span = opentelemetry.trace.getSpan(ctx);
// do something with the acquired span, optionally ending it if that is appropriate for your use case.