Take full control of OpenTelemetry. The getting started pages cover register() and OpenInference integrations — this page is for when you need more: batch processing for production, routing spans to multiple projects, or configuring resource attributes directly via the OpenTelemetry SDK.

OpenInference Instrumentation Packages

OpenInference provides auto-instrumentors for popular frameworks. Install the package for your provider, call .instrument(), and every call is traced automatically.
Python:

| Package | Description |
| --- | --- |
| openinference-semantic-conventions | Semantic conventions for tracing LLM apps |
| openinference-instrumentation-openai | OpenAI SDK |
| openinference-instrumentation-anthropic | Anthropic SDK |
| openinference-instrumentation-langchain | LangChain |
| openinference-instrumentation-llama-index | LlamaIndex |
| openinference-instrumentation-bedrock | AWS Bedrock |
| openinference-instrumentation-mistralai | MistralAI |
| openinference-instrumentation-dspy | DSPy |
| openinference-instrumentation-crewai | CrewAI |
| openinference-instrumentation-litellm | LiteLLM |
| openinference-instrumentation-groq | Groq |
| openinference-instrumentation-instructor | Instructor |
| openinference-instrumentation-haystack | Haystack |
| openinference-instrumentation-guardrails | Guardrails AI |
| openinference-instrumentation-vertexai | VertexAI |

JavaScript/TypeScript:

| Package | Description |
| --- | --- |
| @arizeai/openinference-semantic-conventions | Semantic conventions |
| @arizeai/openinference-core | Core utility functions |
| @arizeai/openinference-instrumentation-openai | OpenAI SDK |
| @arizeai/openinference-instrumentation-langchain | LangChain.js |
| @arizeai/openinference-vercel | Vercel AI SDK |
register() wires these up for most apps — but when you need more control over the tracer itself, configure OpenTelemetry directly:

Configure the OTel Tracer Directly

pip install opentelemetry-api opentelemetry-sdk opentelemetry-exporter-otlp-proto-grpc openinference-semantic-conventions openinference-instrumentation-openai
import os
from opentelemetry import trace as trace_api
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk import trace as trace_sdk
from opentelemetry.sdk.trace.export import ConsoleSpanExporter, BatchSpanProcessor
from opentelemetry.sdk.resources import Resource
from openinference.instrumentation.openai import OpenAIInstrumentor

# Authentication — read credentials from environment
ARIZE_SPACE_ID = os.environ["ARIZE_SPACE_ID"]
ARIZE_API_KEY = os.environ["ARIZE_API_KEY"]
headers = f"space_id={ARIZE_SPACE_ID},api_key={ARIZE_API_KEY}"
os.environ["OTEL_EXPORTER_OTLP_TRACES_HEADERS"] = headers

# Resource attributes describe the source of telemetry
trace_attributes = {
    "model_id": "your-project-name",  # Maps to project in Arize
    "model_version": "v1",
}

endpoint = "https://otlp.arize.com/v1"

# Set up the tracer provider
tracer_provider = trace_sdk.TracerProvider(
    resource=Resource(attributes=trace_attributes)
)
tracer_provider.add_span_processor(BatchSpanProcessor(OTLPSpanExporter(endpoint)))
tracer_provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
trace_api.set_tracer_provider(tracer_provider=tracer_provider)

tracer = trace_api.get_tracer(__name__)

# Auto-instrument
OpenAIInstrumentor().instrument()

Key Concepts

  • Resource attributes describe the source of telemetry (service, model, environment). Set once on the TracerProvider.
  • Span attributes describe a single span. Set per-span in your code.
  • Span processors filter, batch, and perform operations on spans before export.
  • model_id as a resource attribute maps to the project name in Arize AX.
The most important of these for production is which span processor you choose:

Batch vs Simple Span Processor

A span processor controls when and how spans are exported. Choose based on your environment:
| Property | BatchSpanProcessor | SimpleSpanProcessor |
| --- | --- | --- |
| Best for | Production & staging | Local debugging, demos, CI |
| Export behavior | Async, in batches | Each span immediately (sync) |
| Impact on latency | Low (work done off the request path) | Higher (export blocks the request) |
| Throughput | High, optimized for volume | Low, can bottleneck under load |
| Reliability on exit | Requires force_flush() / shutdown() | Spans exported immediately |
| Visibility speed | Slight delay (buffering) | Immediate |
| Failure surfacing | Export failures logged in background | Failures raised inline |
| Tuning | Configurable (batch size, delay, timeouts) | Minimal |
The SimpleSpanProcessor is synchronous and blocking. Use BatchSpanProcessor for production.
Once your tracer is running, you’ll often want to attach more context to the span currently in flight:

Get the Current Span

Access the current span at any point to enrich it with additional information:
from opentelemetry import trace

current_span = trace.get_current_span()
current_span.set_attribute("session.id", "abc-123")  # illustrative attribute
current_span.add_event("retrieval_complete")         # illustrative point-in-time event
Running multiple apps from one codebase, or splitting traffic across environments? You can split spans across projects:

Route Spans to Multiple Projects

To route traces from one application to multiple Arize spaces or projects, use register_with_routing from arize-otel:
pip install arize-otel
from arize.otel import register_with_routing, set_routing_context

# Register once with a single API key — routing happens per-context
tracer_provider = register_with_routing(
    api_key="your-api-key",
)

# Route specific operations to a different space + project
with set_routing_context(space_id="other-space-id", project_name="other-project"):
    # Spans created in this block are routed to "other-space-id" / "other-project"
    ...
register_with_routing uses ARIZE_API_KEY from your environment if api_key isn't passed. Both space_id and project_name must be set inside set_routing_context, otherwise routing won't be applied.
Python-only today. For JS/TS apps or more complex routing (e.g., by span attribute), route at the OTel Collector layer — see OTEL Collector deployment patterns.
If you operate a centralized OpenTelemetry Collector serving many teams or spaces, see the shared-collector pattern that forwards arize-space-id from inbound request metadata, which avoids redeploying the collector each time a new space is added.

Next step

Keep sensitive data out of your traces with masking and PII redaction:

Next: Mask and Redact Data