The most common production pattern: auto-instrumentation handles the LLM provider calls, while manual spans cover the custom parts (tool executions, orchestration, domain logic). The two stitch together into the same trace automatically because they share the same TracerProvider. If your app uses tool/function calling, auto-instrumentation captures only the LLM API call, not the code that actually runs the tool; this pattern fills that gap.
1. Set up auto-instrumentation

Register your tracer provider and auto-instrument your LLM provider:
from arize.otel import register
from openinference.instrumentation.openai import OpenAIInstrumentor

tracer_provider = register(
    space_id="YOUR_SPACE_ID",
    api_key="YOUR_API_KEY",
    project_name="my-project",
)
OpenAIInstrumentor().instrument(tracer_provider=tracer_provider)
2. Get a tracer for manual spans

Use the same TracerProvider to get a tracer. By default, register() installs it as the global provider, so trace.get_tracer() returns a tracer whose spans land in the same trace as the auto-instrumented ones:
from opentelemetry import trace
import openai

tracer = trace.get_tracer(__name__)
client = openai.OpenAI()
3. Add manual CHAIN and TOOL spans around your logic

Manual spans automatically nest as children of each other and alongside auto-instrumented spans:
def run_agent(user_input: str) -> str:
    with tracer.start_as_current_span("agent-run") as span:
        span.set_attribute("openinference.span.kind", "CHAIN")
        span.set_attribute("input.value", user_input)

        # Auto-instrumented — appears as child LLM span automatically.
        # TOOLS is your tool schema list; the model only emits tool calls
        # when tools are passed in the request.
        response = client.chat.completions.create(
            model="gpt-4o",
            messages=[{"role": "user", "content": user_input}],
            tools=TOOLS,
        )
        tool_call = response.choices[0].message.tool_calls[0]

        # Manual TOOL span — also a child; captures the execution of your
        # own run_tool dispatcher (defined elsewhere in your app)
        with tracer.start_as_current_span(tool_call.function.name) as tool_span:
            tool_span.set_attribute("openinference.span.kind", "TOOL")
            tool_span.set_attribute("input.value", tool_call.function.arguments)
            result = run_tool(tool_call.function.name, tool_call.function.arguments)
            tool_span.set_attribute("output.value", result)

        span.set_attribute("output.value", result)
        return result
The result is a complete trace tree: CHAIN (manual) → LLM (auto) → TOOL (manual). Without the manual spans, you’d only see the LLM call — the tool execution and agent orchestration would be invisible.
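The run_tool helper in step 3 is your own code, not part of any SDK. Here is a minimal sketch of such a dispatcher, assuming a hypothetical get_weather tool and relying on the OpenAI API delivering tool arguments as a JSON string:

```python
import json

def get_weather(city: str) -> str:
    """Stub tool implementation (hypothetical)."""
    return f"Sunny in {city}"

# Map each tool name the model may call to its implementation.
TOOL_REGISTRY = {"get_weather": get_weather}

def run_tool(name: str, arguments: str) -> str:
    if name not in TOOL_REGISTRY:
        raise ValueError(f"unknown tool: {name}")
    kwargs = json.loads(arguments)  # arguments arrive as a JSON string
    return TOOL_REGISTRY[name](**kwargs)
```

Because run_tool returns a plain string, its result drops straight into the tool_span.set_attribute("output.value", result) call above.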

Next step

Visualize your agent’s execution as an interactive graph:

Next: Agent Trajectory