Pydantic AI Tracing

How to use the Python openinference-instrumentation-pydantic-ai package to trace PydanticAI agents

PydanticAI is a Python agent framework designed to make it less painful to build production-grade applications with Generative AI. Built by the team behind Pydantic, it provides a clean, type-safe way to build AI agents with structured outputs.

Launch Phoenix
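
The Setup code below reads PHOENIX_COLLECTOR_ENDPOINT and PHOENIX_API_KEY from the environment. One way to provide them is shown in this sketch; the endpoint value is a placeholder, so point it at your own Phoenix Cloud or self-hosted instance, and skip the API key for a local instance without auth:

import os

# Placeholder values; replace with your own Phoenix endpoint and API key
os.environ["PHOENIX_COLLECTOR_ENDPOINT"] = "https://app.phoenix.arize.com"
os.environ["PHOENIX_API_KEY"] = "your-phoenix-api-key"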

Install

pip install openinference-instrumentation-pydantic-ai pydantic-ai opentelemetry-sdk opentelemetry-exporter-otlp opentelemetry-api

Setup

Set up tracing using OpenTelemetry and the PydanticAI instrumentation:

import os
from opentelemetry import trace
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.trace import TracerProvider
from openinference.instrumentation.pydantic_ai import OpenInferenceSpanProcessor
from opentelemetry.sdk.trace.export import SimpleSpanProcessor

# Set up the tracer provider
tracer_provider = TracerProvider()
trace.set_tracer_provider(tracer_provider)

# Point the OTLP exporter at your Phoenix collector
endpoint = f"{os.environ['PHOENIX_COLLECTOR_ENDPOINT']}/v1/traces"

# If you are using a local instance without auth, you can omit these headers
headers = {"Authorization": f"Bearer {os.environ['PHOENIX_API_KEY']}"}
exporter = OTLPSpanExporter(endpoint=endpoint, headers=headers)

# Add the OpenInference span processor, then export the converted spans to Phoenix
tracer_provider.add_span_processor(OpenInferenceSpanProcessor())
tracer_provider.add_span_processor(SimpleSpanProcessor(exporter))

Basic Usage

Here's a simple example using PydanticAI with automatic tracing:
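
A minimal sketch of such an example, assuming the newer pydantic-ai API (output_type and result.output; older releases use result_type and result.data). The CityInfo model, the model name, and the prompt are illustrative; instrument=True enables PydanticAI's built-in OpenTelemetry instrumentation so the span processor configured above can pick up the calls:

from pydantic import BaseModel
from pydantic_ai import Agent

# Illustrative structured output model
class CityInfo(BaseModel):
    city: str
    country: str
    population: int

# instrument=True turns on PydanticAI's OpenTelemetry instrumentation,
# which the OpenInferenceSpanProcessor converts into OpenInference spans
agent = Agent(
    "openai:gpt-4o",
    output_type=CityInfo,  # `result_type` in older pydantic-ai releases
    instrument=True,
)

result = agent.run_sync("Tell me about the largest city in France.")
print(result.output)  # `result.data` in older pydantic-ai releases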

Advanced Usage

Agent with System Prompts and Tools
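
The original example for this section isn't reproduced here; the sketch below shows one plausible shape, assuming the same instrument=True setup as above. The system prompt and the get_weather tool are hypothetical stand-ins for your own:

from pydantic_ai import Agent

agent = Agent(
    "openai:gpt-4o",
    system_prompt="You are a concise travel assistant. Use tools when you need current data.",
    instrument=True,
)

# Tools registered on the agent show up as tool spans in Phoenix
@agent.tool_plain
def get_weather(city: str) -> str:
    """Return a (canned) weather report for the given city."""
    return f"It is currently 22°C and sunny in {city}."

result = agent.run_sync("Should I pack an umbrella for Paris this weekend?")
print(result.output)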

Observe

Now that you have tracing set up, all PydanticAI agent operations will be streamed to your running Phoenix instance for observability and evaluation. You'll be able to see:

  • Agent interactions: Complete conversations between your application and the AI model

  • Structured outputs: Pydantic model validation and parsing results

  • Tool usage: When agents call external tools and their responses

  • Performance metrics: Response times, token usage, and success rates

  • Error handling: Validation errors, API failures, and retry attempts

  • Multi-agent workflows: Complex interactions between multiple agents

The traces will provide detailed insights into your AI agent behaviors, making it easier to debug issues, optimize performance, and ensure reliability in production.
