Pydantic AI Tracing
Pydantic AI is a Python agent framework designed to make it less painful to build production-grade applications with Generative AI. Built by the team behind Pydantic, it provides a clean, type-safe way to build AI agents with structured outputs, tool integration, and multi-agent workflows.
Arize provides first-class support for instrumenting Pydantic AI agents with comprehensive observability for input/output messages, structured outputs, tool usage, and complex multi-agent workflows. Monitor your AI agents in production with detailed tracing and performance analytics.
We follow OpenInference, our open-source standard built on OpenTelemetry, for structuring trace data. You can configure the OpenTelemetry SDK directly, as the quick start below does, or use arize-otel, a lightweight convenience package that sets up OpenTelemetry and sends traces to Arize.
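If you prefer the convenience package, a minimal sketch with arize-otel's register helper might look like the following. Treat it as an assumption-laden outline: the parameter names come from the arize-otel README and may differ by version, and the fully manual setup in the quick start below is the path this guide documents.
# Alternative setup using the arize-otel convenience package (sketch)
import os
from arize.otel import register
from openinference.instrumentation.pydantic_ai import OpenInferenceSpanProcessor
# register() configures OpenTelemetry and returns a tracer provider wired to Arize
tracer_provider = register(
    space_id=os.environ["ARIZE_SPACE_ID"],
    api_key=os.environ["ARIZE_API_KEY"],
    project_name="your project name",  # how the project appears in Arize AX
)
# Map Pydantic AI's native OTel spans to OpenInference attributes.
# The quick start below adds this processor before the exporter; if span
# attributes look un-mapped with this ordering, prefer the manual setup.
tracer_provider.add_span_processor(OpenInferenceSpanProcessor())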
Quick Start: Pydantic AI Instrumentation
Installation & Setup
!pip install openinference-instrumentation-pydantic-ai pydantic-ai opentelemetry-sdk opentelemetry-exporter-otlp opentelemetry-api
# Import OpenTelemetry and OpenInference dependencies
import os
from opentelemetry import trace as trace_api
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk import trace as trace_sdk
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from openinference.instrumentation.pydantic_ai import OpenInferenceSpanProcessor
from opentelemetry.sdk.resources import Resource
# Set the Space and API keys as headers for authentication
ARIZE_SPACE_ID = "YOUR_ARIZE_SPACE_ID"  # your Arize space ID
ARIZE_API_KEY = "YOUR_ARIZE_API_KEY"    # your Arize API key
headers = f"space_id={ARIZE_SPACE_ID},api_key={ARIZE_API_KEY}"
os.environ["OTEL_EXPORTER_OTLP_TRACES_HEADERS"] = headers
# Set resource attributes for the name and version of your application
trace_attributes = {
    "model_id": "your project name",  # This is how your project will show up in Arize AX
    "model_version": "v1",  # You can filter your spans by project version in Arize AX
}
# Define the desired endpoint URL to send traces
endpoint = "https://otlp.arize.com/v1"
# Set the tracer provider
tracer_provider = trace_sdk.TracerProvider(
    resource=Resource(attributes=trace_attributes)
)
tracer_provider.add_span_processor(OpenInferenceSpanProcessor())
tracer_provider.add_span_processor(BatchSpanProcessor(OTLPSpanExporter(endpoint)))
trace_api.set_tracer_provider(tracer_provider=tracer_provider)
Start using your Pydantic AI agents and monitor traces in Arize. For advanced examples, explore our openinference-instrumentation-pydantic-ai examples.
Basic Agent Usage Example
Here's a simple example using Pydantic AI with automatic tracing for structured outputs:
import os
from pydantic import BaseModel
from pydantic_ai import Agent
from pydantic_ai.models.openai import OpenAIModel
# Allow nested event loops (needed when running inside a notebook)
import nest_asyncio
nest_asyncio.apply()
# Set your OpenAI API key
os.environ["OPENAI_API_KEY"] = "YOUR_OPENAI_API_KEY"
# Define your Pydantic model
class LocationModel(BaseModel):
    city: str
    country: str
# Create and configure the agent
model = OpenAIModel("gpt-4", provider='openai')
agent = Agent(model, output_type=LocationModel, instrument=True)
# Run the agent
result = agent.run_sync("The windy city in the US of A.")
print(result)
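The run result wraps the validated Pydantic model. A quick way to pull out the structured output and token usage is shown below; the output attribute and usage() method follow recent Pydantic AI releases (older versions expose data instead of output), so adjust for your installed version.
# Inspect the structured result and the token usage recorded for this run
location = result.output       # a validated LocationModel instance
print(location.city, location.country)
print(result.usage())          # token counts for the run, also visible in Arize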
Advanced Pydantic AI Patterns
AI Agents with System Prompts and Tools
Build sophisticated AI agents with custom tools and system prompts:
from pydantic import BaseModel, Field
from pydantic_ai import Agent, RunContext
from pydantic_ai.models.openai import OpenAIModel
class WeatherInfo(BaseModel):
    location: str
    temperature: float = Field(description="Temperature in Celsius")
    condition: str
    humidity: int = Field(description="Humidity percentage")
# Create an agent with system prompts and tools
weather_agent = Agent(
    model=OpenAIModel("gpt-4"),
    output_type=WeatherInfo,
    system_prompt="You are a helpful weather assistant. Always provide accurate weather information.",
    instrument=True,
)
@weather_agent.tool
async def get_weather_data(ctx: RunContext[None], location: str) -> str:
    """Get current weather data for a location."""
    # Mock weather data - replace this with a call to a real weather API
    mock_data = {
        "temperature": 22.5,
        "condition": "partly cloudy",
        "humidity": 65,
    }
    return f"Weather in {location}: {mock_data}"
# Run the agent with tool usage
result = weather_agent.run_sync("What's the weather like in Paris?")
print(result)
What gets instrumented
Arize provides complete visibility into your Pydantic AI agent operations with automatic tracing of all interactions. With the above setup, Arize captures:
Core Agent Interactions
Agent Conversations: Complete conversations between your application and AI models
Structured Outputs: Pydantic model validation, parsing results, and type safety
Input/Output Tracking: Detailed logging of all agent inputs and generated outputs
Advanced Agent Features
Tool Usage: When agents call external tools, their parameters, and responses
Multi-Agent Workflows: Complex interactions and data flow between multiple agents (see the delegation sketch after this list)
System Prompt Tracking: How system prompts influence agent behavior
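To make the multi-agent bullet concrete, here is a rough sketch of agent delegation, where one agent calls another from inside a tool so both show up as nested spans in the same trace. The agent names and prompts are invented for this example; forwarding usage=ctx.usage follows the Pydantic AI delegation pattern so the delegate's token counts roll up to the parent run, and result.output assumes a recent Pydantic AI release.
from pydantic_ai import Agent, RunContext
from pydantic_ai.models.openai import OpenAIModel
# A narrow delegate agent that only summarizes text
summarizer_agent = Agent(
    OpenAIModel("gpt-4"),
    system_prompt="Summarize the given text in one sentence.",
    instrument=True,
)
# A coordinator agent that hands summarization off to the delegate via a tool
coordinator_agent = Agent(
    OpenAIModel("gpt-4"),
    system_prompt="Answer the user, delegating any summarization to your tool.",
    instrument=True,
)
@coordinator_agent.tool
async def summarize(ctx: RunContext[None], text: str) -> str:
    """Delegate summarization to the summarizer agent."""
    # Forward usage so the delegate's token counts roll up to the parent run
    result = await summarizer_agent.run(text, usage=ctx.usage)
    return result.output
result = coordinator_agent.run_sync("Summarize this: Pydantic AI is a Python agent framework built by the Pydantic team.")
print(result.output)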
Performance & Reliability Monitoring
Performance Metrics: Response times, token usage, and throughput analytics
Error Handling: Validation errors, API failures, retry attempts, and recovery
Success Rates: Agent completion rates and quality metrics
Production Insights
Usage Patterns: How agents are being used in production
Cost Tracking: Token usage and API costs across different models
Optimization Opportunities: Identify bottlenecks and improvement areas