Pydantic AI

Instrument AI agents built with the Pydantic AI framework

Pydantic AI is a Python agent framework designed to make it less painful to build production-grade applications with Generative AI. Built by the team behind Pydantic, it provides a clean, type-safe way to build AI agents with structured outputs, tool integration, and multi-agent workflows.

Arize provides first-class support for instrumenting Pydantic AI agents with comprehensive observability for input/output messages, structured outputs, tool usage, and complex multi-agent workflows. Monitor your AI agents in production with detailed tracing and performance analytics.

Trace data follows OpenInference, our open-source standard built on OpenTelemetry. The quick start below configures OpenTelemetry directly with an OTLP exporter pointed at Arize; alternatively, arize-otel is a lightweight convenience package that performs this OpenTelemetry setup for you.

Quick Start: Pydantic AI Instrumentation

Installation & Setup

!pip install openinference-instrumentation-pydantic-ai pydantic-ai opentelemetry-sdk opentelemetry-exporter-otlp opentelemetry-api

import os
from opentelemetry import trace
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.trace import TracerProvider
from openinference.instrumentation.pydantic_ai import OpenInferenceSpanProcessor
from opentelemetry.sdk.trace.export import SimpleSpanProcessor

tracer_provider = TracerProvider()
trace.set_tracer_provider(tracer_provider)

# Set the Space and API keys as headers for authentication
ARIZE_SPACE_ID = "YOUR_ARIZE_SPACE_ID"
ARIZE_API_KEY = "YOUR_ARIZE_API_KEY"
headers = f"space_id={ARIZE_SPACE_ID},api_key={ARIZE_API_KEY}"
os.environ["OTEL_EXPORTER_OTLP_TRACES_HEADERS"] = headers

# Define the desired endpoint URL to send traces
endpoint = "https://otlp.arize.com/v1"

# Create the OTLP exporter and attach the span processors:
# OpenInferenceSpanProcessor converts Pydantic AI spans to OpenInference
# format, then SimpleSpanProcessor exports them to Arize
exporter = OTLPSpanExporter(endpoint=endpoint)
tracer_provider.add_span_processor(OpenInferenceSpanProcessor())
tracer_provider.add_span_processor(SimpleSpanProcessor(exporter))

Start using your Pydantic AI agents and monitor traces in Arize. For advanced examples, explore our openinference-instrumentation-pydantic-ai examples.
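
For production workloads you may want to export spans in batches instead of synchronously. A minimal variation of the setup above, using the same exporter but swapping the SimpleSpanProcessor line for a batching processor:

from opentelemetry.sdk.trace.export import BatchSpanProcessor

# Queue spans and export them on a background thread instead of blocking each
# agent run on a synchronous export
tracer_provider.add_span_processor(BatchSpanProcessor(exporter))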

Basic Agent Usage Example

Here's a simple example using Pydantic AI with automatic tracing for structured outputs:

import os
from pydantic import BaseModel
from pydantic_ai import Agent
from pydantic_ai.models.openai import OpenAIModel

# Set your OpenAI API key
os.environ["OPENAI_API_KEY"] = "YOUR_OPENAI_API_KEY"

# Define your Pydantic model for structured output
class LocationInfo(BaseModel):
    city: str
    country: str
    confidence: float

# Create and configure the agent with instrumentation enabled
model = OpenAIModel("gpt-4")
agent = Agent(
    model=model, 
    output_type=LocationInfo,
    instrument=True  # Enable built-in tracing
)

# Run the agent - the run is automatically traced
result = agent.run_sync("The windy city in the US of A.")
location = result.output  # validated LocationInfo instance
print(f"Location: {location.city}, {location.country}")
print(f"Confidence: {location.confidence}")

Advanced Pydantic AI Patterns

AI Agents with System Prompts and Tools

Build sophisticated AI agents with custom tools and system prompts:

from pydantic import BaseModel, Field
from pydantic_ai import Agent, RunContext
from pydantic_ai.models.openai import OpenAIModel

class WeatherInfo(BaseModel):
    location: str
    temperature: float = Field(description="Temperature in Celsius")
    condition: str
    humidity: int = Field(description="Humidity percentage")

# Create an agent with system prompts and tools
weather_agent = Agent(
    model=OpenAIModel("gpt-4"),
    output_type=WeatherInfo,
    system_prompt="You are a helpful weather assistant. Always provide accurate weather information.",
    instrument=True
)

@weather_agent.tool
async def get_weather_data(ctx: RunContext[None], location: str) -> str:
    """Get current weather data for a location."""
    # Mock response - replace with a call to a real weather API (e.g. via httpx)
    mock_data = {
        "temperature": 22.5,
        "condition": "partly cloudy",
        "humidity": 65,
    }
    return f"Weather in {location}: {mock_data}"

# Run the agent - the tool call and model calls are traced together
result = weather_agent.run_sync("What's the weather like in Paris?")
print(result.output)
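
In Arize, the get_weather_data call appears as its own span beneath the agent run, so the arguments the model passed to the tool and the string it returned are visible alongside the model calls themselves.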

What Gets Instrumented

With the setup above, Arize automatically traces your Pydantic AI agent operations, giving you complete visibility into:

Core Agent Interactions

  • Agent Conversations: Complete conversations between your application and AI models

  • Structured Outputs: Pydantic model validation, parsing results, and type safety

  • Input/Output Tracking: Detailed logging of all agent inputs and generated outputs

Advanced Agent Features

  • Tool Usage: Calls to external tools, including their parameters and responses

  • Multi-Agent Workflows: Complex interactions and data flow between multiple agents (see the sketch after this list)

  • System Prompt Tracking: How system prompts influence agent behavior
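
To illustrate the multi-agent case, here is a minimal sketch of agent delegation; the agent names, prompts, and model choice are illustrative rather than taken from the examples above. One agent calls another from inside a tool, and with instrument=True both agents' model calls appear in the same trace:

from pydantic_ai import Agent, RunContext
from pydantic_ai.models.openai import OpenAIModel

# Illustrative child agent that the parent delegates to
summarizer_agent = Agent(
    model=OpenAIModel("gpt-4"),
    system_prompt="Summarize the given text in two sentences.",
    instrument=True,
)

# Illustrative parent agent with a delegation tool
research_agent = Agent(
    model=OpenAIModel("gpt-4"),
    system_prompt="Answer the question, then use your tool to summarize your answer.",
    instrument=True,
)

@research_agent.tool
async def summarize(ctx: RunContext[None], text: str) -> str:
    """Delegate summarization to the child agent."""
    # Passing ctx.usage rolls the child agent's token usage into the parent run
    result = await summarizer_agent.run(text, usage=ctx.usage)
    return result.output

# Both agents' spans land in the same trace in Arize
result = research_agent.run_sync("What is OpenTelemetry and why is it useful?")
print(result.output)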

Performance & Reliability Monitoring

  • Performance Metrics: Response times, token usage, and throughput analytics

  • Error Handling: Validation errors, API failures, retry attempts, and recovery

  • Success Rates: Agent completion rates and quality metrics

Production Insights

  • Usage Patterns: How agents are being used in production

  • Cost Tracking: Token usage and API costs across different models

  • Optimization Opportunities: Identify bottlenecks and improvement areas
