Pydantic AI
Instrument AI agents built with the Pydantic AI framework
Pydantic AI is a Python agent framework designed to make it less painful to build production-grade applications with Generative AI. Built by the team behind Pydantic, it provides a clean, type-safe way to build AI agents with structured outputs, tool integration, and multi-agent workflows.
Arize provides first-class support for instrumenting Pydantic AI agents with comprehensive observability for input/output messages, structured outputs, tool usage, and complex multi-agent workflows. Monitor your AI agents in production with detailed tracing and performance analytics.
Trace data follows OpenInference, our open-source standard built on OpenTelemetry. Setup uses arize-otel, a lightweight convenience package that configures OpenTelemetry and sends traces to Arize.
Quick Start: Pydantic AI Instrumentation
Installation & Setup
Install the packages, register a tracer as shown below, then run your Pydantic AI agents and monitor traces in Arize. For advanced examples, explore our openinference-instrumentation-pydantic-ai examples.
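The following is a minimal setup sketch, assuming current arize-otel and openinference-instrumentation-pydantic-ai releases. The Space ID, API key, and project name are placeholders for your own values, and the sketch assumes register() installs the tracer provider globally (its default behavior).

```python
# pip install pydantic-ai arize-otel openinference-instrumentation-pydantic-ai

from arize.otel import register
from openinference.instrumentation.pydantic_ai import OpenInferenceSpanProcessor
from pydantic_ai import Agent

# Register an OpenTelemetry tracer provider that exports traces to Arize.
# Replace the placeholders with the credentials from your Arize space settings.
tracer_provider = register(
    space_id="YOUR_SPACE_ID",
    api_key="YOUR_API_KEY",
    project_name="pydantic-ai-agents",  # traces are grouped under this project
)

# Rewrite Pydantic AI's native OTel spans into OpenInference format
# so Arize can render messages, tool calls, and structured outputs.
tracer_provider.add_span_processor(OpenInferenceSpanProcessor())

# Enable Pydantic AI's built-in instrumentation for every agent.
Agent.instrument_all()
```

Alternatively, you can enable instrumentation per agent by passing instrument=True to the Agent constructor instead of calling Agent.instrument_all().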
Basic Agent Usage Example
Here's a simple example using Pydantic AI with automatic tracing for structured outputs:
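This sketch assumes the setup above is in place; the CityInfo schema and model name are illustrative, and older Pydantic AI releases name the output_type parameter result_type (and expose result.data instead of result.output).

```python
from pydantic import BaseModel
from pydantic_ai import Agent

# A Pydantic model describing the structured output we want back.
class CityInfo(BaseModel):
    city: str
    country: str
    population: int

# output_type asks the model for a validated CityInfo instance.
agent = Agent(
    "openai:gpt-4o",
    output_type=CityInfo,
    system_prompt="Answer questions about world cities.",
)

result = agent.run_sync("Tell me about Tokyo.")
print(result.output)  # e.g. CityInfo(city='Tokyo', country='Japan', population=...)
```

With instrumentation enabled, this single run_sync call produces a trace in Arize showing the prompt, the raw model response, and the validated structured output.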
Advanced Pydantic AI Patterns
AI Agents with System Prompts and Tools
Build sophisticated AI agents with custom tools and system prompts:
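A sketch of a tool-using agent under the same setup. The names here are illustrative: get_weather is a stubbed hypothetical tool, and the Deps dataclass stands in for whatever dependencies your agent needs.

```python
from dataclasses import dataclass

from pydantic_ai import Agent, RunContext

@dataclass
class Deps:
    user_name: str  # hypothetical dependency injected into each run

agent = Agent(
    "openai:gpt-4o",
    deps_type=Deps,
    system_prompt=(
        "You are a weather assistant. "
        "Use the get_weather tool to answer weather questions."
    ),
)

@agent.tool
def get_weather(ctx: RunContext[Deps], city: str) -> str:
    """Look up the current weather for a city (stubbed for illustration)."""
    # A real agent would call a weather API here; the tool call, its
    # arguments, and its return value are all captured as child spans.
    return f"It is sunny in {city} today, {ctx.deps.user_name}."

result = agent.run_sync("What's the weather in Paris?", deps=Deps(user_name="Sam"))
print(result.output)
```

Each tool invocation appears in Arize as a child span under the agent run, so you can see which tools were called, with what arguments, and what they returned.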
What Gets Instrumented
With the setup above, Arize automatically traces your Pydantic AI agent operations and captures:
Core Agent Interactions
Agent Conversations: Complete conversations between your application and AI models
Structured Outputs: Pydantic model validation, parsing results, and type safety
Input/Output Tracking: Detailed logging of all agent inputs and generated outputs
Advanced Agent Features
Tool Usage: Tool calls made by agents, including their parameters and responses
Multi-Agent Workflows: Complex interactions and data flow between multiple agents
System Prompt Tracking: How system prompts influence agent behavior
Performance & Reliability Monitoring
Performance Metrics: Response times, token usage, and throughput analytics
Error Handling: Validation errors, API failures, retry attempts, and recovery
Success Rates: Agent completion rates and quality metrics
Production Insights
Usage Patterns: How agents are being used in production
Cost Tracking: Token usage and API costs across different models
Optimization Opportunities: Identify bottlenecks and improvement areas