The Evaluator
Your go-to blog for insights on AI observability and evaluation.

Evaluating Large Language Models: Are Modern Benchmarks Sufficient?
As GenAI development accelerates, testing and evaluation have come into particular focus, prompting the release of several LLM benchmarks. Each of these benchmarks tests the…

Building and Deploying Observable AI Agents with Google Agent Framework and Arize
Co-authored by Ali Arsanjani, Director of Applied AI Engineering at Google Cloud. Introduction: The Dawn of the Agentic Era. We have entered a new era of AI innovation…

Arize AI and the Future of Agent Interoperability: Embracing Google’s A2A Protocol
We’re excited to announce that Arize AI is joining Google as a launch partner for the Agent2Agent (A2A) protocol, an open standard enabling seamless communication between AI agents…

Tracing and Evaluating Gemini Audio with Arize
Google’s Gemini models represent a powerful leap forward in multimodal AI, particularly in their ability to process and transcribe audio content with remarkable accuracy. However, even advanced models require robust…
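The post covers Arize-specific instrumentation; as a rough sketch of the underlying idea only, here is a plain-OpenTelemetry trace wrapped around a Gemini audio call. The model name, file name, and span attributes are illustrative assumptions, not taken from the post:

```python
# Sketch: tracing a Gemini audio transcription call with OpenTelemetry.
import google.generativeai as genai
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import SimpleSpanProcessor, ConsoleSpanExporter

# Console exporter for the sketch; in practice you would point an OTLP
# exporter at your collector or observability backend.
trace.set_tracer_provider(TracerProvider())
trace.get_tracer_provider().add_span_processor(
    SimpleSpanProcessor(ConsoleSpanExporter())
)
tracer = trace.get_tracer("gemini-audio-demo")

genai.configure(api_key="YOUR_API_KEY")
model = genai.GenerativeModel("gemini-1.5-pro")  # assumed model name

with tracer.start_as_current_span("transcribe_audio") as span:
    audio = genai.upload_file("meeting.mp3")  # hypothetical local file
    response = model.generate_content(["Transcribe this audio.", audio])
    span.set_attribute("llm.model_name", "gemini-1.5-pro")
    span.set_attribute("output.value", response.text[:200])
    print(response.text)
```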

AI Benchmark Deep Dive: Gemini 2.5 and Humanity’s Last Exam
Our latest paper reading provided a comprehensive overview of modern AI benchmarks, taking a close look at Google’s recent Gemini 2.5 release and its performance on key evaluations, notably the…

Model Context Protocol
Want to learn more about Anthropic’s groundbreaking Model Context Protocol (MCP)? We break down how this open standard is revolutionizing AI by enabling seamless integration between LLMs and external data…
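To make the standard concrete, here is a minimal MCP server sketch using the official Python SDK's FastMCP helper. The tool name and logic are invented for illustration, not from the post:

```python
# Sketch: a minimal MCP server exposing one tool to MCP clients.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("demo-server")

@mcp.tool()
def lookup_order(order_id: str) -> str:
    """Return the status of an order from an external system."""
    # A real server would query a database or API here.
    return f"Order {order_id}: shipped"

if __name__ == "__main__":
    mcp.run()  # serves over stdio by default, so an MCP client can connect
```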

Self-Improving Agents: Automating LLM Performance Optimization using Arize and NVIDIA NeMo
Enterprises face a critical challenge in keeping their LLMs accurate and reliable over time. Traditional model improvement approaches are slow, manual, and reactive, making it difficult to scale and…

Prompt Optimization Techniques
LLMs are powerful tools, but their performance is heavily influenced by how prompts are structured. The difference between an effective and ineffective prompt can determine whether a model produces accurate…
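As one hedged illustration of the theme rather than the post's specific techniques, compare an unstructured prompt with one that adds role, task, audience, and format constraints:

```python
# Illustrative only: the same request, unstructured vs. structured.
vague_prompt = "Summarize this."

structured_prompt = """You are a financial analyst.
Task: Summarize the earnings report below in exactly 3 bullet points.
Audience: executives with no time for detail.
Format: plain-text bullets, each under 20 words.

Report:
{report_text}
"""

print(structured_prompt.format(report_text="..."))
```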

Prompt Management from First Principles
How we built a holistic prompt management system that preserves developer freedom. Unlike traditional software, where code execution follows predictable paths, LLM applications are inherently non-deterministic. Their behavior is shaped…
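As a sketch of one primitive such a system might manage (field names are assumptions, not Arize's schema), a prompt can be stored as an immutable, versioned record so that each change is auditable and reproducible:

```python
# Sketch: an immutable, versioned prompt record.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class PromptVersion:
    name: str          # logical prompt identity, e.g. "support-triage"
    version: int       # bumped on every edit; old versions are kept
    template: str      # the prompt text with placeholders
    model: str         # model the version was tested against
    created_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

v1 = PromptVersion(
    name="support-triage",
    version=1,
    template="Classify this ticket: {ticket}",
    model="gpt-4o",
)
```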

How We Scaled Support in Arize Copilot Without Slowing Down
Arize Copilot has always had a clear vision: to empower AI engineers and data scientists to spend less time on repetitive tasks and more time building innovative applications. Copilot streamlines…