Arize AI Raises $70M Series C to Build the Gold Standard for AI Evaluation & Observability

Jason Lopatecki

Co-founder and CEO

Aparna Dhinakaran

Co-founder & Chief Product Officer

In 2020, we founded Arize with a clear mission: to give teams the tools they need to understand, troubleshoot, and improve AI performance in the real world. Our initial seed investment deck started with the simple line “We Make AI Work.”

Since then, AI has evolved at breakneck speed—expanding beyond traditional machine learning into generative models, multi-agent systems, and autonomous decision-making. But with all this progress comes one of the most significant challenges AI builders have faced: how to make artificial intelligence actually work.

That’s why today, we’re thrilled to announce our $70 million Series C to accelerate our mission: ensuring LLMs and AI agents don’t just work—but work reliably at scale in the real world.

Powering The Next Generation of AI Agents

This round, the largest-ever investment in AI observability, was led by Adams Street Partners, with participation from M12 (Microsoft’s venture fund), Sinewave Ventures, OMERS Ventures, Datadog, PagerDuty, Industry Ventures, and Archerman Capital. We’re also grateful to our existing investors—Foundation Capital, Battery Ventures, TCV, and Swift VC—for doubling down on their commitment to our vision.

Why now? AI is no longer confined to research labs or X/Twitter demos. AI agents will soon be making real-world decisions in trading, logistics, and critical infrastructure, often without direct human oversight. As a result, trust, evaluation, and reliability have never been more important. Arize ensures that AI teams can test, debug, and optimize their systems before failures cascade into production.

Unified Platform: Evaluation & Observability

“I have Cursor open in one window and Arize open in another.” – Arize customer

Our vision of how the next generation of intelligent applications will be built is radically different from how we build software today:

In software development, you have different systems for development and production.

In AI, data is the fuel that drives development: the data derived from production will power development itself.

In software, tracing is an afterthought.

In AI, tracing is a first-class citizen, instrumental even in local AI development.

In software, tests are fairly simple code.

In AI, testing requires AI evaluations and those same AI evaluations are used to perfect your product in production.

In software, there is code and a small number of people who can edit that code.

In AI, there are prompts & models, and anyone who can write English can edit a prompt.

In software, you deliver fixed deterministic systems that process data.

In AI, we believe that self-learning systems, powered by production data and directed by AI evaluations, will optimize themselves in iterative loops.

Simply put, in Artificial Intelligence, there will be a single Unified Platform across development and production, Evaluation and Observability, unified by data.
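The self-learning loop described above can be sketched in plain Python. This is a minimal, framework-agnostic illustration, not Arize's API: the `judge` function and the sample traces are hypothetical stand-ins for an LLM-as-judge evaluator and real production data. Outputs that fail evaluation are routed back into a development dataset, which is the feedback loop the vision describes.

```python
# Minimal sketch of an eval-driven feedback loop: production outputs are
# scored by an evaluator, and failing cases become development test cases.
# All names here are hypothetical illustrations, not part of any Arize API.

def judge(question: str, answer: str) -> float:
    """Stand-in evaluator; in practice this would be an LLM-as-judge call."""
    # Toy heuristic: an answer passes if it mentions the question's topic word.
    topic = question.split()[-1].rstrip("?").lower()
    return 1.0 if topic in answer.lower() else 0.0

def eval_loop(production_traces, threshold=0.5):
    """Score each production trace; route failures back into the dev dataset."""
    dev_dataset, passed = [], []
    for trace in production_traces:
        score = judge(trace["question"], trace["answer"])
        if score < threshold:
            dev_dataset.append(trace)  # failure -> new development case
        else:
            passed.append(trace)
    return passed, dev_dataset

traces = [
    {"question": "What is the capital of France?",
     "answer": "Paris is the capital of France."},
    {"question": "What is the capital of Peru?",
     "answer": "I am not sure."},
]
passed, dev_dataset = eval_loop(traces)
```

Here the second trace fails the evaluator and is added to the development dataset, illustrating how production data, directed by evaluations, feeds the next development iteration.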

Independence Matters

The burgeoning ecosystem of agent frameworks, gateways, and model providers means that independence matters more than ever.

In response, we’ve built a best-in-class, framework-independent AI evaluation and observability suite to help AI engineers debug, monitor, and optimize AI systems:

  • Arize AX for Enterprise – The leading evaluation and observability platform for AI engineers, spanning generative AI, AI agents, machine learning, and computer vision.
  • Arize Phoenix OSS – The open-source AI observability and performance tracing tool launched in 2023, now with over two million monthly downloads and growing.
  • Arize AI Copilot – The first AI assistant for AI engineers, launched in 2024, with over 50 built-in skills, from debugging agent traces to writing evals and optimizing prompts.

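Treating tracing as a first-class citizen, as tools like Phoenix do, amounts to recording every step of an AI pipeline as a span. The sketch below is a generic illustration of that idea in plain Python (Phoenix's actual API differs; the decorator, `SPANS` list, and pipeline steps are hypothetical):

```python
# Minimal sketch of first-class tracing for an AI pipeline.
# A decorator records each call as a span (name, inputs, output, latency)
# that an observability backend could ingest. Hypothetical illustration only.
import functools
import time

SPANS = []  # in a real system, spans would be exported to a collector

def traced(fn):
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = fn(*args, **kwargs)
        SPANS.append({
            "name": fn.__name__,
            "input": args,
            "output": result,
            "latency_s": time.perf_counter() - start,
        })
        return result
    return wrapper

@traced
def retrieve(query):
    return ["doc-1", "doc-2"]  # stand-in retrieval step

@traced
def generate(query, docs):
    return f"answer using {len(docs)} docs"  # stand-in generation step

docs = retrieve("what is observability?")
answer = generate("what is observability?", docs)
```

With every step instrumented this way, debugging an agent becomes a matter of inspecting the recorded spans rather than reproducing the failure from scratch.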
AI teams need better infrastructure for debugging and evaluation—not just for today’s AI applications, but for the future of multi-agent systems, reinforcement learning, and autonomous AI.

That’s why we’re also expanding our partnership with Microsoft, bringing deeper integrations with Azure AI Studio, the Azure AI Foundry portal, SDK, and CLI. Additionally, we continue to deepen technical integrations with Google Cloud and NVIDIA’s AI microservices, making it easier for AI engineers to standardize observability across any stack.

Shaping the Future of Trustworthy LLMs & AI Agents

At Arize, we believe AI can only reach its full potential if it’s built on a foundation of reliability, transparency, and accountability. As AI takes on high-stakes roles in finance, healthcare, and autonomous systems, ensuring its trustworthiness isn’t just important—it’s mission-critical.

From day one, we’ve been committed to building the infrastructure AI engineers need to push the field forward—whether that’s debugging complex models, closing gaps in training data, reducing bias, or optimizing multi-agent systems. Our goal isn’t just to make AI work; it’s to make AI work responsibly, explainably, and in ways that amplify human decision-making.

This funding isn’t just about our growth—it’s about investing in the broader AI ecosystem. We’re doubling down on our work with customers, partners, and the open-source community to ensure AI remains a force for progress—rather than an unchecked risk.

What’s Next?

With this new round of funding, we’re doubling down on our mission:

  • Scaling AI evaluation and monitoring for LLMs, AI agents, and multi-agent systems
  • Expanding Arize Phoenix OSS, now the most widely adopted AI observability library
  • Advancing research through OpenEvals and AgentEvals initiatives
  • Hiring world-class engineers to shape the future of AI observability

We’re Hiring! Join Us in Shaping the Future of AI Observability

Building the future of AI observability isn’t just an exciting technical challenge—it’s a mission-critical problem that will define how AI is built and deployed for years to come.

At Arize, we don’t just build tools; we tackle the hardest problems in AI reliability. Our engineers, researchers, and product teams work at the intersection of machine learning, software engineering, and AI infrastructure, developing technology that helps companies push the boundaries of what’s possible with LLMs, autonomous agents, and reinforcement learning.

We’re looking for curious, driven engineers, researchers, and GTM builders who are passionate about AI’s future and want to ensure it’s built on a solid foundation. If you want to work on the bleeding-edge of AI infrastructure, we’d love to hear from you. Check out our open roles here.

Big things are ahead for 2025, and we’re just getting started.

–Jason & Aparna