Model Performance Management Whitepaper


The Next Evolutionary Step In Model Performance Management

Machine learning troubleshooting is painful and time-consuming today, but it doesn’t have to be. This paper charts the evolution ML teams go through, from no monitoring, to monitoring, to full-stack ML observability, and offers a modernization blueprint for implementing ML performance tracing to solve problems faster. In this paper, you’ll learn:
  • Best practices for ML performance troubleshooting
  • The key components of observability and differences between system and ML observability
  • Useful definitions of ML performance tracing, data slice, and performance impact score
  • How to break down performance by slices and do effective root cause analysis (a minimal illustration follows below)
Download the full paper to learn more about full-stack ML observability with ML performance tracing.
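
To give a flavor of what slice-level breakdown looks like in practice, here is a minimal, hypothetical sketch (not taken from the whitepaper). It assumes a tabular log of predictions with a categorical "state" column as the slice, computes accuracy per slice, and ranks slices by a simple volume-weighted accuracy gap as a stand-in for a performance impact score.

```python
# Hypothetical, minimal sketch (not from the whitepaper): break accuracy down
# by a categorical slice and rank slices by a simple volume-weighted gap.
import pandas as pd

# Toy prediction log: one row per prediction, "state" is the slice feature.
df = pd.DataFrame({
    "state":     ["CA", "CA", "CA", "TX", "TX", "NY", "NY", "NY", "NY", "NY"],
    "predicted": [1, 0, 1, 1, 1, 0, 0, 1, 0, 1],
    "actual":    [1, 0, 0, 1, 0, 0, 1, 1, 0, 0],
})

df["correct"] = (df["predicted"] == df["actual"]).astype(int)
overall_accuracy = df["correct"].mean()

# Accuracy and volume per slice.
by_slice = (
    df.groupby("state")["correct"]
      .agg(accuracy="mean", volume="count")
      .reset_index()
)

# Stand-in "impact" heuristic: how far the slice falls below the overall
# metric, weighted by its share of traffic. Larger values = investigate first.
by_slice["impact"] = (overall_accuracy - by_slice["accuracy"]) * by_slice["volume"] / len(df)

print(f"overall accuracy: {overall_accuracy:.2f}")
print(by_slice.sort_values("impact", ascending=False))
```

The full paper goes further than this toy heuristic, covering ML performance tracing and a formal performance impact score, but even a simple grouping like this makes it clear which slices drag overall performance down.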

Download the Whitepaper

About the author

Aparna Dhinakaran
Co-founder & Chief Product Officer

Aparna Dhinakaran is the Co-Founder and Chief Product Officer at Arize AI, a pioneer and early leader in machine learning (ML) observability. A frequent speaker at top conferences and a thought leader in the space, Dhinakaran was recently named to the Forbes 30 Under 30. Before Arize, Dhinakaran was an ML engineer and leader at Uber, Apple, and TubeMogul (acquired by Adobe). During her time at Uber, she built several core ML infrastructure platforms, including Michelangelo. She holds a bachelor’s degree from Berkeley's Electrical Engineering and Computer Science program, where she published research with Berkeley's AI Research group. She is on a leave of absence from the Computer Vision Ph.D. program at Cornell University.
