Machine Learning Observability 101

Despite the tremendous growth and breakthroughs in machine learning and AI, models routinely run into performance degradation in the real world for a variety of reasons. Given the high stakes for both companies and society at large, it’s more important than ever for humans to understand AI—and know how to fix it when it breaks.

ML observability is how that mission is accomplished and the topic of this ebook. While ML monitoring alerts you when the performance of your model is degrading, ML observability helps you get to the bottom of why—a bigger, harder problem. 

In this ebook, you’ll learn: 

  • What to monitor in production and common model failure modes
  • Best practices for getting to the bottom of model performance issues and measuring business outcomes
  • The different levels of explainability and how each can be used across the ML lifecycle
  • Best practices for detecting and diagnosing issues with data quality and drift
  • A primer on service health and service-level ML performance monitoring
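To give a flavor of the drift detection covered above, one widely used metric is the Population Stability Index (PSI), which compares a production feature's distribution against a training baseline. A minimal sketch (the function name, bin count, and thresholds here are illustrative assumptions, not taken from the ebook):

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """PSI between a baseline sample and a production sample.

    PSI = sum((a_i - e_i) * ln(a_i / e_i)) over shared histogram bins.
    A common rule of thumb (teams vary): < 0.1 stable, 0.1-0.25 moderate
    drift, > 0.25 significant drift.
    """
    # Bin edges are fixed from the baseline (training) distribution.
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_counts, _ = np.histogram(expected, bins=edges)
    a_counts, _ = np.histogram(actual, bins=edges)
    # Clip to a small epsilon to avoid division by zero / log(0) in empty bins.
    e_pct = np.clip(e_counts / e_counts.sum(), 1e-6, None)
    a_pct = np.clip(a_counts / a_counts.sum(), 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 10_000)
shifted = rng.normal(0.75, 1.0, 10_000)  # simulated drifted feature

print(population_stability_index(baseline, baseline))  # 0.0 for identical samples
print(population_stability_index(baseline, shifted))   # well above the 0.25 alert line
```

In practice a check like this runs per feature on a schedule, with the baseline frozen at training time; observability tooling builds on the same idea but adds slicing, alerting, and root-cause workflows.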

Read the eBook

About the author

Aparna Dhinakaran
Co-founder & Chief Product Officer

Aparna Dhinakaran is Chief Product Officer at Arize AI, a startup focused on ML observability. She was previously an ML engineer at Uber, Apple, and TubeMogul (acquired by Adobe). During her time at Uber, she built a number of core ML infrastructure platforms, including Michelangelo. She holds a bachelor's degree in Electrical Engineering and Computer Science from UC Berkeley, where she published research with the Berkeley AI Research group. She is on a leave of absence from the Computer Vision Ph.D. program at Cornell University.

Ready to level up your ML observability game?

Request A Demo