ML Observability 101: How To Make Your Models Work IRL

What is ML Observability?

Will your model work in production? Why isn't your model performing the way you thought it would? What's wrong, and why?

Successfully taking a machine learning model from research to production is hard. As more and more machine learning models are deployed into production, it is imperative we have better observability tools to monitor, troubleshoot, and explain their decisions.
ML Observability helps you eliminate the guesswork and deliver continuous model improvements. Learn how to:

  • Use statistical distance checks to monitor features and model output in production
  • Analyze performance regressions such as drift and how they impact business metrics
  • Use troubleshooting techniques to determine if issues are model or data related
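To make the first bullet concrete, here is a minimal sketch of one common statistical distance check: the Population Stability Index (PSI), which compares a feature's distribution in production against a reference (training) window. The binning strategy, smoothing constant, and the 0.2 alert threshold are illustrative assumptions, not Arize's specific implementation.

```python
import numpy as np

def psi(reference, production, bins=10, eps=1e-6):
    """Population Stability Index between a reference (training)
    distribution and a production distribution of one feature."""
    # Bin edges come from the reference window so both samples
    # are compared on the same scale.
    edges = np.histogram_bin_edges(reference, bins=bins)
    # Open the outermost bins so out-of-range production values
    # are still counted rather than silently dropped.
    edges[0], edges[-1] = -np.inf, np.inf
    ref_pct = np.histogram(reference, bins=edges)[0] / len(reference)
    prod_pct = np.histogram(production, bins=edges)[0] / len(production)
    # Smooth empty bins to avoid log(0) and division by zero.
    ref_pct = np.clip(ref_pct, eps, None)
    prod_pct = np.clip(prod_pct, eps, None)
    return float(np.sum((prod_pct - ref_pct) * np.log(prod_pct / ref_pct)))

rng = np.random.default_rng(0)
train = rng.normal(0, 1, 10_000)    # reference feature values
stable = rng.normal(0, 1, 10_000)   # production, same distribution
shifted = rng.normal(1, 1, 10_000)  # production, drifted mean

print(psi(train, stable))   # near zero: no drift
print(psi(train, shifted))  # well above the common 0.2 alert threshold
```

The same check can be run per feature and on the model's output scores on a schedule; a sustained PSI above a chosen threshold is the signal to trigger the troubleshooting steps in the last bullet.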

Access The Recording

About the author

Aparna Dhinakaran
Co-founder & Chief Product Officer

Aparna Dhinakaran is the Co-Founder and Chief Product Officer at Arize AI, a pioneer and early leader in machine learning (ML) observability. A frequent speaker at top conferences and thought leader in the space, Dhinakaran was recently named to the Forbes 30 Under 30. Before Arize, Dhinakaran was an ML engineer and leader at Uber, Apple, and TubeMogul (acquired by Adobe). During her time at Uber, she built several core ML infrastructure platforms, including Michelangelo. She holds a bachelor's degree in Electrical Engineering and Computer Science from UC Berkeley, where she published research with Berkeley's AI Research group. She is on a leave of absence from the Computer Vision Ph.D. program at Cornell University.

Get ML observability in minutes.

Get Started