Webinar

Advanced Metrics Workshop Series

  Live

  30 minutes

Join us every Thursday for a 30-minute Advanced Metrics Workshop. We will dive into the metrics teams use to calculate fairness, performance, and drift in production: understand the top fairness metrics teams are using to mitigate algorithmic bias, learn when to use each type of drift metric, and see which performance metrics to monitor by isolating the contributing features and cohorts.

AI Fairness Metrics in Production | February 9th, 9am PST

Algorithmic bias in machine learning is both incredibly important and deeply challenging for any organization that uses ML because bias can occur in all stages of the ML lifecycle: in the generation of the dataset, in the training and evaluation phases of a model, and after shipping the model. Join Arize’s advanced workshop series to learn the subtleties of fairness metrics and how to perform bias tracing on a multidimensional level in production. 

In this workshop you will: 

  • Learn the top fairness metrics teams are using to mitigate algorithmic bias
  • Understand how metrics are impacted by a lack of adequate representation of a sensitive class 
  • Perform bias tracing on a multidimensional level by isolating the features and cohorts likely contributing to algorithmic bias
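
To make the idea of a fairness metric concrete, here is a minimal sketch of one commonly used metric, recall parity (also called equal opportunity), assuming binary labels and a single sensitive attribute. The function names and example groups are illustrative, not the workshop's exact metric list:

```python
# Sketch: recall parity (equal opportunity) across a sensitive attribute.
# A ratio near 1.0 means the model catches actual positives at similar
# rates across groups; a low ratio flags a group being under-served.

def recall(y_true, y_pred):
    """Fraction of actual positives the model correctly flagged."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    positives = sum(y_true)
    return tp / positives if positives else 0.0

def recall_parity(y_true, y_pred, groups):
    """Per-group recalls plus the min/max ratio across groups."""
    by_group = {}
    for g in set(groups):
        idx = [i for i, gg in enumerate(groups) if gg == g]
        by_group[g] = recall([y_true[i] for i in idx],
                             [y_pred[i] for i in idx])
    vals = list(by_group.values())
    ratio = min(vals) / max(vals) if max(vals) else 0.0
    return ratio, by_group
```

Tracing bias multidimensionally then amounts to re-running this comparison over intersections of features and cohorts, not just one attribute at a time.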

This talk is relevant to managers, scientists, and engineers who want to get to the root of a fairness issue, or who are looking to build products with fairness in mind.

ML Performance Metrics in Production | February 16th, 9am PST

Taking a model from research to production is hard — and keeping it there is even harder! Many machine learning engineers use performance metrics to gauge how their models are doing post-deployment, but what are the right performance metrics to use, and how sensitively should they be calibrated? Which metrics should you be using for your use case, and how can you monitor them? What does a single score really tell you about profitability, user satisfaction, and the overall success of your model? Join Arize’s advanced workshop series to learn the subtleties of performance metrics and monitoring in production.

In this workshop you will: 

  • Learn the top performance metrics teams are using to mitigate performance degradation
  • Understand when to use each type of performance metric and how to calculate each for monitoring 
  • Root cause performance degradation in production by isolating the contributing features and cohorts
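
As a taste of the root-causing step above, here is a minimal sketch that computes F1 per cohort, so a drop in an aggregate score can be traced to the cohort driving it. The cohort labels are illustrative placeholders:

```python
# Sketch: per-cohort precision/recall/F1 for root-causing a degraded
# aggregate score. A healthy overall F1 can hide a failing cohort.
from collections import defaultdict

def prf1(y_true, y_pred):
    """Precision, recall, and F1 for binary labels."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

def per_cohort_f1(y_true, y_pred, cohorts):
    """F1 broken out by cohort label (e.g. region, device, segment)."""
    grouped = defaultdict(lambda: ([], []))
    for t, p, c in zip(y_true, y_pred, cohorts):
        grouped[c][0].append(t)
        grouped[c][1].append(p)
    return {c: prf1(ts, ps)[2] for c, (ts, ps) in grouped.items()}
```

The same breakout can be repeated over individual features to isolate which slice of traffic is contributing most to the degradation.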

This workshop is relevant to managers, scientists, and engineers who want to get to the root of a performance issue, or who are looking to learn about the latest model evaluation metrics.

Model Drift Metrics in Production | February 23rd, 9am PST

As more machine learning models are deployed into production, it’s imperative to have the right skillset to monitor, troubleshoot, and explain model performance. However, what happens when your ground truths are delayed? How do you know your model is performing optimally if you can’t calculate performance metrics? When should you use PSI vs. KL divergence to measure drift? What about a KS test over the JS distance? Join Arize’s advanced workshop series to learn the subtleties of drift metrics and monitoring in production.

In this workshop you will: 

  • Learn the top drift metrics teams are using as a proxy to performance 
  • Understand when to use each type of drift metric and how to calculate each for monitoring
  • Root cause drift issues in production by isolating the contributing features and cohorts
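
To preview the kind of calculation covered here, this is a minimal sketch of PSI and KL divergence over binned feature distributions, comparing a production window against a training baseline. The bin edges and smoothing constant are illustrative conventions of this sketch, not Arize defaults:

```python
# Sketch: PSI and KL divergence between a baseline (training) distribution
# and a production window, computed over shared histogram bins.
import math

def _proportions(values, edges):
    """Bin values into histogram proportions; smooth empty bins so
    the log terms below stay finite."""
    counts = [0] * (len(edges) - 1)
    for v in values:
        for i in range(len(edges) - 1):
            last_bin = i == len(edges) - 2
            if edges[i] <= v < edges[i + 1] or (last_bin and v == edges[-1]):
                counts[i] += 1
                break
    total = sum(counts) or 1
    return [max(c / total, 1e-6) for c in counts]

def psi(baseline, production, edges):
    """Population Stability Index: symmetric-ish drift score per bin."""
    b = _proportions(baseline, edges)
    p = _proportions(production, edges)
    return sum((pi - bi) * math.log(pi / bi) for bi, pi in zip(b, p))

def kl_divergence(baseline, production, edges):
    """KL divergence of production from baseline (asymmetric)."""
    b = _proportions(baseline, edges)
    p = _proportions(production, edges)
    return sum(pi * math.log(pi / bi) for bi, pi in zip(b, p))
```

Identical distributions score 0; a common rule of thumb (again, a convention, not a universal threshold) treats PSI above roughly 0.2 as meaningful drift worth investigating.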

This workshop is relevant to managers, scientists, and engineers who want to get to the root of a drift issue, or who are looking to learn about the latest model drift metrics.

Register

Speakers

Amber Roberts
Machine Learning Engineer

Amber Roberts is an astrophysicist and machine learning engineer who was previously the Head of AI at Insight Data Science. She then worked in Splunk’s ML product organization as an ML Product Manager, building out ML feature solutions. She now joins Arize as an ML Sales Engineer, helping teams across industries build ML observability into their production AI environments.

Get ML observability in minutes.

Get Started