Best Practices Workshop Series: AI Model Monitoring & Optimization

  Available on-demand

  30 minutes per module

Part 1 - Identifying Recommendation Bias In Ranking Models

Stop losing valuable customers and revenue to biased recommendations. Ranking models play a crucial role in driving personalized recommendations for customers across the board, but they can also inadvertently introduce bias.

Join this workshop to learn how to detect and mitigate recommendation bias in your models, maximize customer satisfaction, and increase revenue with Arize.

In this workshop, you will learn how to:

  • Monitor rank-aware evaluation metrics such as NDCG, MAP, and Group AUC to identify and troubleshoot problematic ranking groups.
  • Create custom recommendation metrics such as personalization, diversity, and popularity to help root cause areas of concern.
  • Evaluate business metrics such as customer churn, lifetime value, and clicks to purchase to measure business impact.
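
The rank-aware metrics above are standard and straightforward to compute by hand. A minimal pure-Python sketch of NDCG, for illustration only (this is the textbook definition, not the Arize implementation):

```python
import math

def dcg(relevances):
    """Discounted cumulative gain of a ranked list of relevance grades."""
    return sum((2 ** rel - 1) / math.log2(pos + 2)
               for pos, rel in enumerate(relevances))

def ndcg(relevances, k=None):
    """Normalized DCG: the model's ranking scored against the ideal ordering."""
    k = k or len(relevances)
    ideal = dcg(sorted(relevances, reverse=True)[:k])
    return dcg(relevances[:k]) / ideal if ideal > 0 else 0.0

# A perfectly ordered slate scores 1.0; inverting the order lowers the score.
print(ndcg([3, 2, 1, 0]))        # 1.0
print(ndcg([0, 1, 2, 3]) < 1.0)  # True
```

Computing this per customer segment (the "ranking groups" above) is what lets you spot groups where the model consistently underperforms.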

This workshop is designed for AI/ML practitioners such as data scientists, data engineers, and machine learning engineers seeking a comprehensive framework for monitoring their ranking models to ensure they deliver high-quality, relevant recommendations.

Part 2 - Real-Time Observability In ‘The Age of AI’

To mitigate the impact of model failures, you need to create a real-time data pipeline within ML observability to gain an accurate view of your model health at all times. This helps prevent ‘garbage in, garbage out’ evaluations on stale data, protect fairness standards, and aid business objectives in real time.

This workshop will teach you how to connect your data source directly to Arize, automatically sync new data to evaluate model health, and proactively monitor your model’s behavior across various evaluation metrics.

This workshop is designed for AI/ML practitioners such as data engineers and machine learning engineers who seek hands-on experience connecting and optimizing their AI systems.

You will learn how to:

  • Automatically sync your latest data to calculate model health metrics on the most up-to-date model inferences
  • Monitor your AI systems in real time and identify issues before they become critical
  • Ensure fairness and trace bias in your AI systems
  • Create relevant performance dashboards to share with your stakeholders
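
As a toy illustration of the idea (not Arize's managed monitors), a rolling-window accuracy monitor that flags degradation as new inferences stream in might look like:

```python
from collections import deque

class RollingAccuracyMonitor:
    """Toy real-time monitor: alert when accuracy over the last `window`
    inferences drops below `threshold`. Hypothetical helper for illustration;
    a managed observability platform automates this kind of check."""

    def __init__(self, window: int = 100, threshold: float = 0.9):
        self.outcomes = deque(maxlen=window)
        self.threshold = threshold

    def record(self, correct: bool) -> bool:
        """Log one inference outcome; return True if the monitor should alert."""
        self.outcomes.append(1 if correct else 0)
        accuracy = sum(self.outcomes) / len(self.outcomes)
        return accuracy < self.threshold

monitor = RollingAccuracyMonitor(window=5, threshold=0.8)
alerts = [monitor.record(ok) for ok in [True, True, True, False, False]]
print(alerts)  # the alert fires only once rolling accuracy falls below 0.8
```

The same pattern generalizes to any health metric computed over a sliding window of recent inferences.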

Part 3 - Zero To Hero: High-Dimensional Data Visualization (LLM, NLP, & CV Model Monitoring)

There are many unknowns about how businesses, individuals, and society will continue adopting AI. However, one thing is certain – models that generate unstructured data are here to stay.

About 80% of the data generated is unstructured, such as images, text, or audio, and ML teams that work with this type of data often ship models without the right tools for these use cases. This lack of visibility can create costly and time-intensive labeling/retraining efforts.

This workshop will help you navigate the ins and outs of evaluating the performance of models built on unstructured data. Learn how to generate an embedding, monitor embedding drift, and visualize your dataset to troubleshoot model issues.

This workshop is designed for all AI/ML practitioners such as data scientists, data engineers, and machine learning engineers regardless of their experience with embeddings.

Learn how to:

  • Create dense vectors using the Arize Python SDK
  • Store and upload your embedding vectors & features using a cloud storage provider
  • Visualize and monitor embedding drift using Euclidean distance
  • Interact with a UMAP point cloud and clusters to troubleshoot drift
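
One common formulation of the Euclidean-distance drift signal mentioned above is the distance between the centroid (average) embedding of a reference window and a production window. A sketch in NumPy, under that assumption:

```python
import numpy as np

def embedding_drift(reference: np.ndarray, production: np.ndarray) -> float:
    """Euclidean distance between the centroids of two embedding windows;
    larger values mean production data has moved away from the baseline."""
    return float(np.linalg.norm(reference.mean(axis=0) - production.mean(axis=0)))

rng = np.random.default_rng(0)
reference = rng.normal(0.0, 1.0, size=(500, 8))  # baseline embeddings
shifted = rng.normal(0.5, 1.0, size=(500, 8))    # production embeddings, shifted
print(embedding_drift(reference, reference))      # 0.0 -- identical windows
print(embedding_drift(reference, shifted) > 0.1)  # True -- drift detected
```

When the distance trends upward, the UMAP point cloud is where you go next: drifted production points typically form clusters separated from the baseline.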

Part 4 - Fundamentals of Dynamic Monitoring & Performance Insights

Whether your machine learning (ML) model recommends products on an e-commerce website or predicts fraudulent transactions, production models can change quickly, resulting in suboptimal predictions and negative business outcomes.

This workshop will explore the core concepts of model performance, model monitoring, and model explainability across all model use cases.

This session is designed for ML model practitioners (data scientists, data engineers, and machine learning engineers) seeking hands-on experience.

You will learn how to use model performance tracing and model monitoring to ensure high-performing models and prevent financial losses, including:

  • Root causing performance issues and gaining insights across areas of concern
  • Enabling proactive monitors tailored to your model’s feature importance
  • Creating dashboards to share insights with your team and keep stakeholders informed
  • Incorporating model performance feedback into your existing ML pipeline
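
A feature-drift check of the kind such monitors automate can be sketched with the Population Stability Index (an assumed, standard formulation shown for illustration; not Arize's exact implementation):

```python
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Population Stability Index for one feature: compares the binned
    distribution of production values against a training reference.
    A common rule of thumb: PSI > 0.2 signals significant drift."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    # Clip production values into the reference range so every value lands in a bin.
    actual = np.clip(actual, edges[0], edges[-1])
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    e_pct = np.clip(e_pct, 1e-6, None)  # avoid log(0) on empty bins
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(1)
train = rng.normal(0.0, 1.0, 5000)    # reference feature values
stable = rng.normal(0.0, 1.0, 5000)   # production, same distribution
drifted = rng.normal(1.0, 1.0, 5000)  # production, shifted distribution
print(psi(train, stable) < psi(train, drifted))  # True
```

Running a check like this per feature, weighted by feature importance, is the intuition behind monitors tailored to the features that matter most to your model.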

Register now

Featured speakers

Amber Roberts
Machine Learning Engineer

Amber Roberts is an astrophysicist and machine learning engineer who was previously Head of AI at Insight Data Science. She then joined Splunk's ML product organization, building out ML feature solutions as an ML Product Manager. She is now an ML Sales Engineer at Arize, helping teams across industries build ML observability into their production AI environments.

Jack Zhou
Product Manager

Sally-Ann DeLucia
ML Solutions Engineer