Videos

Best Practices Workshop Series: AI Model Monitoring & Optimization

On-Demand

  30 minutes

This virtual workshop series is designed for data scientists, data engineers, and machine learning engineers who want hands-on experience with the core concepts of model performance, monitoring, and explainability for their AI models.

Identifying Recommendation Bias In Ranking Models | May 17th, 9:30am PT

Stop losing valuable customers and revenue to biased recommendations. Ranking models play a crucial role in driving personalized recommendations for customers across the board, but they can also inadvertently introduce bias. 

Join this workshop to learn how to detect and mitigate recommendation bias in your models, maximize customer satisfaction, and increase revenue with Arize.

In this workshop, you will learn how to:

  • Monitor rank-aware evaluation metrics such as NDCG, MAP, and Group AUC to identify and troubleshoot problematic ranking groups (a minimal NDCG sketch follows this list).
  • Create custom recommendation metrics such as personalization, diversity, and popularity to help root-cause areas of concern.
  • Evaluate business metrics such as customer churn, lifetime value, and clicks to purchase to measure business impact.
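
As a rough illustration of the first bullet, the sketch below computes NDCG@k for a few hypothetical ranking groups with plain NumPy; the group names and relevance labels are made up for the example, and this is not Arize's implementation of the metric.

```python
import numpy as np

def dcg_at_k(relevances, k):
    """Discounted cumulative gain over the top-k items, in ranked order."""
    rel = np.asarray(relevances, dtype=float)[:k]
    discounts = np.log2(np.arange(2, rel.size + 2))
    return float(np.sum(rel / discounts))

def ndcg_at_k(relevances, k):
    """NDCG: DCG normalized by the DCG of an ideal (best-possible) ordering."""
    ideal_dcg = dcg_at_k(sorted(relevances, reverse=True), k)
    return dcg_at_k(relevances, k) / ideal_dcg if ideal_dcg > 0 else 0.0

# Hypothetical graded relevance labels per ranking group,
# listed in the order the model actually ranked the items.
group_relevances = {
    "new_users": [3, 2, 0, 1, 0],
    "returning_users": [0, 1, 3, 2, 0],
}
for group, rels in group_relevances.items():
    print(group, round(ndcg_at_k(rels, k=5), 3))
```

A noticeably lower NDCG for one group relative to the others is a signal that recommendations for that segment deserve a closer look.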

This workshop is designed for AI/ML practitioners such as data scientists, data engineers, and machine learning engineers who want a comprehensive framework for monitoring their ranking models so they deliver high-quality, relevant recommendations.

Real-Time Observability In ‘The Age of AI’ | May 24th, 9:30am PT

To mitigate the impact of model failures, you need a real-time data pipeline for ML observability that gives you an accurate view of model health at all times. This helps prevent ‘garbage in, garbage out’ evaluations on stale data, protects fairness standards, and supports business objectives in real time.

This workshop will teach you how to connect your data source directly to Arize, automatically sync new data to evaluate model health, and proactively monitor your model’s behavior across various evaluation metrics.

This workshop is designed for AI/ML practitioners such as data engineers and machine learning engineers who seek hands-on experience connecting and optimizing their AI systems. 

You will learn how to:

  • Automatically sync your latest data to calculate model health metrics on the most up-to-date model inferences (see the sketch after this list)
  • Monitor your AI systems in real time and identify issues before they become critical
  • Ensure fairness and trace bias in your AI systems
  • Create relevant performance dashboards to share with your stakeholders
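
As a minimal illustration of the first two bullets (not the Arize pipeline itself), the sketch below computes a health metric over only the most recent window of inferences and flags a drop; the column names, window, and alert threshold are assumptions.

```python
import pandas as pd

def latest_window_accuracy(inferences: pd.DataFrame, window: str = "1h") -> float:
    """Accuracy computed only over inferences from the most recent time window."""
    cutoff = inferences["timestamp"].max() - pd.Timedelta(window)
    recent = inferences[inferences["timestamp"] >= cutoff]
    return float((recent["prediction"] == recent["actual"]).mean())

# Hypothetical inference log; in practice this would be the latest synced data.
df = pd.DataFrame({
    "timestamp": pd.to_datetime(["2023-05-24 09:00", "2023-05-24 09:20", "2023-05-24 09:40"]),
    "prediction": [1, 0, 1],
    "actual":     [1, 1, 1],
})

acc = latest_window_accuracy(df, window="1h")
if acc < 0.9:  # assumed alert threshold
    print(f"Accuracy over the last hour dropped to {acc:.2f} -- worth investigating.")
```
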
Zero To Hero: High-Dimensional Data Visualization (LLM, NLP, and CV Model Monitoring) | May 31st, 9:30am PT

There are many unknowns about how businesses, individuals, and society will continue adopting AI. However, one thing is certain – models that generate unstructured data are here to stay. 

About 80% of the data generated is unstructured, such as images, text, or audio, and ML teams that work with this type of data often ship models without the right tools for these use cases. This lack of visibility can create costly and time-intensive labeling/retraining efforts. 

This workshop will help you navigate the ins and outs of evaluating the performance of models built on unstructured data. Learn how to generate embeddings, monitor embedding drift, and visualize your dataset to troubleshoot model issues.

This workshop is designed for all AI/ML practitioners such as data scientists, data engineers, and machine learning engineers regardless of their experience with embeddings.

Learn how to: 

  • Create dense vectors using the Arize Python SDK
  • Store and upload your embedding vectors and features using a cloud storage provider
  • Visualize and monitor embedding drift using Euclidean distance (sketched after this list)
  • Interact with a UMAP point cloud and clusters to troubleshoot drift
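
As a small, self-contained sketch of the last two bullets (using NumPy and the open-source umap-learn package rather than the Arize UI), the example below measures embedding drift as the Euclidean distance between the average baseline and production embedding vectors and projects both sets into a 2D UMAP point cloud; the synthetic embeddings and the drift offset are assumptions.

```python
import numpy as np
import umap  # from the umap-learn package

rng = np.random.default_rng(0)
baseline_emb = rng.normal(0.0, 1.0, size=(500, 768))    # reference embeddings
production_emb = rng.normal(0.3, 1.0, size=(500, 768))  # synthetic "drifted" batch

# One common way to quantify embedding drift: Euclidean distance
# between the centroid (average) embedding of each dataset.
drift = float(np.linalg.norm(baseline_emb.mean(axis=0) - production_emb.mean(axis=0)))
print(f"Centroid Euclidean distance: {drift:.3f}")

# Project both sets into 2D to get a UMAP point cloud you can inspect for clusters.
points_2d = umap.UMAP(n_components=2, random_state=42).fit_transform(
    np.vstack([baseline_emb, production_emb])
)
print(points_2d.shape)  # (1000, 2); first 500 rows are baseline, the rest production
```
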
Fundamentals of Dynamic Monitoring and Performance Insights | June 7th, 9:30am PT

Whether your machine learning (ML) model recommends products on an e-commerce website or predicts fraudulent transactions, production models can change quickly, resulting in suboptimal predictions and negative business outcomes.

This workshop will explore the core concepts of model performance, model monitoring, and model explainability across all model use cases. 

This session is designed for ML model practitioners – data scientists, data engineers, and machine learning engineers – to gain hands-on experience.

You will learn how to use model performance tracing and model monitoring to ensure high-performing models and prevent financial losses, including:

  • Root-causing performance issues and gaining insights across areas of concern (a toy slicing sketch follows this list)
  • Enabling proactive monitors tailored to your model’s feature importance
  • Creating dashboards to share insights with your team and keep stakeholders informed
  • Incorporating model performance feedback into your existing ML pipeline
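
To make the first bullet concrete, here is a toy pandas sketch of one way to root-cause a performance dip by slicing the same metric across a feature's segments; the feature, predictions, and labels are made up, and this is not Arize's performance tracing itself.

```python
import pandas as pd

# Hypothetical production inferences with one categorical feature to slice on.
df = pd.DataFrame({
    "region":     ["US", "US", "EU", "EU", "APAC", "APAC"],
    "prediction": [1, 0, 1, 1, 0, 0],
    "actual":     [1, 0, 0, 1, 1, 0],
})

# The overall metric can hide where the model is struggling...
print("overall accuracy:", (df["prediction"] == df["actual"]).mean())

# ...so compute the same metric per segment to surface problem areas.
by_segment = (
    df.assign(correct=df["prediction"] == df["actual"])
      .groupby("region")["correct"]
      .mean()
      .sort_values()
)
print(by_segment)
```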

View Recordings

Speakers

Jack Zhou
Product Manager

Sally-Ann DeLucia
ML Solutions Engineer

Claire Longo
Customer Success Lead

Aman Khan
Group Product Manager
