With machine learning (ML) models becoming increasingly complex, it is imperative for ML teams to use state-of-the-art ML observability tools to monitor data quality, drift, performance, and explainability. In this workshop you will learn how to troubleshoot, triage, and resolve issues in production ML environments. Experience ML observability firsthand with a walkthrough of the Arize platform, using practical use cases to identify segments where your model underperforms and to assess the business impact of those models' decisions. We will also take a deep dive into the root causes of data and model drift in order to mitigate future performance degradation.
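As a taste of the kind of drift analysis covered in the workshop, here is a minimal sketch of one common drift metric, the Population Stability Index (PSI), which compares a production feature distribution against a training-time baseline. This is a generic illustration, not the Arize platform's API or implementation; the function name and thresholds are conventional choices, not taken from the workshop materials.

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """PSI between a baseline sample and a production sample.

    Common rule of thumb: < 0.1 little drift, 0.1-0.25 moderate,
    > 0.25 significant. (A generic sketch, not Arize's implementation.)
    """
    # Bin edges come from the baseline distribution; extend the outer
    # edges to +/- infinity so out-of-range production values are counted.
    edges = np.histogram_bin_edges(expected, bins=bins)
    edges[0], edges[-1] = -np.inf, np.inf
    expected_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    actual_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Clip to a small epsilon to avoid log(0) and division by zero.
    eps = 1e-6
    expected_pct = np.clip(expected_pct, eps, None)
    actual_pct = np.clip(actual_pct, eps, None)
    return float(np.sum((actual_pct - expected_pct)
                        * np.log(actual_pct / expected_pct)))

rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 10_000)  # training-time feature sample
shifted = rng.normal(0.5, 1.0, 10_000)   # production sample with a mean shift
print(population_stability_index(baseline, rng.normal(0.0, 1.0, 10_000)))  # near 0
print(population_stability_index(baseline, shifted))                       # elevated
```

In practice an observability tool computes metrics like this per feature and per time window, so a drifting segment can be traced back to the upstream data change that caused it.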