Videos
This hands-on workshop introduces Phoenix, Arize AI's open-source library for ML observability in a notebook. We'll first explain the concept of ML observability from first principles: at a high level, a machine learning system is observable if you can not only detect data quality, drift, and performance issues in production (monitoring), but can also quickly identify the root cause of those issues (root-cause analysis). You'll then see these concepts in action in the interactive portion of the workshop, where you'll use Phoenix in an active learning workflow to:
- Monitor an image classification model in production
- Detect a production drift issue
- Automatically identify and export problematic production data for labeling and fine-tuning of your image classification model (a code sketch of this workflow follows the list)
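
For orientation, here is a minimal notebook sketch of that workflow. It assumes the `px.Schema` / `px.Dataset` / `px.launch_app` API from earlier Phoenix releases (some newer versions rename `Dataset` to `Inferences`), and it uses synthetic stand-in data; the column names, labels, and the `session.exports` accessor are illustrative assumptions rather than the workshop's actual code.

```python
import numpy as np
import pandas as pd
import phoenix as px

# Tiny synthetic stand-ins for the workshop's image-classification data:
# each row has an embedding vector, a link to the image, and predicted
# and actual labels.
def make_df(n: int, seed: int) -> pd.DataFrame:
    rng = np.random.default_rng(seed)
    return pd.DataFrame(
        {
            "prediction_ts": pd.date_range("2024-01-01", periods=n, freq="h"),
            "image_vector": list(rng.normal(size=(n, 512))),
            "image_url": [f"https://example.com/img_{i}.jpg" for i in range(n)],
            "predicted_label": rng.choice(["cat", "dog", "bird"], size=n),
            "actual_label": rng.choice(["cat", "dog", "bird"], size=n),
        }
    )

train_df, prod_df = make_df(500, seed=0), make_df(500, seed=1)

# Describe how Phoenix should read the dataframes.
schema = px.Schema(
    timestamp_column_name="prediction_ts",
    prediction_label_column_name="predicted_label",
    actual_label_column_name="actual_label",
    embedding_feature_column_names={
        "image_embedding": px.EmbeddingColumnNames(
            vector_column_name="image_vector",
            link_to_data_column_name="image_url",
        ),
    },
)

# Launch the app with production as the primary dataset and training as
# the reference, so drift is measured between the two.
prod_ds = px.Dataset(dataframe=prod_df, schema=schema, name="production")
train_ds = px.Dataset(dataframe=train_df, schema=schema, name="training")
session = px.launch_app(primary=prod_ds, reference=train_ds)

# After selecting and exporting a drifted embedding cluster in the UI, the
# exported points can be pulled back into the notebook for labeling and
# fine-tuning. The accessor below is an assumption; check the export docs
# for your Phoenix version.
exported_clusters = session.exports
```

In the workshop itself, the drift detection and cluster selection happen interactively in the Phoenix UI; the sketch only shows how the production and training data are wired into the app and how exported points return to the notebook.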