Arize supports a flexible array of monitoring options to help you better identify drift and understand whether your machine learning systems are performing the way you expect.
Data & Prediction Drift
Arize’s platform can test for data distribution changes across millions of prediction facets, pinpointing specific problems so teams can triage why models are drifting from their intended purpose.
Where no ground truth exists, prediction drift can serve as a proxy metric for performance degradation. Analyze the stability of predictions with lookback windows down to the hourly level.
Monitor features on a regular basis to ensure they have not drifted from training or validation baselines.
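To make the idea concrete, here is a minimal sketch of one common drift statistic, the Population Stability Index (PSI), comparing a production distribution against a training baseline. This is a generic illustration rather than Arize's implementation; the bin count and the 0.2 alert threshold are conventional assumptions.

```python
import numpy as np

def psi(baseline, production, n_bins=10):
    """Population Stability Index between a baseline distribution
    (e.g. training data) and a production distribution."""
    # Derive bin edges from the baseline so both samples are bucketed alike.
    edges = np.histogram_bin_edges(baseline, bins=n_bins)
    # Clip production values into the baseline's range so nothing is dropped.
    production = np.clip(production, edges[0], edges[-1])
    base_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
    prod_pct = np.histogram(production, bins=edges)[0] / len(production)
    # Guard against log(0) in empty bins.
    base_pct = np.clip(base_pct, 1e-6, None)
    prod_pct = np.clip(prod_pct, 1e-6, None)
    return float(np.sum((prod_pct - base_pct) * np.log(prod_pct / base_pct)))

# Hypothetical data: a feature at training time vs. the last hour of traffic.
rng = np.random.default_rng(0)
train_values = rng.normal(0.0, 1.0, 10_000)
recent_values = rng.normal(0.6, 1.1, 2_000)  # shifted distribution

score = psi(train_values, recent_values)
if score > 0.2:  # 0.2 is a common rule-of-thumb threshold for notable shift
    print(f"Drift alert: PSI = {score:.3f}")
```

The same statistic applies equally to the distribution of prediction scores, which is what makes prediction drift usable as a proxy when no ground truth is available.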
The relationship between a system’s inputs and outputs can change over time, causing concept drift. Our platform lets teams monitor this shift, highlighting when a model needs to be retrained.
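Once delayed labels arrive, concept drift is often surfaced by comparing performance over a trailing window against a baseline. The sketch below is one generic way to do that; the window size and the retraining trigger are arbitrary assumptions, not Arize defaults.

```python
import numpy as np

def rolling_accuracy(y_true, y_pred, window=500):
    """Accuracy over a trailing window at each point in the stream."""
    correct = (np.asarray(y_true) == np.asarray(y_pred)).astype(float)
    return np.convolve(correct, np.ones(window) / window, mode="valid")

# Hypothetical stream where the input/output relationship degrades halfway.
rng = np.random.default_rng(1)
y_true = rng.integers(0, 2, 10_000)
y_pred = y_true.copy()
error_rate = np.where(np.arange(10_000) < 5_000, 0.05, 0.25)
flip = rng.random(10_000) < error_rate
y_pred[flip] = 1 - y_pred[flip]

acc = rolling_accuracy(y_true, y_pred)
baseline = acc[:1_000].mean()
if acc[-1] < baseline - 0.10:  # assumed tolerance before flagging retraining
    print(f"Concept drift suspected: accuracy {acc[-1]:.2f} vs baseline {baseline:.2f}")
```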
Performance analysis of models in production can be complex. Our platform provides tools that make it easier to track performance metrics and verify the accuracy of predictions.
Link Ground Truth
Our platform handles the linkage of predictions with ground truth as labels arrive, giving you on-demand access to robust accuracy analysis.
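Mechanically, this amounts to joining a prediction log with a later-arriving label log on a shared ID. A rough pandas sketch, with hypothetical column names, looks like this:

```python
import pandas as pd

# Hypothetical logs: predictions are recorded at serving time,
# ground truth arrives later through a separate pipeline.
predictions = pd.DataFrame({
    "prediction_id": ["a1", "a2", "a3"],
    "predicted_label": ["fraud", "ok", "ok"],
    "prediction_ts": pd.to_datetime(["2024-01-01 10:00",
                                     "2024-01-01 10:05",
                                     "2024-01-01 10:09"]),
})
labels = pd.DataFrame({
    "prediction_id": ["a1", "a3"],  # a2's label has not arrived yet
    "actual_label": ["fraud", "fraud"],
})

# A left join keeps unlabeled predictions visible rather than dropping them.
joined = predictions.merge(labels, on="prediction_id", how="left")
labeled = joined.dropna(subset=["actual_label"])
accuracy = (labeled["predicted_label"] == labeled["actual_label"]).mean()
print(f"Accuracy on {len(labeled)} labeled predictions: {accuracy:.2f}")
```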
Real-time Performance Dashboards
Performance can be tracked across any combination of dimensions in fully customizable dashboards, surfacing problems more quickly.
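Under the hood, slicing performance by dimension is a group-by over labeled prediction records. A minimal pandas sketch with invented columns:

```python
import pandas as pd

# Hypothetical joined records, one row per labeled prediction.
records = pd.DataFrame({
    "region":    ["us", "us", "eu", "eu", "eu", "us"],
    "device":    ["ios", "web", "ios", "web", "ios", "ios"],
    "predicted": [1, 0, 1, 1, 0, 1],
    "actual":    [1, 0, 0, 1, 0, 0],
})
records["correct"] = records["predicted"] == records["actual"]

# Accuracy for every region x device combination, worst slices first.
slices = (records.groupby(["region", "device"])["correct"]
                 .agg(accuracy="mean", volume="size")
                 .sort_values("accuracy"))
print(slices)
```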
Multivariate combinations of feature values that are distinctly different from the training data should be tracked as outliers.
Detect & Separate Outliers
Out-of-distribution points can be tracked, grouped, and analyzed separately to avoid skewing your aggregate performance data.
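One common way to separate such points is to fit an outlier detector on the training data and score production traffic against it. This generic scikit-learn sketch (the contamination rate is an assumption) illustrates the idea, not Arize's internal method.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)
train_features = rng.normal(0, 1, size=(5_000, 4))           # in-distribution
prod_features = np.vstack([rng.normal(0, 1, size=(950, 4)),
                           rng.normal(6, 1, size=(50, 4))])   # 5% far out-of-distribution

# Fit on training data so "outlier" means "unlike what the model saw".
detector = IsolationForest(contamination=0.05, random_state=0)
detector.fit(train_features)

is_inlier = detector.predict(prod_features) == 1  # predict returns -1 for outliers
print(f"{(~is_inlier).sum()} of {len(prod_features)} production points flagged")
# Aggregate metrics can then be computed on prod_features[is_inlier],
# with the flagged group analyzed as its own cohort.
```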
Maintaining the quality of data flowing into a model can be challenging due to upstream data changes and the increasing pace of data proliferation.
Keep Data Integrity in Check
Our platform enables you to set up data quality dashboards to detect data changes and check for unexpected, missing, or extreme model inputs and outputs.
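In spirit, these checks are assertions evaluated over each batch of inputs and outputs. A minimal sketch of such a rule set, with assumed thresholds and ranges:

```python
import pandas as pd

# Hypothetical batch of model inputs and outputs.
batch = pd.DataFrame({
    "age":   [34, 51, None, 29, 240],        # one missing, one extreme value
    "score": [0.12, 0.88, 0.47, 1.7, 0.33],  # model output, expected in [0, 1]
})

issues = []
# Missingness check against an assumed tolerance.
missing = batch["age"].isna().mean()
if missing > 0.01:
    issues.append(f"age missing rate {missing:.1%} exceeds 1%")
# Range checks for unexpected or extreme values.
if (batch["age"] > 120).any():
    issues.append("age contains out-of-range values")
if ((batch["score"] < 0) | (batch["score"] > 1)).any():
    issues.append("score falls outside [0, 1]")

for issue in issues:
    print("data quality alert:", issue)
```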