Binary Classification Cases
| Binary Classification Cases | Expected Fields | Performance Metrics |
|---|---|---|
| Case 1: Supports Only Classification Metrics | prediction label, actual label | Accuracy, Recall, Precision, FPR, FNR, F1, Sensitivity, Specificity |
| Case 2: Supports Classification, AUC, Log Loss Metrics | prediction score, prediction label, actual label | AUC, PR-AUC, Log Loss, Accuracy, Recall, Precision, FPR, FNR, F1, Sensitivity, Specificity |
| Case 3: Supports AUC & Log Loss Metrics | prediction score, actual label | AUC, PR-AUC, Log Loss |
Case #1 - Supports Only Classification Metrics
- Python Pandas Batch
- Python Single Record
- Data Connector
Google Colab
Example Row

| state | pos_approved | zip_code | age | prediction_label | actual_label | prediction_ts |
|---|---|---|---|---|---|---|
| ca | True | 12345 | 25 | not_fraud | fraud | 1618590882 |
Code Example
Pandas Batch Logging
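A minimal sketch of batch logging for this case, assuming the Arize Python Pandas SDK (`pip install arize`). The space key, API key, model ID, and `prediction_id` column are placeholders, and exact import paths may differ across SDK versions. Only prediction and actual labels are logged, so classification metrics are supported but AUC and Log Loss are not.

```python
import pandas as pd

# One-row batch matching the example row above (labels only, no score).
df = pd.DataFrame({
    "prediction_id": ["pred-001"],   # unique ID per prediction (hypothetical)
    "prediction_ts": [1618590882],
    "state": ["ca"],
    "pos_approved": [True],
    "zip_code": [12345],
    "age": [25],
    "prediction_label": ["not_fraud"],
    "actual_label": ["fraud"],
})

def log_batch(df: pd.DataFrame) -> None:
    # Requires real Arize credentials; values below are placeholders.
    from arize.pandas.logger import Client, Schema
    from arize.utils.types import Environments, ModelTypes

    client = Client(space_key="YOUR_SPACE_KEY", api_key="YOUR_API_KEY")
    schema = Schema(
        prediction_id_column_name="prediction_id",
        timestamp_column_name="prediction_ts",
        feature_column_names=["state", "pos_approved", "zip_code", "age"],
        prediction_label_column_name="prediction_label",
        actual_label_column_name="actual_label",
    )
    client.log(
        dataframe=df,
        model_id="fraud-detection-model",   # hypothetical model ID
        model_version="v1",
        model_type=ModelTypes.SCORE_CATEGORICAL,
        environment=Environments.PRODUCTION,
        schema=schema,
    )
```

The schema deliberately omits a prediction score column; adding one is what distinguishes Case #2.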
Case #2 - Supports Classification & AUC/Log Loss Metrics
- Python Pandas
- Python Single Record
- Data Connector
Google Colab
Example Row

| state | pos_approved | zip_code | age | prediction_label | actual_label | prediction_score | prediction_ts |
|---|---|---|---|---|---|---|---|
| ca | True | 12345 | 25 | not_fraud | fraud | 0.3 | 1618590882 |
Pandas Batch Logging
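A minimal sketch for this case, again assuming the Arize Python Pandas SDK (`pip install arize`); credentials, model ID, and the `prediction_id` column are placeholders, and import paths may vary by SDK version. Logging a prediction score alongside the labels is what enables AUC, PR-AUC, and Log Loss in addition to the classification metrics.

```python
import pandas as pd

# One-row batch matching the example row above: labels plus a score.
df = pd.DataFrame({
    "prediction_id": ["pred-001"],   # unique ID per prediction (hypothetical)
    "prediction_ts": [1618590882],
    "state": ["ca"],
    "pos_approved": [True],
    "zip_code": [12345],
    "age": [25],
    "prediction_label": ["not_fraud"],
    "actual_label": ["fraud"],
    "prediction_score": [0.3],
})

def log_batch(df: pd.DataFrame) -> None:
    # Requires real Arize credentials; values below are placeholders.
    from arize.pandas.logger import Client, Schema
    from arize.utils.types import Environments, ModelTypes

    client = Client(space_key="YOUR_SPACE_KEY", api_key="YOUR_API_KEY")
    schema = Schema(
        prediction_id_column_name="prediction_id",
        timestamp_column_name="prediction_ts",
        feature_column_names=["state", "pos_approved", "zip_code", "age"],
        prediction_label_column_name="prediction_label",
        actual_label_column_name="actual_label",
        prediction_score_column_name="prediction_score",  # enables AUC/Log Loss
    )
    client.log(
        dataframe=df,
        model_id="fraud-detection-model",   # hypothetical model ID
        model_version="v1",
        model_type=ModelTypes.SCORE_CATEGORICAL,
        environment=Environments.PRODUCTION,
        schema=schema,
    )
```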
Case #3 - Supports AUC & Log Loss Metrics
- Python Pandas
- Python Single Record
Example Row
| state | pos_approved | zip_code | age | actual_label | prediction_score | prediction_ts |
|---|---|---|---|---|---|---|
| ca | True | 12345 | 25 | fraud | 0.3 | 1618590882 |
Code Example
Pandas Batch Logging
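A minimal sketch for this case, assuming the Arize Python Pandas SDK (`pip install arize`); credentials, model ID, and the `prediction_id` column are placeholders, and import paths may vary by SDK version. With a prediction score but no prediction label, only AUC, PR-AUC, and Log Loss are supported.

```python
import pandas as pd

# One-row batch matching the example row above: a score and an actual
# label, but no prediction label.
df = pd.DataFrame({
    "prediction_id": ["pred-001"],   # unique ID per prediction (hypothetical)
    "prediction_ts": [1618590882],
    "state": ["ca"],
    "pos_approved": [True],
    "zip_code": [12345],
    "age": [25],
    "actual_label": ["fraud"],
    "prediction_score": [0.3],
})

def log_batch(df: pd.DataFrame) -> None:
    # Requires real Arize credentials; values below are placeholders.
    from arize.pandas.logger import Client, Schema
    from arize.utils.types import Environments, ModelTypes

    client = Client(space_key="YOUR_SPACE_KEY", api_key="YOUR_API_KEY")
    schema = Schema(
        prediction_id_column_name="prediction_id",
        timestamp_column_name="prediction_ts",
        feature_column_names=["state", "pos_approved", "zip_code", "age"],
        actual_label_column_name="actual_label",
        prediction_score_column_name="prediction_score",
        # No prediction_label column: thresholded classification metrics
        # (Accuracy, Recall, Precision, etc.) are unavailable in this case.
    )
    client.log(
        dataframe=df,
        model_id="fraud-detection-model",   # hypothetical model ID
        model_version="v1",
        model_type=ModelTypes.SCORE_CATEGORICAL,
        environment=Environments.PRODUCTION,
        schema=schema,
    )
```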
Default Actuals
For some use cases, it may be important to treat a prediction for which no corresponding actual label has been logged yet as having the negative class as its default actual label. For example, consider tracking advertisement conversion rates for an ad clickthrough rate model, where the positive class is click and the negative class is no_click. For ad conversion purposes, a prediction without a corresponding actual label for an ad placement is equivalent to logging an explicit no_click actual label for that prediction: in both cases, the result is the same, because the user has not converted by clicking on the ad.

For the AUC-ROC, PR-AUC, and Log Loss performance metrics, Arize supports treating predictions without an explicit actual label as having the negative class actual label by default. In the example above, a click prediction without an actual would be treated as a false positive, because the missing actual would, by default, be assigned to the no_click negative class.
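The effect of this default can be illustrated with a small sketch in plain Python (the records and class names are hypothetical, mirroring the clickthrough example above): a click prediction with no logged actual falls into the false-positive cell once the missing actual defaults to no_click.

```python
# Sketch: confusion-matrix counting when a missing actual label defaults
# to the negative class (no_click), per the clickthrough example above.
NEG, POS = "no_click", "click"

# (prediction_label, actual_label) pairs; None means no actual was logged.
records = [
    ("click", "click"),     # true positive
    ("click", None),        # no actual -> defaults to no_click -> false positive
    ("no_click", None),     # defaults to no_click -> true negative
    ("no_click", "click"),  # false negative
]

def with_default_actual(actual):
    """Treat a missing actual label as the negative class."""
    return NEG if actual is None else actual

fp = sum(1 for pred, act in records
         if pred == POS and with_default_actual(act) == NEG)
tp = sum(1 for pred, act in records
         if pred == POS and with_default_actual(act) == POS)
print(fp, tp)  # -> 1 1
```

Without the default, the two `None` records would simply be excluded from metric computation; with it, the click prediction counts against precision as a false positive.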
This feature can be enabled for monitors and dashboards via the model performance config section of your model’s config page.