Step 1: Set Up Python SDK
Install the Arize SDK, then use `arize.pandas.logger` to call `Client.log()`.
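The SDK can be installed from PyPI (assuming the package name `arize`):

```shell
pip install arize
```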
Step 2: Set Model Schema Attributes
A model schema is broken into required and optional parameters. Optional schema parameters vary based on model type. Learn more about model types here. A comprehensive list of schema attributes and their definitions is available here.
Example Row
| prediction_id | prediction_ts | prediction_label | actual_label | state | states | gender | vector | text | image_link |
|---|---|---|---|---|---|---|---|---|---|
| 1fcd50f4689 | 1637538845 | No Claims | No Claims | ca | [ca, ak] | female | [1.27346, -0.2138, …] | "This is an example text" | "https://example_ur.jpg" |
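As a plain-pandas sketch, the example row above can be built as a one-row DataFrame whose column names match the schema fields in the table:

```python
import pandas as pd

# One-row DataFrame mirroring the example row above.
example_df = pd.DataFrame(
    {
        "prediction_id": ["1fcd50f4689"],
        "prediction_ts": [1637538845],           # Unix timestamp of the prediction
        "prediction_label": ["No Claims"],
        "actual_label": ["No Claims"],
        "state": ["ca"],                         # single-value feature
        "states": [["ca", "ak"]],                # list-valued feature
        "gender": ["female"],
        "vector": [[1.27346, -0.2138]],          # embedding vector (truncated)
        "text": ["This is an example text"],
        "image_link": ["https://example_ur.jpg"],
    }
)
```

A DataFrame in this shape is what you would map onto the schema attributes when logging.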
Optional: Typed Columns
See the Sending Data FAQ for more info on SDK typing features.
Optional: Embeddings
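As a plain-pandas illustration (the column names here are hypothetical), an embedding feature pairs the raw input with its vector, stored as a list of floats in a single column:

```python
import pandas as pd

# Hypothetical embedding feature: raw text alongside its vector representation.
emb_df = pd.DataFrame(
    {
        "text": ["This is an example text"],
        "text_vector": [[1.27346, -0.2138, 0.541]],  # embedding as a list of floats
    }
)
```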
Optional: SHAP Values
Optional: Delayed Actuals
If your model receives delayed actuals, log the delayed production data using the same prediction ID, which links your files together in the Arize platform. Actuals can be delivered days or weeks after the prediction is received.
Step 3: Log Inferences
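The delayed-actuals flow above can be sketched in plain pandas: predictions are logged first, actuals arrive later, and the shared prediction ID is what ties them together (the frame contents here are hypothetical):

```python
import pandas as pd

# Predictions logged at serving time.
preds = pd.DataFrame(
    {"prediction_id": ["a1", "b2"], "prediction_label": ["No Claims", "Claims"]}
)

# Actuals that arrive days or weeks later; reusing the same prediction_id
# values is what lets the platform link each actual to its prediction.
actuals = pd.DataFrame(
    {"prediction_id": ["a1", "b2"], "actual_label": ["No Claims", "No Claims"]}
)

# Locally, the linkage is equivalent to a join on prediction_id.
linked = preds.merge(actuals, on="prediction_id")
```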
Optional: Metrics Validation
Other Supported SDKs
- Python Pandas SDK (log a pandas dataframe)
- Python Single Record SDK (log a single record)
- Java SDK
- R SDK
- REST API
Tutorials on how to log predictions, actuals, and feature importance.
| Tutorial | Link |
|---|---|
| Logging Predictions Only | Colab Link |
| Logging Predictions First, Then Logging Delayed Actuals | Colab Link |
| Logging Predictions First, Then Logging SHAPs After | Colab Link |
| Logging Predictions and Actuals Together | Colab Link |
| Logging Predictions and SHAP Together | Colab Link |
| Logging Predictions, Actuals, and SHAP Together | Colab Link |
| Logging PySpark DataFrames | Colab Link |