The ML Observability Platform for Practitioners

Monitor, troubleshoot, and fine-tune your machine learning, LLM, generative, NLP, computer vision, and recommender models

Try Phoenix OSS: AI Observability & LLM Evaluation

Top ML companies use Arize

Surface. Resolve. Improve.

Analytics and workflows to catch model issues, troubleshoot the root cause, and continuously improve performance

Monitors

Dashboards

Performance Tracing

Explainability & Fairness

Embeddings Analyzer

LLM Observability

Fine Tune

Phoenix OSS

Embedding & Cluster Evaluation

Monitor embedding drift for NLP, CV, LLM, and generative models alongside tabular data

Interactive 2D and 3D UMAP visualizations isolate problematic clusters for fine-tuning
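
For illustration, embeddings are declared alongside tabular columns when logging. A minimal sketch using the SDK's EmbeddingColumnNames helper (the column names here are placeholders, not a prescribed convention):

# Sketch: declaring an embedding feature in the logging schema.
# "embedding_vector" and "text" are placeholder column names.
from arize.utils.types import EmbeddingColumnNames, Schema

schema = Schema(
    prediction_id_column_name="prediction_id",
    timestamp_column_name="prediction_ts",
    prediction_label_column_name="prediction_label",
    embedding_feature_column_names={
        "text_embedding": EmbeddingColumnNames(
            vector_column_name="embedding_vector",  # list/array of floats per row
            data_column_name="text",                # raw text behind each vector
        ),
    },
)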

Understand Drift Impact

Automatically monitor for model input and output drift

Trace which features contribute the most prediction drift impact to your model’s performance
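
To make the idea concrete, a drift monitor compares a feature's production distribution to a training baseline. A minimal sketch of one common drift statistic, population stability index (PSI), in plain numpy (illustrative only, not Arize's internal implementation):

# Sketch: population stability index (PSI), a common drift statistic.
# Illustrative only; not Arize's internal implementation.
import numpy as np

def psi(baseline, production, bins=10):
    # Bin both samples on the baseline's quantile edges
    edges = np.quantile(baseline, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf  # catch out-of-range production values
    expected = np.histogram(baseline, edges)[0] / len(baseline)
    actual = np.histogram(production, edges)[0] / len(production)
    expected = np.clip(expected, 1e-6, None)  # avoid log(0)
    actual = np.clip(actual, 1e-6, None)
    return float(np.sum((actual - expected) * np.log(actual / expected)))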

Generative & LLM Observability

Pinpoint clusters of problems in prompt/response pairs, find similar examples, and resolve issues

Speed up fine-tuning and prompt engineering with purpose-built workflows. Integrates with common LLM agent tools such as LangChain.
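
As one possible setup, Phoenix can auto-instrument LangChain so prompt/response pairs flow into its UI. Treat this as a sketch; module paths vary across Phoenix versions:

# Sketch: tracing a LangChain app with Phoenix.
# Module path may differ across Phoenix versions.
import phoenix as px
from phoenix.trace.langchain import LangChainInstrumentor

px.launch_app()                       # start the local Phoenix UI
LangChainInstrumentor().instrument()  # auto-trace subsequent LangChain runs
# Any chain or agent executed after this point shows up as traces,
# with prompt/response pairs available for clustering and evaluation.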

ML Performance Tracing

Instantly surface the worst-performing slices of predictions with heatmaps

Workflows to analyze features or slices of data, and to A/B compare model versions, environments, and time periods
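
As a rough picture of what slice analysis computes, per-slice accuracy can be grouped by a feature's values in pandas (the "state" column and label names below are hypothetical):

# Sketch: surfacing worst-performing slices by feature value.
# "state", "prediction_label", and "actual_label" are hypothetical columns.
import pandas as pd

slice_acc = (
    df.assign(correct=df["prediction_label"] == df["actual_label"])
      .groupby("state")["correct"]
      .agg(accuracy="mean", volume="size")
      .sort_values("accuracy")
)
print(slice_acc.head())  # worst-performing slices first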

Automated Model Monitoring

Monitor model performance with a variety of data quality and performance metrics, including custom metrics

Zero setup for new model versions and features, with adaptive thresholding based on your model’s historical trends
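
In simplified form, an adaptive threshold is derived from the metric's own history rather than a fixed value (purely illustrative, not Arize's thresholding algorithm):

# Sketch: an adaptive alert threshold from a metric's trailing history.
# Purely illustrative; not Arize's thresholding algorithm.
import numpy as np

history = np.array([0.91, 0.93, 0.92, 0.90, 0.94])  # e.g., daily accuracy
lower_bound = history.mean() - 3 * history.std()    # alert if accuracy dips below
print(f"alert below {lower_bound:.3f}")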

Easy Integration & Deployment

Log training, validation, and production datasets via the SDK, cloud object storage, data connectors, or local files

Automatic model schema detection, import job troubleshooting, delayed actuals support, and API access


# Install and import dependencies
!pip install -q arize

import datetime

import numpy as np
import pandas as pd
from arize.pandas.logger import Client
from arize.utils.types import ModelTypes, Environments, Schema, Metrics

# Create Arize client
SPACE_KEY = "SPACE_KEY"  # your Arize space key
API_KEY = "API_KEY"      # your Arize API key
arize_client = Client(space_key=SPACE_KEY, api_key=API_KEY)

# Define the schema mapping dataframe columns to model fields
feature_column_names = ["feature_1", "feature_2"]  # placeholder feature columns
schema = Schema(
    prediction_id_column_name="prediction_id",
    timestamp_column_name="prediction_ts",
    prediction_label_column_name="prediction_label",
    actual_label_column_name="actual_label",
    feature_column_names=feature_column_names,
)

# Log data (df is a pandas DataFrame with the columns named in the schema)
response = arize_client.log(
    dataframe=df,
    schema=schema,
    model_id="binary-classifications-example",
    model_version="1.0.0",
    model_type=ModelTypes.SCORE_CATEGORICAL,
    metrics_validation=[Metrics.CLASSIFICATION],
    validate=True,
    environment=Environments.PRODUCTION,
)
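
The log() call returns an HTTP-style response object, so a successful upload can be confirmed in code:

# Confirm the upload succeeded
if response.status_code == 200:
    print("logging completed")
else:
    print(f"logging failed: {response.status_code}, {response.text}")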

Enterprise-Grade Scale & Security

Scalable to billions of fully indexed events, with the ability to extend monitors into your data lake or warehouse

Securely collaborate across organizations, workspaces, and projects with SAML SSO and RBAC controls

Connects Your Entire Production ML Ecosystem

Arize is designed to work seamlessly with any model framework, from any platform, in any environment.

Data Sources
Feature Store
Model Serving
LLMs
Vector DB (AI Memory)
LLM Frameworks
Inference data is indexed for real-time metrics monitoring, analysis, and performance tracing

Arize SaaS

Arize On-Premise

Monitoring & Alerting

Retraining

Fine-tuning & Improvement


“The strategic importance of ML observability is a lot like unit tests or application performance metrics or logging. We use Arize for observability in part because it allows for this automated setup, has a simple API, and a lightweight package that we are able to easily track into our model-serving API to monitor model performance over time.”

Richard Woolston
Data Science Manager, America First Credit Union

“Arize is a big part of [our project’s] success because we can spend our time building and deploying models instead of worrying – at the end of the day, we know that we are going to have confidence when the model goes live and that we can quickly address any issues that may arise.”

Alex Post
Lead Machine Learning Engineer, Clearcover

“Arize was really the first in-market putting the emphasis firmly on ML observability, and I think why I connect so much to Arize’s mission is that for me observability is the cornerstone of operational excellence in general and it drives accountability.”

Wendy Foster
Director of Engineering and Data Science, Shopify

“I’ve never seen a product I want to buy more.”

Sr. Manager, Machine Learning
Scribd

“Some of the tooling — including Arize — is really starting to mature in helping to deploy models and have confidence that they are doing what they should be doing.”

Anthony Goldbloom
Co-Founder & CEO, Kaggle

“We believe that products like Arize are raising the bar for the industry in terms of ML observability.”

Mihail Douhaniaris & Steven Mi
Data Scientist & MLOps Engineer, GetYourGuide

“It is critical to be proactive in monitoring fairness metrics of machine learning models to ensure safety and inclusion. We look forward to testing Arize’s Bias Tracing in those efforts.”

Christine Swisher
VP of Data Science, Project Ronin

Ready to get started?