2026-01-31

Enhanced Usage Monitoring

New comprehensive usage tracking and reporting features for better resource management:
  • Datasource-level breakdowns for granular usage visibility
  • Account-based tracking with improved join keys for accurate reporting
  • 10-minute update intervals for near real-time usage insights
  • Automated cleanup of expired data for accurate retention calculations
2026-01-30

Improved Onboarding Experience

Streamlined onboarding with enhanced user flows:
  • Redesigned onboarding cards with clearer visual hierarchy
  • “My First Playground” experience for hands-on experimentation
  • Role collection during signup for personalized setup
  • Custom hover states matching each card’s accent color
2026-01-30

Real-Time Evaluations

Run evaluations immediately on incoming data with real-time ingestion:
  • Instant evaluation of production traces without delays
  • Latent evaluation support for updating earlier spans
  • Seamless cutover between batch and real-time processing
  • Available across all Arize AX tiers by default
2026-01-30

AWS Bedrock Custom Endpoints

Enhanced AWS Bedrock integration for enterprise deployments:
  • Custom base URL support for private endpoints
  • Inference profile ARNs for multi-region routing
  • Custom model configurations for specialized deployments
  • Simplified regional management with unified tracking
2026-01-30

Wildcard Array Path Variables

Access array data more flexibly in templates and experiments:
  • Wildcard (*) patterns to reference all array elements
  • Last-index (-1) access for the most recent item
  • Automatic generation of wildcard variants for convenience
  • Support in task variables and experiment columns
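For example, a task variable could reference message content with either pattern (the attribute path shown here is illustrative, not an exact platform path):

    output.messages[*].content     # every element of the messages array
    output.messages[-1].content    # only the most recent element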
2026-01-29

Improved Queue Management

Better user experience when managing annotation queues:
  • Duplicate detection with clear error messages
  • Added and skipped record counts after bulk operations
  • Actionable feedback when attempting to add existing records
2026-01-23

Session Evaluations with Conversation Context

Evaluate entire conversation flows with new virtual attributes:
  • {conversation} template variable for session-level evaluations
  • Chronologically ordered input/output pairs
  • Automatic aggregation of multi-turn dialogues
  • Root span filtering for accurate session context
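For example, an LLM-as-a-Judge template can drop the aggregated dialogue straight into its prompt (the wording of this template is illustrative; only the {conversation} variable is the new feature):

    You are evaluating a multi-turn conversation between a user and an assistant.

    Conversation:
    {conversation}

    Did the assistant resolve the user's request? Answer "resolved" or "unresolved".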
2026-01-29

Circuit Breaker for Evaluation Tasks

Protect resources during evaluation failures with intelligent circuit breaking:
  • Immediate abort on authentication errors (401/403)
  • Automatic detection of systemic issues after 10 consecutive failures
  • Failure rate monitoring to stop doomed batches early
  • Resource optimization by preventing guaranteed-to-fail requests
2026-01-23

Tracing Configuration for Evaluation Tasks

Enable detailed debugging for evaluation tasks:
  • Toggle tracing on/off in Advanced Options
  • Automatic trace generation for monitoring and debugging
  • Persistent settings saved with your tasks
  • Production-ready visibility into evaluation execution
2026-01-27

Enhanced RBAC System

Fine-grained access control with the new RBAC system:
  • Custom roles with specific permissions
  • Space-level role bindings for granular access management
  • Coexistence with legacy roles during migration
  • UI support for role assignment across all user management pages
  • Automatic fallback to legacy roles when custom roles are deleted
2026-01-22

Enhanced Dashboard Time Persistence

Your dashboard preferences now persist automatically:
  • Auto-save time range, time zone, and granularity selections
  • Instant restoration when returning to dashboards
  • Per-dashboard settings for customized views
  • Seamless experience across sessions
2026-01-21

Trace Table Performance Improvements

Faster loading times for the tracing table:
  • 30-50% faster initial load times
  • String truncation for large content
  • Lazy loading of full values in tooltips
  • Minimal impact on user experience
2026-01-20

Expandable Trace Hierarchy

View trace structure directly in the table:
  • Expand traces to see child spans inline
  • Hierarchical visualization without opening slideouts
  • Faster navigation through complex traces
  • Contextual understanding of request flow
2026-01-20

Custom Prompt Release Labels

Organize and track prompt versions with custom labels:
  • Tag prompt versions with meaningful identifiers
  • Environment markers like “staging” or “production”
  • Dynamic label suggestions from existing prompts
  • Easy retrieval of specific prompt releases
2026-01-12

Enhanced Annotation Configs

More powerful annotation workflows with improved configs:
  • Color-coded categories based on optimization direction
  • Read-only view for reviewing existing configs
  • Optimization direction control (maximize, minimize, or none)
  • Clear label guidance for consistent evaluations
2026-01-16

Eval Hub Enhancements

Improved evaluation management and visibility:
  • Model information in evaluator listings with provider icons
  • Evaluator counts in running tasks with hover details
  • Automatic save when creating or editing evaluators
  • Streamlined task flow for faster evaluation setup
2026-01-16

Todo List Management Improvements

More reliable task tracking in Alyx conversations:
  • Visual status indicators for all todo states
  • Dynamic reminders with exact update calls needed
  • Plan preservation across human-in-the-loop pauses
  • Clearer instructions positioned near the plan
2026-01-09

Stacked Bar Chart Widgets

Visualize multi-dimensional data with new chart types:
  • Stacked bar charts for comparing categories over time
  • Druid-powered queries for fast rendering
  • Customizable groupings and dimensions
  • Dashboard integration for comprehensive monitoring
2026-01-21

Scatter Plot Widgets

Explore relationships between variables with scatter plots:
  • Correlation analysis for two numeric dimensions
  • Interactive data points for detailed investigation
  • Dashboard integration for visual analytics
  • Customizable axes and filtering
2026-01-21

Enhanced Monitor Configuration

More control over monitor behavior:
  • Configurable auto-threshold lookback windows via feature flag
  • Extended lookback periods for sparse data projects
  • Flexible threshold calculation based on historical patterns
  • Account-specific customization for unique requirements
2026-01-14

Java SDK Space ID Support

Modern authentication for Java applications:
  • Space ID authentication (space keys deprecated)
  • Backward compatibility maintained with existing constructors
  • Updated documentation and examples
  • Test coverage for new authentication method
2026-01-23

Improved Error Handling for Exceptions

Better filtering and debugging capabilities:
  • Filter by exception.type and exception.message in the UI
  • OpenInference semantic convention support for exceptions
  • Consistent data structure across datasources
  • Faster troubleshooting of error patterns
2026-01-23

SAML Role Mapping Search

Navigate large role mapping configurations easily:
  • Client-side search across attributes, spaces, roles, and organizations
  • Visual highlighting of search matches
  • Keyboard navigation through results
  • Improved usability for enterprise customers
2026-01-15

Span-to-Queue Workflow

Add spans and dataset examples to annotation queues seamlessly:
  • Multiple entry points from spans table, trace slideover, and queue records
  • New or existing queue selection
  • Batch operations for efficient queue population
  • Dataclusters integration for reliable processing
2026-01-21

Enhanced Session Slideover

Better conversation visualization and navigation:
  • Trace labels with links to detailed views
  • Visual separators between traces
  • Hover highlighting synchronized between list and conversation
  • Improved readability for multi-turn interactions
2026-01-12

Batch Annotation Updates

Efficiently annotate large volumes of data:
  • Optimization direction support in annotation configs
  • Category-based labeling for issue detection
  • Best practice guidance for naming and structure
  • Streamlined categorization workflows
2026-01-06

Prompt Optimization on Experiments

Run prompt optimization directly on experiment results:
  • Experiment selector in optimization task creation
  • Dynamic column resolution for experiment data
  • Enhanced iteration on proven prompts
  • Seamless workflow from experiments to optimization
2026-01-27

Custom Metrics with LIKE Operator

More powerful filtering in custom metrics:
  • LIKE and ILIKE operators for pattern matching
  • Wildcard support with % syntax
  • Case-insensitive matching with ILIKE
  • Direct Druid mapping for performance
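For example, a custom metric filter can match model names by pattern (the column name is illustrative):

    WHERE attributes.llm.model_name LIKE 'gpt-4%'      -- gpt-4, gpt-4o, gpt-4-turbo, ...
    WHERE attributes.llm.model_name ILIKE '%claude%'   -- case-insensitive match on any Claude variant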
2026-01-27

Dashboard Template Filtering

Cleaner dashboard creation experience:
  • LLM-only space filtering shows only relevant templates
  • Context-aware templates based on project types
  • Reduced clutter in template selection
  • Consistent experience across spaces and projects
2026-01-27

Pivot Table Widget Schema

Foundation for advanced tabular data visualization:
  • Grouped categorical dimensions for organized views
  • Configurable numeric columns with aggregations
  • Flexible filtering and time range support
  • Dashboard integration ready
2026-01-14

Enhanced Space Model Schema

More control over data retention and lookback:
  • Space-level schema lookback overrides for custom retention
  • Model-specific configurations for unique requirements
  • Flexible data management across different use cases
2026-01-14

Exact Match Code Evaluator

New built-in evaluator for validation:
  • String equality checks for exact matches
  • Expected vs actual comparisons for testing
  • Multi-field access with dataset row support
  • Alphabetically sorted evaluator list in UI
2026-01-21

Experiment Task Timeout Configuration

Accommodate long-running evaluations:
  • Configurable timeout parameter beyond 120 seconds
  • Function-level control in run_experiment and evaluate_experiment
  • Backward compatibility with default values
  • Support for complex evaluators requiring extended processing
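A minimal sketch of the new parameter (the import path, client setup, and function signatures here are assumptions to be adjusted to your SDK version; only the timeout keyword is the feature described above):

    # Illustrative only -- adjust imports and argument names to match your SDK version.
    from arize.experimental.datasets import ArizeDatasetsClient

    def my_task(dataset_row):
        # call your application / LLM here and return its output
        return "example output"

    def my_evaluator(output, dataset_row):
        # a slow evaluator that may exceed the old 120-second default
        return 1.0

    client = ArizeDatasetsClient(api_key="...")
    client.run_experiment(
        space_id="...",
        dataset_id="...",
        task=my_task,
        evaluators=[my_evaluator],
        experiment_name="long-running-evals",
        timeout=600,  # extend the per-call timeout to 10 minutes
    )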
2026-01-14

Arrow Schema Reconciliation

Improved data handling across distributed segments:
  • Parallel schema fetching from historicals
  • Unified schema reconciliation across partitions
  • Automatic conversion for schema consistency
  • Support for both Druid and Arrow segments
2026-01-15

Atlantis Terraform Automation

Streamlined infrastructure-as-code workflows:
  • Pull request integration for Terraform plans
  • Automated plan posting as PR comments
  • DevOps team permissions for webhook debugging
  • Structured review process before applying changes
2026-01-08

Google Analytics 4 BigQuery Sync

Automated analytics data export:
  • Daily GA4 to BigQuery transfers via Terraform
  • Raw event data access for advanced analysis
  • Overcome GA4 limitations like sampling and retention
  • Custom reporting capabilities with full data access
2026-01-08

Vertex AI Migration

Updated integration with Google Cloud AI:
  • Seamless Vertex AI connectivity for LLM applications
  • Enhanced observability for Google Cloud deployments
  • Modernized instrumentation for better tracing
2026-01-07

Custom Model Migrations

Expanded support for custom integrations:
  • Custom model endpoint support in evaluations
  • Higher traffic model optimization for performance
  • Flexible integration options for enterprise deployments
2026-01-08

Generative Service Monitoring

Comprehensive monitoring for evaluation infrastructure:
  • Uptime and health alerts with paging
  • CPU and memory monitoring with warnings
  • Dedicated Grafana dashboard for visibility
  • Runbook documentation for incident response
2026-01-05

Labeling Queue Annotations

More flexible annotation management:
  • Clear annotations (reset to null) anywhere
  • Support across spans, queues, and experiments for consistent workflows
  • Improved annotation lifecycle management
2026-01-09

Enhanced Eval Hub Empty States

Better guidance for getting started:
  • Improved empty state design with clear next steps
  • Documentation links for learning resources
  • Actionable cards for common workflows
2026-01-22

Resizable Trace Slideover

Customize your viewing experience:
  • Draggable slideover width for optimal layout
  • Persistent sizing preferences across sessions
  • Better content visibility for long traces
2026-01-21

Configurable Experiment Timeout

Handle complex evaluation scenarios:
  • Custom timeout values for long-running tasks
  • Per-experiment configuration for flexibility
  • Backward compatible defaults for existing code
2026-01-31

Enhanced Platform Stability

Numerous improvements to platform reliability and performance throughout January 2026:
  • Configuration drift resolution in GCP Terraform
  • Enhanced error handling across services
  • Improved logging and monitoring for faster troubleshooting
  • Database migration optimizations for schema updates
  • Better resource management for high-volume workloads
2026-01-27

Evaluator Hub: Reusable Evaluators

We’re excited to introduce the Evaluator Hub - a centralized place to create, version, and reuse evaluators across all your evaluation tasks.

Why Reusable Evaluators?

Previously, evaluators were defined inline each time you created a task. This led to duplicated configurations, inconsistent evaluation criteria, and extra setup overhead. With the Evaluator Hub, you define an evaluator once and use it everywhere - ensuring consistent, reliable evaluations across your organization.

Key benefits:
  • Consistency: The same evaluator definition is used across tasks, eliminating drift in evaluation criteria
  • Reliability: LLM configuration (model, provider, parameters) is set at the evaluator level, ensuring the evaluator is tested and validated with a specific model before being deployed to production tasks
  • Version Control: Track changes to evaluators over time with commit messages, making it easy to audit and roll back if needed
  • Flexibility: Column mappings let you reuse the same evaluator across datasets with different schemas by mapping template variables to your data columns

What’s New

  • Evaluator Hub tab: Browse, search, and manage all your evaluators in one place
  • Running Tasks tab: View and manage your active evaluation tasks
  • “Use Evaluator” action: Quickly create a task with any evaluator pre-selected
  • Column Mappings: Map evaluator template variables to your datasource columns when adding an evaluator to a task (see the illustrative mapping after this list)
  • Evaluator Versioning: Create new versions of evaluators with commit messages to track changes
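As a rough illustration, a column mapping pairs template variables with differently named dataset columns (the variable and column names below are hypothetical):

    {input}      ->  question          # dataset column holding the user query
    {output}     ->  model_response    # dataset column holding the model's answer
    {reference}  ->  ground_truth      # optional reference column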

Getting Started

  1. Navigate to Evaluators in the left sidebar
  2. Click New Evaluator to create your first reusable evaluator
  3. Choose from pre-built templates or create a custom evaluation from scratch
  4. Use your evaluator in tasks by clicking Use Evaluator or selecting it when creating a new task
The Evaluator Hub currently supports LLM-as-a-Judge evaluators. Support for reusable code evaluators is coming soon.
2025-12-18

Multi-Span Filters

Filter traces using multiple span conditions with:
  • AND, OR, NOT operators for combining conditions
  • Indirectly Calls (->) and Directly Calls (=>) relationship filters
  • Up to 5 filters to find complex patterns like “spans where A calls B, but not C” or “traces containing both X and Y”
  • Parentheses to build complex queries and pinpoint the traces that matter
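To make the operators concrete, a filter shaped like the following (span names and exact syntax are illustrative of the UI's condition builder) would capture "spans where an agent indirectly calls a retriever, but never calls web search":

    (agent -> retriever) AND NOT web_search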
2025-12-15

Support for Opus & Haiku 4.5

Expanded LLM model support to include Opus & Haiku 4.5 models in the playground and online tasks.
2025-12-12

Support for GPT-5.2 and GPT-5.2 Pro

Expanded LLM model support to include GPT-5.2 models in the playground and online tasks.
2025-12-10

Improved Playground Views

The Prompt Playground page is now Playgrounds, where you can use your Playground Views! This change allows you to easily navigate to a configuration of the prompt playground you’ve saved. A playground view saves a complete snapshot of your current prompt playground session, allowing you to preserve your work, share configurations with teammates, or return to previous experiments. Here’s what’s saved in a view:
  • LLM Config (provider, model selection, model params, custom endpoint settings)
  • Prompt Setup (messages, roles, message content, tool and function calls)
  • Generated Results (when set up with datasets)
  • Connected Datasource (dataset or span)
If you want to start a playground from scratch, you can create a view using the + Playground button at the top of the Playgrounds page.
2025-12-05

Realtime Ingestion for all new Arize AX Spaces

Any spaces created on or after 12/5/25 will use realtime ingestion by default! This eliminates the delay between sending traces and seeing them in the platform, giving you instant visibility into your production workloads.
2025-11-20

Structured Outputs Support for Playground

This release adds full structured output support to the playground, giving users precise control over the fields an LLM must return. Models that implement the OpenAI API schema (including OpenAI, Azure, and compatible custom endpoints) now support structured outputs end-to-end. When saving prompts, the structured output JSON is stored alongside other LLM parameters for seamless reuse. Tooltips have also been added to clearly indicate when a model or provider does not support structured outputs.
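As an example of the kind of structured output JSON that gets saved alongside a prompt's other LLM parameters (the schema name and fields below are illustrative, following the OpenAI json_schema format):

    {
      "name": "support_ticket",
      "strict": true,
      "schema": {
        "type": "object",
        "properties": {
          "category": { "type": "string", "enum": ["billing", "bug", "feature_request"] },
          "summary": { "type": "string" }
        },
        "required": ["category", "summary"],
        "additionalProperties": false
      }
    }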
2025-11-10

Session Annotations

This release introduces Session Annotations, making it easier than ever to capture human insights without disrupting your workflow. You can now add notes directly from the Session Page—no context switching required.

Annotations are supported at two levels:
  • Input/Output Level: Attach insights to specific output messages, automatically linked to the root span of the trace.
  • Span Level: Dive deeper into a trace and annotate individual spans for precise, context-rich feedback.
Together, these capabilities make it simple to highlight issues, call out successes, and integrate human feedback seamlessly into your debugging and evaluation process.
2025-11-05

Integrations Revamp

This release delivers major improvements to how integrations are managed, scoped, and configured. Integrations can now be targeted to specific orgs and spaces, and the UI has been refreshed to clearly separate AI Providers from Monitoring Integrations.

A new creation flow supports both simple API-based setups and flexible custom endpoints, including multi-model configurations with defaults or custom names. Users can also add multiple keys for the same provider, enabling more granular control and easier management at scale.
2025-11-03

OpenInference TypeScript 2.0

See the OpenInference TypeScript Core package for more details.
  • Added easy manual instrumentation with the same decorators, wrappers, and attribute helpers found in the Python openinference-instrumentation package.
  • Introduced function tracing utilities that automatically create spans for sync/async function execution, including specialized wrappers for chains, agents, and tools.
  • Added decorator-based method tracing, enabling automatic span creation on class methods via the @observe decorator.
  • Expanded attribute helper utilities for standardized OpenTelemetry metadata creation, including helpers for inputs/outputs, LLM operations, embeddings, retrievers, and tool definitions.
  • Overall, tracing workflows, agent behavior, and external tool calls is now significantly simpler and more consistent across languages.
2025-10-30

API-Driven Monitors

We now have API-Triggered Monitors: A monitor type that only evaluates when triggered via API call, instead of on a fixed schedule. Ideal for teams running evaluations after events like batch ingestions, model retraining, or CI/CD workflows.
2025-10-30

Automatic Threshold Ranges for Monitors

You can now set upper and lower bounds using our new Auto Threshold options! Arize AX can automatically determine the right threshold for your alerts based on your historical data. This is ideal for most users who want to start monitoring without manually tuning thresholds.
2025-10-29

Data Fabric

We’re excited to introduce Data Fabric, a new capability that automatically synchronizes production trace data, evaluations, and annotations from Arize AX into your cloud data warehouse every 60 minutes in Iceberg format—giving you an always-current, query-ready source of truth.
2025-10-28

New Timeline Tab for Traces

You can now see a timeline view when you click into a trace! The new timeline view is right next to the Trace Tree and Agent Graph tabs, and it shows the execution flow and duration of each span.
2025-10-24

Tags

Tags are a lightweight way for you to organize and label your entities across the Arize AX Platform. You can use tags to:
  • Describe source (from-prod, EHR-record)
  • Encode purpose (ab-test, regression-test)
  • Indicate readiness (golden, deprecated)
  • Group by config (threshold-0.85, cohort_5)
Tags live at the Space level, under Space Settings. They can be reused across entities that belong to that space (Datasets, Experiments, and more). More on Tags.
2025-10-22

Sort Datasets and Experiment Listing Table

You can now sort your datasets and experiments by name, number of experiments, creation date, and more!
2025-10-15

Support for Tool Call IDs in OpenInference Messages

This update introduces full support for tool_call_id and tool_call.id in OpenInference message semantics. These identifiers are now stored alongside input and output messages. Tool call IDs now appear in the trace slideover’s input/output and attributes tabs.
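In the flattened attribute form OpenInference uses for messages, the new identifiers appear roughly as follows (index positions and the call ID value are illustrative):

    llm.input_messages.2.message.role = "tool"
    llm.input_messages.2.message.tool_call_id = "call_abc123"
    llm.output_messages.0.message.tool_calls.0.tool_call.id = "call_abc123"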
2025-10-14

Add Data Region to Login Page

We’ve added a Data Region selector to the login page, allowing users to choose their preferred data region during sign-in. This helps ensure compliance and improved performance based on regional data needs.
2025-10-13

Add Auth Failures to Tracing

This release adds tracing for authentication failures, enabling better visibility and debugging of auth-related issues across systems.
2025-10-12

Total Traces on Stats Bar

Your projects now display the total number of traces directly in the Stats Bar at the top for quicker visibility into overall activity.
2025-10-10

Support for GPT OSS Models on Bedrock

We’ve added support for GPT open-source models available on Bedrock — try them out now in the Prompt Playground!
2025-10-05

Expanded LLM Support

Expanded LLM model support to include Claude models on Bedrock and Vertex, Titan Text Premier, Amazon Nova Premier, Gemini 2.5 Flash/Pro, and new GPT OSS and DeepSeek models, offering broader coverage across top providers.
2025-10-03

Dashboard Widget Time Setting

You can now define time settings per widget in dashboards! This enhancement adds flexibility by letting you set custom time ranges at the widget level — without losing the ability to apply a global dashboard time range. It’s a powerful way to dig deeper into data and run more tailored analyses.
2025-10-01

Autocomplete for Annotations on Datasets

You can now autocomplete annotation variables when editing eval templates in the playground or directly from dataset slideovers. This makes building and managing evals faster and more intuitive.
2025-08-18

Dataset Management Upgrades

The Datasets interface has been improved with CSV upload fixes, search capabilities on the Datasets List Page, and REST API support for dataset deletion.
