In May, we expanded access to realtime trace ingestion across all Arize AX tiers, making it easier than ever to monitor LLM performance live. We also rolled out major usability upgrades to the prompt playground and span views, including latency tracking, token counts, variable visibility, and a sleeker UI for debugging. Finally, new model support and attribute-level filtering streamline experimentation and trace analysis across your workflows.
Here’s a look at everything we shipped in May.
Realtime Trace Ingestion for All Arize AX Instances
Realtime trace ingestion is now supported across all Arize AX tiers, including the free tier. Previously, this feature was available only to enterprise AX users and within our open-source platform, Phoenix; it is now fully rolled out to every Arize AX user.
No configuration changes are required to begin using realtime trace ingestion.
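For intuition on what "realtime" buys you, here is a toy sketch (not the Arize SDK, and not how the platform is implemented) contrasting spans that become visible as they arrive with spans that wait for a batch window. The `ingest_realtime` helper and the span dicts are purely illustrative.

```python
from queue import Queue

# Toy illustration only: in realtime ingestion, each span becomes visible
# as soon as it arrives, instead of waiting for a periodic batch flush.

def ingest_realtime(span_queue: Queue, sink: list) -> None:
    """Drain the queue, making each span visible immediately."""
    while not span_queue.empty():
        sink.append(span_queue.get())

q = Queue()
for name in ["llm.chat", "retriever.search", "llm.chat"]:
    q.put({"name": name})

visible = []
ingest_realtime(q, visible)
```

In practice you keep instrumenting your application exactly as before; spans simply show up in Arize AX as they are produced.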
More OpenAI models in prompt playground and tasks
We’ve added support for more OpenAI models in the prompt playground and in evaluation tasks, so you can experiment across models and frameworks quickly.

Sleeker display of inputs and outputs on a span
We’ve redesigned the span page to better showcase functions, inputs, and outputs, helping you debug your traces faster!

Attribute search on traces
You can now filter your span attributes right on the page. No more CMD+F!
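As a rough mental model, attribute search behaves like a case-insensitive substring filter over attribute keys and values. The helper below is our own sketch, not the Arize API; the attribute names loosely follow OpenInference-style conventions.

```python
def filter_attributes(attributes: dict, query: str) -> dict:
    """Keep entries whose key or stringified value contains the query,
    case-insensitively. Illustrative only, not the Arize implementation."""
    q = query.lower()
    return {
        k: v
        for k, v in attributes.items()
        if q in k.lower() or q in str(v).lower()
    }

span_attributes = {
    "llm.model_name": "gpt-4o",
    "llm.token_count.total": 512,
    "input.value": "Summarize this document",
}

# Typing "token" narrows the view to the token-count attribute.
token_hits = filter_attributes(span_attributes, "token")
```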
Column selection in prompt playground
You can now view all of your prompt variables and dataset values directly in the playground!

Latency and token counts in prompt playground
We’ve added latency and token counts to prompt playground runs! Currently supported for OpenAI models, with more providers to come.
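Conceptually, a playground run times the request and reads token usage straight off the provider response rather than re-tokenizing. The sketch below fakes an OpenAI-style response; `call_model` and `timed_run` are hypothetical helpers, though the `usage` shape mirrors what the OpenAI API returns.

```python
import time

def call_model(prompt: str) -> dict:
    # Stand-in for a real OpenAI chat completion call. The `usage` field
    # mirrors the shape of actual OpenAI responses; the numbers are fake.
    return {
        "choices": [{"message": {"content": f"Echo: {prompt}"}}],
        "usage": {"prompt_tokens": 12, "completion_tokens": 5, "total_tokens": 17},
    }

def timed_run(prompt: str) -> dict:
    """Time the call and pull token counts from the response usage block."""
    start = time.perf_counter()
    response = call_model(prompt)
    latency_ms = (time.perf_counter() - start) * 1000.0
    usage = response["usage"]
    return {
        "latency_ms": latency_ms,
        "prompt_tokens": usage["prompt_tokens"],
        "completion_tokens": usage["completion_tokens"],
    }

metrics = timed_run("Hello")
```

Because the counts come from the provider's own usage report, they match what you are billed for, which is one reason support lands provider by provider.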
