The Phoenix CLI (@arizeai/phoenix-cli) is a command-line interface for retrieving trace data from your Phoenix projects. It lets you use Phoenix's tracing and debugging features directly in your terminal and development workflows.
You can use Phoenix CLI for the following use cases:
- Immediate debugging: Fetch the most recent trace of a failed or unexpected run with a single command.
- Bulk export for analysis: Export large numbers of traces to JSON files for offline analysis, building evaluation datasets, or regression tests.
- Terminal-based workflows: Integrate trace data into your existing tools, for example by piping output to Unix utilities like `jq` or feeding traces into an AI coding assistant for automated analysis.
- AI coding assistant integration: Use with Claude Code, Cursor, Windsurf, or other AI-powered development tools to analyze and debug your LLM application traces.
Installation
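The installation commands are not reproduced in this extract. Since @arizeai/phoenix-cli is an npm-scoped package name, a plausible sketch (verify against the official instructions) is a global npm install:

```bash
# Assumed distribution channel: npm, under the package name quoted above
npm install -g @arizeai/phoenix-cli

# Confirm the install and list available commands
px --help
```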
Setup
1. Set your Phoenix endpoint
Set the PHOENIX_HOST environment variable to the URL of your Phoenix instance, or pass the --endpoint flag on each command (see Troubleshooting below).
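A minimal sketch, assuming the endpoint is read from PHOENIX_HOST as described in the Troubleshooting section; the URL is a placeholder:

```bash
# Point the CLI at your Phoenix instance (replace with your real endpoint URL)
export PHOENIX_HOST="https://your-phoenix-host"
```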
2. Set your project name
Set the PHOENIX_PROJECT environment variable. Replace your-project-name with the name of your Phoenix project.
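For example (shell export syntax assumed):

```bash
# Select the Phoenix project the CLI should read traces from
export PHOENIX_PROJECT="your-project-name"
```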
3. Set your API key (if required)
If you’re using Phoenix Cloud or a Phoenix instance with authentication enabled, you’ll need to set the
PHOENIX_API_KEY environment variable or use the --api-key flag.
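A sketch of the environment-variable form; the value is a placeholder:

```bash
# Authenticate against Phoenix Cloud or an auth-enabled Phoenix instance
export PHOENIX_API_KEY="your-api-key"
```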
Use with AI Coding Assistants
Phoenix CLI is designed to work seamlessly with AI coding assistants like Claude Code, Cursor, Windsurf, and other AI-powered development tools.
Claude Code
After setting up the CLI, ask Claude Code questions about your traces; it can run the px --help command to discover the CLI's capabilities and then fetch your traces for analysis.
Cursor / Windsurf
In Cursor or Windsurf, you can:
- Run px traces --limit 1 --format json in the terminal (see the sketch after this list)
- Select the output and ask the AI to analyze it
- Ask the AI directly to run the command and interpret the results
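A copy-pasteable form of that command; adding --no-progress (see the options table below) keeps the output clean for an assistant to parse:

```bash
# Fetch the single most recent trace as indented JSON, without a progress bar
px traces --limit 1 --format json --no-progress
```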
Find Project and Trace IDs
In most cases, you won't need to find IDs manually (the CLI uses your environment's project name and latest traces by default). However, if you want to fetch a specific item by ID, you can find the IDs in the Phoenix UI:
- Project Name/ID: Each project has a unique name and ID. You can find it in the project selector dropdown or in the project's URL.
- Trace ID: Every trace has an ID. In the traces view, click on a specific trace to see its Trace ID (copyable from the trace details panel). You can use px trace <trace-id> to retrieve that exact trace.
Usage
After installation and setup, you can use the px command to retrieve traces. The general usage is sketched below.
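The exact usage string is not reproduced in this extract; based on the command and option tables that follow, invocations take roughly this shape:

```bash
# General shape: a subcommand (projects, trace, traces) plus optional flags
px <command> [options]
```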
| Command | Fetches | Output location |
|---|---|---|
| `projects` | List of all projects | stdout |
| `trace <id>` | A specific trace by ID | stdout (or to a file with `--file`) |
| `traces [directory]` | Recent traces from the project | Saves each trace as a JSON file in the given directory, or prints to stdout if no directory is provided |
Traces are fetched in reverse chronological order, with the most recent first.
Options
The commands support additional flags to filter and format the output:

| Option / Flag | Applies to | Description | Default |
|---|---|---|---|
| `-n, --limit <int>` | `traces` | Maximum number of traces to fetch | 10 |
| `--last-n-minutes <int>` | `traces` | Only fetch traces from the last N minutes | No filter |
| `--since <timestamp>` | `traces` | Only fetch traces since a specific time (ISO 8601 format) | No filter |
| `--project <name>` | `trace`, `traces` | Override the configured project | From env |
| `--format <type>` | All commands | Output format: `pretty`, `json`, or `raw` | `pretty` |
| `--file <path>` | `trace` | Save the fetched trace to a file instead of printing | stdout |
| `--max-concurrent <int>` | `traces` | Maximum concurrent fetch requests | 10 |
| `--no-progress` | All commands | Disable progress bar output (useful for scripts and AI assistants) | Progress on |
Output formats
The --format option controls how the fetched data is displayed:
- pretty (default): A human-readable tree view showing span hierarchy, status, and timing. Great for quick debugging.
- json: Well-formatted JSON output with indentation. Use this if you want to examine the data structure.
- raw: Compact JSON with no extra whitespace. Ideal for piping to jq or other tools (see the sketches after this list).
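Sketches of the three formats using commands documented above (output omitted):

```bash
# Human-readable tree view (default)
px traces --limit 1

# Indented JSON, useful for examining the data structure
px traces --limit 1 --format json

# Compact JSON, convenient for piping to jq
px traces --limit 1 --format raw | jq '.'
```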
Fetch a Single Trace
You can fetch a single trace with its ID. The command outputs to the terminal by default; to save the result to a file instead, use the --file option, as sketched below.
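A sketch of both forms; trace.json is an arbitrary example filename:

```bash
# Print a specific trace to the terminal
px trace <trace-id>

# Save it to a file instead of printing
px trace <trace-id> --file trace.json
```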
Fetch Multiple Traces
You can specify a destination directory for bulk exports. For example, the following command saves the 10 most recent traces as JSON files in the my-traces-data directory.
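A sketch of that command (10 is also the default limit, so the flag is shown only for clarity):

```bash
# Save the 10 most recent traces as JSON files in ./my-traces-data
px traces my-traces-data --limit 10
```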
Each trace is saved as a separate JSON file named after its trace ID (e.g., 3b0b15fe-1e3a-4aef-afa8-48df15879cfe.json).
Filter by time
You can fetch traces from a specific time range.
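Two sketches using the time filters from the options table; the timestamp value is an arbitrary example:

```bash
# Traces from the last 30 minutes
px traces --last-n-minutes 30

# Traces since a specific point in time (ISO 8601)
px traces --since 2024-01-01T00:00:00Z
```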
Export to Files
You can fetch traces and export them for offline analysis or building datasets; the sketch after the list below saves each trace as a JSON file in ./exported-traces. This is useful for:
- Building regression test datasets
- Offline analysis and debugging
- Creating evaluation datasets for experiments
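The export command referenced above, sketched with the directory name from the original text:

```bash
# Save each fetched trace as a JSON file under ./exported-traces
px traces ./exported-traces
```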
Trace Output Structure
When using the json or raw format, traces are output as JSON documents whose spans carry timing, status, and an attributes field of OpenInference semantic attributes (described in the next section).
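The exact field layout is not reproduced in this extract; one way to explore it yourself is to pipe the raw output through jq:

```bash
# List the top-level keys of the most recent trace to discover its structure
px traces --limit 1 --format raw --no-progress | jq 'keys'
```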
OpenInference Semantic Attributes
Each span includes OpenInference semantic attributes in the attributes field:
- LLM spans: `llm.model_name`, `llm.token_count.prompt`, `llm.token_count.completion`, `llm.invocation_parameters`
- Input/Output: `input.value`, `output.value`, `input.mime_type`, `output.mime_type`
- Tool calls: `tool.name`, `tool.description`, `tool.parameters`
- Retrieval: `retrieval.documents`
- Errors: `exception.type`, `exception.message`, `exception.stacktrace`
AI Coding Assistant Examples
Debug a Failed Agent Run
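The original example is not reproduced here; a reasonable workflow (last-trace.json is a hypothetical filename) is to save the most recent trace and ask your assistant to explain the failure using the exception.* attributes:

```bash
# Capture the latest trace (most recent first) for the assistant to inspect
px traces --limit 1 --format json --no-progress > last-trace.json
```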
Analyze Agent Performance
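A sketch for gathering a batch of recent traces to compare timing and span hierarchy; the directory name is arbitrary:

```bash
# Export the 20 most recent traces for a performance review
px traces ./perf-traces --limit 20 --no-progress
```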
Review Token Usage
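A sketch that collects recent traces so an assistant can total the llm.token_count.prompt and llm.token_count.completion attributes; token-review.json is a hypothetical filename:

```bash
# Dump recent traces as compact JSON for a token-usage summary
px traces --limit 10 --format raw --no-progress > token-review.json
```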
Pipeline Examples
The CLI is designed to work seamlessly in shell pipelines.
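Two sketches; recent-traces.json is a hypothetical filename, and the jq filters are intentionally generic:

```bash
# Pretty-print the most recent trace with jq
px traces --limit 1 --format raw --no-progress | jq '.'

# Archive the last hour of traces while keeping a copy on stdout
px traces --last-n-minutes 60 --format raw --no-progress | tee recent-traces.json
```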
Troubleshooting
“Phoenix endpoint not configured”
Set the PHOENIX_HOST environment variable, or pass the --endpoint flag on the command line, as sketched below.
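A sketch of both options; the URL is a placeholder, and the --endpoint flag is assumed to take the endpoint URL as its value:

```bash
# Set the endpoint once per shell session...
export PHOENIX_HOST="https://your-phoenix-host"

# ...or pass it per command
px traces --endpoint https://your-phoenix-host
```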
“Project not configured”
Set the PHOENIX_PROJECT environment variable, or pass the --project flag, as sketched below.
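A sketch of both options:

```bash
# Set the project once per shell session...
export PHOENIX_PROJECT="your-project-name"

# ...or override it per command
px traces --project your-project-name
```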

