- `@arizeai/phoenix-client` — API for the Phoenix platform
- `@arizeai/phoenix-otel` — OpenTelemetry tracing for Node.js
- `@arizeai/phoenix-evals` — LLM evaluation and metrics toolkit
- `@arizeai/openinference-core` — Instrumentation helpers
- `@arizeai/phoenix-mcp` — Model Context Protocol server
## Installation
Install all packages together or individually based on your needs.
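For example, with npm (package names from the list above; substitute your preferred package manager):

```shell
# Install everything at once
npm install @arizeai/phoenix-client @arizeai/phoenix-otel @arizeai/phoenix-evals @arizeai/openinference-core

# Or install only the pieces you need
npm install @arizeai/phoenix-client
```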
## Environment Variables

All packages respect common Phoenix environment variables for seamless configuration:

| Variable | Description | Used By |
| --- | --- | --- |
| `PHOENIX_COLLECTOR_ENDPOINT` | Trace collector URL | OTEL |
| `PHOENIX_HOST` | Phoenix server URL | Client |
| `PHOENIX_API_KEY` | API key for authentication | Client, OTEL, MCP |
| `PHOENIX_CLIENT_HEADERS` | Custom HTTP headers (JSON) | Client |
| `PHOENIX_BASE_URL` | Phoenix base URL | MCP |
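As a quick illustration of the expected shapes: `PHOENIX_CLIENT_HEADERS` holds a JSON object string, so it must be parsed before use. The SDKs read these variables for you; this sketch only shows what they contain.

```typescript
// Sketch: what Phoenix environment variables look like to a consumer.
// Values set here are examples for illustration only.
process.env.PHOENIX_HOST = "http://localhost:6006";
process.env.PHOENIX_CLIENT_HEADERS = '{"x-team": "search"}';

const host: string = process.env.PHOENIX_HOST ?? "http://localhost:6006";

// PHOENIX_CLIENT_HEADERS is a JSON-encoded map of custom HTTP headers.
const headers: Record<string, string> = JSON.parse(
  process.env.PHOENIX_CLIENT_HEADERS ?? "{}"
);

console.log(host, headers["x-team"]); // → http://localhost:6006 search
```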
## @arizeai/phoenix-client
- Prompts — Create, version, and retrieve prompt templates with SDK helpers for OpenAI, Anthropic, and Vercel AI
- Datasets — Create and manage datasets for experiments and evaluation
- Experiments — Run evaluations and track experiment results with automatic tracing
- REST API — Full access to all Phoenix endpoints with strongly-typed requests
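A minimal sketch of client usage, assuming the package's `createClient` entry point and a Phoenix instance at the default local port; check the package README for the exact signatures:

```typescript
import { createClient } from "@arizeai/phoenix-client";

// Assumes Phoenix is reachable at baseUrl; the client also reads
// PHOENIX_HOST / PHOENIX_API_KEY from the environment when set.
const client = createClient({
  options: { baseUrl: "http://localhost:6006" },
});

// The REST API is exposed through typed routes (openapi-fetch style).
const { data } = await client.GET("/v1/datasets");
console.log(data);
```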
## @arizeai/phoenix-otel
- Simple setup — Single `register()` call to configure tracing
- Phoenix-aware defaults — Reads `PHOENIX_COLLECTOR_ENDPOINT`, `PHOENIX_API_KEY`, and other environment variables
- Production ready — Built-in batch processing and authentication support
- Auto-instrumentation — Support for HTTP, Express, and other OpenTelemetry instrumentations
- Manual tracing — Create custom spans using the OpenTelemetry API
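A minimal setup sketch, assuming the single `register()` call described above; the `projectName` option shown is illustrative, so consult the package docs for exact options:

```typescript
import { register } from "@arizeai/phoenix-otel";

// register() picks up PHOENIX_COLLECTOR_ENDPOINT and PHOENIX_API_KEY
// from the environment when they are set.
const tracerProvider = register({
  projectName: "my-app", // illustrative option name
});
```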
## @arizeai/phoenix-evals
- Vendor agnostic — Works with any AI SDK provider (OpenAI, Anthropic, etc.)
- Pre-built evaluators — Hallucination detection, relevance scoring, and more
- Custom classifiers — Create your own evaluators with custom prompts
- Experiment integration — Works seamlessly with `@arizeai/phoenix-client` for experiments
## @arizeai/openinference-core

The core instrumentation package (`@arizeai/openinference-core`) offers tracing helpers, decorators, and context attribute propagation.
### Tracing Helpers
Convenient wrappers to instrument your functions with OpenInference spans:

- `withSpan` — Wrap any function (sync or async) with OpenTelemetry tracing
- `traceChain` — Trace workflow sequences and pipelines
- `traceAgent` — Trace autonomous agents
- `traceTool` — Trace external tool calls
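For instance, `withSpan` might wrap an async function like this. The option names (`name`, `kind`) are assumptions for illustration; see the package docs for the actual signature:

```typescript
import { withSpan } from "@arizeai/openinference-core";

// Stand-in for an LLM call or other unit of work worth tracing.
async function summarize(text: string): Promise<string> {
  return text.slice(0, 100);
}

// Wrap the function so each invocation produces an OpenInference span.
// The name/kind options shown here are illustrative.
const tracedSummarize = withSpan(summarize, {
  name: "summarize",
  kind: "CHAIN",
});

await tracedSummarize("A long document...");
```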
### Decorators
- `@observe` — Decorator for automatically tracing class methods with configurable span kinds
### Attribute Helpers
Generate properly formatted attributes for common LLM operations:

- `getLLMAttributes` — Attributes for LLM inference (model, messages, tokens)
- `getEmbeddingAttributes` — Attributes for embedding operations
- `getRetrieverAttributes` — Attributes for document retrieval
- `getToolAttributes` — Attributes for tool definitions
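To make "properly formatted" concrete: OpenInference conventions flatten structured data into dotted span-attribute keys. A hand-rolled sketch of the kind of object an LLM-attributes helper produces (key names follow the OpenInference semantic conventions; the helper's exact output may differ):

```typescript
// Hand-rolled illustration of flattened OpenInference-style LLM attributes.
// A helper like getLLMAttributes builds a structure like this for you.
const attributes: Record<string, string | number> = {
  "llm.model_name": "gpt-4o-mini",
  "llm.input_messages.0.message.role": "user",
  "llm.input_messages.0.message.content": "What is Phoenix?",
  "llm.token_count.prompt": 12,
  "llm.token_count.completion": 48,
};

console.log(Object.keys(attributes).length); // → 5
```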
### Context Propagation
Track metadata across your traces with context setters:

- `setSession` — Group multi-turn conversations with a session ID
- `setUser` — Track conversations by user ID
- `setMetadata` — Add custom metadata for operational needs
- `setTag` — Add tags for filtering spans
- `setPromptTemplate` — Track prompt templates, versions, and variables
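These setters compose with the OpenTelemetry context API. A hedged sketch, assuming setters that take and return a `Context` (the option names are assumptions):

```typescript
import { context } from "@opentelemetry/api";
import { setSession, setUser } from "@arizeai/openinference-core";

// Attach session and user metadata to the active context so that spans
// created inside the callback inherit it. Option names are illustrative.
const ctx = setUser(
  setSession(context.active(), { sessionId: "session-123" }),
  { userId: "user-456" }
);

context.with(ctx, () => {
  // Spans created here carry the session/user attributes.
});
```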
### Framework Instrumentors
Install additional packages to auto-instrument popular AI frameworks.

## @arizeai/phoenix-mcp
- Prompts management — Create, list, update, and iterate on prompts
- Datasets — Explore datasets and synthesize new examples
- Experiments — Pull experiment results and visualize them
- MCP compatible — Works with Claude Desktop, Cursor, and other MCP clients
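For example, an MCP client configuration might look like the following. The `npx` invocation is an assumption based on common MCP server setups, while the environment variables come from the table above; see the package README for the exact command:

```json
{
  "mcpServers": {
    "phoenix": {
      "command": "npx",
      "args": ["-y", "@arizeai/phoenix-mcp@latest"],
      "env": {
        "PHOENIX_BASE_URL": "http://localhost:6006",
        "PHOENIX_API_KEY": "your-api-key"
      }
    }
  }
}
```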

