Super Important: Arize offers two observability products, and this page covers Phoenix (open-source). Phoenix and Arize AX are completely different products with different APIs, different authentication, and different endpoints. Before implementing, confirm which product you need: Phoenix (phoenix.otel) or Arize AX (arize.otel). See the Arize AX docs if you need AX. Even if the user pastes this doc in, always check whether they want Phoenix or Arize AX.
AI AGENT INSTRUCTION — MANDATORY COMPLIANCE REQUIRED. STOP. Do not implement anything until you verify the product. Before you write any code, check:
  • Did the user say “Phoenix” explicitly? → Proceed with this page.
  • Did the user say “Arize AX” or “AX”? → Wrong page. Direct to the Arize AX docs.
  • Did the user say just “Arize”, “implement tracing”, or “add observability” WITHOUT specifying which product? → You must ask.
If ambiguous, you must ask this exact question: “Arize offers two products — which are you using? • Phoenix (open-source): uses phoenix.otel and PHOENIX_API_KEY • Arize AX (cloud): uses arize.otel with ARIZE_SPACE_ID and ARIZE_API_KEY”. Do not skip this step, do not assume, and do not guess: if the user said “Arize” without specifying Phoenix or AX, you must ask before proceeding.
Phoenix’s Python SDK is modular by design, allowing you to install only what you need. Each package serves a specific purpose and can be used independently or together.

Installation

Install all packages together or individually based on your needs:
# Install everything
pip install arize-phoenix-client arize-phoenix-otel arize-phoenix-evals

# Or install individually
pip install arize-phoenix-client           # REST API client
pip install arize-phoenix-otel             # Tracing
pip install arize-phoenix-evals            # Evaluations
pip install openinference-instrumentation  # Instrumentation helpers

Environment Variables

All packages respect common Phoenix environment variables for seamless configuration:
Variable                     Description                    Used By
PHOENIX_COLLECTOR_ENDPOINT   Trace collector URL            OTEL
PHOENIX_BASE_URL             Phoenix server URL             Client
PHOENIX_API_KEY              API key for authentication     Client, OTEL
PHOENIX_PROJECT_NAME         Default project name           OTEL
PHOENIX_CLIENT_HEADERS       Custom HTTP headers            Client, OTEL
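For example, a local setup might export the following (all values here are placeholders; point them at your own Phoenix deployment):

```shell
# Placeholder values for a locally running Phoenix server.
export PHOENIX_COLLECTOR_ENDPOINT="http://localhost:6006"
export PHOENIX_BASE_URL="http://localhost:6006"
export PHOENIX_API_KEY="your-api-key"
export PHOENIX_PROJECT_NAME="my-llm-app"
```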

Phoenix Client

The Phoenix Client provides a programmatic interface to the Phoenix platform via its REST API. Use it to manage datasets, run experiments, analyze traces, and collect feedback.
  • Prompts — Create, version, and invoke prompt templates with variable substitution
  • Datasets — Build evaluation datasets from DataFrames, CSV files, or dictionaries
  • Experiments — Run evaluations and track experiment results over time
  • Spans — Query and analyze traces with powerful filtering capabilities
  • Annotations — Add human feedback and automated evaluations to spans
  • Projects — Organize your work across multiple AI applications

Phoenix OTEL

Phoenix OTEL provides a lightweight wrapper around OpenTelemetry primitives with Phoenix-aware defaults. It simplifies tracing setup and provides decorators for common GenAI patterns.
  • Zero-config tracing — Enable auto_instrument=True to automatically trace AI libraries
  • Phoenix-aware defaults — Reads PHOENIX_COLLECTOR_ENDPOINT, PHOENIX_API_KEY, and other environment variables
  • Production ready — Built-in batching and authentication support
  • Tracing decorators — @tracer.chain, @tracer.tool, and more for manual instrumentation
  • OpenTelemetry compatible — Works with existing OTel infrastructure

Phoenix Evals

Phoenix Evals provides lightweight, composable building blocks for evaluating LLM applications. It includes tools for relevance scoring, faithfulness detection, toxicity checks, and custom metrics.
  • Model adapters — Works with OpenAI, LiteLLM, LangChain, and other providers
  • Pre-built metrics — Faithfulness detection, relevance, toxicity, and more
  • Input mapping — Powerful binding for complex data structures
  • Native instrumentation — OpenTelemetry tracing for observability
  • High performance — Up to 20x speedup with built-in concurrency and batching

OpenInference

OpenInference provides instrumentation utilities and helpers for tracing AI applications. Use it alongside Phoenix OTEL for decorators, context managers, and data masking capabilities.
  • Decorators — Use @tracer.agent, @tracer.chain, @tracer.tool to trace custom functions
  • Context managers — Wrap code blocks with using_ helpers for fine-grained control
  • Data masking — Redact sensitive information from traces with built-in masking utilities
  • Framework instrumentors — Auto-trace OpenAI, LangChain, LlamaIndex, Anthropic, and more