# Phoenix ## Documentation - [Arize Phoenix](/docs/phoenix/readme.md): AI Observability and Evaluation - [Quickstarts](/docs/phoenix/quickstart.md): Not sure where to start? Try a quickstart: - [User Guide](/docs/phoenix/user-guide.md) - [Environments](/docs/phoenix/environments.md) - [Overview: Tracing](/docs/phoenix/tracing/llm-traces.md): Tracing the execution of LLM applications using OpenTelemetry - [Quickstart: Tracing](/docs/phoenix/tracing/llm-traces-1.md) - [Quickstart: Tracing (Python)](/docs/phoenix/tracing/llm-traces-1/quickstart-tracing-python.md) - [Quickstart: Tracing (TS)](/docs/phoenix/tracing/llm-traces-1/quickstart-tracing-ts.md) - [Features: Tracing](/docs/phoenix/tracing/features-tracing.md): Tracing is a critical part of AI Observability and should be used both in production and development - [Projects](/docs/phoenix/tracing/features-tracing/projects.md): Use projects to organize your LLM traces - [Annotations](/docs/phoenix/tracing/features-tracing/how-to-annotate-traces.md) - [Sessions](/docs/phoenix/tracing/features-tracing/sessions.md): Track and analyze multi-turn conversations - [How-to: Tracing](/docs/phoenix/tracing/how-to-tracing.md): Guides on how to use traces - [Setup Tracing](/docs/phoenix/tracing/how-to-tracing/setup-tracing.md) - [Setup using Phoenix OTEL](/docs/phoenix/tracing/how-to-tracing/setup-tracing/setup-using-phoenix-otel.md) - [Setup using base OTEL](/docs/phoenix/tracing/how-to-tracing/setup-tracing/custom-spans.md): While the spans created via Phoenix and OpenInference provide a solid foundation for tracing your application, sometimes you need to create and customize your LLM spans - [Using Phoenix Decorators](/docs/phoenix/tracing/how-to-tracing/setup-tracing/instrument-python.md): As part of the OpenInference library, Phoenix provides helpful abstractions to make manual instrumentation easier. - [Setup Tracing (TS)](/docs/phoenix/tracing/how-to-tracing/setup-tracing/javascript.md) - [Setup Projects](/docs/phoenix/tracing/how-to-tracing/setup-tracing/setup-projects.md) - [Setup Sessions](/docs/phoenix/tracing/how-to-tracing/setup-tracing/setup-sessions.md): How to track sessions across multiple traces - [Add Metadata](/docs/phoenix/tracing/how-to-tracing/add-metadata.md) - [Add Attributes, Metadata, Users](/docs/phoenix/tracing/how-to-tracing/add-metadata/customize-spans.md) - [Instrument Prompt Templates and Prompt Variables](/docs/phoenix/tracing/how-to-tracing/add-metadata/instrumenting-prompt-templates-and-prompt-variables.md) - [Annotate Traces](/docs/phoenix/tracing/how-to-tracing/feedback-and-annotations.md) - [Annotating in the UI](/docs/phoenix/tracing/how-to-tracing/feedback-and-annotations/annotating-in-the-ui.md): How to annotate traces in the UI for analysis and dataset curation - [Annotating via the Client](/docs/phoenix/tracing/how-to-tracing/feedback-and-annotations/capture-feedback.md): Use the Phoenix client to capture end-user feedback - [Running Evals on Traces](/docs/phoenix/tracing/how-to-tracing/feedback-and-annotations/evaluating-phoenix-traces.md): How to use an LLM judge to label and score your application - [Log Evaluation Results](/docs/phoenix/tracing/how-to-tracing/feedback-and-annotations/llm-evaluations.md): This guide shows how LLM evaluation results in dataframes can be sent to Phoenix.
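A minimal sketch of the setup covered by the Setup using Phoenix OTEL and Using Phoenix Decorators entries above, assuming a Phoenix instance at the default local endpoint and `pip install arize-phoenix-otel`; the project name and traced function are illustrative:

```python
# Register a tracer provider pointed at Phoenix, then trace one function.
from phoenix.otel import register

tracer_provider = register(
    project_name="my-app",  # hypothetical project; traces land under this name
    auto_instrument=True,   # apply any installed OpenInference instrumentors
)
tracer = tracer_provider.get_tracer(__name__)

@tracer.chain  # records each call as a CHAIN span with its input and output
def answer(question: str) -> str:
    return f"You asked: {question}"

answer("What is Phoenix?")
```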
- [Importing & Exporting Traces](/docs/phoenix/tracing/how-to-tracing/importing-and-exporting-traces.md) - [Import Existing Traces](/docs/phoenix/tracing/how-to-tracing/importing-and-exporting-traces/importing-existing-traces.md) - [Export Data & Query Spans](/docs/phoenix/tracing/how-to-tracing/importing-and-exporting-traces/extract-data-from-spans.md): Various options to help you get data out of Phoenix - [Exporting Annotated Spans](/docs/phoenix/tracing/how-to-tracing/importing-and-exporting-traces/exporting-annotated-spans.md) - [Advanced](/docs/phoenix/tracing/how-to-tracing/advanced.md) - [Mask Span Attributes](/docs/phoenix/tracing/how-to-tracing/advanced/masking-span-attributes.md) - [Suppress Tracing](/docs/phoenix/tracing/how-to-tracing/advanced/suppress-tracing.md) - [Filter Spans to Export](/docs/phoenix/tracing/how-to-tracing/advanced/modifying-spans.md) - [Capture Multimodal Traces](/docs/phoenix/tracing/how-to-tracing/advanced/multimodal-tracing.md) - [Overview: Prompts](/docs/phoenix/prompt-engineering/overview-prompts.md) - [Prompt Management](/docs/phoenix/prompt-engineering/overview-prompts/prompt-management.md): Version and track changes made to prompt templates - [Prompt Playground](/docs/phoenix/prompt-engineering/overview-prompts/prompt-playground.md) - [Span Replay](/docs/phoenix/prompt-engineering/overview-prompts/span-replay.md): Replay LLM spans traced in your application directly in the playground - [Prompts in Code](/docs/phoenix/prompt-engineering/overview-prompts/prompts-in-code.md): Pull and push prompt changes via Phoenix's Python and TypeScript Clients - [Quickstart: Prompts](/docs/phoenix/prompt-engineering/quickstart-prompts.md) - [Quickstart: Prompts (UI)](/docs/phoenix/prompt-engineering/quickstart-prompts/quickstart-prompts-ui.md) - [Quickstart: Prompts (Python)](/docs/phoenix/prompt-engineering/quickstart-prompts/quickstart-prompts-python.md): This guide will show you how to set up and use Prompts through Phoenix's Python SDK - [Quickstart: Prompts (TS)](/docs/phoenix/prompt-engineering/quickstart-prompts/quickstart-prompts-ts.md): This guide will walk you through setting up and using Phoenix Prompts with TypeScript.
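A minimal sketch of the workflow behind the Prompts in Code and Quickstart: Prompts (Python) entries above, assuming `pip install arize-phoenix-client openai` and a prompt already saved in Phoenix under the hypothetical name `article-summarizer`:

```python
# Fetch a versioned prompt from Phoenix and invoke it with OpenAI.
from openai import OpenAI
from phoenix.client import Client

prompt = Client().prompts.get(prompt_identifier="article-summarizer")  # hypothetical prompt

# format() fills in template variables and returns provider-ready kwargs
# (model, messages, ...) that can be splatted into the OpenAI call.
kwargs = prompt.format(variables={"article": "Phoenix is an AI observability platform..."})
response = OpenAI().chat.completions.create(**kwargs)
print(response.choices[0].message.content)
```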
- [How to: Prompts](/docs/phoenix/prompt-engineering/how-to-prompts.md): Guides on how to do prompt engineering with Phoenix - [Configure AI Providers](/docs/phoenix/prompt-engineering/how-to-prompts/configure-ai-providers.md) - [Using the Playground](/docs/phoenix/prompt-engineering/how-to-prompts/using-the-playground.md): General guidelines on how to use Phoenix's prompt playground - [Create a prompt](/docs/phoenix/prompt-engineering/how-to-prompts/create-a-prompt.md): Store and track prompt versions in Phoenix - [Test a prompt](/docs/phoenix/prompt-engineering/how-to-prompts/test-a-prompt.md): Testing your prompts before you ship them is vital to deploying reliable AI applications - [Tag a prompt](/docs/phoenix/prompt-engineering/how-to-prompts/tag-a-prompt.md): How to deploy prompts to different environments safely - [Using a prompt](/docs/phoenix/prompt-engineering/how-to-prompts/using-a-prompt.md) - [Overview: Datasets & Experiments](/docs/phoenix/datasets-and-experiments/overview-datasets.md) - [Quickstart: Datasets & Experiments](/docs/phoenix/datasets-and-experiments/quickstart-datasets.md) - [How-to: Datasets](/docs/phoenix/datasets-and-experiments/how-to-datasets.md): Datasets are critical assets for building robust prompts, evals, and fine-tuning - [Creating Datasets](/docs/phoenix/datasets-and-experiments/how-to-datasets/creating-datasets.md) - [Exporting Datasets](/docs/phoenix/datasets-and-experiments/how-to-datasets/exporting-datasets.md) - [How-to: Experiments](/docs/phoenix/datasets-and-experiments/how-to-experiments.md) - [Run Experiments](/docs/phoenix/datasets-and-experiments/how-to-experiments/run-experiments.md): The following are the key steps of running an experiment, illustrated by a simple example. - [Using Evaluators](/docs/phoenix/datasets-and-experiments/how-to-experiments/using-evaluators.md) - [Overview: Evals](/docs/phoenix/evaluation/llm-evals.md) - [Agent Evaluation](/docs/phoenix/evaluation/llm-evals/agent-evaluation.md) - [Quickstart: Evals](/docs/phoenix/evaluation/evals.md) - [How to: Evals](/docs/phoenix/evaluation/how-to-evals.md) - [Pre-Built Evals](/docs/phoenix/evaluation/how-to-evals/running-pre-tested-evals.md) - [Hallucinations](/docs/phoenix/evaluation/how-to-evals/running-pre-tested-evals/hallucinations.md) - [Q\&A on Retrieved Data](/docs/phoenix/evaluation/how-to-evals/running-pre-tested-evals/q-and-a-on-retrieved-data.md) - [Retrieval (RAG) Relevance](/docs/phoenix/evaluation/how-to-evals/running-pre-tested-evals/retrieval-rag-relevance.md) - [Summarization](/docs/phoenix/evaluation/how-to-evals/running-pre-tested-evals/summarization-eval.md) - [Code Generation](/docs/phoenix/evaluation/how-to-evals/running-pre-tested-evals/code-generation-eval.md) - [Toxicity](/docs/phoenix/evaluation/how-to-evals/running-pre-tested-evals/toxicity.md) - [AI vs Human (Groundtruth)](/docs/phoenix/evaluation/how-to-evals/running-pre-tested-evals/ai-vs-human-groundtruth.md): This LLM evaluation is used to compare AI answers to human answers. It's very useful in RAG system benchmarking to compare AI answers against human-generated ground truth.
- [Reference (citation) Link](/docs/phoenix/evaluation/how-to-evals/running-pre-tested-evals/reference-link-evals.md) - [User Frustration](/docs/phoenix/evaluation/how-to-evals/running-pre-tested-evals/user-frustration.md) - [SQL Generation Eval](/docs/phoenix/evaluation/how-to-evals/running-pre-tested-evals/sql-generation-eval.md) - [Agent Function Calling Eval](/docs/phoenix/evaluation/how-to-evals/running-pre-tested-evals/tool-calling-eval.md) - [Agent Path Convergence](/docs/phoenix/evaluation/how-to-evals/running-pre-tested-evals/agent-path-convergence.md) - [Agent Planning](/docs/phoenix/evaluation/how-to-evals/running-pre-tested-evals/agent-planning.md) - [Agent Reflection](/docs/phoenix/evaluation/how-to-evals/running-pre-tested-evals/agent-reflection.md) - [Audio Emotion Detection](/docs/phoenix/evaluation/how-to-evals/running-pre-tested-evals/audio-emotion-detection.md) - [Eval Models](/docs/phoenix/evaluation/how-to-evals/evaluation-models.md): Evaluation model classes powering your LLM Evals - [Build an Eval](/docs/phoenix/evaluation/how-to-evals/bring-your-own-evaluator.md): This guide shows you how to build and improve an LLM as a Judge Eval from scratch. - [Build a Multimodal Eval](/docs/phoenix/evaluation/how-to-evals/multimodal-evals.md) - [Online Evals](/docs/phoenix/evaluation/how-to-evals/online-evals.md) - [Evals API Reference](/docs/phoenix/evaluation/how-to-evals/evals-reference.md): Evals are LLM-powered functions that you can use to evaluate the output of your LLM or generative application - [Overview: Retrieval](/docs/phoenix/retrieval/overview-retrieval.md) - [Quickstart: Retrieval](/docs/phoenix/retrieval/quickstart-retrieval.md): Debug your Search and Retrieval LLM workflows - [Quickstart: Inferences](/docs/phoenix/inferences/phoenix-inferences.md): Observability for all model types (LLM, NLP, CV, Tabular) - [How-to: Inferences](/docs/phoenix/inferences/how-to-inferences.md) - [Import Your Data](/docs/phoenix/inferences/how-to-inferences/define-your-schema.md): How to create Phoenix inferences and schemas for common data formats - [Prompt and Response (LLM)](/docs/phoenix/inferences/how-to-inferences/define-your-schema/prompt-and-response-llm.md): How to import prompts and responses from a Large Language Model (LLM) - [Retrieval (RAG)](/docs/phoenix/inferences/how-to-inferences/define-your-schema/retrieval-rag.md): How to import data for the Retrieval-Augmented Generation (RAG) use case - [Corpus Data](/docs/phoenix/inferences/how-to-inferences/define-your-schema/corpus-data.md): How to create Phoenix inferences and schemas for the corpus data - [Export Data](/docs/phoenix/inferences/how-to-inferences/export-your-data.md): How to export your data for labeling, evaluation, or fine-tuning - [Generate Embeddings](/docs/phoenix/inferences/how-to-inferences/generating-embeddings.md) - [Manage the App](/docs/phoenix/inferences/how-to-inferences/manage-the-app.md): How to define your inference set(s), launch a session, open the UI in your notebook or browser, and close your session when you're done - [Use Example Inferences](/docs/phoenix/inferences/how-to-inferences/use-example-inferences.md): Quickly explore Phoenix with concrete examples - [API: Inferences](/docs/phoenix/inferences/inference-and-schema.md): Detailed descriptions of classes and methods related to Phoenix inferences and schemas - [Access Control (RBAC)](/docs/phoenix/settings/access-control-rbac.md) - [API Keys](/docs/phoenix/settings/api-keys.md) - [Data Retention](/docs/phoenix/settings/data-retention.md)
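A minimal sketch of running one of the pre-built evals listed above (Hallucinations) over a dataframe, assuming `pip install arize-phoenix-evals pandas openai` and an `OPENAI_API_KEY` in the environment; the rows are illustrative:

```python
import pandas as pd
from phoenix.evals import (
    HALLUCINATION_PROMPT_RAILS_MAP,
    HALLUCINATION_PROMPT_TEMPLATE,
    OpenAIModel,
    llm_classify,
)

# The hallucination template expects input, reference, and output columns.
df = pd.DataFrame({
    "input": ["What is Phoenix?"],
    "reference": ["Phoenix is an open-source AI observability platform."],
    "output": ["Phoenix is a proprietary vector database."],  # deliberately wrong
})

results = llm_classify(
    dataframe=df,
    template=HALLUCINATION_PROMPT_TEMPLATE,
    model=OpenAIModel(model="gpt-4o"),
    rails=list(HALLUCINATION_PROMPT_RAILS_MAP.values()),  # "hallucinated" / "factual"
    provide_explanation=True,  # adds an explanation column alongside the label
)
print(results["label"])
```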
- [Arize Phoenix (Japanese)](/docs/phoenix/documentation/jp/readme.md) - [Arize Phoenix (Chinese)](/docs/phoenix/documentation/zh/readme.md) ## Self-Hosting - [Self-Hosting](/docs/phoenix/self-hosting/readme.md): How to self-host a Phoenix instance - [License](/docs/phoenix/self-hosting/license.md): Arize Phoenix is fully open source and free to self-host - [Configuration](/docs/phoenix/self-hosting/configuration.md): How to customize your self-hosted deployment of Phoenix - [Docker](/docs/phoenix/self-hosting/deployment-options/docker.md): Deploy using Docker Compose for a local or cloud deployment - [Kubernetes (kustomize)](/docs/phoenix/self-hosting/deployment-options/kubernetes.md): Phoenix can be deployed on Kubernetes with PostgreSQL - [Kubernetes (helm)](/docs/phoenix/self-hosting/deployment-options/kubernetes-helm.md): Deploy Phoenix via Helm - [AWS with CloudFormation](/docs/phoenix/self-hosting/deployment-options/aws-with-cloudformation.md): Phoenix can be deployed on AWS Fargate using CloudFormation - [Railway](/docs/phoenix/self-hosting/deployment-options/railway.md): Use this guide to deploy Arize Phoenix on Railway via the prebuilt template. - [Provisioning](/docs/phoenix/self-hosting/features/provisioning.md): How to provision Phoenix at deploy time - [Authentication](/docs/phoenix/self-hosting/features/authentication.md) - [Email](/docs/phoenix/self-hosting/features/email.md) - [Management](/docs/phoenix/self-hosting/features/management.md): How to manage your Phoenix instance - [Migrations](/docs/phoenix/self-hosting/upgrade/migrations.md): Migrations are run at boot - [FAQs](/docs/phoenix/self-hosting/misc/frequently-asked-questions.md) ## SDK and API Reference - [Overview](/docs/phoenix/sdk-api-reference/readme.md) - [Overview](/docs/phoenix/sdk-api-reference/python/overview.md) - [arize-phoenix-evals](/docs/phoenix/sdk-api-reference/python-pacakges/arize-phoenix-evals.md): Tooling to evaluate LLM applications including RAG relevance, answer relevance, and more. - [arize-phoenix-client](/docs/phoenix/sdk-api-reference/python-pacakges/arize-phoenix-client.md): Phoenix Client is a lightweight package for interacting with the Phoenix server. - [arize-phoenix-otel](/docs/phoenix/sdk-api-reference/python-pacakges/arize-phoenix-otel.md) - [Overview](/docs/phoenix/sdk-api-reference/typescript/overview.md): This package provides a TypeScript client for the Arize Phoenix API. - [MCP Server](/docs/phoenix/sdk-api-reference/typescript-packages/mcp-server.md): MCP server implementation for Arize Phoenix providing a unified interface to Phoenix's capabilities.
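A minimal sketch of the kind of round trip the Python packages above enable, here pulling spans into a dataframe (see also the Export Data & Query Spans entry earlier); it assumes `pip install arize-phoenix` and a Phoenix reachable at the default local endpoint, and the project name is illustrative:

```python
# Query spans out of a running Phoenix instance as a pandas DataFrame.
import phoenix as px

client = px.Client()  # honors PHOENIX_COLLECTOR_ENDPOINT, else localhost:6006
spans_df = client.get_spans_dataframe(project_name="my-app")  # hypothetical project
print(spans_df[["name", "span_kind", "start_time"]].head())
```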
- [Overview](/docs/phoenix/sdk-api-reference/rest-api/overview.md) - [Datasets](/docs/phoenix/sdk-api-reference/datasets.md): REST API methods for interacting with Phoenix datasets - [Experiments](/docs/phoenix/sdk-api-reference/experiments.md): REST API methods for interacting with Phoenix experiments - [Spans](/docs/phoenix/sdk-api-reference/spans.md): REST API methods for interacting with Phoenix spans - [Traces](/docs/phoenix/sdk-api-reference/traces.md): REST API methods for interacting with Phoenix traces - [Prompts](/docs/phoenix/sdk-api-reference/prompts.md): REST API methods for interacting with Phoenix prompts - [Projects](/docs/phoenix/sdk-api-reference/projects.md): REST API methods for interacting with Phoenix projects - [Users](/docs/phoenix/sdk-api-reference/users.md) ## Integrations - [Overview](/docs/phoenix/integrations/readme.md) - [Amazon Bedrock](/docs/phoenix/integrations/llm-providers/amazon-bedrock.md): Amazon Bedrock is a managed service that provides access to top AI models for building scalable applications. - [Amazon Bedrock Tracing](/docs/phoenix/integrations/llm-providers/amazon-bedrock/amazon-bedrock-tracing.md): Instrument LLM calls to AWS Bedrock via the boto3 client using the BedrockInstrumentor - [Amazon Bedrock Evals](/docs/phoenix/integrations/llm-providers/amazon-bedrock/amazon-bedrock-evals.md): Configure and run Bedrock for evals - [Amazon Bedrock Agents Tracing](/docs/phoenix/integrations/llm-providers/amazon-bedrock/amazon-bedrock-agents-tracing.md): Instrument calls to Amazon Bedrock Agents made via the boto3 client using the BedrockInstrumentor - [Anthropic](/docs/phoenix/integrations/llm-providers/anthropic.md): Anthropic is an AI research company that develops LLMs, including Claude, with a focus on alignment and reliable behavior. - [Anthropic Tracing](/docs/phoenix/integrations/llm-providers/anthropic/anthropic-tracing.md) - [Anthropic Evals](/docs/phoenix/integrations/llm-providers/anthropic/anthropic-evals.md): Configure and run Anthropic for evals - [Google Gen AI](/docs/phoenix/integrations/llm-providers/google-gen-ai.md): Google GenAI is a suite of AI tools and models from Google Cloud, designed to help businesses build, deploy, and scale AI applications. - [Google GenAI Tracing](/docs/phoenix/integrations/llm-providers/google-gen-ai/google-genai-tracing.md): Instrument LLM calls made using the Google Gen AI Python SDK - [Gemini Evals](/docs/phoenix/integrations/llm-providers/google-gen-ai/gemini-evals.md): Configure and run Gemini for evals - [LiteLLM](/docs/phoenix/integrations/llm-providers/litellm.md): LiteLLM is an open-source platform that provides a unified interface to manage and access over 100 LLMs from various providers. - [LiteLLM Tracing](/docs/phoenix/integrations/llm-providers/litellm/litellm-tracing.md) - [LiteLLM Evals](/docs/phoenix/integrations/llm-providers/litellm/litellm-evals.md): Configure and run LiteLLM for evals - [MistralAI](/docs/phoenix/integrations/llm-providers/mistralai.md): Mistral AI develops open-weight large language models, focusing on efficiency, customization, and cost-effective AI solutions.
- [MistralAI Tracing](/docs/phoenix/integrations/llm-providers/mistralai/mistralai-tracing.md): Instrument LLM calls made using MistralAI's SDK via the MistralAIInstrumentor - [MistralAI Evals](/docs/phoenix/integrations/llm-providers/mistralai/mistralai-evals.md): Configure and run MistralAI for evals - [Groq](/docs/phoenix/integrations/llm-providers/groq.md): Groq provides ultra-low latency inference for LLMs through its custom-built LPU™ architecture. - [Groq Tracing](/docs/phoenix/integrations/llm-providers/groq/groq-tracing.md): Instrument LLM applications built with Groq - [OpenAI](/docs/phoenix/integrations/llm-providers/openai.md): OpenAI provides state-of-the-art LLMs for natural language understanding and generation. - [OpenAI Tracing](/docs/phoenix/integrations/llm-providers/openai/openai-tracing.md) - [OpenAI Evals](/docs/phoenix/integrations/llm-providers/openai/openai-evals.md): Configure and run OpenAI for evals - [OpenAI Agents SDK Tracing](/docs/phoenix/integrations/llm-providers/openai/openai-agents-sdk-tracing.md): Use Phoenix and OpenAI Agents SDK for powerful multi-agent tracing - [OpenAI Node.js SDK](/docs/phoenix/integrations/llm-providers/openai/openai-node.js-sdk.md) - [VertexAI](/docs/phoenix/integrations/llm-providers/vertexai.md): Vertex AI is a fully managed platform by Google Cloud for building, deploying, and scaling machine learning models. - [VertexAI Tracing](/docs/phoenix/integrations/llm-providers/vertexai/vertexai-tracing.md): Instrument LLM calls made using VertexAI's SDK via the VertexAIInstrumentor - [VertexAI Evals](/docs/phoenix/integrations/llm-providers/vertexai/vertexai-evals.md): Configure and run VertexAI for evals - [Agno](/docs/phoenix/integrations/frameworks/agno.md): Agno is an open-source Python framework for building lightweight, model-agnostic AI agents with built-in memory, knowledge, tools, and reasoning capabilities - [Agno Tracing](/docs/phoenix/integrations/frameworks/agno/agno-tracing.md) - [AutoGen](/docs/phoenix/integrations/frameworks/autogen.md): AutoGen is an open-source Python framework for orchestrating multi-agent LLM interactions with shared memory and tool integrations to build scalable AI workflows - [AutoGen Tracing](/docs/phoenix/integrations/frameworks/autogen/autogen-tracing.md) - [BeeAI](/docs/phoenix/integrations/frameworks/beeai.md): BeeAI is an open-source platform that enables developers to discover, run, and compose AI agents from any framework, facilitating the creation of interoperable multi-agent systems - [BeeAI Tracing (JS)](/docs/phoenix/integrations/frameworks/beeai/beeai-tracing-js.md) - [CrewAI](/docs/phoenix/integrations/frameworks/crewai.md): CrewAI is an open-source Python framework for orchestrating role-playing, autonomous AI agents into collaborative “crews” and “flows,” combining high-level simplicity with fine-grained control. - [CrewAI Tracing](/docs/phoenix/integrations/frameworks/crewai/crewai-tracing.md): Instrument multi-agent applications using CrewAI - [DSPy](/docs/phoenix/integrations/frameworks/dspy.md): DSPy is an open-source Python framework for declaratively programming modular LLM pipelines and automatically optimizing prompts and model weights - [DSPy Tracing](/docs/phoenix/integrations/frameworks/dspy/dspy-tracing.md): Instrument and observe your DSPy application via the DSPyInstrumentor - [Flowise](/docs/phoenix/integrations/frameworks/flowise.md): Flowise is a low-code platform for building customized chatflows and agentflows.
- [Flowise Tracing](/docs/phoenix/integrations/frameworks/flowise/flowise-tracing.md) - [Guardrails AI](/docs/phoenix/integrations/frameworks/guardrails-ai.md): Guardrails is an open-source Python framework for adding programmable input/output validators to LLM applications, ensuring safe, structured, and compliant model interactions - [Guardrails AI Tracing](/docs/phoenix/integrations/frameworks/guardrails-ai/guardrails-ai-tracing.md): Instrument LLM applications that use the Guardrails AI framework - [Haystack](/docs/phoenix/integrations/frameworks/haystack.md): Haystack is an open-source framework for building scalable semantic search and QA pipelines with document indexing, retrieval, and reader components - [Haystack Tracing](/docs/phoenix/integrations/frameworks/haystack/haystack-tracing.md): Instrument LLM applications built with Haystack - [Hugging Face smolagents](/docs/phoenix/integrations/frameworks/hugging-face-smolagents.md): Hugging Face smolagents is a minimalist Python library for building powerful AI agents with simple abstractions, tool integrations, and flexible LLM support - [smolagents Tracing](/docs/phoenix/integrations/frameworks/hugging-face-smolagents/smolagents-tracing.md): How to use the SmolagentsInstrumentor to trace smolagents by Hugging Face - [Instructor](/docs/phoenix/integrations/frameworks/instructor.md): Instructor is a library that helps you define structured output formats for LLMs. - [Instructor Tracing](/docs/phoenix/integrations/frameworks/instructor/instructor-tracing.md) - [LlamaIndex](/docs/phoenix/integrations/frameworks/llamaindex.md): LlamaIndex is an open-source framework that streamlines connecting, ingesting, indexing, and retrieving structured or unstructured data to power efficient, data-aware language model applications. 
- [LlamaIndex Tracing](/docs/phoenix/integrations/frameworks/llamaindex/llamaindex-tracing.md): How to use the Python LlamaIndexInstrumentor to trace LlamaIndex - [LlamaIndex Workflows Tracing](/docs/phoenix/integrations/frameworks/llamaindex/llamaindex-workflows-tracing.md): How to use the Python LlamaIndexInstrumentor to trace LlamaIndex Workflows - [LangChain](/docs/phoenix/integrations/frameworks/langchain.md): LangChain is an open-source framework for building language model applications with prompt chaining, memory, and external integrations - [LangChain Tracing](/docs/phoenix/integrations/frameworks/langchain/langchain-tracing.md): How to use the Python LangChainInstrumentor to trace LangChain - [LangChain.js](/docs/phoenix/integrations/frameworks/langchain/langchain.js.md) - [LangGraph](/docs/phoenix/integrations/frameworks/langgraph.md): LangGraph is an open-source framework for building graph-based LLM pipelines with modular nodes and seamless data integrations - [LangGraph Tracing](/docs/phoenix/integrations/frameworks/langgraph/langgraph-tracing.md) - [LangFlow](/docs/phoenix/integrations/langflow.md): Langflow is an open-source visual framework that enables developers to rapidly design, prototype, and deploy custom applications powered by large language models (LLMs) - [LangFlow Tracing](/docs/phoenix/integrations/langflow/langflow-tracing.md) - [Mastra](/docs/phoenix/integrations/mastra.md): Mastra is an open-source TypeScript AI agent framework designed for building production-ready AI applications with agents, workflows, RAG, and observability - [Mastra Tracing](/docs/phoenix/integrations/mastra/mastra-tracing.md) - [Model Context Protocol](/docs/phoenix/integrations/model-context-protocol.md): Anthropic's Model Context Protocol is a standard for connecting AI assistants to the systems where data lives, including content repositories, business tools, and development environments. - [Phoenix MCP Server](/docs/phoenix/integrations/model-context-protocol/phoenix-mcp-server.md): Phoenix MCP Server is an implementation of the Model Context Protocol for the Arize Phoenix platform. It provides a unified interface to Phoenix's capabilities. - [MCP Tracing](/docs/phoenix/integrations/model-context-protocol/mcp-tracing.md): Phoenix provides tracing for MCP clients and servers through OpenInference. This includes the unique capability to trace client-to-server interactions under a single trace in the correct hierarchy. - [Portkey](/docs/phoenix/integrations/portkey.md): Portkey is an AI Gateway and observability platform that provides routing, guardrails, caching, and monitoring for 200+ LLMs with enterprise-grade security and reliability features.
- [Portkey Tracing](/docs/phoenix/integrations/portkey/portkey-tracing.md): How to trace Portkey AI Gateway requests with Phoenix for comprehensive LLM observability - [Prompt Flow](/docs/phoenix/integrations/prompt-flow.md): PromptFlow is a framework for designing, orchestrating, testing, and monitoring end-to-end LLM prompt workflows with built-in versioning and analytics - [Prompt Flow Tracing](/docs/phoenix/integrations/prompt-flow/prompt-flow-tracing.md): Create flows using Microsoft PromptFlow and send their traces to Phoenix - [Pydantic AI](/docs/phoenix/integrations/pydantic.md): PydanticAI is a Python agent framework designed to make it less painful to build production-grade applications with Generative AI, built by the team behind Pydantic with type-safe structured outputs - [Pydantic AI Tracing](/docs/phoenix/integrations/pydantic/pydantic-tracing.md): How to use the Python PydanticAIInstrumentor to trace PydanticAI agents - [Pydantic AI Evals](/docs/phoenix/integrations/pydantic/pydantic-evals.md): How to use Pydantic Evals with Phoenix to evaluate AI applications using structured evaluation frameworks - [Vercel](/docs/phoenix/integrations/vercel.md): Vercel is a cloud platform that simplifies building, deploying, and scaling modern web applications with features like serverless functions, edge caching, and seamless Git integration - [Vercel AI SDK Tracing (JS)](/docs/phoenix/integrations/vercel/vercel-ai-sdk-tracing-js.md) - [Cleanlab](/docs/phoenix/integrations/evaluation-libraries/cleanlab.md) - [Ragas](/docs/phoenix/integrations/evaluation-libraries/ragas.md) - [MongoDB](/docs/phoenix/integrations/vector-databases/mongodb.md): MongoDB is a database platform. Its Atlas product is built for GenAI applications. - [Pinecone](/docs/phoenix/integrations/vector-databases/pinecone.md): Pinecone is a vector database that can be used to power RAG in various applications. - [Qdrant](/docs/phoenix/integrations/vector-databases/qdrant.md): Qdrant is an open-source vector database built for high-dimensional vectors and large-scale workflows - [Weaviate](/docs/phoenix/integrations/vector-databases/weaviate.md): Weaviate is an open source, AI-native vector database. - [Zilliz / Milvus](/docs/phoenix/integrations/vector-databases/zilliz-milvus.md): Milvus is an open-source vector database built for GenAI applications. ## Cookbooks - [Featured Tutorials](/docs/phoenix/cookbook/readme.md) - [Agent Cookbooks](/docs/phoenix/cookbook/agent-cookbooks.md) - [Agent Demos](/docs/phoenix/cookbook/agent-demos.md): Example agents are fully instrumented with OpenInference and utilize end-to-end tracing with Phoenix for comprehensive performance analysis. Enter your Phoenix and OpenAI keys to view traces.
- [Cookbooks](/docs/phoenix/cookbook/tracing/cookbooks.md) - [Structured Data Extraction](/docs/phoenix/cookbook/tracing/structured-data-extraction.md) - [Few Shot Prompting](/docs/phoenix/cookbook/prompt-engineering/few-shot-prompting.md) - [ReAct Prompting](/docs/phoenix/cookbook/prompt-engineering/react-prompting.md) - [Chain-of-Thought Prompting](/docs/phoenix/cookbook/prompt-engineering/chain-of-thought-prompting.md) - [Prompt Optimization](/docs/phoenix/cookbook/prompt-engineering/prompt-optimization.md) - [LLM as a Judge Prompt Optimization](/docs/phoenix/cookbook/prompt-engineering/llm-as-a-judge-prompt-optimization.md) - [Cookbooks](/docs/phoenix/cookbook/datasets-and-experiments/cookbooks.md) - [Summarization](/docs/phoenix/cookbook/datasets-and-experiments/summarization.md) - [Text2SQL](/docs/phoenix/cookbook/datasets-and-experiments/text2sql.md) - [Cookbooks](/docs/phoenix/cookbook/evaluation/cookbooks.md) - [Evaluate RAG](/docs/phoenix/cookbook/evaluation/evaluate-rag.md): Building a RAG pipeline and evaluating it with Phoenix Evals. - [Evaluate an Agent](/docs/phoenix/cookbook/evaluation/evaluate-an-agent.md) - [OpenAI Agents SDK Cookbook](/docs/phoenix/cookbook/evaluation/openai-agents-sdk-cookbook.md) - [Cookbooks](/docs/phoenix/cookbook/retrieval-and-inferences/cookbooks.md) - [Embeddings Analysis](/docs/phoenix/cookbook/retrieval-and-inferences/embeddings-analysis.md) ## Learn - [Agent Workflow Patterns](/docs/phoenix/learn/agents/readme.md) - [AutoGen](/docs/phoenix/learn/agents/readme/autogen.md): Use Phoenix to trace and evaluate AutoGen agents - [CrewAI](/docs/phoenix/learn/agents/readme/crewai.md): Use Phoenix to trace and evaluate different CrewAI agent patterns - [Google GenAI SDK (Manual Orchestration)](/docs/phoenix/learn/agents/readme/google-genai-sdk.md): Everything you need to know about Google's GenAI framework - [OpenAI Agents](/docs/phoenix/learn/agents/readme/openai-agents.md): Build multi-agent workflows with OpenAI Agents - [LangGraph](/docs/phoenix/learn/agents/readme/langgraph.md): Use Phoenix to trace and evaluate agent frameworks built using LangGraph - [Smolagents](/docs/phoenix/learn/agents/readme/smolagents.md) - [What are Traces](/docs/phoenix/learn/tracing/what-are-traces.md): A deep dive into the details of a trace - [How Tracing Works](/docs/phoenix/learn/tracing/how-tracing-works.md): The components behind tracing - [FAQs: Tracing](/docs/phoenix/learn/tracing/faqs-tracing.md) - [Prompts Concepts](/docs/phoenix/learn/prompt-engineering/prompts-concepts.md) - [Datasets Concepts](/docs/phoenix/learn/datasets-and-experiments/datasets-concepts.md) - [Evaluators](/docs/phoenix/learn/evaluation/evaluators.md) - [Eval Data Types](/docs/phoenix/learn/evaluation/eval-data-types.md) - [Evals With Explanations](/docs/phoenix/learn/evaluation/evals-with-explanations.md) - [LLM as a Judge](/docs/phoenix/learn/evaluation/llm-as-a-judge.md) - [Custom Task Evaluation](/docs/phoenix/learn/evaluation/custom-task-evaluation.md) - [Retrieval with Embeddings](/docs/phoenix/learn/retrieval-and-inferences/retrieval-with-embeddings.md) - [Benchmarking Retrieval](/docs/phoenix/learn/retrieval-and-inferences/benchmarking-retrieval.md): Benchmarking Chunk Size, K, and Retrieval Approach - [Retrieval Evals on Document Chunks](/docs/phoenix/learn/retrieval-evals-on-document-chunks.md) - [Inferences Concepts](/docs/phoenix/learn/inferences-concepts.md) - [Frequently Asked Questions](/docs/phoenix/learn/resources/faqs.md) - [What is the difference between Phoenix and
Arize?](/docs/phoenix/learn/resources/faqs/what-is-the-difference-between-phoenix-and-arize.md) - [What is my Phoenix Endpoint?](/docs/phoenix/learn/resources/faqs/what-is-my-phoenix-endpoint.md) - [What is LlamaTrace vs Phoenix Cloud?](/docs/phoenix/learn/resources/faqs/what-is-llamatrace-vs-phoenix-cloud.md) - [Will Phoenix Cloud be on the latest version of Phoenix?](/docs/phoenix/learn/resources/faqs/will-phoenix-cloud-be-on-the-latest-version-of-phoenix.md) - [Can I add other users to my Phoenix Instance?](/docs/phoenix/learn/resources/faqs/can-i-add-other-users-to-my-phoenix-instance.md) - [Can I use Azure OpenAI?](/docs/phoenix/learn/resources/faqs/can-i-use-azure-openai.md) - [Can I use Phoenix locally from a remote Jupyter instance?](/docs/phoenix/learn/resources/faqs/can-i-use-phoenix-locally-from-a-remote-jupyter-instance.md) - [How can I configure the backend to send the data to the phoenix UI in another container?](/docs/phoenix/learn/resources/faqs/how-can-i-configure-the-backend-to-send-the-data-to-the-phoenix-ui-in-another-container.md) - [Can I run Phoenix on Sagemaker?](/docs/phoenix/learn/resources/faqs/can-i-run-phoenix-on-sagemaker.md) - [Can I persist data in a notebook?](/docs/phoenix/learn/resources/faqs/can-i-persist-data-in-a-notebook.md) - [What is the difference between GRPC and HTTP?](/docs/phoenix/learn/resources/faqs/what-is-the-difference-between-grpc-and-http.md) - [Can I use gRPC for trace collection?](/docs/phoenix/learn/resources/faqs/can-i-use-grpc-for-trace-collection.md) - [How do I resolve Phoenix Evals showing NOT\_PARSABLE?](/docs/phoenix/learn/resources/faqs/how-do-i-resolve-phoenix-evals-showing-not_parsable.md) - [Langfuse alternative? Arize Phoenix vs Langfuse: key differences](/docs/phoenix/learn/resources/faqs/langfuse-alternatives.md) - [Langsmith alternatives? Arize Phoenix vs LangSmith: key differences](/docs/phoenix/learn/resources/faqs/langsmith-alternatives.md) - [Contribute to Phoenix](/docs/phoenix/learn/resources/contribute-to-phoenix.md) ## Release Notes - [Release Notes](/docs/phoenix/release-notes/readme.md): The latest from the Phoenix team. 
- [05.30.2025: xAI and DeepSeek Support in Playground](/docs/phoenix/release-notes/05.30.2025-xai-and-deepseek-support-in-playground.md) - [05.20.2025: Datasets and Experiment Evaluations in the JS Client](/docs/phoenix/release-notes/05.20.2025-datasets-and-experiment-evaluations-in-the-js-client.md) - [05.14.2025: Experiments in the JS Client](/docs/phoenix/release-notes/05.14.2025-experiments-in-the-js-client.md) - [05.09.2025: Annotations, Data Retention Policies, Hotkeys 📓](/docs/phoenix/release-notes/05.09.2025-annotations-data-retention-policies-hotkeys.md) - [05.05.2025: OpenInference Google GenAI Instrumentation](/docs/phoenix/release-notes/05.05.2025-openinference-google-genai-instrumentation.md) - [04.30.2025: Span Querying & Data Extraction for Phoenix Client 📊](/docs/phoenix/release-notes/04.30.2025-span-querying-and-data-extraction-for-phoenix-client.md): Available in Phoenix 8.30+ - [04.28.2025: TLS Support for Phoenix Server 🔐](/docs/phoenix/release-notes/04.28.2025-tls-support-for-phoenix-server.md): Available in Phoenix 8.29+ - [04.28.2025: Improved Shutdown Handling 🛑](/docs/phoenix/release-notes/04.28.2025-improved-shutdown-handling.md): Available in Phoenix 8.28+ - [04.25.2025: Scroll Selected Span Into View 🖱️](/docs/phoenix/release-notes/04.25.2025-scroll-selected-span-into-view.md): Available in Phoenix 8.27+ - [04.18.2025: Tracing for MCP Client-Server Applications 🔌](/docs/phoenix/release-notes/04.18.2025-tracing-for-mcp-client-server-applications.md): Available in Phoenix 8.26+ - [04.16.2025: API Key Generation via API 🔐](/docs/phoenix/release-notes/04.16.2025-api-key-generation-via-api.md): Available in Phoenix 8.26+ - [04.15.2025: Display Tool Call and Result IDs in Span Details 🫆](/docs/phoenix/release-notes/04.15.2025-display-tool-call-and-result-ids-in-span-details.md): Available in Phoenix 8.25+ - [04.09.2025: Project Management API Enhancements ✨](/docs/phoenix/release-notes/04.09.2025-project-management-api-enhancements.md): Available in Phoenix 8.24+ - [04.09.2025: New REST API for Projects with RBAC 📽️](/docs/phoenix/release-notes/04.09.2025-new-rest-api-for-projects-with-rbac.md): Available in Phoenix 8.23+ - [04.03.2025: Phoenix Client Prompt Tagging 🏷️](/docs/phoenix/release-notes/04.03.2025-phoenix-client-prompt-tagging.md): Available in Phoenix 8.22+ - [04.02.2025: Improved Span Annotation Editor ✍️](/docs/phoenix/release-notes/04.02.2025-improved-span-annotation-editor.md): Available in Phoenix 8.21+ - [04.01.2025: Support for MCP Span Tool Info in OpenAI Agents SDK 🔨](/docs/phoenix/release-notes/04.01.2025-support-for-mcp-span-tool-info-in-openai-agents-sdk.md): Available in Phoenix 8.20+ - [03.27.2025: Span View Improvements 👀](/docs/phoenix/release-notes/03.27.2025-span-view-improvements.md): Available in Phoenix 8.20+ - [03.24.2025: Tracing Configuration Tab 🖌️](/docs/phoenix/release-notes/03.24.2025-tracing-configuration-tab.md): Available in Phoenix 8.19+ - [03.21.2025: Environment Variable Based Admin User Configuration 🗝️](/docs/phoenix/release-notes/03.21.2025-environment-variable-based-admin-user-configuration.md): Available in Phoenix 8.17+ - [03.20.2025: Delete Experiment from Action Menu 🗑️](/docs/phoenix/release-notes/03.20.2025-delete-experiment-from-action-menu.md): Available in Phoenix 8.19+ - [03.19.2025: Access to New Integrations in Projects 🔌](/docs/phoenix/release-notes/03.19.2025-access-to-new-integrations-in-projects.md): Available in Phoenix 8.15+ - [03.18.2025: Resize Span, Trace, and Session Tables
🔀](/docs/phoenix/release-notes/03.18.2025-resize-span-trace-and-session-tables.md): Available in Phoenix 8.14+ - [03.14.2025: OpenAI Agents Instrumentation 📡](/docs/phoenix/release-notes/03.14.2025-openai-agents-instrumentation.md): Available in Phoenix 8.13+ - [03.07.2025: Model Config Enhancements for Prompts 💡](/docs/phoenix/release-notes/03.07.2025-model-config-enhancements-for-prompts.md): Available in Phoenix 8.11+ - [03.07.2025: New Prompt Playground, Evals, and Integration Support 🦾](/docs/phoenix/release-notes/03.07.2025-new-prompt-playground-evals-and-integration-support.md): Available in Phoenix 8.9+ - [03.06.2025: Project Improvements 📽️](/docs/phoenix/release-notes/03.06.2025-project-improvements.md): Available in Phoenix 8.5+ - [02.19.2025: Prompts 📃](/docs/phoenix/release-notes/02.19.2025-prompts.md): Available in Phoenix 8.0+ - [02.18.2025: One-Line Instrumentation ⚡️](/docs/phoenix/release-notes/02.18.2025-one-line-instrumentation.md): Available in Phoenix 8.0+ - [01.18.2025: Automatic & Manual Span Tracing ⚙️](/docs/phoenix/release-notes/01.18.2025-automatic-and-manual-span-tracing.md): Available in Phoenix 7.9+ - [12.09.2024: Sessions 💬](/docs/phoenix/release-notes/12.09.2024-sessions.md): Available in Phoenix 7.0+ - [11.18.2024: Prompt Playground 🛝](/docs/phoenix/release-notes/11.18.2024-prompt-playground.md): Available in Phoenix 6.0+ - [09.26.2024: Authentication & RBAC 🔐](/docs/phoenix/release-notes/09.26.2024-authentication-and-rbac.md): Available in Phoenix 5.0+ - [07.18.2024: Guardrails AI Integrations 💂](/docs/phoenix/release-notes/07.18.2024-guardrails-ai-integrations.md): Available in Phoenix 4.11+ - [07.11.2024: Hosted Phoenix and LlamaTrace 💻](/docs/phoenix/release-notes/07.11.2024-hosted-phoenix-and-llamatrace.md): Phoenix is now available for deployment as a fully hosted service. - [07.03.2024: Datasets & Experiments 🧪](/docs/phoenix/release-notes/07.03.2024-datasets-and-experiments.md): Available in Phoenix 4.6+ - [07.02.2024: Function Call Evaluations ⚒️](/docs/phoenix/release-notes/07.02.2024-function-call-evaluations.md): Available in Phoenix 4.6+