
LlamaIndex Tracing

How to use the Python LlamaIndexInstrumentor to trace LlamaIndex

LlamaIndex is a data framework for your LLM application. It lets you build applications that use RAG (retrieval-augmented generation) to super-charge an LLM with your own data. RAG is an extremely effective application pattern because it lets you harness the power of LLMs such as OpenAI's GPT while grounding them in your own data and use case.

For LlamaIndex, tracing instrumentation is added via an OpenTelemetry instrumentor aptly named the LlamaIndexInstrumentor. This instrumentor creates spans and sends them to the Phoenix collector.

Launch Phoenix

Phoenix supports LlamaIndex's latest instrumentation paradigm. This paradigm requires LlamaIndex >= 0.10.43. For legacy support, see below.

Install

pip install openinference-instrumentation-llama_index 'llama-index>=0.11.0'

Setup

Initialize the LlamaIndexInstrumentor before your application code.

from openinference.instrumentation.llama_index import LlamaIndexInstrumentor
from phoenix.otel import register

# Create an OpenTelemetry tracer provider that exports spans to Phoenix
tracer_provider = register()

# Instrument LlamaIndex so all framework calls emit spans
LlamaIndexInstrumentor().instrument(tracer_provider=tracer_provider)
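
If you want traces grouped under a specific project, register() also accepts a project_name argument in recent arize-phoenix-otel releases; the project name below is illustrative, not a required value:

# Optional: send traces to a named Phoenix project
# ("llama-index-demo" is an example name)
tracer_provider = register(project_name="llama-index-demo")
LlamaIndexInstrumentor().instrument(tracer_provider=tracer_provider)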

Run LlamaIndex

You can now use LlamaIndex as normal, and tracing will be automatically captured and sent to your Phoenix instance.

from llama_index.core import VectorStoreIndex, SimpleDirectoryReader
import os

os.environ["OPENAI_API_KEY"] = "YOUR OPENAI API KEY"

# Load documents from the local "data" directory and build a vector index
documents = SimpleDirectoryReader("data").load_data()
index = VectorStoreIndex.from_documents(documents)

# Queries against the engine are traced automatically and sent to Phoenix
query_engine = index.as_query_engine()
response = query_engine.query("Some question about the data should go here")
print(response)

Observe

View your traces in Phoenix.

Resources

  • Example notebook

  • Instrumentation Package

Legacy Integrations (<0.10.43)

Legacy One-Click (<0.10.43)

Using Phoenix as a callback requires an install of `llama-index-callbacks-arize-phoenix>0.1.3`.

llama-index 0.10 introduced modular sub-packages. To use llama-index's one-click Phoenix integration, you must first install the small integration package:

pip install 'llama-index-callbacks-arize-phoenix>0.1.3'

# Phoenix can display, in real time, the traces automatically
# collected from your LlamaIndex application.
import phoenix as px

# Look for a URL in the output to open the app in a browser.
px.launch_app()
# The app is initially empty, but as you proceed with the steps below,
# traces will appear automatically as your LlamaIndex application runs.

from llama_index.core import set_global_handler

set_global_handler("arize_phoenix")

# Run all of your LlamaIndex applications as usual and traces
# will be collected and displayed in Phoenix.

Legacy (<0.10.0)

If you are using an older version of LlamaIndex (pre-0.10), you can still use Phoenix. You will need arize-phoenix>3.0.0 and must downgrade openinference-instrumentation-llama-index to <1.0.0.
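
For example, those constraints can be pinned in a single install command (quoted so the shell does not interpret > and < as redirection):

pip install 'arize-phoenix>3.0.0' 'openinference-instrumentation-llama-index<1.0.0'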

# Phoenix can display in real time the traces automatically
# collected from your LlamaIndex application.
import phoenix as px
# Look for a URL in the output to open the App in a browser.
px.launch_app()
# The App is initially empty, but as you proceed with the steps below,
# traces will appear automatically as your LlamaIndex application runs.

import llama_index
llama_index.set_global_handler("arize_phoenix")

# Run all of your LlamaIndex applications as usual and traces
# will be collected and displayed in Phoenix.

LlamaIndex Workflows Tracing

How to use the Python LlamaIndexInstrumentor to trace LlamaIndex Workflows

LlamaIndex Workflows are a subset of the LlamaIndex package specifically designed to support agent development.

Our LlamaIndexInstrumentor automatically captures traces for LlamaIndex Workflows agents. If you've already enabled that instrumentor, you do not need to complete the steps below.

We recommend using llama_index >= 0.11.0

Launch Phoenix

Install

pip install openinference-instrumentation-llama_index
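
If llama-index itself is not installed yet, you can add it with the recommended version floor from above (quoting keeps the shell from treating >= as redirection):

pip install 'llama-index>=0.11.0'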

Setup

Initialize the LlamaIndexInstrumentor before your application code. This instrumentor will trace both LlamaIndex Workflows calls and calls to the general LlamaIndex package.

from openinference.instrumentation.llama_index import LlamaIndexInstrumentor
from phoenix.otel import register

tracer_provider = register()
LlamaIndexInstrumentor().instrument(tracer_provider=tracer_provider)

Run LlamaIndex Workflows

Once LlamaIndex is instrumented, spans will be created whenever an agent is invoked and sent to the Phoenix server for collection.
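
As a minimal sketch, the single-step workflow below (the EchoWorkflow class and its message argument are illustrative, not part of the library) will emit spans through the instrumentor configured above:

import asyncio

from llama_index.core.workflow import StartEvent, StopEvent, Workflow, step


class EchoWorkflow(Workflow):
    # Each step execution is captured as a span by the LlamaIndexInstrumentor
    @step
    async def echo(self, ev: StartEvent) -> StopEvent:
        # Keyword arguments passed to .run() are available on the StartEvent
        return StopEvent(result=f"echo: {ev.message}")


async def main():
    workflow = EchoWorkflow(timeout=60)
    result = await workflow.run(message="hello")
    print(result)


asyncio.run(main())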

Observe

Now that you have tracing set up, all invocations of your workflows will be streamed to your running Phoenix instance for observability and evaluation.

Resources

  • Example project

  • OpenInference package

LlamaIndex

LlamaIndex is an open-source framework that streamlines connecting, ingesting, indexing, and retrieving structured or unstructured data to power efficient, data-aware language model applications.

Website: https://www.llamaindex.ai/

Featured Tutorials

  • LlamaIndex Tracing

  • LlamaIndex Workflows Tracing

  • Tracing & Eval of LlamaIndex Application

  • LlamaIndex Workflows

  • Evaluating a LlamaIndex RAG pipeline

  • Tracing a LlamaIndex OpenAI Agent

  • Evaluating a LlamaIndex RAG pipeline in a cloud instance of Phoenix

  • Running experiments with LlamaIndex

  • Tracing a LlamaIndex query engine in a cloud instance of Phoenix

  • Tracing a LlamaIndex SQL Retriever

Sign up for Phoenix:

  1. Sign up for an Arize Phoenix account at https://app.phoenix.arize.com/login

  2. Click Create Space, then follow the prompts to create and launch your space.

Install packages:

pip install arize-phoenix-otel

Set your Phoenix endpoint and API Key:

From your new Phoenix Space

  1. Create your API key from the Settings page

  2. Copy your Hostname from the Settings page

  3. In your code, set your endpoint and API key:

import os

os.environ["PHOENIX_API_KEY"] = "ADD YOUR PHOENIX API KEY"
os.environ["PHOENIX_COLLECTOR_ENDPOINT"] = "ADD YOUR PHOENIX HOSTNAME"

# If you created your Phoenix Cloud instance before June 24th, 2025,
# you also need to set the API key as a header:
# os.environ["PHOENIX_CLIENT_HEADERS"] = f"api_key={os.getenv('PHOENIX_API_KEY')}"

Having trouble finding your endpoint? Check out Finding your Phoenix Endpoint

Launch your local Phoenix instance:

pip install arize-phoenix
phoenix serve

For details on customizing a local terminal deployment, see Terminal Setup.

Install packages:

pip install arize-phoenix-otel

Set your Phoenix endpoint:

import os

os.environ["PHOENIX_COLLECTOR_ENDPOINT"] = "http://localhost:6006"

See Terminal for more details.

Pull latest Phoenix image from Docker Hub:

docker pull arizephoenix/phoenix:latest

Run your containerized instance:

docker run -p 6006:6006 arizephoenix/phoenix:latest

This will expose Phoenix on localhost:6006.

Install packages:

pip install arize-phoenix-otel

Set your Phoenix endpoint:

import os

os.environ["PHOENIX_COLLECTOR_ENDPOINT"] = "http://localhost:6006"

For more info on using Phoenix with Docker, see Docker.

Install packages:

pip install arize-phoenix

Launch Phoenix:

import phoenix as px
px.launch_app()

By default, notebook instances do not have persistent storage, so your traces will disappear after the notebook is closed. See self-hosting or use one of the other deployment options to retain traces.

