LlamaIndex Tracing

How to use the OpenInference LlamaIndexInstrumentor to trace LlamaIndex applications and send data to Arize.


LlamaIndex is a data framework for your LLM application. Arize helps you observe your LlamaIndex applications by capturing traces using the OpenInference LlamaIndexInstrumentor.

This guide covers LlamaIndex versions >=0.10.43, which support the current OpenInference instrumentation paradigm.

Install

Install LlamaIndex, the OpenInference instrumentor for LlamaIndex, and arize-otel, which provides the supporting OpenTelemetry setup:

pip install llama-index openinference-instrumentation-llama-index arize-otel

Ensure your llama-index version is compatible with the instrumentor (typically >=0.10.43 or as specified by OpenInference).
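If you want to confirm what actually got installed, you can read the versions from package metadata; a minimal sketch using only the Python standard library:

from importlib.metadata import version

# Confirm the installed versions meet the minimums above
print("llama-index:", version("llama-index"))
print("openinference-instrumentation-llama-index:",
      version("openinference-instrumentation-llama-index"))
print("arize-otel:", version("arize-otel"))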

Setup Tracing

Initialize the LlamaIndexInstrumentor after setting up the Arize OpenTelemetry exporter.

import os
from arize.otel import register
from openinference.instrumentation.llama_index import LlamaIndexInstrumentor

# Setup OTel via Arize's convenience function
tracer_provider = register(
    space_id=os.getenv("ARIZE_SPACE_ID"),
    api_key=os.getenv("ARIZE_API_KEY"),
    project_name="my-llamaindex-app" # Choose a project name
)

# Instrument LlamaIndex
LlamaIndexInstrumentor().instrument(tracer_provider=tracer_provider)
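OpenInference instrumentors follow OpenTelemetry's BaseInstrumentor interface, so you can also check instrumentation state and tear it down, for example between test cases. A minimal sketch, assuming that standard interface:

instrumentor = LlamaIndexInstrumentor()

# Detach the instrumentation if it is currently active
if instrumentor.is_instrumented_by_opentelemetry:
    instrumentor.uninstrument()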

Run LlamaIndex Example

You can now use LlamaIndex as normal. Traces are captured automatically and sent to Arize.

import os

from llama_index.core import SimpleDirectoryReader, VectorStoreIndex

os.environ["OPENAI_API_KEY"] = "YOUR OPENAI API KEY"

# Load documents from the local "data" directory and build an in-memory index
documents = SimpleDirectoryReader("data").load_data()
index = VectorStoreIndex.from_documents(documents)

# Each query is traced end to end and exported to Arize
query_engine = index.as_query_engine()
response = query_engine.query("Some question about the data should go here")
print(response)
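
Other LlamaIndex entry points are traced the same way. For example, a chat engine built from the same index produces traces too (a minimal sketch, reusing the index and OpenAI settings from above):

# Chat engine calls are captured just like query engine calls
chat_engine = index.as_chat_engine()
chat_response = chat_engine.chat("Ask a follow-up question about the data here")
print(chat_response)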

Observe in Arize

After running your LlamaIndex application, traces will be sent to your Arize project. You can then log in to Arize to visualize the full trace of your LLM application, including inputs, embeddings, retrieval steps, and final outputs.
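
One practical note for short-lived scripts: spans are exported in the background, so a process that exits immediately after its last query can drop the final batch. Forcing a flush before exit avoids this; a minimal sketch, assuming register returned an OpenTelemetry SDK TracerProvider as in the setup above:

# Ensure all buffered spans are exported before the process exits
tracer_provider.force_flush()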
