LlamaIndex Tracing

How to use the Python LlamaIndexInstrumentor to trace and troubleshoot your LlamaIndex application

LlamaIndex is a data framework for your LLM application. With it, you can build applications that leverage RAG (retrieval-augmented generation) to super-charge an LLM with your own data. RAG is a powerful application pattern because it lets you harness LLMs such as OpenAI's GPT while grounding their responses in your own data and use case.

For LlamaIndex, tracing instrumentation is added via an OpenTelemetry instrumentor aptly named the LlamaIndexInstrumentor. This instrumentor creates spans and sends them to the Phoenix collector.

Launch Phoenix

Phoenix supports LlamaIndex's latest instrumentation paradigm, which requires LlamaIndex >= 0.10.43. For legacy versions, see the Legacy Integrations sections below.
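If you want to run Phoenix locally, a minimal sketch is to launch it in-process (this assumes the `arize-phoenix` package is installed; you can also run Phoenix as a standalone server or use a hosted instance and point the instrumentation at that collector instead):

```python
import phoenix as px

# Launch a local Phoenix instance in this process; the returned session
# exposes the URL of the Phoenix UI (http://localhost:6006 by default)
session = px.launch_app()
print(session.url)
```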

Install
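As a sketch, the core packages are Phoenix itself, the OpenInference LlamaIndex instrumentation, and LlamaIndex (exact version pins may vary with your setup):

```bash
pip install arize-phoenix openinference-instrumentation-llama-index llama-index
```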

Setup

Initialize the LlamaIndexInstrumentor before your application code.
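A minimal setup sketch, assuming a Phoenix collector is reachable at the default local endpoint:

```python
from phoenix.otel import register
from openinference.instrumentation.llama_index import LlamaIndexInstrumentor

# Register an OpenTelemetry tracer provider that exports spans to Phoenix.
# By default this targets a local Phoenix collector; pass endpoint=... to
# point it elsewhere.
tracer_provider = register()

# Instrument LlamaIndex before any application code runs so that all
# subsequent LLM, retrieval, and embedding calls produce spans.
LlamaIndexInstrumentor().instrument(tracer_provider=tracer_provider)
```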

Run LlamaIndex

You can now use LlamaIndex as normal, and tracing will be automatically captured and sent to your Phoenix instance.
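For example, a simple RAG pipeline like the sketch below will emit spans for retrieval, embedding, and LLM calls. The `./data` directory and the query string are placeholders, and an OpenAI API key is assumed for LlamaIndex's default LLM:

```python
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader

# Build an index over a local directory of documents (placeholder path)
documents = SimpleDirectoryReader("./data").load_data()
index = VectorStoreIndex.from_documents(documents)

# Query the index; each step is traced and exported to Phoenix
query_engine = index.as_query_engine()
response = query_engine.query("What do these documents say about tracing?")
print(response)
```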

Observe

View your traces in Phoenix:
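If you launched Phoenix in the same process, one way to recover the UI's address is via the active session (a sketch; by default the UI is served at http://localhost:6006):

```python
import phoenix as px

# Retrieve the running Phoenix session started by px.launch_app()
session = px.active_session()
if session is not None:
    print(session.url)
```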

Resources

Legacy Integrations (<0.10.43)

Legacy One-Click (<0.10.43)

Using Phoenix as a callback requires an install of `llama-index-callbacks-arize-phoenix>0.1.3`.

llama-index 0.10 introduced modular sub-packages. To use llama-index's one-click observability, you must first install this small integration package:
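A sketch of the legacy one-click setup, assuming LlamaIndex >= 0.10 and < 0.10.43:

```bash
pip install 'llama-index-callbacks-arize-phoenix>0.1.3'
```

Then set the global handler before running your application:

```python
import llama_index.core

# Route all LlamaIndex callback events to Phoenix via the legacy
# one-click handler
llama_index.core.set_global_handler("arize_phoenix")
```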

Legacy (<0.10.0)

If you are using an older version of LlamaIndex (pre-0.10), you can still use Phoenix. You will need `arize-phoenix>3.0.0` and must downgrade to `openinference-instrumentation-llama-index<1.0.0`.
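A sketch of the corresponding version pins (exact pins may vary with your environment):

```bash
pip install 'arize-phoenix>3.0.0' 'openinference-instrumentation-llama-index<1.0.0'
```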
