Legacy Evaluator: This evaluator is from phoenix-evals 1.x and is not available as a built-in metric in evals 2.0. For RAG evaluation, consider using the Document Relevance evaluator instead. You can still use these templates with older versions of the library (see API Reference), or migrate them to custom evaluators as shown below.
You can use the legacy template with a custom ClassificationEvaluator:
from phoenix.evals import ClassificationEvaluator
from phoenix.evals.llm import LLM

RAG_RELEVANCY_TEMPLATE = """You are comparing a reference text to a question and trying to determine if the reference text
contains information relevant to answering the question. Here is the data:
    [BEGIN DATA]
    ************
    [Question]: {query}
    ************
    [Reference text]: {reference}
    [END DATA]

Compare the Question above to the Reference text. You must determine whether the Reference text
contains information that can answer the Question. Please focus on whether the very specific
question can be answered by the information in the Reference text.
"unrelated" means that the reference text does not contain an answer to the Question.
"relevant" means the reference text contains an answer to the Question."""

# Map each label the LLM can return to a numeric score.
rag_relevance_evaluator = ClassificationEvaluator(
    name="rag_relevance",
    prompt_template=RAG_RELEVANCY_TEMPLATE,
    model=LLM(provider="openai", model="gpt-4o"),
    choices={"unrelated": 0, "relevant": 1},
)

# Evaluate a single query/reference pair.
result = rag_relevance_evaluator.evaluate({
    "query": "What is the capital of France?",
    "reference": "Paris is the capital and largest city of France."
})
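
With the choices mapping above, a "relevant" label is scored as 1 and an "unrelated" label as 0, so scores from many query/chunk pairs can be averaged into an overall relevance rate for a retrieval run.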

When To Use RAG Eval Template

This Eval determines whether a retrieved chunk contains an answer to the query, which makes it extremely useful for evaluating retrieval systems.

RAG Eval Template

You are comparing a reference text to a question and trying to determine if the reference text
contains information relevant to answering the question. Here is the data:
    [BEGIN DATA]
    ************
    [Question]: {query}
    ************
    [Reference text]: {reference}
    [END DATA]

Compare the Question above to the Reference text. You must determine whether the Reference text
contains information that can answer the Question. Please focus on whether the very specific
question can be answered by the information in the Reference text.
Your response must be single word, either "relevant" or "unrelated",
and should not contain any text or characters aside from that word.
"unrelated" means that the reference text does not contain an answer to the Question.
"relevant" means the reference text contains an answer to the Question.
We are continually iterating on our templates; view the most up-to-date template on GitHub.

How To Run the RAG Relevance Eval

from phoenix.evals import (
    RAG_RELEVANCY_PROMPT_RAILS_MAP,
    RAG_RELEVANCY_PROMPT_TEMPLATE,
    OpenAIModel,
    download_benchmark_dataset,
    llm_classify,
)

model = OpenAIModel(
    model_name="gpt-4",
    temperature=0.0,
)

# The rails hold the output to the specific values defined by the template.
# They strip extraneous text such as ",,," or "..." and ensure the
# binary label expected by the template is returned.
rails = list(RAG_RELEVANCY_PROMPT_RAILS_MAP.values())
relevance_classifications = llm_classify(
    dataframe=df,
    template=RAG_RELEVANCY_PROMPT_TEMPLATE,
    model=model,
    rails=rails,
    provide_explanation=True,  # optional: generate an explanation for each label produced by the eval LLM
)
The snippet above runs the RAG relevancy LLM template against the dataframe df, whose columns must match the query and reference variables used by the template.
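
If you do not already have a suitable dataframe, a minimal one can be built with pandas. The column names must match the template variables query and reference; the rows below are illustrative examples rather than benchmark data.

import pandas as pd

# Each row pairs a question with one retrieved chunk to be judged.
# Column names must match the {query} and {reference} template variables.
df = pd.DataFrame(
    {
        "query": [
            "What is the capital of France?",
            "When was the Eiffel Tower built?",
        ],
        "reference": [
            "Paris is the capital and largest city of France.",
            "The Louvre is the world's most-visited museum.",
        ],
    }
)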

Benchmark Results

This benchmark was obtained using the notebook below. It was run with the WikiQA dataset as the ground-truth dataset. Each example in the dataset was evaluated using the RAG_RELEVANCY_PROMPT_TEMPLATE above, and the resulting labels were compared against the ground-truth labels in the WikiQA dataset to generate the confusion matrices below.
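
As a sketch of how such a comparison can be reproduced, assuming the benchmark dataframe carries its ground-truth labels in a true_relevance column (an illustrative name, not part of the library), the labels returned by llm_classify can be scored with scikit-learn:

from sklearn.metrics import confusion_matrix, precision_recall_fscore_support

# Ground-truth labels from the benchmark dataset and predicted labels from the eval.
y_true = df["true_relevance"]                # assumed ground-truth column name
y_pred = relevance_classifications["label"]  # labels produced by llm_classify

print(confusion_matrix(y_true, y_pred, labels=["relevant", "unrelated"]))

precision, recall, f1, _ = precision_recall_fscore_support(
    y_true, y_pred, pos_label="relevant", average="binary"
)
print(f"precision={precision:.2f} recall={recall:.2f} f1={f1:.2f}")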

GPT-4 and GPT-4o Results

RAG Eval     GPT-4o   GPT-4
Precision    0.60     0.70
Recall       0.77     0.88
F1           0.67     0.78

Throughput      GPT-4
100 Samples     113 sec