
Overview

The Correctness evaluator assesses whether an LLM’s response is factually accurate, complete, and logically consistent. It evaluates the quality of answers without requiring external context or reference responses.

When to Use

Use the Correctness evaluator when you need to:
  • Validate factual accuracy - Ensure responses contain accurate information
  • Check answer completeness - Verify responses address all parts of the question
  • Detect logical inconsistencies - Identify contradictions within responses
  • Evaluate general knowledge responses - Assess answers that don’t rely on retrieved context
  • Get a quick gut check - Quickly surface a wide range of potential problems
For evaluating responses against retrieved documents, use the Faithfulness evaluator instead. Correctness is best suited for evaluating general knowledge.

Supported Levels

The level of an evaluator determines the scope of the evaluation in OpenTelemetry terms. Some evaluations are applicable to individual spans, some to full traces or sessions, and some are applicable at multiple levels.
Level | Supported | Notes
Span | Yes | Apply to LLM spans where you want to evaluate the response quality.
Trace | Yes | Evaluate the final response of the entire trace.
Session | Yes | Evaluate responses across a conversation session.
Relevant span kinds: LLM spans, particularly ones where the LLM response is not grounded in retrieved context.

Input Requirements

The Correctness evaluator requires two inputs:
Field | Type | Description
input | string | The user's query or question
output | string | The LLM's response to evaluate

Formatting Tips

For best results:
  • Use human-readable strings rather than raw JSON for all inputs
  • For multi-turn conversations, format input as a readable conversation:
    User: What is the capital of France?
    Assistant: Paris is the capital of France.
    User: What is its population?
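If your conversation history is stored as structured messages, you can flatten it into this format with a small helper. A minimal sketch, assuming a simple list of role/content dicts (the schema and helper name are illustrative, not part of the evaluator's API):
# Flatten structured chat messages into a readable transcript.
# The {"role": ..., "content": ...} schema is an assumption about your data.
def format_conversation(messages):
    role_labels = {"user": "User", "assistant": "Assistant", "system": "System"}
    return "\n".join(
        f"{role_labels.get(m['role'], m['role'].title())}: {m['content']}" for m in messages
    )

messages = [
    {"role": "user", "content": "What is the capital of France?"},
    {"role": "assistant", "content": "Paris is the capital of France."},
    {"role": "user", "content": "What is its population?"},
]
conversation_text = format_conversation(messages)  # ready to use as the "input" field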
    

Output Interpretation

The evaluator returns a Score object with the following properties:
Property | Value | Description
label | "correct" or "incorrect" | Classification result
score | 1.0 or 0.0 | Numeric score (1.0 = correct, 0.0 = incorrect)
explanation | string | LLM-generated reasoning for the classification
direction | "maximize" | Higher scores are better
metadata | object | Additional information such as the model name. When tracing is enabled, includes the trace_id for the evaluation.
Interpretation:
  • Correct (1.0): The response is factually accurate, complete, and logically consistent
  • Incorrect (0.0): The response contains factual errors, is incomplete, or has logical inconsistencies
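In code, you will typically branch on the label or score. A short sketch, assuming scores was returned by evaluate() as shown in the Usage Examples below:
# Work with the Score fields described in the table above
score = scores[0]
print(score.label, score.score)    # e.g. "correct" 1.0
print(score.explanation)           # LLM-generated reasoning
needs_review = score.score == 0.0  # flag incorrect responses for follow-up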

Usage Examples

from phoenix.evals import LLM
from phoenix.evals.metrics import CorrectnessEvaluator

# Initialize the LLM client
llm = LLM(provider="openai", model="gpt-4o")

# Create the evaluator
correctness_eval = CorrectnessEvaluator(llm=llm)

# Inspect the evaluator's requirements
print(correctness_eval.describe())

# Evaluate a single example
eval_input = {
    "input": "What is the capital of France?",
    "output": "Paris is the capital of France."
}

scores = correctness_eval.evaluate(eval_input)
print(scores[0])
# Score(name='correctness', score=1.0, label='correct', ...)

Using Input Mapping

When your data has different field names or requires transformation, use input mapping.
from phoenix.evals import LLM
from phoenix.evals.metrics import CorrectnessEvaluator

llm = LLM(provider="openai", model="gpt-4o")
correctness_eval = CorrectnessEvaluator(llm=llm)

# Example with different field names
eval_input = {
    "question": "What is the speed of light?",
    "answer": "The speed of light is approximately 299,792 km/s."
}

# Use input mapping to match expected field names
input_mapping = {
    "input": "question",
    "output": "answer"
}

scores = correctness_eval.evaluate(eval_input, input_mapping)
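If the data also needs transformation (for example, it arrives as a nested record), one option is to flatten it in plain Python first and then reuse a simple rename mapping. The record shape below is an assumption about your data:
# Illustrative nested record; the shape is an assumption about your data
record = {
    "data": {
        "question": "What is the speed of light?",
        "model_response": {"text": "The speed of light is approximately 299,792 km/s."},
    }
}

# Flatten into the field names the mapping expects, then evaluate as before
eval_input = {
    "question": record["data"]["question"],
    "answer": record["data"]["model_response"]["text"],
}
scores = correctness_eval.evaluate(eval_input, input_mapping)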
For more details on input mapping options, see Input Mapping.

Configuration

For LLM client configuration options, see Configuring the LLM.

Viewing and Modifying the Prompt

You can view the latest versions of our prompt templates on GitHub. The evaluators are designed to work well in a variety of contexts, but we highly recommend adapting the prompt to your specific use case.
from phoenix.evals.metrics import CorrectnessEvaluator
from phoenix.evals import LLM, ClassificationEvaluator

llm = LLM(provider="openai", model="gpt-4o")
evaluator = CorrectnessEvaluator(llm=llm)

# View the prompt template
print(evaluator.prompt_template)

# Create a custom evaluator based on the built-in template
custom_evaluator = ClassificationEvaluator(
    name="correctness",
    prompt_template=evaluator.prompt_template,  # Modify as needed
    llm=llm,
    choices={"correct": 1.0, "incorrect": 0.0},
    direction="maximize",
)
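The customized evaluator exposes the same evaluate() interface, so it can be called the same way, assuming your modified template keeps the input and output placeholders:
scores = custom_evaluator.evaluate({
    "input": "What is the capital of France?",
    "output": "Paris is the capital of France.",
})
print(scores[0].label)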

Using with Phoenix

Evaluating Traces

Run evaluations on traces collected in Phoenix and log results as annotations:
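A minimal sketch of one way to do this, reusing correctness_eval from the examples above. It assumes the px.Client() spans/evaluations APIs and the attributes.input.value / attributes.output.value span columns; your filter expression and column names may differ depending on instrumentation:
import pandas as pd
import phoenix as px
from phoenix.trace import SpanEvaluations

# Pull LLM spans from Phoenix (filter syntax and columns may vary with your setup)
spans_df = px.Client().get_spans_dataframe("span_kind == 'LLM'")

rows = []
for span_id, row in spans_df.iterrows():  # index is context.span_id
    score = correctness_eval.evaluate({
        "input": str(row["attributes.input.value"]),    # assumed column name
        "output": str(row["attributes.output.value"]),  # assumed column name
    })[0]
    rows.append({
        "context.span_id": span_id,
        "score": score.score,
        "label": score.label,
        "explanation": score.explanation,
    })

# Log the results back to Phoenix as span evaluations (shown as annotations in the UI)
evals_df = pd.DataFrame(rows).set_index("context.span_id")
px.Client().log_evaluations(SpanEvaluations(eval_name="correctness", dataframe=evals_df))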

Running Experiments

Use the Correctness evaluator in Phoenix experiments:
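A minimal sketch, again reusing correctness_eval from the examples above. The dataset name, the task body, and the question field are placeholders; adapt them to your dataset schema:
import phoenix as px
from phoenix.experiments import run_experiment

# Wrap the evaluator in a plain function that the experiment runner can call
def correctness(input, output):
    question = input["question"] if isinstance(input, dict) else str(input)  # assumed field name
    return correctness_eval.evaluate({"input": question, "output": str(output)})[0].score

dataset = px.Client().get_dataset(name="my-questions")  # placeholder dataset name

def task(input):
    # Replace with your application / LLM call
    return my_app(input)  # my_app is a placeholder

experiment = run_experiment(dataset, task, evaluators=[correctness])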

API Reference