Arize Templates

If you don’t want to start from scratch, Arize has predefined evaluation templates. These prompts are tested against benchmarked datasets and target 70-90% precision and 70-85% F1.

These are built into Phoenix Evals and are an easy way to get reliable evals up and running fast. You can access these templates directly when creating an evaluation in the Arize UI, or use them programmatically in code.

To use our evaluators, follow the steps below.

Choose an evaluator

create_classifier defines an LLM-as-a-Judge evaluator over your LLM outputs. You can use it with any of the evaluation templates below; a minimal sketch of defining your own classifier follows the table. You can also see notebook tutorials on how to use these in our Phoenix repo.

| Evaluator | Required Columns | Output Labels | Use |
| --- | --- | --- | --- |
| Hallucination Evaluator | input, reference, output | factual, hallucinated | Evaluates whether an output contains information not available in the reference text given an input query. |
| QA Evaluator | input, reference, output | correct, incorrect | Evaluates whether an output fully answers a question correctly given an input query and reference documents. |
| Relevance Evaluator | input, reference | relevant, unrelated | Evaluates whether a reference document is relevant or irrelevant to the corresponding input. |
| Toxicity Evaluator | input | toxic, non-toxic | Evaluates whether an input string contains racist, sexist, chauvinistic, biased, or otherwise toxic content. |
| Summarization Evaluator | input, output | good, bad | Evaluates whether an output summary provides an accurate synopsis of an input document. |
| Code Generation | query, code | readable, unreadable | Evaluates whether the code correctly implements the query. |
| Toxicity | text | toxic, non-toxic | Evaluates whether the text is toxic. |
| Human vs AI | question, correct_answer, ai_generated_answer | correct, incorrect | Compares an AI-generated answer against a human-written correct answer. |
| Citation Evals | conversation, document_text | correct, incorrect | Checks whether a citation correctly answers the question, based on the text of the cited page and the conversation. |
| User Frustration | conversation | frustrated, ok | Checks whether the user is frustrated in the conversation. |
| SQL Generation | question, query_gen, response | correct, incorrect | Checks whether the generated SQL is correct for the question. |
| Tool Calling Eval | question, tool_call | correct, incorrect | Checks whether tool/function calls and extracted parameters are correct. |
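
If you want to adapt one of these templates or define your own judge, create_classifier follows the pattern sketched below. This is a minimal sketch, not the exact API: it assumes create_classifier accepts a name, an llm, a prompt_template with {placeholders}, and a choices mapping of labels to scores, and the evaluator name and prompt shown here are purely illustrative. Check the Phoenix Evals reference for the current import path and signature.

from phoenix.evals import create_classifier
from phoenix.evals.llm import LLM

llm = LLM(model="gpt-4o", provider="openai")

# Minimal custom LLM-as-a-Judge classifier (illustrative prompt and labels).
# The prompt_template placeholders must match the fields passed to evaluate();
# choices maps each output label to the score it should produce.
politeness = create_classifier(
    name="politeness",
    llm=llm,
    prompt_template="Is the following response polite or impolite?\n\nResponse: {output}",
    choices={"polite": 1.0, "impolite": 0.0},
)

scores = politeness.evaluate({"output": "Thanks for reaching out! Happy to help."})
print(scores[0].label)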

Have Alyx Choose an Evaluator

If you are unsure which eval to choose, ✨Alyx can choose for you. Navigate to the main chat in the UI and ask Alyx to suggest a Phoenix eval for your application.

Using Phoenix Evals

Arize uses the open-source Phoenix Evals library to run LLM-as-a-Judge evaluations. Below we run through a simple example; you can reference more examples here.

Set up the evaluation library

All of our evaluators can be imported from the phoenix library, which you can install using the commands below.

pip install -q "arize-phoenix-evals>=2"
pip install -q openai
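
The example below calls an OpenAI model, so the OpenAI client also needs an API key. Setting it via the standard OPENAI_API_KEY environment variable is one way to do this (the value here is a placeholder):

import os

# Prefer exporting OPENAI_API_KEY in your shell; this is just an in-code fallback.
os.environ.setdefault("OPENAI_API_KEY", "sk-...")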

Import the pre-tested evaluators along with the helper functions using the code snippet below. The small example dataframe is purely illustrative; swap in your own data.

import pandas as pd

from phoenix.evals.llm import LLM
from phoenix.evals.metrics import HallucinationEvaluator

# Example data (illustrative); the column names match the bind() mapping below.
df = pd.DataFrame([{
    "query": "Where is the Eiffel Tower located?",
    "reference": "The Eiffel Tower is located in Paris, France.",
    "response": "The Eiffel Tower is in Paris.",
}])

llm = LLM(model="gpt-4o", provider="openai")
hallucination = HallucinationEvaluator(llm=llm)
# Map the evaluator's required fields onto your dataframe's column names.
hallucination.bind({"input": "query", "output": "response", "context": "reference"})

# let's test on one example
scores = hallucination.evaluate(df.iloc[0].to_dict())
print(scores[0])
>>> Score(name='hallucination', score=1.0, label='factual', explanation='The response correctly identifies the location of the Eiffel Tower as stated in the context.', metadata={'model': 'gpt-4o'}, source='llm', direction='maximize')
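
To score every row instead of a single example, one straightforward approach is to reuse the same evaluate() call in a loop and attach the results to the dataframe; the column names below are illustrative, and the Phoenix Evals docs also cover dataframe-level helpers if you prefer not to iterate yourself.

# Score each row with the bound evaluator and keep the label and explanation
# alongside the original data.
results = [hallucination.evaluate(row.to_dict())[0] for _, row in df.iterrows()]
df["hallucination_label"] = [score.label for score in results]
df["hallucination_explanation"] = [score.explanation for score in results]

print(df[["query", "hallucination_label"]])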

Example Evaluation Results

| data | label | explanation |
| --- | --- | --- |
| "input": "What is the capital of California?"<br>"reference": "Sacramento is the capital of California."<br>"output": "Sacramento" | factual | The query asks for the capital of California. The reference text directly states that "Sacramento is the capital of California." The answer provided, "Sacramento," directly matches the information given in the reference text. Therefore, the answer is based on the information provided in the reference text and does not contain any false information or assumptions not present in the reference text. This means the answer is factual and not a hallucination. |
| "input": "What is the capital of California?"<br>"reference": "Carson City is the Capital of Nevada."<br>"output": "Carson City" | hallucinated | The query asks for the capital of California, but the reference text provides information about the capital of Nevada, which is Carson City. The answer given, "Carson City," is incorrect for the query since Carson City is not the capital of California; Sacramento is. Therefore, the answer is not based on the reference text in relation to the query asked. It incorrectly assumes Carson City is the capital of California, which is a factual error and not supported by the reference text. Thus, the answer is a hallucination of facts because it provides information that is factually incorrect and not supported by the reference text. |

If you'd like, you can log those evaluations back to Arize to save the results.
