Choose an evaluator
create_classifier defines an LLM-as-a-Judge evaluator over your LLM outputs. You can use any of the evaluation templates below, or define your own classifier (see the sketch after the table). You can also see notebook tutorials on how to use these in our Phoenix repo.
| Evaluator | Required Columns | Output Labels | Use |
|---|---|---|---|
| Hallucination Evaluator | input, reference, output | factual, hallucinated | Evaluates whether an output contains information not available in the reference text, given an input query. |
| QA Evaluator | input, reference, output | correct, incorrect | Evaluates whether an output fully and correctly answers a question, given an input query and reference documents. |
| Relevance Evaluator | input, reference | relevant, unrelated | Evaluates whether a reference document is relevant or unrelated to the corresponding input. |
| Toxicity Evaluator | input | toxic, non-toxic | Evaluates whether an input string contains racist, sexist, chauvinistic, biased, or otherwise toxic content. |
| Summarization Evaluator | input, output | good, bad | Evaluates whether an output summary provides an accurate synopsis of an input document. |
| Code Generation | query, code | readable, unreadable | Evaluates whether the code generated for a query is readable. |
| Toxicity | text | toxic, non-toxic | Evaluates whether text is toxic. |
| Human Vs AI | question, correct_answer, ai_generated_answer | correct, incorrect | Compares an AI-generated answer against a human-written correct answer. |
| Citation Evals | conversation, document_text | correct, incorrect | Checks whether a citation correctly answers the question, based on the text of the cited page and the conversation. |
| User Frustration | conversation | frustrated, ok | Checks whether the user is frustrated in the conversation. |
| SQL Generation | question, query_gen, response | correct, incorrect | Checks whether generated SQL is correct, given the question. |
| Tool Calling Eval | question, tool_call | correct, incorrect | Checks whether tool-calling function calls and extracted parameters are correct. |
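If none of the templates above fits your use case, you can build a custom judge with create_classifier. The following is a minimal sketch, assuming the phoenix-evals 2.x API and an OpenAI judge; the evaluator name, prompt text, and choice scores are illustrative assumptions, not one of the templates above.

```python
from phoenix.evals import create_classifier
from phoenix.evals.llm import LLM

# Assumption: phoenix-evals >= 2.0 and OPENAI_API_KEY set in the environment.
llm = LLM(provider="openai", model="gpt-4o")

# A hypothetical custom judge; prompt wording and choice scores are illustrative.
helpfulness = create_classifier(
    name="helpfulness",
    prompt_template=(
        "You are judging whether a response is helpful.\n"
        "Question: {input}\n"
        "Response: {output}\n"
        "Answer with exactly one label: helpful or not_helpful."
    ),
    llm=llm,
    choices={"helpful": 1.0, "not_helpful": 0.0},
)

# evaluate() takes a dict supplying the fields referenced in the template.
score = helpfulness.evaluate(
    {"input": "What is the capital of California?", "output": "Sacramento."}
)
print(score)
```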
Have Alyx Choose an Evaluator
If you are unsure which eval to choose, ✨ Alyx can choose for you. Navigate to the main chat in the UI and ask Alyx to suggest a Phoenix eval for your application.

Using Phoenix Evals
Arize uses the Phoenix Evals open source library to run LLM-as-a-Judge evaluations. Below we will run through a simple example. You can reference more examples here.

Set up the evaluation library
All of our evaluators are easily imported with the phoenix library, which you can install using the command below.
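A minimal install, assuming you will use an OpenAI model as the judge (the openai package is an assumption about your provider):

```bash
pip install arize-phoenix-evals openai
```

With the library installed, a hallucination eval over a small DataFrame looks roughly like the sketch below, using the classic phoenix.evals API. The two rows mirror the examples in the results table that follows; the model choice is an assumption.

```python
import pandas as pd

from phoenix.evals import (
    HALLUCINATION_PROMPT_RAILS_MAP,
    HALLUCINATION_PROMPT_TEMPLATE,
    OpenAIModel,
    llm_classify,
)

# Columns match the Hallucination Evaluator's required columns.
df = pd.DataFrame(
    {
        "input": [
            "What is the capital of California?",
            "What is the capital of California?",
        ],
        "reference": [
            "Sacramento is the capital of California.",
            "Carson City is the capital of Nevada.",
        ],
        "output": ["Sacramento", "Carson City"],
    }
)

model = OpenAIModel(model="gpt-4o", temperature=0.0)  # model choice is an assumption
rails = list(HALLUCINATION_PROMPT_RAILS_MAP.values())  # ["factual", "hallucinated"]

results = llm_classify(
    dataframe=df,
    template=HALLUCINATION_PROMPT_TEMPLATE,
    model=model,
    rails=rails,
    provide_explanation=True,  # adds the explanation column shown below
)
print(results[["label", "explanation"]])
```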
Example Evaluation Results

| data (reference and output not shown) | label | explanation |
|---|---|---|
| "input": "What is the capital of California?" | factual | The query asks for the capital of California. The reference text directly states that "Sacramento is the capital of California." The answer provided, "Sacramento," directly matches the information given in the reference text. Therefore, the answer is based on the information provided in the reference text and does not contain any false information or assumptions not present in the reference text. This means the answer is factual and not a hallucination. |
| "input": "What is the capital of California?" | hallucinated | The query asks for the capital of California, but the reference text provides information about the capital of Nevada, which is Carson City. The answer given, "Carson City," is incorrect for the query, since Carson City is not the capital of California; Sacramento is. Therefore, the answer is not based on the reference text in relation to the query asked. It incorrectly assumes Carson City is the capital of California, which is a factual error not supported by the reference text. Thus, the answer is a hallucination of facts because it provides information that is factually incorrect and not supported by the reference text. |