Custom Eval Templates
Custom evaluation criteria and prompt templates let you measure what actually matters for your application. For example, you might create a custom eval to check for regulatory compliance, tone consistency, or task completion accuracy.
In this guide, we show how to build 3 types of custom LLM-as-a-Judge style evaluators:
A custom ClassificationEvaluator that returns categorical labels
A custom ClassificationEvaluator that returns numeric scores
A fully custom LLMEvaluator for any complex eval use case
These can be implemented directly through the eval functions in the Phoenix library.
Why Use Custom Evals?
Install Phoenix Evals
pip install -q "arize-phoenix-evals>=2"
pip install -q openaiCustom Evals using Categorical Labels
The ClassificationEvaluator is a special LLM-based evaluator designed for classification (both binary and multi-class). This evaluator will only respond with one of the provided label choices and, optionally, an explanation for the judgement.
A classification prompt template looks like the following, with instructions for the evaluation as well as placeholders for the evaluation input data:
CATEGORICAL_TEMPLATE = '''You are comparing a reference text to a question and trying to determine if the reference text
contains information relevant to answering the question. Here is the data:
[BEGIN DATA]
************
[Question]: {query}
************
[Reference text]: {reference}
[END DATA]
Compare the Question above to the Reference text. You must determine whether the Reference text
contains information that can answer the Question. Please focus on whether the very specific
question can be answered by the information in the Reference text.
"irrelevant" means that the reference text does not contain an answer to the Question.
"relevant" means the reference text contains an answer to the Question. '''Label Choices
While the prompt template contains instructions for the LLM, the label choices tell it how to format its response.
The choices of a ClassificationEvaluator can be structured in a couple of ways:
A list of string labels only:
choices=["relevant", "irrelevant"]*String labels mapped to numeric scores:
choices = {"irrelevant": 0, "relevant": 1}
Note: if no score mapping is provided, the returned Score objects will have a label but not a numeric score component.
The ClassificationEvaluator also supports multi-class labels and scores, for example: choices = {"good": 1.0, "bad": 0.0, "neutral": 0.5}
There is no limit to the number of label choices you can provide, and you can specify any numeric scores (not limited to values between 0 and 1). For example, you can set choices = {"one": 1, "two": 2, "three": 3, "four": 4, "five": 5} for a numeric rating task.
The evaluator keeps the output clean: the returned label is always one of the classes you provide, or UNPARSABLE if the LLM's response cannot be matched to any of them.
Defining the Evaluator
For the relevance evaluation, we define the evaluator as follows:
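A minimal sketch of this definition is shown below. It assumes the ClassificationEvaluator class and the LLM wrapper are importable from phoenix.evals and that the constructor accepts a name, an LLM, the prompt template, and the label choices; exact import paths, argument names, and the model choice may differ in your version.

from phoenix.evals import ClassificationEvaluator
from phoenix.evals.llm import LLM

# Assumed: an OpenAI-backed judge model wrapped by Phoenix's LLM helper.
llm = LLM(provider="openai", model="gpt-4o")

relevance_evaluator = ClassificationEvaluator(
    name="relevance",
    llm=llm,
    prompt_template=CATEGORICAL_TEMPLATE,
    # Labels mapped to numeric scores, as described in the previous section.
    choices={"irrelevant": 0, "relevant": 1},
)

# Evaluate a single example; the keys match the template placeholders.
scores = relevance_evaluator.evaluate(
    {
        "query": "What is Phoenix?",
        "reference": "Phoenix is an open-source observability and evaluation library.",
    }
)
print(scores)

Because a score mapping is supplied, each returned Score carries both a label ("relevant" or "irrelevant") and the corresponding numeric score.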
Custom Evals using Numeric Scores
The ClassificationEvaluator is a flexible LLM-as-a-Judge construct that can also be used to produce numeric ratings.
Here is a prompt that asks the LLM to rate the spelling/grammatical correctness of some input context on a scale from 1-10:
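The exact wording is up to you; a sketch of such a template might look like the following, where {context} is an assumed placeholder name for the text being graded:

NUMERIC_TEMPLATE = '''You are evaluating the text below for spelling and grammatical correctness.
[BEGIN DATA]
************
[Text]: {context}
************
[END DATA]
Rate the text on a scale from 1 to 10, where 1 means the text contains no spelling or
grammatical errors and 10 means the text is riddled with them. Respond with a single
number between 1 and 10.'''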
Defining the Evaluator
This numeric rating task can be framed as a classification task where the label set is the set of numbers on the rating scale (here, 1-10). We can then set up a custom ClassificationEvaluator, just as we did above. Make sure to set direction = "minimize" here, since a lower score is better on this task (fewer spelling errors).
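A sketch of this evaluator, reusing the llm object and the NUMERIC_TEMPLATE sketched above and assuming the same constructor arguments as in the categorical example:

spelling_evaluator = ClassificationEvaluator(
    name="spelling_correctness",
    llm=llm,
    prompt_template=NUMERIC_TEMPLATE,
    # Each string label maps to its numeric score on the 1-10 scale.
    choices={str(i): i for i in range(1, 11)},
    direction="minimize",  # lower = fewer errors = better
)

scores = spelling_evaluator.evaluate({"context": "Ths sentense has sum mistakes."})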
Alternative: Fully Custom LLM Evaluator
Alternatively, for LLM-as-a-judge tasks that don't fit the classification paradigm, you can create a custom evaluator that implements the base LLMEvaluator class. A custom LLMEvaluator can handle almost any complex eval that doesn't reduce to choosing among a fixed set of labels.
Steps to create a custom evaluator (a sketch follows the list):
Create a new class that inherits from the base LLMEvaluator.
Define your prompt template and a JSON schema for the structured output.
Initialize the base class with a name, LLM, prompt template, and direction.
Implement the _evaluate method that takes an eval_input and returns a list of Score objects. The base class handles the input_mapping logic, so you can assume the input here has the required fields.
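Below is a sketch that puts these steps together for a hypothetical "completeness" eval. The class name, JSON schema, Score fields, base-class attributes, and the structured-output method on the LLM wrapper (generate_object here) are assumptions for illustration; check your installed phoenix-evals version for the exact API.

from phoenix.evals import LLMEvaluator, Score
from phoenix.evals.llm import LLM

COMPLETENESS_TEMPLATE = '''Given a question and an answer, rate how completely the answer
addresses the question. Respond in JSON with a "score" between 0 and 1 and a brief
"explanation".
[Question]: {query}
[Answer]: {response}'''

# Hypothetical JSON schema for the judge's structured output.
COMPLETENESS_SCHEMA = {
    "type": "object",
    "properties": {
        "score": {"type": "number"},
        "explanation": {"type": "string"},
    },
    "required": ["score", "explanation"],
}

class CompletenessEvaluator(LLMEvaluator):
    def __init__(self, llm: LLM):
        # Step 3: initialize the base class with a name, LLM, prompt template, and direction.
        super().__init__(
            name="completeness",
            llm=llm,
            prompt_template=COMPLETENESS_TEMPLATE,
            direction="maximize",
        )

    def _evaluate(self, eval_input: dict) -> list[Score]:
        # Step 4: the base class has already applied any input_mapping, so the
        # required fields ("query" and "response") are assumed to be present.
        prompt = COMPLETENESS_TEMPLATE.format(**eval_input)
        # Assumption: the LLM wrapper exposes a structured-output method; the
        # method name and signature here are illustrative, not confirmed.
        result = self.llm.generate_object(prompt=prompt, schema=COMPLETENESS_SCHEMA)
        return [
            Score(
                name="completeness",
                score=result["score"],
                explanation=result["explanation"],
            )
        ]

An instance can then be used like any other evaluator, for example CompletenessEvaluator(llm).evaluate({"query": "...", "response": "..."}).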