Code Evals

Run Python code as background tasks against your span data

When your evaluation criteria are deterministic and clear, code-based evaluators provide a consistent and efficient way to assess results. They are useful when you need to check for objective conditions, such as whether a keyword appears, a URL is valid, or a format follows a rule.

Arize offers off-the-shelf code evaluators for common evaluation tasks. When you need more control, you can create custom evaluators that align with your unique business logic or quality criteria.


Create a Code Evaluator

To create a code evaluator, choose Code Evaluator when creating a new Evaluation Task. The evaluator can then be created in three steps:

  1. Name the task and define the data it will be run on.

More Task Configuration Details

  1. Sampling Rate (%): Define the percentage of data the task should run on (0–100).

    1. Sampling is applied at the highest evaluator scope in the task: session > trace > span.

    2. Lower-level evaluators will run on all matching data within that sampled set.

  2. Task Filters allow you to specify the data this task will run on. Filters match spans, or traces/sessions that contain matching spans.

  3. When running on historical data, the maximum number of items is based on the highest eval scope.

  2. Provide a unique Eval Column Name for the evaluator in plaintext. Ensure that this name is distinct from other evaluators across all tasks. Here, you can also set the Evaluator Scope and Filters.

  3. Define any required parameters for the selected Code Evaluator.


Arize Managed Code Evaluators

Arize manages a set of off-the-shelf code evaluators on your behalf. Select an evaluator name from the drop-down and the evaluator code is provided for you. You can customize an evaluator by specifying the arguments that should be passed in as parameters. We currently support all of the evaluators below, and new evaluators can be added upon request.

Matches Regex

This evaluator checks whether the text matches a specified regex pattern. For example, the evaluator can be used to determine whether a competitor's name is included in an LLM response to a customer. Parameters include:

  • span attribute: The validation check will be applied to the content in the span attribute listed. For example, if the span attribute is attributes.llm.output, then the regex operation will apply to the LLM response. Refer to the Attributes tab of your spans on the LLM Tracing page for a list of available attributes and their content.

  • pattern: The compiled regex pattern used for matching against the span attribute value.
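
As a rough mental model, the check behaves like the sketch below. This is a minimal illustration, not the managed evaluator's actual code; the function name, return labels, and the competitor pattern are assumptions.

```python
import re

# Minimal sketch of a regex check over a span attribute value.
# Function name, labels, and pattern are illustrative assumptions.
def matches_regex(span_attribute_value: str, pattern: str) -> str:
    return "matched" if re.search(pattern, span_attribute_value) else "unmatched"

# Flag responses that mention a (hypothetical) competitor name.
print(matches_regex("You might also like AcmeCorp's tool.", r"\bAcmeCorp\b"))  # matched
```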

JSON Parseable

This evaluator checks whether the LLM data is a valid JSON-parsable string. For example, this evaluator can validate that the output of an LLM can be parsed as JSON, which is a common requirement for structured data processing. Parameters include:

  • span attribute: The validation check will be applied to the content in the span attribute listed. For example, if the span attribute is attributes.llm.output, then the json parseable operation will apply to the LLM response. Refer to the Attributes tab of your spans on the LLM Tracing page for a list of available attributes and their content.
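
Conceptually, the check reduces to a try/except around json.loads, as in this minimal sketch (the function name and labels are illustrative assumptions, not the managed evaluator's code):

```python
import json

def json_parseable(span_attribute_value: str) -> str:
    # Valid JSON parses without raising; anything else is flagged.
    try:
        json.loads(span_attribute_value)
        return "parseable"
    except (json.JSONDecodeError, TypeError):
        return "not_parseable"

print(json_parseable('{"answer": 42}'))  # parseable
print(json_parseable("plain text"))      # not_parseable
```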

Contains any Keyword

This evaluator checks whether any specified keywords are present in the LLM data. This evaluator is useful for identifying if the output contains specific terms or phrases of interest, enabling targeted validation or analysis. Parameters include:

  • span attribute: The validation check will be applied to the content in the span attribute listed. For example, if the span attribute is attributes.llm.output, then the contains keyword check will apply to the LLM response. Refer to the Attributes tab of your spans on the LLM Tracing page for a list of available attributes and their content.

  • keywords: A list of keyword strings to search for in the span attribute. If any keyword matches, then the evaluator will flag the data as a match.
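
In spirit, the check is an any() over simple substring matches, as in this hedged sketch (the function name and labels are assumptions):

```python
def contains_any_keyword(span_attribute_value: str, keywords: list[str]) -> str:
    # Flags a match if at least one keyword appears in the attribute value.
    return "match" if any(k in span_attribute_value for k in keywords) else "no_match"

print(contains_any_keyword("Please escalate this to a human", ["escalate", "refund"]))  # match
```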

Contains all Keywords

This evaluator is similar to the one above, except it checks that all keywords are present, rather than any. Parameters are the same as Contains any Keyword above.
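
The only conceptual difference from the sketch above is swapping any() for all():

```python
def contains_all_keywords(span_attribute_value: str, keywords: list[str]) -> str:
    # Every keyword must appear for the evaluator to flag a match.
    return "match" if all(k in span_attribute_value for k in keywords) else "no_match"
```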


Custom Code Evaluators

Custom Code Evaluators are only available in Arize AX Enterprise. Request a demo here.

Custom Code Evals allow you to define your own evaluation logic in Python (with JavaScript coming soon) to score and label LLM traces based on span attributes. This is ideal for use cases that require highly customized, deterministic rules, such as business logic validation, structured output parsing, or expected keyword presence.

Once you select CustomArizeEvaluator from the "Select an Eval" drop-down, you’ll define the logic in the right-hand panel of the task creation interface.

Step 1: Imports

Start by importing the necessary classes and functions you'll need in your evaluator. This is easiest to do in full screen view, using the expand button on the top right of each code cell.

Currently, we support the packages listed below. If you need an additional package installed, please notify the customer support team and we'll do our best to address your requirements!

numpy
pandas
scipy
pyarrow
arize[Datasets]==7.25.7
pydantic==2.11.7
jellyfish==1.2.0
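
For instance, the preinstalled jellyfish package can back fuzzy string comparisons in custom evaluator logic. A hypothetical use (the threshold and labels are an illustrative design choice, not a built-in feature):

```python
import jellyfish

# Jaro-Winkler similarity between an LLM answer and an expected string.
score = jellyfish.jaro_winkler_similarity("colour", "color")
label = "close_match" if score > 0.9 else "mismatch"
print(score, label)
```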

Step 2: Test in Code

While it's possible to write the code in the UI, it's typically easier to iterate in a Python script or Colab notebook. We provide the necessary starter code via the Test in Code button. The starter code will port over the span attributes, evaluator class, and import sections from your task, with buttons to copy the code snippets and run the code over your own data.
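
If you iterate locally, the loop can be as simple as applying your evaluation function over a DataFrame of span attributes. The sketch below is a hypothetical setup; the column name and sample data are stand-ins, not the generated starter code.

```python
import pandas as pd

# Stand-in span export: one row per span, one column per span attribute.
spans = pd.DataFrame({"attributes.llm.output": ['{"ok": true}', "plain text"]})

def my_eval(output: str) -> str:
    # Placeholder logic; replace with your evaluator's rule.
    return "pass" if output.strip().startswith("{") else "fail"

spans["my_eval_label"] = spans["attributes.llm.output"].apply(my_eval)
print(spans)
```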

Step 3: Update Evaluator Class and Span Attributes

Once you're seeing the desired results with your evaluation code, we recommend copy-pasting the updated CodeEvaluator child class into the UI, along with the span attributes and imports, if needed. You're now ready to kick off your task!
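
For orientation, a custom evaluator typically ends up shaped like the sketch below. The actual base class, method signature, and result type come from the starter code generated by Test in Code; everything here is an illustrative stand-in, not the real CodeEvaluator interface.

```python
import json

class RequiredFieldsEvaluator:  # in the UI, this would subclass CodeEvaluator
    REQUIRED_KEYS = {"answer", "citations"}

    def evaluate(self, span_attributes: dict) -> dict:
        # Hypothetical logic: the output must be a JSON object with required keys.
        raw = span_attributes.get("attributes.llm.output", "")
        try:
            payload = json.loads(raw)
        except (json.JSONDecodeError, TypeError):
            return {"label": "fail", "explanation": "output is not valid JSON"}
        if not isinstance(payload, dict):
            return {"label": "fail", "explanation": "output JSON is not an object"}
        missing = self.REQUIRED_KEYS - payload.keys()
        if missing:
            return {"label": "fail", "explanation": f"missing keys: {sorted(missing)}"}
        return {"label": "pass", "explanation": "all required keys present"}
```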
