Checks whether two strings are identical. Returns true if they match, false otherwise.

Parameters

| Parameter | Type | Required | Default | Description |
| --- | --- | --- | --- | --- |
| `expected` | string | Yes | | The reference value to match against |
| `actual` | string | Yes | | The value to evaluate |
| `case_sensitive` | boolean | No | `true` | Whether the comparison is case-sensitive |

Output

| Property | Value | Description |
| --- | --- | --- |
| label | `true` or `false` | Whether the strings are identical |
| score | 1.0 or 0.0 | Numeric score (1.0 = match, 0.0 = no match) |
| Optimization | Maximize | Higher scores are better |
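The behavior described by the tables above can be sketched in a few lines. This is a minimal illustration, not the evaluator's actual implementation; the function name `exact_match` and the returned dict shape are assumptions based on the documented parameters and outputs.

```python
def exact_match(expected: str, actual: str, case_sensitive: bool = True) -> dict:
    """Compare two strings and return a result with the documented label/score shape."""
    if not case_sensitive:
        # Case-insensitive mode: normalize both sides before comparing
        expected = expected.lower()
        actual = actual.lower()
    matched = expected == actual
    return {"label": matched, "score": 1.0 if matched else 0.0}
```

For example, `exact_match("Yes", "yes")` fails, while `exact_match("Yes", "yes", case_sensitive=False)` passes with a score of 1.0.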

Configuring Inputs

Each evaluator parameter can be set to either a path (a JSONPath expression that extracts a value from the evaluation parameters) or a literal (a fixed value typed directly). Use paths to pull from dataset inputs, task outputs, reference data, or metadata. Use literals for fixed expected values that apply to every example. See Input Mapping for full details on mapping modes, resolution order, and examples.
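To make the path mode concrete, here is a simplified sketch of how a dotted path such as `reference.label` could be resolved against the evaluation parameters. Real JSONPath supports a richer syntax (filters, wildcards, array indexing); this example and the `params` structure shown are illustrative assumptions, not the platform's resolver.

```python
def resolve_path(params: dict, path: str):
    """Walk a dotted path (e.g. 'reference.label') through nested evaluation parameters."""
    value = params
    for key in path.split("."):
        value = value[key]
    return value

# Hypothetical evaluation parameters for a single example
params = {
    "output": {"label": "positive"},
    "reference": {"label": "positive"},
}
```

With this structure, mapping `actual` to the path `output.label` and `expected` to `reference.label` would feed the strings `"positive"` and `"positive"` into the comparison.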

Usage Examples

- **Classification label validation**: A model that must output exactly one of a fixed set of labels (e.g., "positive", "negative", "neutral"), where any deviation indicates a problem. Map `actual` to the model's output: use `output` for a plain string response, or `output.label` if the response is a JSON object with a `label` field. Map `expected` to the ground-truth label stored per example in your dataset, typically a path like `reference.label` or `reference.expected`.
- **Templated response checking**: A pipeline that should return a fixed string for certain inputs (a canned reply, a status code, or a pass-through value). Map `actual` to the model's output. `expected` can be typed as a literal value if every example uses the same target string, or mapped to a dataset field if the expected value varies per example.
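The two examples above differ only in how `expected` is supplied. A sketch of what the two configurations might look like follows; the exact config keys (`mode`, `value`) are assumptions for illustration, not the platform's schema.

```python
# Classification label validation: both parameters pulled from per-example data
classification_config = {
    "actual": {"mode": "path", "value": "output.label"},
    "expected": {"mode": "path", "value": "reference.label"},
}

# Templated response checking: a fixed target string shared by every example
templated_config = {
    "actual": {"mode": "path", "value": "output"},
    "expected": {"mode": "literal", "value": "OK"},
}
```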

Notes

The comparison is whitespace-sensitive. Leading/trailing spaces and different line endings will cause a mismatch. If your dataset fields may have inconsistent whitespace, consider using the `contains` or `regex` evaluator instead.
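A few assertions illustrate the pitfalls above. The `.strip()` normalization shown at the end is a general Python technique for pre-cleaning dataset fields, not a feature of this evaluator.

```python
# A trailing space is enough to break an exact match
assert "ready" != "ready "

# So are differing line endings (LF vs CRLF)
assert "line1\nline2" != "line1\r\nline2"

# Stripping leading/trailing whitespace before comparison avoids the first pitfall
assert "ready ".strip() == "ready"
```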

See Also