POST /v1/experiment_evaluations

Create or update evaluation for an experiment run
Example request:

curl --request POST \
  --url https://api.example.com/v1/experiment_evaluations \
  --header 'Content-Type: application/json' \
  --data '
{
  "experiment_run_id": "<string>",
  "name": "<string>",
  "annotator_kind": "LLM",
  "start_time": "2023-11-07T05:31:56Z",
  "end_time": "2023-11-07T05:31:56Z",
  "result": {
    "label": "<string>",
    "score": 123,
    "explanation": "<string>"
  },
  "error": "<string>",
  "metadata": {},
  "trace_id": "<string>"
}
'
Example response:

{
  "data": {
    "id": "<string>"
  }
}
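The curl request above can be mirrored in Python. This is a minimal sketch: `build_evaluation_payload` and its validation are our own illustration (the doc requires that either `result` or `error` be provided), and actually sending the request is left to any HTTP client, e.g. `requests.post(API_URL, json=payload)`.

```python
import json

API_URL = "https://api.example.com/v1/experiment_evaluations"  # placeholder host

def build_evaluation_payload(experiment_run_id, name, annotator_kind,
                             start_time, end_time, result=None, error=None,
                             metadata=None, trace_id=None):
    """Assemble the POST body; the API requires either result or error."""
    if result is None and error is None:
        raise ValueError("either result or error must be provided")
    if annotator_kind not in ("LLM", "CODE", "HUMAN"):
        raise ValueError("annotator_kind must be one of LLM, CODE, HUMAN")
    payload = {
        "experiment_run_id": experiment_run_id,
        "name": name,
        "annotator_kind": annotator_kind,
        "start_time": start_time,
        "end_time": end_time,
        "metadata": metadata if metadata is not None else {},
    }
    if result is not None:
        payload["result"] = result  # {"label": ..., "score": ..., "explanation": ...}
    if error is not None:
        payload["error"] = error
    if trace_id is not None:
        payload["trace_id"] = trace_id
    return payload

# Hypothetical values for illustration only.
payload = build_evaluation_payload(
    experiment_run_id="run-123",
    name="correctness",
    annotator_kind="LLM",
    start_time="2023-11-07T05:31:56Z",
    end_time="2023-11-07T05:31:56Z",
    result={"label": "correct", "score": 0.9, "explanation": "matches reference"},
)
# To send: requests.post(API_URL, json=payload); the response body is {"data": {"id": "..."}}
print(json.dumps(payload, sort_keys=True))
```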

Body (application/json)
experiment_run_id · string · required

The ID of the experiment run being evaluated.

name · string · required

The name of the evaluation.

annotator_kind · enum<string> · required

The kind of annotator used for the evaluation.
Available options: LLM, CODE, HUMAN

start_time · string<date-time> · required

The start time of the evaluation in ISO 8601 format (e.g. 2023-11-07T05:31:56Z).

end_time · string<date-time> · required

The end time of the evaluation in ISO 8601 format.
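The start_time and end_time fields expect ISO 8601 timestamps like the example payload's 2023-11-07T05:31:56Z. A minimal Python sketch for producing them (the helper name is ours):

```python
from datetime import datetime, timezone

def iso_utc_now() -> str:
    # An aware UTC datetime's isoformat() ends in "+00:00"; swap in the "Z"
    # suffix to match the form shown in the example payload.
    return (datetime.now(timezone.utc)
            .replace(microsecond=0)
            .isoformat()
            .replace("+00:00", "Z"))

start_time = iso_utc_now()
print(start_time)  # e.g. 2023-11-07T05:31:56Z
```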

result · ExperimentEvaluationResult · object

The result of the evaluation. Either result or error must be provided.

error · string | null

Error message if the evaluation encountered an error. Either result or error must be provided.

metadata · Metadata · object

Metadata for the evaluation.

trace_id · string | null

Optional trace ID for tracking.

Response

Successful Response

data · UpsertExperimentEvaluationResponseBodyData · object · required
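A successful response wraps the evaluation's ID under data.id, as in the example response above. A minimal sketch of reading it ("eval-abc" is a made-up ID for illustration):

```python
import json

# Hypothetical response body in the shape documented above.
raw = '{"data": {"id": "eval-abc"}}'
evaluation_id = json.loads(raw)["data"]["id"]
print(evaluation_id)  # eval-abc
```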