Create a new experiment from a list of runs (JSON objects). Empty experiments are not allowed.
Each run must include an example_id field that corresponds to an example in the dataset,
and an output field that contains the task's output for that example's input.
The name of the experiment must be unique within a given dataset.
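To make the required shape concrete, here is a minimal sketch of a request body. The key names (name, experimentRuns, example_id, output) come from the Rules below; the values are placeholders.

```python
# Minimal experiment-creation body: one run with the two required fields.
# Values are placeholders, not real IDs.
payload = {
    "name": "my-first-experiment",  # must be unique within the target dataset
    "experimentRuns": [
        {
            "example_id": "example-123",  # must reference an existing dataset example
            "output": "the task's output for this example's input",
        }
    ],
}
```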
Body containing experiment creation parameters.
Rules
- name must be unique within the target dataset.
- experimentRuns.example_id: the ID of an existing example in the dataset/version.
- output: the model/task output for that example.
- Runs may also include optional metadata fields (model, latency_ms, temperature, prompt, tool_calls, etc.). These are stored and can be used for analysis/filters; see the run sketch after the beta note below.

⚠️ Beta Warning: This endpoint is in beta; read more here.
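A single run may also carry the optional metadata fields named above. A sketch, assuming plausible value shapes for each field (only the field names come from this page):

```python
# One run object with optional metadata attached. example_id and output are
# required; the remaining keys are the optional metadata fields listed in the
# Rules, with illustrative (assumed) value types.
run = {
    "example_id": "example-123",
    "output": "Paris is the capital of France.",
    "model": "gpt-4o",                           # which model produced the output
    "latency_ms": 412,                           # how long the task took
    "temperature": 0.2,                          # sampling temperature used
    "prompt": "What is the capital of France?",  # prompt sent to the model
    "tool_calls": [],                            # tool invocations, if any
}
```

These extra fields are stored alongside the run and can later be used to filter or analyze results.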
Most Arize AI endpoints require authentication. For endpoints that require it, include your API key in the request header using the format:

Authorization: Bearer <api-key>
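Putting the body and the header together, a sketch of the full request using Python's requests library. The endpoint URL below is a placeholder (this page does not show the route); the header format is the one documented above.

```python
import requests

API_KEY = "your-api-key"  # replace with your Arize API key
ENDPOINT_URL = "https://example.com/v1/experiments"  # placeholder: substitute the real endpoint

payload = {
    "name": "my-first-experiment",
    "experimentRuns": [
        {"example_id": "example-123", "output": "placeholder output"}
    ],
}

# Include the API key exactly as documented: Authorization: Bearer <api-key>
response = requests.post(
    ENDPOINT_URL,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json=payload,
)
print(response.status_code)
```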
An experiment object
Experiments combine a dataset (example inputs/expected outputs), a task (the function that produces model outputs), and one or more evaluators (code or LLM judges) to measure performance. Each run is stored independently so you can compare runs, track progress, and validate improvements over time. See the full definition on the Experiments page.
Use an experiment to run tasks on a dataset, attach evaluators to score outputs, and compare runs to confirm improvements.
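To illustrate how those pieces fit together, here is a conceptual sketch (not the Arize SDK; all names are illustrative) of a task running over dataset examples with evaluators scoring each output:

```python
from typing import Callable

def run_experiment(
    examples: list[dict],                           # each has "id" and "input"
    task: Callable[[str], str],                     # produces an output per input
    evaluators: list[Callable[[str, str], float]],  # each scores (input, output)
) -> list[dict]:
    """Run the task on every example and score the outputs."""
    runs = []
    for ex in examples:
        output = task(ex["input"])
        scores = [evaluate(ex["input"], output) for evaluate in evaluators]
        # Each run is stored independently, so separate runs can be
        # compared to track progress over time.
        runs.append({"example_id": ex["id"], "output": output, "scores": scores})
    return runs
```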
Unique identifier for the experiment
Name of the experiment
Unique identifier for the dataset this experiment belongs to
Unique identifier for the dataset version this experiment belongs to
Timestamp for when the experiment was created
Timestamp for the last update of the experiment
Unique identifier for the experiment traces project this experiment belongs to (if it exists)
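For orientation, an illustrative shape for the returned experiment object, based on the field descriptions above. The exact key names are assumptions (snake_case guesses); treat the live API response as authoritative.

```python
# Hypothetical experiment object; key names are inferred from the field
# descriptions on this page, not confirmed by the API schema.
experiment = {
    "id": "exp-abc123",                    # unique identifier for the experiment
    "name": "my-first-experiment",         # name of the experiment
    "dataset_id": "ds-789",                # dataset this experiment belongs to
    "dataset_version_id": "dsv-790",       # dataset version it belongs to
    "created_at": "2024-01-01T00:00:00Z",  # when the experiment was created
    "updated_at": "2024-01-02T00:00:00Z",  # last update of the experiment
    "project_id": "proj-456",              # traces project, if it exists
}
```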