Get Started: Datasets & Experiments
Now that you have Phoenix up and running, a natural next step is to create a dataset and run experiments.
Datasets let you curate and organize examples to test your application systematically.
Experiments let you compare different model versions or configurations on the same dataset to see which performs best.
Datasets
Launch Phoenix
Before setting up your first dataset, make sure Phoenix is running. For step-by-step instructions, check out this Get Started guide.
Log in, create a space, navigate to the settings page in your space, and create your API keys.
In your code, set your environment variables.
import os
os.environ["PHOENIX_API_KEY"] = "ADD YOUR PHOENIX API KEY"
os.environ["PHOENIX_COLLECTOR_ENDPOINT"] = "ADD YOUR PHOENIX Collector endpoint"
You can find your collector endpoint in your Phoenix Cloud space: it is https://app.phoenix.arize.com/s/ followed by your space name.
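For example, if your space were named my-space (a hypothetical name used here for illustration), you would set the endpoint like this:
# Hypothetical space name, shown only to illustrate the endpoint format
os.environ["PHOENIX_COLLECTOR_ENDPOINT"] = "https://app.phoenix.arize.com/s/my-space"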
Creating a Dataset
You can either create a Dataset in the UI, or via code.
For this quickstart, you can download this sample.csv as a starter to walk you through how to use datasets.
In the UI, you can either create an empty dataset and then populate it, or upload data from a CSV.
Once you've downloaded the CSV file above, you can follow the video below to upload your first dataset.
That's it! You've now successfully created your first dataset.
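If you'd prefer to create the dataset from code instead, you can upload the same examples with the Phoenix client. The snippet below is a sketch: it assumes the sample CSV has question and answer columns and that your client version exposes datasets.create_dataset with these parameters, so check the Phoenix client reference for the exact signature.
import pandas as pd
from phoenix.client import AsyncClient

client = AsyncClient()

# Load the sample CSV (columns assumed here: "question" and "answer")
df = pd.read_csv("sample.csv")

# Upload the examples as a new dataset named "sample"
# (parameter names assumed; verify against the client reference)
dataset = await client.datasets.create_dataset(
    name="sample",
    dataframe=df,
    input_keys=["question"],
    output_keys=["answer"],
)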
Experiments
Once you have a dataset, you're ready to run experiments. An experiment is made up of a task and, optionally, evaluators. While running evals is optional, they provide valuable metrics that make it easy to compare experiments quickly, such as comparing models, catching regressions, and understanding which version performs best.
Load your Dataset in Code
The first step is to pull down your dataset into your code.
If you made your dataset in the UI, you can follow this code snippet:
from phoenix.client import AsyncClient

client = AsyncClient()
dataset = await client.datasets.get_dataset(dataset="sample", version_id="{your version id here}")
To get the version_id of your dataset, please navigate to the Versions tab and copy the version you want to run an experiment on.
If you created your dataset programmatically, you already have it available in your dataset variable.
Create your Task
Create a Task to evaluate.
Your task can be any function; it does not have to use an LLM. For this experiment, however, we want to run our list of input questions through a new prompt, so we'll start by setting our OpenAI API key:
from getpass import getpass
from openai import OpenAI

# Set the OpenAI API key before creating the client
if not (openai_api_key := os.getenv("OPENAI_API_KEY")):
    openai_api_key = getpass("🔑 Enter your OpenAI API key: ")
    os.environ["OPENAI_API_KEY"] = openai_api_key

openai_client = OpenAI()
from phoenix.experiments.types import Example

task_prompt_template = "Answer this question: {question}"

def task(example: Example) -> str:
    question = example.input
    message_content = task_prompt_template.format(question=question)
    response = openai_client.chat.completions.create(
        model="gpt-4o", messages=[{"role": "user", "content": message_content}]
    )
    return response.choices[0].message.content
Create your Evaluator
The next step is to create your evaluator. If you already defined your Q&A correctness eval in the previous quickstart, you won't need to redefine it. If not, you can follow along with these code snippets.
from phoenix.evals.llm import LLM
from phoenix.evals import create_classifier
llm = LLM(model="gpt-4o", provider="openai")
CORRECTNESS_TEMPLATE = """
You are given a question and an answer. Decide if the answer is fully correct.
Rules: The answer must be factually accurate, complete, and directly address the question.
If it is, respond with "correct". Otherwise respond with "incorrect".
[BEGIN DATA]
************
[Question]: {attributes.llm.input_messages}
************
[Answer]: {attributes.llm.output_messages}
[END DATA]
Your response must be a single word, either "correct" or "incorrect",
and should not contain any text or characters aside from that word.
"correct" means that the question is correctly and fully answered by the answer.
"incorrect" means that the question is not correctly or only partially answered by the
answer.
"""
correctness_evaluator = create_classifier(
    name="correctness",
    prompt_template=CORRECTNESS_TEMPLATE,
    llm=llm,
    choices={"correct": 1.0, "incorrect": 0.0},
)
You can run multiple evaluators at once. Let's define a custom Completeness Eval.
from phoenix.evals import ClassificationEvaluator
completeness_prompt = """
You are an expert at judging the completeness of a response to a query.
Given a query and response, rate the completeness of the response.
A response is complete if it fully answers all parts of the query.
A response is partially complete if it only answers part of the query.
A response is incomplete if it does not answer any part of the query or is not related to the query.
Query: {{input}}
Response: {{output}}
Is the response complete, partially complete, or incomplete?
"""
completeness = ClassificationEvaluator(
    llm=llm,
    name="completeness",
    prompt_template=completeness_prompt,
    choices={"complete": 1.0, "partially complete": 0.5, "incomplete": 0.0},
)
Run your Experiment
Now that we've defined our task and evaluators, we're ready to run our experiment.
from phoenix.client.experiments import async_run_experiment

experiment = await async_run_experiment(
    dataset=dataset,
    task=task,
    evaluators=[correctness_evaluator, completeness],
)
After running multiple experiments, you can compare their outputs and eval scores side by side in the Phoenix UI.
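For example, to compare two prompts you could run a second experiment on the same dataset with a different task. The sketch below reuses the client, dataset, and evaluators defined above; the alternate prompt, model choice, and function name are illustrative, not part of the sample dataset.
# A second task with a different prompt and model, defined only for comparison purposes
detailed_prompt_template = "Answer this question and briefly explain your reasoning: {question}"

def detailed_task(example: Example) -> str:
    message_content = detailed_prompt_template.format(question=example.input)
    response = openai_client.chat.completions.create(
        model="gpt-4o-mini", messages=[{"role": "user", "content": message_content}]
    )
    return response.choices[0].message.content

second_experiment = await async_run_experiment(
    dataset=dataset,
    task=detailed_task,
    evaluators=[correctness_evaluator, completeness],
)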
Optional: if you want to run additional evaluators after this experiment has finished, you can do so with the following code:
from phoenix.client.experiments import evaluate_experiment
experiment = evaluate_experiment(experiment, evaluators=[{add your evals}])