Quickstart: Prompts (UI)

Getting Started

Prompt Playground can be accessed from the left navbar of Phoenix.

From here, you can prompt your model directly by modifying either the system or user prompt and pressing the Run button in the top right.

Basic Example Use Case

Let's start by comparing a few different prompt variations. Add two additional prompts using the +Prompt button, and update the system and user prompts like so:

System prompt #1:

You are a summarization tool. Summarize the provided paragraph.

System prompt #2:

You are a summarization tool. Summarize the provided paragraph in 2 sentences or less.

System prompt #3:

You are a summarization tool. Summarize the provided paragraph. Make sure not to leave out any key points.

User prompt (use this for all three):

In software engineering, more specifically in distributed computing, observability is the ability to collect data about programs' execution, modules' internal states, and the communication among components.[1][2] To improve observability, software engineers use a wide range of logging and tracing techniques to gather telemetry information, and tools to analyze and use it. Observability is foundational to site reliability engineering, as it is the first step in triaging a service outage. One of the goals of observability is to minimize the amount of prior knowledge needed to debug an issue.
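If you want to sanity-check the same comparison outside the UI, the sketch below uses the OpenAI Python client to run the three system prompts against the same paragraph. The model name gpt-4o-mini is only an assumption; substitute whichever model you've selected in the playground.

```python
# Minimal sketch: run the three system prompts against the same paragraph.
# Assumes OPENAI_API_KEY is set; "gpt-4o-mini" is a stand-in for your playground model.
from openai import OpenAI

client = OpenAI()

system_prompts = [
    "You are a summarization tool. Summarize the provided paragraph.",
    "You are a summarization tool. Summarize the provided paragraph in 2 sentences or less.",
    "You are a summarization tool. Summarize the provided paragraph. Make sure not to leave out any key points.",
]

user_prompt = "In software engineering, more specifically in distributed computing, observability is ..."  # the paragraph above

for i, system_prompt in enumerate(system_prompts, start=1):
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_prompt},
        ],
    )
    print(f"--- System prompt #{i} ---")
    print(response.choices[0].message.content)
```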

Your playground should look something like this:

Let's run it and compare results:

Creating a Prompt

It looks like the second option produces the most concise summary. Go ahead and save that prompt to your Prompt Hub.

Your prompt will be saved in the Prompts tab:

Now you're ready to see how that prompt performs over a larger dataset of examples.

Running over a dataset

Prompt Playground can be used to run a series of dataset rows through your prompts. To start off, we'll need a dataset. Phoenix has many options for uploading a dataset; to keep things simple here, we'll upload a CSV directly. Download the article summaries file linked below:

Next, create a new dataset from the Datasets tab in Phoenix, and specify the input and output columns like so:

Uploading a CSV dataset
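If you prefer to create the dataset programmatically instead of through the Datasets tab, here's a minimal sketch using the Phoenix client's upload_dataset. The CSV file name and the expected_summary output column are assumptions for illustration; only the input_article column is referenced later in this guide.

```python
# Hedged sketch: upload the CSV as a Phoenix dataset from code.
# File name and column names are assumptions; match them to your actual CSV.
import pandas as pd
import phoenix as px

df = pd.read_csv("article_summaries.csv")  # hypothetical file name for the CSV above

dataset = px.Client().upload_dataset(
    dataset_name="article-summaries",
    dataframe=df,
    input_keys=["input_article"],      # column fed into the prompt template
    output_keys=["expected_summary"],  # reference output column, if your CSV has one
)
```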

Now we can return to Prompt Playground, and this time choose our new dataset from the "Test over dataset" dropdown.

You can also load in your saved Prompt:

We'll also need to update our prompt to reference the {{input_article}} variable, which maps to the input column of our dataset. After adding this in, be sure to save your prompt once more!
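The double-brace syntax is mustache-style templating: on each run, Phoenix substitutes the matching input column from the current dataset row into the prompt. A plain-Python stand-in for that per-row substitution looks roughly like this:

```python
# Illustration only: how a {{input_article}} variable gets filled from a dataset row.
row = {"input_article": "In software engineering, more specifically in distributed computing, ..."}

user_prompt_template = "Summarize the provided paragraph:\n\n{{input_article}}"
user_prompt = user_prompt_template.replace("{{input_article}}", row["input_article"])
print(user_prompt)
```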

Now if we run our prompt(s), each row of the dataset will be run through each variation of our prompt.

And if you return to view your dataset, you'll see the details of that run saved as an experiment.

From here, you could evaluate that experiment to test its performance, or add complexity to your prompts by including different tools, output schemas, and models to test against.
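If you'd rather drive that evaluation from code, the sketch below assumes Phoenix's run_experiment API along with the dataset name, column name, and model name used earlier in this guide; the is_concise evaluator is a toy length check, not a real quality metric.

```python
# Hedged sketch: re-run the winning prompt over the dataset and score it with a toy evaluator.
# Dataset name, column name, and model name are assumptions carried over from this guide.
import phoenix as px
from openai import OpenAI
from phoenix.experiments import run_experiment

dataset = px.Client().get_dataset(name="article-summaries")
openai_client = OpenAI()

def summarize(input):
    # `input` is the example's input dict; "input_article" matches the dataset column above.
    response = openai_client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "You are a summarization tool. Summarize the provided paragraph in 2 sentences or less."},
            {"role": "user", "content": input["input_article"]},
        ],
    )
    return response.choices[0].message.content

def is_concise(output):
    # Toy evaluator: flags summaries that run past roughly two sentences.
    return output.count(".") <= 2

experiment = run_experiment(dataset, summarize, evaluators=[is_concise])
```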

Updating a Prompt

You can now easily modify your prompt or compare different versions side-by-side. Let's say you've found a stronger version of the prompt. Save your updated prompt once again, and you'll see it added as a new version under your existing prompt:

You can also tag which version you've deemed ready for production, and, further down the page, view code snippets for accessing your prompt programmatically.
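The generated snippet on that page follows Phoenix's Python client pattern; a rough sketch of what it looks like is below. The prompt name and the production tag are assumptions from this walkthrough, so defer to the exact code Phoenix generates for you.

```python
# Rough sketch: pull the production-tagged prompt and use it with the OpenAI SDK.
# "article-summarizer" and the "production" tag are assumptions; use your own prompt name and tag.
from openai import OpenAI
from phoenix.client import Client

prompt = Client().prompts.get(prompt_identifier="article-summarizer", tag="production")

# format() fills the template variables and returns keyword arguments for the model SDK.
resp = OpenAI().chat.completions.create(
    **prompt.format(variables={"input_article": "In software engineering, ..."})
)
print(resp.choices[0].message.content)
```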

Next Steps

Now you're ready to create, test, save, and iterate on your Prompts in Phoenix! Check out our other quickstarts to see how to use Prompts in code.
