Store and track prompt versions in Phoenix
Prompts in Phoenix can be created using the Playground as well as via the phoenix-client libraries.
Navigate to Prompts in the navigation and click the add prompt button at the top right. This will take you to the Playground.
The Playground is an IDE-like environment where you develop your prompt. The prompt section on the right lets you add more messages, change the template format (f-string or mustache), and set an output schema (JSON mode).
To the right you can enter sample inputs for your prompt variables and run your prompt against a model. Make sure that you have an API key set for the LLM provider of your choosing.
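The two template formats differ only in how variables are delimited: f-string templates use single braces (`{topic}`) while mustache templates use double braces (`{{ topic }}`). The sketch below is illustrative only — the rendering helpers are our own, not part of Phoenix — and shows how the same variables fill each style of template:

```python
import re

variables = {"topic": "biology", "article": "Cells are the basic unit of life."}

# f-string format: single braces, rendered here with str.format
f_string_template = "You're an expert educator in {topic}. Summarize: {article}"
f_string_rendered = f_string_template.format(**variables)

# mustache format: double braces, optionally padded with spaces
mustache_template = "You're an expert educator in {{ topic }}. Summarize: {{ article }}"
mustache_rendered = re.sub(
    r"\{\{\s*(\w+)\s*\}\}",
    lambda m: str(variables[m.group(1)]),
    mustache_template,
)

print(f_string_rendered)
print(mustache_rendered)
```

Both templates render to the same string; mustache is generally safer when your variable values may themselves contain braces.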
To save the prompt, click the save button in the header of the prompt on the right. Name the prompt using alphanumeric characters (e.g. `my-first-prompt`) with no spaces. The model configuration you selected in the Playground will be saved with the prompt. When you re-open the prompt, the model and configuration will be loaded along with the prompt.
You just created your first prompt in Phoenix! You can view and search for prompts by navigating to Prompts in the UI.
Prompts can be loaded back into the Playground at any time by clicking "open in playground".
To view the details of a prompt, click on the prompt name. You will be taken to the prompt details view, which shows everything that has been saved (e.g. the model used, the invocation parameters, etc.).
Once you've created a prompt, you will probably need to make tweaks over time. The best way to tweak a prompt is in the Playground. Depending on how destructive a change you are making, you might instead want to create a new prompt or clone the existing one.
To make edits to a prompt, click edit in Playground at the top right of the prompt details view.
When you are happy with your prompt, click save. You will be asked to provide a description of the changes you made to the prompt. This description will show up in the history of the prompt for others to understand what you did.
In some cases, you may need to modify a prompt without altering its original version. To achieve this, you can clone a prompt, similar to forking a repository in Git.
Cloning a prompt allows you to experiment with changes while preserving the history of the main prompt. Once you have made and reviewed your modifications, you can choose to either keep the cloned version as a separate prompt or merge your changes back into the main prompt. To do this, simply load the cloned prompt in the playground and save it as the main prompt.
This approach ensures that your edits are flexible and reversible, preventing unintended modifications to the original prompt.
🚧 Prompt labels and metadata are still under construction.
Starting with prompts, Phoenix has a dedicated client that lets you work with prompts programmatically. Make sure you have installed the appropriate phoenix-client before proceeding.
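For reference, the client packages can be installed as follows (package names inferred from the imports used below: `arize-phoenix-client` on PyPI and `@arizeai/phoenix-client` on npm):

```shell
# Python client
pip install arize-phoenix-client

# TypeScript client
npm install @arizeai/phoenix-client
```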
Creating a prompt in code can be useful if you want a programmatic way to sync prompts with the Phoenix server.
Below is an example prompt for summarizing articles as bullet points. Use the Phoenix client to store the prompt in the Phoenix server. The name of the prompt is an identifier with lowercase alphanumeric characters plus hyphens and underscores (no spaces).
```python
import phoenix as px
from phoenix.client.types import PromptVersion

content = """\
You're an expert educator in {{ topic }}. Summarize the following article
in a few concise bullet points that are easy for beginners to understand.
{{ article }}
"""

prompt_name = "article-bullet-summarizer"
prompt = px.Client().prompts.create(
    name=prompt_name,
    version=PromptVersion(
        [{"role": "user", "content": content}],
        model_name="gpt-4o-mini",
    ),
)
```
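As noted above, prompt names are restricted to lowercase alphanumeric characters plus hyphens and underscores, with no spaces. A quick way to sanity-check a name before calling the client (the regex below is our own reading of that rule, not an official Phoenix validator):

```python
import re

# Lowercase letters, digits, hyphens, and underscores only
# (illustrative; not a Phoenix API).
NAME_PATTERN = re.compile(r"^[a-z0-9_-]+$")

def is_valid_prompt_name(name: str) -> bool:
    return bool(NAME_PATTERN.match(name))

print(is_valid_prompt_name("article-bullet-summarizer"))  # True
print(is_valid_prompt_name("My First Prompt"))            # False: spaces, uppercase
```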
A prompt stored in the database can be retrieved later by its name. By default, the latest version is fetched. A specific version ID or a tag can also be used to retrieve a particular version.
```python
prompt = px.Client().prompts.get(prompt_identifier=prompt_name)
```
If a version is tagged, e.g. with "production", it can be retrieved as follows.
```python
prompt = px.Client().prompts.get(prompt_identifier=prompt_name, tag="production")
```
The same prompt can be created with the TypeScript client. The same naming rules apply: the prompt name is an identifier with lowercase alphanumeric characters plus hyphens and underscores (no spaces).
```typescript
import { createPrompt, promptVersion } from "@arizeai/phoenix-client";

const promptTemplate = `
You're an expert educator in {{ topic }}. Summarize the following article
in a few concise bullet points that are easy for beginners to understand.
{{ article }}
`;

const version = createPrompt({
  name: "article-bullet-summarizer",
  version: promptVersion({
    modelProvider: "OPENAI",
    modelName: "gpt-3.5-turbo",
    template: [
      {
        role: "user",
        content: promptTemplate,
      },
    ],
  }),
});
```
A prompt stored in the database can be retrieved later by its name. By default, the latest version is fetched. A specific version ID or a tag can also be used to retrieve a particular version.
```typescript
import { getPrompt } from "@arizeai/phoenix-client/prompts";

const prompt = await getPrompt({ name: "article-bullet-summarizer" });
// ^ you now have a strongly-typed prompt object, in the Phoenix SDK Prompt type
```
If a version is tagged, e.g. with "production", it can be retrieved as follows.
```typescript
const promptByTag = await getPrompt({ tag: "production", name: "article-bullet-summarizer" });
// ^ you can optionally specify a tag to filter by
```