Quickstart: Prompt Playground
Now that you’re up and running with Arize AX, you can use the Prompt Playground and Prompt Library to design, test, and manage your prompts with ease.
The Prompt Playground lets you experiment with prompts in real time — explore variations across models, tweak parameters, and compare outputs side-by-side, all within an interactive workspace.
The Prompt Library helps you organize, version, and reuse prompts across projects so your best iterations stay discoverable and consistent.
Before you dive in, make sure your AX environment is connected.
Choose your path to getting started: create your first prompt through the UI, or through the Prompt Hub API.
Prompts (UI)
To create a prompt, visit the Prompt Hub and click + New Prompt.

This will take you to the Prompt Playground, where you can define and save your prompt.

Here are some of the parameters you can define:
LLM Provider: Pick the LLM you want to use for this prompt.
Prompt Messages: Add any system, user, or assistant messages to your prompt. For more info on what prompt messages are, you can read this guide.
Function calling: Any function/tools you want the LLM to be aware of should be added here.
Invocation Parameters: Different LLM providers expose different invocation hyperparameters that affect the LLM's output. You can attach these to your prompts.
Save Prompt: To create your prompt and store it in the Prompt Hub, you MUST save it with this button. Give it a name and description, and add tags if you like.
Alyx: Use our assistant to create prompts for you.
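As a rough illustration of the invocation-parameter idea above, here is a sketch of typical settings. The names follow the OpenAI-style API (`temperature`, `max_tokens`, `top_p`); other providers expose similar knobs under different names, so treat these as an assumption, not an exhaustive list.

```python
# Illustrative invocation parameters, using OpenAI-style names.
# Other providers expose similar settings under different names.
invocation_params = {
    "temperature": 0.2,  # lower = more deterministic output
    "max_tokens": 256,   # cap on the number of generated tokens
    "top_p": 1.0,        # nucleus-sampling probability threshold
}

# Sanity-check the values fall in the usual accepted ranges.
assert 0.0 <= invocation_params["temperature"] <= 2.0
assert 0.0 < invocation_params["top_p"] <= 1.0
```

Attaching parameters like these to a prompt keeps the model's behavior reproducible when the prompt is reused later.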
After saving your prompt, you can view it in the Prompt Hub.
You can also edit, add, or delete any of the settings configured above by clicking Edit in Prompt Playground. Just be sure to save your prompt again after editing.

Congratulations! You just made, saved, and ran your first prompt.
Prompt Hub API
For full details and examples of our Prompt Hub API, you can jump ahead here.
Define your prompt template
from arize.experimental.prompt_hub import Prompt, LLMProvider

new_prompt = Prompt(
    name="customer_service_greeting",
    description="Greets a customer and asks a clarifying question.",
    messages=[
        {"role": "system", "content": "You are a helpful customer service assistant."},
        {"role": "user", "content": "Customer query: {query}"},
    ],
    provider=LLMProvider.OPENAI,   # choose your provider
    model_name="gpt-4o",           # choose your model
    tags=["support", "greeting"],  # optional
    # input_variable_format defaults to F_STRING; you can use MUSTACHE if you prefer
)

Key fields: name, messages (chat-style list), provider, model_name
Optional fields: description, tags, input_variable_format
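To make the input_variable_format options concrete, here is a sketch of the two template styles in plain Python. The placeholder syntax (`{query}` for F_STRING, `{{query}}` for MUSTACHE) follows the common conventions for these formats; the minimal Mustache renderer below is an illustration, not the library's implementation.

```python
import re

# The same user message written in the two variable styles:
f_string_msg = "Customer query: {query}"    # F_STRING style (the default)
mustache_msg = "Customer query: {{query}}"  # MUSTACHE style

# F_STRING templates expand with str.format-style substitution:
rendered_f = f_string_msg.format(query="When will my order arrive?")

# A minimal, illustrative expansion of {{var}} placeholders:
def render_mustache(template: str, variables: dict) -> str:
    return re.sub(r"\{\{(\w+)\}\}", lambda m: str(variables[m.group(1)]), template)

rendered_m = render_mustache(mustache_msg, {"query": "When will my order arrive?"})

# Both styles produce the same final message text.
assert rendered_f == rendered_m
```

Pick the format that matches how your templates are authored; the key difference is only the placeholder syntax.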
Format and run it (OpenAI example)
from openai import OpenAI
oai = OpenAI(api_key="YOUR_OPENAI_API_KEY")
prompt_vars = {"query": "When will my order arrive?"}
formatted = new_prompt.format(prompt_vars) # expands {query} in your messages
# Execute with your provider SDK
resp = oai.chat.completions.create(**formatted)
print(resp.choices[0].message.content)
Use .format({...}) to substitute variables, then pass the formatted messages/model to your provider’s SDK.
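Conceptually, formatting can be pictured as the sketch below: substitute variables into each message, then bundle the messages with the model name into keyword arguments for the provider SDK. This is a simplified stand-in, not the real Prompt.format implementation; the field names (`model`, `messages`) follow the OpenAI SDK, and the exact return shape is documented in the Prompt Hub API reference.

```python
# Simplified sketch of what prompt formatting produces: a dict of
# keyword arguments ready for chat.completions.create(**formatted).
def format_prompt(messages, model_name, variables):
    return {
        "model": model_name,
        "messages": [
            {"role": m["role"], "content": m["content"].format(**variables)}
            for m in messages
        ],
    }

formatted = format_prompt(
    messages=[
        {"role": "system", "content": "You are a helpful customer service assistant."},
        {"role": "user", "content": "Customer query: {query}"},
    ],
    model_name="gpt-4o",
    variables={"query": "When will my order arrive?"},
)
print(formatted["messages"][1]["content"])
# → Customer query: When will my order arrive?
```

Because the result is a plain dict of SDK arguments, the same formatted prompt can be passed to any client that accepts OpenAI-style chat parameters.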
Congratulations! You just made, saved, and ran your first prompt.
Next steps
Dive deeper into the following topics to keep improving your LLM application!