Phoenix's Prompt Playground makes the process of iterating on and testing prompts quick and easy. The playground supports various AI providers (OpenAI, Anthropic, Gemini, Azure) as well as custom model endpoints, making it the ideal prompt IDE for building, experimenting with, and evaluating prompts and models for your task.
Speed: Rapidly test variations in the prompt, model, invocation parameters, tools, and output format.
Reproducibility: All playground runs are recorded as traces and experiments, unlocking annotation and evaluation.
Datasets: Use dataset examples as a fixture to run a prompt variant through its paces and to evaluate it systematically.
Prompt Management: Load, edit, and save prompts directly within the playground.
To learn more about how to use the playground, see Using the Playground.