Replay spans

Any span on the LLM Tracing page can be loaded into the Prompt Playground for replay and iteration. The LLM parameters, prompt template, input messages, variables, and function definitions are automatically populated in the playground. From there, you can reproduce the LLM call exactly and then iterate on the prompt to improve it.

The Invocation Params on the trace can include the model, temperature, max completion tokens, and other parameters; these are automatically populated in the playground.
Click the 'Prompt Playground' button to import the span into the playground, allowing you to reproduce the LLM call for detailed testing and refinement.
The temperature (zero in this example) is automatically set to match the LLM parameters on the original span.
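Conceptually, replaying a span amounts to re-issuing the same request: take the span's invocation parameters and input messages, substitute any template variables, and send the result to the model. The sketch below illustrates this with a plain dictionary; the field names (`invocation_params`, `input_messages`, `variables`) are illustrative, not a documented span schema.

```python
# Hypothetical span data, shaped like what a tracing tool might record.
# These attribute names are assumptions for illustration only.
span = {
    "invocation_params": {
        "model": "gpt-4o",
        "temperature": 0.0,
        "max_completion_tokens": 256,
    },
    "input_messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize: {{article}}"},
    ],
    "variables": {"article": "LLM observability helps teams debug prompts."},
}

def build_replay_request(span: dict) -> dict:
    """Reconstruct an LLM request from a traced span: same invocation
    parameters, same messages, with template variables substituted."""
    messages = []
    for msg in span["input_messages"]:
        content = msg["content"]
        # Fill in {{variable}} placeholders recorded on the span.
        for name, value in span.get("variables", {}).items():
            content = content.replace("{{" + name + "}}", value)
        messages.append({"role": msg["role"], "content": content})
    # Merge the original parameters with the rendered messages.
    return {**span["invocation_params"], "messages": messages}

request = build_replay_request(span)
```

The resulting `request` dict can be passed to whatever client SDK made the original call, which is exactly what the playground does for you automatically.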
