Annotate Traces

Annotating traces is a crucial aspect of evaluating and improving your LLM-based applications. By systematically recording qualitative or quantitative feedback on specific interactions or entire conversation flows, you can:

  1. Track performance over time

  2. Identify areas for improvement

  3. Compare different model versions or prompts

  4. Gather data for fine-tuning or retraining

  5. Provide stakeholders with concrete metrics on system effectiveness

Phoenix allows you to annotate traces through the Client, the REST API, or the UI.
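For example, an annotation can be attached to an individual span programmatically. The following is a minimal sketch, assuming a Phoenix server running at its default local address and the `arize-phoenix-client` package installed; the span ID, annotation name, and label below are placeholder values.

```python
from phoenix.client import Client

client = Client()  # defaults to the local Phoenix server (http://localhost:6006)

# Attach a human-provided annotation to a single span.
# "correctness" and the span_id are hypothetical values for illustration.
client.annotations.add_span_annotation(
    span_id="67f6740bbe1ddc3f",    # placeholder: a span ID from your own traces
    annotation_name="correctness",
    annotator_kind="HUMAN",        # distinguishes human feedback from LLM or code annotators
    label="correct",
    score=1,
    explanation="The response answered the user's question accurately.",
)
```

The client wraps Phoenix's REST API, so the same annotation can also be submitted directly over HTTP, or recorded interactively from the span detail view in the UI.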

Guides

  * For more background on the concept of annotations, see Annotations.

  * For a step-by-step walkthrough, see Adding manual annotations to traces.
