Annotations

Capture feedback in the form of annotations from humans and LLMs

To improve your LLM application iteratively, it's vital to collect feedback, annotate data during human review, and establish an evaluation pipeline so that you can monitor your application. In Phoenix, this type of feedback is captured in the form of annotations.

Phoenix gives you the ability to annotate traces with feedback from the UI, from your application, or from wherever you perform evaluation. Phoenix's annotation model is simple yet powerful: given a collected entity such as a span, you can assign it a label and/or a score.
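For instance, logging a span annotation from your application might look like the sketch below. The client entry point, method, and parameter names used here (`Client`, `annotations.add_span_annotation`, `span_id`, `annotation_name`, `annotator_kind`) are assumptions for illustration and may differ from your installed version, so consult the Phoenix client reference for the exact API.

```python
# A minimal sketch of logging a span annotation from Python.
# Method and parameter names are assumptions; verify against the
# Phoenix client reference for your installed version.
from phoenix.client import Client

client = Client()  # assumes the Phoenix endpoint/API key are configured in the environment

client.annotations.add_span_annotation(
    span_id="abc123",               # hypothetical ID of the span being annotated
    annotation_name="correctness",  # the name of the annotation
    label="correct",                # a categorical label...
    score=1.0,                      # ...and/or a numeric score
    annotator_kind="HUMAN",         # who produced the feedback, e.g. a human or an LLM
)
```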

Next Steps

  • Learn more about the concepts: Annotations Concepts.

  • Configure Annotation Configs to guide human annotations.

  • Learn how to log annotations via the client from your app or in a notebook.
