Annotations

Capture feedback in the form of annotations from humans and LLMs

To iteratively improve your LLM application, you need to collect feedback, annotate data during human review, and establish an evaluation pipeline so that you can monitor the application over time. In Phoenix, this kind of feedback is captured in the form of annotations.

Phoenix lets you annotate traces with feedback from the UI, from your application, or from wherever else you perform evaluation. The annotation model is simple yet powerful: given a collected entity such as a span, you can assign it a label, a score, or both.
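For example, if your application runs its own evaluations, you can attach the resulting labels and scores to spans programmatically. The sketch below uses the Phoenix Python client with `SpanEvaluations`; the span IDs, labels, and scores are illustrative placeholders, and the exact client method may vary with your Phoenix version.

```python
import pandas as pd
import phoenix as px
from phoenix.trace import SpanEvaluations

# One row per span: the index holds the span's ID, and each row carries the
# label and/or score to attach under a named annotation ("correctness").
evals_df = pd.DataFrame(
    {
        "label": ["correct", "incorrect"],
        "score": [1.0, 0.0],
    },
    index=pd.Index(["span-id-1", "span-id-2"], name="span_id"),  # placeholder span IDs
)

# Send the annotations to Phoenix, where they are associated with the spans.
px.Client().log_evaluations(
    SpanEvaluations(eval_name="correctness", dataframe=evals_df)
)
```

Once logged, these labels and scores appear in the Phoenix UI alongside the corresponding spans, where they can be reviewed and filtered.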

Navigate to the Feedback tab in the demo trace to see how LLM-based evaluations appear in Phoenix.
