# Annotations
{% embed url="https://storage.googleapis.com/arize-phoenix-assets/assets/videos/span_annotations.mp4" %}
Capture feedback in the form of annotations from humans and LLMs
{% endembed %}
To improve your LLM application iteratively, it's vital to collect feedback, annotate data during human review, and establish an evaluation pipeline so that you can monitor your application. In Phoenix, this type of feedback is captured in the form of **annotations**.
Phoenix gives you the ability to annotate traces with feedback from the UI, your application, or wherever you would like to perform evaluation. Phoenix's annotation model is simple yet powerful: given a collected entity such as a span, you can assign it a `label` and/or a `score`.
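For example, annotations can be logged programmatically with the Phoenix Python client. The sketch below is a minimal illustration, assuming the `arize-phoenix-client` package, a running Phoenix instance, and a span ID taken from your traces; method and parameter names may vary by client version, so treat it as a sketch rather than a definitive API reference.

```python
# Minimal sketch: attach a label and a score to a previously collected span.
# Assumes the Phoenix Python client and a reachable Phoenix server; the span ID
# below is a hypothetical placeholder.
from phoenix.client import Client

client = Client()  # picks up the Phoenix endpoint from environment settings by default

client.annotations.add_span_annotation(
    span_id="abc123",               # hypothetical span ID from your traces
    annotation_name="correctness",  # the feedback dimension being recorded
    annotator_kind="HUMAN",         # e.g. HUMAN or LLM, depending on the source of feedback
    label="correct",                # categorical feedback
    score=1.0,                      # numeric feedback
    explanation="The answer matches the reference.",
)
```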
## Next Steps
* Learn more about the concepts: [Annotations Concepts](https://app.gitbook.com/s/fqGNxHHFrgwnCxgUBNsJ/tracing/annotations-concepts "mention")
* Configure Annotation Configs to guide human annotations.
* Run evaluations on your traces: [evaluating-phoenix-traces.md](../how-to-tracing/feedback-and-annotations/evaluating-phoenix-traces.md "mention")
* Learn how to log annotations via the client from your app or in a notebook.