@arizeai/phoenix-mcp (Official, by Arize-ai)
# Annotate Traces

<figure><img src="https://storage.googleapis.com/arize-phoenix-assets/assets/images/annotation_process.png" alt=""><figcaption><p>Applying the scientific method to building AI products - by Eugene Yan</p></figcaption></figure>

Annotating traces is a crucial part of evaluating and improving your LLM-based applications. By systematically recording qualitative or quantitative feedback on specific interactions or entire conversation flows, you can:

1. Track performance over time
2. Identify areas for improvement
3. Compare different model versions or prompts
4. Gather data for fine-tuning or retraining
5. Provide stakeholders with concrete metrics on system effectiveness

Phoenix allows you to annotate traces through the Client, the REST API, or the UI.

## Guides

* To learn how to configure annotations and annotate through the UI, see [annotating-in-the-ui.md](annotating-in-the-ui.md "mention")
* To learn how to add human labels to your traces, either manually or programmatically, see [capture-feedback.md](capture-feedback.md "mention")
* To learn how to evaluate traces captured in Phoenix, see [evaluating-phoenix-traces.md](evaluating-phoenix-traces.md "mention")
* To learn how to upload your own evaluation labels into Phoenix, see [llm-evaluations.md](llm-evaluations.md "mention")

For more background on the concept of annotations, see [how-to-annotate-traces.md](../../llm-traces/how-to-annotate-traces.md "mention")

<figure><img src="https://storage.googleapis.com/arize-assets/phoenix/assets/images/annotation_flow.gif" alt=""><figcaption><p>Adding manual annotations to traces</p></figcaption></figure>
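As a rough sketch of what programmatic annotation involves, the record below shows the general shape of a span annotation: the span being annotated, an annotation name, who produced it, and a result carrying a label, score, and explanation. The field names, the example span ID, and the commented endpoint path are assumptions for illustration, not a verified Phoenix schema; consult the guides above for the actual Client and REST API usage.

```python
import json

# Hypothetical span-annotation record. Field names and values are
# assumptions modeled loosely on a v1-style REST payload, not a
# verified Phoenix schema.
annotation = {
    "span_id": "abc123",          # ID of the span being annotated (example value)
    "name": "correctness",        # annotation name as it would appear in the UI
    "annotator_kind": "HUMAN",    # human feedback vs. an LLM judge
    "result": {
        "label": "correct",                          # qualitative feedback
        "score": 1.0,                                # quantitative feedback
        "explanation": "Answer matches the reference.",
    },
}

# Sending this to a running Phoenix instance might look like the
# following (assumed host and endpoint path):
#   curl -X POST http://localhost:6006/v1/span_annotations \
#        -H 'Content-Type: application/json' \
#        -d '<JSON payload>'
print(json.dumps(annotation, indent=2))
```

Whether feedback is recorded through the Client, the REST API, or the UI, it reduces to the same kind of record: an annotation name plus a label and/or score attached to a specific span or trace.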

## MCP directory API

We provide all the information about MCP servers via our MCP API:

```shell
curl -X GET 'https://glama.ai/api/mcp/v1/servers/Arize-ai/phoenix'
```

If you have feedback or need assistance with the MCP directory API, please join our Discord server.