
@arizeai/phoenix-mcp (official, by Arize-ai)

get-span-annotations

Retrieve annotations for specific span IDs, including scores and labels, to analyze and categorize spans.

Instructions

Get span annotations for a list of span IDs.

Span annotations provide additional metadata, scores, or labels for spans. They can be created by humans, LLMs, or code and help in analyzing and categorizing spans.

Example usage:

- Get annotations for spans ["span1", "span2"] from project "my-project"
- Get quality score annotations for span "span1" from project "my-project"

Expected return: Object containing an annotations array and an optional next cursor for pagination. Example:

    {
      "annotations": [
        {
          "id": "annotation123",
          "span_id": "span1",
          "name": "quality_score",
          "result": {
            "label": "good",
            "score": 0.95,
            "explanation": null
          },
          "annotator_kind": "LLM",
          "metadata": { "model": "gpt-4" }
        }
      ],
      "nextCursor": "cursor_for_pagination"
    }
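For concreteness, here is a minimal sketch of invoking the tool from a TypeScript MCP client. It assumes a client from @modelcontextprotocol/sdk and that the server can be launched with npx; the launch command and the argument values are illustrative assumptions, not documented requirements.

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// Launch and connect to the Phoenix MCP server over stdio.
// The command line here is an assumption; consult the server's install docs.
const client = new Client({ name: "example-client", version: "1.0.0" });
await client.connect(
  new StdioClientTransport({ command: "npx", args: ["@arizeai/phoenix-mcp"] })
);

// Request annotations for two spans in one project.
const result = await client.callTool({
  name: "get-span-annotations",
  arguments: {
    project_identifier: "my-project",
    span_ids: ["span1", "span2"],
  },
});
console.log(result.content);
```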

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| project_identifier | No | | |
| span_ids | Yes | | |
| include_annotation_names | No | | |
| exclude_annotation_names | No | | |
| cursor | No | | |
| limit | No | | |
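The cursor and limit parameters imply standard cursor pagination: pass the nextCursor from one response as the cursor of the next request until none is returned. A hedged sketch, reusing the client from the example above; that the result object arrives as JSON in a text content block is an assumption about this server, not documented behavior:

```typescript
// Page through all quality_score annotations for one span.
// Assumes `client` from the earlier sketch; the JSON-in-text-content
// parsing below is an assumption, not documented behavior.
let cursor: string | undefined;
const annotations: unknown[] = [];

do {
  const result = await client.callTool({
    name: "get-span-annotations",
    arguments: {
      project_identifier: "my-project",
      span_ids: ["span1"],
      include_annotation_names: ["quality_score"],
      limit: 100,
      ...(cursor ? { cursor } : {}),
    },
  });
  const blocks = result.content as Array<{ type: string; text?: string }>;
  const page = JSON.parse(blocks[0]?.text ?? "{}");
  annotations.push(...(page.annotations ?? []));
  cursor = page.nextCursor;
} while (cursor);

console.log(`fetched ${annotations.length} annotations`);
```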
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Since the tool declares no behavioral annotations, the description carries the full burden. It details the return structure, including the pagination cursor, and shows example output. It does not mention destructive effects or authentication requirements, but the read-only nature is implied. Additional behavioral context (e.g., rate limits) is missing, though not critical for a read operation.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise and well structured, with a lead sentence, an explanation, examples, and the expected return. It front-loads the core purpose and provides relevant detail without verbosity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Despite the absence of tool annotations and an output schema, the description covers the return format and pagination. It provides usage examples but does not explain every parameter (e.g., cursor, limit) or any error scenarios. For a tool with six parameters, it is reasonably complete but leaves gaps.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 0% description coverage. The description mentions 'span_ids' and 'project_identifier' only implicitly, via the examples, and does not explain 'include_annotation_names', 'exclude_annotation_names', 'cursor', or 'limit'. The parameter names are somewhat self-explanatory, but the description adds minimal value beyond what the schema provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb ('Get') and the resource ('span annotations'), and distinguishes the tool from siblings such as 'get-spans' by focusing on annotations for specific span IDs. The explanation of what span annotations are further clarifies the purpose.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Example usage is provided for retrieving annotations by span IDs and project, but the description never states when to prefer this tool over alternatives (e.g., get-spans or list-annotation-configs). The intended context is clear, yet explicit guidance on when not to use the tool is missing.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
