
jaeger-mcp

by mshegolev

jaeger_get_trace

Read-only · Idempotent

Retrieve full trace details including all spans, service breakdowns, and execution trees to analyze performance issues or identify error sources in distributed systems.

Instructions

Retrieve full trace detail with all spans, service breakdown, and execution tree.

Wraps GET /api/traces/{traceID}. Returns every span in the trace, per-service statistics, and a flat execution tree (each node lists its child span IDs) that summarises the call hierarchy.

Error spans are identified by tags["error"] = "true".
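As a sketch of how this convention maps onto the raw endpoint the tool wraps: the response shape below follows Jaeger's public query API (spans with a `tags` list of key/value pairs); the `JAEGER_URL` value is an assumption (Jaeger's default query port), not something this tool requires.

```python
import json
from urllib.request import urlopen

JAEGER_URL = "http://localhost:16686"  # assumption: default Jaeger query-service address


def find_error_spans(trace: dict) -> list[dict]:
    """Return spans whose tags include error=true (string or boolean form)."""
    errors = []
    for span in trace.get("spans", []):
        for tag in span.get("tags", []):
            if tag.get("key") == "error" and str(tag.get("value")).lower() == "true":
                errors.append(span)
                break
    return errors


def fetch_trace(trace_id: str) -> dict:
    """Fetch one trace from GET /api/traces/{traceID}, the endpoint this tool wraps."""
    with urlopen(f"{JAEGER_URL}/api/traces/{trace_id}") as resp:
        payload = json.load(resp)
    return payload["data"][0]  # Jaeger wraps results as {"data": [trace, ...]}
```

Checking `str(value).lower()` covers both the string `"true"` form described above and the boolean form some Jaeger clients emit.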

Examples:
- Use when: "Why is trace abc123... slow — show me the span breakdown" → trace_id='abc123...'; inspect services for the heaviest service and execution_tree for the call hierarchy.
- Use when: "Which service caused the error in trace xyz...?" → check spans where is_error=true.
- Use when: you found a slow/failed trace in jaeger_search_traces and need full detail.
- Don't use when: you don't have a specific traceID — use jaeger_search_traces to find one first.
- Don't use when: you only want aggregate data across many traces (use jaeger_search_traces with filters instead).

Returns: dict with trace_id, span_count, service_count, root_operation, root_service, start_time_us, total_duration_us, errors_count, services (per-service stats), spans (all spans), and execution_tree.
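An execution tree of this kind (each node listing its child span IDs) can be rebuilt from raw Jaeger spans via their CHILD_OF references. A minimal sketch, assuming the standard Jaeger JSON span shape; the returned roots/children layout is illustrative, not necessarily the exact shape this tool emits:

```python
from collections import defaultdict


def build_execution_tree(spans: list[dict]) -> dict:
    """Map each span ID to its child span IDs; spans with no CHILD_OF parent are roots."""
    children = defaultdict(list)
    parented = set()
    for span in spans:
        for ref in span.get("references", []):
            if ref.get("refType") == "CHILD_OF":
                children[ref["spanID"]].append(span["spanID"])
                parented.add(span["spanID"])
    roots = [s["spanID"] for s in spans if s["spanID"] not in parented]
    return {"roots": roots, "children": dict(children)}
```

A well-formed trace yields exactly one root; multiple roots usually indicate dropped or incomplete spans.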

Input Schema

| Name | Required | Description |
| --- | --- | --- |
| trace_id | Yes | Trace ID as a hex string (16 or 32 hex chars). Example: 'abcdef1234567890abcdef1234567890'. Obtain from jaeger_search_traces. |
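A caller can pre-validate the 16-or-32-hex-char constraint before invoking the tool; a minimal sketch:

```python
import re

# Jaeger trace IDs are 64-bit (16 hex chars) or 128-bit (32 hex chars).
TRACE_ID_RE = re.compile(r"(?:[0-9a-fA-F]{16}|[0-9a-fA-F]{32})")


def is_valid_trace_id(trace_id: str) -> bool:
    """True if trace_id is exactly 16 or 32 hexadecimal characters."""
    return bool(TRACE_ID_RE.fullmatch(trace_id))
```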

Output Schema

| Name | Required |
| --- | --- |
| trace_id | Yes |
| span_count | Yes |
| service_count | Yes |
| root_operation | Yes |
| root_service | Yes |
| start_time_us | Yes |
| total_duration_us | Yes |
| errors_count | Yes |
| services | Yes |
| spans | Yes |
| execution_tree | Yes |
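The per-service statistics in the `services` field can be approximated from a raw Jaeger trace by resolving each span's `processID` through the trace-level `processes` map (span durations in Jaeger are in microseconds). A hedged sketch; the exact stat fields this tool returns are not specified here:

```python
from collections import defaultdict


def per_service_stats(trace: dict) -> dict:
    """Aggregate span count and summed duration (us) per service name."""
    processes = trace.get("processes", {})
    stats = defaultdict(lambda: {"span_count": 0, "total_duration_us": 0})
    for span in trace.get("spans", []):
        svc = processes.get(span.get("processID"), {}).get("serviceName", "unknown")
        stats[svc]["span_count"] += 1
        stats[svc]["total_duration_us"] += span.get("duration", 0)
    return dict(stats)
```

Note that summed span durations overlap for concurrent spans, so the per-service totals can exceed total_duration_us.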
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate read-only, non-destructive, idempotent, and open-world behavior, but the description adds valuable context: it specifies the underlying API endpoint ('Wraps GET /api/traces/{traceID}'), explains how error spans are identified ('tags["error"] = "true"'), and details the return structure (e.g., per-service statistics, execution tree). This enhances transparency without contradicting annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured and front-loaded, starting with a clear purpose statement, followed by API details, error identification, usage examples, and return format. Each section is concise and adds value, with no redundant or wasted sentences, making it efficient for an AI agent to parse.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (retrieving detailed trace data), the description is highly complete: it covers purpose, usage guidelines, behavioral context, and return values. With annotations providing safety hints and an output schema detailing the return structure, the description fills all necessary gaps, ensuring the agent has sufficient information for correct tool selection and invocation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, fully documenting the single parameter (trace_id) with format, length, and example. The description does not add further parameter details beyond what the schema provides, but it reinforces usage context (e.g., 'Obtain from jaeger_search_traces' in schema). Baseline 3 is appropriate as the schema handles the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Retrieve full trace detail') and resource ('trace'), distinguishing it from siblings like jaeger_search_traces (which searches) and jaeger_list_services (which lists). It explicitly mentions retrieving 'all spans, service breakdown, and execution tree', making the purpose highly specific.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance with 'Use when' and 'Don't use when' examples, including specific scenarios (e.g., analyzing a slow trace or error) and clear alternatives (e.g., use jaeger_search_traces when lacking a traceID or needing aggregate data). This directly addresses when to use this tool versus its siblings.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
