jaeger_get_trace

Read-only · Idempotent

Retrieve complete trace details including all spans, per-service statistics, and execution tree to identify slow services or error causes.

Instructions

Retrieve full trace detail with all spans, service breakdown, and execution tree.

Wraps GET /api/traces/{traceID}. Returns every span in the trace, per-service statistics, and a flat execution tree (each node lists its child span IDs) that summarises the call hierarchy.

Error spans are identified by tags["error"] = "true".
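The error-tag convention above can be sketched as a small filter. This is a minimal illustration, assuming spans arrive as dicts with a `tags` mapping as described; the exact span shape in this tool's output is not specified here.

```python
# Sketch: select spans flagged as errors via tags["error"] = "true".
# The span dict shape (a "tags" mapping of string -> string) is an
# assumption based on this tool's description.

def error_spans(spans):
    """Return the spans whose tags mark them as failed."""
    return [s for s in spans if s.get("tags", {}).get("error") == "true"]

spans = [
    {"span_id": "a1", "operation": "GET /users", "tags": {}},
    {"span_id": "b2", "operation": "SELECT users", "tags": {"error": "true"}},
]
print([s["span_id"] for s in error_spans(spans)])  # → ['b2']
```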

Examples:
- Use when: "Why is trace abc123... slow — show me the span breakdown" → trace_id='abc123...'; inspect services for the heaviest service and execution_tree for the call hierarchy.
- Use when: "Which service caused the error in trace xyz...?" → check spans where is_error=true.
- Use when: You found a slow/failed trace in jaeger_search_traces and need full detail.
- Don't use when: You don't have a specific traceID — use jaeger_search_traces to find one first.
- Don't use when: You only want aggregate data across many traces (use jaeger_search_traces with filters instead).

Returns: dict with trace_id / span_count / service_count / root_operation / root_service / start_time_us / total_duration_us / errors_count / services (per-service stats) / spans (all spans) / execution_tree.
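The per-service stats in the return value can be illustrated with a short rollup over the raw Jaeger HTTP API response. This is a hedged sketch, not this tool's implementation: the `spans`/`processes` field names follow Jaeger's UI API response shape (`{"data": [{"spans": [...], "processes": {...}}]}`), and the output format here is an assumption.

```python
# Sketch: aggregate span count and total duration (microseconds) per
# service from one trace object of Jaeger's /api/traces/{traceID} response.
# Field names ("processes", "processID", "serviceName", "duration") are
# assumptions based on Jaeger's UI API, not this tool's verified schema.
from collections import defaultdict

def service_stats(trace):
    """Roll up span count and total duration per service."""
    processes = trace["processes"]  # processID -> {"serviceName": ...}
    stats = defaultdict(lambda: {"span_count": 0, "total_duration_us": 0})
    for span in trace["spans"]:
        svc = processes[span["processID"]]["serviceName"]
        stats[svc]["span_count"] += 1
        stats[svc]["total_duration_us"] += span["duration"]
    return dict(stats)

trace = {
    "processes": {"p1": {"serviceName": "frontend"}, "p2": {"serviceName": "db"}},
    "spans": [
        {"processID": "p1", "duration": 1200},
        {"processID": "p2", "duration": 800},
        {"processID": "p2", "duration": 400},
    ],
}
print(service_stats(trace))
```

Sorting the resulting dict by `total_duration_us` is one way to surface the "heaviest" service mentioned in the examples above.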

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| trace_id | Yes | Trace ID as a hex string (16 or 32 hex chars). Example: 'abcdef1234567890abcdef1234567890'. Obtain from jaeger_search_traces. | |
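The "16 or 32 hex chars" constraint can be checked client-side before calling the tool. A minimal sketch, assuming the server accepts both cases of hex digits; the server's actual validation rules are not documented here.

```python
import re

# Sketch of the stated trace_id constraint: exactly 16 or 32 hex characters.
# Whether the server is case-sensitive is an assumption (both cases allowed here).
TRACE_ID_RE = re.compile(r"(?:[0-9a-fA-F]{16}|[0-9a-fA-F]{32})")

def is_valid_trace_id(trace_id: str) -> bool:
    return TRACE_ID_RE.fullmatch(trace_id) is not None

print(is_valid_trace_id("abcdef1234567890abcdef1234567890"))  # → True (32 chars)
print(is_valid_trace_id("not-a-trace-id"))                    # → False
```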

Output Schema

| Name | Required |
| --- | --- |
| trace_id | Yes |
| span_count | Yes |
| service_count | Yes |
| root_operation | Yes |
| root_service | Yes |
| start_time_us | Yes |
| total_duration_us | Yes |
| errors_count | Yes |
| services | Yes |
| spans | Yes |
| execution_tree | Yes |
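The flat execution_tree (each node lists its child span IDs) can be derived from span parent references. A hedged sketch: the `CHILD_OF` reference shape follows Jaeger's API, but the exact tree format this tool emits is an assumption.

```python
# Sketch: build a flat parent -> [child span IDs] map from Jaeger spans.
# The "references"/"refType"/"spanID" field names follow Jaeger's API;
# the output format is an illustrative assumption, not this tool's schema.
from collections import defaultdict

def build_execution_tree(spans):
    """Map each span ID to the IDs of its direct children."""
    children = defaultdict(list)
    for span in spans:
        for ref in span.get("references", []):
            if ref.get("refType") == "CHILD_OF":
                children[ref["spanID"]].append(span["spanID"])
    return dict(children)

spans = [
    {"spanID": "root", "references": []},
    {"spanID": "a", "references": [{"refType": "CHILD_OF", "spanID": "root"}]},
    {"spanID": "b", "references": [{"refType": "CHILD_OF", "spanID": "root"}]},
]
print(build_execution_tree(spans))  # → {'root': ['a', 'b']}
```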
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, destructiveHint=false, idempotentHint=true, openWorldHint=true. The description adds context beyond annotations by detailing the API endpoint, structure of the return value, and identification of error spans, which is valuable.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with sections, examples, and a returns list, and it is front-loaded with purpose. While slightly verbose, every sentence adds value; it could be trimmed a little but remains effective.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (single parameter, 100% schema coverage, and existing output schema), the description is complete. It covers purpose, usage, return structure, and error handling without gaps.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema already fully describes the `trace_id` parameter with constraints and an example. The description adds usage context but no new semantic meaning beyond the schema, so it earns the baseline score of 3.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states 'Retrieve full trace detail with all spans, service breakdown, and execution tree.' It uses specific verb+resource and distinguishes itself from sibling tools like `jaeger_search_traces`.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides multiple concrete examples of when to use (e.g., investigating slow or failed traces) and explicitly states when not to use, directing to `jaeger_search_traces` for finding traces or aggregate data.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/mshegolev/jaeger-mcp'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.