
get_stacks

Analyze CUDA/driver call stacks to identify code paths causing operations. Returns frequent stacks with symbols, source files, and timing for performance troubleshooting.

Instructions

Get resolved call stacks for CUDA/driver operations. Returns top stacks by frequency with symbol names, source files, and timing stats. One call answers 'what code path caused this operation?' For older DBs without resolved symbols, falls back to raw IPs (hex addresses).

Input Schema

| Name   | Required | Description                                      | Default  |
|--------|----------|--------------------------------------------------|----------|
| source | No       | Source filter: 1=CUDA, 3=HOST, 4=DRIVER          |          |
| op     | No       | Operation name (e.g. cudaMalloc, cuLaunchKernel) |          |
| pid    | No       | Process ID filter                                |          |
| since  | No       | Time window (e.g. 5m, 1h)                        | all data |
| limit  | No       | Max stacks returned                              | 10       |
| tsc    | No       | Telegraphic compression                          | true     |
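A sketch of how these parameters might be assembled into a tool-call argument object; the field names come from the input schema above, but the specific values are illustrative assumptions, not taken from the source:

```python
# Illustrative get_stacks arguments; names follow the input schema,
# values are examples only. Omitted fields fall back to their defaults.
args = {
    "source": 1,         # 1=CUDA, 3=HOST, 4=DRIVER
    "op": "cudaMalloc",  # operation name to filter on
    "since": "5m",       # time window; omit to cover all data
    "limit": 10,         # max stacks returned (default 10)
    "tsc": True,         # telegraphic compression (default true)
}
```

Any subset of these keys can be supplied, since every parameter is optional.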

Output Schema

No arguments

Implementation Reference

  • The `get_stacks` method in `MCPClient` is a client-side wrapper that invokes the `get_stacks` MCP tool.
    def get_stacks(self, since: str = "0", op: str = "") -> dict:
        """Get resolved call stacks (120s timeout — stack aggregation can be slow)."""
        # Disable telegraphic compression to receive full output.
        args = {"since": since, "tsc": False}
        if op:
            args["op"] = op
        return self.call("get_stacks", args, timeout=120)
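A minimal usage sketch of the wrapper above, with a stubbed `call` method standing in for the real MCP transport; the stub and its echoed return shape are assumptions for illustration, not part of the actual client:

```python
# Stubbed client to illustrate the argument shape get_stacks sends;
# a real MCPClient would dispatch a tools/call request over the MCP
# transport instead of echoing the request back.
class MCPClient:
    def call(self, name: str, args: dict, timeout: int = 60) -> dict:
        # Stand-in for the real transport layer.
        return {"tool": name, "args": args, "timeout": timeout}

    def get_stacks(self, since: str = "0", op: str = "") -> dict:
        """Get resolved call stacks (120s timeout — aggregation can be slow)."""
        args = {"since": since, "tsc": False}
        if op:
            args["op"] = op
        return self.call("get_stacks", args, timeout=120)

client = MCPClient()
req = client.get_stacks(since="1h", op="cuLaunchKernel")
# req["args"] → {"since": "1h", "tsc": False, "op": "cuLaunchKernel"}
```

Note that the wrapper always sends `tsc: False`, overriding the schema default of `true`, and only includes `op` when one is supplied.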
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden and adds valuable behavioral context: it discloses that the tool returns top stacks by frequency with symbol names, source files, and timing stats, and includes a fallback behavior for older DBs (raw IPs). However, it lacks details on permissions, rate limits, or error handling, which are important for a tool with multiple parameters.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose, followed by details on returns and fallback behavior in just three sentences. Every sentence adds value without redundancy, making it efficient and well-structured for quick understanding.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (6 parameters, no annotations, but an output schema), the description is mostly complete: it covers purpose, return format, and a key fallback behavior. It could still improve by explaining when to prefer this tool over its siblings, or by noting behavioral traits such as performance implications. The output schema likely documents return values, reducing the need to repeat them in the description.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema description coverage is 100%, so the schema already documents all 6 parameters thoroughly. The description does not add any parameter-specific semantics beyond what the schema provides, such as explaining interactions between parameters or usage examples. Baseline 3 is appropriate as the schema handles the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('Get resolved call stacks') and resources ('CUDA/driver operations'), distinguishing it from siblings by focusing on call stack analysis rather than causal chains, checks, reports, stats, demos, or SQL queries. It explicitly answers 'what code path caused this operation?' which reinforces its unique role.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage by stating the tool answers 'what code path caused this operation?' and mentions a fallback for older DBs, but it does not explicitly guide when to use this tool versus alternatives like 'get_causal_chains' or 'get_trace_stats'. No exclusions or clear alternatives are provided, leaving usage context somewhat vague.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
