
get_test_report

Retrieve GPU integration test results in JSON format, including test status, timing data, and system information for performance analysis.

Instructions

Get the GPU integration test report (JSON). Generated by gpu-test.sh after a full test run. Includes per-test status, timing, and system info.
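For orientation, a minimal sketch of consuming such a report. The field names here are assumptions, since the output schema (below) documents none:

```python
import json

# Hypothetical report shape; gpu-test.sh's actual field names are not
# documented by the tool's schema, so these keys are illustrative only.
with open("gpu-test-report.json") as f:
    report = json.load(f)

print(report.get("system", {}))       # e.g. GPU model, driver version
for test in report.get("tests", []):
    print(test.get("name"),
          test.get("status"),         # e.g. "pass" / "fail"
          test.get("duration_ms"))
```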

Input Schema

Name  Required  Description                               Default
tsc   No        telegraphic compression (default: true)  true
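A call to this tool travels in MCP's JSON-RPC 2.0 envelope. A sketch of the request, with the argument value chosen for illustration:

```python
# Sketch of an MCP "tools/call" request for this tool. The JSON-RPC
# envelope and the tools/call method come from the MCP specification;
# the argument value is illustrative.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_test_report",
        "arguments": {"tsc": False},  # ask for the uncompressed report
    },
}
```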

Output Schema


No arguments

Implementation Reference

  • The file 'scripts/gpu-investigation-analysis.py' does not define a tool named 'get_test_report'. It performs multiple data-analysis tasks by calling existing MCP tools (e.g., 'run_sql', 'get_causal_chains', 'get_trace_stats', 'get_stacks') and generates a final report through 'generate_report'. The requested tool does not exist in this codebase. One excerpt from the script's SQL analysis:

```sql
SUM(arg0) / (SUM(duration) / 1e9 + 0.001) / 1e9 AS bw_gbps
```
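Read as arithmetic, the expression converts summed byte counts and nanosecond durations into GB/s, with a 1 ms additive epsilon guarding against a zero denominator. A Python equivalent, assuming arg0 carries a byte count:

```python
def bw_gbps(total_bytes: float, total_duration_ns: float) -> float:
    # Mirrors the SQL above: bytes over elapsed seconds (plus a 1 ms
    # epsilon so an empty window cannot divide by zero), scaled to GB/s.
    seconds = total_duration_ns / 1e9 + 0.001
    return total_bytes / seconds / 1e9

# 1 GiB moved in 0.5 s is roughly 2.1 GB/s:
print(bw_gbps(2**30, 0.5e9))
```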
Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions the report includes 'per-test status, timing, and system info,' which adds some context about the output content. However, it lacks critical behavioral details such as whether this is a read-only operation, if it requires specific permissions, how it handles errors, or if there are rate limits. For a tool with no annotation coverage, this leaves significant gaps in understanding its behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
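For this particular tool, the gap could be closed with the MCP specification's tool annotations. A sketch of values that would plausibly fit a read-only report fetch:

```python
# Field names follow the MCP spec's ToolAnnotations; the values are a
# plausible fit for this tool, not taken from the actual server.
annotations = {
    "readOnlyHint": True,      # fetching a report mutates nothing
    "destructiveHint": False,  # no data is deleted or overwritten
    "idempotentHint": True,    # repeated calls return the same report
    "openWorldHint": False,    # reads a local file, not external systems
}
```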

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded, starting with the core purpose ('Get the GPU integration test report (JSON)') followed by additional context in two concise sentences. Every sentence adds value: the first defines the tool, the second explains the source, and the third details the content. There is no wasted verbiage or redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (simple retrieval with one optional parameter), the presence of an output schema (which handles return values), and high schema description coverage, the description is mostly complete. It specifies the report format, source, and content. However, it lacks behavioral context (e.g., safety, permissions), which annotations and the output schema do not cover either, leaving a minor gap for a tool with no annotations.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage: the one optional parameter, 'tsc', is documented as 'telegraphic compression (default: true).' The tool description adds no meaning beyond this, as it makes no mention of parameters at all. With full schema coverage, the baseline score is 3: the description neither compensates for nor detracts from the schema's documentation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
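As an illustration of what going beyond the schema could mean here, a hedged sketch of a fuller property description for 'tsc'. The expanded semantics are a guess, since the server documents nothing more than the one-liner:

```python
# Hypothetical richer schema for the 'tsc' parameter. The expanded
# description is invented for illustration; only "telegraphic
# compression (default: true)" comes from the actual schema.
input_schema = {
    "type": "object",
    "properties": {
        "tsc": {
            "type": "boolean",
            "default": True,
            "description": (
                "Telegraphic compression. When true (default), the report "
                "uses abbreviated keys to save tokens; set false for full, "
                "human-readable field names."
            ),
        },
    },
}
```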

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Get the GPU integration test report (JSON)' with specific verb ('Get') and resource ('GPU integration test report'). It distinguishes the report format (JSON) and mentions it's generated by a specific script after a full test run. However, it doesn't explicitly differentiate from sibling tools like 'get_check' or 'get_trace_stats' that might also retrieve reports or data.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context by stating the report is 'Generated by gpu-test.sh after a full test run,' suggesting this tool should be used when such a test has been completed. However, it provides no explicit guidance on when to use this tool versus alternatives like 'get_check' or 'run_demo,' nor does it mention any prerequisites or exclusions for usage.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
