get_run

Retrieve a specific test run from the QASE test management platform by project code and run ID to access test execution details.

Instructions

Get a specific test run

Input Schema

Name      Required  Description  Default
code      Yes
id        Yes
include   No
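
For illustration, here is a minimal sketch of calling get_run through the MCP TypeScript SDK. It assumes the server is launched via npx (the command and package name are assumptions), and the 'DEMO' project code and run ID 42 are placeholder values.

    import { Client } from '@modelcontextprotocol/sdk/client/index.js';
    import { StdioClientTransport } from '@modelcontextprotocol/sdk/client/stdio.js';

    // Assumed launch command for the server; adjust to your installation.
    const transport = new StdioClientTransport({
      command: 'npx',
      args: ['-y', 'mcp-qase'],
    });
    const client = new Client({ name: 'example-client', version: '1.0.0' });
    await client.connect(transport);

    // 'DEMO' and 42 are placeholders; include: 'cases' embeds the run's test cases.
    const result = await client.callTool({
      name: 'get_run',
      arguments: { code: 'DEMO', id: 42, include: 'cases' },
    });
    console.log(result.content);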

Implementation Reference

  • The handler branch for the 'get_run' tool inside the CallToolRequestSchema handler. It parses the input arguments using GetRunSchema and delegates to the getRun helper function.
    .with({ name: 'get_run' }, ({ arguments: args }) => {
      const { code, id, include } = GetRunSchema.parse(args);
      return getRun(code, id, include);
    })
  • Zod schema defining the input parameters for the 'get_run' tool: project code, run ID, and optional include flag.
    export const GetRunSchema = z.object({
      code: z.string(),
      id: z.number(),
      include: z.enum(['cases']).optional(),
    });
  • src/index.ts:196-199 (registration)
    Registration of the 'get_run' tool in the ListToolsRequestSchema response, specifying name, description, and input schema.
    {
      name: 'get_run',
      description: 'Get a specific test run',
      inputSchema: zodToJsonSchema(GetRunSchema),
    },
  • Helper function that wraps the client.runs.getRun API call with the pipe and toResult utilities, which handle the actual data fetching (see the sketch after this list).
    export const getRun = pipe(client.runs.getRun.bind(client.runs), toResult);
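
The pipe and toResult utilities are not shown on this page. The following is a hedged sketch of one plausible shape for them, assuming pipe composes left to right and toResult settles the promise returned by the API call into a non-throwing success/error value; the actual mcp-qase utilities may differ.

    type Result<T> = { ok: true; value: T } | { ok: false; error: unknown };

    // Assumed shape: settle a raw promise into a Result instead of throwing.
    const toResult = <T>(promise: Promise<T>): Promise<Result<T>> =>
      promise.then(
        (value): Result<T> => ({ ok: true, value }),
        (error): Result<T> => ({ ok: false, error }),
      );

    // Assumed shape: left-to-right composition, so pipe(f, g)(x) === g(f(x)).
    const pipe =
      <A extends unknown[], B, C>(f: (...args: A) => B, g: (b: B) => C) =>
      (...args: A): C =>
        g(f(...args));

    // Under these assumptions the exported helper reads as:
    //   getRun(code, id, include) -> Promise<Result<run details>>
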
Behavior 1/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It only states 'Get a specific test run', which implies a read-only operation but does not clarify permissions, rate limits, error handling, or what data is returned (e.g., run details, status). For a tool with zero annotation coverage, this lack of behavioral information is a significant gap, making it inadequate for safe and effective use.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
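
One way to close this gap is MCP's tool annotations (the spec's readOnlyHint, destructiveHint, idempotentHint, and openWorldHint fields) combined with a description that names auth and rate-limit expectations. A hedged sketch of a revised registration entry, not the server's current code; the API-token and rate-limit wording assumes QASE's usual token-based API:

    {
      name: 'get_run',
      description:
        'Get a specific test run from QASE by project code and run ID. ' +
        'Read-only; requires a QASE API token and is subject to QASE rate limits.',
      inputSchema: zodToJsonSchema(GetRunSchema),
      annotations: {
        readOnlyHint: true,     // fetches data, never mutates it
        destructiveHint: false, // no deletes or overwrites
        idempotentHint: true,   // repeated calls return the same run
        openWorldHint: true,    // talks to the external QASE API
      },
    },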

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence, 'Get a specific test run', which is appropriately sized and front-loaded with the core action. There is no wasted text or unnecessary elaboration, making it efficient for quick understanding, though it lacks depth.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (3 parameters, no annotations, no output schema), the description is incomplete. It does not explain the return values, parameter usage, or behavioral traits, leaving gaps that could hinder the agent's ability to invoke the tool correctly. For a retrieval tool with undocumented inputs and outputs, more context is needed to ensure reliable operation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
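
For return values specifically, newer revisions of the MCP spec let a tool declare an outputSchema alongside its inputSchema. A hedged sketch of what that could look like here; the field names are assumptions, not QASE's documented response shape:

    // Hypothetical addition to the registration entry above.
    outputSchema: {
      type: 'object',
      properties: {
        id: { type: 'number', description: 'Run ID' },
        title: { type: 'string', description: 'Run title' },
        status: { type: 'string', description: 'Execution status of the run' },
        cases: {
          type: 'array',
          description: "Present when include='cases' was requested",
          items: { type: 'object' },
        },
      },
    },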

Parameters 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 3 parameters (code, id, include) with 0% description coverage, meaning no parameter details are documented in the schema. The description does not add any meaning beyond the schema—it does not explain what 'code' and 'id' represent (e.g., project code and run ID) or how 'include' works (e.g., to embed cases). Since the schema coverage is low and the description fails to compensate, the score reflects this deficiency.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
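
A hedged sketch of how the existing Zod schema could carry this detail, using Zod's .describe() so that zodToJsonSchema emits per-parameter descriptions; the wording of each description is an assumption:

    import { z } from 'zod';

    export const GetRunSchema = z.object({
      code: z.string().describe('QASE project code (the short key in the project URL)'),
      id: z.number().describe('Numeric ID of the test run to retrieve'),
      include: z
        .enum(['cases'])
        .optional()
        .describe("Optional; pass 'cases' to embed the run's test cases"),
    });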

Purpose 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description states the tool's purpose as 'Get a specific test run', which includes a verb ('Get') and resource ('test run'), making it clear what it does. However, it lacks specificity about what 'specific' means (e.g., by ID/code) and does not distinguish it from sibling tools like 'get_runs' (which likely lists multiple runs) or 'get_result' (which might retrieve results within runs). This vagueness prevents a higher score.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It does not mention prerequisites (e.g., needing a run ID), exclusions, or comparisons to siblings like 'get_runs' for listing runs or 'get_result' for run results. Without such context, the agent must infer usage from the tool name and schema alone, which is insufficient for effective selection.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
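
A hedged sketch of a description string that adds this guidance; the sibling tool names come from the review above and are assumed to exist in this server:

    const description =
      'Get a single test run by project code and run ID. ' +
      'Use get_runs to list or search runs when the run ID is unknown, ' +
      'and get_result to fetch individual test results within a run.';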

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/rikuson/mcp-qase'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.