TestRail MCP Server

add_results

Add test results to a test run. Specify status, comment, and defects for each test. Send multiple results in one request.

Instructions

Add one or more test results to a test run

Input Schema

Name    | Required | Description                                                          | Default
run_id  | Yes      | The ID of the test run                                               | -
results | Yes      | Array of results to add. Each result must have test_id and status_id | -
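For reference, a call to this tool might carry a payload like the following. The IDs are illustrative (status_id 1 conventionally means Passed in TestRail's defaults), and the pre-flight check is a sketch of what an agent could verify before sending, mirroring the schema's two required fields:

```typescript
// Hypothetical add_results payload (run, test, and status IDs are illustrative).
type ResultInput = {
  test_id: number;
  status_id: number;
  comment?: string;
  defects?: string;
};

const payload: { run_id: number; results: ResultInput[] } = {
  run_id: 42,
  results: [
    { test_id: 101, status_id: 1, comment: "Passed on build 1.4.2" },
    { test_id: 102, status_id: 5, defects: "JIRA-123,JIRA-456" },
  ],
};

// Pre-flight check mirroring the schema's requirement that every
// result carries both test_id and status_id.
const isValid = payload.results.every(
  (r) => Number.isInteger(r.test_id) && Number.isInteger(r.status_id)
);
console.log(isValid); // true
```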

Implementation Reference

  • The add_results tool handler implementation. Defines parameters schema (run_id, results array with test_id, status_id, optional comment/defects) and a handler that calls client.addResults() and returns parsed results.
    import { TestRailClient } from "../client/testrail.js";
    import { z } from "zod";
    import { ResultSchema } from "../types/testrail.js";
    import { ToolDefinition } from "../types/custom.js";
    
    const parameters = {
        run_id: z.number().describe("The ID of the test run"),
        results: z.array(z.object({
            test_id: z.number().describe("The ID of the test. Use get_tests with a run_id to retrieve available test IDs"),
            status_id: z.number().describe("The ID of the test status (e.g. Passed, Failed). Use get_statuses to retrieve available status IDs"),
            comment: z.string().optional().describe("Optional comment/description for the result"),
            defects: z.string().optional().describe("Optional comma-separated list of defect IDs"),
        })).describe("Array of results to add. Each result must have test_id and status_id"),
    }
    
    export const addResultsTool: ToolDefinition<typeof parameters, TestRailClient> = {
        name: "add_results",
        description: "Add one or more test results to a test run",
        parameters,
        handler: async ({ run_id, results }, client: TestRailClient) => {
            const response = await client.addResults(run_id, results);
            return {
                success: true,
                added_count: response.length,
                results: response.map(r => ResultSchema.parse(r)),
            };
        },
    };
  • src/index.ts:72-73 (registration)
    The add_results tool is registered in the tools array at src/index.ts line 72, and the generic registration loop (lines 87-110) calls server.registerTool() to register each tool by name.
    addResultsTool,
    addResultsForCasesTool,
  • The TestRailClient.addResults() method that makes the POST API call to TestRail's /add_results/{run_id} endpoint.
    async addResults(runId: number, results: Array<Record<string, any>>): Promise<Result[]> {
        return this.post<Result[]>(`${API_BASE_V2}/add_results/${runId}`, { results });
    }
  • ResultSchema zod definition used to validate the response from add_results (id, test_id, status_id, comment, defects).
    export const ResultSchema = z.object({
        id: z.number(),
        test_id: z.number(),
        status_id: z.number(),
        comment: z.string().nullable(),
        defects: z.string().nullable(),
    });
    
    export type Result = z.infer<typeof ResultSchema>;
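End to end, the handler, client method, and response schema above compose into one flow: build the payload, POST it to the v2 endpoint, and shape the response into the handler's envelope. A sketch of that flow with a stubbed transport — the real value of `API_BASE_V2` is not shown in the source, so `index.php?/api/v2` (TestRail's documented v2 prefix) is an assumption here, and `postStub` stands in for the real HTTP call:

```typescript
// Sketch of the add_results flow with a stubbed transport.
// Assumption: API_BASE_V2 resolves to TestRail's documented v2 prefix.
const API_BASE_V2 = "index.php?/api/v2";

type Result = {
  id: number;
  test_id: number;
  status_id: number;
  comment: string | null;
  defects: string | null;
};

// Stub standing in for TestRailClient.post(): echoes each submitted
// result back with a server-assigned id, as the real API would.
function postStub(
  url: string,
  body: { results: Array<Record<string, unknown>> }
): Result[] {
  return body.results.map((r, i) => ({
    id: 9000 + i,
    test_id: r.test_id as number,
    status_id: r.status_id as number,
    comment: (r.comment as string | undefined) ?? null,
    defects: (r.defects as string | undefined) ?? null,
  }));
}

function addResults(runId: number, results: Array<Record<string, unknown>>) {
  const url = `${API_BASE_V2}/add_results/${runId}`;
  const response = postStub(url, { results });
  // Mirrors the handler's envelope: success flag, count, results.
  return { success: true, added_count: response.length, results: response, url };
}

const out = addResults(42, [
  { test_id: 101, status_id: 1, comment: "Passed on build 1.4.2" },
  { test_id: 102, status_id: 5, defects: "JIRA-123" },
]);
console.log(out.url);         // index.php?/api/v2/add_results/42
console.log(out.added_count); // 2
```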
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full behavioral disclosure burden. It only states the action ('add') without detailing side effects, such as whether existing results are overwritten, validation of status IDs, or any required permissions. This is insufficient for a mutation tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, concise sentence with no unnecessary words. It is front-loaded with the key action and resource, making it easy to parse.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a mutation tool without annotations, the description lacks completeness. It does not explain the response format, error conditions, or idempotency. The schema is detailed, but the description should complement it with usage context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% (all parameters have descriptions). The tool description adds no additional meaning beyond what is already in the schema, so it meets the baseline of 3.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Add'), the resource ('test results'), and the context ('to a test run'). It uniquely identifies the tool's purpose and distinguishes it from siblings like 'add_attachment_to_run' or 'add_run' by specifying the exact resource.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies when to use this tool (when you need to add results to a test run), but it offers no explicit guidance on when not to use it, nor does it mention alternatives such as 'add_results_for_cases', which serves a similar purpose.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
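Taken together, the review's points suggest a fuller description. A hypothetical rewrite, illustrative only and not the project's text, drawing on the sibling tool and lookup helpers already referenced in the schema:

```typescript
// Hypothetical rewrite of the tool description, folding in the
// review's feedback. Illustrative only, not the project's text.
const improvedDescription =
  "Add one or more test results to a test run. " +
  "Mutation: appends new result records to each test's history. " +
  "Use get_tests to look up test IDs and get_statuses for status IDs. " +
  "Prefer add_results_for_cases when you have case IDs instead of test IDs.";
console.log(improvedDescription.includes("add_results_for_cases")); // true
```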
