
TestRail MCP Server

by Derrbal

Get TestRail Tests

get_tests

Retrieve tests from a TestRail test run, with filtering by status and label plus pagination support, to analyze test execution data.

Instructions

Returns a list of tests for a test run.

Input Schema

Name      | Required | Description                                                                    | Default
----------|----------|--------------------------------------------------------------------------------|--------
run_id    | Yes      | The ID of the test run                                                         | —
status_id | No       | Comma-separated list of status IDs to filter by                                | —
limit     | No       | Maximum number of tests to return in the response (max 250)                    | 250
offset    | No       | Position in the result set the response should start from (pagination offset)  | —
label_id  | No       | Comma-separated list of label IDs to filter by                                 | —
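As a concrete illustration, a client might invoke the tool with arguments shaped like the following (all IDs and values here are hypothetical):

```typescript
// Hypothetical arguments for a get_tests invocation; IDs are illustrative.
const args = {
  run_id: 42,         // required: the test run to query
  status_id: [1, 5],  // optional: restrict results to these status IDs
  limit: 100,         // optional: page size (max 250)
  offset: 0,          // optional: pagination start position
};

console.log(JSON.stringify(args));
```

Only run_id is mandatory; omitting the optional filters returns the first page of all tests in the run.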

Implementation Reference

  • MCP tool handler function: destructures input parameters, constructs filters, calls the service layer getTests function, logs progress, and returns the result as formatted JSON text content for MCP.
      async ({ run_id, status_id, limit, offset, label_id }) => {
        logger.debug(`Get tests tool called with run_id: ${run_id}`);
        const filters = {
          run_id,
          status_id,
          limit,
          offset,
          label_id,
        };
        const result = await getTests(filters);
        logger.debug(`Get tests tool completed. Found ${result.tests.length} tests.`);
        return {
          content: [
            {
              type: 'text',
              text: JSON.stringify(result, null, 2),
            },
          ],
        };
      },
    );
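The handler's return value follows the standard MCP text-content envelope. A minimal sketch, using a stand-in result object rather than a real TestRail response:

```typescript
// Stand-in for a getTests result; field values are illustrative only.
const result = {
  offset: 0,
  limit: 250,
  size: 1,
  tests: [{ id: 7, title: 'Login works' }],
};

// The same content envelope the handler above builds for the MCP client:
// a single text item carrying the pretty-printed JSON result.
const toolResponse = {
  content: [{ type: 'text' as const, text: JSON.stringify(result, null, 2) }],
};

console.log(toolResponse.content[0].text);
```

Pretty-printing with a two-space indent makes the payload easier for agents (and humans) to read, at the cost of a few extra tokens.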
  • src/server.ts:720-721 (registration)
    Registration of the 'get_tests' MCP tool with the server, specifying name, metadata, input schema, and handler reference.
    server.registerTool(
      'get_tests',
  • Input schema definition for the get_tests tool using Zod validators, defining required run_id and optional filters for status, limit, offset, and labels.
    {
      title: 'Get TestRail Tests',
      description: 'Returns a list of tests for a test run.',
      inputSchema: {
        run_id: z.number().int().positive().describe('The ID of the test run'),
        status_id: z.array(z.number().int().positive()).optional().describe('A comma-separated list of status IDs to filter by'),
        limit: z.number().int().positive().optional().describe('The number that sets the limit of tests to be shown on the response (max 250, default 250)'),
        offset: z.number().int().min(0).optional().describe('The number that sets the position where the response should start from (pagination offset)'),
        label_id: z.array(z.number().int().positive()).optional().describe('IDs of labels as comma separated values to filter by'),
      },
    },
  • Service layer helper: getTests function transforms tool filters to client params, calls TestRailClient.getTests, normalizes response by extracting custom fields, and returns structured TestsResponse.
    export async function getTests(filters: GetTestsFilters): Promise<TestsResponse> {
      // Transform service filters to client parameters
      const clientParams: GetTestsParams = {
        run_id: filters.run_id,
        status_id: filters.status_id,
        limit: filters.limit,
        offset: filters.offset,
        label_id: filters.label_id,
      };
    
      const response: TestRailTestsResponse = await testRailClient.getTests(clientParams);
      
      // Transform tests to normalized format
      const transformedTests: TestSummary[] = response.tests.map((test) => {
        // Extract custom fields (any fields not in the standard interface)
        const standardFields = ['id', 'title'];
        const custom: Record<string, unknown> = {};
        
        Object.keys(test).forEach((key) => {
          if (!standardFields.includes(key)) {
            custom[key] = test[key];
          }
        });
    
        return {
          id: test.id,
          title: test.title,
          custom: Object.keys(custom).length > 0 ? custom : undefined,
        };
      });
    
      return {
        offset: response.offset,
        limit: response.limit,
        size: response.size,
        _links: response._links,
        tests: transformedTests,
      };
    }
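The custom-field normalization can be exercised in isolation. This sketch mirrors the extraction logic above with a self-contained helper; the sample test object is invented for illustration:

```typescript
// Minimal sketch of the per-test normalization used by getTests:
// any field outside the standard interface is moved under `custom`.
type TestSummary = { id: number; title: string; custom?: Record<string, unknown> };

function normalizeTest(test: Record<string, unknown>): TestSummary {
  const standardFields = ['id', 'title'];
  const custom: Record<string, unknown> = {};
  for (const key of Object.keys(test)) {
    if (!standardFields.includes(key)) custom[key] = test[key];
  }
  return {
    id: test.id as number,
    title: test.title as string,
    // Omit `custom` entirely when there is nothing non-standard to carry.
    custom: Object.keys(custom).length > 0 ? custom : undefined,
  };
}

const normalized = normalizeTest({
  id: 7,
  title: 'Login works',
  status_id: 1,
  custom_steps: 'step 1',
});
console.log(JSON.stringify(normalized));
```

Grouping unknown fields under `custom` keeps the summary shape stable even when a TestRail instance defines project-specific custom fields.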
  • Client layer HTTP client method: constructs TestRail API URL /get_tests/{run_id} with query params for filters, performs GET request with axios, handles errors, and returns raw API response.
    async getTests(params: GetTestsParams): Promise<TestRailTestsResponse> {
      try {
        // Build query parameters
        const queryParams = new URLSearchParams();
        
        // Handle status_id filter (comma-separated list)
        if (params.status_id && params.status_id.length > 0) {
          queryParams.append('status_id', params.status_id.join(','));
        }
        
        // Handle pagination parameters
        if (params.limit !== undefined) {
          queryParams.append('limit', params.limit.toString());
        }
        if (params.offset !== undefined) {
          queryParams.append('offset', params.offset.toString());
        }
        
        // Handle label_id filter (comma-separated list)
        if (params.label_id && params.label_id.length > 0) {
          queryParams.append('label_id', params.label_id.join(','));
        }
        
        const queryString = queryParams.toString();
        const url = `/get_tests/${params.run_id}${queryString ? `&${queryString}` : ''}`;
        
        const res = await this.http.get(url);
        if (res.status >= 200 && res.status < 300) {
          logger.info({
            message: 'Successfully retrieved tests for run',
            runId: params.run_id,
            testCount: res.data.tests?.length || 0,
            responseSize: JSON.stringify(res.data).length,
          });
          return res.data;
        } else {
          throw new Error(`HTTP ${res.status}: ${res.statusText}`);
        }
      } catch (error) {
        const normalized = this.normalizeError(error);
        const safeDetails = this.getSafeErrorDetails(error);
        logger.error({
          message: 'Failed to retrieve tests for run',
          runId: params.run_id,
          error: normalized,
          details: safeDetails,
        });
        throw normalized;
      }
    }
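The URL construction above can also be tested standalone. Note the `&` before the query string is deliberate: TestRail's API v2 paths sit after a `?` in the base URL (`index.php?/api/v2/...`), so additional parameters must be appended with `&` rather than a second `?`. A self-contained sketch of the same logic:

```typescript
// Standalone sketch of the get_tests URL builder. The '&' separator assumes
// the TestRail base URL already contains '?' (index.php?/api/v2/...).
function buildGetTestsUrl(
  runId: number,
  params: { status_id?: number[]; limit?: number; offset?: number; label_id?: number[] },
): string {
  const qp = new URLSearchParams();
  if (params.status_id?.length) qp.append('status_id', params.status_id.join(','));
  if (params.limit !== undefined) qp.append('limit', String(params.limit));
  if (params.offset !== undefined) qp.append('offset', String(params.offset));
  if (params.label_id?.length) qp.append('label_id', params.label_id.join(','));
  const qs = qp.toString();
  return `/get_tests/${runId}${qs ? `&${qs}` : ''}`;
}

console.log(buildGetTestsUrl(42, { status_id: [1, 5], limit: 100 }));
```

Be aware that URLSearchParams percent-encodes the commas in joined ID lists (e.g. `status_id=1%2C5`), which TestRail decodes normally.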
Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure but offers minimal information. It doesn't mention authentication requirements, rate limits, pagination behavior beyond the parameters, response format, error conditions, or whether this is a read-only operation (though 'Returns' implies it). For a tool with 5 parameters and no annotation coverage, this is inadequate.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that states the core purpose without unnecessary words. It's appropriately sized for a list-retrieval tool and front-loads the essential information. Every word earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with 5 parameters, no annotations, and no output schema, the description is insufficiently complete. It doesn't address key contextual aspects like authentication needs, rate limits, pagination behavior, response format, or error handling. The agent would need to guess about important behavioral characteristics when invoking this tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all parameters thoroughly. The description adds no additional parameter semantics beyond what's in the schema: it doesn't explain relationships between parameters (like how status_id and label_id interact) or provide examples. This meets the baseline for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Returns a list') and resource ('tests for a test run'), making the purpose immediately understandable. It distinguishes itself from sibling tools like 'get_test' (singular) by specifying that it returns multiple tests, but doesn't explicitly differentiate from other list tools like 'get_cases' or 'get_runs' beyond the resource type.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention when filtering by status or label is appropriate, when pagination is needed, or how this differs from similar tools like 'get_cases' which might return related data. The agent must infer usage from parameter names alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
