
TestRail MCP Server

by bun913

getCases

Retrieve a filtered list of test cases from TestRail projects and suites, showing basic fields for efficient browsing and management.

Instructions

Retrieves test cases list with basic fields only (excludes steps/expected results for performance). REQUIRED: projectId, suiteId. OPTIONAL: createdBy, filter, limit (default 50), milestoneId, offset (default 0), priorityId, refs, sectionId, templateId, typeId, updatedBy, labelId. Use getCase for full details.

Input Schema

| Name | Required | Description | Default |
|------|----------|-------------|---------|
| projectId | Yes | TestRail Project ID | |
| suiteId | Yes | TestRail Suite ID | |
| createdBy | No | A comma-separated list of creators (user IDs) to filter by | |
| filter | No | Only return cases with a matching filter string in the case title | |
| limit | No | The number of test cases the response should return (requires TestRail 6.7 or later) | 50 |
| milestoneId | No | A comma-separated list of milestone IDs to filter by (not available if the milestone field is disabled for the project) | |
| offset | No | Where to start counting the test cases from (requires TestRail 6.7 or later) | 0 |
| priorityId | No | A comma-separated list of priority IDs to filter by | |
| refs | No | A single Reference ID (e.g. TR-1, 4291) (requires TestRail 6.5.2 or later) | |
| sectionId | No | The ID of a test case section | |
| templateId | No | A comma-separated list of template IDs to filter by (requires TestRail 5.2 or later) | |
| typeId | No | A comma-separated list of case type IDs to filter by | |
| updatedBy | No | A comma-separated list of user IDs who updated test cases to filter by | |
| labelId | No | A comma-separated list of label IDs to filter by | |
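To make the schema above concrete, here is a minimal Python sketch that assembles a getCases-style argument payload. `build_get_cases_args` is a hypothetical helper, not part of the MCP server; the field names and defaults mirror the table.

```python
def build_get_cases_args(project_id, suite_id, *, limit=50, offset=0, **filters):
    """Assemble an argument payload for a getCases-style call.

    projectId and suiteId are required; limit/offset default to 50/0,
    matching the tool description. List-valued filters (priorityId,
    typeId, etc.) are joined into the comma-separated strings the
    schema describes. Illustrative only, not the server's own code.
    """
    if project_id is None or suite_id is None:
        raise ValueError("projectId and suiteId are required")
    args = {
        "projectId": project_id,
        "suiteId": suite_id,
        "limit": limit,
        "offset": offset,
    }
    for key, value in filters.items():
        if isinstance(value, (list, tuple)):
            # e.g. priorityId=[3, 4] becomes "3,4"
            value = ",".join(str(v) for v in value)
        args[key] = value
    return args
```

Called as `build_get_cases_args(1, 2, priorityId=[3, 4], filter="login")`, this yields a payload with the required IDs, the documented defaults, and `priorityId` flattened to `"3,4"`.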
Behavior 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It discloses the performance optimization ('excludes steps/expected results for performance') and the pagination defaults, but omits the rate limits, error handling, and authentication requirements expected of an unannotated tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three dense sentences efficiently convey purpose, parameter organization, and sibling distinction. The REQUIRED/OPTIONAL structure front-loads critical constraints. Minor density in the parameter list prevents a perfect 5, but no extraneous content exists.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a 14-parameter list operation without an output schema, the description adequately covers the tool's limitation (basic fields only) and its pagination behavior. The absence of a return-value description is noted, though the rich per-field schema compensates on the input side.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema coverage, the baseline is 3. The description adds value by organizing the 14 parameters into REQUIRED/OPTIONAL categories and highlighting defaults (limit=50, offset=0), making the schema more scannable despite its already comprehensive field descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
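The limit/offset defaults imply a standard pagination loop on the caller's side. A short Python sketch of draining such an endpoint, where `fetch_page` is a stand-in callable (not a real client method) that returns one page of cases per call:

```python
def fetch_all_cases(fetch_page, limit=50):
    """Drain a paginated getCases-style endpoint.

    fetch_page(offset, limit) returns one page of cases as a list.
    The loop advances offset by limit until a short page signals the
    end; limit defaults to 50 per the tool's documented default.
    """
    cases, offset = [], 0
    while True:
        page = fetch_page(offset, limit)
        cases.extend(page)
        if len(page) < limit:
            return cases
        offset += limit
```

Against a source of 120 cases with the default page size of 50, this issues three requests (offsets 0, 50, 100) and returns all 120 records.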

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses specific verb 'Retrieves' with resource 'test cases list' and clearly defines scope limitations ('basic fields only'). It effectively distinguishes from sibling 'getCase' by noting this returns partial data versus full details.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly names alternative tool 'getCase for full details' and explains when to use this tool ('for performance' when steps/expected results aren't needed). Clear REQUIRED/OPTIONAL parameter categorization guides invocation.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
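The browse-then-drill pattern the description encourages (list with getCases, then fetch full detail with getCase) can be sketched as follows. Both callables here are stand-ins for the actual tool invocations, and `find_case_details` is a hypothetical helper:

```python
def find_case_details(list_cases, get_case, title_filter):
    """Browse with a cheap getCases-style call, then drill into the
    first match with a getCase-style call for full details (steps,
    expected results). Returns None when nothing matches."""
    matches = list_cases(filter=title_filter)
    if not matches:
        return None
    return get_case(matches[0]["id"])
```

This keeps the expensive full-detail fetch to a single case, which is exactly the split the description's 'Use getCase for full details' guidance implies.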


MCP directory API

We provide all the information about MCP servers via our MCP API.

```shell
curl -X GET 'https://glama.ai/api/mcp/v1/servers/bun913/mcp-testrail'
```

If you have feedback or need assistance with the MCP directory API, please join our Discord server