list_test_runs

Retrieve test runs from Zebrunner with filtering by project, name, milestone, build number, status, and sorting options for QA analysis.

Instructions

🏃 List Test Runs from Public API with advanced filtering

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| project | No | Project alias ('web', 'android', 'ios', 'api') or project key | web |
| pageToken | No | Token for pagination (from a previous response) | |
| maxPageSize | No | Number of test runs per page (max 100) | |
| nameFilter | No | Filter by test run name (partial match) | |
| milestoneFilter | No | Filter by milestone ID (use get_project_milestones to find the ID) or milestone name (will be converted to an ID) | |
| buildNumberFilter | No | Filter by build number (searches in configurations, title, and description) | |
| closedFilter | No | Filter by closed status (true = closed, false = open) | |
| sortBy | No | Sort order: -createdAt (newest first), createdAt (oldest first), -title (Z-A), title (A-Z) | -createdAt |
| format | No | Output format: raw API response or formatted data | formatted |
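To make the schema concrete, here is a minimal sketch of calling the tool through the MCP TypeScript SDK. The server launch command is a placeholder, and the argument values are illustrative; only the parameter names and defaults come from the schema above.

import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// Placeholder launch command; start mcp-zebrunner however your setup requires.
const transport = new StdioClientTransport({
  command: "npx",
  args: ["mcp-zebrunner"],
});

const client = new Client({ name: "qa-analysis-client", version: "1.0.0" });
await client.connect(transport);

// Every argument is optional; names and semantics are taken from the schema above.
const result = await client.callTool({
  name: "list_test_runs",
  arguments: {
    project: "android",   // alias or project key (default: "web")
    nameFilter: "smoke",  // partial match on the run name
    closedFilter: false,  // open runs only
    sortBy: "-createdAt", // newest first (the default)
    maxPageSize: 50,      // schema caps this at 100
  },
});

console.log(result.content);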
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full disclosure burden, yet it only mentions 'advanced filtering' without detailing behavior. It doesn't disclose pagination behavior (implied by pageToken), rate limits, authentication requirements, response format expectations, or whether this is a read-only operation. The description adds minimal value beyond what is obvious from the name; the sketch after this section shows the pagination contract an agent is left to assume.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
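Because the pagination contract is undocumented, the conventional token scheme is the natural assumption: request a page, read a continuation token from the response, and feed it back as pageToken until none is returned. The sketch below encodes that assumption; the nextPageToken and items field names are guesses, since the tool publishes no output schema, and Client is the MCP TypeScript SDK class used in the earlier example.

import { Client } from "@modelcontextprotocol/sdk/client/index.js";

// Hedged sketch of the assumed pagination loop. The response field names
// (nextPageToken, items) are guesses; the tool publishes no output schema.
async function listAllTestRuns(client: Client, project: string) {
  const runs: unknown[] = [];
  let pageToken: string | undefined;

  do {
    const result = await client.callTool({
      name: "list_test_runs",
      arguments: { project, maxPageSize: 100, pageToken, format: "raw" },
    });
    // Tool results arrive as content blocks; parse the first text block as JSON.
    const [first] = result.content as Array<{ type: string; text?: string }>;
    const page = JSON.parse(first?.text ?? "{}");
    runs.push(...(page.items ?? []));
    pageToken = page.nextPageToken; // undefined terminates the loop
  } while (pageToken);

  return runs;
}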

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence with an emoji: reasonably concise, but it front-loads style over substance. It conveys the core purpose efficiently, yet it could separate the key capabilities more clearly. The emoji adds character without enhancing functional understanding.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with 9 parameters, no annotations, and no output schema, the description is insufficient. It doesn't explain what constitutes a 'test run', what data is returned, how pagination works, or what error conditions to expect. The missing behavioral context and output information leave significant gaps for an agent trying to use this tool effectively.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the input schema thoroughly documents all 9 parameters. The description adds no specific parameter information beyond 'advanced filtering', which is already evident from the parameter names. The baseline score of 3 reflects adequate coverage through schema alone, with no additional value from the description.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('List') and resource ('Test Runs'), and specifies the source ('from Public API') and capability ('with advanced filtering'). It distinguishes from siblings like 'get_test_run_by_id' by indicating it returns multiple items with filtering, but doesn't explicitly contrast with other list-like tools like 'get_all_launches_for_project'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies the tool is for retrieving filtered test runs from the public API, but it never states when to choose it over alternatives like 'get_all_launches_for_project' or 'list_test_run_test_cases'. The parameter descriptions provide some context (e.g., referencing 'get_project_milestones' for milestone IDs), but no explicit tool-selection guidance is given.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/maksimsarychau/mcp-zebrunner'
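For programmatic access, a fetch equivalent of the curl call might look like the following. The response shape isn't documented on this page, so the sketch just prints the parsed JSON.

// Fetch the same server record as the curl example above (Node 18+ or browsers).
const res = await fetch(
  "https://glama.ai/api/mcp/v1/servers/maksimsarychau/mcp-zebrunner",
);
if (!res.ok) throw new Error(`MCP directory API returned ${res.status}`);

// The response schema isn't documented here, so inspect the raw JSON.
const server = await res.json();
console.log(JSON.stringify(server, null, 2));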

If you have feedback or need assistance with the MCP directory API, please join our Discord server.