list_testcase

Filter and list test cases from TestDino projects using multiple criteria like status, browser, tags, runtime, or test run attributes to identify specific test results.

Instructions

List test cases with comprehensive filtering options. You can filter by test run (ID or counter), status, spec file, error category, browser, tags, runtime, artifacts, error messages, attempt number, branch, time interval, environment, author, or commit hash. When using test run filters (by_branch, by_commit, by_author, by_environment, by_time_interval, by_pages, page, limit, get_all), the tool first lists test runs matching those criteria, then returns test cases from those filtered test runs. Use this to find specific test cases across your test runs.
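The two-step behavior described above (first narrow test runs, then return their test cases) can be sketched locally. This is a hypothetical illustration of the filtering logic only; the data shapes and function name are invented and do not reflect TestDino's actual implementation or response format.

```python
# Hypothetical sketch of the two-step filter: select runs, then cases.
# The dict shapes below are illustrative, not TestDino's response format.

def list_testcases(runs, cases, run_filter=None, case_filter=None):
    """Step 1: narrow test runs; step 2: return their test cases."""
    selected_runs = [r for r in runs if run_filter is None or run_filter(r)]
    run_ids = {r["id"] for r in selected_runs}
    return [c for c in cases
            if c["run_id"] in run_ids
            and (case_filter is None or case_filter(c))]

runs = [
    {"id": "run1", "branch": "main"},
    {"id": "run2", "branch": "develop"},
]
cases = [
    {"run_id": "run1", "status": "failed"},
    {"run_id": "run2", "status": "passed"},
]

# Analogous to by_branch='main' combined with by_status='failed'.
result = list_testcases(
    runs, cases,
    run_filter=lambda r: r["branch"] == "main",
    case_filter=lambda c: c["status"] == "failed",
)
```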

Input Schema

| Name | Required | Needs run ID/counter | Description |
| --- | --- | --- | --- |
| projectId | Yes | n/a | Project ID. The TestDino project identifier. |
| by_testrun_id | No | n/a | Test run ID(s). Single ID or comma-separated for multiple runs (max 20). Example: 'test_run_123' or 'run1,run2,run3'. Not required when using run-level filters (by_branch, by_commit, by_author, by_environment, by_time_interval, by_pages, page, limit, get_all). |
| counter | No | n/a | Test run counter number. Alternative to by_testrun_id; likewise not required when using run-level filters. Example: 43. |
| by_status | No | Yes | Filter by status: 'passed', 'failed', 'skipped', or 'flaky'. |
| by_spec_file_name | No | Yes | Filter by spec file name. Example: 'login.spec.js' or 'user-profile.spec.ts'. |
| by_error_category | No | Yes | Filter by error category. Example: 'timeout_issues', 'element_not_found', 'assertion_failures', 'network_issues'. |
| by_browser_name | No | Yes | Filter by browser name. Example: 'chromium', 'firefox', 'webkit'. |
| by_tag | No | Yes | Filter by tag(s). Single tag or comma-separated. Example: 'smoke' or 'smoke,regression'. |
| by_total_runtime | No | Yes | Filter by total runtime. Use '<60' for less than 60 seconds, '>100' for more than 100 seconds. Example: '<60', '>100', '<30'. |
| by_artifacts | No | Yes | Set to true to list only test cases with artifacts available (screenshots, videos, traces). |
| by_error_message | No | Yes | Filter by error message (partial match, case-insensitive). Example: 'Test timeout of 60000ms exceeded'. |
| by_attempt_number | No | Yes | Filter by attempt number. Example: 1 for the first attempt, 2 for the second. |
| by_pages | No | No | List test cases by page number. Returns test cases from all test runs on the specified page. |
| by_branch | No | No | Filter by git branch name. First lists test runs on the specified branch, then returns their test cases. Example: 'main', 'develop'. |
| by_time_interval | No | No | Filter by time interval. First lists test runs in the specified period, then returns their test cases. Supports: '1d' (last day), '3d' (last 3 days), 'weekly' (last 7 days), 'monthly' (last 30 days), or a date range like '2024-01-01,2024-01-31'. |
| limit | No | No | Number of results per page (default: 1000, max: 1000). When used alone, first lists test runs, then returns their test cases. |
| by_environment | No | No | Filter by environment. First lists test runs in the specified environment, then returns their test cases. Example: 'production', 'staging', 'development'. |
| by_author | No | No | Filter by commit author name (case-insensitive, partial match). First lists test runs by the specified author, then returns their test cases. |
| by_commit | No | No | Filter by git commit hash (full or partial). First lists test runs with the specified commit, then returns their test cases. |
| page | No | No | Page number for pagination (default: 1). When used alone, first lists test runs on the specified page, then returns their test cases. |
| get_all | No | No | Get all results up to 1000 (default: false). When used alone, first lists all test runs, then returns their test cases. |

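The ID/Counter requirement in the table can be expressed as a small pre-flight check before calling the tool. This is a hypothetical sketch assuming the parameter names documented above; the filter groupings mirror the table, but the validation function itself is invented, not part of TestDino.

```python
# Hypothetical pre-flight check of the ID/Counter rule documented above:
# per-case filters require by_testrun_id or counter; run-level filters do not.

CASE_FILTERS = {
    "by_status", "by_spec_file_name", "by_error_category", "by_browser_name",
    "by_tag", "by_total_runtime", "by_artifacts", "by_error_message",
    "by_attempt_number",
}

def validate(args):
    """Raise ValueError if a per-case filter is used without a run reference."""
    if "projectId" not in args:
        raise ValueError("projectId is required")
    has_run_ref = "by_testrun_id" in args or "counter" in args
    used = CASE_FILTERS & args.keys()
    if used and not has_run_ref:
        raise ValueError(
            f"{sorted(used)} require by_testrun_id or counter")

# Valid: a per-case filter paired with a counter.
validate({"projectId": "proj_1", "counter": 43, "by_status": "failed"})
```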
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It explains the two-step filtering logic for test run filters, which is valuable context. However, it doesn't mention pagination behavior (implied by 'page' and 'limit' parameters but not described), rate limits, authentication requirements, or what happens when no filters are applied. For a complex tool with 21 parameters, this leaves significant behavioral gaps.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
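A common client-side pattern for the pagination the review flags as undocumented is a page loop that stops on a short page. This is a generic sketch with a hypothetical `fetch_page` stand-in for the tool call with `page` and `limit`; it is not TestDino's documented behavior.

```python
# Generic pagination loop. fetch_page is a hypothetical stand-in for invoking
# the tool with 'page' and 'limit'; a short page signals the last one.

def fetch_all(fetch_page, limit=1000):
    results, page = [], 1
    while True:
        batch = fetch_page(page=page, limit=limit)
        results.extend(batch)
        if len(batch) < limit:
            return results
        page += 1

# Fake data source with 5 items and a page size of 2, for illustration.
data = list(range(5))

def fake_fetch(page, limit):
    start = (page - 1) * limit
    return data[start:start + limit]

all_items = fetch_all(fake_fetch, limit=2)
```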

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded: the first sentence states the core purpose, followed by a list of filtering options, then the key behavioral explanation. However, the long list of filter types in the first sentence could be streamlined, and the second sentence is quite dense. Overall, it's efficient but could be slightly more polished.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the high complexity (21 parameters, no annotations, no output schema), the description is incomplete. It explains the filtering logic well but doesn't cover response format, error handling, pagination details, or performance considerations. For a tool with this many options and no structured output documentation, the description should provide more guidance on what to expect when invoking it.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all 21 parameters thoroughly. The description adds marginal value by grouping parameters into two categories (those requiring test run ID/counter vs. those that don't) and explaining the two-step filtering logic. However, it doesn't provide additional semantic context beyond what's in the schema descriptions, such as interaction effects between parameters or practical examples of combined filters.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'List test cases with comprehensive filtering options.' It specifies the verb ('List') and resource ('test cases'), and distinguishes it from siblings like 'list_testruns' by focusing on test cases rather than test runs. However, it doesn't explicitly differentiate from 'list_manual_test_cases' or 'get_testcase_details', which slightly reduces clarity.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for usage: 'When using test run filters... the tool first lists test runs matching those criteria, then returns test cases from those filtered test runs.' It distinguishes between two filtering approaches (direct test run ID/counter vs. broader test run filters) and advises 'Use this to find specific test cases across your test runs.' However, it doesn't explicitly state when NOT to use this tool or name specific alternatives like 'list_testruns' for run-level queries.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/testdino-inc/testdino-mcp'
