get_testcase_details

Retrieve detailed test execution data including error messages, stack traces, steps, and logs to debug failures and analyze test behavior in TestDino projects.

Instructions

Get detailed information about a specific test case. You can identify the test case in two ways: 1) By testcase_id (can be used alone), or 2) By testcase_name combined with testrun_id or counter (required because test cases can have the same name across different test runs). Shows error messages, stack traces, test steps, console logs, and optional artifacts (screenshots, videos, traces). Use this to debug why a test failed or understand how it executed. Example: 'Get test case details for "Verify user can logout and login" in testrun #43'.

Input Schema

| Name | Required | Description | Default |
|------|----------|-------------|---------|
| projectId | Yes | Project ID (Required). The TestDino project identifier. | |
| testcase_id | No | Test case ID. Can be used alone to get test case details. Example: 'test_case_123'. | |
| testcase_name | No | Test case name/title. Must be combined with either testrun_id or counter to identify which test run's test case you want. Example: 'Verify user can logout and login'. | |
| testrun_id | No | Test run ID. Required when using testcase_name to specify which test run's test case you want. Example: 'test_run_6901b2abc6b187e63f536a6b'. | |
| counter | No | Test run counter number. Required when using testcase_name (if testrun_id is not provided) to specify which test run's test case you want. Example: 43. | |
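To make the two identification methods concrete, here is a minimal sketch of the argument payloads an MCP client might send for each path. The payload shape follows the generic MCP `tools/call` convention; the `proj_demo` project ID is a placeholder, and the exact transport depends on your client.

```python
import json

# 1) Identify the test case by testcase_id alone.
by_id = {
    "name": "get_testcase_details",
    "arguments": {
        "projectId": "proj_demo",  # placeholder TestDino project ID
        "testcase_id": "test_case_123",
    },
}

# 2) Identify it by testcase_name plus counter
#    (testrun_id would work in place of counter).
by_name = {
    "name": "get_testcase_details",
    "arguments": {
        "projectId": "proj_demo",
        "testcase_name": "Verify user can logout and login",
        "counter": 43,
    },
}

# Serialize for inspection; a real client would send this over its transport.
print(json.dumps(by_name, indent=2))
```

Note that `testcase_name` on its own is ambiguous across test runs, which is why the schema requires pairing it with `testrun_id` or `counter`.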
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes what the tool returns (error messages, stack traces, test steps, console logs, optional artifacts) and its purpose for debugging, which covers key behavioral traits. However, it lacks details on potential limitations like rate limits, authentication needs, or error handling, leaving some gaps in transparency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded, starting with the core purpose, followed by identification methods, returned details, usage context, and an example. Each sentence adds essential information without redundancy, making it efficient and well-structured for quick understanding.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity (5 parameters, no annotations, no output schema), the description does a good job covering purpose, usage, and parameter logic. However, it lacks details on output format (e.g., structure of returned data) and behavioral aspects like error responses or pagination, which would be needed for full completeness in this context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so every parameter is already documented in the schema itself. The description still adds value by explaining the two identification methods (testcase_id alone vs. testcase_name with testrun_id/counter) and providing an example, which clarifies parameter semantics beyond the schema. However, it adds no syntax or format details for parameters like projectId, so it falls short of a full 5.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('Get detailed information') and resources ('about a specific test case'), and distinguishes it from siblings like 'list_testcase' by focusing on detailed retrieval rather than listing. It explicitly mentions what information is returned (error messages, stack traces, etc.), making the purpose highly specific and differentiated.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool ('to debug why a test failed or understand how it executed') and includes a concrete example. It also details two alternative ways to identify the test case (by testcase_id alone or by testcase_name with testrun_id/counter), offering clear usage scenarios without misleading information.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
