testmo_get_case

Retrieve detailed information about a test case, including custom fields and Gherkin scenarios, by providing project and case IDs.

Instructions

Get full details of a specific test case, including custom fields and Gherkin scenarios.

Args:
  • project_id: The project ID.
  • case_id: The test case ID.

Input Schema

Name        Required  Description  Default
project_id  Yes       —            —
case_id     Yes       —            —

Output Schema

No output fields are documented.

Implementation Reference

  • The handler function for the 'testmo_get_case' tool. Rather than fetching the case directly, it pages through the project's cases and scans each page for the entry matching the given case_id, then returns its full details, including custom fields and Gherkin scenarios.
    @mcp.tool()
    async def testmo_get_case(project_id: int, case_id: int) -> dict[str, Any]:
        """Get full details of a specific test case, including custom fields and Gherkin scenarios.
    
        Args:
            project_id: The project ID.
            case_id: The test case ID.
        """
        page = 1
        while True:
            params: dict[str, Any] = {"page": page, "per_page": 100}
            result = await _request("GET", f"/projects/{project_id}/cases", params=params)
            for case in result.get("result", []):
                if case["id"] == case_id:
                    return case
            if result.get("next_page") is None:
                break
            page += 1
            await asyncio.sleep(RATE_LIMIT_DELAY)
        raise RuntimeError(f"Case {case_id} not found in project {project_id}")
  • The tool is registered via the @mcp.tool() decorator (line 56) on the testmo_get_case async function. The 'mcp' instance is a FastMCP server (defined in testmo/server.py line 6) and tools are automatically registered when the decorator is applied.
    @mcp.tool()
  • The _request helper function used by testmo_get_case to make HTTP requests to the Testmo API. It handles authentication, JSON serialization, error handling, and response parsing.
    async def _request(
        method: str,
        endpoint: str,
        data: dict[str, Any] | None = None,
        params: dict[str, Any] | None = None,
    ) -> dict[str, Any]:
        async with _get_client() as client:
            response = await client.request(
                method=method,
                url=endpoint,
                json=data,
                params=params,
            )
            if response.status_code == 204:
                return {"success": True}
            if response.status_code >= 400:
                try:
                    error_body = response.json()
                except Exception:
                    error_body = response.text
                raise RuntimeError(
                    f"Testmo API error {response.status_code}: "
                    f"{json.dumps(error_body) if isinstance(error_body, dict) else error_body}"
                )
            return response.json()
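The error branch can be illustrated by extracting the message formatting into a small standalone function (a sketch; `format_testmo_error` is a hypothetical name, not part of the module):

```python
import json

def format_testmo_error(status_code: int, error_body: object) -> str:
    # Mirrors the message _request raises for HTTP status >= 400:
    # dict bodies are JSON-encoded, anything else is used verbatim.
    body = json.dumps(error_body) if isinstance(error_body, dict) else error_body
    return f"Testmo API error {status_code}: {body}"

print(format_testmo_error(404, {"message": "Case not found"}))
# Testmo API error 404: {"message": "Case not found"}
print(format_testmo_error(500, "Internal Server Error"))
# Testmo API error 500: Internal Server Error
```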
  • The FastMCP server instance used for tool registration. The @mcp.tool() decorator registers functions as MCP tools.
    from dotenv import load_dotenv
    from mcp.server.fastmcp import FastMCP
    
    load_dotenv()
    
    mcp = FastMCP("testmo-mcp")
  • testmo-mcp.py:1-23 (registration)
    Entry point that imports testmo.tools.cases (which triggers the @mcp.tool() decorator registration). All tool modules are imported here to ensure tools are registered on the mcp instance before running.
    """
    Testmo MCP Server — FastMCP implementation.
    
    Provides tools for AI assistants to manage test cases, folders, projects,
    runs, automation runs, attachments, and more via the Testmo REST API.
    """
    
    from testmo.server import mcp
    
    # Import tool modules to register all tools on the mcp instance
    import testmo.tools.projects  # noqa: F401
    import testmo.tools.folders  # noqa: F401
    import testmo.tools.milestones  # noqa: F401
    import testmo.tools.cases  # noqa: F401
    import testmo.tools.runs  # noqa: F401
    import testmo.tools.attachments  # noqa: F401
    import testmo.tools.automation  # noqa: F401
    import testmo.tools.issues  # noqa: F401
    import testmo.tools.composite  # noqa: F401
    import testmo.tools.utility  # noqa: F401
    
    if __name__ == "__main__":
        mcp.run(transport="stdio")
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations present, the description reveals that the tool returns custom fields and Gherkin scenarios, but it omits other behaviors such as its read-only nature, required permissions, or potential side effects. The behavioral transparency is adequate but not comprehensive.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is brief and front-loaded with the main purpose, but the parameter list repeats information from the schema without adding value. It earns its place but could be more efficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The tool has an output schema, so return values need not be described. The description mentions returned content (custom fields, Gherkin scenarios). However, it lacks context about how this tool fits with its siblings and about constraints (e.g., that a valid project ID is required).

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema has 0% description coverage. The description lists parameters with minimal explanations ('The project ID', 'The test case ID') that add little beyond the parameter names and types already in the schema. No constraints, formats, or examples are provided.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Get full details'), the resource ('specific test case'), and the scope ('including custom fields and Gherkin scenarios'). It distinguishes the tool from siblings like testmo_get_all_cases, which retrieve lists.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives (e.g., testmo_list_cases, testmo_search_cases). The description does not specify prerequisites, context, or exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
