testmo_append_automation_run_thread

Append test results, artifacts, or custom fields to an existing automation run thread to update its test outcomes and metadata.

Instructions

Append test results, artifacts, or fields to an automation run thread.

Each test in the 'tests' array: {key, name, folder, status, elapsed, file, line, assertions, artifacts, fields}. Status values: 'passed', 'failed', 'skipped', etc.

Args:
    thread_id: The automation run thread ID.
    elapsed_observed: Partial observed time in microseconds to add.
    elapsed_computed: Partial computed time in microseconds to add.
    artifacts: External test artifacts to append.
    fields: Custom fields to append.
    tests: Test results to submit [{name, status, ...}].
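
For illustration, a minimal call might pass arguments like the following (all values are hypothetical; per the input schema, only thread_id is required):

    arguments = {
        "thread_id": 42,                 # existing automation run thread (hypothetical ID)
        "elapsed_observed": 1_250_000,   # 1.25 s of observed time, in microseconds
        "tests": [
            {
                "name": "test_login_succeeds",
                "status": "passed",      # 'passed', 'failed', 'skipped', etc.
                "folder": "auth",
                "elapsed": 830_000,      # per-test time, also in microseconds
            }
        ],
    }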

Input Schema

| Name             | Required | Description | Default |
|------------------|----------|-------------|---------|
| thread_id        | Yes      |             |         |
| elapsed_observed | No       |             |         |
| elapsed_computed | No       |             |         |
| artifacts        | No       |             |         |
| fields           | No       |             |         |
| tests            | No       |             |         |

Output Schema

No fields are defined in the output schema.
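
Although the schema is empty, the _request helper reproduced below implies two possible return shapes: {"success": True} when the API answers 204 No Content, otherwise the parsed JSON body of the response. A defensive caller might handle both, e.g.:

    # Hedged usage sketch; the thread ID and test values are hypothetical.
    result = await testmo_append_automation_run_thread(
        thread_id=42,
        tests=[{"name": "test_login_succeeds", "status": "passed"}],
    )
    if result.get("success") is True:
        pass  # 204 No Content: the append was accepted, no body returned
    else:
        print(result)  # any other success status: Testmo's JSON response body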

Implementation Reference

  • The async function testmo_append_automation_run_thread that implements the tool logic. It accepts thread_id, elapsed_observed, elapsed_computed, artifacts, fields, and tests, builds a data payload, and POSTs to /automation/runs/threads/{thread_id}/append via _request().
    @mcp.tool()
    async def testmo_append_automation_run_thread(
        thread_id: int,
        elapsed_observed: int | None = None,
        elapsed_computed: int | None = None,
        artifacts: list[dict[str, Any]] | None = None,
        fields: list[dict[str, Any]] | None = None,
        tests: list[dict[str, Any]] | None = None,
    ) -> dict[str, Any]:
        """Append test results, artifacts, or fields to an automation run thread.
    
        Each test in the 'tests' array: {key, name, folder, status, elapsed, file, line, assertions, artifacts, fields}.
        Status values: 'passed', 'failed', 'skipped', etc.
    
        Args:
            thread_id: The automation run thread ID.
            elapsed_observed: Partial observed time in microseconds to add.
            elapsed_computed: Partial computed time in microseconds to add.
            artifacts: External test artifacts to append.
            fields: Custom fields to append.
            tests: Test results to submit [{name, status, ...}].
        """
        data: dict[str, Any] = {}
        if elapsed_observed is not None:
            data["elapsed_observed"] = elapsed_observed
        if elapsed_computed is not None:
            data["elapsed_computed"] = elapsed_computed
        if artifacts:
            data["artifacts"] = artifacts
        if fields:
            data["fields"] = fields
        if tests:
            data["tests"] = tests
        return await _request(
            "POST", f"/automation/runs/threads/{thread_id}/append", data=data
        )
  • Imports for the @mcp.tool() decorator (from server) and the _request helper (from client).
    from ..server import mcp
    from ..client import _request
  • The @mcp.tool() decorator on line 261 registers testmo_append_automation_run_thread as an MCP tool with the FastMCP server.
    @mcp.tool()
  • The _request helper function that performs the actual HTTP request to the Testmo API.
    import json
    from typing import Any

    # _get_client() is defined elsewhere in client.py and returns a configured
    # async HTTP client (presumably an httpx.AsyncClient); see the sketch below.
    async def _request(
        method: str,
        endpoint: str,
        data: dict[str, Any] | None = None,
        params: dict[str, Any] | None = None,
    ) -> dict[str, Any]:
        async with _get_client() as client:
            response = await client.request(
                method=method,
                url=endpoint,
                json=data,
                params=params,
            )
            if response.status_code == 204:
                return {"success": True}
            if response.status_code >= 400:
                try:
                    error_body = response.json()
                except Exception:
                    error_body = response.text
                raise RuntimeError(
                    f"Testmo API error {response.status_code}: "
                    f"{json.dumps(error_body) if isinstance(error_body, dict) else error_body}"
                )
            return response.json()
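  • The _get_client helper that _request depends on is not reproduced on this page. A minimal sketch of what it plausibly provides, assuming httpx and environment-variable configuration (the variable names TESTMO_URL and TESTMO_API_KEY are assumptions, not taken from the source):
    import os

    import httpx

    def _get_client() -> httpx.AsyncClient:
        # Assumption: the Testmo instance URL and API token come from the environment.
        base_url = os.environ["TESTMO_URL"].rstrip("/") + "/api/v1"
        token = os.environ["TESTMO_API_KEY"]
        return httpx.AsyncClient(
            base_url=base_url,
            headers={"Authorization": f"Bearer {token}"},
            timeout=30.0,
        )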
Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided. Description mentions appending but does not disclose side effects, idempotency, or error handling. Partial time accumulation is unclear.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Description is verbose with bullet points and code block; could be more concise. Useful details but not optimally structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

With 6 parameters and no output schema shown, the description lacks return value info. Among many sibling tools, it doesn't fully contextualize its role.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 0%, but description adds some meaning for 'tests' parameter with format. Other parameters get brief descriptions (e.g., 'Partial observed time'), but not enough to fully compensate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states that the tool appends test results, artifacts, or fields to an automation run thread, and it specifies the format of the tests array. It is distinct from the sibling 'testmo_append_automation_run', which likely appends to a run rather than a thread.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when to use this tool vs alternatives like testmo_append_automation_run. No prerequisites mentioned (e.g., thread must exist).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
