testmo_append_automation_run_thread
Append test results, artifacts, or custom fields to an existing automation run thread to update its test outcomes and metadata.
Instructions
Append test results, artifacts, or fields to an automation run thread.
Each entry in the 'tests' array is an object of the form {key, name, folder, status, elapsed, file, line, assertions, artifacts, fields}. Status values include 'passed', 'failed', and 'skipped', among others.
Args:
- thread_id: The automation run thread ID.
- elapsed_observed: Partial observed time in microseconds to add.
- elapsed_computed: Partial computed time in microseconds to add.
- artifacts: External test artifacts to append.
- fields: Custom fields to append.
- tests: Test results to submit [{name, status, ...}].
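A minimal sketch of an invocation payload, assuming the field shape documented above. All concrete values, including the thread ID, the key format, and the per-test 'elapsed' unit (assumed to be microseconds, matching the thread-level elapsed arguments), are illustrative:

```python
# Illustrative entry for the 'tests' array. Field names follow the shape
# documented above; every value here is made up for the example.
test_result = {
    "key": "login::test_valid_credentials",  # hypothetical unique key
    "name": "test_valid_credentials",
    "folder": "login",
    "status": "passed",                      # 'passed', 'failed', 'skipped', etc.
    "elapsed": 1_250_000,                    # 1.25 s, assuming microseconds
    "file": "tests/test_login.py",
    "line": 42,
    "assertions": 3,
}

# The tool call then combines the required thread_id with the optional fields:
arguments = {"thread_id": 123, "tests": [test_result]}
```

Only the arguments you actually pass end up in the request; omitted optional fields are left out of the payload entirely, as the handler below shows.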
Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| thread_id | Yes | The automation run thread ID. | |
| elapsed_observed | No | Partial observed time in microseconds to add. | |
| elapsed_computed | No | Partial computed time in microseconds to add. | |
| artifacts | No | External test artifacts to append. | |
| fields | No | Custom fields to append. | |
| tests | No | Test results to submit [{name, status, ...}]. | |
Output Schema
| Name | Required | Description | Default |
|---|---|---|---|
| No fields documented | | | |
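No output fields are documented. Per the `_request` helper shown under Implementation Reference, the tool returns `{"success": True}` for 204 responses, raises on HTTP errors, and otherwise returns the parsed JSON body. A minimal sketch of that normalization (a hypothetical helper, not part of the module):

```python
import json
from typing import Any


def normalize_response(status_code: int, body: Any) -> dict[str, Any]:
    """Mirror the response branches in _request (sketch, not the real client).

    204 -> synthetic success marker; >= 400 -> RuntimeError; else the JSON body.
    """
    if status_code == 204:
        return {"success": True}
    if status_code >= 400:
        detail = json.dumps(body) if isinstance(body, dict) else body
        raise RuntimeError(f"Testmo API error {status_code}: {detail}")
    return body
```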
Implementation Reference
- testmo/tools/automation.py:261-296 (handler): The async function `testmo_append_automation_run_thread` that implements the tool logic. It accepts `thread_id`, `elapsed_observed`, `elapsed_computed`, `artifacts`, `fields`, and `tests`, builds a data payload, and POSTs to `/automation/runs/threads/{thread_id}/append` via `_request()`.

```python
@mcp.tool()
async def testmo_append_automation_run_thread(
    thread_id: int,
    elapsed_observed: int | None = None,
    elapsed_computed: int | None = None,
    artifacts: list[dict[str, Any]] | None = None,
    fields: list[dict[str, Any]] | None = None,
    tests: list[dict[str, Any]] | None = None,
) -> dict[str, Any]:
    """Append test results, artifacts, or fields to an automation run thread.

    Each test in the 'tests' array: {key, name, folder, status, elapsed,
    file, line, assertions, artifacts, fields}. Status values: 'passed',
    'failed', 'skipped', etc.

    Args:
        thread_id: The automation run thread ID.
        elapsed_observed: Partial observed time in microseconds to add.
        elapsed_computed: Partial computed time in microseconds to add.
        artifacts: External test artifacts to append.
        fields: Custom fields to append.
        tests: Test results to submit [{name, status, ...}].
    """
    data: dict[str, Any] = {}
    if elapsed_observed is not None:
        data["elapsed_observed"] = elapsed_observed
    if elapsed_computed is not None:
        data["elapsed_computed"] = elapsed_computed
    if artifacts:
        data["artifacts"] = artifacts
    if fields:
        data["fields"] = fields
    if tests:
        data["tests"] = tests
    return await _request(
        "POST", f"/automation/runs/threads/{thread_id}/append", data=data
    )
```

- testmo/tools/automation.py:3-4 (registration): Imports for the `@mcp.tool()` decorator (from server) and the `_request` helper (from client).

```python
from ..server import mcp
from ..client import _request
```

- testmo/tools/automation.py:261-261 (registration): The `@mcp.tool()` decorator on line 261 registers `testmo_append_automation_run_thread` as an MCP tool with the FastMCP server.

```python
@mcp.tool()
```

- testmo/client.py:25-49 (helper): The `_request` helper function that performs the actual HTTP request to the Testmo API.

```python
async def _request(
    method: str,
    endpoint: str,
    data: dict[str, Any] | None = None,
    params: dict[str, Any] | None = None,
) -> dict[str, Any]:
    async with _get_client() as client:
        response = await client.request(
            method=method,
            url=endpoint,
            json=data,
            params=params,
        )
        if response.status_code == 204:
            return {"success": True}
        if response.status_code >= 400:
            try:
                error_body = response.json()
            except Exception:
                error_body = response.text
            raise RuntimeError(
                f"Testmo API error {response.status_code}: "
                f"{json.dumps(error_body) if isinstance(error_body, dict) else error_body}"
            )
        return response.json()
```