
Start Test Recording [Pro]

start_test_recording

Start recording all subsequent MCP tool calls to generate a reproducible test script. Provide a test name to label the generated code for automated mobile device testing.

Instructions

[Pro] Start recording all MCP tool calls to generate a reproducible test script. All subsequent tool calls will be logged until stop_test_recording is called.

Input Schema

Name       Required  Description                                  Default
test_name  No        Name for the test (used in generated code)  —
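To make the schema concrete, here is a hypothetical invocation of this tool, assuming the standard MCP JSON-RPC `tools/call` request shape; the "login_flow" test name is purely illustrative.

```python
import json

# Hypothetical JSON-RPC payload for invoking start_test_recording.
# The tools/call envelope follows the MCP specification; the
# test_name argument matches the input schema above.
payload = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "start_test_recording",
        "arguments": {"test_name": "login_flow"},
    },
}

print(json.dumps(payload, indent=2))
```

Because `test_name` is optional, the `arguments` object may also be left empty.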
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description notes that tool calls are logged until stop_test_recording is called, which is a key behavioral trait beyond the input schema. Since annotations are missing, the description carries the full burden; it sufficiently indicates that a recording session begins, but could also mention that the session persists across calls and that the calls are stored.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence that effectively conveys purpose and usage with no wasted words. It is front-loaded with the key action and context.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with one optional parameter and no output schema, the description is sufficiently complete. It explains the tool's role in a test recording workflow and mentions the lifecycle (until stop_test_recording). It could note what happens if a test with the same name already exists, but it is adequate overall.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% coverage with a single parameter 'test_name' described as 'Name for the test (used in generated code)'. The description adds no additional param info, but given full schema coverage, the baseline of 3 is exceeded because the description's context about generating a test script adds meaning to the parameter's purpose.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool starts recording MCP tool calls to generate a reproducible test script, which is a specific verb+resource combination. It distinguishes from sibling tools like 'stop_test_recording' and 'get_recorded_actions' by noting that subsequent calls are logged until stopped.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly says to use the tool when you want to generate a test script, and that it logs subsequent calls until 'stop_test_recording' is called, providing clear context. It does not explicitly state when not to use it or name alternatives, though the server's sibling tools cover the related operations.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/saranshbamania/mobile-device-mcp'
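The same lookup can be sketched from Python. This is a minimal, unofficial example: the URL comes from the curl command above, and no assumptions are made about the response body beyond it being JSON.

```python
import json
import urllib.request

# Endpoint for this server's record in the Glama MCP directory API,
# taken from the curl example.
URL = "https://glama.ai/api/mcp/v1/servers/saranshbamania/mobile-device-mcp"

def build_request(url: str) -> urllib.request.Request:
    """Build a GET request that asks for a JSON response."""
    return urllib.request.Request(url, headers={"Accept": "application/json"})

req = build_request(URL)
# To perform the actual fetch:
#     data = json.load(urllib.request.urlopen(req))
```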

If you have feedback or need assistance with the MCP directory API, please join our Discord server.