Glama

cocos_run_preview_sequence

Execute automated browser test sequences in a single session to maintain game state across multiple actions like clicks, typing, and screenshots for Cocos Creator previews.

Instructions

Run a list of actions in a SINGLE browser session.

Essential for play-testing: game state resets on every page reload, so a multi-step test plan (click Start → type name → press Enter → read score) MUST share one session.

Each action is a dict with a kind plus kind-specific keys:

{"kind": "click", "x": int, "y": int, "wait_ms"?: int, "button"?: str}
{"kind": "key", "key": str, "wait_ms"?: int}
{"kind": "type", "text": str, "wait_ms"?: int}
{"kind": "drag", "from_x": int, "from_y": int, "to_x": int, "to_y": int, "steps"?: int}
{"kind": "wait", "ms": int}
{"kind": "read_state", "expression": str}
{"kind": "screenshot"}
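As a sketch, the multi-step test plan mentioned above (click Start → type name → press Enter → read score) could be expressed as an actions list like this; the coordinates and the read_state expression are hypothetical placeholders, not values taken from any real game:

```python
# Hypothetical test plan for a single cocos_run_preview_sequence call.
# Coordinates and "window.gameScore" are placeholders for illustration only.
actions = [
    {"kind": "click", "x": 480, "y": 320, "wait_ms": 500},     # click the Start button
    {"kind": "type", "text": "PlayerOne"},                     # type a player name
    {"kind": "key", "key": "Enter", "wait_ms": 1000},          # confirm and wait for the game
    {"kind": "read_state", "expression": "window.gameScore"},  # read score from the page
    {"kind": "screenshot"},                                    # capture the final frame
]
```

All five steps share one browser session, so the score read in step 4 reflects the clicks and keystrokes from steps 1–3.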

Returns a list parallel to actions: each entry is {kind, ok, result, error}. A single failed action does NOT abort the sequence, so earlier successful reads/screenshots still come back.
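A minimal sketch of consuming that parallel list; the `results` value here is a hand-written stand-in shaped like the documented {kind, ok, result, error} entries, not real tool output:

```python
# Stand-in return value mirroring the documented per-action entry shape.
results = [
    {"kind": "click", "ok": True, "result": None, "error": None},
    {"kind": "read_state", "ok": True, "result": 42, "error": None},
    {"kind": "screenshot", "ok": False, "result": None, "error": "timeout"},
]

# A failed action does not abort the sequence, so check every entry's ok flag
# rather than assuming one error means the whole run produced nothing.
failures = [(i, r["kind"], r["error"]) for i, r in enumerate(results) if not r["ok"]]

# Earlier successful reads still come back even when a later action failed.
score = next((r["result"] for r in results if r["kind"] == "read_state" and r["ok"]), None)
```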

screenshot results return {"png_bytes_hex": "..."} — decode with bytes.fromhex() to get raw PNG data (MCP JSON can't carry bytes natively).
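For example, decoding a screenshot entry back into raw PNG bytes (the result dict here is a shortened stand-in carrying only the 8-byte PNG signature; a real one holds the full image):

```python
# Stand-in screenshot result; a real result's hex string encodes the whole PNG.
shot = {"png_bytes_hex": "89504e470d0a1a0a"}

# Decode the hex string back into raw bytes.
png = bytes.fromhex(shot["png_bytes_hex"])

# Sanity-check the PNG magic bytes before writing the file.
assert png.startswith(b"\x89PNG\r\n\x1a\n")

with open("screenshot.png", "wb") as f:
    f.write(png)
```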

Input Schema

Name             Required
url              Yes
actions          Yes
viewport_width   No
viewport_height  No
timeout_ms       No

(The schema supplies no descriptions or defaults.)

Output Schema

Name    Required
result  Yes
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes the key behaviors: the tool runs actions sequentially in a shared session, does not abort on single failures (so earlier successful reads and screenshots are still returned), and documents the screenshot data encoding (PNG bytes as hex). It says nothing about error handling beyond the 'error' field, rate limits, or authentication needs, but it covers the core operational traits well.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured and front-loaded with the core purpose, followed by essential usage context, parameter details, and return value explanation. Every sentence adds value: the first states the purpose, the second provides usage guidance, the third introduces action structure, the list details action kinds, and the remainder covers output behavior and data handling. No wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (5 parameters, 0% schema coverage, no annotations, but with output schema), the description is remarkably complete. It explains the purpose, usage context, parameter semantics (especially the complex 'actions' array), output format, and special data handling (hex encoding for screenshots). The output schema existence means return values need less explanation, and the description provides exactly what's needed for effective use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description must fully compensate. It provides detailed semantics for the 'actions' parameter, listing all 7 action kinds with their specific keys and optional fields. It also implies 'url' is for browser navigation and mentions viewport/timeout defaults contextually. This adds substantial meaning beyond the bare schema, fully documenting the complex 'actions' array structure.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Run a list of actions in a SINGLE browser session.' It specifies the verb ('Run') and resource ('list of actions'), and distinguishes itself from potential siblings by emphasizing the single-session requirement for play-testing, which is unique among the listed sibling tools focused on Cocos engine operations rather than browser interaction.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool: 'Essential for play-testing: game state resets on every page reload, so a multi-step test plan... MUST share one session.' It clearly defines the context (multi-step browser interactions in testing) and implicitly excludes single-step operations or non-browser tasks, though it doesn't name specific alternative tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/chenShengBiao/cocos-mcp'
