Glama

cocos_screenshot_preview

Capture PNG screenshots of running Cocos Creator previews to visually verify UI rendering and iterate on game development with actual browser feedback.

Instructions

Capture a PNG screenshot of a running preview URL via Playwright.

Closes the visual-feedback loop for AI clients: after `cocos_build` and `cocos_start_preview`, call this tool to see what the browser actually rendered and iterate on the UI with sight, not guesswork.

`wait_ms` gives the page time to run scripts after `networkidle` fires. Cocos's web build finishes asset loading before the first frame draws, so the default 500 ms covers most scenes; bump it for heavy 3D scenes.

OPTIONAL DEPENDENCY: requires `playwright` plus the Chromium browser binary. Install with:

uv pip install playwright
playwright install chromium

Returns the PNG as an MCP Image, shown inline in the chat.

Input Schema

| Name | Required | Description | Default |
|------|----------|-------------|---------|
| url | No | | http://localhost:8080/ |
| viewport_width | No | | |
| viewport_height | No | | |
| wait_ms | No | | |
| full_page | No | | |
| timeout_ms | No | | |
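As a concrete illustration, a call might pass arguments like these. The specific viewport and timeout values are assumptions (the schema publishes no defaults for them); `wait_ms` is raised above its documented 500 ms default as the instructions suggest for heavy 3D scenes.

```python
# Hypothetical argument payload for cocos_screenshot_preview; any field
# omitted falls back to the tool's own default (only url's is documented).
args = {
    "url": "http://localhost:8080/",
    "viewport_width": 1280,   # assumed value; schema lists no default
    "viewport_height": 720,   # assumed value; schema lists no default
    "wait_ms": 1500,          # raised above the 500 ms default for a heavy 3D scene
    "full_page": False,
    "timeout_ms": 30000,      # assumed value; schema lists no default
}
```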
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key behaviors: it captures a screenshot (implying a read-only operation), returns the PNG as an MCP Image for inline display, and mentions dependencies and installation requirements. However, it lacks details on error handling, performance implications, or what happens if the preview URL is not accessible, which prevents a perfect score.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured and front-loaded with the core purpose, followed by usage context, parameter guidance, and dependency notes. Each sentence adds value, but the inclusion of installation commands, while necessary, slightly reduces conciseness. Overall, it is efficient and avoids redundancy, though it could be slightly tighter.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with 6 parameters, 0% schema coverage, no annotations, and no output schema, the description does a strong job by covering purpose, usage context, parameter semantics, dependencies, and return format. It misses some behavioral details like error cases or performance limits, but given the complexity and lack of structured data, it is largely complete and actionable for an AI agent.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Given 0% schema description coverage for 6 parameters, the description compensates excellently by explaining the semantics of key parameters. It details the purpose of 'wait_ms' ('gives the page time to run scripts after `networkidle` fires') with context on default values and when to adjust ('Bump for heavy 3D scenes'), and implies the use of 'url' for the preview target. This adds significant meaning beyond the bare schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Capture a PNG screenshot') and target resource ('of a running preview URL via Playwright'), distinguishing it from sibling tools like 'cocos_screenshot_preview_diff' by focusing on direct capture rather than comparison. It explicitly mentions the tool's role in the visual-feedback loop after build and preview steps, making its purpose distinct and well-defined.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool ('after `cocos_build` + `cocos_start_preview` call this to see what the browser actually rendered') and why ('iterate on UI with sight, not guesswork'). It also mentions an optional dependency and installation steps, which are crucial prerequisites for correct usage, ensuring the agent knows the necessary setup before invocation.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/chenShengBiao/cocos-mcp'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.