
cocos_audit_scene_modules

Audit Cocos Creator scene components to detect disabled engine modules that cause runtime failures. Identifies mismatches between scene requirements and project configuration to prevent silent component failures.

Instructions

Cross-check scene components against the project's engine.json.

Catches the single highest-frequency "build succeeded, game broken at runtime" failure: using a component (RigidBody2D, Spine, VideoPlayer, ...) whose engine module is currently disabled. Build produces artifacts, the scene loads, but the components silently do nothing.

project_path=None → walks up from the scene file looking for package.json. Pass explicitly when the scene lives outside the project (prefab library, template copy).

Returns {ok, project_path, required, enabled, disabled, actions}. When ok=False, actions lists copy-pasteable next steps (cocos_set_engine_module calls plus the library clean that module changes require).
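The core check behind the required/enabled/disabled fields is a set difference. A minimal sketch, assuming hypothetical module names and an illustrative `audit_gaps` helper (not the tool's real internals):

```python
def audit_gaps(required: list[str], enabled: list[str]) -> dict:
    """Report which engine modules a scene requires but the project has disabled."""
    disabled = sorted(set(required) - set(enabled))
    return {
        "ok": not disabled,
        "required": sorted(set(required)),
        "enabled": sorted(set(enabled)),
        "disabled": disabled,
        # Copy-pasteable remediation, mirroring the documented actions field.
        "actions": [
            f"cocos_set_engine_module(module='{m}', enabled=True)" for m in disabled
        ],
    }
```

For example, a scene using Spine in a project where only 2D physics is enabled would yield ok=False, disabled=["spine"], and one remediation action.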

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| scene_path | Yes | | |
| project_path | No | | None |
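Two hypothetical call payloads matching the schema above (the paths are made-up examples):

```python
# Project root inferred by walking up from the scene file.
infer_root = {"scene_path": "assets/scenes/main.scene"}

# Explicit root, for a scene that lives outside the project
# (e.g. a shared prefab library or a template copy).
explicit_root = {
    "scene_path": "/shared/prefabs/ui.scene",
    "project_path": "/work/MyGame",
}
```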
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes the tool's behavior: it walks up from the scene file to find package.json if project_path is not provided, and returns a structured result with fields like 'ok', 'required', 'enabled', 'disabled', and 'actions'. It also explains what happens when ok=False (actions list next steps). However, it doesn't mention error handling, performance implications, or side effects like file system access.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured: it starts with the core purpose, explains the problem it solves, provides parameter usage notes, and describes the return value—all in a compact format. Every sentence adds value without redundancy, making it easy to parse and understand quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (validating engine modules against scene components) and lack of annotations or output schema, the description does a good job of covering key aspects: purpose, usage, parameters, and return structure. It could be more complete by detailing error cases or performance considerations, but it provides sufficient context for an agent to use the tool effectively in most scenarios.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage for 2 parameters, the description compensates well. It explains the semantics of 'project_path' (defaults to null, walks up from scene file, use when scene is outside project) and implies 'scene_path' is required for the audit. While it doesn't detail parameter formats or constraints, it adds meaningful context beyond the bare schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Cross-check scene components against the project's engine.json' to catch a specific runtime failure scenario. It explicitly distinguishes this from sibling tools by focusing on module validation rather than scene creation, component addition, or other operations listed in the sibling tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool: for detecting 'build succeeded, game broken at runtime' failures due to disabled engine modules. It also specifies when to pass the project_path parameter (when the scene is outside the project) and mentions an alternative tool ('cocos_set_engine_module') for remediation, giving clear context for usage.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
