stage_bind_physics

Links visual scene objects to physics bodies so physics simulations drive object animations for realistic motion in 3D scenes.

Instructions

Bind a scene object to a physics body.

Links a visual object to a physics simulation body so that
the physics simulation drives the object's animation.

Args:
    scene_id: Scene identifier
    object_id: Scene object ID
    physics_body_id: Physics body ID from chuk-mcp-physics
        Format: "rapier://sim-{sim_id}/body-{body_id}"
        Example: "rapier://sim-abc123/body-ball"

Returns:
    BindPhysicsResponse confirmation
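The physics_body_id format can be built and validated with a small helper. This is an illustrative sketch based only on the documented format string; the helper names are assumptions, not part of the server API:

```python
import re

# Documented format: "rapier://sim-{sim_id}/body-{body_id}"
_BODY_ID = re.compile(r"^rapier://sim-(?P<sim_id>[\w-]+)/body-(?P<body_id>[\w-]+)$")

def make_physics_body_id(sim_id: str, body_id: str) -> str:
    """Build a physics_body_id string in the documented format."""
    return f"rapier://sim-{sim_id}/body-{body_id}"

def parse_physics_body_id(physics_body_id: str) -> tuple[str, str]:
    """Return (sim_id, body_id), raising ValueError on a malformed ID."""
    m = _BODY_ID.match(physics_body_id)
    if m is None:
        raise ValueError(f"not a valid physics body ID: {physics_body_id!r}")
    return m.group("sim_id"), m.group("body_id")

print(make_physics_body_id("abc123", "ball"))  # rapier://sim-abc123/body-ball
print(parse_physics_body_id("rapier://sim-abc123/body-ball"))  # ('abc123', 'ball')
```

Validating the ID before calling stage_bind_physics catches format mistakes locally instead of surfacing them as a server-side error.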

Tips for LLMs:
    - Create physics body first using chuk-mcp-physics
    - Then create matching scene object with same shape/size
    - Bind them together so physics drives visuals
    - Use stage_bake_simulation to convert physics → keyframes

Example:
    # 1. Create physics simulation (chuk-mcp-physics)
    sim = await create_simulation(gravity_y=-9.81)

    # 2. Add physics body
    await add_rigid_body(
        sim_id=sim.sim_id,
        body_id="ball",
        body_type="dynamic",
        shape="sphere",
        radius=1.0,
        position=[0, 5, 0]
    )

    # 3. Create scene object
    await stage_add_object(
        scene_id=scene_id,
        object_id="ball",
        object_type="sphere",
        radius=1.0,
        position_y=5.0
    )

    # 4. Bind them
    await stage_bind_physics(
        scene_id=scene_id,
        object_id="ball",
        physics_body_id=f"rapier://{sim.sim_id}/body-ball"
    )
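Conceptually, a binding makes the visual object mirror the physics body's transform on every frame. The following self-contained sketch illustrates that idea only; it is not the server's actual implementation, and all class names here are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class PhysicsBody:
    """Stand-in for a simulated rigid body (position only, for brevity)."""
    position: tuple[float, float, float]

@dataclass
class SceneObject:
    """Stand-in for a visual scene object."""
    position: tuple[float, float, float] = (0.0, 0.0, 0.0)

@dataclass
class Binding:
    """Pushes the physics transform onto the visual object each frame."""
    body: PhysicsBody
    obj: SceneObject

    def sync(self) -> None:
        self.obj.position = self.body.position

ball_body = PhysicsBody(position=(0.0, 5.0, 0.0))
ball_obj = SceneObject()
binding = Binding(body=ball_body, obj=ball_obj)

# Advance the "simulation" one frame of free fall, then sync the visual.
dt, g = 1.0 / 60.0, -9.81
x, y, z = ball_body.position
ball_body.position = (x, y + 0.5 * g * dt * dt, z)
binding.sync()
print(ball_obj.position)  # visual now matches the simulated position
```

This is the relationship stage_bind_physics establishes: after binding, the scene object's animation comes from the simulation rather than from manually authored keyframes.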

Input Schema

Name             Required  Description  Default
scene_id         Yes       -            -
object_id        Yes       -            -
physics_body_id  Yes       -            -
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden, and it does so well by explaining the tool's behavior: it links objects for physics-driven animation, requires pre-existing resources (a physics body and a scene object), and states the outcome (physics drives visuals). It could improve by specifying error conditions or performance implications, but it covers the core behavioral aspects adequately.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with clear sections (purpose, args, returns, tips, example) and front-loaded key information. While comprehensive, it maintains efficiency with no redundant sentences. The example is detailed but necessary for illustrating the workflow. Minor trimming could be possible, but overall it's appropriately sized for the tool's complexity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (3 parameters, no annotations, no output schema), the description is complete. It explains the tool's purpose, parameters, usage workflow, prerequisites, and relationship to other tools. The example provides concrete implementation guidance, and the tips section addresses common LLM usage patterns. No significant gaps remain for effective tool invocation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description adds significant meaning beyond the input schema, which has 0% description coverage. It explains each parameter's purpose (scene identifier, scene object ID, physics body ID), provides format details for physics_body_id with an example, and contextualizes how parameters relate to the binding process. This fully compensates for the schema's lack of documentation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('bind', 'links') and resources ('scene object', 'physics body'), and distinguishes it from siblings by explaining that it connects a visual object to a physics simulation body. It explicitly states that the physics simulation drives the animation, which differentiates it from other stage tools such as stage_add_object or stage_bake_simulation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool through the 'Tips for LLMs' section and example workflow. It specifies prerequisites (create physics body first, then matching scene object), sequencing (bind them together), and alternatives (use stage_bake_simulation for conversion). This gives clear context for when and how to invoke this tool versus other options.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
