generate_topdown_asset

Generate 2D game assets with consistent top-down perspective for RPGs and strategy games. Create props, characters, tiles, or creatures using AI workflows with viewpoint control.

Instructions

Simplified tool to generate top-down 2D game assets with guaranteed viewpoint.

This is a convenience wrapper around generate_with_viewpoint specifically for
top-down games (RPG, strategy, etc.).

Args:
    prompt: Description of the asset (e.g., "wooden treasure chest", "stone well")
    asset_type: Type of asset - "prop", "character", "creature", "tile", "effect"
    size: Output size in pixels (square)
    control_strength: How strictly to enforce top-down view (0.5-1.0)
    seed: Random seed for reproducibility
    save_to_file: Whether to save the image to disk (default: True for reliability)

Returns:
    JSON with file_path to generated image
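
For concreteness, here is a minimal sketch of invoking the tool from Python with the official MCP SDK. The server launch command and the returned values are illustrative assumptions; substitute however you normally start this server:

    import asyncio
    import json

    from mcp import ClientSession, StdioServerParameters
    from mcp.client.stdio import stdio_client

    async def main():
        # Assumed launch command -- adjust to your setup.
        params = StdioServerParameters(command="python", args=["server/main.py"])
        async with stdio_client(params) as (read, write):
            async with ClientSession(read, write) as session:
                await session.initialize()
                result = await session.call_tool("generate_topdown_asset", {
                    "prompt": "wooden treasure chest",
                    "asset_type": "prop",
                    "size": 512,
                })
                # The tool returns a JSON string; parse it to get the saved image path.
                payload = json.loads(result.content[0].text)
                print(payload["file_path"])

    asyncio.run(main())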

Input Schema

Name              Required  Description  Default
prompt            Yes
asset_type        No                     prop
size              No
control_strength  No
seed              No
save_to_file      No
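
A minimal arguments object matching this schema (values illustrative):

    {
      "prompt": "stone well",
      "asset_type": "prop",
      "size": 512,
      "control_strength": 0.6,
      "seed": 42
    }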

Output Schema

Name    Required  Description  Default
result  Yes
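
The result field carries the tool's JSON output as a string; agents are expected to parse it and read file_path. Schematically (illustrative placeholder values):

    {
      "result": "{ \"success\": true, \"file_path\": \"<output_dir>/controlnet/cn_topdown_....png\", ... }"
    }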

Implementation Reference

  • The primary handler function for the 'generate_topdown_asset' MCP tool. It is registered via the @mcp.tool() decorator and implements the core logic by mapping asset types to presets/shapes and delegating to the generate_with_viewpoint helper for top-down ControlNet generation.
    @mcp.tool()
    async def generate_topdown_asset(
        prompt: str,
        asset_type: str = "prop",
        size: int = 512,
        control_strength: float = 0.65,
        seed: Optional[int] = None,
        save_to_file: bool = True
    ) -> str:
        """Simplified tool to generate top-down 2D game assets with guaranteed viewpoint.
        
        This is a convenience wrapper around generate_with_viewpoint specifically for
        top-down games (RPG, strategy, etc.).
        
        Args:
            prompt: Description of the asset (e.g., "wooden treasure chest", "stone well")
            asset_type: Type of asset - "prop", "character", "creature", "tile", "effect"
            size: Output size in pixels (square)
            control_strength: How strictly to enforce top-down view (0.5-1.0)
            seed: Random seed for reproducibility
            save_to_file: Whether to save the image to disk (default: True for reliability)
        
        Returns:
            JSON with file_path to generated image
        """
        # Map asset type to preset and shape
        preset_map = {
            "prop": ("topdown_prop", "box"),
            "character": ("topdown_character", "humanoid"),
            "creature": ("topdown_creature", "humanoid"),
            "tile": ("topdown_tile", "flat"),
            "effect": ("effect", "sphere"),
        }
        
        preset, shape = preset_map.get(asset_type, ("topdown_prop", "flat"))
    
        effective_strength = control_strength
        effective_prompt = prompt
        if asset_type == "character":
            effective_strength = min(control_strength, 0.70)
            effective_prompt = (
                f"{prompt}, single character, one body, one head, full body, "
                f"no visible face, no eyes, no mouth, helmet top view, "
                f"no duplicated weapons, no duplicated armor, no floating parts, no separate objects"
            )
        elif asset_type == "creature":
            effective_strength = min(control_strength, 0.70)
            effective_prompt = (
                f"{prompt}, single creature, one body, full body, "
                f"no duplicated limbs, no floating parts, no separate objects"
            )
        elif asset_type == "prop":
            # Stardew Valley style - allow more artistic freedom
            effective_strength = min(control_strength, 0.60)
            effective_prompt = f"{prompt}, single object"
        elif asset_type == "effect":
            effective_prompt = f"{prompt}, centered effect, radial glow"
        
        # Use the viewpoint tool
        return await generate_with_viewpoint(
            prompt=effective_prompt,
            view_type="topdown",
            shape=shape,
            preset=preset,
            control_strength=effective_strength,
            width=size,
            height=size,
            seed=seed,
            save_to_file=save_to_file
        )
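
Tracing the mapping above, a call like generate_topdown_asset("orc chieftain", asset_type="character", control_strength=0.8, size=512) resolves to the following delegation (reconstructed from the code; the prompt value is illustrative):

    await generate_with_viewpoint(
        prompt=(
            "orc chieftain, single character, one body, one head, full body, "
            "no visible face, no eyes, no mouth, helmet top view, "
            "no duplicated weapons, no duplicated armor, no floating parts, no separate objects"
        ),
        view_type="topdown",
        shape="humanoid",            # preset_map["character"]
        preset="topdown_character",
        control_strength=0.70,       # min(0.8, 0.70): characters are capped at 0.70
        width=512,
        height=512,
        seed=None,
        save_to_file=True,
    )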
  • Supporting helper tool 'generate_with_viewpoint', called by generate_topdown_asset. It handles ControlNet depth-based viewpoint enforcement, which is central to the top-down asset generation logic.
    async def generate_with_viewpoint(
        prompt: str,
        view_type: str = "topdown",
        shape: str = "flat",
        preset: str = "topdown_prop",
        controlnet_model: str = "diffusers_xl_depth_full.safetensors",
        control_strength: float = 0.95,
        width: int = 1024,
        height: int = 1024,
        seed: Optional[int] = None,
        save_to_file: bool = False
    ) -> str:
        """Generate a game asset with precise camera viewpoint control using ControlNet.
        
        This tool uses depth maps to guide the generation, ensuring consistent camera angles
        like top-down, side view, front view, etc.
        
        Args:
            prompt: Description of the asset to generate (e.g., "a wooden barrel")
            view_type: Camera angle - "topdown", "side", "front", "3/4"
            shape: Object shape hint - "flat", "sphere", "cylinder", "box"
            preset: Style preset to use (default: topdown_prop)
            controlnet_model: ControlNet model (default: diffusers_xl_depth_full.safetensors)
            control_strength: How strongly to follow viewpoint (0.0-1.0, default: 0.95)
            width: Output width in pixels
            height: Output height in pixels
            seed: Random seed for reproducibility
            save_to_file: Whether to save the image to disk (ControlNet output is always saved regardless; see below)
        
        Returns:
            JSON with file_path and depth_map_path to the saved images plus metadata
            (base64 image data is always omitted)
        
        Note:
            Requires ControlNet models installed in ComfyUI. Common depth models:
            - diffusers_xl_depth_full.safetensors (SDXL)
            - control_v11f1p_sd15_depth.pth (SD1.5)
        """
        preset_config = get_preset(preset)
        
        # Build full prompt with preset
        full_prompt = f"{preset_config.prompt_prefix}{prompt}{preset_config.prompt_suffix}"
        full_negative = preset_config.negative_prompt
    
        img_width = width
        img_height = height
    
        render_width = img_width
        render_height = img_height
        should_downscale = (img_width < preset_config.default_width) or (img_height < preset_config.default_height)
        if should_downscale:
            scale = max(preset_config.default_width / max(1, img_width), preset_config.default_height / max(1, img_height))
            render_width = int(round(img_width * scale))
            render_height = int(round(img_height * scale))
    
        # Clamp render dimensions to match backend constraints (SDXL-safe)
        render_width = max(512, min(2048, (render_width // 8) * 8))
        render_height = max(512, min(2048, (render_height // 8) * 8))
        
        # Create depth map for the specified viewpoint
        depth_map = create_depth_map(render_width, render_height, view_type=view_type, shape=shape)
        
        try:
            # Generate with timeout to prevent hanging
            image_bytes = await asyncio.wait_for(
                backend.generate_with_controlnet(
                    prompt=full_prompt,
                    control_image=depth_map,
                    controlnet_model=controlnet_model,
                    control_strength=control_strength,
                    negative_prompt=full_negative,
                    width=render_width,
                    height=render_height,
                    seed=seed,
                    steps=preset_config.steps,
                    cfg_scale=preset_config.cfg_scale,
                    sampler=preset_config.sampler,
                    scheduler=preset_config.scheduler
                ),
                timeout=300.0  # 5 minute timeout
            )
        except asyncio.TimeoutError:
            return json.dumps({
                "success": False,
                "error": "Generation timed out after 5 minutes",
                "backend": backend.get_name(),
                "backend_type": BACKEND_TYPE
            }, indent=2)
        except NotImplementedError as e:
            return json.dumps({
                "success": False,
                "error": str(e),
                "hint": "ControlNet requires ComfyUI backend with ControlNet models installed",
                "backend": backend.get_name(),
                "backend_type": BACKEND_TYPE
            }, indent=2)
        except Exception as e:
            return json.dumps({
                "success": False,
                "error": str(e),
                "hint": "Check if ControlNet model exists in ComfyUI/models/controlnet/",
                "backend": backend.get_name()
            }, indent=2)
        
        if should_downscale:
            resample = Image.Resampling.NEAREST if preset.startswith("pixel") else Image.Resampling.LANCZOS
            image_bytes = resize_image(image_bytes, img_width, img_height, resample=resample)
    
        result = {
            "success": True,
            "backend": backend.get_name(),
            "width": img_width,
            "height": img_height,
            "view_type": view_type,
            "shape": shape,
            "control_strength": control_strength,
            "preset": preset,
            "prompt": full_prompt,
            "hash": hash_image(image_bytes)
        }
        
        # ControlNet images are always saved to file to ensure reliable MCP response
        # (large base64 payloads can cause MCP stdio transport issues)
        output_dir = ensure_directory(OUTPUT_DIR / "controlnet")
        fname = generate_filename(prefix=f"cn_{view_type}", suffix=shape)
        file_path = output_dir / fname
        file_path.write_bytes(image_bytes)
        result["file_path"] = str(file_path)
        
        depth_path = output_dir / f"depth_{fname}"
        depth_path.write_bytes(depth_map)
        result["depth_map_path"] = str(depth_path)
        
        # Never include base64 for ControlNet - always use file_path
        # This prevents MCP stdio blocking and ensures agent receives response
        result["image_base64_omitted"] = True
        result["image_base64_omitted_reason"] = "controlnet_always_saves_to_file"
        
        return json.dumps(result, indent=2)
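
A successful response therefore resembles the following; the keys mirror the result dict built above, while the concrete values (backend name, hash, paths) are illustrative:

    {
      "success": true,
      "backend": "comfyui",
      "width": 512,
      "height": 512,
      "view_type": "topdown",
      "shape": "box",
      "control_strength": 0.6,
      "preset": "topdown_prop",
      "prompt": "...",
      "hash": "...",
      "file_path": "<output_dir>/controlnet/cn_topdown_<timestamp>_box.png",
      "depth_map_path": "<output_dir>/controlnet/depth_cn_topdown_<timestamp>_box.png",
      "image_base64_omitted": true,
      "image_base64_omitted_reason": "controlnet_always_saves_to_file"
    }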
  • server/main.py:744 (registration)
    The @mcp.tool() decorator registers the generate_topdown_asset function as an MCP tool.
    @mcp.tool()
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It discloses key behavioral traits: it's a wrapper around another tool, generates images with a guaranteed viewpoint, saves to disk by default for reliability, and returns JSON with a file_path. However, it doesn't mention rate limits, authentication needs, or error conditions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured and appropriately sized. It starts with the core purpose, explains the wrapper nature, lists parameters with helpful details, and ends with return information. Every sentence adds value with no redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (6 parameters, no annotations) and the presence of an output schema (returns JSON with file_path), the description is complete enough. It covers purpose, usage context, all parameters, and behavioral aspects without needing to explain return values since the output schema handles that.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description adds significant meaning beyond the input schema's 0% coverage. It explains each parameter's purpose with examples ('wooden treasure chest') and clarifications ('square' size, '0.5-1.0' range for control_strength, default behavior for save_to_file). This fully compensates for the schema's lack of descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
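
For comparison, a hypothetical JSON Schema fragment that would carry the same information in the schema itself (not the server's current schema):

    {
      "properties": {
        "prompt": {
          "type": "string",
          "description": "Description of the asset, e.g. 'wooden treasure chest'"
        },
        "asset_type": {
          "type": "string",
          "enum": ["prop", "character", "creature", "tile", "effect"],
          "default": "prop"
        },
        "control_strength": {
          "type": "number",
          "minimum": 0.5,
          "maximum": 1.0,
          "default": 0.65,
          "description": "How strictly to enforce the top-down view"
        }
      },
      "required": ["prompt"]
    }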

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'generate top-down 2D game assets with guaranteed viewpoint' and specifies it's 'specifically for top-down games (RPG, strategy, etc.)'. It distinguishes itself from sibling 'generate_with_viewpoint' by being a convenience wrapper focused on top-down assets.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context: 'convenience wrapper around generate_with_viewpoint specifically for top-down games'. It implies when to use this tool (for top-down assets) but doesn't explicitly state when NOT to use it or compare it to other sibling tools like 'generate_sprite' or 'generate_tileset'.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
