
generate_character_animations

Generate multiple character poses from a reference image while maintaining consistent identity, outfit, and style for animation workflows.

Instructions

Generate multiple character poses with consistent identity using img2img.

This tool takes a reference character image and generates variations with different poses while maintaining the same character identity (outfit, colors, style).

Args:
    reference_image_base64: Base64-encoded reference character image (front/idle view)
    description: Character description (e.g., "a knight in silver armor")
    poses: List of poses to generate (e.g., ["walking", "attacking", "jumping"])
    denoise: How much to change from the reference (0.2 = very similar, 0.5 = more different). Default 0.35
    seed: Random seed for reproducibility (same seed = same variations)
    preset: Style preset to use (default: character)
    pose_denoise_boost: Additional denoise for action poses (default: 0.25)
    save_to_file: Whether to save images to disk

Returns:
    JSON with base64 images for each pose, all maintaining character identity

Example workflow:
    1. Generate a base character with generate_character(description, poses=["idle"])
    2. Take the best result's image_base64 as reference_image_base64
    3. Call this tool with poses=["walking", "running", "attacking"]
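From an MCP client, this workflow is two tool calls: one to generate_character for the reference image, then one to this tool for the poses. The sketch below uses the MCP Python SDK's stdio client; the server launch command and the keys used to pull image_base64 out of the generate_character response are assumptions, not details documented on this page.

    import asyncio
    import json

    from mcp import ClientSession, StdioServerParameters
    from mcp.client.stdio import stdio_client

    async def main():
        # Launch command is an assumption -- adjust to how you run ComfyAI-MCP-GameAssets.
        server = StdioServerParameters(command="python", args=["server/main.py"])
        async with stdio_client(server) as (read, write):
            async with ClientSession(read, write) as session:
                await session.initialize()

                # Step 1: generate a base character and keep the best result's image_base64.
                base = await session.call_tool(
                    "generate_character",
                    {"description": "a knight in silver armor", "poses": ["idle"]},
                )
                base_json = json.loads(base.content[0].text)
                # The response shape of generate_character is not shown on this page;
                # the "characters"/"image_base64" keys below are assumptions.
                reference_b64 = base_json["characters"][0]["image_base64"]

                # Step 2: generate animation poses from that reference.
                anims = await session.call_tool(
                    "generate_character_animations",
                    {
                        "reference_image_base64": reference_b64,
                        "description": "a knight in silver armor",
                        "poses": ["walking", "running", "attacking"],
                        "seed": 42,  # fixed seed -> reproducible variations
                    },
                )
                print(json.loads(anims.content[0].text)["count"])

    asyncio.run(main())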

Input Schema

Name                    Required  Default        Description
reference_image_base64  Yes       -              Base64-encoded reference character image (front/idle view)
description             Yes       -              Character description (e.g., "a knight in silver armor")
poses                   Yes       -              List of poses to generate (e.g., ["walking", "attacking", "jumping"])
denoise                 No        0.35           How much to change from the reference (0.2 = very similar, 0.5 = more different)
seed                    No        none (random)  Random seed for reproducibility (same seed = same variations)
preset                  No        character      Style preset to use
pose_denoise_boost      No        0.25           Additional denoise applied to action poses
save_to_file            No        false          Whether to save generated images to disk
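As a concrete example, an arguments payload with every field spelled out might look like the following; the reference image value is a placeholder, and the optional fields mirror the defaults listed above.

    import json

    arguments = {
        "reference_image_base64": "<base64 PNG of the idle reference sprite>",  # placeholder
        "description": "a knight in silver armor",
        "poses": ["walking", "attacking", "jumping"],
        "denoise": 0.35,             # 0.2 = very similar to reference, 0.5 = more different
        "seed": 12345,               # omit for a random seed
        "preset": "character",       # style preset
        "pose_denoise_boost": 0.25,  # extra denoise applied to action poses
        "save_to_file": False,
    }

    print(json.dumps(arguments, indent=2))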

Implementation Reference

  • The @mcp.tool()-decorated handler function that implements generate_character_animations. It uses img2img to generate consistent character poses from a reference image, boosting denoise for action poses and deriving a deterministic per-pose seed for reproducibility (a sketch of handling its JSON response follows this list).
    @mcp.tool()
    async def generate_character_animations(
        reference_image_base64: str,
        description: str,
        poses: List[str],
        denoise: float = 0.35,
        seed: Optional[int] = None,
        preset: str = "character",
        pose_denoise_boost: float = 0.25,
        save_to_file: bool = False
    ) -> str:
        """Generate multiple character poses with consistent identity using img2img.

        This tool takes a reference character image and generates variations with
        different poses while maintaining the same character identity (outfit, colors, style).

        Args:
            reference_image_base64: Base64 encoded reference character image (front/idle view)
            description: Character description (e.g., "a knight in silver armor")
            poses: List of poses to generate (e.g., ["walking", "attacking", "jumping"])
            denoise: How much to change from reference (0.2=very similar, 0.5=more different). Default 0.35
            seed: Random seed for reproducibility (same seed = same variations)
            preset: Style preset to use (default: character)
            pose_denoise_boost: Additional denoise for action poses (default: 0.25)
            save_to_file: Whether to save images to disk

        Returns:
            JSON with base64 images for each pose, all maintaining character identity

        Example workflow:
            1. First generate a base character with generate_character(description, poses=["idle"])
            2. Take the best result's image_base64 as reference_image_base64
            3. Call this tool with poses=["walking", "running", "attacking"]
        """
        preset_config = get_preset(preset)
        reference_bytes = base64.b64decode(reference_image_base64)

        # Use fixed seed for all poses if provided (better consistency)
        if seed is None:
            import random
            seed = random.randint(0, 2**32 - 1)

        animations = []

        for i, pose in enumerate(poses):
            import hashlib
            pose_key = pose.strip().lower()

            # Stronger pose templates to prevent img2img from copying the reference pose.
            # (img2img without ControlNet can only change pose to a limited extent,
            # so we need to be explicit.)
            pose_hint_map = {
                "idle": "idle stance, neutral pose",
                "walk": "walking pose, one leg forward, arms swinging",
                "walking": "walking pose, one leg forward, arms swinging",
                "run": "running pose, dynamic motion, leaning forward",
                "running": "running pose, dynamic motion, leaning forward",
                "attack": "attacking pose, dynamic action, weapon swing",
                "attacking": "attacking pose, dynamic action, weapon swing",
                "jump": "jumping pose, mid-air, dynamic",
                "jumping": "jumping pose, mid-air, dynamic",
            }
            pose_hint = next((v for k, v in pose_hint_map.items() if k in pose_key), "dynamic pose")

            pose_hash = int.from_bytes(hashlib.sha256(pose_key.encode("utf-8")).digest()[:4], "little")
            pose_seed = (seed + (pose_hash % 1000003) + (i * 10000019)) % (2**32 - 1)

            pose_is_action = any(k in pose_key for k in ["walk", "run", "attack", "hit", "slash", "jump", "kick", "punch", "cast", "shoot", "dash"])
            pose_denoise = denoise
            if pose_is_action:
                pose_denoise = min(0.75, denoise + pose_denoise_boost)
            pose_denoise = max(0.2, min(0.75, pose_denoise))

            prompt = f"{description}, {pose_hint}, {pose} pose, full body, same character identity, same outfit, same colors, consistent character sheet"
            full_prompt = f"{preset_config.prompt_prefix}{prompt}{preset_config.prompt_suffix}"
            full_negative = f"{preset_config.negative_prompt}, same pose as reference, idle pose, standing straight, front view, identical composition"

            try:
                image_bytes = await backend.generate_img2img(
                    reference_image=reference_bytes,
                    prompt=full_prompt,
                    negative_prompt=full_negative,
                    denoise=pose_denoise,
                    seed=pose_seed,
                    steps=preset_config.steps,
                    cfg_scale=preset_config.cfg_scale,
                    sampler=preset_config.sampler,
                    scheduler=preset_config.scheduler
                )
            except NotImplementedError as e:
                return json.dumps({
                    "success": False,
                    "error": str(e),
                    "backend": backend.get_name(),
                    "backend_type": BACKEND_TYPE
                }, indent=2)

            anim_data = {
                "index": i,
                "pose": pose,
                "image_base64": image_to_base64(image_bytes),
                "denoise": pose_denoise,
                "seed": pose_seed
            }

            if save_to_file:
                output_dir = ensure_directory(OUTPUT_DIR / "characters" / "animations")
                fname = generate_filename(prefix=f"anim_{pose}")
                file_path = output_dir / fname
                file_path.write_bytes(image_bytes)
                anim_data["file_path"] = str(file_path)

            animations.append(anim_data)

        return json.dumps({
            "success": True,
            "description": description,
            "denoise": denoise,
            "base_seed": seed,
            "count": len(animations),
            "animations": animations
        }, indent=2)
  • server/main.py:439-439 (registration)
    The @mcp.tool() decorator registers this function as an MCP tool.
    @mcp.tool()
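
The tool returns a JSON string whose animations entries each carry the pose name, the seed and denoise used, and the generated image as base64. Below is a minimal client-side sketch for decoding that response, assuming the backend returns PNG bytes and that result_text holds the JSON string returned by the tool.

    import base64
    import json
    from pathlib import Path

    def save_animations(result_text: str, out_dir: str = "animations") -> None:
        """Decode the tool's JSON response and write one image file per generated pose."""
        result = json.loads(result_text)
        if not result.get("success"):
            raise RuntimeError(f"generation failed: {result.get('error')}")

        out = Path(out_dir)
        out.mkdir(parents=True, exist_ok=True)
        for anim in result["animations"]:
            image_bytes = base64.b64decode(anim["image_base64"])
            # The .png extension is an assumption about the backend's output format.
            (out / f"{anim['pose']}_{anim['seed']}.png").write_bytes(image_bytes)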
