1. Click "Install Server".
2. Wait a few minutes for the server to deploy. Once ready, it will show a "Started" state.
3. In the chat, type `@` followed by the MCP server name and your instructions, e.g., `@Animagine MCP Optimize this prompt for Animagine XL: 1girl, solo, sitting in a cafe`.

That's it! The server will respond to your query, and you can continue using it as needed.
# Animagine MCP

A FastMCP server for the Animagine XL 4.0 image-generation experience, providing prompt validation, optimization, explanation, and checkpoint/LoRA management tools.
## Overview

- Exposes `validate_prompt`, `optimize_prompt`, `explain_prompt`, `list_models`, `load_checkpoint`, and `unload_loras` through FastMCP.
- Normalizes prompts for consistent structure, category coverage, and tag ordering before handing them to the diffusion pipeline.
- Integrates with local checkpoint and LoRA assets stored under `checkpoints/` and `loras/`.
- Encourages responsible use: the platform can technically generate NSFW material, but choosing to do so and owning the resulting content is the caller's responsibility.
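The tag-ordering normalization described above can be sketched as a simple category-aware sort. This is an illustrative sketch only: the category names, the lookup table, and the ordering rules are assumptions, not the server's actual logic.

```python
# Minimal sketch of prompt normalization: group comma-separated tags by
# category and keep quality tags trailing. Categories and the toy lookup
# table are illustrative assumptions, not the server's real tag database.
CATEGORY_ORDER = ["subject", "character", "setting", "style", "quality"]

TAG_CATEGORY = {
    "1girl": "subject",
    "solo": "subject",
    "sitting in a cafe": "setting",
    "masterpiece": "quality",
    "best quality": "quality",
}

def normalize_prompt(prompt: str) -> str:
    """Reorder comma-separated tags so categories appear in a fixed order."""
    tags = [t.strip() for t in prompt.split(",") if t.strip()]

    def sort_key(tag: str) -> int:
        # Unknown tags are treated as style tags, landing mid-list.
        return CATEGORY_ORDER.index(TAG_CATEGORY.get(tag, "style"))

    # sorted() is stable, so original order is kept within each category.
    return ", ".join(sorted(tags, key=sort_key))

print(normalize_prompt("masterpiece, 1girl, sitting in a cafe, solo"))
# → 1girl, solo, sitting in a cafe, masterpiece
```

Note how the quality tag moves to the end while subject tags lead, matching the "quality tags trailing" rule the tools enforce.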
## Requirements

- Python >= 3.10
- GPU with CUDA support for production-grade generation (or compatible Accelerate/torch backends)
- `git`, plus a package tool such as `pip`, `poetry`, or `hatch`
## Installation and usage

Clone the repository and create a virtual environment:

```bash
python -m venv .venv
source .venv/bin/activate
```

Install the main dependencies:

```bash
pip install -e .
```

(Optional) Install development dependencies:

```bash
pip install -e .[dev]
```

Start the MCP server:

```bash
animagine-mcp
```

This registers the FastMCP tools defined in `src/animagine_mcp/server.py` and exposes them to MCP clients.
## Core tools

- `validate_prompt(prompt, width=832, height=1216, negative_prompt=None)` – enforces quality rules, tag ordering, resolution compatibility, and other prompt health checks.
- `optimize_prompt(description=None, prompt=None)` – restructures tags, fills missing categories, and keeps quality tags trailing.
- `explain_prompt(prompt)` – breaks down each tag by category, intent, and effect while presenting canonically ordered prompts.
- `list_models()` – lists available checkpoints, LoRAs, and currently loaded weights.
- `load_checkpoint(checkpoint=None)` – preloads a specific checkpoint (or uses the Animagine XL 4.0 default) to reduce latency.
- `unload_loras()` – strips LoRAs from the pipeline so the base checkpoint styling can be restored quickly.
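To make the validation step concrete, here is a hypothetical sketch of the kind of resolution check `validate_prompt` might perform. The specific rules (multiple-of-64 dimensions, a pixel budget) are assumptions for illustration, not the server's documented behavior.

```python
# Hypothetical resolution checks in the spirit of validate_prompt.
# The multiple-of-64 rule and the pixel budget are assumed values.
def check_resolution(width: int = 832, height: int = 1216) -> list[str]:
    """Return human-readable issues; an empty list means the size passes."""
    issues = []
    if width % 64 or height % 64:
        issues.append(f"{width}x{height}: dimensions should be multiples of 64")
    if width * height > 1536 * 1536:
        issues.append(f"{width}x{height}: exceeds the assumed pixel budget")
    return issues

print(check_resolution())            # default 832x1216 portrait passes: []
print(check_resolution(833, 1216))   # flags the off-grid width
```

Returning a list of issues rather than raising lets a client report every problem at once, which suits an interactive prompt-refinement loop.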
## Repository layout

- `src/animagine_mcp/` – core package with contracts, prompt processing, diffusion wiring, and the FastMCP server.
- `checkpoints/` – optional `.safetensors`/`.ckpt` files referenced by `load_checkpoint`.
- `loras/` – LoRA modifiers for stylistic tweaks and performance-aligned variants.
- `pyproject.toml` – metadata, dependencies, scripts, and build configuration.
- `02-behavior/` through `05-implementation/` – documentation, standards, and implementation notes that guide the MCP and Codex behaviors.
## Suggested workflow

1. Run `animagine-mcp` so the tools become available.
2. Use `validate_prompt` to inspect the user prompt for issues before generation.
3. Apply `optimize_prompt` or `explain_prompt` as needed to refine or understand prompt structure.
4. Load a checkpoint/LoRA with `load_checkpoint`/`unload_loras` before invoking downstream generation.
5. Reference `list_models` whenever you need to know what weights are available or loaded.
## Development notes

- Run tests (once available) with `pytest tests/` or the appropriate test folder.
- Apply formatting and linting tools (e.g., `ruff`, `black`) as configured in your workflow.
- Keep documentation, README, and inline comments aligned with code changes.
## Contributions & behavior

- Open clear pull requests that describe the issue, the resolution, and linked issues when applicable.
- Include tests and documentation updates for new tools, contracts, or behaviors.
- Promote responsible use; the server enables NSFW generation only when the caller deliberately requests it.
## Support

Open an issue describing the desired workflow, including relevant prompts and logs (omit sensitive content). We aim for transparent, responsible guidance.