# Animagine MCP
FastMCP server for **Animagine XL 4.0** image generation, providing prompt validation, optimization, explanation, and checkpoint/LoRA management tools.
## Overview
- Exposes `validate_prompt`, `optimize_prompt`, `explain_prompt`, `list_models`, `load_checkpoint`, and `unload_loras` through FastMCP.
- Normalizes prompts for consistent structure, category coverage, and tag ordering before handing them to the diffusion pipeline.
- Integrates with local checkpoint and LoRA assets stored under `checkpoints/` and `loras/`.
- Encourages responsible use: the platform can technically generate NSFW material, but the decision to do so, and ownership of the resulting content, rests with the caller.
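The normalization step can be pictured as sorting comma-separated tags into a canonical category order with quality tags trailing. The categories and keyword lookup below are illustrative assumptions, not the server's actual taxonomy:

```python
# Sketch of prompt normalization: bucket tags by category and emit them in a
# fixed order, quality tags last. Category names and the keyword lookup are
# assumptions for demonstration only.
CATEGORY_ORDER = ["subject", "character", "style", "detail", "quality"]

# Hypothetical tag-to-category lookup.
TAG_CATEGORIES = {
    "1girl": "subject",
    "solo": "subject",
    "long hair": "character",
    "smile": "character",
    "watercolor": "style",
    "masterpiece": "quality",
    "best quality": "quality",
}

def normalize_prompt(prompt: str) -> str:
    """Reorder comma-separated tags into the canonical category order."""
    tags = [t.strip() for t in prompt.split(",") if t.strip()]
    # Unknown tags default to "detail" so nothing is silently dropped;
    # sorted() is stable, so ties keep their original relative order.
    ordered = sorted(
        tags,
        key=lambda t: CATEGORY_ORDER.index(TAG_CATEGORIES.get(t, "detail")),
    )
    return ", ".join(ordered)

print(normalize_prompt("masterpiece, smile, 1girl, watercolor"))
# → 1girl, smile, watercolor, masterpiece
```

The stable sort preserves the caller's ordering within each category, so only cross-category placement changes.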
## Requirements
- Python **>= 3.10**
- GPU with CUDA support for production-grade generation (or compatible Accelerate/torch backends)
- `git` plus a package tool such as `pip`, `poetry`, or `hatch`
## Installation and usage
1. Clone the repository and create a virtual environment:
```bash
git clone <repository-url>   # substitute your clone URL
cd animagine-mcp             # or the directory name of your clone
python -m venv .venv
source .venv/bin/activate
```
2. Install the main dependencies:
```bash
pip install -e .
```
3. (Optional) Install development dependencies:
```bash
pip install -e .[dev]
```
4. Start the MCP server:
```bash
animagine-mcp
```
This registers the FastMCP tools defined in `src/animagine_mcp/server.py` and exposes them to MCP clients.
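To use the server from an MCP client, point the client's configuration at the `animagine-mcp` command. A typical entry might look like the fragment below (the `animagine` key is an arbitrary label; adapt the shape to your client's config format):

```json
{
  "mcpServers": {
    "animagine": {
      "command": "animagine-mcp"
    }
  }
}
```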
## Core tools
- `validate_prompt(prompt, width=832, height=1216, negative_prompt=None)` – enforces quality rules, tag ordering, resolution compatibility, and other prompt health checks.
- `optimize_prompt(description=None, prompt=None)` – restructures tags, fills missing categories, and keeps quality tags trailing.
- `explain_prompt(prompt)` – breaks down each tag by category, intent, and effect while presenting canonically ordered prompts.
- `list_models()` – lists available checkpoints, LoRAs, and currently loaded weights.
- `load_checkpoint(checkpoint=None)` – preloads a specific checkpoint (or uses the Animagine XL 4.0 default) to reduce latency.
- `unload_loras()` – strips LoRAs from the pipeline so the base checkpoint styling can be restored quickly.
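A simplified picture of the kind of checks `validate_prompt` performs. The rule set below (tag cap, duplicate detection, resolution whitelist) is a sketch under assumed thresholds, not the server's actual policy:

```python
# Illustrative prompt health checks in the spirit of validate_prompt.
# Thresholds and the resolution whitelist are assumptions, not real policy.
RECOMMENDED_RESOLUTIONS = {(832, 1216), (1216, 832), (1024, 1024)}
MAX_TAGS = 75

def validate_prompt(prompt: str, width: int = 832, height: int = 1216) -> list[str]:
    """Return a list of human-readable issues; an empty list means the prompt passes."""
    issues = []
    tags = [t.strip() for t in prompt.split(",") if t.strip()]
    if not tags:
        issues.append("prompt is empty")
    if len(tags) > MAX_TAGS:
        issues.append(f"too many tags ({len(tags)} > {MAX_TAGS})")
    if len(tags) != len(set(tags)):
        issues.append("duplicate tags present")
    if (width, height) not in RECOMMENDED_RESOLUTIONS:
        issues.append(f"{width}x{height} is not a recommended resolution")
    return issues

print(validate_prompt("1girl, solo, 1girl", width=512, height=512))
# → ['duplicate tags present', '512x512 is not a recommended resolution']
```

Returning a list of issues (rather than raising on the first failure) lets a client surface every problem in one round trip.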
## Repository layout
- `src/animagine_mcp/` – core package with contracts, prompt processing, diffusion wiring, and the FastMCP server.
- `checkpoints/` – optional `.safetensors`/`.ckpt` files referenced by `load_checkpoint`.
- `loras/` – LoRA modifiers for stylistic tweaks and performance-aligned variants.
- `pyproject.toml` – metadata, dependencies, scripts, and build configuration.
- `02-behavior/` through `05-implementation/` – documentation, standards, and implementation notes that guide the MCP and Codex behaviors.
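Because weights live in two flat directories, a `list_models`-style scan reduces to globbing them. A minimal sketch (directory names and file extensions from this README; the function itself is an assumption, not the real tool):

```python
from pathlib import Path

def list_models(root: str = ".") -> dict[str, list[str]]:
    """Scan checkpoints/ and loras/ for weight files (sketch, not the real tool)."""
    base = Path(root)
    return {
        "checkpoints": sorted(
            p.name
            for ext in ("*.safetensors", "*.ckpt")
            for p in (base / "checkpoints").glob(ext)
        ),
        # LoRAs are assumed to ship as .safetensors only.
        "loras": sorted(p.name for p in (base / "loras").glob("*.safetensors")),
    }
```

`Path.glob` on a missing directory simply yields nothing, so the sketch degrades gracefully when `checkpoints/` or `loras/` is absent.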
## Suggested workflow
1. Run `animagine-mcp` so the tools become available.
2. Use `validate_prompt` to inspect the user prompt for issues before generation.
3. Apply `optimize_prompt` or `explain_prompt` as needed to refine or understand prompt structure.
4. Prepare the pipeline with `load_checkpoint` (and clear unwanted LoRAs with `unload_loras`) before invoking downstream generation.
5. Reference `list_models` whenever you need to know what weights are available or loaded.
## Development notes
- Run tests (once available) with `pytest`, pointing it at `tests/` or whichever test folder the project adopts.
- Apply formatting and linting tools (e.g., `ruff`, `black`) as configured in your workflow.
- Keep documentation, README, and inline comments aligned with code changes.
## Contributions & behavior
- Open clear pull requests that describe the issue, resolution, and linked issues when applicable.
- Include tests and documentation updates for new tools, contracts, or behaviors.
- Promote responsible use; the server enables NSFW generation only when the caller deliberately requests it.
## Support
Open an issue describing the desired workflow, including relevant prompts and logs (omit sensitive content). We aim for transparent, responsible guidance.