Live2D Automation MCP Server
Generate a mock intermediate Live2D package from a single character image.
Features
- MCP tools for image analysis, face extraction, layer generation, rigging, physics, motions, and export
- Server-issued session IDs with TTL, concurrency limits, explicit close support, and status metrics
- Output directory confinement under `output/`
- Mock `.moc3` export contract validated before success is reported
- Explicit `detector_used`, `fallback_reason`, and `confidence_summary` metadata on analysis steps
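The session behavior described above can be sketched as a small registry. This is only an illustration of the stated semantics (TTL expiry, a concurrency cap, explicit close, status metrics); the class and method names are hypothetical, not the server's actual implementation.

```python
# Illustrative session registry: server-issued IDs, sliding TTL,
# a concurrency limit, explicit close, and a status metric.
import time
import uuid

class SessionRegistry:
    def __init__(self, ttl_seconds=600.0, max_sessions=8):
        self.ttl = ttl_seconds
        self.max_sessions = max_sessions
        self._sessions = {}  # session_id -> expiry timestamp

    def _prune(self, now):
        # Drop sessions whose TTL has elapsed.
        self._sessions = {sid: exp for sid, exp in self._sessions.items() if exp > now}

    def open(self):
        now = time.monotonic()
        self._prune(now)
        if len(self._sessions) >= self.max_sessions:
            raise RuntimeError("concurrency limit reached")
        sid = uuid.uuid4().hex  # server-issued session ID
        self._sessions[sid] = now + self.ttl
        return sid

    def touch(self, sid):
        # Refresh the TTL on each tool call that uses the session.
        now = time.monotonic()
        self._prune(now)
        if sid not in self._sessions:
            raise KeyError("unknown or expired session")
        self._sessions[sid] = now + self.ttl

    def close(self, sid):
        self._sessions.pop(sid, None)  # explicit close support

    def status(self):
        self._prune(time.monotonic())
        return {"active_sessions": len(self._sessions)}
```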
Installation
Minimal runtime:

```bash
pip install -e .
```

CPU-assisted vision stack:

```bash
pip install -e ".[vision-cpu]"
```

GPU-assisted vision stack:

```bash
pip install -e ".[vision-gpu]"
```

Development tools:

```bash
pip install -e ".[dev]"
```

Usage
Run the MCP server
```bash
python -m mcp_server.server
```

Run the full pipeline
```python
from mcp_server.server import full_pipeline

result = await full_pipeline(
    image_path="ATRI.png",
    output_dir="output/ATRI",
    model_name="ATRI",
    motion_types=["idle", "tap", "move", "emotional"],
)
```

Step-by-step flow
1. Call `analyze_photo(image_path)` and store the returned `session_id`
2. Call `detect_face_features(session_id, output_dir)`
3. Call `generate_layers(session_id, output_dir)`
4. Call `create_mesh(session_id)`
5. Call `setup_rigging(session_id)`
6. Call `configure_physics(session_id)`
7. Call `generate_motions(session_id, motion_types)`
8. Call `export_model(session_id, output_dir, model_name)`
9. Call `close_session(session_id)` when the step flow is complete
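The step flow can be illustrated with a self-contained async driver. The `tool` stand-in below fakes the server calls so the ordering and session lifecycle are visible without a running server; in real use these are MCP tool calls, not local functions.

```python
# Illustrative driver for the step flow; `tool` is a local stand-in
# that records which steps ran and in what order.
import asyncio

calls = []

async def tool(name, *args):
    calls.append(name)
    # analyze_photo is the only step that issues a session ID.
    return {"session_id": "sess-1"} if name == "analyze_photo" else {"ok": True}

async def run_flow(image_path, output_dir, model_name, motion_types):
    session_id = (await tool("analyze_photo", image_path))["session_id"]
    try:
        for step in ("detect_face_features", "generate_layers", "create_mesh",
                     "setup_rigging", "configure_physics"):
            await tool(step, session_id)
        await tool("generate_motions", session_id, motion_types)
        return await tool("export_model", session_id, output_dir, model_name)
    finally:
        # Always release the session, even if a step fails.
        await tool("close_session", session_id)

asyncio.run(run_flow("ATRI.png", "output/ATRI", "ATRI", ["idle"]))
```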
Safety constraints
- `output_dir` must remain inside the project `output/` directory
- `model_name` only supports letters, digits, `_`, and `-`
- input image formats: `png`, `jpg`, `jpeg`, `webp`
- input image limits: 20 MiB, 4096x4096, 16,777,216 total pixels
- supported motion types: `idle`, `tap`, `move`, `emotional`
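A minimal sketch of the input checks these constraints imply, in plain Python. The function names and error strings are illustrative, not the server's API.

```python
# Sketch of request validation per the safety constraints above.
import re
from pathlib import Path

MAX_BYTES = 20 * 1024 * 1024      # 20 MiB
MAX_SIDE = 4096
MAX_PIXELS = 16_777_216           # 4096 * 4096
ALLOWED_EXTS = {".png", ".jpg", ".jpeg", ".webp"}
ALLOWED_MOTIONS = {"idle", "tap", "move", "emotional"}
NAME_RE = re.compile(r"^[A-Za-z0-9_-]+$")

def validate_request(image_path, output_dir, model_name, motion_types):
    errors = []
    if Path(image_path).suffix.lower() not in ALLOWED_EXTS:
        errors.append(f"unsupported image format: {image_path}")
    # output_dir must resolve inside the project's output/ directory.
    root = Path("output").resolve()
    target = Path(output_dir).resolve()
    if root != target and root not in target.parents:
        errors.append(f"output_dir escapes output/: {output_dir}")
    if not NAME_RE.match(model_name):
        errors.append(f"invalid model_name: {model_name}")
    for m in motion_types:
        if m not in ALLOWED_MOTIONS:
            errors.append(f"unsupported motion type: {m}")
    return errors

def validate_image_dims(size_bytes, width, height):
    # Size and dimension checks need the decoded image on disk.
    errors = []
    if size_bytes > MAX_BYTES:
        errors.append("image exceeds 20 MiB")
    if width > MAX_SIDE or height > MAX_SIDE:
        errors.append("image exceeds 4096x4096")
    if width * height > MAX_PIXELS:
        errors.append("image exceeds 16,777,216 total pixels")
    return errors
```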
Export notes
- The exporter writes a mock intermediate bundle, not a production-ready Live2D runtime model
- `model3.json` and the returned file manifest always reference `{model_name}.moc3`
- `ready_for_cubism_editor` remains `false` until a real Cubism-compatible exporter exists
- Final validation and export should happen in Cubism Editor before production use
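The export contract above can be checked mechanically. The sketch below assumes the standard Cubism `FileReferences.Moc` field in `model3.json`; the exact location of the `ready_for_cubism_editor` flag is an assumption, and the function is illustrative rather than part of the server.

```python
# Sketch of a post-export check against the mock export contract.
import json

def check_export(manifest_json, model_name):
    data = json.loads(manifest_json)
    problems = []
    # model3.json must reference {model_name}.moc3 (assumed to live
    # under the standard Cubism FileReferences.Moc key).
    moc_ref = data.get("FileReferences", {}).get("Moc")
    if moc_ref != f"{model_name}.moc3":
        problems.append(f"Moc reference should be {model_name}.moc3, got {moc_ref!r}")
    # Mock exports must not claim Cubism-Editor readiness.
    if data.get("ready_for_cubism_editor") is not False:
        problems.append("ready_for_cubism_editor should remain false for mock exports")
    return problems
```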
License
MIT