ModelScope Image MCP Server
English | 中文
An MCP (Model Context Protocol) server for generating images via the ModelScope image generation API. This server provides seamless integration with AI assistants, enabling them to create images through natural language prompts with robust async processing and local file management.
IMPORTANT: Earlier drafts of this README mentioned features like returning base64 data, negative prompts, and additional parameters. The current released code (see `src/modelscope_image_mcp/server.py`) implements a focused, minimal feature set: one tool, `generate_image`, that submits an async task and saves the resulting image locally. Planned / upcoming features are listed in the roadmap below.
Current Features
- Asynchronous image generation using ModelScope async task API
- Periodic task status polling (every 5 seconds, up to 2 minutes)
- Saves the first generated image to a local file
- Returns task status and image URL to the MCP client
- Robust error handling + timeout messaging
- Simple one-command start with `uvx`
Environment Variable
The server reads your credential from the `MODELSCOPE_SDK_TOKEN` environment variable.
If it is missing, the server will raise an error. Obtain a token from https://modelscope.cn/my/myaccesstoken and set it for your shell; the examples below use `your_token_here` as a placeholder.
Set on Windows (cmd):
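```cmd
set MODELSCOPE_SDK_TOKEN=your_token_here
```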
PowerShell:
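```powershell
$env:MODELSCOPE_SDK_TOKEN = "your_token_here"
```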
Unix/macOS bash/zsh:
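```bash
export MODELSCOPE_SDK_TOKEN="your_token_here"
```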
Installation & MCP Client Configuration
You can register the server directly in an MCP-compatible client (e.g. Claude Desktop) without a prior manual install, thanks to `uvx`.
Option 1: PyPI (Recommended once published)
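A minimal sketch of a Claude Desktop `mcpServers` entry, assuming the package is published on PyPI as `modelscope-image-mcp`:

```json
{
  "mcpServers": {
    "modelscope-image": {
      "command": "uvx",
      "args": ["modelscope-image-mcp"],
      "env": {
        "MODELSCOPE_SDK_TOKEN": "your_token_here"
      }
    }
  }
}
```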
Option 2: Direct from GitHub
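A similar sketch using `uvx --from` with a git URL; `<owner>` is a placeholder, substitute the actual GitHub path:

```json
{
  "mcpServers": {
    "modelscope-image": {
      "command": "uvx",
      "args": [
        "--from",
        "git+https://github.com/<owner>/modelscope-image-mcp.git",
        "modelscope-image-mcp"
      ],
      "env": {
        "MODELSCOPE_SDK_TOKEN": "your_token_here"
      }
    }
  }
}
```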
Option 3: Local Development Checkout
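Clone the repository and install its dependencies (the repository path is a placeholder; `uv sync` assumes a uv-managed project):

```bash
git clone https://github.com/<owner>/modelscope-image-mcp.git
cd modelscope-image-mcp
uv sync
```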
Then configure the MCP client entry, for example:
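A sketch that points the client at the local checkout via `uv run`; the directory path and the `modelscope-image-mcp` entry-point name are assumptions:

```json
{
  "mcpServers": {
    "modelscope-image": {
      "command": "uv",
      "args": [
        "--directory",
        "/absolute/path/to/modelscope-image-mcp",
        "run",
        "modelscope-image-mcp"
      ],
      "env": {
        "MODELSCOPE_SDK_TOKEN": "your_token_here"
      }
    }
  }
}
```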
Quick Local Smoke Test
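Assuming a local checkout and the entry-point name above, one way to start the server directly (it will wait for an MCP client on stdio) is:

```bash
export MODELSCOPE_SDK_TOKEN="your_token_here"
uv run modelscope-image-mcp
```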
When running successfully you should see log lines showing task submission and polling.
Advanced Configuration
Creative Prompts
- Art Style: "in the style of Van Gogh", "watercolor painting", "digital art"
- Composition: "close-up portrait", "wide-angle landscape", "bird's eye view"
- Lighting: "dramatic lighting", "golden hour", "studio lighting"
- Mood: "mysterious atmosphere", "vibrant colors", "minimalist design"
Best Practices
- Be Specific: Detailed prompts produce better results than vague ones
- Use References: Mention specific art styles, artists, or time periods
- Experiment: Try variations of your prompt to find the best result
- Organize Outputs: Use descriptive filenames and organized directories
- Check Status: Monitor the async task status for long-running generations
generate_image
Creates an image from a text prompt using the ModelScope async API.
Parameters:
- prompt (string, required): The text description of the desired image
- model (string, optional, default: Qwen/Qwen-Image): Model name passed to the API
- size (string, optional, default: 1024x1024): Image resolution; Qwen-Image supports sizes from 64x64 to 1664x1664
- output_filename (string, optional, default: result_image.jpg): Local filename to save the first output image
- output_dir (string, optional, default: ./outputs): Directory path where the image will be saved
Sample invocation (conceptual JSON sent by MCP client):
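Illustrative arguments for a `generate_image` tool call; the prompt and filename values are made up for the example:

```json
{
  "name": "generate_image",
  "arguments": {
    "prompt": "A serene mountain lake at sunrise, watercolor painting",
    "model": "Qwen/Qwen-Image",
    "size": "1024x1024",
    "output_filename": "mountain_lake.jpg",
    "output_dir": "./outputs"
  }
}
```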
Sample textual response payload (returned to the client):
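The exact wording is produced by `server.py`; roughly, the returned text reports the task status, the remote image URL, and the local save path, e.g.:

```text
Status: SUCCEED
Image URL: https://example.com/generated/image.jpg
Saved to: ./outputs/result_image.jpg
```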
Notes:
- Only the first image URL is used (if multiple are ever returned)
- If the task fails or times out you receive a descriptive message
- No base64 data is currently returned (roadmap item)
Internal Flow
- Submit the async generation request with the header `X-ModelScope-Async-Mode: true`
- Poll the task endpoint `/v1/tasks/{task_id}` every 5 seconds, for up to ~2 minutes
- On SUCCEED, download the first image and save it via Pillow (PIL)
- Return textual metadata to the MCP client
- Provide clear error / timeout messages otherwise
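In code, the submit-and-poll loop looks roughly like the sketch below. This is a minimal illustration, not the actual `server.py` implementation: the API base URL, the submission path, and the JSON field names are assumptions, while the async header and the task polling endpoint come from the steps above.

```python
# Minimal sketch of the submit-and-poll flow (not the actual server.py code).
import asyncio
import os

import httpx

BASE_URL = "https://api-inference.modelscope.cn"  # assumed API base URL


async def generate(prompt: str, model: str = "Qwen/Qwen-Image", size: str = "1024x1024") -> bytes:
    headers = {
        "Authorization": f"Bearer {os.environ['MODELSCOPE_SDK_TOKEN']}",
        "X-ModelScope-Async-Mode": "true",  # request an async task instead of a blocking call
    }
    async with httpx.AsyncClient(base_url=BASE_URL, timeout=30) as client:
        # 1. Submit the generation task.
        submit = await client.post(
            "/v1/images/generations",  # assumed submission path
            headers=headers,
            json={"model": model, "prompt": prompt, "size": size},
        )
        submit.raise_for_status()
        task_id = submit.json()["task_id"]  # assumed response field

        # 2. Poll /v1/tasks/{task_id} every 5 seconds, up to ~2 minutes.
        for _ in range(24):
            await asyncio.sleep(5)
            status = await client.get(f"/v1/tasks/{task_id}", headers=headers)
            status.raise_for_status()
            data = status.json()
            if data.get("task_status") == "SUCCEED":
                # 3. Download the first returned image.
                image = await client.get(data["output_images"][0])  # assumed field
                image.raise_for_status()
                return image.content
            if data.get("task_status") == "FAILED":
                raise RuntimeError("Image generation failed")

        raise TimeoutError("Image generation timed out")
```

The real server additionally saves the downloaded bytes under `output_dir` via Pillow and returns textual metadata (task status and image URL) to the MCP client.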
Roadmap
Planned enhancements (not yet implemented in `server.py`):
- Optional base64 return data
- Negative prompt & guidance parameters
- Adjustable polling interval & timeout via arguments
- Multiple image outputs selection
- Streaming progress notifications
Development
Project Structure
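Only `src/modelscope_image_mcp/server.py` is referenced above; the rest of the layout below is the conventional structure for a uv/PyPI-packaged project and may differ from the actual repository:

```text
modelscope-image-mcp/
├── pyproject.toml
├── README.md
└── src/
    └── modelscope_image_mcp/
        ├── __init__.py
        └── server.py
```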
Troubleshooting
| Symptom | Possible Cause | Action |
|---|---|---|
| `ValueError: 需要设置 MODELSCOPE_SDK_TOKEN 环境变量` (the MODELSCOPE_SDK_TOKEN environment variable must be set) | Token missing | Export / set the environment variable, then restart |
| 图片生成超时 (image generation timed out) | Slow model processing | Re-run; a configurable timeout argument is planned |
| Network-related `httpx.TimeoutException` | Connectivity issues | Check network / retry |
| `PIL cannot identify image file` | Invalid image data received | Try a different prompt or model |
| Permission denied when saving | Output directory permissions | Check write permissions or change `output_dir` |
| No such file or directory | Output directory doesn't exist | The server creates it automatically; otherwise specify an existing path |
Changelog
1.0.1
- Added size parameter support for customizable image resolution
- Support for the Qwen-Image model's resolution range (64x64 to 1664x1664)
- Enhanced documentation with size parameter usage examples
1.0.0
- Major update with improved async handling and output directory support
- Added configurable output directory parameter
- Enhanced error handling and logging
- Updated dependencies to use httpx for better async support
- Fixed notification_options bug from initial release
0.1.0
- Initial minimal implementation with async polling & local image save
- Fixed bug: `notification_options` was previously None, causing an AttributeError
License
MIT License
Contributing
PRs & issues welcome. Please describe reproduction steps for any failures.
Disclaimer
This is an unofficial integration example. Use at your own risk; abide by ModelScope Terms of Service.