🪄 ImageSorcery MCP
ComputerVision-based 🪄 sorcery of local image recognition and editing tools for AI assistants
Official website: imagesorcery.net
✅ With ImageSorcery MCP
🪄 ImageSorcery empowers AI assistants with powerful image processing capabilities:
- ✅ Crop, resize, and rotate images with precision
- ✅ Draw text and shapes on images
- ✅ Add logos and watermarks
- ✅ Detect objects using state-of-the-art models
- ✅ Extract text from images with OCR
- ✅ Use a wide range of pre-trained models for object detection, OCR, and more
- ✅ Do all of this locally, without sending your images to any servers
Just ask your AI to help with image tasks:
"copy photos with pets from folder
photos
to folderpets
"
"Find a cat at the photo.jpg and crop the image in a half in height and width to make the cat be centered"
😉 Hint: use the full path to your files.
"Enumerate form fields on this
form.jpg
withfoduucom/web-form-ui-field-detection
model and fill theform.md
with a list of described fields"😉 Hint: Specify the model and the confidence".
😉 Hint: Add "use imagesorcery" to make sure it will use the proper tool".
Your AI will combine multiple tools listed below to achieve your goal.
🛠️ Available Tools
Tool | Description | Example Prompt |
---|---|---|
blur | Blurs specified rectangular or polygonal areas of an image using OpenCV. Can also invert the provided areas, e.g. to blur the background. | "Blur the area from (150, 100) to (250, 200) with a blur strength of 21 in my image 'test_image.png' and save it as 'output.png'" |
change_color | Changes the color palette of an image | "Convert my image 'test_image.png' to sepia and save it as 'output.png'" |
crop | Crops an image using OpenCV's NumPy slicing approach | "Crop my image 'input.png' from coordinates (10,10) to (200,200) and save it as 'cropped.png'" |
detect | Detects objects in an image using models from Ultralytics. Can return segmentation masks (as PNG files) or polygons. | "Detect objects in my image 'photo.jpg' with a confidence threshold of 0.4" |
draw_arrows | Draws arrows on an image using OpenCV | "Draw a red arrow from (50,50) to (150,100) on my image 'photo.jpg'" |
draw_circles | Draws circles on an image using OpenCV | "Draw a red circle with center (100,100) and radius 50 on my image 'photo.jpg'" |
draw_lines | Draws lines on an image using OpenCV | "Draw a red line from (50,50) to (150,100) on my image 'photo.jpg'" |
draw_rectangles | Draws rectangles on an image using OpenCV | "Draw a red rectangle from (50,50) to (150,100) and a filled blue rectangle from (200,150) to (300,250) on my image 'photo.jpg'" |
draw_texts | Draws text on an image using OpenCV | "Add text 'Hello World' at position (50,50) and 'Copyright 2023' at the bottom right corner of my image 'photo.jpg'" |
fill | Fills specified rectangular, polygonal, or mask-based areas of an image with a color and opacity, or makes them transparent. Can also invert the provided areas, e.g. to remove the background. | "Fill the area from (150, 100) to (250, 200) with semi-transparent red in my image 'test_image.png'" |
find | Finds objects in an image based on a text description. Can return segmentation masks (as PNG files) or polygons. | "Find all dogs in my image 'photo.jpg' with a confidence threshold of 0.4" |
get_metainfo | Gets metadata information about an image file | "Get metadata information about my image 'photo.jpg'" |
ocr | Performs Optical Character Recognition (OCR) on an image using EasyOCR | "Extract text from my image 'document.jpg' using OCR with English language" |
overlay | Overlays one image on top of another, handling transparency | "Overlay 'logo.png' on top of 'background.jpg' at position (10, 10)" |
resize | Resizes an image using OpenCV | "Resize my image 'photo.jpg' to 800x600 pixels and save it as 'resized_photo.jpg'" |
rotate | Rotates an image using imutils.rotate_bound function | "Rotate my image 'photo.jpg' by 45 degrees and save it as 'rotated_photo.jpg'" |
😉 Hint: detailed information and usage instructions for each tool can be found in `/src/imagesorcery_mcp/tools/README.md`.
📚 Available Resources
Resource URI | Description | Example Prompt |
---|---|---|
models://list | Lists all available models in the models directory | "Which models are available in ImageSorcery?" |
😉 Hint: detailed information and usage instructions for each resource can be found in `/src/imagesorcery_mcp/resources/README.md`.
🚀 Getting Started
Requirements
- Python 3.10 or higher
- `ffmpeg`, `libsm6`, `libxext6`, `libgl1-mesa-glx` - system libraries required by OpenCV
- Claude.app, Cline, or another MCP client
These system libraries are typically included with the OpenCV installation and don't require separate installation, but they might be missing in minimal environments such as Docker containers.
For Ubuntu/Debian systems, install them with `apt-get`; for Docker containers, add the equivalent `RUN` instruction to your Dockerfile. A sketch is shown below.
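A minimal sketch of the install command. The package names are those listed above; on newer Ubuntu releases `libgl1-mesa-glx` may be replaced by `libgl1`:

```bash
# Ubuntu/Debian: install the system libraries OpenCV relies on
sudo apt-get update
sudo apt-get install -y ffmpeg libsm6 libxext6 libgl1-mesa-glx

# In a Dockerfile, the equivalent instruction would be:
# RUN apt-get update && apt-get install -y ffmpeg libsm6 libxext6 libgl1-mesa-glx
```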
Installation
1. Create and activate a virtual environment (strongly recommended):

   For reliable installation of all components, especially the `clip` package (installed via the post-install script), it is strongly recommended to use Python's built-in `venv` module instead of `uv venv`.

2. Install the package into the activated virtual environment:
   You can use `pip` or `uv pip`.

3. Run the post-installation script:
   This step is crucial. It downloads the required models and attempts to install the `clip` Python package from GitHub into the active virtual environment. The script:
   - Creates a `models` directory (usually within the site-packages directory of your virtual environment, or a user-specific location if installed globally) to store pre-trained models.
   - Generates an initial `models/model_descriptions.json` file there.
   - Downloads the default YOLO models (`yoloe-11l-seg-pf.pt`, `yoloe-11s-seg-pf.pt`, `yoloe-11l-seg.pt`, `yoloe-11s-seg.pt`) required by the `detect` tool into this `models` directory.
   - Attempts to install the `clip` Python package from Ultralytics' GitHub repository directly into the active Python environment. This is required for text prompt functionality in the `find` tool.
   - Downloads the CLIP model file required by the `find` tool into the `models` directory.
   You can run this process anytime to restore the default models and attempt the `clip` installation. A sketch of the full command sequence for these steps is shown below.
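A minimal sketch of steps 1-3, assuming a POSIX shell and a `.venv` directory name (adjust the activation command for Windows):

```bash
# 1. Create and activate a virtual environment with Python's built-in venv
python -m venv .venv
source .venv/bin/activate

# 2. Install the package into the activated environment
pip install imagesorcery-mcp

# 3. Download the default models and attempt the clip installation
imagesorcery-mcp --post-install
```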
- Using `uv venv` to create virtual environments: Based on testing, virtual environments created with `uv venv` may not include `pip` in a way that allows the `imagesorcery-mcp --post-install` script to automatically install the `clip` package from GitHub (it might result in a "No module named pip" error during the `clip` installation step). If you choose to use `uv venv`:
  1. Create and activate your `uv venv`.
  2. Install `imagesorcery-mcp`: `uv pip install imagesorcery-mcp`.
  3. Manually install the `clip` package into your active `uv venv` (see the sketch after these notes).
  4. Run `imagesorcery-mcp --post-install`. This will download models but may fail to install the `clip` Python package.

  For a smoother automated `clip` installation via the post-install script, using `python -m venv` (as described in step 1 above) is the recommended method for creating the virtual environment.
- Using `uvx imagesorcery-mcp --post-install`: Running the post-installation script directly with `uvx` (e.g. `uvx imagesorcery-mcp --post-install`) will likely fail to install the `clip` Python package. This is because the temporary environment created by `uvx` typically does not have `pip` available in a way the script can use. Models will be downloaded, but the `clip` package won't be installed by this command. If you intend to use `uvx` to run the main `imagesorcery-mcp` server and require `clip` functionality, you'll need to ensure the `clip` package is installed in an accessible Python environment that `uvx` can find, or consider installing `imagesorcery-mcp` into a persistent environment created with `python -m venv`.
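A sketch of the `uv venv` workflow described above. The GitHub URL for the `clip` package is an assumption (the post-install script pulls it from Ultralytics' GitHub repository), so verify it before use:

```bash
# Create and activate a uv-managed virtual environment (created as .venv by default)
uv venv
source .venv/bin/activate

# Install the package
uv pip install imagesorcery-mcp

# Manually install the clip package (assumed URL; normally handled by the post-install script)
uv pip install git+https://github.com/ultralytics/CLIP.git

# Download the default models
imagesorcery-mcp --post-install
```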
⚙️ Configure MCP client
Add `imagesorcery-mcp` to your MCP client's server configuration. If `imagesorcery-mcp` is in your system's PATH after installation, you can use `imagesorcery-mcp` directly as the command. Otherwise, you'll need to provide the full path to the executable.
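To check whether the executable is on your PATH and to get its full path, a quick sketch (POSIX shells; on Windows use `where` instead):

```bash
# Print the full path to the imagesorcery-mcp executable, if it is on PATH
which imagesorcery-mcp
```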
📦 Additional Models
Some tools require specific models to be available in the `models` directory. When downloading models, the script automatically updates the `models/model_descriptions.json` file:
- For Ultralytics models: Descriptions are predefined in `src/imagesorcery_mcp/scripts/create_model_descriptions.py` and include detailed information about each model's purpose, size, and characteristics.
- For Hugging Face models: Descriptions are automatically extracted from the model card on Hugging Face Hub. The script attempts to use the model name from the model index or the first line of the description.
After downloading models, it's recommended to check the descriptions in `models/model_descriptions.json` and adjust them if needed to provide more accurate or detailed information about the models' capabilities and use cases.
Running the Server
The ImageSorcery MCP server can be run in different modes:
- STDIO (default)
- Streamable HTTP - for web-based deployments
- Server-Sent Events (SSE) - for web-based deployments that rely on SSE
- STDIO Mode (default) - the standard mode for local MCP clients.
- Streamable HTTP Mode - for web-based deployments, optionally with a custom host, port, and path.

A command sketch for both modes follows the transport options below.
Available transport options:
- `--transport`: Choose between "stdio" (default), "streamable-http", or "sse"
- `--host`: Specify host for HTTP-based transports (default: 127.0.0.1)
- `--port`: Specify port for HTTP-based transports (default: 8000)
- `--path`: Specify endpoint path for HTTP-based transports (default: /mcp)
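A sketch of the run commands, assuming `imagesorcery-mcp` is on your PATH; the host, port, and path values are only examples:

```bash
# STDIO mode (default) - used by local MCP clients
imagesorcery-mcp

# Streamable HTTP mode with the default host, port, and path
imagesorcery-mcp --transport streamable-http

# Streamable HTTP mode with a custom host, port, and path
imagesorcery-mcp --transport streamable-http --host 0.0.0.0 --port 8080 --path /mcp
```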
🤝 Contributing
Directory Structure
This repository is organized as follows: the server, tools, resources, and scripts live under `src/imagesorcery_mcp/`, and the tests live under `tests/`.
Development Setup
- Clone the repository.
- (Recommended) Create and activate a virtual environment.
- Install the package in editable mode along with development dependencies (see the sketch below).
This will install `imagesorcery-mcp` and all dependencies from `[project.dependencies]` and `[project.optional-dependencies].dev` (including `build` and `twine`).
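A sketch of the development setup, assuming a POSIX shell; the repository URL and directory name are placeholders, since they are not stated here:

```bash
# Clone the repository (replace the placeholder with the actual URL)
git clone <repository-url>
cd imagesorcery-mcp

# Create and activate a virtual environment
python -m venv .venv
source .venv/bin/activate

# Install in editable mode with development dependencies
pip install -e ".[dev]"
```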
Rules
These rules apply to all contributors: humans and AI.
- Read all the `README.md` files in the project. Understand the project structure and purpose. Understand the guidelines for contributing. Think through how it relates to your task, and how to make changes accordingly.
- Read `pyproject.toml`. Pay attention to the sections `[tool.ruff]`, `[tool.ruff.lint]`, `[project.optional-dependencies]`, and the `[project]` dependencies. Strictly follow the code style defined in `pyproject.toml`. Stick to the stack defined in the `pyproject.toml` dependencies and do not add any new dependencies without a good reason.
- Write your code in new and existing files. If new dependencies are needed, update `pyproject.toml` and install them via `pip install -e .` or `pip install -e ".[dev]"`. Do not install them directly via `pip install`. Check out existing source code for examples (e.g. `src/imagesorcery_mcp/server.py`, `src/imagesorcery_mcp/tools/crop.py`). Stick to the code style, naming conventions, input and output data formats, code structure, architecture, etc. of the existing code.
- Update related `README.md` files with your changes. Stick to the format and structure of the existing `README.md` files.
- Write tests for your code. Check out existing tests for examples (e.g. `tests/test_server.py`, `tests/tools/test_crop.py`). Stick to the code style, naming conventions, input and output data formats, code structure, architecture, etc. of the existing tests.
- Run tests and linter to ensure everything works (see the command sketch after this list). In case of failures, fix the code and tests. All new code is strictly required to comply with the linter rules and pass all tests.
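A sketch of the checks, assuming `ruff` (configured under `[tool.ruff]` in `pyproject.toml`) as the linter and `pytest` as the test runner; the exact invocations used by the project may differ:

```bash
# Lint the code base with ruff
ruff check .

# Run the test suite
pytest
```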
Coding hints
- Use type hints where appropriate
- Use pydantic for data validation and serialization
📝 Questions?
If you have any questions, issues, or suggestions regarding this project, feel free to reach out.
You can also open an issue in the repository for bug reports or feature requests.
📜 License
This project is licensed under the MIT License. This means you are free to use, modify, and distribute the software, subject to the terms and conditions of the MIT License.