
Image Processing MCP Server

by duke0317

blend_images

Combine two images using blend modes like normal, multiply, screen, or overlay with adjustable opacity to create composite visuals.

Instructions

Blend two images.

Input Schema

Name            Required  Description                                                     Default
image1_source   Yes       First image source: a file path or base64-encoded image data    —
image2_source   Yes       Second image source: a file path or base64-encoded image data   —
blend_mode      No        Blend mode: normal, multiply, screen, or overlay                normal
opacity         No        Opacity of the second image, in the range 0.0-1.0               0.5
output_format   No        Output format: PNG, JPEG, WEBP, etc.                            PNG
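As an illustration, an arguments payload matching this schema might look like the following sketch (the file paths are hypothetical placeholders, not values from this project):

```python
# Hypothetical arguments for a blend_images call; paths are placeholders.
arguments = {
    "image1_source": "/path/to/background.png",  # file path or base64 data
    "image2_source": "/path/to/overlay.png",
    "blend_mode": "multiply",   # one of: normal, multiply, screen, overlay
    "opacity": 0.7,             # opacity of the second image, 0.0-1.0
    "output_format": "PNG",
}
```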

Implementation Reference

  • Core handler implementing image blending: loads two images, resizes them according to mode, applies opacity to second image, uses blend modes (normal, multiply, etc.), and outputs base64 result.
    async def blend_images(arguments: Dict[str, Any]) -> List[TextContent]:
        """
        Blend two images.

        Args:
            arguments: dict containing the two image sources and blend parameters

        Returns:
            List[TextContent]: processing result
        """
        try:
            # Validate parameters
            image1_source = arguments.get("image1_source")
            image2_source = arguments.get("image2_source")
            ensure_valid_image_source(image1_source)
            ensure_valid_image_source(image2_source)

            blend_mode = arguments.get("blend_mode", "normal")
            opacity = arguments.get("opacity", 0.5)
            resize_mode = arguments.get("resize_mode", "fit_first")
            output_format = arguments.get("output_format", DEFAULT_IMAGE_FORMAT)

            # Validate numeric range
            validate_numeric_range(opacity, 0.0, 1.0, "opacity")

            processor = ImageProcessor()

            # Load images
            image1 = processor.load_image(image1_source)
            image2 = processor.load_image(image2_source)

            # Convert to RGBA mode
            if image1.mode != "RGBA":
                image1 = image1.convert("RGBA")
            if image2.mode != "RGBA":
                image2 = image2.convert("RGBA")

            # Resize according to resize_mode
            if resize_mode == "fit_first":
                image2 = image2.resize(image1.size, Image.Resampling.LANCZOS)
                final_size = image1.size
            elif resize_mode == "fit_second":
                image1 = image1.resize(image2.size, Image.Resampling.LANCZOS)
                final_size = image2.size
            elif resize_mode == "fit_largest":
                if image1.width * image1.height > image2.width * image2.height:
                    image2 = image2.resize(image1.size, Image.Resampling.LANCZOS)
                    final_size = image1.size
                else:
                    image1 = image1.resize(image2.size, Image.Resampling.LANCZOS)
                    final_size = image2.size
            else:  # fit_smallest
                if image1.width * image1.height < image2.width * image2.height:
                    image2 = image2.resize(image1.size, Image.Resampling.LANCZOS)
                    final_size = image1.size
                else:
                    image1 = image1.resize(image2.size, Image.Resampling.LANCZOS)
                    final_size = image2.size

            # Apply opacity to the second image's alpha channel
            alpha_channel = image2.split()[-1]
            alpha_channel = alpha_channel.point(lambda p: int(p * opacity))
            image2.putalpha(alpha_channel)

            # Apply blend mode
            if blend_mode == "normal":
                result = Image.alpha_composite(image1, image2)
            elif blend_mode == "multiply":
                # Simplified multiply blend (approximated with Image.blend)
                result = Image.blend(image1, image2, opacity)
            elif blend_mode == "screen":
                # Simplified screen blend (approximated with Image.blend)
                result = Image.blend(image1, image2, opacity)
            else:
                # Other blend modes fall back to normal compositing
                result = Image.alpha_composite(image1, image2)

            # Encode the result as base64
            output_info = processor.output_image(result, "blend_images", output_format)

            return [TextContent(
                type="text",
                text=json.dumps({
                    "success": True,
                    "message": f"Successfully blended images using {blend_mode} mode",
                    "data": {
                        **output_info,
                        "metadata": {
                            "size": f"{result.width}x{result.height}",
                            "blend_mode": blend_mode,
                            "opacity": opacity,
                            "resize_mode": resize_mode,
                            "format": output_format
                        }
                    }
                }, ensure_ascii=False)
            )]

        except ValidationError as e:
            return [TextContent(
                type="text",
                text=json.dumps({
                    "success": False,
                    "error": f"Parameter validation failed: {str(e)}"
                }, ensure_ascii=False)
            )]
        except Exception as e:
            return [TextContent(
                type="text",
                text=json.dumps({
                    "success": False,
                    "error": f"Failed to blend images: {str(e)}"
                }, ensure_ascii=False)
            )]
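Note that the handler approximates the multiply and screen modes with `Image.blend`, which is a linear interpolation rather than a true per-channel blend. For reference, the standard formulas those modes approximate can be sketched in pure Python (an illustration, not part of the server's code):

```python
def multiply_channel(a: int, b: int) -> int:
    """Standard multiply blend for one 8-bit channel: darkens the image."""
    return (a * b) // 255

def screen_channel(a: int, b: int) -> int:
    """Standard screen blend for one 8-bit channel: lightens the image."""
    return 255 - ((255 - a) * (255 - b)) // 255

# Blending with white (255) is the identity for multiply and saturates screen:
# multiply_channel(100, 255) == 100, screen_channel(100, 255) == 255
```

In Pillow, `ImageChops.multiply` and `ImageChops.screen` implement these formulas directly and could replace the `Image.blend` fallback if exact blend-mode behavior is needed.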
  • main.py:726-749 (registration)
    Tool registration via @mcp.tool() decorator in main.py. Defines input parameters with descriptions and defaults matching the handler schema, constructs arguments dict, and calls the advanced handler via safe_run_async.
    @mcp.tool()
    def blend_images(
        image1_source: Annotated[str, Field(description="First image source: a file path or base64-encoded image data")],
        image2_source: Annotated[str, Field(description="Second image source: a file path or base64-encoded image data")],
        blend_mode: Annotated[str, Field(description="Blend mode: normal, multiply, screen, or overlay", default="normal")],
        opacity: Annotated[float, Field(description="Opacity of the second image, in the range 0.0-1.0", ge=0.0, le=1.0, default=0.5)],
        output_format: Annotated[str, Field(description="Output format: PNG, JPEG, WEBP, etc.", default="PNG")]
    ) -> str:
        """Blend two images."""
        try:
            arguments = {
                "image1_source": image1_source,
                "image2_source": image2_source,
                "blend_mode": blend_mode,
                "opacity": opacity,
                "output_format": output_format
            }
            result = safe_run_async(advanced_blend_images(arguments))
            return result[0].text
        except Exception as e:
            return json.dumps({
                "success": False,
                "error": f"Failed to blend images: {str(e)}"
            }, ensure_ascii=False, indent=2)
  • JSON schema definition for blend_images tool inputs within the Tool object in get_advanced_tools(), detailing properties, enums, defaults, and required fields.
    Tool(
        name="blend_images",
        description="Blend two images",
        inputSchema={
            "type": "object",
            "properties": {
                "image1_source": {
                    "type": "string",
                    "description": "First image source (file path or base64-encoded)"
                },
                "image2_source": {
                    "type": "string",
                    "description": "Second image source (file path or base64-encoded)"
                },
                "blend_mode": {
                    "type": "string",
                    "description": "Blend mode",
                    "enum": ["normal", "multiply", "screen", "overlay", "soft_light", "hard_light"],
                    "default": "normal"
                },
                "opacity": {
                    "type": "number",
                    "description": "Opacity of the second image (0.0-1.0)",
                    "minimum": 0.0,
                    "maximum": 1.0,
                    "default": 0.5
                },
                "resize_mode": {
                    "type": "string",
                    "description": "Resize mode",
                    "enum": ["fit_first", "fit_second", "fit_largest", "fit_smallest"],
                    "default": "fit_first"
                },
                "output_format": {
                    "type": "string",
                    "description": "Output format",
                    "enum": ["PNG", "JPEG", "WEBP"],
                    "default": "PNG"
                }
            },
            "required": ["image1_source", "image2_source"]
        }
    ),
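The `resize_mode` options determine which image's dimensions become the output size; in the handler, the "largest" and "smallest" variants compare total pixel count rather than individual dimensions. That selection logic can be sketched in isolation:

```python
def pick_final_size(size1, size2, resize_mode="fit_first"):
    """Return the output (width, height) for each resize_mode.

    Mirrors the handler's logic: fit_largest/fit_smallest compare total
    pixel count (width * height), not width or height separately.
    """
    if resize_mode == "fit_first":
        return size1
    if resize_mode == "fit_second":
        return size2
    area1 = size1[0] * size1[1]
    area2 = size2[0] * size2[1]
    if resize_mode == "fit_largest":
        return size1 if area1 > area2 else size2
    return size1 if area1 < area2 else size2  # fit_smallest
```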
