
Gemini Image Generator MCP

transform_image_from_encoded

Modify existing images using text prompts with Google's Gemini AI. Upload encoded images and describe changes to generate transformed versions saved locally.

Instructions

Transform an existing image based on the given text prompt using Google's Gemini model.

Args:
    encoded_image: Base64 encoded image data with header. Must be in format:
                "data:image/[format];base64,[data]"
                Where [format] can be: png, jpeg, jpg, gif, webp, etc.
    prompt: Text prompt describing the desired transformation or modifications
    
Returns:
    Path to the transformed image file saved on the server
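For concreteness, a caller can build the required data URL from raw image bytes using only the standard library. This is an illustrative sketch; the `to_data_url` helper is not part of the server:

```python
import base64

def to_data_url(image_bytes: bytes, mime: str = "image/png") -> str:
    """Wrap raw image bytes in the data-URL format the tool expects."""
    encoded = base64.b64encode(image_bytes).decode("ascii")
    return f"data:{mime};base64,{encoded}"

# In practice image_bytes would come from an actual image file,
# e.g. Path("photo.png").read_bytes(). Here we use the 8-byte PNG signature.
data_url = to_data_url(b"\x89PNG\r\n\x1a\n")
print(data_url)  # data:image/png;base64,iVBORw0KGgo=
```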

Input Schema

Name            Required  Description  Default
encoded_image   Yes
prompt          Yes

Implementation Reference

  • Main handler function for transform_image_from_encoded tool, including registration via @mcp.tool(), input schema in docstring, and core logic that loads the base64 image, translates prompt, and processes transformation.
    @mcp.tool()
    async def transform_image_from_encoded(encoded_image: str, prompt: str) -> str:
        """Transform an existing image based on the given text prompt using Google's Gemini model.
    
        Args:
            encoded_image: Base64 encoded image data with header. Must be in format:
                        "data:image/[format];base64,[data]"
                        Where [format] can be: png, jpeg, jpg, gif, webp, etc.
            prompt: Text prompt describing the desired transformation or modifications
            
        Returns:
            Path to the transformed image file saved on the server
        """
        try:
            logger.info(f"Processing transform_image_from_encoded request with prompt: {prompt}")
    
            # Load and validate the image
            source_image, _ = await load_image_from_base64(encoded_image)
            
            # Translate the prompt to English
            translated_prompt = await translate_prompt(prompt)
            
            # Process the transformation
            return await process_image_transform(source_image, translated_prompt, prompt)
            
        except Exception as e:
            error_msg = f"Error transforming image: {str(e)}"
            logger.error(error_msg)
            return error_msg
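Note that the handler reports failure by returning the error message as a string rather than raising, so a caller can only detect failure by inspecting the result. A small illustrative check (the `looks_like_error` helper is hypothetical, not part of the server):

```python
def looks_like_error(result: str) -> bool:
    """Detect the handler's error convention: failures come back as
    strings prefixed with 'Error transforming image:'."""
    return result.startswith("Error transforming image:")

print(looks_like_error("Error transforming image: bad base64"))  # True
print(looks_like_error("/output/transformed_cat.png"))           # False
```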
  • Helper function to load and validate the base64-encoded image into a PIL Image object, specifically used by the transform_image_from_encoded handler.
    async def load_image_from_base64(encoded_image: str) -> Tuple[PIL.Image.Image, str]:
        """Load an image from a base64-encoded string.
        
        Args:
            encoded_image: Base64 encoded image data with header
            
        Returns:
            Tuple containing the PIL Image object and the image format
        """
        if not encoded_image.startswith('data:image/'):
            raise ValueError("Invalid image format. Expected data:image/[format];base64,[data]")
        
        try:
            # Extract the base64 data from the data URL
            image_format, image_data = encoded_image.split(';base64,')
            image_format = image_format.replace('data:', '')  # Get the MIME type e.g., "image/png"
            image_bytes = base64.b64decode(image_data)
            source_image = PIL.Image.open(BytesIO(image_bytes))
            logger.info(f"Successfully loaded image with format: {image_format}")
            return source_image, image_format
        except ValueError as e:
            logger.error(f"Error: Invalid image data format: {str(e)}")
            raise ValueError("Invalid image data format. Image must be in format 'data:image/[format];base64,[data]'")
        except base64.binascii.Error as e:
            logger.error(f"Error: Invalid base64 encoding: {str(e)}")
            raise ValueError("Invalid base64 encoding. Please provide a valid base64 encoded image.")
        except PIL.UnidentifiedImageError:
            logger.error("Error: Could not identify image format")
            raise ValueError("Could not identify image format. Supported formats include PNG, JPEG, GIF, WebP.")
        except Exception as e:
            logger.error(f"Error: Could not load image: {str(e)}")
            raise
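The validation above can be exercised without PIL or the server itself. The following stdlib-only sketch mirrors the same parsing rules; `parse_data_url` is an illustrative stand-in, not the server's function:

```python
import base64
import binascii

def parse_data_url(encoded_image: str) -> tuple[str, bytes]:
    """Split a data URL into its MIME type and decoded bytes,
    mirroring the checks in load_image_from_base64."""
    if not encoded_image.startswith("data:image/"):
        raise ValueError("Expected data:image/[format];base64,[data]")
    header, sep, payload = encoded_image.partition(";base64,")
    if not sep:
        raise ValueError("Missing ';base64,' separator")
    mime = header[len("data:"):]  # e.g. "image/png"
    try:
        return mime, base64.b64decode(payload, validate=True)
    except binascii.Error as exc:
        raise ValueError("Invalid base64 encoding") from exc

mime, raw = parse_data_url("data:image/png;base64," + base64.b64encode(b"\x89PNG").decode())
print(mime, raw)  # image/png b'\x89PNG'
```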
  • Helper function that creates the transformation prompt and calls Gemini to process the image transformation, used by the handler.
    async def process_image_transform(
        source_image: PIL.Image.Image, 
        optimized_edit_prompt: str, 
        original_edit_prompt: str
    ) -> str:
        """Process image transformation with Gemini.
        
        Args:
            source_image: PIL Image object to transform
            optimized_edit_prompt: Optimized text prompt for transformation
            original_edit_prompt: Original user prompt for naming
            
        Returns:
            Path to the transformed image file
        """
        # Create prompt for image transformation
        edit_instructions = get_image_transformation_prompt(optimized_edit_prompt)
        
        # Process with Gemini and return the result
        return await process_image_with_gemini(
            [edit_instructions, source_image],
            original_edit_prompt
        )
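The helper passes the original (untranslated) prompt separately so it can be used when naming the saved file. The excerpt does not show how `process_image_with_gemini` derives filenames, so the scheme below is a purely hypothetical sketch of one plausible approach:

```python
import re
from pathlib import Path

def output_path_for(original_prompt: str, out_dir: str = "output") -> Path:
    """Hypothetical naming scheme: slugify the user's original prompt.
    (The excerpt does not show the server's actual file-naming logic.)"""
    slug = re.sub(r"[^a-z0-9]+", "-", original_prompt.lower()).strip("-")[:40]
    return Path(out_dir) / f"{slug or 'transformed'}.png"

print(output_path_for("Add a red hat to the cat").as_posix())
# output/add-a-red-hat-to-the-cat.png
```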
Behavior 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It discloses that the tool uses Google's Gemini model and saves the transformed image to the server, both useful behavioral details. However, it does not mention rate limits, authentication requirements, file size limits, or error conditions that would matter for a transformation tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured with a clear purpose statement followed by well-organized Args and Returns sections. Every sentence earns its place by providing essential information without redundancy. The formatting with clear section headers enhances readability.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a transformation tool with no annotations and no output schema, the description provides good parameter documentation and purpose clarity. However, it lacks information about the transformation process (e.g., quality, limitations, processing time), error handling, and more detailed behavioral context that would be valuable given the complexity of image transformation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage, the description fully compensates by providing detailed semantic information for both parameters. It specifies the exact format required for encoded_image (including header structure and supported formats) and explains that prompt describes 'desired transformation or modifications.' This adds significant value beyond the bare schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Transform an existing image based on the given text prompt using Google's Gemini model.' It specifies the verb (transform), resource (existing image), method (Gemini model), and distinguishes from sibling tools (transform_image_from_file handles file input instead of encoded data).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implicitly suggests usage context by mentioning 'existing image' and the encoding format, but doesn't explicitly state when to use this tool versus alternatives like transform_image_from_file or generate_image_from_text. It provides technical prerequisites but lacks comparative guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
