
Gemini 2.5 Flash Image MCP

by nanameru

edit_image

Modify images using text prompts to adjust content, style, or composition while preserving original lighting and visual characteristics.

Instructions

Edit an image using a prompt. Provide one input image via base64 or file path.

Input Schema

Name            Required  Description
image           Yes       One input image
prompt          Yes       Describe the edit; the model matches original style and lighting.
saveToFilePath  No        Optional path to save the edited image

Implementation Reference

  • Handler: takes the prompt, image input, and optional save path; calls the shared Gemini API helper; optionally saves the edited image to disk; and returns a content array with a text status line, the image data, and a data URL. The code appears inside the registration below.
  • Input schema: a Zod object defining prompt (string), image (an object with dataBase64/path/mimeType fields), and an optional saveToFilePath. The code appears inside the registration below.
  • src/index.ts:152-180 (registration)
    The mcp.tool call that registers the 'edit_image' tool with its name, description, input schema, and handler function.
    mcp.tool(
      'edit_image',
      'Edit an image using a prompt. Provide one input image via base64 or file path.',
      {
        prompt: z.string().describe('Describe the edit; the model matches original style and lighting.'),
        image: z
          .object({
            dataBase64: z.string().optional().describe('Base64 without data URL prefix'),
            path: z.string().optional().describe('Path to the input image file'),
            mimeType: z.string().optional().describe('image/png or image/jpeg'),
          })
          .describe('One input image'),
        saveToFilePath: z.string().optional().describe('Optional path to save the edited image'),
      },
      async (args) => {
        const { prompt, image, saveToFilePath } = args as { prompt: string; image: InlineImageInput; saveToFilePath?: string };
        const results = await callGeminiGenerate({ prompt, images: [image] });
        const first = results[0];
        const savedPath = await maybeSaveImage(first.imageBase64, first.mimeType, saveToFilePath);
        const dataUrl = `data:${first.mimeType};base64,${first.imageBase64}`;
        return {
          content: [
            { type: 'text', text: `Edited image${savedPath ? ` saved to ${savedPath}` : ''}` },
            { type: 'image', mimeType: first.mimeType, data: first.imageBase64 },
            { type: 'text', text: dataUrl },
          ],
        };
      }
    );
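The handler resolves to a content array of the following shape (a sketch derived from the code above; the type name is illustrative, not defined in the source):

    // Illustrative return type for the edit_image handler (not from the source).
    type EditImageResult = {
      content: Array<
        | { type: 'text'; text: string }                    // status line, then the data URL
        | { type: 'image'; mimeType: string; data: string } // base64-encoded image bytes
      >;
    };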
  • Core helper that performs the Gemini API call for image editing and generation: it converts the inputs to the API's part format, sends the POST request, parses the response, and extracts the returned image data. Used by edit_image and the other image tools.
    async function callGeminiGenerate(request: GenerateRequest): Promise<{ imageBase64: string; mimeType: string }[]> {
      const textPart = { text: request.prompt };
      const imageParts = await toInlineDataParts(request.images);
      const parts = [textPart as any, ...imageParts];
    
      const fetchResponse = await fetch(`${GEMINI_ENDPOINT}?key=${encodeURIComponent(GEMINI_API_KEY)}`, {
        method: 'POST',
        headers: {
          'Content-Type': 'application/json',
        },
        body: JSON.stringify({
          contents: [
            {
              parts,
            },
          ],
        }),
      });
    
      if (!fetchResponse.ok) {
        const text = await fetchResponse.text();
        throw new Error(`Gemini API error ${fetchResponse.status}: ${text}`);
      }
    
      const json = (await fetchResponse.json()) as GeminiGenerateResponse;
      const images: { imageBase64: string; mimeType: string }[] = [];
      const first = json.candidates?.[0]?.content?.parts ?? [];
      for (const part of first) {
        if (part.inlineData?.data) {
          images.push({ imageBase64: part.inlineData.data, mimeType: part.inlineData.mimeType ?? 'image/png' });
        }
      }
    
      if (images.length === 0) {
        // The response can consist solely of text parts (e.g., a refusal), with no image data.
        throw new Error('No image data returned by Gemini API');
      }
    
      return images;
    }
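The helper references several definitions that fall outside this excerpt: GEMINI_ENDPOINT, GEMINI_API_KEY, toInlineDataParts, GenerateRequest, GeminiGenerateResponse, and InlineImageInput. A minimal sketch of what they might look like, assuming the standard generateContent REST endpoint and an API key read from the environment; the exact model path and the real definitions in src/index.ts may differ:

    // Hypothetical supporting definitions; the real ones live elsewhere in src/index.ts.
    const GEMINI_API_KEY = process.env.GEMINI_API_KEY ?? '';
    const GEMINI_ENDPOINT =
      'https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash-image:generateContent';

    interface InlineImageInput {
      dataBase64?: string; // base64 without data URL prefix
      path?: string;       // path to the input image file
      mimeType?: string;   // image/png or image/jpeg
    }

    interface GenerateRequest {
      prompt: string;
      images: InlineImageInput[];
    }

    interface GeminiGenerateResponse {
      candidates?: Array<{
        content?: {
          parts?: Array<{ text?: string; inlineData?: { data?: string; mimeType?: string } }>;
        };
      }>;
    }

    // Converts each input image to the API's inlineData part, reading from disk when a path is given.
    async function toInlineDataParts(images: InlineImageInput[]) {
      const { readFile } = await import('node:fs/promises');
      return Promise.all(
        images.map(async (img) => {
          const data =
            img.dataBase64 ?? (img.path ? (await readFile(img.path)).toString('base64') : undefined);
          if (!data) throw new Error('Each input image needs dataBase64 or path');
          return { inlineData: { data, mimeType: img.mimeType ?? 'image/png' } };
        })
      );
    }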
  • Helper that optionally saves the generated or edited image to a file path, inferring the file extension from the mimeType when the path has none.
    async function maybeSaveImage(base64: string, mimeType: string, targetPath?: string): Promise<string | undefined> {
      if (!targetPath) return undefined;
      const { writeFile } = await import('node:fs/promises');
      const { extname, resolve } = await import('node:path');
      const extension = extname(targetPath) || (mimeType === 'image/jpeg' ? '.jpg' : '.png');
      const resolved = resolve(targetPath.endsWith(extension) ? targetPath : `${targetPath}${extension}`);
      const buffer = Buffer.from(base64, 'base64');
      await writeFile(resolved, buffer);
      return resolved;
    }
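Three hypothetical calls illustrating the path handling (base64Data stands in for the encoded image bytes):

    // Path already has an extension: kept as-is and resolved to an absolute path.
    await maybeSaveImage(base64Data, 'image/png', 'out.png');  // e.g. /cwd/out.png
    // No extension on the path: one is appended based on the mime type.
    await maybeSaveImage(base64Data, 'image/jpeg', 'out');     // e.g. /cwd/out.jpg
    // No target path at all: nothing is written and undefined is returned.
    await maybeSaveImage(base64Data, 'image/png');             // undefined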
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It states the tool edits an image but doesn't mention side effects (e.g., whether it modifies the original file or creates a new one), permissions needed, rate limits, or output format. The mention of 'saveToFilePath' hints at file creation, but this is insufficient for a mutation tool with zero annotation coverage.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise—two sentences that directly state the tool's function and input requirements without any fluff. It's front-loaded with the core purpose, making it efficient and easy to parse.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a mutation tool with no annotations and no output schema, the description is incomplete. It lacks details on behavioral traits (e.g., file handling, error cases), doesn't explain the return value or output format, and provides minimal guidance on usage. Given the complexity of image editing, this leaves significant gaps for the agent.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema fully documents parameters. The description adds minimal value by reiterating 'one input image via base64 or file path,' which is already clear in the schema. It doesn't explain the 'prompt' parameter's role beyond 'Describe the edit,' leaving the agent to rely on the schema's more detailed description.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Edit an image') and the mechanism ('using a prompt'), which distinguishes it from sibling tools like 'generate_image' (creation) and 'style_transfer' (style application). However, it doesn't specify what types of edits are possible beyond 'using a prompt,' making it slightly less specific than a perfect score.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives like 'compose_images' or 'style_transfer.' It mentions providing 'one input image,' but doesn't clarify use cases, prerequisites, or exclusions, leaving the agent to infer usage from tool names alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
