Gemini Flash Image MCP Server

by brunoqgalvao

generate_image

Create images from text prompts, edit existing images using natural language, or combine multiple images with customizable aspect ratios.

Instructions

Generate or edit images using Gemini 2.5 Flash Image (Nano Banana). Supports text-to-image generation, image editing with natural language prompts, and multi-image composition. All generated images include a SynthID watermark.

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| prompt | Yes | Text prompt describing the image to generate or edits to make | — |
| input_images | No | Optional array of file paths to input images for editing or composition | — |
| aspect_ratio | No | Output aspect ratio. Options: 1:1, 2:3, 3:2, 3:4, 4:3, 4:5, 5:4, 9:16, 16:9, 21:9 | 1:1 |
| output_path | Yes | Path where the generated image will be saved (must end in .png) | output.png |
| image_only | No | If true, requests image-only output without text response | false |
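The parameters above can be exercised with a minimal argument object. The sketch below mirrors the validation rules the handler applies (required prompt, `.png` output path, aspect-ratio whitelist); `validateArgs` is a hypothetical helper for illustration, not part of the server's API.

```javascript
// Hypothetical sketch of the argument validation applied by the handler.
const ASPECT_RATIOS = ["1:1", "2:3", "3:2", "3:4", "4:3", "4:5", "5:4", "9:16", "16:9", "21:9"];

function validateArgs({ prompt, aspect_ratio = "1:1", output_path = "output.png" }) {
  if (!prompt) throw new Error("prompt is required");
  if (!output_path.endsWith(".png")) throw new Error("output_path must end with .png");
  if (!ASPECT_RATIOS.includes(aspect_ratio)) {
    throw new Error(`Invalid aspect_ratio. Must be one of: ${ASPECT_RATIOS.join(", ")}`);
  }
  return { prompt, aspect_ratio, output_path };
}

// Example: a text-to-image call with a wide aspect ratio.
const args = validateArgs({
  prompt: "A lighthouse at dusk",
  aspect_ratio: "16:9",
  output_path: "lighthouse.png",
});
```

Omitted optional parameters fall back to the schema defaults, so the only field an agent must always supply alongside `output_path` is `prompt`.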

Implementation Reference

  • index.js:93-208 (handler)
    The core handler function for the 'generate_image' tool. It destructures arguments, validates inputs, constructs the Gemini API payload with optional input images, makes the HTTP request, extracts and saves the generated image as PNG, and returns a formatted text response or error.
    this.server.setRequestHandler(CallToolRequestSchema, async (request) => {
      if (request.params.name !== "generate_image") {
        throw new Error(`Unknown tool: ${request.params.name}`);
      }
    
      const {
        prompt,
        input_images = [],
        aspect_ratio = "1:1",
        output_path = "output.png",
        image_only = false,
      } = request.params.arguments;
    
      if (!prompt) {
        throw new Error("prompt is required");
      }
    
      if (!output_path.endsWith(".png")) {
        throw new Error("output_path must end with .png");
      }
    
      if (!ASPECT_RATIOS.includes(aspect_ratio)) {
        throw new Error(`Invalid aspect_ratio. Must be one of: ${ASPECT_RATIOS.join(", ")}`);
      }
    
      try {
        // Build request parts
        const parts = [{ text: prompt }];
    
        // Add input images if provided
        for (const imagePath of input_images) {
          const imageData = await this.encodeImage(imagePath);
          parts.push(imageData);
        }
    
        // Build API request payload
        const payload = {
          contents: [{ parts }],
          generationConfig: {
            responseModalities: image_only ? ["Image"] : ["Text", "Image"],
            imageConfig: {
              aspectRatio: aspect_ratio,
            },
          },
        };
    
        // Make API request
        const url = `${BASE_URL}?key=${GEMINI_API_KEY}`;
        const response = await fetch(url, {
          method: "POST",
          headers: {
            "Content-Type": "application/json",
          },
          body: JSON.stringify(payload),
        });
    
        if (!response.ok) {
          const errorText = await response.text();
          throw new Error(`API request failed: ${response.status} ${response.statusText}\n${errorText}`);
        }
    
        const result = await response.json();
    
        // Extract and save image
        let textResponse = null;
        let imageSaved = false;
    
        if (result.candidates) {
          for (const candidate of result.candidates) {
            if (candidate.content) {
              for (const part of candidate.content.parts || []) {
                if (part.inlineData) {
                  await this.saveImage(part.inlineData, output_path);
                  imageSaved = true;
                } else if (part.text) {
                  textResponse = part.text;
                }
              }
            }
          }
        }
    
        if (!imageSaved) {
          throw new Error("No image was generated in the response");
        }
    
        // Return success response
        const responseText = [
          `✓ Image generated successfully!`,
          `  Saved to: ${output_path}`,
          `  Aspect ratio: ${aspect_ratio}`,
          textResponse ? `  AI response: ${textResponse}` : null,
        ]
          .filter(Boolean)
          .join("\n");
    
        return {
          content: [
            {
              type: "text",
              text: responseText,
            },
          ],
        };
      } catch (error) {
        return {
          content: [
            {
              type: "text",
              text: `Error generating image: ${error.message}`,
            },
          ],
          isError: true,
        };
      }
    });
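The payload construction step inside the handler can be isolated for clarity: one text part, followed by one `inlineData` part per input image, plus a `generationConfig` selecting the response modalities and aspect ratio. `buildPayload` below is a hypothetical helper that mirrors this logic; it is not an actual export of index.js.

```javascript
// Hypothetical sketch of the Gemini request body assembled by the handler.
function buildPayload(prompt, imageParts, aspectRatio, imageOnly) {
  // Text prompt first, then any encoded input images.
  const parts = [{ text: prompt }, ...imageParts];
  return {
    contents: [{ parts }],
    generationConfig: {
      responseModalities: imageOnly ? ["Image"] : ["Text", "Image"],
      imageConfig: { aspectRatio },
    },
  };
}

// Example: text-only generation, square output, text + image response.
const payload = buildPayload("A red bicycle", [], "1:1", false);
```

Note that `image_only` changes only the requested modalities; the handler still scans every candidate part for `inlineData` when extracting the result.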
  • Input schema defining parameters for generate_image: prompt (required), input_images (optional), aspect_ratio, output_path (required), image_only.
    inputSchema: {
      type: "object",
      properties: {
        prompt: {
          type: "string",
          description: "Text prompt describing the image to generate or edits to make",
        },
        input_images: {
          type: "array",
          items: {
            type: "string",
          },
          description: "Optional array of file paths to input images for editing or composition",
        },
        aspect_ratio: {
          type: "string",
          enum: ASPECT_RATIOS,
          description: "Output aspect ratio. Options: " + ASPECT_RATIOS.join(", "),
          default: "1:1",
        },
        output_path: {
          type: "string",
          description: "Path where the generated image will be saved (must end in .png)",
          default: "output.png",
        },
        image_only: {
          type: "boolean",
          description: "If true, requests image-only output without text response",
          default: false,
        },
      },
      required: ["prompt", "output_path"],
    }
  • index.js:48-91 (registration)
    Tool registration via ListToolsRequestSchema handler, providing name, description, and schema for 'generate_image'.
    this.server.setRequestHandler(ListToolsRequestSchema, async () => ({
      tools: [
        {
          name: "generate_image",
          description:
            "Generate or edit images using Gemini 2.5 Flash Image (Nano Banana). " +
            "Supports text-to-image generation, image editing with natural language prompts, " +
            "and multi-image composition. All generated images include a SynthID watermark.",
          inputSchema: {
            type: "object",
            properties: {
              prompt: {
                type: "string",
                description: "Text prompt describing the image to generate or edits to make",
              },
              input_images: {
                type: "array",
                items: {
                  type: "string",
                },
                description: "Optional array of file paths to input images for editing or composition",
              },
              aspect_ratio: {
                type: "string",
                enum: ASPECT_RATIOS,
                description: "Output aspect ratio. Options: " + ASPECT_RATIOS.join(", "),
                default: "1:1",
              },
              output_path: {
                type: "string",
                description: "Path where the generated image will be saved (must end in .png)",
                default: "output.png",
              },
              image_only: {
                type: "boolean",
                description: "If true, requests image-only output without text response",
                default: false,
              },
            },
            required: ["prompt", "output_path"],
          },
        },
      ],
    }));
  • Helper methods encodeImage and saveImage used by the handler for processing input/output images.
    async encodeImage(imagePath) {
      const absolutePath = resolve(imagePath);
      const imageBuffer = await readFile(absolutePath);
      const base64Data = imageBuffer.toString("base64");
    
      // Determine MIME type from extension
      const mimeTypes = {
        ".jpg": "image/jpeg",
        ".jpeg": "image/jpeg",
        ".png": "image/png",
        ".webp": "image/webp",
        ".gif": "image/gif",
      };
    
      const ext = imagePath.toLowerCase().match(/\.\w+$/)?.[0];
      const mimeType = mimeTypes[ext] || "image/jpeg";
    
      return {
        inlineData: {
          mimeType,
          data: base64Data,
        },
      };
    }
    
    async saveImage(inlineData, outputPath) {
      const absolutePath = resolve(outputPath);
      const imageBuffer = Buffer.from(inlineData.data, "base64");
      await writeFile(absolutePath, imageBuffer);
    }
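The extension-to-MIME lookup inside `encodeImage` can be stated as a small pure function. `mimeTypeFor` below is a standalone restatement for illustration; as in the handler, unknown extensions fall back to `image/jpeg`.

```javascript
// Standalone sketch of the extension-based MIME detection used by encodeImage.
function mimeTypeFor(imagePath) {
  const mimeTypes = {
    ".jpg": "image/jpeg",
    ".jpeg": "image/jpeg",
    ".png": "image/png",
    ".webp": "image/webp",
    ".gif": "image/gif",
  };
  // Lowercase first so ".PNG" and ".png" resolve identically.
  const ext = imagePath.toLowerCase().match(/\.\w+$/)?.[0];
  return mimeTypes[ext] || "image/jpeg";
}
```

Because the fallback is silent, passing an unsupported format (e.g. a TIFF) will be labeled `image/jpeg` in the request, which the API may reject.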
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key traits: the tool can generate or edit images, supports multiple input types (text prompts, input images), and includes a SynthID watermark on outputs. However, it lacks details on rate limits, error conditions, or performance characteristics, leaving some behavioral aspects unspecified.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose in the first sentence, followed by specific capabilities and a critical behavioral note (watermark). It uses two concise sentences with no redundant or extraneous information, making it highly efficient and well-structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (image generation/editing with 5 parameters) and lack of annotations or output schema, the description provides a solid foundation by covering purpose, capabilities, and key behavior (watermark). However, it does not address output format details (e.g., image resolution, file size) or error handling, which could enhance completeness for an AI agent.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, documenting every parameter clearly. The description itself adds little semantic value beyond the schema: it mentions 'text prompts' and 'input images' but does not explain parameter interactions or usage nuances (for example, how input_images changes the tool from generation to editing). A baseline score of 3 is appropriate given the comprehensive schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('generate or edit images') and resource ('images'), identifies the underlying technology ('Gemini 2.5 Flash Image (Nano Banana)'), and lists three distinct capabilities: text-to-image generation, image editing with natural language prompts, and multi-image composition. This is comprehensive and unambiguous.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage contexts by listing capabilities (e.g., use for text-to-image, editing, or composition) but does not provide explicit guidance on when to choose this tool over alternatives or any prerequisites. Since there are no sibling tools, the lack of differentiation is not penalized, but it remains at an implied level without exclusions or best practices.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
