Edit image

edit_image

Edit existing images using a text prompt and an optional mask. Supports multiple OpenAI models. Results are saved to disk and file paths are returned.

Instructions

Edit one or more existing images using a text prompt and optional mask. Supports gpt-image-1.5, gpt-image-1, gpt-image-1-mini, and dall-e-2. Results are saved to disk and file paths are returned.

Input Schema

Name                  Required  Description
prompt                Yes       Description of the desired edit.
images                Yes       Absolute paths to input image files (png/jpg/webp). Up to 16 for GPT Image.
mask                  No        Absolute path to a mask image. Transparent pixels indicate areas to edit. Must match the first input image's dimensions.
model                 No        Model to use. DALL·E 3 does not support edits. Defaults to env DALLE_DEFAULT_MODEL or gpt-image-1.5.
size                  No
quality               No
n                     No
background            No
output_format         No
output_compression    No
input_fidelity        No        GPT Image only. 'high' preserves more of the original image.
user                  No
output_dir            No
filename_prefix       No
return_image_content  No
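
A minimal example call from an MCP client, sketched with the TypeScript SDK's Client.callTool; the client setup, file paths, and argument values here are illustrative assumptions, not part of this page:

    // Assumes an already-connected Client from @modelcontextprotocol/sdk.
    const result = await client.callTool({
      name: "edit_image",
      arguments: {
        prompt: "Replace the sky with a dramatic sunset",
        images: ["/home/me/photos/house.png"], // absolute paths, as the schema requires
        mask: "/home/me/photos/sky-mask.png",  // transparent pixels mark the editable region
        model: "gpt-image-1.5",
        input_fidelity: "high",                // GPT Image only
      },
    });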

Implementation Reference

  • The main handler function `registerEditTool`, which registers the 'edit_image' MCP tool via `server.registerTool(...)`. The handler reads the inputs (prompt, images, mask), validates that the model supports edits, builds `ImageEditParams`, calls OpenAI's `images.edit()`, saves the results to disk, and returns file paths.
    export function registerEditTool(server: McpServer): void {
      server.registerTool(
        "edit_image",
        {
          title: "Edit image",
          description:
            "Edit one or more existing images using a text prompt and optional mask. Supports gpt-image-1.5, gpt-image-1, gpt-image-1-mini, and dall-e-2. Results are saved to disk and file paths are returned.",
          inputSchema,
        },
        async (args) => {
          try {
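            // Resolve the requested model (falling back to the configured default)
            // and reject models that cannot edit, such as dall-e-3.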
            const model = (args.model ?? defaultModel()) as Model;
            const info = MODELS[model];
            if (!info.supportsEdit) {
              return errorContent(new Error(`Model '${model}' does not support image edits.`));
            }
    
            const uploads = await Promise.all(args.images.map((p) => readImageAsUpload(p)));
            const params: ImageEditParams = {
              model,
              prompt: args.prompt,
              image: uploads.length === 1 ? uploads[0]! : uploads,
            };
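            // Optional arguments are forwarded to the API only when the caller set them.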
            if (args.mask) params.mask = await readImageAsUpload(args.mask);
            if (args.size) params.size = args.size as ImageEditParams["size"];
            if (args.quality) params.quality = args.quality as ImageEditParams["quality"];
            if (args.n !== undefined) params.n = args.n;
            if (args.user) params.user = args.user;
    
            if (info.family === "gpt-image") {
              if (args.background) params.background = args.background;
              if (args.output_format) params.output_format = args.output_format;
              if (args.output_compression !== undefined) params.output_compression = args.output_compression;
              if (args.input_fidelity) params.input_fidelity = args.input_fidelity;
            } else {
              params.response_format = "b64_json";
            }
    
            const client = getOpenAI();
            const response = await client.images.edit(params);
            const items = response.data ?? [];
            if (items.length === 0) {
              return errorContent(new Error("OpenAI returned no images."));
            }
    
            const outDir = resolveOutputDir(args.output_dir);
            const seed = `${Date.now()}_edit_${args.prompt}`;
            const saved = await Promise.all(
              items.map(async (item, i) => {
                const extracted = await extractImage(item, response.output_format ?? args.output_format ?? null);
                return saveImage(extracted, outDir, args.filename_prefix ?? "edit", seed, i);
              }),
            );
    
            const lines: string[] = [
              `Edited ${saved.length} image${saved.length === 1 ? "" : "s"} with ${model}.`,
              `Source: ${args.images.join(", ")}${args.mask ? ` (mask: ${args.mask})` : ""}`,
              `Saved to: ${outDir}`,
              "",
              ...saved.map((s, i) => `  [${i}] ${s.path} (${s.mime}, ${s.bytes} bytes)`),
            ];
            if (response.usage) {
              lines.push("", `Usage: ${JSON.stringify(response.usage)}`);
            }
    
            return {
              content: buildContent(lines.join("\n"), saved, args.return_image_content === true),
            };
          } catch (err) {
            return errorContent(err);
          }
        },
      );
    }
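  • The helpers used by the handler (`readImageAsUpload`, `resolveOutputDir`, `extractImage`, `saveImage`, `errorContent`, `buildContent`) are referenced but not shown on this page. A plausible sketch of `readImageAsUpload`, built on the OpenAI SDK's `toFile` helper; the body and MIME mapping are assumptions, not the repository's actual code.
    import { createReadStream } from "node:fs";
    import { basename, extname } from "node:path";
    import { toFile } from "openai";

    // Extension-to-MIME lookup for the formats the tool accepts (png/jpg/webp).
    const MIME: Record<string, string> = {
      ".png": "image/png",
      ".jpg": "image/jpeg",
      ".jpeg": "image/jpeg",
      ".webp": "image/webp",
    };

    // Hypothetical implementation: wraps a file stream as an uploadable object
    // that client.images.edit() accepts for its `image` and `mask` fields.
    async function readImageAsUpload(path: string) {
      const type = MIME[extname(path).toLowerCase()] ?? "application/octet-stream";
      return toFile(createReadStream(path), basename(path), { type });
    }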
  • Zod input schema for the 'edit_image' tool, defining all parameters: prompt, images, mask, model, size, quality, n, background, output_format, output_compression, input_fidelity, user, output_dir, filename_prefix, return_image_content.
    const inputSchema = {
      prompt: z.string().min(1).max(32000).describe("Description of the desired edit."),
      images: z
        .array(z.string())
        .min(1)
        .max(16)
        .describe("Absolute paths to input image files (png/jpg/webp). Up to 16 for GPT Image."),
      mask: z
        .string()
        .optional()
        .describe(
          "Absolute path to a mask image. Transparent pixels indicate areas to edit. Must match the first input image's dimensions.",
        ),
      model: z
        .enum(["gpt-image-1.5", "gpt-image-1", "gpt-image-1-mini", "dall-e-2"])
        .optional()
        .describe("Model to use. DALL·E 3 does not support edits. Defaults to env DALLE_DEFAULT_MODEL or gpt-image-1.5."),
      size: z
        .enum([
          "auto",
          "1024x1024",
          "1536x1024",
          "1024x1536",
          "512x512",
          "256x256",
        ])
        .optional(),
      quality: z.enum(["auto", "low", "medium", "high", "standard"]).optional(),
      n: z.number().int().min(1).max(10).optional(),
      background: z.enum(["transparent", "opaque", "auto"]).optional(),
      output_format: z.enum(["png", "jpeg", "webp"]).optional(),
      output_compression: z.number().int().min(0).max(100).optional(),
      input_fidelity: z
        .enum(["high", "low"])
        .optional()
        .describe("GPT Image only. 'high' preserves more of the original image."),
      user: z.string().optional(),
      output_dir: z.string().optional(),
      filename_prefix: z.string().optional(),
      return_image_content: z.boolean().optional(),
    };
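  • The schema above is a raw Zod shape, which `server.registerTool` accepts directly. To exercise its constraints outside the server, the shape can be wrapped in `z.object()`; the sample values below are illustrative:
    import { z } from "zod";

    const EditArgs = z.object(inputSchema);

    // Valid: a prompt plus at least one image path satisfies the required fields.
    EditArgs.parse({ prompt: "Make it night", images: ["/tmp/in.png"] });

    // Invalid: n is capped at 10, so safeParse reports failure instead of throwing.
    const bad = EditArgs.safeParse({ prompt: "x", images: ["/tmp/in.png"], n: 99 });
    console.log(bad.success); // false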
  • src/server.ts:21-21 (registration)
    Registration call: `registerEditTool(server)` invoked during server creation, which wires the edit_image tool into the MCP server.
    registerEditTool(server);
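  • For context, a minimal bootstrap around that registration call, sketched from the MCP TypeScript SDK; the server name, version, and stdio transport are assumptions rather than the repository's actual setup:
    import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
    import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";

    const server = new McpServer({ name: "openai-images-mcp", version: "1.0.0" });
    registerEditTool(server); // wires the edit_image tool into the server

    // stdio is the usual transport for locally spawned MCP servers.
    await server.connect(new StdioServerTransport());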
  • The `supportsEdit` boolean field in the ModelInfo interface, used by the edit handler to reject models that don't support editing (e.g., dall-e-3).
    supportsEdit: boolean;
  • Example model config showing `supportsEdit: true` for gpt-image-1.5; similar entries exist for gpt-image-1, gpt-image-1-mini, and dall-e-2.
    supportsEdit: true,
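  • Taken together, the two fields above imply a model table roughly like the following; this reconstruction is an assumption based only on what this page shows, not the repository's actual code.
    interface ModelInfo {
      family: "gpt-image" | "dall-e";
      supportsEdit: boolean;
    }

    // dall-e-3 is the one model called out as edit-incapable; the four models
    // the tool accepts all support edits.
    const MODELS: Record<string, ModelInfo> = {
      "gpt-image-1.5": { family: "gpt-image", supportsEdit: true },
      "gpt-image-1": { family: "gpt-image", supportsEdit: true },
      "gpt-image-1-mini": { family: "gpt-image", supportsEdit: true },
      "dall-e-2": { family: "dall-e", supportsEdit: true },
      "dall-e-3": { family: "dall-e", supportsEdit: false },
    };
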
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It mentions saving to disk and returning file paths, but does not disclose other behavioral traits such as destructiveness, required permissions, or model-specific behaviors. The description is minimal.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise at three sentences and front-loads the main action. It could be more structured, but it is efficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 15 parameters, low schema description coverage, no output schema, and the presence of sibling tools, the description is incomplete. It does not cover parameter details or usage contexts, leaving significant gaps for an agent.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 1/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is only 33%, so the description should compensate. It only mentions 'text prompt' and 'optional mask', which are already captured in the schema. No additional meaning is provided for the other 13 parameters.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'edit' and the resource 'existing images' using a text prompt and optional mask. It distinguishes itself from siblings like 'generate_image' by focusing on existing images, but does not contrast them explicitly.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus its siblings. It does not mention alternative tools for generation or variation, leaving the agent without context for selection.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
