seedream_edit_image

Edit existing images via text prompts to transform styles, backgrounds, and attributes. Apply style transfers, swap scenes, modify clothing, and perform virtual try-ons using SeedEdit AI.

Instructions

Edit or modify existing images using ByteDance's Seedream/SeedEdit model.

This tool modifies existing images based on text instructions. It can change
styles, backgrounds, attributes, clothing, and more. Supports single or
multiple image inputs.

Use this when:
- You want to modify or transform an existing image
- You need to change style, background, colors, or attributes
- You want to apply artistic transformations (watercolor, oil painting, etc.)
- You need virtual try-on (clothing on person)
- You want to place objects in different scenes

Common use cases:
- Style transfer: "Convert to anime style", "Make it look like a pencil sketch"
- Background change: "Replace background with a sunset beach"
- Attribute edit: "Change hair color to blonde", "Add sunglasses"
- Virtual try-on: Provide person image + clothing image
- Scene composition: Place products in realistic environments
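As a sketch of how the use cases above might map onto tool arguments (field names follow the input schema below; the `build_edit_request` helper and the example URLs are hypothetical, not part of the tool):

```python
# Hypothetical sketch: assembling seedream_edit_image argument payloads
# for the use cases above. Field names ("prompt", "image") follow the
# tool's input schema; the helper itself is illustrative only.

def build_edit_request(prompt, images, **options):
    """Assemble an arguments dict for a seedream_edit_image call."""
    if not prompt:
        raise ValueError("prompt is required")
    if not images:
        raise ValueError("at least one image is required")
    return {"prompt": prompt, "image": list(images), **options}

# Style transfer: a single input image
style_transfer = build_edit_request(
    "Convert to watercolor painting style",
    ["https://example.com/photo.jpg"],  # placeholder URL
)

# Virtual try-on: person image + clothing image (multiple inputs)
try_on = build_edit_request(
    "Make the person wear the dress from the second image",
    ["https://example.com/person.jpg", "https://example.com/dress.jpg"],
)
```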

Returns:
    JSON with task_id, trace_id, success status, and edited image data
    including image URLs.
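A minimal sketch of handling that return value. Only `task_id`, `trace_id`, success status, and image URLs are documented; the exact nesting used below (`data` / `image_urls`) is an assumption for illustration, not the confirmed response shape:

```python
# Hypothetical result handling. The "data"/"image_urls" nesting is an
# assumed shape; check the actual response before relying on it.
import json

def extract_image_urls(raw):
    """Return image URLs from a (hypothetical) edit result payload."""
    result = json.loads(raw)
    if not result.get("success"):
        raise RuntimeError(f"edit failed, trace_id={result.get('trace_id')}")
    return result.get("data", {}).get("image_urls", [])

# Sample payload with the documented top-level fields
sample = json.dumps({
    "task_id": "t-123",
    "trace_id": "tr-456",
    "success": True,
    "data": {"image_urls": ["https://example.com/out.png"]},
})
```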

Input Schema

| Name | Required | Description | Default |
|------|----------|-------------|---------|
| prompt | Yes | Description of the edit to perform on the image(s). Describe what changes you want. Example: 'Change the background to a beach scene', 'Make the person wear a red dress', 'Convert to watercolor painting style' | — |
| image | Yes | List of image URLs or base64-encoded images to edit. Supports HTTP/HTTPS URLs (publicly accessible) or base64 format (data:image/png;base64,...). Each image must be under 10MB. | — |
| model | No | Model to use for editing. 'doubao-seededit-3-0-i2i-250628' is the dedicated editing model, best for image modification. Other models can also be used for editing when images are provided. | doubao-seededit-3-0-i2i-250628 |
| size | No | Output image resolution: '1K', '2K', '3K', '4K', or 'adaptive'. | 1K |
| seed | No | Random seed for reproducible edits. Range: [-1, 2147483647]; -1 means random. Only works with v3 models. | -1 |
| guidance_scale | No | Prompt weight; higher values make edits follow the prompt more closely. Range: [1, 10]. Only works with v3 models. | 5.5 (for doubao-seededit-3-0-i2i) |
| response_format | No | Response format: 'url' or 'b64_json'. | url |
| watermark | No | Whether to add an AI-generated watermark. | true |
| output_format | No | Output image format: 'jpeg' or 'png'. | jpeg |
| callback_url | No | Optional webhook URL for async result notification. | — |
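The documented constraints above (seed range, guidance_scale range, allowed size values) can be checked client-side before invoking the tool. A minimal sketch; the `validate_options` helper is illustrative, not part of the tool:

```python
# Hypothetical client-side validation of the documented parameter
# constraints before calling seedream_edit_image.

ALLOWED_SIZES = {"1K", "2K", "3K", "4K", "adaptive"}

def validate_options(size="1K", seed=-1, guidance_scale=5.5):
    """Check option values against the documented ranges and defaults."""
    if size not in ALLOWED_SIZES:
        raise ValueError(f"size must be one of {sorted(ALLOWED_SIZES)}")
    if not (-1 <= seed <= 2147483647):
        raise ValueError("seed must be in [-1, 2147483647]")
    if not (1 <= guidance_scale <= 10):
        raise ValueError("guidance_scale must be in [1, 10]")
    return {"size": size, "seed": seed, "guidance_scale": guidance_scale}
```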

Output Schema

| Name | Required | Description | Default |
|------|----------|-------------|---------|
| result | Yes | — | — |
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It successfully documents the return format (JSON with task_id, trace_id, image URLs) and notes version-specific constraints ('Only works with v3 models'). However, it omits critical operational context: whether the operation is destructive, async processing implications (beyond callback_url mention), rate limits, or authentication requirements for the image URLs.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Well-structured with clear section headers (implicit through formatting) and front-loaded purpose statement. The 'Common use cases' section with quoted examples justifies the length. Minor deduction for slightly redundant phrasing ('Edit or modify' in opening, 'modifies existing images' in second paragraph).

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequate for a 10-parameter tool with complex capabilities. The description covers input semantics, output schema (task_id, trace_id, image URLs), and primary use cases. Given the presence of an output schema (per context signals) and 100% parameter coverage, the description successfully provides the necessary contextual layer for an AI agent to invoke this correctly.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

While the schema has 100% description coverage (baseline 3), the description adds valuable semantic context beyond the schema: noting that multiple images support 'virtual try-on' use cases, clarifying that the dedicated editing model is recommended over general models, and providing concrete prompt examples ('Convert to anime style', 'Change hair color to blonde') that illustrate parameter intent.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description opens with a specific verb+resource ('Edit or modify existing images') and explicitly names the underlying technology (ByteDance's Seedream/SeedEdit model). It clearly distinguishes from the sibling 'seedream_generate_image' by emphasizing modification of 'existing images' versus generation from scratch.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit 'Use this when:' section with five specific scenarios (style changes, virtual try-on, scene composition, etc.) and 'Common use cases:' with concrete examples. Lacks explicit negative constraints (e.g., 'Do not use for text-to-image generation'), though the distinction is implied through the focus on existing images.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
