SeedreamMCP
Server Quality Checklist
- This repository includes a README.md file.
- This repository includes a LICENSE file.
- Latest release: v0.1.0
- No tool usage detected in the last 30 days. Usage tracking helps demonstrate server value. Tip: use the "Try in Browser" feature on the server page to seed initial usage.
- This repository includes a glama.json configuration file.
- This server provides 6 tools.
- No known security issues or vulnerabilities reported.
- This server has been verified by its author.
- Add related servers to improve discoverability.
Tool Scores
seedream_list_sizes

- Behavior: 2/5 — Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so the description carries the full burden. The only behavioral disclosure is 'Returns: Formatted list...', which partially covers output format. Missing: idempotency guarantees, caching behavior, rate limits, or side effects. Given that an output schema exists, return-value disclosure is less critical, but other behavioral traits remain undocumented.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness: 5/5 — Is the description appropriately sized, front-loaded, and free of redundancy?
Well-structured with clear sections (purpose, usage conditions, returns). No redundant text. Bullet points for usage scenarios improve scannability. Every sentence earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness: 4/5 — Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The tool has zero parameters and low complexity. An output schema exists (per context signals), so the description appropriately avoids detailing return values beyond the high-level format. Complete for a simple listing utility, though an explicit 'no parameters required' note could reinforce the empty schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters: 4/5 — Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Zero parameters present. Per the scoring rules, 0 params = baseline 4. The schema is an empty object with 100% coverage, requiring no additional semantic clarification in the description.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose: 5/5 — Does the description clearly state what the tool does and how it differs from similar tools?
The description states a specific verb ('List') + resource ('image sizes and resolution options') + scope ('for Seedream'). It clearly distinguishes itself from siblings like seedream_generate_image or seedream_edit_image by focusing on metadata retrieval rather than image manipulation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines: 4/5 — Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides an explicit 'Use this when:' section with three clear scenarios (user asks about sizes, helping choose resolution, understanding options). It lacks an explicit 'when not to use' note or named alternatives (e.g., 'don't use for generation, use seedream_generate_image'), but the context is clear.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
seedream_edit_image

- Behavior: 3/5 — Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It successfully documents the return format (JSON with task_id, trace_id, image URLs) and notes version-specific constraints ('Only works with v3 models'). However, it omits critical operational context: whether the operation is destructive, async processing implications (beyond the callback_url mention), rate limits, and authentication requirements for the image URLs.
- Conciseness: 4/5 — Is the description appropriately sized, front-loaded, and free of redundancy?
Well-structured, with clear section headers (implicit through formatting) and a front-loaded purpose statement. The 'Common use cases' section with quoted examples justifies the length. Minor deduction for slightly redundant phrasing ('Edit or modify' in the opening, 'modifies existing images' in the second paragraph).
- Completeness: 4/5 — Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a 10-parameter tool with complex capabilities. The description covers input semantics, the output schema (task_id, trace_id, image URLs), and primary use cases. Given the presence of an output schema (per context signals) and 100% parameter coverage, the description provides the contextual layer an AI agent needs to invoke this tool correctly.
- Parameters: 4/5 — Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
While the schema has 100% description coverage (baseline 3), the description adds valuable semantic context beyond the schema: noting that multiple images support 'virtual try-on' use cases, clarifying that the dedicated editing model is recommended over general models, and providing concrete prompt examples ('Convert to anime style', 'Change hair color to blonde') that illustrate parameter intent.
- Purpose: 5/5 — Does the description clearly state what the tool does and how it differs from similar tools?
The description opens with a specific verb + resource ('Edit or modify existing images') and explicitly names the underlying technology (ByteDance's Seedream/SeedEdit model). It clearly distinguishes itself from the sibling seedream_generate_image by emphasizing modification of 'existing images' versus generation from scratch.
- Usage Guidelines: 4/5 — Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides an explicit 'Use this when:' section with five specific scenarios (style changes, virtual try-on, scene composition, etc.) and a 'Common use cases:' section with concrete examples. It lacks explicit negative constraints (e.g., 'Do not use for text-to-image generation'), though the distinction is implied by the focus on existing images.
seedream_get_task

- Behavior: 4/5 — Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so the description carries the full burden. It effectively establishes the polling pattern for async operations ('check if a generation/edit is complete') and implies read-only safety through the verb 'Query'. However, it lacks explicit statements about idempotency, rate limits, and error handling for invalid task_ids, which would be helpful given the lack of annotations.
- Conciseness: 4/5 — Is the description appropriately sized, front-loaded, and free of redundancy?
Well-structured, with the purpose front-loaded, followed by usage context, specific bullet points, and a return-value summary. There is minor redundancy between the second sentence ('Use this to check...') and the subsequent bullet points, and the Returns section is somewhat redundant given the existence of an output schema.
- Completeness: 4/5 — Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Appropriately complete for a single-parameter tool with full schema coverage and an existing output schema. It clearly links to sibling tools (noting task_id comes from generate_image or edit_image responses) and explains the async retrieval pattern. It lacks the explicit read-only safety confirmation that would ideally appear in annotations.
- Parameters: 3/5 — Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage for the task_id parameter, establishing the baseline score of 3. The tool description does not add semantic context about the parameter format, validation rules, or examples beyond what the schema already provides.
- Purpose: 5/5 — Does the description clearly state what the tool does and how it differs from similar tools?
The description uses the specific verb 'Query' with a clear resource: 'status and result of a Seedream image generation or edit task'. It distinguishes itself from the siblings seedream_generate_image and seedream_edit_image by specifying that it retrieves results from existing tasks rather than creating new ones, and from list operations by focusing on retrieval of a specific task.
- Usage Guidelines: 4/5 — Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides an explicit 'Use this when:' bullet list with three specific scenarios (checking completion, retrieving URLs, getting full details). However, it does not name the sibling alternative seedream_get_tasks_batch for batch operations or provide explicit 'when not to use' guidance, though the singular 'task' scope is implied.
seedream_get_tasks_batch

- Behavior: 3/5 — Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so the description carries the full disclosure burden. It adds valuable behavioral context by noting the efficiency gains over single requests and specifying the return-value contents ('Status and image information'). However, it lacks explicit safety disclosure (read-only nature), rate limits, and error-handling behavior for invalid task IDs.
- Conciseness: 4/5 — Is the description appropriately sized, front-loaded, and free of redundancy?
Well-structured, with a clear information hierarchy: purpose statement, efficiency rationale, usage guidelines, and return value. Front-loaded with the primary function. Bullet points for usage scenarios are efficient. Minor deduction for slight verbosity in the 'Returns' section given that an output schema exists.
- Completeness: 4/5 — Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a single-parameter tool with an output schema. Covers invocation context (when to use), operational context (efficiency), and output context (status/image info). Could be improved by mentioning batch size limits or not-found behavior, but it is sufficient for agent selection and invocation.
- Parameters: 3/5 — Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage ('List of task IDs to query. Allows querying multiple tasks at once'). The description references 'task IDs' in the context of batch efficiency but does not add semantic details (format constraints, max batch size, validation rules) beyond what the schema already provides. The baseline of 3 is appropriate for high schema coverage.
- Purpose: 5/5 — Does the description clearly state what the tool does and how it differs from similar tools?
The description uses the specific verb 'Query' with the resource 'Seedream image tasks' and the scope 'multiple/at once'. It explicitly distinguishes itself from the sibling seedream_get_task by stating it is 'More efficient than calling seedream_get_task multiple times', clearly establishing the batch vs. single distinction.
- Usage Guidelines: 5/5 — Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides an explicit 'Use this when:' section with three concrete scenarios (multiple pending generations, several images at once, tracking a batch). It directly references the alternative single-task tool, giving clear guidance on tool selection.
seedream_list_models

- Behavior: 3/5 — Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It successfully discloses the return format ('Formatted table') and content (descriptions, capabilities, pricing), but omits other behavioral traits such as rate limiting, caching behavior, and authentication requirements that would complete the picture.
- Conciseness: 5/5 — Is the description appropriately sized, front-loaded, and free of redundancy?
Excellent structure with a clear visual hierarchy: purpose statement first, followed by usage-condition bullet points and the return-value specification. No redundant text; every sentence provides distinct value (purpose, trigger conditions, output format).
- Completeness: 4/5 — Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given that the tool has zero parameters and an output schema exists, the description appropriately explains what the output contains (a formatted table with descriptions) without detailing return values exhaustively. Minor gap: it could mention whether the model list is static or dynamic, but it is adequate for the complexity level.
- Parameters: 4/5 — Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema contains zero parameters. Under the scoring rubric, 0 parameters establishes a baseline score of 4, as there are no parameter semantics to explain beyond the schema itself.
- Purpose: 5/5 — Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb ('List') with a specific resource ('Seedream models') and clarifies the scope ('capabilities and pricing'). It clearly distinguishes itself from siblings like seedream_list_sizes (which likely lists image dimensions) and from the generation/editing tools by focusing on model enumeration rather than image manipulation.
- Usage Guidelines: 4/5 — Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides an explicit 'Use this when:' section with three concrete scenarios (availability queries, model selection help, capability comparison). However, it lacks explicit 'when not to use' guidance and named alternatives (e.g., 'don't use this to check task status, use seedream_get_task instead').
seedream_generate_image

- Behavior: 4/5 — Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses the return format (JSON with task_id, image URLs), default watermarking behavior, asynchronous callback capability, and model-specific constraints (e.g., seed only works with v3). It could improve by explicitly noting the async task nature (implied by task_id and the sibling get_task tools) and any rate limits.
- Conciseness: 5/5 — Is the description appropriately sized, front-loaded, and free of redundancy?
Well-structured with a clear visual hierarchy: one-line summary, capability elaboration, usage conditions, model selection guide, and return specification. No redundant text; every sentence provides decision-making value. Appropriate length for a 12-parameter tool with complex model variants.
- Completeness: 5/5 — Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the high complexity (12 parameters, 4 model versions with different capabilities, conditional features) and 100% schema coverage, the description provides sufficient context. It addresses model selection strategy and usage boundaries without replicating the detailed parameter specs already covered in the schema. An output schema exists, so the brief return summary is adequate.
- Parameters: 4/5 — Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so the baseline is 3. The description adds value through the 'Model selection guide' section, which organizes model options by capability (flagship vs. cost-effective), and through usage guidelines that help agents understand when to set certain parameters (e.g., not using edit models for generation).
- Purpose: 5/5 — Does the description clearly state what the tool does and how it differs from similar tools?
The description immediately states 'Generate an AI image from a text prompt using ByteDance's Seedream model', with a specific verb (generate), resource (AI image), and provider context (ByteDance/Seedream). It distinguishes itself from the sibling seedream_edit_image by explicitly naming it as the alternative for editing existing images.
- Usage Guidelines: 5/5 — Does the description explain when to use this tool, when not to, or what alternatives exist?
Contains explicit 'Use this when:' and 'Do NOT use this when:' sections with clear scenarios (generate from scratch vs. edit/combine images). It specifically names seedream_edit_image as the alternative tool, providing unambiguous guidance on tool selection.
GitHub Badge
Glama performs regular codebase and documentation scans to:
- Confirm that the MCP server is working as expected.
- Confirm that there are no obvious security issues.
- Evaluate tool definition quality.
Our badge communicates server capabilities, safety, and installation instructions.
Card Badge
Copy to your README.md:
Score Badge
Copy to your README.md:
How to claim the server?
If you are the author of the server, you simply need to authenticate using GitHub.
However, if the MCP server belongs to an organization, you need to first add glama.json to the root of your repository.
{
  "$schema": "https://glama.ai/mcp/schemas/server.json",
  "maintainers": [
    "your-github-username"
  ]
}

Then, authenticate using GitHub.
Browse examples.
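Before committing glama.json, it can help to sanity-check the file locally. A minimal sketch of such a check — the `check_glama_config` helper is our own invention, and the authoritative validation is Glama's JSON Schema at the `$schema` URL above:

```python
import json

def check_glama_config(raw: str) -> list:
    """Return a list of problems found in a glama.json document.

    Only checks the one field shown in the example above; the full
    schema may require more.
    """
    problems = []
    try:
        doc = json.loads(raw)
    except json.JSONDecodeError as exc:
        return ["invalid JSON: %s" % exc]
    maintainers = doc.get("maintainers")
    if not isinstance(maintainers, list) or not maintainers:
        problems.append("'maintainers' must be a non-empty list")
    elif not all(isinstance(m, str) and m for m in maintainers):
        problems.append("every maintainer must be a non-empty string")
    return problems
```

An empty return value means the basic shape looks right; anything else lists what to fix before pushing.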
How to make a release?
A "release" on Glama is not the same as a GitHub release. To create a Glama release:
- Claim the server if you haven't already.
- Go to the Dockerfile admin page, configure the build spec, and click Deploy.
- Once the build test succeeds, click Make Release, enter a version, and publish.
This process allows Glama to run security checks on your server and enables users to deploy it.
How to add a LICENSE?
Please follow the instructions in the GitHub documentation.
Once GitHub recognizes the license, the system will automatically detect it within a few hours.
If the license does not appear on the server after some time, you can manually trigger a new scan using the MCP server admin interface.
How to sync the server with GitHub?
Servers are automatically synced at least once per day, but you can also sync manually at any time to instantly update the server profile.
To manually sync the server, click the "Sync Server" button in the MCP server admin interface.
How is the quality score calculated?
The overall quality score combines two components: Tool Definition Quality (70%) and Server Coherence (30%).
Tool Definition Quality measures how well each tool describes itself to AI agents. Every tool receives a Tool Definition Quality Score (TDQS) from 1–5, a weighted average of six dimensions: Purpose Clarity (25%), Usage Guidelines (20%), Behavioral Transparency (20%), Parameter Semantics (15%), Conciseness & Structure (10%), and Contextual Completeness (10%). The server-level definition quality score is calculated as 60% mean TDQS + 40% minimum TDQS, so a single poorly described tool pulls the score down.
Server Coherence evaluates how well the tools work together as a set, scoring four dimensions equally: Disambiguation (can agents tell tools apart?), Naming Consistency, Tool Count Appropriateness, and Completeness (are there gaps in the tool surface?).
Tiers are derived from the overall score: A (≥3.5), B (≥3.0), C (≥2.0), D (≥1.0), F (<1.0). B and above is considered passing.
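The arithmetic above can be sketched in a few lines. Function and dictionary names here are ours, not Glama's; only the weights, the 60/40 blend, the 70/30 split, and the tier cutoffs come from the description:

```python
from statistics import mean

# Per-dimension weights for one tool's TDQS (from the rubric above).
DIM_WEIGHTS = {
    "purpose": 0.25,
    "usage_guidelines": 0.20,
    "behavior": 0.20,
    "parameters": 0.15,
    "conciseness": 0.10,
    "completeness": 0.10,
}

def tool_tdqs(scores: dict) -> float:
    """Weighted 1-5 score for a single tool."""
    return sum(DIM_WEIGHTS[d] * scores[d] for d in DIM_WEIGHTS)

def overall_score(tool_scores: list, coherence: float) -> float:
    """0.7 * definition quality + 0.3 * coherence, where definition
    quality blends mean and minimum TDQS 60/40."""
    tdqs = [tool_tdqs(s) for s in tool_scores]
    definition_quality = 0.6 * mean(tdqs) + 0.4 * min(tdqs)
    return 0.7 * definition_quality + 0.3 * coherence

def tier(score: float) -> str:
    """Map an overall score to a letter tier."""
    for cutoff, letter in [(3.5, "A"), (3.0, "B"), (2.0, "C"), (1.0, "D")]:
        if score >= cutoff:
            return letter
    return "F"
```

For example, a server whose tools all score 4 on every dimension, with coherence 4, comes out at 4.0 overall, tier A; dropping one tool's scores drags the minimum-TDQS term down and can cost a full tier.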
Latest Blog Posts
MCP directory API
We provide all the information about MCP servers via our MCP API.
curl -X GET 'https://glama.ai/api/mcp/v1/servers/AceDataCloud/MCPSeedream'
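The same request can be built from Python. This sketch stops at constructing the request, since the response schema is not documented here; the `server_request` helper is our own wrapper around the documented URL pattern:

```python
from urllib.parse import quote
from urllib.request import Request

BASE = "https://glama.ai/api/mcp/v1"

def server_request(owner: str, repo: str) -> Request:
    """Build a GET request for one server's MCP directory entry."""
    url = "%s/servers/%s/%s" % (BASE, quote(owner), quote(repo))
    return Request(url, method="GET")

req = server_request("AceDataCloud", "MCPSeedream")
```

Pass the resulting request to `urllib.request.urlopen` (or use any HTTP client) to fetch the server's directory entry.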
If you have feedback or need assistance with the MCP directory API, please join our Discord server.