
Server Quality Checklist

Profile completion: 83%

A complete profile improves this server's visibility in search results.
  • Disambiguation 4/5

    Tools have distinct purposes covering the generation lifecycle (inspiration, prompt enhancement, generation, workflow management, preferences). Minor potential confusion between 'comfyui_workflow' (template management) and 'generate_image' (which can use ComfyUI), but descriptions clarify the separation between configuration and execution.

    Naming Consistency 3/5

    Six tools follow a clear verb_noun pattern (enhance_prompt, generate_image, get_inspiration, list_models, manage_preferences, search_gallery), but 'comfyui_workflow' breaks this convention as a noun phrase without an action verb, creating inconsistency in the naming scheme.

    Tool Count 5/5

    Seven tools is an ideal count for this domain, covering inspiration gathering, prompt processing, image generation, model selection, workflow template management, and user preferences without bloat or significant gaps.

    Completeness 4/5

    Covers the core AI design workflow comprehensively: gallery search/inspiration, prompt enhancement, multi-platform generation, model listing, workflow CRUD, and preferences. Minor gaps exist (no generation history management or async job status checking), but the surface supports complete generation workflows.

  • Average 4.1/5 across 7 of 7 tools scored.

    See the tool scores section below for per-tool breakdowns.

  • This repository includes a README.md file.

  • This repository includes a LICENSE file.

  • Latest release: v1.2.7

  • No tool usage detected in the last 30 days. Usage tracking helps demonstrate server value.

    Tip: use the "Try in Browser" feature on the server page to seed initial usage.

  • This repository includes a glama.json configuration file.

  • This server provides 7 tools. View schema
  • No known security issues or vulnerabilities reported.

    Report a security issue

  • This server has been verified by its author.

  • Add related servers to improve discoverability.

Tool Scores

  • comfyui_workflow

    Behavior 3/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    The description aligns with annotations (mentions modify/delete matching destructiveHint=true) but adds minimal behavioral context beyond what structured fields provide. It does not clarify whether delete is permanent, if import overwrites existing workflows, or what data structure the operations return.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is a single, efficiently structured sentence that front-loads the core purpose ('Manage ComfyUI workflow templates') followed by a colon-delimited list of specific actions. Every word serves a distinct purpose with zero redundancy.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the comprehensive schema and clear action enumeration, the description is adequate for tool selection. However, it lacks any description of return values or output formats (list returns names, view returns parameters, etc.), which would be helpful since no output schema is provided.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    With 100% schema description coverage, the schema fully documents all six parameters including the enum values and conditional requirements. The description provides a high-level grouping of actions ('modify settings', 'import from file') but does not add syntax details, examples, or semantic clarifications beyond the excellent schema documentation.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description provides a specific verb (Manage) and resource (ComfyUI workflow templates), then explicitly enumerates all five supported operations. It clearly distinguishes this tool from siblings like generate_image or enhance_prompt by focusing on template management rather than content generation.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    While the description lists available actions, it provides no explicit guidance on when to use this tool versus alternatives (e.g., when to import/modify a workflow vs. using generate_image directly). There are no prerequisites, warnings about destructive operations beyond the implicit 'delete', or conditional logic for choosing between actions.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • manage_preferences

    Behavior 3/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    The annotation readOnlyHint: false confirms the tool performs writes, which aligns with the 'update' verb. The description adds valuable workflow context about initialization at conversation start, but fails to disclose mutation semantics (persistence guarantees, atomicity) or return value structure given the lack of output schema.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Two sentences with zero waste: the first establishes capabilities and scope, the second provides actionable timing guidance. Information is front-loaded and every clause earns its place.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 3/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Adequate for basic invocation but incomplete given the tool's complexity (8 parameters, 4 action modes) and absence of output schema. The description omits what data structure returns from the 'get' action or success indicators for mutations, leaving a significant gap for an agent expecting to consume preference data.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    With 100% schema description coverage, the schema already fully documents all 8 parameters including enum values and conditional logic (e.g., 'set' vs 'add_favorite' contexts). The description lists the preference categories but adds no semantic depth beyond what the schema provides, meeting the baseline for high-coverage schemas.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the dual purpose ('Read or update') and enumerates specific preference resources (style, aspect ratio, model, style notes, favorite prompts). It effectively distinguishes this tool from sibling generation/workflow tools by focusing on user configuration management.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 4/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Provides specific temporal guidance ('Call with action "get" at conversation start') which establishes a clear usage pattern. However, it lacks explicit 'when-not-to-use' guidance or alternatives for preference management (e.g., when to use 'set' vs 'add_favorite').

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • generate_image

    Behavior 1/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    The annotations declare destructiveHint: true, implying the tool destroys or deletes data, but the description frames the tool as purely creative ('Generate an image'). The description fails to explain any destructive behavior (such as overwriting existing files) or reconcile this contradiction, despite providing other behavioral details like automatic file compression.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Four sentences with zero waste: sentence 1 states purpose and scope, sentence 2 provides workflow tips linking to siblings, sentence 3 states a critical model restriction, and sentence 4 gives specific instruction for prompt enhancement. Information is front-loaded and every sentence earns its place.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the high complexity (9 parameters, 3 providers with different behaviors), the description comprehensively covers provider-specific behaviors, file handling for references, and model constraints. It lacks only a description of the return value (image data/URL), which would be helpful given the absence of an output schema.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    With 100% schema coverage, the baseline is 3. The description adds valuable workflow context beyond the schema: it links the prompt parameter to get_inspiration/enhance_prompt, clarifies referenceImages accepts gallery URLs, and provides model-specific constraints (Niji 7 limitations) that inform the model parameter selection.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description opens with a specific verb-noun pair ('Generate an image') and immediately clarifies the supported platforms (MeiGen, ComfyUI, OpenAI-compatible), distinguishing it from sibling tools like comfyui_workflow which manages workflows rather than generating images.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 5/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Provides explicit workflow guidance: recommends using get_inspiration() or enhance_prompt() for prompts, specifies using gallery URLs for referenceImages, and includes a clear when-not-to-use constraint ('Midjourney Niji 7 is for anime/illustration ONLY'). Also specifies the correct style parameter for enhance_prompt when using Niji 7.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • list_models

    Behavior 4/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    The annotation declares readOnlyHint=true, and the description adds valuable behavioral context by disclosing that return data includes 'pricing and capabilities'—critical information for model selection that is not present in the structured fields.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Single sentence of nine words with zero waste. Front-loaded with verb, immediately qualifying the scope (AI image generation), and ending with specific return attributes (pricing and capabilities).

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the low complexity (1 optional param, read-only), no output schema, and clear sibling context, the description is complete. It compensates for missing output schema by specifying return content (pricing/capabilities).

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    With 100% schema description coverage for the 'activeOnly' parameter, the baseline is 3. The description does not explicitly reference the parameter or explain filtering semantics beyond the word 'available', but the schema carries the full burden adequately.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description uses specific verb 'List' with clear resource 'AI image generation models' and distinguishes from siblings like 'generate_image' and 'comfyui_workflow' by focusing on discovery/inspection rather than execution or workflow management.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 3/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    While the purpose is clear, there is no explicit guidance on when to use this versus alternatives, or that it should be called before 'generate_image' to select valid model IDs. Usage is implied but not stated.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • enhance_prompt

    Behavior 4/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    While annotations declare readOnlyHint=true, the description adds valuable behavioral context: 'Free, no API key needed' discloses cost/auth requirements, and 'detailed, high-quality prompt' sets expectations for output quality. It does not contradict the read-only annotation despite using the word 'Transform'.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Three well-structured sentences with zero waste: purpose statement first, usage context second, operational constraints and workflow tips third. Every sentence earns its place by conveying distinct information (function, trigger conditions, cost/workflow).

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    For a 2-parameter tool with complete schema coverage, the description is adequately complete. It provides ecosystem context (integration with gallery tools) and operational constraints (free, no auth). Could explicitly mention the return format (string), but 'detailed, high-quality prompt' provides sufficient implicit context.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    With 100% schema description coverage, the baseline is appropriately 3. The description references the 'prompt' parameter implicitly through the example ('a cat in a garden'), but adds no semantic information about the 'style' parameter or parameter interactions beyond what the schema already provides.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description uses a specific verb ('Transform') and resource ('image generation prompt') to clearly define the tool's function. It distinguishes itself from sibling tools like 'generate_image' by emphasizing it creates prompts rather than final images, and references 'gallery inspiration' (likely 'get_inspiration' or 'search_gallery') as a complementary workflow step.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 4/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Provides explicit when-to-use guidance ('Use when the user provides a brief description... and needs a detailed, high-quality prompt') and suggests workflow integration ('Combine with gallery inspiration'). However, it lacks explicit when-NOT-to-use guidance or direct comparison to alternatives like using raw prompts with generate_image.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • search_gallery

    Behavior 4/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    Substantial value beyond readOnlyHint annotation: discloses that results include image URLs and provides specific rendering instructions ('render them as markdown images'). Also clarifies the semantic search behavior. Does not cover rate limits or pagination details, preventing a 5.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Three sentences with zero waste: defines capability, explains output handling (critical given no output schema), and specifies usage context. Front-loaded with the most important distinction (semantic vs keyword search).

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Compensates well for missing output schema by explaining that results contain image URLs and how to display them. Covers primary use cases and search behavior. Could note that all parameters are optional (0 required), but schema structure makes this evident.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Input schema has 100% description coverage (query, category, limit, offset, sortBy all well-documented). The description does not add parameter-specific guidance beyond what's in the schema, meeting the baseline expectation for high-coverage schemas.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    Excellent specificity: states the tool 'Search[es] AI image prompts with semantic understanding' and distinguishes its capability from keyword matching. The phrase 'visually and conceptually similar results' precisely defines the search behavior, differentiating it from sibling tools like generate_image or get_inspiration.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 4/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Provides explicit when-to-use guidance ('Use when users need inspiration, want to explore styles, or say "generate an image" without a specific idea'), implicitly distinguishing from generate_image. Lacks explicit 'when not to use' or named alternative tools, which would be needed for a perfect score.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • get_inspiration

    Behavior 4/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    Annotations declare readOnlyHint=true, confirming safe retrieval. The description adds valuable behavioral context about output consumption patterns (displaying to users, feeding into generate_image, style transfer usage) that annotations don't cover. Does not mention error cases (e.g., invalid imageId), preventing a 5.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Three sentences efficiently structured: purpose (sentence 1), immediate action (sentence 2), downstream integration (sentence 3). No redundant words. Front-loaded with the core action. Every sentence earns its place by advancing agent understanding.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the single parameter with complete schema coverage and readOnly annotation, the description adequately covers the retrieval behavior and output utilization. Minor gap: doesn't mention what happens if the imageId doesn't exist, but sufficiently complete for a simple lookup tool.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema has 100% coverage with 'imageId' well-described. The description adds semantic context by implying the parameter represents a 'gallery entry' and linking it to 'search_gallery results', helping agents understand this tool consumes output from its sibling. Exceeds baseline 3 by providing provenance context.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description uses a specific verb ('Get') with clear resource ('gallery entry') and scope ('full prompt and all image URLs'). It distinguishes from sibling 'search_gallery' by implying this retrieves specific details after searching, and from 'generate_image' by noting this retrieves prompts rather than creating images.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 5/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Explicitly states when to use ('Show the images to the user as visual examples') and provides clear workflow integration naming siblings: 'used directly with generate_image()' and 'passed as referenceImages for style transfer'. This establishes the tool's position in a multi-step workflow.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

GitHub Badge

Glama performs regular codebase and documentation scans to:

  • Confirm that the MCP server is working as expected.
  • Confirm that there are no obvious security issues.
  • Evaluate tool definition quality.

Our badge communicates server capabilities, safety, and installation instructions.

Card Badge


Copy to your README.md:

Score Badge


Copy to your README.md:

How to claim the server?

If you are the author of the server, you simply need to authenticate using GitHub.

However, if the MCP server belongs to an organization, you need to first add glama.json to the root of your repository.

{
  "$schema": "https://glama.ai/mcp/schemas/server.json",
  "maintainers": [
    "your-github-username"
  ]
}

Then, authenticate using GitHub.

Browse examples.
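Before committing the file, it can be useful to confirm that it parses and has the one field shown above. The sketch below is a shallow local sanity check, not Glama's full schema validation, and the helper name is invented for illustration.

```python
import json

def check_glama_json(text: str) -> list[str]:
    """Verify a glama.json payload has a non-empty list of maintainer usernames."""
    data = json.loads(text)  # raises ValueError on malformed JSON
    maintainers = data.get("maintainers")
    if not isinstance(maintainers, list) or not maintainers:
        raise ValueError("glama.json must list at least one maintainer")
    if not all(isinstance(m, str) for m in maintainers):
        raise ValueError("maintainers must be GitHub usernames (strings)")
    return maintainers
```

For example, `check_glama_json(open("glama.json").read())` returns the maintainer list or raises with a reason.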

How to make a release?

A "release" on Glama is not the same as a GitHub release. To create a Glama release:

  1. Claim the server if you haven't already.
  2. Go to the Dockerfile admin page, configure the build spec, and click Deploy.
  3. Once the build test succeeds, click Make Release, enter a version, and publish.

This process allows Glama to run security checks on your server and enables users to deploy it.

How to add a LICENSE?

Please follow the instructions in the GitHub documentation.

Once GitHub recognizes the license, the system will automatically detect it within a few hours.

If the license does not appear on the server after some time, you can manually trigger a new scan using the MCP server admin interface.

How to sync the server with GitHub?

Servers are automatically synced at least once per day, but you can also sync manually at any time to instantly update the server profile.

To manually sync the server, click the "Sync Server" button in the MCP server admin interface.

How is the quality score calculated?

The overall quality score combines two components: Tool Definition Quality (70%) and Server Coherence (30%).

Tool Definition Quality measures how well each tool describes itself to AI agents. Every tool is scored 1–5 across six dimensions: Purpose Clarity (25%), Usage Guidelines (20%), Behavioral Transparency (20%), Parameter Semantics (15%), Conciseness & Structure (10%), and Contextual Completeness (10%), which combine into a per-tool Tool Definition Quality Score (TDQS). The server-level definition quality score is calculated as 60% mean TDQS + 40% minimum TDQS, so a single poorly described tool pulls the score down.

Server Coherence evaluates how well the tools work together as a set, scoring four dimensions equally: Disambiguation (can agents tell tools apart?), Naming Consistency, Tool Count Appropriateness, and Completeness (are there gaps in the tool surface?).

Tiers are derived from the overall score: A (≥3.5), B (≥3.0), C (≥2.0), D (≥1.0), F (<1.0). B and above is considered passing.
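The aggregation described above can be sketched in a few lines. The dimension weights, the 60/40 mean/min blend, the 70/30 overall split, and the tier cutoffs come directly from the text; the per-tool inputs are illustrative, and this is not Glama's actual implementation.

```python
# Dimension weights for the per-tool Tool Definition Quality Score (TDQS).
TDQ_WEIGHTS = {
    "purpose": 0.25,
    "usage_guidelines": 0.20,
    "behavior": 0.20,
    "parameters": 0.15,
    "conciseness": 0.10,
    "completeness": 0.10,
}

def tool_tdqs(scores: dict[str, float]) -> float:
    """Weighted 1-5 score for one tool across the six dimensions."""
    return sum(TDQ_WEIGHTS[dim] * scores[dim] for dim in TDQ_WEIGHTS)

def server_definition_quality(per_tool: list[dict[str, float]]) -> float:
    """60% mean TDQS + 40% minimum TDQS: one weak tool drags the result down."""
    tdqs = [tool_tdqs(t) for t in per_tool]
    return 0.6 * (sum(tdqs) / len(tdqs)) + 0.4 * min(tdqs)

def overall_score(definition_quality: float, coherence: float) -> float:
    """70% Tool Definition Quality + 30% Server Coherence."""
    return 0.7 * definition_quality + 0.3 * coherence

def tier(score: float) -> str:
    """Map an overall score to a letter tier; B and above is passing."""
    for grade, cutoff in [("A", 3.5), ("B", 3.0), ("C", 2.0), ("D", 1.0)]:
        if score >= cutoff:
            return grade
    return "F"
```

Note how the 40% minimum term works: a server whose tools score 4.0 and 2.0 lands at 0.6 × 3.0 + 0.4 × 2.0 = 2.6, well below the plain mean of 3.0.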


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/jau123/MeiGen-AI-Design-MCP'
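The same lookup can be done from Python's standard library. The URL pattern comes from the curl example above; the shape of the JSON response is not documented here, so treat any specific field access on the returned dict as an assumption.

```python
import json
import urllib.request

def server_url(owner: str, repo: str) -> str:
    """Build the MCP directory API URL for one server, per the curl example."""
    return f"https://glama.ai/api/mcp/v1/servers/{owner}/{repo}"

def fetch_server(owner: str, repo: str) -> dict:
    """Fetch and decode a server's metadata; the response schema is assumed JSON."""
    with urllib.request.urlopen(server_url(owner, repo)) as resp:
        return json.load(resp)
```

For example, `fetch_server("jau123", "MeiGen-AI-Design-MCP")` would retrieve this server's entry.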

If you have feedback or need assistance with the MCP directory API, please join our Discord server