Glama
Ownership verified

Server Details

MCP server for AI Diagram Maker: generate software engineering diagrams from natural language, code, ASCII diagrams, images, or Mermaid. Renders diagrams inline via the MCP apps UI and includes a diagram URL in responses. Works with Cursor, Claude Desktop, Claude Code, and any MCP-compatible AI.

Status: Healthy
Transport: Streamable HTTP

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.

Tool Definition Quality

Score is being calculated. Check back soon.

Available Tools

5 tools
generate_diagram_from_ascii (Grade: A)

Convert an ASCII art diagram into a polished visual diagram. Use this tool when the user has an existing ASCII art representation of a system, flow, or architecture and wants it rendered as a proper diagram. Accepts box-drawing characters, arrow representations (-->, ==>), and plain text layouts. Returns a link to view and edit the generated diagram in the browser.

Parameters (JSON Schema)

prompt (optional): Additional instruction for rendering. Example: "Use a dark theme and add icons"
content (required): Raw ASCII art diagram to convert into a polished visual diagram. Include the full ASCII art as-is, with box-drawing characters, arrows, or plain text layout. Example: +--------+ +--------+ | Client | --> | Server | +--------+ +--------+
diagramType (optional): Preferred diagram type. Leave blank to let the AI infer from the ASCII layout.
isIconEnabled (optional): Set to true when the user asks to include icons in the diagram.
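Assembled from the schema above, a call to this tool travels as a JSON-RPC 2.0 `tools/call` request. The sketch below is illustrative only: the request id and the ASCII content are example values, not taken from this listing.

```python
import json

# Illustrative ASCII art input; any box-drawing or arrow layout is accepted.
ascii_art = (
    "+--------+      +--------+\n"
    "| Client |  --> | Server |\n"
    "+--------+      +--------+"
)

# MCP tool invocations are JSON-RPC 2.0 "tools/call" requests.
request = {
    "jsonrpc": "2.0",
    "id": 1,  # example request id
    "method": "tools/call",
    "params": {
        "name": "generate_diagram_from_ascii",
        "arguments": {
            "content": ascii_art,                        # required
            "prompt": "Use a dark theme and add icons",  # optional
        },
    },
}

payload = json.dumps(request)
```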
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It successfully discloses input handling capabilities (box-drawing characters, arrows) and explicitly states the output behavior ('Returns a link to view and edit'), compensating for the lack of an output schema.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Four well-structured sentences: purpose statement, usage condition, input format specifications, and output description. Every sentence earns its place with no redundancy or filler content.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 100% schema coverage and no output schema, the description appropriately covers the tool's purpose, usage context, input expectations, and return value. It lacks only minor behavioral details, such as rate limits or link expiration, that would be needed for a perfect score without annotations.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so the schema already documents all parameters comprehensively. The description adds general context about accepted input formats (box-drawing, arrows) but does not add significant semantic detail beyond the schema's parameter descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a specific verb ('Convert') and clearly identifies the resource (ASCII art diagram). It mentions 'ASCII art' twice, clearly distinguishing this tool from siblings that handle JSON, images, Mermaid, or plain text inputs.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit when-to-use guidance ('Use this tool when the user has an existing ASCII art representation...'). However, it does not name alternative sibling tools for non-ASCII inputs or give explicit when-not-to-use exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

generate_diagram_from_image (Grade: A)

Convert an image (whiteboard photo, screenshot, hand-drawn sketch) into a clean diagram. Use this tool when the user provides an image URL or base64-encoded image and wants it converted to a proper software engineering diagram. Accepts public image URLs or base64 data URIs (data:image/...;base64,...). Returns a link to view and edit the generated diagram in the browser.

Parameters (JSON Schema)

prompt (optional): Instruction describing what to extract or how to render the diagram. Example: "Convert this whiteboard photo into a clean sequence diagram"
content (required): Either a public image URL or a base64 data URI of the image to convert. Supported formats: JPEG, PNG, GIF, WebP. For a URL: 'https://example.com/whiteboard.png'. For a data URI: 'data:image/png;base64,iVBORw0KGgo...'
diagramType (optional): Preferred output diagram type. Leave blank to let the AI decide based on the image content.
isIconEnabled (optional): Set to true when the user asks to include icons in the diagram.
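Since `content` accepts either a public URL or a base64 data URI, a local image has to be encoded first. A minimal sketch, assuming the image bytes have already been read from disk; the placeholder bytes below are not a real PNG.

```python
import base64

# Placeholder bytes standing in for a real image file; in practice,
# read them from disk, e.g. open("whiteboard.png", "rb").read().
image_bytes = b"\x89PNG\r\n\x1a\n placeholder image data"

# Build the data URI that the content parameter expects.
content = "data:image/png;base64," + base64.b64encode(image_bytes).decode("ascii")

arguments = {
    "content": content,  # required: public URL or data URI
    "prompt": "Convert this whiteboard photo into a clean sequence diagram",
}
```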
Behavior: 4/5

Without annotations, the description carries the full burden. It successfully discloses input format constraints (public URLs or base64 data URIs) and critical output behavior (returns a link to view and edit). It could improve by mentioning error conditions for unsupported image content or data retention policies.

Conciseness: 5/5

Three well-structured sentences: purpose statement (front-loaded), usage condition, and technical I/O details. Zero redundancy; each sentence earns its place without repeating schema fields.

Completeness: 5/5

Given that no output schema exists, the description appropriately explains the return value (link to view/edit). Combined with 100% schema coverage and clear behavioral constraints, the description is complete for this tool's complexity level.

Parameters: 3/5

Schema description coverage is 100%, establishing a baseline of 3. The description adds contextual examples for the content parameter (whiteboard, screenshot, sketch) and reinforces the base64 format syntax, but does not add semantic meaning beyond what the well-documented schema already provides.

Purpose: 5/5

The description uses the specific verb 'Convert' with a clear resource, 'image' (subtyped as whiteboard photo, screenshot, hand-drawn sketch), turned into a 'clean diagram'. The suffix 'from_image', alongside the siblings 'from_ascii/json/mermaid/text', provides clear differentiation from alternatives.

Usage Guidelines: 4/5

Explicitly states 'Use this tool when the user provides an image URL or base64-encoded image and wants it converted to a proper software engineering diagram,' giving clear positive context. Lacks an explicit 'when not to use' or named sibling alternatives, preventing a 5.

generate_diagram_from_json (Grade: A)

Generate a diagram from a JSON structure. Use this tool when the user wants to visualise JSON data such as API responses, database schemas, dependency trees, configuration files, or any structured data. Pass the raw JSON string as content. Returns a link to view and edit the generated diagram in the browser.

Parameters (JSON Schema)

prompt (optional): Instruction for how to interpret or render the JSON. Example: "Show as an entity relationship diagram with cardinality labels"
content (required): A JSON string representing the structure to visualise. This can be API response data, a database schema, a config file, a dependency tree, or any other structured JSON. Example: '{"users": [{"id": 1, "orders": [{"id": 101}]}]}'
diagramType (optional): Preferred diagram type. Defaults to 'erd' for schemas and 'flowchart' for other JSON.
isIconEnabled (optional): Set to true when the user asks to include icons in the diagram.
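Note that `content` must be a JSON string, not a native object, so structured data should be serialised before the call. A hedged sketch using the example structure from the schema above:

```python
import json

# Native structure to visualise; mirrors the example in the schema.
schema = {"users": [{"id": 1, "orders": [{"id": 101}]}]}

arguments = {
    "content": json.dumps(schema),  # required: the JSON passed as a string
    "diagramType": "erd",           # optional; the schema notes 'erd' suits schemas
    "prompt": "Show as an entity relationship diagram with cardinality labels",
}
```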
Behavior: 3/5

No annotations are provided, so the description carries the full disclosure burden. It adds critical behavioral context that it 'Returns a link to view and edit the generated diagram in the browser' rather than returning raw image data. However, it omits mutation characteristics (whether diagrams persist, or are public or private), error handling for malformed JSON, and rate limit implications for a generation tool.

Conciseness: 5/5

Four sentences structured logically: purpose statement, usage conditions, parameter instruction, and return value. Every sentence earns its place with zero redundancy. High information density without repetition of schema details (it references 'content' by name without redefining it). Front-loaded with the core action.

Completeness: 4/5

Appropriately complete for a 4-parameter tool with no output schema. It compensates for the missing output schema by explicitly stating the return behavior (browser link). Given the well-documented input schema, the description provides sufficient high-level context without needing to replicate parameter details or explain return schema fields.

Parameters: 3/5

Schema description coverage is 100%, establishing a baseline of 3. The description adds specific usage guidance for the 'content' parameter ('Pass the raw JSON string as `content`'), reinforcing that a stringified representation is required. It does not elaborate on 'prompt' syntax, 'diagramType' selection logic, or 'isIconEnabled' behavior beyond the schema definitions.

Purpose: 5/5

The description opens with the specific action 'Generate a diagram from a JSON structure', clearly identifying both the verb (generate) and the resource (JSON structure). It effectively distinguishes itself from siblings (generate_diagram_from_text, _ascii, etc.) by enumerating specific JSON use cases, like 'API responses, database schemas, dependency trees', that uniquely signal this tool's domain.

Usage Guidelines: 4/5

Provides clear positive guidance: 'Use this tool when the user wants to visualise JSON data such as...' followed by concrete examples. This enables correct selection when JSON is present. It lacks explicit negative constraints or a direct comparison to sibling alternatives (e.g. 'use generate_diagram_from_text for natural language instead'), but the input specificity makes the boundary clear.

generate_diagram_from_mermaid (Grade: A)

Convert a Mermaid diagram definition into a D2 diagram and return a PNG image. Use this tool when the user has existing Mermaid code (flowchart, sequenceDiagram, erDiagram, etc.) and wants it converted to D2 or rendered as an image. Pass the Mermaid source as content. Returns a link to view and edit the generated diagram in the browser.

Parameters (JSON Schema)

prompt (optional): Optional instruction for layout or styling of the converted diagram.
content (required): A Mermaid diagram definition to convert to D2. Pass the raw Mermaid source (e.g. flowchart, sequenceDiagram, erDiagram). Example: "flowchart LR A --> B --> C"
diagramType (optional): Preferred diagram type for the converted D2 output.
isIconEnabled (optional): Set to true when the user asks to include icons in the diagram.
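The Mermaid source goes into `content` verbatim. A sketch built on the flowchart example from the schema; the prompt and diagramType values here are illustrative.

```python
# Raw Mermaid source, passed through unmodified.
mermaid_source = "flowchart LR\n  A --> B --> C"

arguments = {
    "content": mermaid_source,                # required Mermaid definition
    "prompt": "Keep a left-to-right layout",  # optional styling instruction
    "diagramType": "flowchart",               # optional type for the D2 output
}
```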
Behavior: 4/5

With no annotations provided, the description carries the full burden of behavioral disclosure. It successfully explains the conversion process (Mermaid to D2), the output format (PNG), and the return value (link to view/edit in browser). This covers the essential behavioral traits, though it omits error handling details for invalid Mermaid syntax.

Conciseness: 5/5

The four-sentence structure is efficiently front-loaded: sentence 1 defines the action, sentence 2 specifies the usage context, sentence 3 indicates the parameter mapping, and sentence 4 explains the return value. Zero redundancy; every sentence earns its place.

Completeness: 4/5

Given that the tool has 4 parameters with 100% schema coverage and no output schema, the description adequately compensates by explaining what the tool returns (PNG image and browser link). It covers purpose, usage, parameters, and returns sufficiently for a conversion tool, though error handling details are absent.

Parameters: 3/5

Schema description coverage is 100%, meaning the schema fully documents all four parameters (content, prompt, diagramType, isIconEnabled). The description adds minimal semantic value beyond the schema by explicitly mapping 'Mermaid source' to the 'content' parameter, but otherwise relies on the schema's comprehensive documentation.

Purpose: 5/5

The description uses a specific verb ('Convert') and resources ('Mermaid diagram definition', 'D2 diagram', 'PNG image') to clearly state the transformation. It explicitly distinguishes this tool from siblings by specifying Mermaid source as the input, immediately signaling that this is for Mermaid-to-D2 conversion rather than ASCII, JSON, or text inputs.

Usage Guidelines: 4/5

The description provides explicit when-to-use guidance ('when the user has existing Mermaid code... and wants it converted'). It implicitly distinguishes itself from siblings by specifying the Mermaid input type, though it does not explicitly name sibling tools or provide negative constraints (when NOT to use).

generate_diagram_from_text (Grade: A)

Generate a software engineering diagram from a natural language description. Use this tool when: the user asks to 'create a diagram', 'show me a flowchart', 'visualise the architecture', uses the keyword 'adm' or 'ai diagram maker', or asks for any visual representation of code, systems, processes or data flows. Supported diagram types: flowchart, sequence, ERD, system architecture, network architecture, UML, mindmap, workflow. Returns a link to view and edit the generated diagram in the browser.

Parameters (JSON Schema)

prompt (optional): Additional styling or layout instruction. Example: "Use left-to-right layout with pastel colors"
content (required): Natural language description of the diagram to generate. Be descriptive: include components, relationships, data flows, etc. Example: "Create a microservices architecture with API gateway, auth service, user service, and PostgreSQL database"
diagramType (optional): Preferred diagram type. Leave blank to let the AI infer the best type from your description.
isIconEnabled (optional): Set to true when the user asks to include icons in the diagram.
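For this natural-language entry point, the description itself is the payload, so a descriptive `content` carries most of the weight. A sketch assembled from the schema's own examples, with diagramType left out so the AI infers the type:

```python
arguments = {
    "content": (
        "Create a microservices architecture with API gateway, "
        "auth service, user service, and PostgreSQL database"
    ),
    "prompt": "Use left-to-right layout with pastel colors",  # optional styling
    "isIconEnabled": True,  # set when the user asks for icons
}
```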
Behavior: 4/5

With no annotations provided, the description carries the full burden and explicitly discloses: 'Returns a link to view and edit the generated diagram in the browser.' This reveals the output format (URL), persistence (editable), and access method. It could be improved with failure mode or rate limit disclosure, but it covers the essential behavioral traits.

Conciseness: 4/5

The three-sentence structure flows logically: purpose, then usage triggers, then supported types and output. Information-dense but well front-loaded. The 'Use this tool when' sentence is lengthy but necessary for the comprehensive trigger list.

Completeness: 4/5

Appropriately complete for a 4-parameter generation tool with no output schema or annotations. It compensates for the missing output schema by describing the return behavior (link), and provides sufficient context for parameter usage despite relying on the schema for parameter specifics.

Parameters: 3/5

Schema description coverage is 100%, establishing a baseline of 3. The description lists supported diagram types (flowchart, sequence, etc.), which reinforces the enum in the schema but adds no new parameter constraints or usage patterns beyond the schema definitions.

Purpose: 5/5

The description explicitly states 'Generate a software engineering diagram from a natural language description': a specific verb (Generate), resource (diagram), and input modality (natural language description). This effectively distinguishes it from siblings that accept ASCII, JSON, Mermaid, or image inputs.

Usage Guidelines: 4/5

Provides explicit 'Use this tool when:' triggers, including specific phrases ('create a diagram', 'show me a flowchart'), keywords ('adm', 'ai diagram maker'), and intent indicators ('visual representation of code, systems, processes'). Lacks explicit 'when not to use' guidance or sibling alternative references, but the input modality provides implicit differentiation.
