Glama

Diagrams MCP

Server Details

Generate cloud architecture diagrams, flowcharts, and sequence diagrams.

Status: Healthy
Transport: Streamable HTTP
Repository: ByteOverDev/diagrams-mcp
GitHub Stars: 2

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.


Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions (Grade: A)

Average 4.5/5 across 9 of 9 tools scored. Lowest: 3.7/5.

Server Coherence (Grade: A)
Disambiguation: 5/5

Each tool has a clearly distinct purpose with no ambiguity. The tools are well-separated into equivalence mapping (find_equivalent, list_categories), node browsing (list_providers, list_services, list_nodes, search_nodes), and diagram rendering (render_diagram, render_mermaid, render_plantuml). The descriptions clearly differentiate their functions, and there's no overlap in what they accomplish.

Naming Consistency: 5/5

All tools follow a consistent verb_noun naming pattern using snake_case throughout. The verbs are appropriate and descriptive: 'find', 'list', 'render', and 'search' clearly indicate the action, while nouns like 'equivalent', 'categories', 'diagram', etc., specify the target. There are no deviations in naming conventions across the toolset.

Tool Count: 5/5

With 9 tools, the count is well-scoped for a diagrams server covering equivalence mapping, node discovery, and multiple rendering backends. Each tool earns its place by addressing a specific need in the workflow, from exploring available nodes to generating diagrams in different formats, without being overly sparse or bloated.

Completeness: 5/5

The tool surface is complete for the domain of diagram generation and node management. It covers the full lifecycle: discovering nodes (list_providers, list_services, list_nodes, search_nodes), understanding equivalences (find_equivalent, list_categories), and rendering diagrams in multiple formats (render_diagram, render_mermaid, render_plantuml). There are no obvious gaps, and the tools work together seamlessly as described.

Available Tools

9 tools
find_equivalent (Grade: A)
Read-only · Idempotent

Find cross-provider equivalents for a diagram node by infrastructure role.

Given a node name (e.g. 'EC2', 'Lambda', 'ComputeEngine'), returns the infrastructure role category it belongs to and the equivalent nodes from other providers.

If a node name is ambiguous, use list_categories to see all mapped roles and pick a provider-specific node name.

Args:
- node: Node class name to look up (case-insensitive, e.g. 'EC2', 'lambda').
- target_provider: Optional provider to filter equivalents to (e.g. 'gcp', 'azure', 'aws'). If omitted, all equivalents across all other providers are returned.

Returns: A dict with keys:
- category (str): Infrastructure role category name.
- description (str): Human-readable description of the category.
- source (dict): The matched node with keys node, provider, service, import.
- equivalents (list[dict]): Equivalent nodes, each with keys node, provider, service, import.
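The returned structure can be sketched as a Python dict. The category, node names, and import paths below are illustrative assumptions, not actual server output:

```python
# Illustrative shape of a find_equivalent result; values are assumed
# examples, not guaranteed server output.
result = {
    "category": "virtual_machine",
    "description": "General-purpose virtual machine instances",
    "source": {
        "node": "EC2",
        "provider": "aws",
        "service": "compute",
        "import": "diagrams.aws.compute.EC2",
    },
    "equivalents": [
        {
            "node": "ComputeEngine",
            "provider": "gcp",
            "service": "compute",
            "import": "diagrams.gcp.compute.ComputeEngine",
        },
    ],
}
```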

Parameters (JSON Schema):
- node (required)
- target_provider (optional)

Output Schema (JSON Schema): no output parameters.

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations indicate read-only and idempotent operations, which the description does not contradict. The description adds valuable context beyond annotations: it specifies case-insensitive node lookup, optional filtering by target_provider, and details on handling ambiguous inputs, enhancing behavioral understanding.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured and front-loaded with the core purpose, followed by usage notes, parameter details, and return format. Every sentence adds value without redundancy, making it efficient and easy to parse.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity and 0% schema coverage, the description is complete: it covers purpose, usage, parameters, and return values. Since no output schema is exposed, the detailed return-format explanation carries that weight, ensuring full contextual understanding.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage, the description fully compensates by explaining both parameters: 'node' as a case-insensitive class name with examples, and 'target_provider' as an optional filter with examples and default behavior. This adds essential meaning beyond the bare schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('Find cross-provider equivalents') and resources ('diagram node by infrastructure role'), distinguishing it from siblings like list_categories or search_nodes by focusing on equivalence mapping rather than listing or searching.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicit guidance is provided: use this tool to find equivalents for a node name, and if ambiguous, use list_categories to resolve. It distinguishes when to use this tool versus alternatives, with clear sibling tool references.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_categories (Grade: A)
Read-only · Idempotent

List all infrastructure role categories with their mapped nodes.

Use this to browse all available equivalence mappings, or to disambiguate node names when find_equivalent reports ambiguity.

Returns a list of category dicts, each with:
- category (str): Category identifier (e.g. 'virtual_machine').
- description (str): Human-readable description.
- providers (list[str]): Providers covered by this category.
- nodes (dict): Mapping of provider → list of node names in that category.
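One entry in that list can be sketched as follows; the category name, providers, and nodes are assumed examples, not live output:

```python
# Illustrative shape of a single list_categories entry; all values
# here are assumed examples.
category = {
    "category": "virtual_machine",
    "description": "General-purpose virtual machine instances",
    "providers": ["aws", "gcp"],
    "nodes": {
        "aws": ["EC2"],
        "gcp": ["ComputeEngine"],
    },
}
```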

Parameters (JSON Schema): no parameters.

Output Schema (JSON Schema):
- result (required)

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true and idempotentHint=true, indicating a safe, repeatable read operation. The description adds valuable context beyond annotations by specifying the return format (list of category dicts with detailed structure) and clarifying its purpose for browsing and disambiguation. It doesn't contradict annotations and provides useful behavioral details about output structure.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is perfectly front-loaded with the core purpose in the first sentence, followed by usage guidelines and detailed return format. Every sentence earns its place by providing essential information without redundancy. The structure is logical and efficiently conveys all necessary information in minimal space.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (0 parameters, read-only/idempotent annotations, output schema exists), the description is complete. It explains the purpose, usage guidelines, and detailed return structure, which complements the annotations and output schema. No gaps remain for an agent to understand and invoke this tool correctly.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0 parameters and 100% schema description coverage, the baseline would be 4. The description appropriately doesn't discuss parameters since none exist, and instead focuses on output semantics, which is valuable given the tool's purpose. It effectively compensates for the lack of input parameters by detailing what the tool returns.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('List all infrastructure role categories with their mapped nodes'), identifies the resource ('infrastructure role categories'), and distinguishes it from siblings by mentioning its use for browsing equivalence mappings or disambiguating node names when 'find_equivalent reports ambiguity'. This provides a precise verb+resource combination and explicit sibling differentiation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly states when to use this tool ('Use this to browse all available equivalence mappings, or to disambiguate node names when find_equivalent reports ambiguity'), naming a specific sibling alternative ('find_equivalent') and providing clear context for its application. This offers comprehensive guidance on usage scenarios versus alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_nodes (Grade: A)
Read-only · Idempotent

List available node classes for a provider.service combo.

Args:
- provider: Provider name (e.g. 'aws', 'gcp', 'k8s').
- service: Service category (e.g. 'compute', 'database', 'network').

Returns: List of nodes with keys: name, import, alias_of (optional).
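The result can be sketched as a list of dicts; the node names and import paths below, including the alias entry, are hypothetical examples:

```python
# Illustrative list_nodes result; names and import paths are assumed
# examples, and the second entry shows a hypothetical alias.
nodes = [
    {"name": "EC2", "import": "diagrams.aws.compute.EC2"},
    # alias_of appears only on entries that alias another node
    {"name": "ElasticCompute", "import": "diagrams.aws.compute.EC2", "alias_of": "EC2"},
]
```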

Parameters (JSON Schema):
- service (required)
- provider (required)

Output Schema (JSON Schema):
- result (required)

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true and idempotentHint=true, indicating a safe, repeatable read operation. The description adds valuable context by specifying the return format ('List of nodes with keys: name, import, alias_of (optional)'), which is not covered by annotations. No contradictions exist.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose, followed by well-structured sections for Args and Returns. Every sentence adds value, with no redundant information, making it efficient and easy to parse.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (2 parameters, read-only/idempotent annotations, and an output schema), the description is complete. It covers purpose, parameters with examples, and return format, leaving no gaps for the agent to understand and invoke the tool correctly.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage, the description fully compensates by explaining both parameters: 'provider' as 'Provider name (e.g. 'aws', 'gcp', 'k8s')' and 'service' as 'Service category (e.g. 'compute', 'database', 'network')'. It provides clear examples and meaning beyond the bare schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('List available node classes') and target resource ('for a provider.service combo'), distinguishing it from siblings like list_providers, list_services, and search_nodes. It precisely defines the scope of what is being listed.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context by specifying the required parameters (provider and service), but does not explicitly state when to use this tool versus alternatives like list_categories, list_providers, or search_nodes. No exclusions or prerequisites are mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_providers (Grade: A)
Read-only · Idempotent

List all available diagram providers (aws, gcp, azure, k8s, onprem, etc.).

Use list_providers -> list_services -> list_nodes to browse available node types for a specific provider.
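That drill-down can be sketched with stubbed responses; a real agent would obtain each list from the corresponding tool call, and the values below are assumed examples rather than live output:

```python
# Stubbed stand-ins for the three tool responses; all values are
# assumed examples of what the server might return.
providers = ["aws", "gcp", "azure", "k8s", "onprem"]              # list_providers
services = ["compute", "database", "network"]                      # list_services(provider="aws")
nodes = [{"name": "EC2", "import": "diagrams.aws.compute.EC2"}]    # list_nodes("aws", "compute")

# Typical drill-down: pick a provider, then a service, then a node import.
provider = providers[0]
service = services[0]
node_import = nodes[0]["import"]
```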

Parameters (JSON Schema): no parameters.

Output Schema (JSON Schema):
- result (required)

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true and idempotentHint=true, indicating safe, non-destructive operations. The description adds valuable context by specifying that it lists 'all available' providers and hints at a hierarchical browsing workflow, though it doesn't detail response format or pagination. No contradiction with annotations exists.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is highly concise and well-structured: the first sentence states the purpose with examples, and the second provides clear usage guidance. Every sentence adds value without unnecessary details, making it easy to scan and understand.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (0 parameters, read-only/idempotent annotations, and an output schema), the description is complete. It explains what the tool does, how to use it in context, and references sibling tools, covering all necessary aspects without needing to detail outputs or parameters.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0 parameters and 100% schema description coverage, the baseline is 4. The description reinforces this by not mentioning any parameters, which is appropriate and avoids redundancy. It focuses instead on the tool's purpose and usage context.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb ('List') and resource ('all available diagram providers'), providing specific examples (aws, gcp, azure, k8s, onprem, etc.). It effectively distinguishes this tool from siblings like list_services and list_nodes by focusing on providers rather than services or nodes.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly provides usage guidance by stating 'Use list_providers -> list_services -> list_nodes to browse available node types for a specific provider.' This outlines a workflow and positions this tool as the first step in a sequence, clearly differentiating it from alternatives like list_categories or search_nodes.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_services (Grade: A)
Read-only · Idempotent

List service categories for a provider (e.g. 'aws' -> ['compute', 'database', ...]).

Args:
- provider: Provider name from list_providers (e.g. 'aws', 'gcp', 'k8s').

Parameters (JSON Schema):
- provider (required)

Output Schema (JSON Schema):
- result (required)

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true and idempotentHint=true, indicating safe, repeatable operations. The description adds context about the provider parameter and example output format, but doesn't disclose additional behavioral traits like rate limits, error handling, or pagination. No contradiction with annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose, followed by a clear 'Args:' section. Every sentence earns its place by providing essential information without redundancy, making it efficient and well-structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (one parameter), annotations covering safety, and an output schema (which handles return values), the description is complete enough. It explains the input semantics and purpose, though it could benefit from more sibling differentiation or usage context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description carries the full burden. It adds meaning by explaining the 'provider' parameter as 'Provider name from list_providers' with examples ('aws', 'gcp', 'k8s'), which clarifies usage beyond the bare schema. With only one parameter, this is sufficient for a high score.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'List service categories for a provider' with a specific example. It distinguishes from siblings like 'list_providers' by focusing on categories, but doesn't explicitly differentiate from 'list_categories' which might be similar.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage by referencing 'provider name from list_providers', suggesting a workflow dependency. However, it doesn't explicitly state when to use this tool versus alternatives like 'list_categories' or 'search_nodes', nor does it provide exclusions or clear context for selection.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

render_diagram (Grade: A)
Read-only

Render a mingrammer/diagrams Python snippet to PNG and return the image.

The code must be a complete Python script using from diagrams import ... imports and a with Diagram(...) context manager block.

Use search_nodes to verify node names and get correct import paths before writing code. Read the diagrams://reference/diagram, diagrams://reference/edge, and diagrams://reference/cluster resources for constructor options and usage examples.

Args:
- code: Full Python code using the diagrams library.
- filename: Output filename without extension.
- format: Output format: "png" (default), "svg", or "pdf".
- download_link: If True, store the image on the server and return a temporary download URL path (/images/{token}) instead of the inline image. The link expires after 15 minutes.
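A minimal value for the code argument might look like the snippet below. The node classes are assumed to exist in the installed diagrams version; as the description advises, verify names with search_nodes before rendering:

```python
# A minimal script to pass as `code`; EC2 and RDS are assumed valid
# node classes (verify with search_nodes before use).
code = """
from diagrams import Diagram
from diagrams.aws.compute import EC2
from diagrams.aws.database import RDS

with Diagram("web_service", show=False):
    EC2("web") >> RDS("db")
"""
```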

Parameters (JSON Schema):
- code (required)
- format (optional, default: png)
- filename (optional, default: diagram)
- download_link (optional)

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds valuable behavioral context beyond the readOnlyHint annotation. It explains that the tool generates and returns an image, details the download_link behavior (server storage, temporary URL with 15-minute expiry), and specifies output format options. However, it doesn't mention rate limits or error handling, keeping it from a perfect score.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded: the first sentence states the core purpose, followed by usage guidelines and parameter details. While efficient, the Args section could be more integrated into the flow, and some sentences (e.g., about reference resources) are slightly verbose but still earn their place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (rendering code to images) and lack of output schema, the description is largely complete. It covers purpose, usage, parameters, and behavioral details like download_link expiry. However, it doesn't explain the return format (e.g., image data vs. URL structure) or error cases, leaving minor gaps.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage, the description compensates well by explaining all four parameters in the Args section. It clarifies code must be 'complete Python script', filename is 'without extension', format has three options with a default, and download_link triggers server storage with a temporary URL. This adds significant meaning beyond the bare schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('Render', 'return the image') and resources ('mingrammer/diagrams Python snippet', 'PNG'). It distinguishes from sibling tools like render_mermaid and render_plantuml by specifying the diagrams library, making it unambiguous.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool: it mentions prerequisites like using search_nodes to verify node names and reading reference resources for examples. It also distinguishes from alternatives by specifying the diagrams library context, though it doesn't explicitly name when-not scenarios.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

render_mermaid (Grade: A)
Read-only

Render a Mermaid diagram definition and return the image with metadata.

The definition should be valid Mermaid syntax (e.g. flowchart, sequence, class, ER, state, or Gantt diagram).

Returns a list of content blocks: the rendered image plus a JSON text block with metadata including a mermaid.live edit link for opening the diagram in a browser editor.

Args:
- definition: Mermaid diagram definition text.
- filename: Output filename without extension.
- format: Output format: "png" (default), "svg", or "pdf".
- download_link: If True, store the image on the server and return a temporary download URL path (/images/{token}) instead of the inline image. The link expires after 15 minutes.
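A minimal definition might be a small flowchart like the one below; the node names are illustrative:

```python
# A minimal Mermaid flowchart to pass as `definition`; the graph
# itself is an illustrative example.
definition = """
flowchart LR
    client[Client] --> api[API]
    api --> db[(Database)]
"""
```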

Parameters (JSON Schema):
- format (optional, default: png)
- filename (optional, default: diagram)
- definition (required)
- download_link (optional)

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations provide readOnlyHint=true, indicating a safe read operation. The description adds valuable behavioral context beyond annotations: it specifies the return format (list of content blocks with image and JSON metadata), mentions a mermaid.live edit link, and details server-side behavior for download_link (temporary URL with 15-minute expiration). This compensates well for the lack of output schema.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured and front-loaded: the first sentence states the core purpose, followed by syntax requirements, return format, and parameter details. Every sentence adds value without redundancy, and the parameter explanations are efficiently organized in bullet-like format under 'Args:'.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (4 parameters, no output schema, 0% schema coverage), the description is highly complete. It covers purpose, usage context, behavioral details (return format, edit link, download behavior), and full parameter semantics. The annotations provide safety info, and the description fills all other gaps effectively.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage, the description fully compensates by explaining all 4 parameters in detail. It clarifies that 'definition' requires valid Mermaid syntax, 'filename' excludes extension, 'format' has specific options with a default, and 'download_link' triggers server storage with a temporary URL. This adds essential meaning beyond the bare schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('Render a Mermaid diagram definition and return the image with metadata') and distinguishes it from siblings by specifying Mermaid syntax (vs. PlantUML in render_plantuml). It explicitly mentions the resource (Mermaid diagram) and output (image with metadata).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool (for rendering Mermaid diagrams) and implies alternatives by mentioning specific diagram types (flowchart, sequence, etc.). However, it does not explicitly state when NOT to use it or directly compare to sibling tools like render_plantuml, which handles a different diagram syntax.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

render_plantuml (Grade: A)
Read-only

Render a PlantUML diagram definition and return the image.

The definition should be valid PlantUML syntax wrapped in @startuml/@enduml (sequence, class, component, activity, state, deployment, etc.).

Args:
- definition: PlantUML diagram definition text.
- filename: Output filename without extension.
- format: Output format: "png" (default) or "svg". PDF is not supported (requires Batik/FOP).
- download_link: If True, store the image on the server and return a temporary download URL path (/images/{token}) instead of the inline image. The link expires after 15 minutes.
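A minimal definition wrapped in @startuml/@enduml might look like this sequence diagram; the participants are illustrative:

```python
# A minimal PlantUML sequence diagram to pass as `definition`; the
# participants and messages are illustrative examples.
definition = """
@startuml
Alice -> Bob: request
Bob --> Alice: response
@enduml
"""
```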

Parameters (JSON Schema):
- format (optional, default: png)
- filename (optional, default: diagram)
- definition (required)
- download_link (optional)

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations provide readOnlyHint=true, indicating a safe read operation. The description adds valuable behavioral context beyond annotations: it specifies that PDF is not supported (requires Batik/FOP), and that download links expire after 15 minutes. This discloses limitations and temporal behavior not covered by annotations, though it could mention more about error handling or performance.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
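The temporal behavior called out here (store on the server, return a /images/{token} path, expire after 15 minutes) could be modeled roughly as follows. This is a sketch assuming an in-memory token store; the server's actual storage and token format are not documented.

```python
import secrets

# Minimal sketch of a temporary download-link store with a 15-minute TTL,
# matching the /images/{token} behavior described above. Assumption: an
# in-memory dict; the real server's storage is not documented.

TTL_SECONDS = 15 * 60
_store = {}  # token -> (image_bytes, expiry_timestamp)

def store_image(image_bytes, now):
    """Store rendered image bytes and return a download URL path."""
    token = secrets.token_urlsafe(16)
    _store[token] = (image_bytes, now + TTL_SECONDS)
    return f"/images/{token}"

def fetch_image(token, now):
    """Return image bytes for a token, or None if missing or expired."""
    entry = _store.get(token)
    if entry is None:
        return None
    image_bytes, expires_at = entry
    if now > expires_at:
        del _store[token]  # expired links are gone for good
        return None
    return image_bytes
```

Passing the clock in explicitly (`now`) keeps the expiry logic testable; a real server would use the current time.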

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured and appropriately sized, with a clear purpose statement followed by a parameter breakdown. Every sentence adds value, such as syntax requirements and format limitations. It could be slightly more front-loaded by moving key constraints earlier, but overall it's efficient with minimal waste.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (rendering diagrams with multiple parameters) and lack of output schema, the description is largely complete. It covers purpose, parameters, and behavioral traits like expiration and format support. However, it does not detail the return format (e.g., image data vs. URL structure) or error cases, leaving some gaps in context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage, the description fully compensates by explaining all parameters. It defines 'definition' as 'PlantUML diagram definition text', 'filename' as 'Output filename without extension', 'format' with details on default and unsupported options, and 'download_link' with server storage and expiration behavior. This adds crucial meaning beyond the bare schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Render a PlantUML diagram definition and return the image.' It specifies the verb ('Render'), resource ('PlantUML diagram definition'), and outcome ('return the image'). It distinguishes from siblings like 'render_mermaid' by explicitly mentioning PlantUML syntax and formats.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context by stating the definition 'should be valid PlantUML syntax wrapped in @startuml/@enduml' and listing diagram types. However, it does not explicitly state when to use this tool versus alternatives like 'render_mermaid' or 'render_diagram', nor does it provide exclusions or prerequisites beyond syntax requirements.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_nodes: A
Read-only, Idempotent

Search for diagram nodes by keyword across all providers and services.

For targeted browsing when you know the provider, use list_providers -> list_services -> list_nodes instead.

Args: query: Search term (case-insensitive substring match).

Returns: List of matching nodes with keys: node, provider, service, import, alias_of (optional). Sorted by relevance: exact match first, then prefix, then substring.
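The matching and ranking rules quoted above (case-insensitive substring matching, sorted exact match first, then prefix, then substring) can be sketched as follows. The node names are illustrative, not the server's actual catalog, and this is not the server's real implementation.

```python
# Sketch of the relevance ordering described above: case-insensitive
# matching, with exact matches first, then prefix matches, then other
# substring matches. Illustrative only; not the server's actual code.

def search_nodes(query, nodes):
    q = query.lower()

    def rank(name):
        n = name.lower()
        if n == q:
            return 0   # exact match
        if n.startswith(q):
            return 1   # prefix match
        return 2       # substring match elsewhere

    matches = [n for n in nodes if q in n.lower()]
    # Tie-break alphabetically within each relevance tier.
    return sorted(matches, key=lambda n: (rank(n), n.lower()))

nodes = ["Lambda", "LambdaFunction", "StepFunctions", "EKS"]
print(search_nodes("lambda", nodes))  # -> ['Lambda', 'LambdaFunction']
```

The exact match sorts ahead of the prefix match, and names that merely contain the query somewhere in the middle would come last.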

Parameters (JSON Schema)

Name    Required   Description   Default
query   Yes

Output Schema

Name     Required   Description
result   Yes
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations provide readOnlyHint=true and idempotentHint=true, indicating safe, repeatable operations. The description adds valuable behavioral context beyond annotations: it specifies the search scope ('across all providers and services'), matching behavior ('case-insensitive substring match'), and sorting logic ('exact match first, then prefix, then substring'). No contradictions with annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured and front-loaded with the core purpose, followed by usage guidelines, parameter details, and return format. Every sentence adds value without redundancy, making it efficient and easy to parse.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (1 parameter, read-only/idempotent annotations, output schema exists), the description is complete. It covers purpose, usage, parameter semantics, and return format, with the output schema handling return values. No gaps are evident for effective agent use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description carries full burden. It clearly explains the single parameter 'query' as a 'Search term (case-insensitive substring match)', adding meaningful semantics about case sensitivity and matching type that aren't in the schema. This compensates well for the schema gap.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Search for diagram nodes by keyword') and resource ('across all providers and services'), distinguishing it from sibling tools like list_nodes which presumably lists nodes within a specific service. It uses precise verbs and scope.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly provides when to use this tool ('Search for diagram nodes by keyword across all providers and services') and when not to ('For targeted browsing when you know the provider, use list_providers -> list_services -> list_nodes instead'), naming the alternative workflow. This gives clear guidance on tool selection.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
