Glama

Server Details

AI-powered diagrams, mind maps, flowcharts on a free unlimited collaborative whiteboard

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

(Diagram: MCP client -> Glama gateway -> MCP server)

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.

Tool Definition Quality

Score is being calculated. Check back soon.

Available Tools

13 tools
add_elements (Grade: A)

Main method for creating canvas and adding elements. ⚠️ IMPORTANT: Call get_guide first and follow its instructions! If room_url is NOT provided - creates a NEW canvas and returns room_url. If room_url IS provided - adds elements to that canvas. IMPORTANT: When creating new canvas, ALWAYS include room_url in your response to the user! Element types: rectangle, ellipse, diamond, text, arrow, line. TEXT IN SHAPES: use containerId on text element pointing to shape id. ARROWS: Position at EDGE of source shape. Auto-bound within 30px. Colors: strokeColor, backgroundColor (hex).

Parameters (JSON Schema)
timeout (optional): Room lifetime in seconds (default: 3600, max: 86400). Only used when creating a new canvas.
elements (required): Array of element definitions to create.
room_url (optional): Room URL or ID. If not provided, creates a new canvas.

Output Schema (JSON Schema)
url (optional): Canvas room URL.
isNew (optional): Whether a new canvas was created.
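Putting the schema and description together, an add_elements call might be assembled as below. This is a sketch only: the element fields id, x, y, width, and height are assumptions, since the tool description names only the types, containerId, strokeColor, and backgroundColor.

```python
# Hypothetical add_elements arguments, following the schema above.
rect_id = "rect-1"

arguments = {
    # room_url omitted: the server creates a NEW canvas and returns room_url,
    # which the agent must relay to the user.
    "timeout": 3600,  # room lifetime in seconds; only applies to new canvases
    "elements": [
        {
            "id": rect_id,           # assumed field name
            "type": "rectangle",
            "x": 100, "y": 100, "width": 200, "height": 80,  # assumed fields
            "strokeColor": "#1e1e1e",      # colors are hex strings
            "backgroundColor": "#a5d8ff",
        },
        {
            # Text inside a shape: containerId points at the shape's id
            "id": "label-1",
            "type": "text",
            "text": "Start",
            "containerId": rect_id,
        },
    ],
}
```

If room_url were supplied instead, the same elements array would be added to that existing canvas.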
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds significant behavioral context beyond annotations. Annotations indicate it's not read-only, not destructive, and open-world. The description adds: prerequisite (call get_guide first), dual-mode behavior (create vs. add), critical response requirement (include room_url), element type constraints, text-in-shapes technique, arrow positioning rules, auto-binding behavior, and color format details. No contradiction with annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the main purpose and critical warnings, but becomes dense with implementation details (element types, text binding, arrow rules, colors). Some sentences could be more concise, and the structure mixes high-level guidance with specific technical constraints in a single paragraph.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (dual create/add behavior, many element types, binding rules) and the presence of both comprehensive annotations and an output schema, the description provides complete contextual guidance. It covers prerequisites, behavioral modes, critical response requirements, element constraints, and implementation techniques - addressing what structured fields don't capture.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the schema already documents all parameters thoroughly. The description adds some semantic context: it explains the conditional logic of room_url (creates new canvas if not provided), mentions timeout is 'only used when creating new canvas,' and provides element type examples. However, most parameter details remain in the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Main method for creating canvas and adding elements.' It specifies the dual behavior based on room_url presence (create new canvas vs. add to existing). However, it doesn't explicitly differentiate from sibling tools like 'add_elements_from_mermaid' or 'update_elements' beyond being the 'main method.'

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit usage guidance: '⚠️ IMPORTANT: Call get_guide first and follow its instructions!' It specifies when to use (creating new canvas when room_url not provided, adding elements when provided) and includes critical behavioral instructions like 'ALWAYS include room_url in your response to the user when creating new canvas.'

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

add_elements_from_mermaid (Grade: A)

Add elements from Mermaid diagram. ⚠️ IMPORTANT: Call get_guide first and follow its instructions! If room_url is NOT provided - creates a NEW canvas and returns room_url. If room_url IS provided - adds diagram elements to that canvas. IMPORTANT: When creating new canvas, ALWAYS include room_url in your response to the user! Supports Flowchart, Sequence, and Class diagrams. FLOWCHART EXAMPLE: "flowchart TD\n A[Start] --> B{Decision}\n B -->|Yes| C[OK]\n B -->|No| D[Cancel]" SEQUENCE EXAMPLE: "sequenceDiagram\n Alice->>Bob: Hello\n Bob-->>Alice: Hi" CLASS EXAMPLE: "classDiagram\n class Animal{\n +name: string\n +eat()\n }\n Animal <|-- Dog"

Parameters (JSON Schema)
config (optional): Optional Mermaid configuration.
mermaid (required): Mermaid diagram definition (flowchart, sequence, or class diagram).
timeout (optional): Room lifetime in seconds (default: 3600, max: 86400). Only used when creating a new canvas.
room_url (optional): Room URL or ID. If not provided, creates a new canvas.

Output Schema (JSON Schema)
url (optional): Canvas room URL.
isNew (optional): Whether a new canvas was created.
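The flowchart example from the tool description can be turned into call arguments like this. The room_url value is a placeholder to illustrate the "add to existing canvas" mode; omitting it would create a new canvas instead.

```python
# Hypothetical add_elements_from_mermaid arguments. The Mermaid source is
# the flowchart example quoted in the description above.
mermaid_src = (
    "flowchart TD\n"
    "  A[Start] --> B{Decision}\n"
    "  B -->|Yes| C[OK]\n"
    "  B -->|No| D[Cancel]"
)

arguments = {
    "mermaid": mermaid_src,
    "room_url": "<existing room URL or ID>",  # placeholder; omit to create a new canvas
}
```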
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds valuable behavioral context beyond annotations: it explains the conditional behavior based on room_url (creates new canvas vs adds to existing), emphasizes the importance of returning room_url to users, and provides specific examples of supported diagram syntax. While annotations cover basic safety (destructiveHint: false), the description adds practical implementation details.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with important warnings upfront and clear sections. While slightly lengthy due to including multiple diagram examples, every sentence serves a purpose: establishing prerequisites, explaining conditional behavior, emphasizing user communication requirements, and providing syntax guidance. The information density is high with minimal waste.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (conditional behavior, multiple diagram types, interaction with other tools) and the presence of both comprehensive annotations and an output schema, the description provides excellent contextual completeness. It covers prerequisites, behavioral conditions, supported formats with examples, and user communication requirements - addressing everything an agent needs to use this tool effectively.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the input schema already documents all parameters thoroughly. The description adds minimal parameter semantics beyond the schema - it clarifies the conditional behavior of room_url and mentions timeout is 'only used when creating new canvas,' but doesn't significantly enhance understanding of individual parameters beyond what the schema provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Add elements from Mermaid diagram' with specific details about creating new canvases or adding to existing ones. It distinguishes itself from siblings by focusing on Mermaid diagram parsing rather than general element manipulation like 'add_elements' or 'update_elements'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit usage instructions: 'Call get_guide first and follow its instructions!' and clarifies when to use based on room_url presence. It also specifies supported diagram types (Flowchart, Sequence, Class) with concrete examples, giving clear context for when this tool is appropriate versus other element manipulation tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

align_elements (Grade: A)

Align elements on a canvas. Requires room_url from add_elements.

Parameters (JSON Schema)
room_url (required): Room URL (from create_canvas) or room ID to operate on.
alignment (required): Alignment direction.
elementIds (required): Array of element IDs to align.

Output Schema (JSON Schema)
url (optional): Canvas room URL.
isNew (optional): Whether a new canvas was created.
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate this is a non-read-only, non-destructive operation (readOnlyHint=false, destructiveHint=false). The description adds minimal behavioral context by mentioning the prerequisite 'room_url from add_elements,' which suggests a dependency, but doesn't elaborate on effects like whether alignment is relative to canvas or other elements, or if it modifies element properties permanently.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise with just two sentences, front-loading the core purpose and following with a prerequisite. There is no wasted text, and every part serves a clear function, making it efficient and well-structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (3 parameters, a mutation operation), with annotations covering safety aspects and an output schema present, the description is reasonably complete. It states the purpose and a key prerequisite, but could benefit from more detail on alignment behavior or interaction with sibling tools to fully guide the agent.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with clear documentation for all three parameters (room_url, alignment, elementIds). The description doesn't add any meaningful semantic details beyond what the schema provides, such as explaining alignment behavior or element ID formats, so it meets the baseline for high schema coverage without extra value.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('align elements') and the target ('on a canvas'), providing a specific verb+resource combination. However, it doesn't explicitly differentiate from sibling tools like 'distribute_elements' or 'update_elements' that might also manipulate element positioning, missing full sibling distinction.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides some usage context by stating 'Requires room_url from add_elements,' which implies a prerequisite and hints at workflow sequencing. However, it doesn't specify when to use this tool versus alternatives like 'distribute_elements' or 'update_elements' for positioning tasks, leaving usage guidance incomplete.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

delete_elements (Grade: A, Destructive)

Delete elements from a canvas. Requires room_url from add_elements. Pass ids array of element IDs to delete.

Parameters (JSON Schema)
ids (required): Array of element IDs to delete.
room_url (required): Room URL (from create_canvas) or room ID to operate on.

Output Schema (JSON Schema)
url (optional): Canvas room URL.
isNew (optional): Whether a new canvas was created.
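A minimal sketch of delete_elements arguments, using placeholder values: both parameters are required, and because the tool is flagged Destructive, the ids should come from a prior add_elements result rather than be guessed.

```python
# Hypothetical delete_elements arguments; values are placeholders.
arguments = {
    "room_url": "<room URL from add_elements>",  # placeholder
    "ids": ["rect-1", "label-1"],  # IDs returned by earlier element creation
}
```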
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate destructiveHint=true, readOnlyHint=false, and openWorldHint=false, covering safety and mutability. The description adds valuable context by specifying that it deletes elements (reinforcing destructiveness) and mentions the requirement for room_url from add_elements, which clarifies dependencies beyond annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core action ('Delete elements from a canvas') and uses only two concise sentences that each add necessary information (prerequisites and parameter guidance). There is no wasted text, making it highly efficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The tool is a destructive operation with two parameters; annotations cover safety and mutability, and an output schema exists, so return values need not be explained. The description complements this by stating the action, prerequisites, and parameter usage, making it complete for agent understanding.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with clear descriptions for both parameters (ids and room_url). The description adds minimal value by restating 'ids array of element IDs to delete' and mentioning room_url's source, but does not provide additional semantics beyond what the schema already documents.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Delete elements') and the resource ('from a canvas'), making the purpose immediately understandable. It distinguishes itself from siblings like 'add_elements' or 'update_elements' by specifying deletion, avoiding ambiguity.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit context for when to use this tool by mentioning 'Requires room_url from add_elements', which helps the agent understand prerequisites. However, it does not specify when not to use it or name alternatives (e.g., 'update_elements' for modifications instead of deletion), leaving some guidance gaps.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

distribute_elements (Grade: A)

Lay out elements sequentially without overlapping, with a 10% gap between each pair. First element stays as anchor, others are placed after it. Requires room_url from add_elements.

Parameters (JSON Schema)
room_url (required): Room URL (from create_canvas) or room ID to operate on.
direction (required): Distribution direction.
elementIds (required): Array of element IDs to distribute.

Output Schema (JSON Schema)
url (optional): Canvas room URL.
isNew (optional): Whether a new canvas was created.
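A sketch of distribute_elements arguments based on the description: the first ID acts as the anchor and stays in place, and the rest are laid out after it with a 10% gap. The "horizontal" value for direction is an assumption, since the schema only says "Distribution direction" without enumerating values.

```python
# Hypothetical distribute_elements arguments; values are placeholders.
arguments = {
    "room_url": "<room URL from add_elements>",  # placeholder
    "direction": "horizontal",  # assumed value; not enumerated in the schema
    "elementIds": ["rect-1", "rect-2", "rect-3"],  # rect-1 is the anchor
}
```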
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations indicate this is a non-read-only, non-destructive, closed-world tool. The description adds behavioral context: it specifies a 10% gap between elements, an anchor element that stays fixed, and sequential placement without overlapping. This goes beyond annotations, but doesn't cover aspects like error handling, performance, or what happens if elements already overlap. No contradiction with annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is highly concise and front-loaded: two sentences that directly state the core functionality and key constraints. Every sentence adds value—first defines the layout behavior, second specifies anchor and prerequisite—with zero wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (layout operation with 3 parameters), rich annotations, 100% schema coverage, and presence of an output schema, the description is reasonably complete. It covers the main behavior and constraints, though it could benefit from more detail on error cases or interactions with sibling tools. The output schema likely handles return values, reducing the burden on the description.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so parameters are well-documented in the schema. The description adds minimal param semantics: it mentions 'room_url from add_elements' as a requirement, implying a dependency, but doesn't elaborate on parameter interactions or usage beyond what the schema provides. Baseline 3 is appropriate given high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Lay out elements sequentially without overlapping, with a 10% gap between each pair.' It specifies the verb ('lay out'), resource ('elements'), and key behavior (non-overlapping with 10% gap). However, it doesn't explicitly distinguish this from sibling tools like 'align_elements' or 'group_elements', which might have similar layout functions.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides some usage context: 'First element stays as anchor, others are placed after it. Requires room_url from add_elements.' This implies a prerequisite (room_url from add_elements) and an anchor behavior, but it doesn't explicitly state when to use this tool versus alternatives like 'align_elements' or 'group_elements', nor does it mention exclusions or specific scenarios.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_guide (Grade: A, Read-only)

⚠️ MANDATORY FIRST STEP - Call this tool BEFORE using any other Canvs tools! Returns comprehensive instructions for creating whiteboards: tool selection strategy, iterative workflow, and examples. Following these instructions ensures correct diagrams.

Parameters (JSON Schema)
No parameters.

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The annotations already indicate read-only, non-destructive, and closed-world behavior. The description adds valuable context beyond annotations: it emphasizes the mandatory nature of calling this tool first and that following instructions 'ensures correct diagrams.' This provides operational guidance that annotations don't cover, though it doesn't detail specific behavioral traits like rate limits or auth needs.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is highly concise and well-structured: it uses a warning symbol and bold text for the mandatory step, then clearly states the return value and benefit in two sentences. Every sentence earns its place by providing critical information without redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (a setup/instruction tool with no parameters), the description is complete. It explains the purpose, mandatory usage, and expected output (instructions for whiteboard creation). With annotations covering safety aspects and no output schema needed for an instruction-returning tool, the description provides all necessary context for an agent to use it correctly.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 0 parameters with 100% coverage, so no parameter documentation is needed. The description appropriately doesn't discuss parameters, focusing instead on the tool's purpose and usage. A baseline of 4 is applied since no parameters exist, and the description doesn't attempt to explain non-existent parameters.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description explicitly states the tool's purpose: 'Returns comprehensive instructions for creating whiteboards: tool selection strategy, iterative workflow, and examples.' It specifies the verb ('Returns comprehensive instructions'), resource ('creating whiteboards'), and scope ('tool selection strategy, iterative workflow, and examples'), clearly distinguishing it from sibling tools that perform operations on whiteboard elements.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit usage guidelines: '⚠️ MANDATORY FIRST STEP - Call this tool BEFORE using any other Canvs tools!' It clearly states when to use this tool (as a first step before any other tools) and implies when not to use it (after other tools have been called without it). This directly addresses the context of sibling tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_image (Grade: A, Read-only)

Get PNG screenshot of the canvas from the browser. If no browser has the canvas open, returns an error — ask the user to open the canvas URL in their browser and retry.

Parameters (JSON Schema)
room_url (required): Room URL (from create_canvas) or room ID to operate on.
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate read-only, non-destructive, and closed-world behavior, but the description adds valuable context beyond this: it specifies that the tool requires an open browser with the canvas, returns an error if not available, and outputs a PNG format screenshot. This enhances understanding of operational constraints without contradicting annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose in the first sentence, followed by a critical precondition in the second. Both sentences are essential—the first defines the action, and the second provides crucial error handling guidance—with no wasted words or redundant information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (single parameter, no output schema), the description is complete: it explains what the tool does, when to use it, key behavioral constraints, and error handling. Combined with annotations covering safety and world hints, this provides sufficient context for an agent to invoke the tool correctly.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the input schema fully documents the single required parameter 'room_url'. The description does not add further parameter details beyond what the schema provides, such as format examples or usage nuances, so it meets the baseline for adequate but not enhanced parameter semantics.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Get PNG screenshot') and resource ('canvas from the browser'), distinguishing it from sibling tools that manipulate elements rather than capture visual output. It precisely defines what the tool does without being vague or tautological.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly states when to use this tool (to get a screenshot) and when not to use it (if no browser has the canvas open, it returns an error). It provides clear alternative action guidance ('ask the user to open the canvas URL in their browser and retry'), making usage context unambiguous.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

group_elements (Grade: A)

Group elements on a canvas. Requires room_url from add_elements.

Parameters (JSON Schema)
room_url (required): Room URL (from create_canvas) or room ID to operate on.
elementIds (required): Array of element IDs to group.

Output Schema (JSON Schema)
url (optional): Canvas room URL.
isNew (optional): Whether a new canvas was created.
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations indicate this is a non-read-only, non-destructive, closed-world operation. The description adds minimal behavioral context by noting the room_url requirement, but doesn't elaborate on effects like whether grouping is reversible, how it affects element properties, or any rate limits. With annotations covering basic safety, the description provides some value but lacks rich behavioral details.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise—two short sentences with zero wasted words. It front-loads the core action and efficiently states the prerequisite, making it easy to parse and understand quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (grouping elements), with annotations covering safety aspects and an output schema present (so return values needn't be described), the description is reasonably complete. It specifies the action and a key prerequisite, though it could better explain the grouping concept or relationship to other tools for full completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with clear documentation for both parameters. The description mentions 'room_url from add_elements', adding a semantic constraint about its source, but doesn't explain 'elementIds' beyond what the schema provides. This meets the baseline for high schema coverage without significant added value.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Group elements') and resource ('on a canvas'), making the purpose understandable. However, it doesn't explicitly differentiate this tool from its sibling 'ungroup_elements' or explain what grouping entails in this context, preventing a perfect score.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides some guidance by mentioning a prerequisite ('Requires room_url from add_elements'), which implies a sequence of operations. However, it doesn't specify when to use this tool versus alternatives like 'ungroup_elements' or how grouping relates to other canvas manipulation tools, leaving usage context partially implied rather than explicit.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

lock_elements (Grade B)

Lock elements on a canvas. Requires room_url from add_elements.

Parameters (JSON Schema)
- room_url (required): Room URL (from create_canvas) or room ID to operate on
- elementIds (required): Array of element IDs to lock

Output Schema

- url (optional): Canvas room URL
- isNew (optional): Whether a new canvas was created
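Since lock_elements and unlock_elements take the same payload shape, a sketch of the paired calls illustrates the round trip. All IDs and the room URL are hypothetical placeholders.

```python
# Hypothetical lock_elements arguments and the matching unlock_elements
# call that reverses them; values are placeholders.
ids = ["note-1", "note-2"]

lock_args = {
    "room_url": "https://example.com/rooms/abc123",  # returned by add_elements
    "elementIds": ids,                               # elements to protect from edits
}

# unlock_elements accepts the identical shape to undo the lock.
unlock_args = dict(lock_args)

assert unlock_args["elementIds"] == lock_args["elementIds"]
```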
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate this is a non-destructive, non-read-only operation without open-world assumptions. The description adds minimal context about the 'room_url' requirement but doesn't disclose behavioral traits like what 'lock' means (e.g., prevents editing, visibility changes), whether it's reversible only via 'unlock_elements', or any rate limits. With annotations covering basic safety, it adds some value but not rich behavioral details.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is very brief (two short phrases) and front-loaded with the main action, making it efficient. However, the second phrase about 'room_url' could be integrated more smoothly, and it lacks structural elements like bullet points or examples that might enhance clarity without adding bulk.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has annotations and an output schema (implied by 'Has output schema: true'), the description doesn't need to cover return values or basic safety. However, for a mutation tool with siblings like 'unlock_elements' and 'update_elements', it's incomplete in explaining locking behavior, prerequisites beyond the schema, and differentiation from alternatives. It meets a minimum viable level but has clear gaps in context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so parameters are fully documented in the schema. The description adds no additional meaning beyond implying 'room_url' comes from 'add_elements', which is redundant with the schema's mention of 'create_canvas'. No syntax, format, or usage details are provided beyond what the schema already states.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Lock elements') and resource ('on a canvas'), making the purpose immediately understandable. However, it doesn't differentiate this tool from its sibling 'unlock_elements' beyond the obvious opposite action, missing an opportunity to clarify when to choose one over the other.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description mentions a prerequisite ('Requires room_url from add_elements') but provides no guidance on when to use this tool versus alternatives like 'update_elements' (which might include locking functionality) or 'unlock_elements'. It lacks explicit when/when-not scenarios or comparisons with sibling tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

query_elements (Grade A, read-only)

Query elements on a canvas. Requires room_url from add_elements. Returns elements matching optional filters. If no browser has the canvas open, returns an error — ask the user to open the canvas URL in their browser and retry.

Parameters (JSON Schema)
- type (optional): Filter by element type
- filter (optional): Additional key-value filters (e.g., {locked: true})
- room_url (required): Room URL (from create_canvas) or room ID to operate on
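A sketch of the argument payload for a filtered query, following the schema's own key-value filter example ({locked: true}). The room URL and type value are hypothetical placeholders.

```python
# Hypothetical query_elements arguments; values are placeholders.
query_args = {
    "room_url": "https://example.com/rooms/abc123",  # returned by add_elements
    "type": "rectangle",          # optional: restrict results to one element type
    "filter": {"locked": True},   # optional: key-value filters, per the schema example
}

# Optional keys can simply be omitted to match every element in the room.
query_all = {"room_url": query_args["room_url"]}
```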
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The annotations already declare readOnlyHint=true, destructiveHint=false, and openWorldHint=false, covering safety and scope. The description adds valuable behavioral context beyond this: it specifies that the tool requires a browser to have the canvas open, returns an error otherwise, and mentions that results match optional filters. This provides practical implementation details that annotations don't cover.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured in four sentences: it states the purpose, specifies the prerequisite, describes what is returned, and provides error-handling guidance. Every sentence adds essential information without redundancy, making it front-loaded and appropriately sized for the tool's complexity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity, rich annotations, and 100% schema coverage, the description is mostly complete. It covers purpose, prerequisites, error conditions, and basic behavior. However, without an output schema, it doesn't detail return values (e.g., format of matched elements), leaving a minor gap in completeness for a query tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the input schema already documents all three parameters thoroughly. The description adds minimal parameter semantics beyond the schema—it only mentions that filters are 'optional' and references 'room_url from add_elements'. This meets the baseline for high schema coverage but doesn't significantly enhance understanding of parameter usage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'query' and resource 'elements on a canvas', making the purpose understandable. However, it doesn't explicitly differentiate from sibling tools like 'get_image' or 'get_guide' that might also retrieve canvas content, leaving room for ambiguity about when to choose this specific query tool.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for usage: it specifies that 'room_url from add_elements' is required and warns about the need for an open browser canvas. It also mentions optional filters. However, it doesn't explicitly state when to use this tool versus alternatives like 'get_image' or 'get_guide', nor does it provide exclusion criteria for when not to use it.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

ungroup_elements (Grade A)

Ungroup elements on a canvas. Pass element IDs — all groups containing those elements will be removed. Requires room_url from add_elements.

Parameters (JSON Schema)
- room_url (required): Room URL (from create_canvas) or room ID to operate on
- elementIds (required): Array of element IDs to ungroup

Output Schema

- url (optional): Canvas room URL
- isNew (optional): Whether a new canvas was created

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations indicate it is not read-only, not open-world, and not destructive, but the description adds useful context: it specifies that 'all groups containing those elements will be removed,' clarifying the scope of the operation. However, it lacks details on permissions, rate limits, or error handling, so it adds some value beyond annotations but not comprehensive behavioral traits.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is three sentences with zero waste: the first states the action, the second clarifies the scope of the operation, and the third provides a prerequisite. It is front-loaded and efficiently conveys essential information without unnecessary details, making it highly concise and well-structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity, annotations covering safety, and an output schema (which handles return values), the description is mostly complete. It includes purpose, parameters, and a prerequisite, but could improve by mentioning side effects or confirming non-destructive behavior, though annotations partially compensate for this gap.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema fully documents the parameters. The description adds marginal value by linking 'room_url' to 'add_elements' and explaining that 'elementIds' are for ungrouping, but it does not provide additional syntax or format details beyond what the schema already specifies, meeting the baseline for high coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Ungroup elements') and target ('on a canvas'), specifying that it removes groups containing the provided element IDs. It distinguishes from siblings like 'group_elements' by describing the inverse operation, making the purpose specific and well-defined.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

It provides clear context by mentioning the prerequisite 'Requires room_url from add_elements,' which guides when to use this tool. However, it does not explicitly state when not to use it or name alternatives, such as using 'update_elements' for other modifications, leaving some guidance implicit.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

unlock_elements (Grade A)

Unlock elements on a canvas. Requires room_url from add_elements.

Parameters (JSON Schema)
- room_url (required): Room URL (from create_canvas) or room ID to operate on
- elementIds (required): Array of element IDs to unlock

Output Schema

- url (optional): Canvas room URL
- isNew (optional): Whether a new canvas was created

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare this is not read-only, not open-world, and not destructive, so the agent knows this is a safe mutation operation. The description adds useful context about the prerequisite ('Requires room_url from add_elements'), which isn't captured in annotations. However, it doesn't describe what 'unlock' means behaviorally (e.g., does it allow editing, moving, or deleting?), potential side effects, or any permission/rate limit considerations that annotations don't cover.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise (two short sentences) with zero wasted words. The first sentence states the core purpose, and the second provides the only necessary contextual prerequisite. Every sentence earns its place, and the information is front-loaded appropriately.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given that there's an output schema (which handles return values), annotations cover safety aspects, and schema coverage is complete, the description provides adequate context for this tool. The prerequisite information about room_url provenance is helpful, and the purpose is clear. However, for a mutation tool that changes element state, some additional context about what 'unlocked' means in this system would be beneficial but isn't critical given the structured data available.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so parameters are fully documented in the schema itself. The description mentions 'room_url from add_elements' which adds minor contextual meaning about parameter provenance, but doesn't provide additional semantic details about what 'unlock' means for the elements or how elementIds should be identified. This meets the baseline for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Unlock') and target resource ('elements on a canvas'), making the purpose immediately understandable. It distinguishes from sibling tools like 'lock_elements' by specifying the opposite operation. However, it doesn't explicitly differentiate from other canvas manipulation tools like 'update_elements' or 'delete_elements' beyond the unlocking action.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides one contextual prerequisite ('Requires room_url from add_elements'), which gives some guidance about when this tool might be used in sequence. However, it doesn't explain when to choose this tool versus alternatives like 'update_elements' (which might also affect locking state) or clarify scenarios where unlocking is appropriate versus leaving elements locked. No explicit exclusions or comparisons to sibling tools are provided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

update_elements (Grade A)

Update elements on a canvas. Requires room_url from add_elements. Pass elements array with id and fields to update.

Parameters (JSON Schema)
- elements (required): Array of element updates (each must have id)
- room_url (required): Room URL (from create_canvas) or room ID to operate on

Output Schema

- url (optional): Canvas room URL
- isNew (optional): Whether a new canvas was created
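A sketch of the partial-update payload the description implies: each entry carries an id plus only the fields being changed. The IDs and the text field name are hypothetical placeholders; backgroundColor is a field the add_elements description confirms exists.

```python
# Hypothetical update_elements arguments; IDs and field values are placeholders.
update_args = {
    "room_url": "https://example.com/rooms/abc123",  # returned by add_elements
    "elements": [
        {"id": "rect-1", "backgroundColor": "#ffcc00"},  # recolor one shape
        {"id": "label-1", "text": "Revised title"},      # assumed field name for a text element
    ],
}

# Every update entry must carry an id, per the schema.
assert all("id" in entry for entry in update_args["elements"])
```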
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations indicate this is a non-destructive, non-read-only operation, which the description aligns with by implying mutation ('Update'). The description adds minimal behavioral context beyond annotations, such as the requirement for 'room_url from add_elements,' but lacks details on rate limits, authentication needs, or error handling. No contradiction with annotations is present.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise, with three short sentences that efficiently convey key information: the action, a prerequisite, and a parameter hint. It's front-loaded with the core purpose, though it could be slightly more structured by separating usage notes from parameter hints.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (mutation with multiple field options), the description is reasonably complete. It covers the basic purpose and a prerequisite, and with annotations providing safety context and an output schema likely detailing return values, the description doesn't need to explain behavior or outputs extensively. Minor gaps remain in sibling differentiation and advanced usage scenarios.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the input schema fully documents both parameters ('room_url' and 'elements'). The description adds marginal value by clarifying that 'elements' must include an 'id' and 'fields to update,' but this is largely redundant with the schema's requirements. Baseline score of 3 is appropriate as the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Update elements') and resource ('on a canvas'), making the purpose immediately understandable. However, it doesn't explicitly differentiate this tool from its siblings like 'align_elements' or 'distribute_elements' which also modify elements, missing an opportunity for clearer distinction.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides some usage context by mentioning 'Requires room_url from add_elements,' which hints at a prerequisite relationship. However, it doesn't explain when to use this versus alternatives like 'align_elements' or 'lock_elements,' nor does it specify exclusions or edge cases for usage.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
