cnvs.app
Server Details
Zero-auth real-time collaborative whiteboard with MCP — AI agents + humans edit the same board live.
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
- Repository: lksrz/cnvs-whiteboard-skills
- GitHub Stars: 0
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4.5/5 across 10 of 10 tools scored.
Each tool has a clearly distinct purpose with no overlap: add_image, add_link, and add_text handle different content types; draw_stroke creates freehand lines; erase deletes items; get_board and get_preview provide different data views; move repositions items; open_board initializes access; wait_for_update enables collaboration. The descriptions explicitly differentiate use cases, preventing misselection.
All tool names follow a consistent verb_noun pattern in snake_case: add_image, add_link, add_text, draw_stroke, erase, get_board, get_preview, move, open_board, wait_for_update. This uniformity makes the tool set predictable and easy to understand, with no deviations in naming style.
With 10 tools, the count is well-scoped for a collaborative whiteboard server. It covers core operations like creating, reading, updating, and deleting various board elements, plus utilities for initialization and real-time updates, without being overly sparse or bloated.
The tool set provides complete CRUD/lifecycle coverage for a whiteboard domain: add_* tools for creation, get_* for reading, move for updating, erase for deletion, open_board for initialization, and wait_for_update for collaboration. There are no obvious gaps, and the descriptions guide effective workflows.
Available Tools
10 tools

add_image
Place a raster or SVG image on the board at (x, y) with explicit width/height in board pixels. data_url MUST be a data:image/(png|jpeg|gif|webp|svg+xml);base64,... string ≤ ~900 kB; hosted URLs are not accepted. Strongly recommended: also pass a tiny thumb_data_url (≤8 kB JPEG/PNG/WebP, ~64 px on the long edge) — it is embedded into the SVG preview so OTHER AI viewers (and you, on later get_preview calls) can actually see the image instead of a placeholder box.
| Name | Required | Description | Default |
|---|---|---|---|
| x | Yes | | |
| y | Yes | | |
| id | No | | |
| width | Yes | | |
| author | No | | |
| height | Yes | | |
| board_id | Yes | | |
| data_url | Yes | | |
| thumb_data_url | No | | |
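As a sketch of what a well-formed call might carry (the `board_id`, coordinates, and 1×1 placeholder PNG are all invented for illustration), a payload that respects the documented `data:` URI format and size cap could be assembled like this:

```python
import base64

# A 1x1 transparent PNG stands in for real image content.
png_b64 = (
    "iVBORw0KGgoAAAANSUhEUgAAAAEAAAABCAYAAAAfFcSJ"
    "AAAADUlEQVR42mP8z8BQDwAEhQGAhKmMIQAAAABJRU5ErkJggg=="
)
data_url = "data:image/png;base64," + png_b64

# Hypothetical payload; board_id and placement values are invented.
payload = {
    "board_id": "demo-board",
    "x": 100, "y": 200,      # board pixels, origin top-left
    "width": 64, "height": 64,
    "data_url": data_url,
    # thumb_data_url would carry a ~64 px thumbnail so previews stay visible
}

# Stay under the documented ~900 kB cap and use an accepted MIME type.
assert len(data_url.encode()) <= 900_000
assert data_url.startswith("data:image/png;base64,")
```

In practice the thumbnail is worth generating up front: per the description, without `thumb_data_url` other AI viewers see only a placeholder box in the SVG preview.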
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden and does so effectively. It discloses critical behavioral traits: the tool mutates the board (implied by 'Place'), has strict input constraints (data URL format, size limits), and explains side effects (thumbnails affect preview visibility). It also hints at performance considerations with size recommendations. No contradictions exist.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized and front-loaded: the first sentence states the core purpose, followed by critical constraints and recommendations. Every sentence adds essential value—no wasted words—and the structure flows logically from requirements to optimizations.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity (9 parameters, mutation tool, no annotations or output schema), the description is largely complete. It covers key behavioral aspects, parameter semantics for critical inputs, and usage context. However, it does not fully address all parameters or potential error cases, and the lack of output schema means return values are undocumented, leaving minor gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate. It adds significant meaning beyond the schema: it explains the purpose of data_url (must be a specific base64 format with size limits) and thumb_data_url (recommended for previews with size/format details), and implies x/y/width/height are in board pixels. However, it does not cover all 9 parameters (e.g., id, author, board_id semantics), preventing a perfect score.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Place a raster or SVG image on the board') with precise resource details (board, image types) and distinguishes it from siblings like add_text or draw_stroke by focusing on image placement. It goes beyond a simple verb+resource by specifying the coordinate system and pixel-based dimensions.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use this tool (placing images with specific data formats and size constraints) and implies alternatives by mentioning 'hosted URLs are not accepted' and referencing other tools like get_preview. However, it does not explicitly state when NOT to use it compared to siblings like add_link or draw_stroke, which would elevate it to a 5.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
add_link
Drop a URL capsule onto the board — rendered as a clickable pill showing the hostname. Use this instead of add_text when the node is just a link; the capsule styling signals clickability to humans. Same coordinate rules as add_text (+x right, +y down).
| Name | Required | Description | Default |
|---|---|---|---|
| x | Yes | | |
| y | Yes | | |
| url | Yes | | |
| author | No | | |
| board_id | Yes | | |
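A minimal sketch of a call payload (the `board_id`, coordinates, and URL are invented) — the hostname is what the capsule pill would display to humans:

```python
from urllib.parse import urlparse

# Hypothetical payload; board_id and coordinates are invented.
payload = {
    "board_id": "demo-board",
    "x": 40, "y": 300,  # same coordinate rules as add_text: +x right, +y down
    "url": "https://example.com/docs/getting-started",
}

# The capsule renders the hostname, so this is what appears on the pill.
hostname = urlparse(payload["url"]).hostname
```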
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It describes the visual rendering ('clickable pill showing the hostname') and coordinate rules, but lacks details on permissions, error handling, or what happens if the URL is invalid. For a mutation tool with zero annotation coverage, this leaves behavioral gaps.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is highly concise and well-structured: two sentences that front-load the core purpose and usage guideline, with no wasted words. Every sentence adds clear value, making it efficient and easy to parse.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (mutation with 5 parameters), no annotations, and no output schema, the description is incomplete. It covers purpose and usage well but lacks details on parameters, error cases, and return values. It's adequate for basic use but has clear gaps for reliable agent operation.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate. It mentions 'Same coordinate rules as `add_text` (+x right, +y down)', which hints at `x` and `y` parameters, and 'URL capsule' implies `url`, but doesn't explain `board_id` or `author`. With 5 parameters and no schema descriptions, the added value is insufficient.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Drop a URL capsule onto the board — rendered as a clickable pill showing the hostname.' It specifies the verb ('Drop'), resource ('URL capsule'), and visual outcome ('clickable pill'), and distinguishes it from sibling `add_text` by explaining when to use this tool instead.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly provides usage guidelines: 'Use this instead of `add_text` when the node is just a link.' It names the alternative tool (`add_text`) and specifies the condition ('when the node is just a link'), giving clear guidance on when to choose this tool over its sibling.
add_text
Create a NEW text node, or update an existing one (pass the same id to overwrite content/position in place — preferred over creating a duplicate). Supports cnvs markup (Markdown-ish) and Mermaid diagrams in the content. When using Mermaid, the ENTIRE content of this text node must be a single Mermaid diagram (one ```mermaid fenced block and nothing else — no heading, no prose before or after). If you need prose + a diagram, create two separate text nodes. postit: true renders as a yellow sticky. Coordinates are in board-world pixels, +x right, +y DOWN; pick a spot that does not overlap existing items (check get_preview first). Default width auto-fits content up to ~320 px; pass width for explicit wrapping (160–4096). Keep content under 100 000 chars.
| Name | Required | Description | Default |
|---|---|---|---|
| x | Yes | | |
| y | Yes | | |
| id | No | Optional stable id; generated when omitted. | |
| color | No | CSS color. Defaults to theme text color. | |
| width | No | Explicit width in px (160–4096). | |
| author | No | Author tag, defaults to ai:claude. | |
| postit | No | | |
| content | Yes | | |
| board_id | Yes | | |
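The Mermaid rule is the easiest constraint to violate, so here is a hedged sketch (the `board_id`, coordinates, and diagram are invented) of building a content string that satisfies it — one fenced mermaid block and nothing else:

```python
# Hypothetical payload for a Mermaid node; per the description the ENTIRE
# content must be a single fenced mermaid block with nothing around it.
fence = "`" * 3  # builds the backtick fence without embedding one literally
content = f"{fence}mermaid\nflowchart LR\n  A --> B\n{fence}"

payload = {
    "board_id": "demo-board",  # invented id
    "x": 400, "y": 0,           # +y is DOWN; check get_preview for free space first
    "content": content,
}

assert content.startswith(fence + "mermaid") and content.endswith(fence)
assert len(content) < 100_000  # documented content cap
```

If prose must accompany the diagram, the description says to create two separate text nodes rather than mixing them in one.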
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden and does so well. It discloses key behavioral traits: the tool can create or update text nodes, supports specific markup (cnvs/Mermaid), has constraints for Mermaid usage, includes rendering options (postit), coordinate system details, and content limits. No contradictions exist.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized and front-loaded, starting with the core purpose. Every sentence adds value, such as markup support, Mermaid rules, rendering details, coordinate guidance, and parameter tips, with zero wasted words or redundancy.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity of 9 parameters, no annotations, and no output schema, the description is largely complete. It covers purpose, usage, behavioral details, and parameter semantics effectively. A minor gap is the lack of explicit error handling or response format details, but overall it provides strong contextual understanding.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is low at 44%, but the description compensates by adding meaning for parameters: it explains the id parameter's role in updates, content constraints for Mermaid, postit rendering effect, coordinate system interpretation, width ranges and defaults, and content length limits. This significantly enhances understanding beyond the basic schema.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Create a NEW text node, or update an existing one.' It specifies the resource (text node) and distinguishes it from siblings like add_image, add_link, and draw_stroke by focusing on text content with markup support.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use this tool, such as preferring to update with the same ID over creating duplicates and checking get_preview first to avoid overlaps. However, it does not explicitly state when NOT to use it or name alternatives for specific scenarios beyond the Mermaid diagram constraint.
draw_stroke
Draw a freehand stroke on the board. Use for arrows, underlines, connector lines, annotations, or simple shapes — a straight line needs two points, a rough circle wants ~20. Stroke width is fixed at 3 px; color accepts any CSS color (e.g. '#ff0000', 'var(--text-color)'). Accepts three equivalent point formats — pick whichever your MCP client serialises cleanly: nested [[x,y],[x,y],...], flat [x1,y1,x2,y2,...], or a JSON string of either. Some clients (Claude Code as of 2026-04) drop nested arrays during tool-call serialisation, so prefer the flat form or the JSON-string form when in doubt. To delete a stroke later, use erase with kind: 'line' and the id returned here.
| Name | Required | Description | Default |
|---|---|---|---|
| id | No | | |
| color | No | | |
| author | No | | |
| points | Yes | Points as [[x,y],...], flat [x1,y1,x2,y2,...], or a JSON string of either. | |
| board_id | Yes | | |
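A short sketch of preparing the recommended flat point form (the `board_id` and coordinates are invented) — flattening a nested list sidesteps the client serialisation issue the description warns about:

```python
# Hypothetical payload; board_id is invented. A straight line needs two points.
points_nested = [[10, 10], [200, 10]]

# Flatten to [x1, y1, x2, y2, ...] — the form the description recommends for
# clients that drop nested arrays during serialisation.
points_flat = [coord for point in points_nested for coord in point]

payload = {
    "board_id": "demo-board",
    "points": points_flat,
    "color": "#ff0000",  # any CSS color is accepted; width is fixed at 3 px
}
```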
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden and does well, disclosing key behaviors: stroke width is fixed at 3 px, color accepts CSS colors, three point formats are accepted with client-specific recommendations, and deletion requires a separate tool. It doesn't cover error conditions or rate limits, but provides substantial operational context.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose, followed by usage details, parameter semantics, and operational advice. Every sentence adds value—no fluff or repetition—making it efficiently structured and appropriately sized for the tool's complexity.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no annotations and no output schema, the description does an excellent job covering purpose, usage, parameters, and behavioral context. It lacks details on return values (e.g., the id format for deletion) and error handling, but is largely complete for a drawing tool with multiple parameters.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is low (20%), but the description compensates fully by explaining 'color' accepts CSS colors with examples, 'points' has three equivalent formats with serialization advice, and it implies 'board_id' identifies the target board. It adds meaning beyond the minimal schema descriptions.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb ('Draw') and resource ('a freehand stroke on the board'), with specific examples of use cases (arrows, underlines, connector lines, annotations, simple shapes). It distinguishes from siblings by mentioning deletion requires the 'erase' tool, not this one.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicit guidance is provided on when to use this tool (for freehand strokes) versus alternatives (e.g., 'erase' for deletion, with specific parameters). It also advises on point format selection based on client serialization issues, offering clear alternatives.
erase
Delete a single item by id. kind MUST match the item type: 'text' for text nodes, 'line' for freehand strokes, 'image' for images — the wrong kind silently targets the wrong table and is a common mistake. Get the id + type from get_board (texts[], lines[], images[]). There is no bulk/erase-all tool: loop if you need to delete multiple items.
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | | |
| kind | Yes | | |
| board_id | Yes | | |
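Since a mismatched `kind` silently targets the wrong table, it can help to encode the documented array-to-kind correspondence explicitly. A hedged sketch (the ids and `board_id` are invented), including the loop the description prescribes in place of a bulk delete:

```python
# kind must match the item's source array in get_board, or the wrong table
# is silently targeted; this mapping encodes the documented correspondence.
KIND_FOR_ARRAY = {"texts": "text", "lines": "line", "images": "image"}

# Hypothetical values; in practice read id + type from get_board.
payload = {
    "board_id": "demo-board",
    "id": "stroke-42",
    "kind": KIND_FOR_ARRAY["lines"],  # deleting a freehand stroke
}

# No bulk delete exists, so removing several items means one call per id:
ids_to_delete = ["stroke-42", "stroke-43"]
payloads = [dict(payload, id=item_id) for item_id in ids_to_delete]
```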
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes the destructive nature ('Delete'), warns about a common mistake (wrong 'kind' silently targets the wrong table), and clarifies the scope (single-item deletion only, no bulk tool). However, it doesn't mention potential side effects like error handling or confirmation prompts, leaving some behavioral aspects unspecified.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose, followed by critical warnings and usage notes. Every sentence earns its place: the first states the action, the second warns about a key pitfall, the third provides sourcing guidance, and the fourth clarifies limitations. There is no redundant or verbose content.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (destructive operation with 3 parameters, no annotations, no output schema), the description is largely complete. It covers purpose, parameters, usage, and limitations effectively. However, it doesn't describe the return value or error behavior, which could be useful given the destructive nature. For a deletion tool, this is a minor gap.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema description coverage is 0%, so the description must compensate fully. It adds crucial semantic meaning beyond the bare schema: it explains that 'kind' MUST match the item type (text, line, image) and that mismatches cause silent errors. It also clarifies that 'id' and 'type' should be sourced from 'get_board', providing context not in the schema. All three parameters are meaningfully addressed.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Delete a single item by id') and resource ('item'), distinguishing it from sibling tools like 'add_image', 'add_text', or 'draw_stroke' which are creation tools. It explicitly mentions the tool's singular focus on deletion, avoiding any confusion with bulk operations.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool: for deleting single items by id, with 'kind' matching the item type. It also specifies when not to use it (no bulk/erase-all tool) and offers an alternative approach (loop if deleting multiple items). It references 'get_board' as the source for obtaining the required id and type, clarifying prerequisites.
get_board
Full structured JSON state of a board: texts (id, x, y, content, color, width, postit, author), strokes (id, points, color, author), images (id, x, y, width, height, dataUrl, thumbDataUrl, author; heavy base64 >8 kB elided to dataUrl:null, tiny images inlined). Use this for EXACT ids/coordinates/content (needed for move, erase, editing a text by id). For visual layout (where is empty space? what overlaps?) call get_preview instead — it's much cheaper for spatial reasoning than a huge JSON dump.
| Name | Required | Description | Default |
|---|---|---|---|
| board_id | Yes | | |
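A hedged sketch of consuming the response. The shape below is inferred from the description; note the description names the stroke array `strokes` while `erase` refers to `lines[]`, so the array names and any extra fields here are assumptions:

```python
# Hypothetical get_board response shape; field names are assumptions
# inferred from the tool descriptions, not a confirmed schema.
board = {
    "texts":  [{"id": "t1", "x": 10, "y": 20, "content": "hello"}],
    "lines":  [{"id": "l1", "points": [[0, 0], [50, 50]], "color": "#000"}],
    "images": [{"id": "i1", "x": 5, "y": 5, "width": 64, "height": 64,
                "dataUrl": None,  # heavy base64 > 8 kB is elided to null
                "thumbDataUrl": "data:image/png;base64,..."}],
}

# Collect (id, kind) pairs in the form erase and move expect.
items = (
    [(t["id"], "text") for t in board["texts"]]
    + [(l["id"], "line") for l in board["lines"]]
    + [(i["id"], "image") for i in board["images"]]
)
```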
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden and does well. It discloses that this returns 'heavy base64 >8 kB elided to dataUrl:null' and 'tiny images inlined', which are important behavioral traits not inferable from the schema. It also warns about performance ('huge JSON dump') and cost implications compared to get_preview. It doesn't mention error conditions or authentication requirements, keeping it from a perfect score.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured: first sentence states the core purpose and components, second sentence provides usage guidance with specific examples, third sentence gives the alternative with rationale. Every sentence adds value, with no redundant information, and it's front-loaded with the most important information.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with no annotations, no output schema, and simple input schema, the description provides excellent context about what data is returned, performance characteristics, and when to use vs. alternatives. It could be more complete by mentioning potential error cases (e.g., invalid board_id) or authentication requirements, but it covers the essential operational context well.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage and only one parameter (board_id), the description doesn't explicitly discuss parameters. However, the context makes it clear this tool operates on a specific board, and the parameter's purpose is implied through the tool's purpose. Since there's only one parameter and the schema is simple, the description adequately compensates by fully explaining what the tool returns.
Does the description clearly state what the tool does and how it differs from similar tools?
The description explicitly states the tool retrieves 'Full structured JSON state of a board' and lists specific components (texts, strokes, images). It clearly distinguishes from sibling 'get_preview' by specifying this is for exact data needed for operations like 'move', 'erase', and editing by id, rather than visual layout analysis.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool ('for EXACT ids/coordinates/content needed for `move`, `erase`, editing a text by id') and when not to use it ('For visual layout... call `get_preview` instead'). It names the specific alternative tool and explains the trade-off ('much cheaper for spatial reasoning than a huge JSON dump').
get_preview
Compact schematic SVG render of the board (typically a few kB even for dense boards). Returns both an image/svg+xml content block (you can SEE it) and the raw SVG text. CALL THIS any time you need to understand where things are — before placing new items, before deciding whether the canvas is crowded, before picking a free region. AI-authored items get a purple border so you can tell which contributions were yours. For precise text content prefer get_board.
| Name | Required | Description | Default |
|---|---|---|---|
| board_id | Yes | | |
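Because the raw SVG text comes back alongside the image block, a client can also reason over it programmatically. A minimal sketch (the SVG string and its dimensions are stand-ins, not real output) of reading the render's extent to pick a placement region:

```python
import xml.etree.ElementTree as ET

# Stand-in for the raw SVG text a real call would return alongside the
# image/svg+xml content block; the attributes here are invented.
raw_svg = '<svg xmlns="http://www.w3.org/2000/svg" width="800" height="600"></svg>'

root = ET.fromstring(raw_svg)
width, height = int(root.get("width")), int(root.get("height"))

# One way to pick a candidate free region before placing a new item:
# aim at the bottom-right quadrant of the rendered area.
x, y = width * 3 // 4, height * 3 // 4
```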
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden and does well, disclosing key behavioral traits: the output format (image/svg+xml content block + raw SVG text), typical size characteristics ('a few kB even for dense boards'), and that AI-authored items get a purple border for identification. It doesn't mention error conditions or performance characteristics, but provides substantial operational context.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured with zero wasted words. The first sentence establishes core functionality, subsequent sentences provide usage guidance and differentiation from alternatives. Every sentence earns its place by adding distinct value about output format, use cases, or tool comparison.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (visual rendering with AI-authored item highlighting), no annotations, and no output schema, the description does well by explaining the dual output format, typical size, visual characteristics, and use cases. It could benefit from mentioning error conditions or response structure, but provides substantial context for effective tool selection and use.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage and only 1 parameter (board_id), the description doesn't explicitly mention or explain the board_id parameter. However, the context implies it's needed to identify which board to render. Since there's only one parameter and the schema is simple, the baseline of 3 is appropriate as the description doesn't add parameter details but the minimal parameter set is manageable.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('returns', 'render') and resource ('board'), distinguishing it from sibling get_board by specifying it returns a schematic SVG render rather than precise text content. It explicitly mentions what the tool does: returns both an image content block and raw SVG text.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool ('any time you need to understand where things are — before placing new items, before deciding whether the canvas is crowded, before picking a free region') and when to use an alternative ('For precise text content prefer get_board'). It clearly differentiates use cases from the sibling tool.
move
Reposition an existing item to a new (x, y) without retyping its content. Works for every item kind: text and link set the top-left to (x, y); line translates every point so the stroke's bounding box top-left lands at (x, y); image sets the top-left like text. kind defaults to text for backward compat with older callers. Find the id + kind via get_board. Prefer move over re-creating an item when only the location changes — it preserves the id, content, author and avoids a round-trip of base64 bytes for images.
| Name | Required | Description | Default |
|---|---|---|---|
| x | Yes | | |
| y | Yes | | |
| id | Yes | | |
| kind | No | Item kind. Defaults to `text`. | |
| board_id | Yes | | |
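To make the parameter set concrete, here is a hypothetical MCP `tools/call` request for `move`, sketched as a Python dict. The board id, item id, and coordinates are placeholder values, not real data; the wire shape follows the standard JSON-RPC envelope that MCP uses for tool calls.

```python
# Hypothetical tools/call payload for the move tool.
# All argument values below are illustrative placeholders.
move_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "move",
        "arguments": {
            "board_id": "123e4567-e89b-12d3-a456-426614174000",  # from open_board
            "id": "item-42",   # item id found via get_board
            "kind": "image",   # text / link / line / image; defaults to text
            "x": 120,          # new top-left x
            "y": 80,           # new top-left y
        },
    },
}
```

Note that `kind` is the only optional argument; omitting it falls back to `text`, which matters if the item being moved is actually a line or image.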
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden and does well by disclosing key behavioral traits: it explains how different item kinds behave (text/link/image set top-left, line translates points), mentions backward compatibility (kind defaults to text), and describes benefits like preserving id/content/author and avoiding base64 round-trips for images. It doesn't cover error cases or permissions, but provides substantial operational context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized and front-loaded: the first sentence states the core purpose, subsequent sentences add necessary details about behavior and usage, and every sentence earns its place by providing distinct value (kind-specific behavior, backward compatibility, prerequisites, when to prefer over alternatives). No wasted words or redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no annotations and no output schema, the description does well for a mutation tool with 5 parameters. It explains the operation's effect, different behaviors per item kind, prerequisites, and when to use it. It doesn't describe return values or error cases, but covers most essential context for correct usage. The main gap is lack of output information, but otherwise quite complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is only 20% (only 'kind' has a description), so the description must compensate. It adds significant meaning beyond the schema: explains what x and y represent ('new (x, y)'), clarifies that id comes from get_board, details how kind affects behavior for different item types, and mentions kind defaults to text. It doesn't explain board_id, but covers most parameters well given the low schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('reposition an existing item to a new (x, y) without retyping its content') and distinguishes it from siblings by mentioning 'Prefer `move` over re-creating an item when only the location changes' and contrasting with tools like add_text, add_link, add_image, and draw_stroke that create new items. It explicitly names the resource ('item') and verb ('reposition').
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool ('Prefer `move` over re-creating an item when only the location changes') and when not to (implied: use sibling tools like add_text for creating new items). It also mentions prerequisites ('Find the id + kind via `get_board`') and distinguishes from alternatives by explaining it preserves id, content, and author compared to recreation.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
open_board
ALWAYS call this first when given a board URL or ID. Resolves the canonical board id and auto-creates the board row if it does not exist yet. Returns a summary (item counts, authors). After this, call BOTH get_preview and get_board before editing so you can see the layout visually AND know the exact ids/coordinates — do not skip get_preview, otherwise you will place new items blindly on top of existing ones.
| Name | Required | Description | Default |
|---|---|---|---|
| url_or_id | Yes | Full board URL (https://cnvs.app/#<id>) or bare UUID. | |
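Since `url_or_id` accepts either a full board URL or a bare UUID, a client might normalize the input before calling `open_board`. The helper below is an illustrative sketch, not part of the server's API; the server itself performs this resolution.

```python
def extract_board_id(url_or_id: str) -> str:
    """Illustrative normalization of open_board's url_or_id input.

    A full URL looks like https://cnvs.app/#<id>; a bare UUID
    passes through unchanged.
    """
    if "#" in url_or_id:
        return url_or_id.split("#", 1)[1]
    return url_or_id
```

Either form can be passed straight through; the sketch just shows why the parameter can be a single string rather than two separate fields.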
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key behaviors: auto-creation of missing boards, canonical ID resolution, and return format (summary with item counts and authors). It also warns about the consequence of skipping get_preview ('place new items blindly on top of existing ones'). However, it doesn't mention potential errors, rate limits, or authentication requirements.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized and front-loaded with the most important information (when to call it). Every sentence adds value: the first establishes priority, the second explains core functionality, the third describes the return, and the fourth provides critical workflow guidance. It could be slightly more concise by combining some ideas.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (auto-creation, ID resolution, workflow sequencing) and lack of annotations/output schema, the description does a good job covering essential context. It explains the tool's role in the workflow, what it returns, and critical prerequisites. However, it doesn't detail error conditions or what 'auto-creates the board row' means in practice.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents the single parameter. The description doesn't add any additional semantic information about the url_or_id parameter beyond what's in the schema. The baseline score of 3 is appropriate when the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Resolves the canonical board id and auto-creates the board row if it does not exist yet. Returns a summary (item counts, authors).' This specifies both the action (resolve ID, auto-create) and the resource (board), distinguishing it from siblings like get_board or get_preview which don't auto-create.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool: 'ALWAYS call this first when given a board URL or ID.' It also specifies alternatives: 'After this, call BOTH `get_preview` and `get_board` before editing.' This includes clear when-to-use and when-not-to-use instructions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
wait_for_update
Long-poll: blocks until the next edit lands on this board, then returns.
WHEN TO CALL THIS: if your MCP client does NOT surface notifications/resources/updated events from resources/subscribe back to the model (most chat clients do not — they receive the SSE event but don't inject it into your context), this tool is how you 'wait for the human' inside a single turn. Typical flow: you draw / write what you were asked to, then instead of ending your turn you call wait_for_update(board_id). When the human adds, moves, or erases something, the call returns and you refresh with get_preview / get_board and continue the collaboration. Great for turn-based interactions (games like tic-tac-toe, brainstorming where you respond to each sticky the user drops, sketch-and-feedback loops, etc.). If your client DOES deliver resource notifications natively, prefer resources/subscribe — it's cheaper and has no timeout ceiling.
BEHAVIOUR: resolves ~3 s after the edit burst settles (same debounce as the push notifications — this is intentional so drags and long strokes collapse into one wake-up). Returns { updated: true, timedOut: false } on a real edit, or { updated: false, timedOut: true } if nothing happened within timeout_ms. On timeout, just call it again to keep waiting; chaining calls is cheap. timeout_ms is clamped to [1000, 55000]; default 25000 (leaves headroom under typical 60 s proxy timeouts).
| Name | Required | Description | Default |
|---|---|---|---|
| board_id | Yes | | |
| timeout_ms | No | | |
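The clamping and retry behaviour described above can be sketched in a few lines. This is a client-side illustration under stated assumptions: `call_tool` is a placeholder for whatever invoke function your MCP client exposes, and the clamp mirrors the documented server-side range rather than anything the client must do itself.

```python
def clamp_timeout(timeout_ms: int) -> int:
    # Mirrors the documented server behaviour: timeout_ms is
    # clamped to [1000, 55000]; the default is 25000.
    return max(1000, min(55000, timeout_ms))

def wait_loop(call_tool, board_id: str, attempts: int = 3) -> bool:
    """Chain wait_for_update calls until an edit lands.

    call_tool is a hypothetical client invoke function. On timeout
    the result is {updated: False, timedOut: True}, and the
    description says to simply call again; chaining is cheap.
    """
    for _ in range(attempts):
        result = call_tool("wait_for_update", {"board_id": board_id})
        if result["updated"]:
            return True
    return False
```

In the tic-tac-toe style flow described above, the agent would run `wait_loop`, then refresh with `get_preview` / `get_board` once it returns `True`.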
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden and delivers comprehensive behavioral details: debounce behavior ('resolves ~3 s after the edit burst settles'), return values with both success and timeout cases, timeout clamping range [1000, 55000], default timeout value, and guidance on chaining calls. It explains the intentional design choice matching push notification behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Well-structured with clear sections (purpose, usage guidelines, behavior) but somewhat verbose at 200+ words. Every sentence earns its place by adding value, with no redundant information. It could be slightly more concise, but it maintains excellent information density and front-loads the key information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a complex long-polling tool with no annotations and no output schema, the description provides complete context: purpose, usage scenarios, behavioral details, parameter semantics, return value examples, and integration guidance. It addresses all aspects needed for an agent to understand when and how to use this tool effectively.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage, the description fully compensates by explaining both parameters: board_id is implied through context ('this board'), and timeout_ms gets detailed treatment including clamping range, default value (25000), and rationale ('leaves headroom under typical 60 s proxy timeouts'). It provides semantic context beyond what the bare schema offers.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description starts with 'Long-poll: blocks until the next edit lands on this board, then returns' - a specific verb ('blocks until') with clear resource ('this board') and outcome ('returns'). It explicitly distinguishes this tool from sibling tools like get_board or get_preview by focusing on waiting for updates rather than retrieving current state.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The 'WHEN TO CALL THIS' section provides explicit guidance: use when client doesn't surface notifications from resources/subscribe, with specific workflow examples (draw/write then call wait_for_update). It clearly states when NOT to use it ('If your client DOES deliver resource notifications natively, prefer resources/subscribe') and provides concrete alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.