Mila
Server Details
Create and manage documents, spreadsheets, and presentations from your AI assistant.
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.3/5 across 23 of 23 tools scored.
Each tool has a clearly distinct purpose targeting specific resources (documents, sheets, slides) and actions (create, get, update, delete, list, append). The descriptions explicitly differentiate between similar operations like append_rows vs. create_sheet_tab, and there are no ambiguous overlaps that would cause misselection.
Tool names follow a highly consistent verb_noun pattern throughout (e.g., create_document, get_sheet, update_slide_presentation). All tools use snake_case with clear, predictable naming that aligns with their functions, making the set easy to navigate and understand.
With 23 tools, the count is slightly high but reasonable for the server's purpose of managing documents, spreadsheets, and presentations. Each tool serves a specific role in the CRUD lifecycle, though it might feel a bit heavy compared to more streamlined servers.
The tool set provides complete CRUD/lifecycle coverage for documents, sheets, and slides, including operations like append, list, and detailed updates. There are no obvious gaps; agents can perform all essential workflows without dead ends, ensuring robust functionality.
Available Tools
23 tools

append_rows (B)
Append one or more rows of data to a spreadsheet (sheet, excel, workbook) tab. Use "rows" for multiple rows or "values" for a single row.
| Name | Required | Description | Default |
|---|---|---|---|
| rows | No | Multiple rows. Each row is an array of values or an object keyed by column letter, e.g. [["Alice", 30], ["Bob", 25]] or [{"A": "Alice", "B": 30}] | |
| tab_id | Yes | Tab ID | |
| values | No | Single row as an array or column-letter object, e.g. ["Alice", 30] or {"A": "Alice", "B": 30} | |
| sheet_id | Yes | Sheet (workbook) ID | |
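As a sketch of the two calling conventions the description names, here are hypothetical payloads built from the parameter table above (IDs and cell values are illustrative, not real):

```python
# Use "rows" for multiple rows, or "values" for a single row, not both.
payload_multi = {
    "sheet_id": "wb_123",   # Sheet (workbook) ID (illustrative)
    "tab_id": "tab_1",      # Tab ID (illustrative)
    "rows": [["Alice", 30], ["Bob", 25]],   # array-of-arrays form
}

payload_single = {
    "sheet_id": "wb_123",
    "tab_id": "tab_1",
    "values": {"A": "Alice", "B": 30},      # column-letter object form
}
```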
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden for behavioral disclosure. It mentions the append action but doesn't clarify important behavioral aspects: whether this requires write permissions, if it overwrites existing data, what happens on errors, or typical response format. The description is insufficient for a mutation tool with zero annotation coverage.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise with just two sentences that efficiently communicate the core functionality and parameter usage. Every word earns its place, and the information is front-loaded with the primary purpose stated first.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a mutation tool with no annotations and no output schema, the description is incomplete. It doesn't address important contextual aspects: what permissions are needed, whether the operation is idempotent, what format the response takes, or error conditions. The description alone is inadequate for safe and effective tool invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already fully documents all 4 parameters. The description adds minimal value by clarifying the 'rows' vs. 'values' distinction with brief examples, but doesn't provide additional semantic context beyond what's in the schema descriptions. This meets the baseline for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Append one or more rows of data') and resource ('to a spreadsheet (sheet, excel, workbook) tab'), making the purpose explicit. However, it doesn't distinguish this tool from potential alternatives like 'update_sheet' or 'create_sheet' that might also modify spreadsheet data, so it doesn't reach the highest score.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides implied usage guidance by explaining when to use 'rows' vs. 'values' parameters, but it doesn't explicitly state when to choose this tool over alternatives like 'update_sheet' or 'create_sheet_tab'. No exclusion criteria or sibling tool comparisons are mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
append_slides (C)
Append one or more slides to an existing presentation (slides, powerpoint, deck, keynote).
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | Presentation ID | |
| slide | No | Single slide to append | |
| slides | No | Array of slides to append | |
| position | No | 0-based position to insert at (appends to end if omitted) | |
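A hypothetical payload for this tool, assuming the slide object shape described for create_slide_presentation ("html" content plus optional "notes"); the presentation ID is illustrative:

```python
# Append two slides at the front of an existing presentation.
payload = {
    "id": "pres_456",        # Presentation ID (illustrative)
    "slides": [
        {"html": "<h1>Q3 Results</h1>"},
        {"html": "<p>Revenue up 12%</p>", "notes": "pause here"},
    ],
    "position": 0,           # 0-based; omit to append at the end
}
```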
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full burden for behavioral disclosure. It states the tool appends slides, implying a write/mutation operation, but doesn't disclose critical traits like required permissions, whether changes are reversible, rate limits, or error conditions. The description adds minimal context beyond the basic action, leaving significant gaps for an agent to understand behavioral implications.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that clearly states the tool's purpose without unnecessary words, front-loaded with the core action and resource. It could mention key parameters or usage context, but it avoids redundancy and waste.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity (4 parameters with nested objects, no output schema, and no annotations), the description is incomplete. It doesn't address behavioral aspects like mutation effects, error handling, or return values, nor does it provide usage guidance. For a tool that modifies presentations, more context is needed to help an agent use it correctly and safely.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so parameters are well-documented in the schema. The description adds no specific parameter semantics beyond implying that 'slides' can be appended (plural), which aligns with the schema's 'slide' and 'slides' parameters. It doesn't explain parameter interactions (e.g., using 'slide' vs 'slides') or provide additional context, so it meets the baseline for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Append') and resource ('slides to an existing presentation'), specifying the verb and target. It distinguishes from sibling tools like 'create_slide_presentation' (which creates new presentations) and 'update_slide_presentation' (which modifies existing slides). However, it doesn't explicitly differentiate from 'append_to_document' or 'append_rows', which have similar verbs but different resources.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites (e.g., needing an existing presentation ID), exclusions (e.g., not for creating new presentations), or comparisons to siblings like 'update_slide_presentation' for modifying slides versus appending new ones. Usage is implied by the action but not explicitly stated.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
append_to_document (A)
Append HTML content to the end of an existing document (doc, word, note) without replacing existing content.
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | Document ID | |
| content | Yes | HTML content to append | |
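A minimal hypothetical payload, built from the two required parameters above (the document ID is illustrative):

```python
# Both parameters are required; content is HTML appended after the
# document's existing content, which is left intact.
payload = {
    "id": "doc_789",   # Document ID (illustrative)
    "content": "<h2>Changelog</h2><p>Added export support.</p>",
}
```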
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden for behavioral disclosure. It states the append action and non-destructive nature ('without replacing existing content'), which is helpful. However, it doesn't mention important behavioral aspects like required permissions, whether the document must be editable, potential rate limits, error conditions, or what happens with invalid HTML content. For a mutation tool with zero annotation coverage, this leaves significant gaps.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, well-structured sentence that efficiently communicates the core functionality. It's front-loaded with the main action and includes important qualifiers without unnecessary words. Every element serves a purpose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a mutation tool with no annotations and no output schema, the description provides adequate basic information about what the tool does but lacks completeness. It doesn't cover important contextual elements like error handling, return values, permissions needed, or how the append operation interacts with document structure. The description is minimal but not fully comprehensive for this type of operation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents both parameters (id and content) adequately. The description adds marginal value by specifying that content is 'HTML content' and that it's appended 'to the end' of a document, but doesn't provide additional syntax, format details, or constraints beyond what the schema provides. Baseline 3 is appropriate when the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Append HTML content'), the target resource ('existing document'), and the scope ('without replacing existing content'). It distinguishes from siblings like update_document (which might replace content) and create_document (which creates new documents).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context by specifying 'existing document' and 'without replacing existing content', suggesting this is for incremental additions rather than full replacements. However, it doesn't explicitly state when to use this tool versus alternatives like update_document or create_document, nor does it mention any prerequisites or exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
create_document (B)
Create a new document. Use this when asked to create, write, draft, or compose a document, doc, note, page, article, or word document. Content should be HTML (e.g. "Hello").
| Name | Required | Description | Default |
|---|---|---|---|
| title | Yes | Document title | |
| content | No | HTML content of the document | |
| server_id | No | Server ID to create in (omit for personal) | |
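A hypothetical payload illustrating the table above; server_id is omitted, which the schema says targets the personal workspace (the title and content are made up):

```python
# Create a document with HTML content in the personal workspace.
payload = {
    "title": "Meeting Notes",
    "content": "<h1>Agenda</h1><ul><li>Roadmap</li></ul>",  # HTML, per the description
}
```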
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. While it mentions that content should be HTML, it doesn't cover important behavioral aspects like what permissions are needed, whether the creation is immediate or requires approval, what happens if the title already exists, or what the response looks like. For a creation tool with zero annotation coverage, this leaves significant gaps.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately concise with two sentences that each serve a clear purpose: the first states the tool's function, and the second provides usage guidance and format requirements. There's no unnecessary verbiage, though the list of synonyms could be slightly trimmed for efficiency.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (creation operation with 3 parameters), lack of annotations, and no output schema, the description is moderately complete. It covers the basic purpose and usage but lacks important behavioral details about permissions, error conditions, and response format that would be needed for comprehensive understanding.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents all three parameters. The description adds some value by specifying that content should be HTML format, which provides additional context beyond the schema's generic 'HTML content' description. However, it doesn't explain the 'server_id' parameter's purpose or the implications of omitting it, leaving the schema to carry most of the parameter documentation burden.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with the verb 'Create' and resource 'document', making it easy to understand what the tool does. However, it doesn't explicitly differentiate from sibling tools like 'create_sheet' or 'create_slide_presentation' beyond mentioning 'document' in the resource name.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear usage context with examples of when to use it ('when asked to create, write, draft, or compose a document, doc, note, page, article, or word document'). It doesn't explicitly state when NOT to use it or mention specific alternatives among the many sibling tools, but the context is sufficient for basic guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
create_sheet (A)
Create a new spreadsheet (sheet, excel, workbook) with an initial tab. Use this when asked to create a spreadsheet, table, workbook, tracker, or organize data in rows and columns.
| Name | Required | Description | Default |
|---|---|---|---|
| rows | No | Number of rows (default 100) | |
| cells | No | Initial cell data in A1 notation. Each cell is an object with: value (string|number), and optional format: { bold, italic, underline (booleans), color (text hex e.g. "#FF0000"), bgColor (background hex), fontSize (number), fontFamily (string), align ("left"|"center"|"right"), numberFormat ("currency"|"percentage"|"number"|"date"), decimals (number), currencySymbol (string) }. Example: {"A1": {"value": "Name", "format": {"bold": true, "bgColor": "#1B3A5C", "color": "#FFFFFF"}}} | |
| title | Yes | Workbook title | |
| columns | No | Number of columns (default 26) | |
| tab_name | No | Name of the first tab (default "Sheet 1") | |
| server_id | No | Server ID | |
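A hypothetical payload showing the A1-notation "cells" format from the schema, where each cell carries a value and an optional format object (title and values are illustrative):

```python
payload = {
    "title": "Expense Tracker",
    "tab_name": "2024",
    "cells": {
        "A1": {"value": "Name",
               "format": {"bold": True, "bgColor": "#1B3A5C", "color": "#FFFFFF"}},
        "B1": {"value": "Amount",
               "format": {"numberFormat": "currency", "decimals": 2}},
        "B2": {"value": 199.99},
    },
    # rows/columns omitted: the schema defaults are 100 rows and 26 columns
}
```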
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. While it mentions creating a spreadsheet with an initial tab, it doesn't disclose behavioral traits like whether this requires specific permissions, what happens if a duplicate title exists, or what the response format looks like. For a creation tool with zero annotation coverage, this is a significant gap.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized with two sentences that each earn their place. The first sentence states the core functionality, and the second provides clear usage guidance. There's no wasted language or redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (6 parameters including nested objects) and lack of both annotations and output schema, the description is somewhat incomplete. While it clearly states purpose and usage, it doesn't address behavioral aspects like error conditions, permissions, or response format that would be helpful for a creation tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents all 6 parameters thoroughly. The description doesn't add any parameter-specific information beyond what's in the schema. According to the rules, when schema coverage is high (>80%), the baseline is 3 even when the description adds no parameter information.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb ('Create') and resource ('new spreadsheet') with specific synonyms (sheet, excel, workbook) and mentions the initial tab creation. It distinguishes this tool from siblings like 'create_document' or 'create_slide_presentation' by specifying spreadsheet creation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly states when to use this tool: 'when asked to create a spreadsheet, table, workbook, tracker, or organize data in rows and columns.' This provides clear context for usage versus alternatives like document or presentation creation tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
create_sheet_tab (C)
Add a new tab to a spreadsheet (sheet, excel, workbook).
| Name | Required | Description | Default |
|---|---|---|---|
| name | No | Tab name | |
| rows | No | Number of rows | |
| cells | No | Initial cells in A1 notation. Each cell: { value, format?: { bold, italic, underline, color (text hex), bgColor (background hex), fontSize, fontFamily, align, numberFormat, decimals, currencySymbol } } | |
| columns | No | Number of columns | |
| sheet_id | Yes | Sheet (workbook) ID | |
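Since only sheet_id is required, both a minimal and a fuller call are valid shapes; these hypothetical payloads use made-up IDs and values:

```python
minimal = {"sheet_id": "wb_123"}  # server presumably picks a default tab name

fuller = {
    "sheet_id": "wb_123",
    "name": "Q4",
    "rows": 50,
    "columns": 10,
    "cells": {"A1": {"value": "Week", "format": {"bold": True}}},
}
```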
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden for behavioral disclosure. It states the tool will 'Add a new tab', which implies a write/mutation operation, but it doesn't disclose any behavioral traits: no information about permissions needed, whether the operation is idempotent, rate limits, error conditions, or what happens on success (e.g., returns tab ID). For a mutation tool with zero annotation coverage, this is a significant gap in transparency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that gets straight to the point with zero waste. It's appropriately sized for a tool with this complexity and well front-loaded: the agent immediately understands the core functionality, and every word earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a mutation tool with 5 parameters (including complex nested objects for cells), no annotations, and no output schema, the description is incomplete. It doesn't address what the tool returns, error conditions, permission requirements, or behavioral constraints. The agent lacks crucial information needed to use this tool effectively in production scenarios, especially given the absence of structured metadata.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already fully documents all 5 parameters. The description doesn't add any parameter semantics beyond what's in the schema: it doesn't explain relationships between parameters (e.g., that rows/columns define initial tab size), constraints, or usage patterns. With complete schema coverage, baseline 3 is appropriate; the description doesn't compensate, but the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Add a new tab') and resource ('to a spreadsheet'), making the purpose immediately understandable. It distinguishes from siblings like 'create_sheet' (creates entire spreadsheet) and 'update_sheet_tab' (modifies existing tab), though it doesn't explicitly name these alternatives. The description is specific but could be more precise about distinguishing from all relevant siblings.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives like 'create_sheet' (for new spreadsheets) or 'update_sheet_tab' (for modifying existing tabs). It doesn't mention prerequisites, such as needing an existing spreadsheet ID, or contextual factors like permission requirements. The agent must infer usage from the tool name and sibling list alone.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
create_slide_presentation (B)
Create a new slide presentation (slides, powerpoint, deck, keynote). Use this when asked to create a presentation, slide deck, or slideshow. Each slide has "html" content and optional "background" and "notes".
| Name | Required | Description | Default |
|---|---|---|---|
| data | No | Array of slide objects | |
| theme | No | Theme name (default "default") | |
| title | Yes | Presentation title | |
| server_id | No | Server ID | |
| aspectRatio | No | Aspect ratio (default "16:9") | |
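A hypothetical payload combining the description's slide shape ("html" plus optional "background" and "notes") with the schema defaults; the title and slide content are illustrative:

```python
payload = {
    "title": "Launch Plan",
    "theme": "default",        # schema default
    "aspectRatio": "16:9",     # schema default
    "data": [
        {"html": "<h1>Launch Plan</h1>", "background": "#FFFFFF"},
        {"html": "<p>Timeline</p>", "notes": "walk through milestones"},
    ],
}
```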
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full burden. It states this is a creation tool but doesn't disclose behavioral traits like whether it requires authentication, what happens on failure, if it's idempotent, or what the output looks like. The description mentions slide structure but doesn't cover broader behavioral aspects needed for a mutation tool with zero annotation coverage.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized at two sentences, front-loaded with the core purpose. The second sentence provides useful additional context about slide structure. There's minimal waste, though the parenthetical synonyms in the first sentence could be slightly redundant.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a creation tool with 5 parameters, 100% schema coverage, but no annotations and no output schema, the description is moderately complete. It covers the basic purpose and usage context but lacks behavioral transparency and output information. Given the complexity of a presentation creation tool, more context about what happens after creation would be helpful.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents all 5 parameters thoroughly. The description adds minimal value beyond the schema: it mentions 'Each slide has "html" content and optional "background" and "notes"', which partially explains the 'data' parameter structure but doesn't add meaningful semantics beyond the schema descriptions. Baseline 3 is appropriate when the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Create a new slide presentation' with specific resources mentioned (slides, powerpoint, deck, keynote). It distinguishes from siblings like 'create_document' or 'create_sheet' by focusing on slide presentations, though it doesn't explicitly contrast with them. The verb 'create' is specific and the resource is well-defined.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear usage context: 'Use this when asked to create a presentation, slide deck, or slideshow.' This gives explicit when-to-use guidance. However, it doesn't mention when NOT to use it (e.g., vs. 'create_document' for text documents) or name specific alternatives among siblings, which would be needed for a perfect score.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
delete_document (C)
Permanently delete a document (doc, word, note) by ID.
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | Document ID | |
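Because the description calls deletion permanent, a client-side confirmation guard is a sensible precaution; this helper is purely illustrative, not part of the tool:

```python
def confirm_delete(doc_id: str, confirmed: bool) -> dict:
    """Return a delete_document payload only after explicit confirmation."""
    if not confirmed:
        raise RuntimeError(f"refusing to permanently delete {doc_id} without confirmation")
    return {"id": doc_id}  # the tool's only parameter
```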
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It states the action is 'permanently delete', which implies destructive behavior, but doesn't address critical aspects like permissions required, whether deletion is reversible, confirmation prompts, or error handling for invalid IDs. This leaves significant gaps for a destructive operation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that conveys the core action and key constraint ('permanently') without unnecessary words. It's appropriately sized for a simple tool with one parameter.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a destructive tool with no annotations and no output schema, the description is insufficient. It doesn't explain what 'permanently' entails (e.g., no trash/recovery), what happens on success/failure, or return values. Given the high stakes of deletion and lack of structured safety hints, more context is needed.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The description mentions 'by ID' which aligns with the single required parameter 'id' in the schema. Since schema description coverage is 100% (the 'id' parameter is documented as 'Document ID'), the description adds minimal value beyond what the schema already provides, meeting the baseline for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('permanently delete') and resource ('a document (doc, word, note) by ID'), making the purpose unambiguous. It doesn't explicitly differentiate from sibling deletion tools like delete_sheet or delete_slide_presentation, but the document type specification provides some distinction.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites (e.g., needing the document ID from list_documents or get_document), nor does it clarify when deletion is appropriate versus updating or other operations.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
delete_sheet (Grade: B)
Permanently delete a spreadsheet (sheet, excel, workbook) and all its tabs.
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | Sheet (workbook) ID | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions 'Permanently delete,' which implies a destructive, irreversible action, but does not specify authentication requirements, error conditions, or what happens to linked data. For a destructive tool with zero annotation coverage, this leaves significant gaps in understanding its behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the key information ('Permanently delete') and avoids unnecessary words. Every part of the sentence contributes to understanding the tool's scope and impact.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given that this is a destructive tool with no annotations and no output schema, the description is incomplete. It lacks details on permissions, error handling, confirmation steps, or what is returned after deletion. For such a high-stakes operation, more context is needed to ensure safe and correct usage.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, with the 'id' parameter documented as 'Sheet (workbook) ID.' The description adds no additional parameter semantics beyond what the schema provides, such as format examples or validation rules. With high schema coverage, the baseline score of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Permanently delete') and the resource ('a spreadsheet (sheet, excel, workbook) and all its tabs'), making the purpose specific and unambiguous. It distinguishes this tool from sibling tools like delete_sheet_tab (which deletes only a tab) and delete_document (which deletes a different resource type).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives like delete_sheet_tab or delete_document, nor does it mention prerequisites, permissions, or recovery options. It simply states what the tool does without contextual usage information.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
delete_sheet_tab (Grade: A)
Delete a tab from a spreadsheet (sheet, excel, workbook). Cannot delete the last remaining tab.
| Name | Required | Description | Default |
|---|---|---|---|
| tab_id | Yes | Tab ID | |
| sheet_id | Yes | Sheet (workbook) ID | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively communicates a key constraint (cannot delete the last tab) and implies a destructive action ('Delete'), but lacks details on permissions needed, error handling (e.g., if tab doesn't exist), or whether the operation is reversible. It adds value beyond the schema but is incomplete for a mutation tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, well-structured sentence that front-loads the core action and resource, followed by a critical constraint. Every word serves a purpose, with no redundancy or fluff, making it highly efficient and easy to parse.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a destructive tool with no annotations and no output schema, the description is moderately complete. It covers the main action and a key constraint, but lacks information on permissions, error responses, or what happens post-deletion (e.g., if remaining tabs are reordered). Given the complexity of a delete operation, more behavioral context would be beneficial.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with both parameters ('sheet_id' and 'tab_id') documented in the schema. The description does not add any parameter-specific details beyond what the schema provides, such as format examples or sourcing instructions. Baseline 3 is appropriate since the schema adequately covers parameter semantics.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Delete') and target resource ('a tab from a spreadsheet'), using specific terminology like 'sheet, excel, workbook' to clarify scope. It distinguishes itself from sibling tools like 'delete_sheet' (which deletes entire sheets) and 'delete_document' (which deletes documents), making the purpose unambiguous.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when NOT to use this tool ('Cannot delete the last remaining tab'), which is crucial for avoiding errors. However, it does not mention alternatives like 'update_sheet_tab' for modifying tabs instead of deleting, or clarify prerequisites such as needing valid sheet/tab IDs, leaving some gaps in usage context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
delete_slide_presentation (Grade: C)
Permanently delete a slide presentation (slides, powerpoint, deck, keynote).
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | Presentation ID | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It states the action is 'permanent', which is a critical behavioral trait (irreversible deletion), but lacks other important details: it doesn't specify permissions required, rate limits, error conditions (e.g., if the ID is invalid), or what happens upon success (e.g., no output schema exists). For a destructive tool with zero annotation coverage, this is insufficient.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the key action ('permanently delete') and resource. It includes helpful synonyms without redundancy. Every word earns its place, making it appropriately sized and easy to parse for an agent.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (destructive operation with no annotations and no output schema), the description is incomplete. It mentions permanence but omits critical context: no information on permissions, error handling, or return values. For a deletion tool, this leaves significant gaps that could hinder an agent's ability to use it correctly and safely.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, with the 'id' parameter documented as 'Presentation ID'. The description adds no additional meaning beyond this, such as format examples or sourcing instructions. According to the rules, with high schema coverage (>80%), the baseline is 3 even without param info in the description, which applies here.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('permanently delete') and the resource ('slide presentation'), with synonyms provided for clarity (slides, powerpoint, deck, keynote). It distinguishes the tool from siblings like 'get_slide_presentation' or 'update_slide_presentation' by specifying deletion. However, it doesn't explicitly differentiate from other delete tools (e.g., 'delete_document'), which prevents a perfect score.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites (e.g., needing the presentation ID), exclusions (e.g., not for read-only operations), or comparisons to siblings like 'delete_document' or 'delete_sheet'. Without such context, an agent might misuse it or overlook better options.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_document (Grade: A)
Get a document (doc, word, note, page) by ID, including its full content (title, HTML body, metadata).
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | Document ID | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It states this is a read operation ('Get'), but does not disclose behavioral traits such as error handling (e.g., what happens if the ID is invalid), permissions required, rate limits, or whether it's idempotent. The description is minimal and lacks necessary context for safe and effective use.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the purpose ('Get a document... by ID') and includes key details about document types and returned content. There is no wasted language, and it is appropriately sized for the tool's complexity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (1 parameter, no output schema, no annotations), the description is somewhat complete but lacks depth. It covers what the tool does and what it returns, but without annotations or output schema, it should ideally include more behavioral context (e.g., error cases, permissions) to be fully helpful for an AI agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with the 'id' parameter documented in the schema as 'Document ID'. The description adds no additional meaning beyond this, as it only repeats 'by ID' without explaining format, constraints, or examples. Baseline is 3 since the schema provides full coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb ('Get') and resource ('document'), specifies the types of documents included ('doc, word, note, page'), and distinguishes it from sibling tools like 'list_documents' by focusing on retrieval by ID rather than listing. It also mentions what content is returned ('full content including title, HTML body, metadata').
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context by stating 'by ID', which suggests this tool is for retrieving specific documents when the ID is known, as opposed to 'list_documents' for browsing. However, it does not explicitly mention when not to use it or name alternatives, though the distinction is clear from the sibling tool names.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sheet (Grade: A)
Get a spreadsheet (sheet, excel, workbook) by ID, including all tabs and their cell data in A1 notation.
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | Sheet (workbook) ID | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It mentions the tool retrieves data but does not disclose behavioral traits such as whether it requires authentication, rate limits, error handling, or if it's a read-only operation (implied by 'Get' but not explicit). This leaves gaps for a tool with no annotation coverage.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the purpose ('Get a spreadsheet...') and includes essential details (input and output). There is no wasted text, and every part earns its place by clarifying scope and format.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no annotations and no output schema, the description adequately covers the tool's purpose and basic usage but lacks details on behavioral aspects (e.g., permissions, errors) and return structure beyond 'cell data in A1 notation'. It is minimally viable but has clear gaps for a data retrieval tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with the parameter 'id' documented as 'Sheet (workbook) ID'. The description adds value by clarifying that 'ID' refers to a spreadsheet and specifying the return format ('A1 notation'), but does not provide additional syntax or format details beyond what the schema implies. Baseline 3 is appropriate as the schema handles parameter documentation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb ('Get') and resource ('spreadsheet (sheet, excel, workbook)'), specifies the input ('by ID'), and details what is returned ('all tabs and their cell data in A1 notation'). It distinguishes from siblings like 'get_sheet_tab' (which retrieves a single tab) and 'list_sheets' (which lists sheets without data).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage by specifying 'by ID' and the data returned, but does not explicitly state when to use this tool versus alternatives like 'get_sheet_tab' (for a single tab) or 'list_sheets' (for metadata only). It provides context but lacks explicit guidance on exclusions or prerequisites.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sheet_tab (Grade: A)
Get a single tab from a spreadsheet (sheet, excel, workbook), including all cell data in A1 notation.
| Name | Required | Description | Default |
|---|---|---|---|
| tab_id | Yes | Tab ID | |
| sheet_id | Yes | Sheet (workbook) ID | |
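As a sketch, an agent supplying both required parameters might issue a call like the one below. Both ID values are invented placeholders; in practice an agent would obtain them from a prior list or get call:

```python
import json

# Hypothetical MCP tools/call request for get_sheet_tab.
# "sheet_abc" and "tab_1" are invented placeholder IDs.
request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "get_sheet_tab",
        "arguments": {
            "sheet_id": "sheet_abc",  # Sheet (workbook) ID
            "tab_id": "tab_1",        # Tab ID within that workbook
        },
    },
}
print(json.dumps(request, indent=2))
```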
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions 'including all cell data in A1 notation', which adds some behavioral context about the return format. However, it doesn't disclose critical traits like whether this is a read-only operation, authentication needs, rate limits, error conditions, or pagination behavior for large tabs.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the core purpose ('Get a single tab from a spreadsheet') and adds essential detail ('including all cell data in A1 notation'). There's no wasted wording or redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a read operation with 2 parameters and 100% schema coverage, the description is minimally adequate. However, with no annotations and no output schema, it should ideally explain more about the return structure (e.g., what 'A1 notation' entails) or behavioral constraints. It's complete enough for basic use but lacks depth for robust agent understanding.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with both parameters ('sheet_id' and 'tab_id') documented in the schema. The description doesn't add any parameter-specific details beyond what the schema provides, such as format examples or where to find these IDs. Baseline 3 is appropriate when schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb ('Get') and resource ('a single tab from a spreadsheet'), specifying it includes 'all cell data in A1 notation'. It distinguishes from siblings like 'get_sheet' (which likely gets the entire spreadsheet) and 'list_sheets' (which lists spreadsheets).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for retrieving tab data with cell details, but doesn't explicitly state when to use this vs. alternatives like 'get_sheet' (for entire spreadsheet) or 'update_sheet_tab' (for modifications). No exclusions or prerequisites are mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_slide_presentation (Grade: C)
Get a slide presentation (slides, powerpoint, deck, keynote) by ID, including all slide data.
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | Presentation ID | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It states the tool retrieves data ('Get'), implying a read-only operation, but doesn't clarify if it requires authentication, has rate limits, returns paginated results, or what 'all slide data' entails (e.g., metadata, content). For a read tool with zero annotation coverage, this is insufficient.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the core action ('Get a slide presentation by ID') and adds a clarifying detail ('including all slide data'). There's no wasted verbiage, though it could be slightly more structured (e.g., separating purpose from scope).
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (1 parameter, no output schema) and lack of annotations, the description is incomplete. It doesn't explain what 'all slide data' includes, potential error cases (e.g., invalid ID), or return format, leaving gaps for the agent. A more comprehensive description would address these aspects.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with the single parameter 'id' documented as 'Presentation ID' in the schema. The description adds no additional semantic context beyond this (e.g., format examples, where to find IDs, or constraints). With high schema coverage, the baseline is 3, as the description doesn't compensate but also doesn't detract.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Get a slide presentation by ID, including all slide data.' It specifies the verb ('Get') and resource ('slide presentation'), and distinguishes it from list operations. However, it doesn't explicitly differentiate from other get_* tools (like get_document or get_sheet), which would be needed for a perfect score.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It doesn't mention when to choose get_slide_presentation over list_slides (for listing) or update_slide_presentation (for modifications), nor does it specify prerequisites like needing a valid ID. This leaves the agent without contextual usage cues.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_documents (Grade: A)
List all documents (docs, word, notes, pages). Use this when asked to list, find, or search documents, notes, drafts, or written content. Supports pagination, sorting, and filtering by workspace.
| Name | Required | Description | Default |
|---|---|---|---|
| sort | No | Sort field | |
| limit | No | Max results (1-100, default 50) | |
| order | No | Sort order | |
| offset | No | Pagination offset (default 0) | |
| server_id | No | Filter by server ID, or "personal" | |
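To illustrate the pagination and filtering parameters, a call using all five optional arguments might look like the sketch below. The `sort` and `order` values are assumptions, since the schema documents those fields without enumerating their accepted values:

```python
import json

# Hypothetical MCP tools/call request for list_documents.
# "title" and "asc" are assumed values for sort/order; the schema
# does not enumerate the accepted options.
request = {
    "jsonrpc": "2.0",
    "id": 3,
    "method": "tools/call",
    "params": {
        "name": "list_documents",
        "arguments": {
            "limit": 25,              # max results, 1-100 (default 50)
            "offset": 0,              # pagination offset (default 0)
            "sort": "title",          # sort field (assumed value)
            "order": "asc",           # sort order (assumed value)
            "server_id": "personal",  # or a specific workspace's server ID
        },
    },
}
print(json.dumps(request, indent=2))
```

All five arguments are optional, so an agent could also send an empty `arguments` object and rely on the documented defaults.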
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions 'Supports pagination, sorting, and filtering by workspace' which adds useful operational context beyond just listing. However, it doesn't specify rate limits, authentication requirements, or what happens when no documents exist. The description doesn't contradict any annotations (none exist).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise with just two sentences. The first sentence states the core purpose, and the second provides usage guidelines and behavioral context. Every word earns its place with zero waste or redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a list operation with no annotations and no output schema, the description provides good context about what the tool does, when to use it, and key behavioral aspects (pagination, sorting, filtering). However, it doesn't describe the return format or what information is included in the listed documents, which would be helpful given the lack of output schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already fully documents all 5 parameters. The description mentions 'pagination, sorting, and filtering by workspace' which aligns with parameters like offset, limit, sort, order, and server_id, but doesn't add specific semantic details beyond what the schema provides. Baseline 3 is appropriate when schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb ('List') and resource ('all documents') with specific examples of document types (docs, word, notes, pages). It distinguishes this tool from sibling tools like get_document (which retrieves a single document) and other creation/deletion tools by focusing on listing multiple documents.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly states when to use this tool: 'when asked to list, find, or search documents, notes, drafts, or written content.' This provides clear usage context and distinguishes it from sibling tools that perform different operations (create, update, delete, get single).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_servers (Grade: B)
List all workspaces (servers) you have access to.
| Name | Required | Description | Default |
|---|---|---|---|
No parameters | |||
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It states the action ('List all workspaces') but lacks details on permissions, rate limits, pagination, or output format. For a read operation with zero annotation coverage, this is a significant gap in transparency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, clear sentence that efficiently conveys the core purpose without any wasted words. It is front-loaded and appropriately sized for a simple tool, making it highly concise and well-structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (0 parameters, no output schema, no annotations), the description is minimally adequate but incomplete. It covers the basic action but lacks behavioral context like output details or usage scenarios. With no annotations or output schema, it should provide more guidance to be fully helpful.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has no parameters, so there is no parameter documentation to provide. The description appropriately adds no parameter details and implies no inputs are required, which aligns with the schema. The baseline for zero-parameter tools is 4, since the description has no schema gaps to compensate for.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb ('List') and resource ('workspaces/servers') with the scope ('all you have access to'), making the purpose specific and actionable. However, it doesn't explicitly distinguish this tool from its siblings (e.g., list_documents, list_sheets, list_slides), which are also list operations but for different resource types, so it misses full sibling differentiation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It doesn't mention any prerequisites, exclusions, or comparisons to sibling tools like list_documents, leaving the agent to infer usage based on resource type alone without explicit context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_sheets (Grade A)
List all spreadsheets (sheets, excel, workbooks) with their tab metadata (no cell data). Use this when asked to list, find, or search spreadsheets, workbooks, tables, or tabular data.
| Name | Required | Description | Default |
|---|---|---|---|
| sort | No | Sort field | |
| limit | No | Max results (1-100, default 50) | |
| order | No | Sort order | |
| offset | No | Pagination offset | |
| server_id | No | Filter by server ID |
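Concretely, a paginated list_sheets call would travel inside a standard MCP `tools/call` request. The sketch below uses the argument names from the schema above; the `sort` and `order` values are illustrative guesses, since the schema does not enumerate the allowed values.

```python
import json

# Hypothetical MCP "tools/call" request for list_sheets.
# Argument names come from the published schema; the sort/order
# values ("updated_at", "desc") are assumptions, not documented enums.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "list_sheets",
        "arguments": {
            "limit": 50,            # max results, 1-100 (default 50)
            "offset": 50,           # skip the first page of 50
            "sort": "updated_at",   # hypothetical sort field
            "order": "desc",        # hypothetical sort order
        },
    },
}
print(json.dumps(request, indent=2))
```

Because the description never explains how `offset` and `limit` interact, an agent has to infer page-two semantics like the `"offset": 50` above on its own — exactly the gap the assessment calls out.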
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It discloses that the tool returns metadata without cell data, which is useful behavioral context. However, it doesn't mention other important traits like pagination behavior (implied by offset/limit parameters but not explained), authentication requirements, rate limits, or error conditions, leaving gaps for a read operation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two concise sentences with zero waste. The first sentence states the purpose and scope, and the second provides clear usage guidelines. Every word earns its place, and it's front-loaded with essential information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (list operation with 5 parameters) and no annotations or output schema, the description does well by clarifying scope and usage. However, it lacks details on return format (e.g., structure of tab metadata) and behavioral aspects like pagination, which would be helpful for an agent to interpret results correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents all parameters thoroughly. The description adds no additional parameter semantics beyond what's in the schema, such as explaining how filtering by server_id works or default values. Baseline 3 is appropriate when the schema does all the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb ('List') and resource ('all spreadsheets') with specific scope ('with their tab metadata, no cell data'). It distinguishes from siblings like get_sheet (which retrieves cell data) by explicitly excluding cell content, making the purpose highly specific and differentiated.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly states when to use this tool: 'when asked to list, find, or search spreadsheets, workbooks, tables, or tabular data.' It provides clear alternatives by mentioning what it doesn't do (no cell data), which helps distinguish it from tools like get_sheet that retrieve detailed content.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_slides (Grade A)
List all slide presentations (slides, powerpoint, deck, keynote). Use this when asked to list, find, or search presentations, decks, or slideshows.
| Name | Required | Description | Default |
|---|---|---|---|
| sort | No | Sort field | |
| limit | No | Max results (1-100, default 50) | |
| order | No | Sort order | |
| offset | No | Pagination offset | |
| server_id | No | Filter by server ID |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. While it mentions the tool lists presentations, it doesn't describe important behavioral traits such as pagination behavior (implied by 'offset' parameter but not explained), rate limits, authentication requirements, or what the output looks like. For a list operation with 5 parameters, this leaves significant gaps in understanding how the tool behaves.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is perfectly concise and well-structured: two sentences that directly state the purpose and usage guidelines without any wasted words. It's front-loaded with the core functionality and follows with clear invocation context.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (5 parameters, no annotations, no output schema), the description is minimally adequate. It covers the basic purpose and usage but lacks details about behavioral aspects, parameter interactions, and output format. For a list operation with filtering/sorting/pagination capabilities, more context would be helpful, but it meets the minimum viable threshold.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The description adds no information about parameters beyond what's already in the schema (which has 100% coverage). It doesn't explain the meaning of parameters like 'server_id' or provide context about how sorting and pagination work together. With high schema coverage, the baseline is 3, but the description doesn't compensate with any additional semantic value.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'List all slide presentations' with synonyms like 'slides, powerpoint, deck, keynote' for clarity. It specifies the resource type but doesn't explicitly differentiate from sibling tools like 'list_documents' or 'list_sheets', which would require mentioning it's specifically for slide presentations rather than other document types.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear usage guidance: 'Use this when asked to list, find, or search presentations, decks, or slideshows.' This gives explicit context for when to invoke the tool. However, it doesn't mention when NOT to use it or alternatives (e.g., 'get_slide_presentation' for a single presentation), which prevents a perfect score.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
update_document (Grade C)
Update an existing document (doc, word, note). Supply a new title and/or HTML content to replace the body.
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | Document ID | |
| title | No | New title | |
| content | No | New HTML content (replaces entire document body) |
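The replace-the-whole-body behavior is the main trap here. A sketch of a call, assuming the standard MCP `tools/call` envelope (the document ID and content are placeholders):

```python
import json

# Hypothetical update_document call. Per the description, "content"
# replaces the entire document body, so the full HTML must be resent
# even for a one-line edit; "doc-123" is a made-up ID.
request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "update_document",
        "arguments": {
            "id": "doc-123",                # required document ID (placeholder)
            "title": "Q3 Plan (revised)",   # optional new title
            # Full replacement body -- omitting existing sections would delete them.
            "content": "<h1>Q3 Plan</h1><p>Updated draft.</p>",
        },
    },
}
print(json.dumps(request, indent=2))
```

An agent that sends only the changed paragraph here would silently destroy the rest of the document, which is why the assessment flags the missing guidance versus append_to_document.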
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It mentions that content 'replaces entire document body' which is useful behavioral context, but doesn't disclose permission requirements, whether changes are reversible, rate limits, or what happens to existing content not mentioned in parameters.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two concise sentences with zero waste. The first sentence states the purpose, the second explains parameter usage. However, it could be slightly more front-loaded by mentioning the 'replaces entire document body' behavior earlier.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a mutation tool with no annotations and no output schema, the description is insufficient. It doesn't explain what happens when only title or only content is provided, what the return value looks like, error conditions, or how it differs from append_to_document. Given the complexity of document updates, more context is needed.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents all parameters well. The description adds minimal value by mentioning 'new title and/or HTML content' and that content 'replaces entire document body', but doesn't provide additional syntax, format details, or constraints beyond what's in the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Update') and resource ('existing document'), specifying document types (doc, word, note). It distinguishes from creation tools but doesn't explicitly differentiate from other update tools like update_sheet or update_slide_presentation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives like append_to_document or other update_* tools. The description mentions 'replace the body' which hints at overwriting behavior but doesn't provide explicit usage context or prerequisites.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
update_sheet (Grade C)
Update spreadsheet (sheet, excel, workbook) workbook-level properties (currently only title).
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | Sheet (workbook) ID | |
| title | Yes | New title |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It states this is an update operation, implying mutation, but doesn't cover critical aspects like required permissions, whether changes are reversible, error conditions, or what happens to existing properties not mentioned. For a mutation tool with zero annotation coverage, this leaves significant gaps in understanding its behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads key information (update spreadsheet properties). It avoids redundancy and wastes no words, though it could be slightly more structured by separating purpose from limitations.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a mutation tool with no annotations and no output schema, the description is incomplete. It doesn't explain what the tool returns, error handling, or behavioral nuances. While it covers the basic purpose, it fails to provide sufficient context for safe and effective use given the complexity of updating a spreadsheet resource.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema fully documents both parameters (id and title). The description adds minimal value beyond the schema by mentioning 'workbook-level properties' and specifying 'currently only title', which helps contextualize the title parameter. However, it doesn't provide additional semantics like format constraints or examples beyond what's in the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb ('update') and resource ('spreadsheet (sheet, excel, workbook)'), specifying it modifies workbook-level properties. It distinguishes from siblings like 'update_sheet_tab' by focusing on workbook-level changes rather than tab-level. However, it doesn't explicitly differentiate from other update tools like 'update_document' or 'update_slide_presentation' beyond the resource type.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites, when-not-to-use scenarios, or compare with sibling tools like 'update_sheet_tab' for tab-level updates or 'create_sheet' for new workbooks. The agent must infer usage from context alone.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
update_sheet_tab (Grade B)
Update a spreadsheet (sheet, excel, workbook) tab: merge cells, rename, change color, or resize the grid. Set a cell value to null to clear it.
| Name | Required | Description | Default |
|---|---|---|---|
| name | No | New tab name | |
| rows | No | New row count | |
| cells | No | Cells to update in A1 notation (null clears a cell). Each cell is an object with: value (string|number), and optional format: { bold, italic, underline (booleans), color (text hex e.g. "#FF0000"), bgColor (background hex), fontSize (number), fontFamily (string), align ("left"|"center"|"right"), numberFormat ("currency"|"percentage"|"number"|"date"), decimals (number), currencySymbol (string) }. Example: {"A1": {"value": "Revenue", "format": {"bold": true, "bgColor": "#1B3A5C", "color": "#FFFFFF"}}} | |
| color | No | Tab color | |
| tab_id | Yes | Tab ID | |
| columns | No | New column count | |
| sheet_id | Yes | Sheet (workbook) ID |
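The `cells` schema above is the densest part of this tool. A sketch exercising its documented shape — A1-notation keys, value plus optional format objects, and null to clear — wrapped in an assumed MCP `tools/call` envelope (the IDs are placeholders):

```python
import json

# Hypothetical update_sheet_tab call using the "cells" format from the
# schema: A1 keys, value/format objects, and null (None) to clear a cell.
# "wb-001" and "tab-01" are made-up IDs.
request = {
    "jsonrpc": "2.0",
    "id": 3,
    "method": "tools/call",
    "params": {
        "name": "update_sheet_tab",
        "arguments": {
            "sheet_id": "wb-001",   # workbook ID (placeholder)
            "tab_id": "tab-01",     # tab ID (placeholder)
            "cells": {
                # Styled header cell, matching the schema's own example.
                "A1": {
                    "value": "Revenue",
                    "format": {"bold": True, "bgColor": "#1B3A5C", "color": "#FFFFFF"},
                },
                # Currency-formatted number.
                "B1": {
                    "value": 125000,
                    "format": {"numberFormat": "currency", "currencySymbol": "$", "decimals": 0},
                },
                # None serializes to JSON null, which clears the cell.
                "C1": None,
            },
        },
    },
}
print(json.dumps(request, indent=2))
```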
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It mentions 'update' operations but doesn't disclose behavioral traits like whether this requires edit permissions, if changes are reversible, rate limits, or what happens to existing data not mentioned. The null-clearing behavior is noted, but other critical mutation details are missing.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two concise sentences front-loaded with key operations. The first sentence lists major actions efficiently, and the second adds specific cell-clearing behavior. No wasted words, though it could be slightly more structured (e.g., separating tab-level vs. cell-level operations).
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a mutation tool with 7 parameters, 100% schema coverage, no annotations, and no output schema, the description is adequate but has gaps. It covers what the tool does but lacks behavioral context (e.g., permissions, side effects) and usage guidance. The schema compensates for parameter details, but overall completeness is only minimally viable.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents all 7 parameters thoroughly. The description adds minimal value beyond the schema: it implies the 'cells' parameter can clear values with null, but this is already covered in the schema's description. Baseline 3 is appropriate when the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'update' and the resource 'spreadsheet tab', and specifies multiple operations (merge cells, rename, change color, resize grid, clear cells). It distinguishes from siblings like 'update_sheet' (which likely updates the entire sheet) and 'update_document' (different resource type).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives like 'update_sheet' or 'create_sheet_tab'. The description lists operations but doesn't specify prerequisites, constraints, or typical use cases. It mentions clearing cells with null values, but this is more of a parameter detail than usage guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
update_slide_presentation (Grade C)
Update a slide presentation (slides, powerpoint, deck, keynote): title, slide data, theme, or aspect ratio.
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | Presentation ID | |
| data | No | Full replacement array of slide objects | |
| theme | No | New theme | |
| title | No | New title | |
| aspectRatio | No | New aspect ratio |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions what can be updated but lacks critical details: it doesn't warn that 'data' replaces all slides (implied by schema but not stated), specify permission requirements, indicate if changes are reversible, describe error conditions, or mention rate limits. For a mutation tool with zero annotation coverage, this leaves significant gaps in understanding its behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the core action ('update a slide presentation') and enumerates key updatable elements. There's no wasted verbiage, though it could be slightly more structured (e.g., clarifying that 'slide data' refers to the 'data' parameter).
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a mutation tool with 5 parameters, no annotations, and no output schema, the description is incomplete. It doesn't explain the tool's behavior beyond basic purpose, lacks usage context, and omits important details like the replacement nature of the 'data' parameter, error handling, or response format. Given the complexity and lack of structured support, it should provide more guidance.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents all parameters thoroughly. The description lists updatable fields (title, slide data, theme, aspect ratio) which aligns with the schema but adds no additional semantic context beyond what's in the parameter descriptions. This meets the baseline for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb ('update') and resource ('slide presentation'), with specific examples of what can be updated (title, slide data, theme, aspect ratio). It distinguishes from siblings like 'create_slide_presentation' and 'delete_slide_presentation' by specifying modification rather than creation or deletion, though it doesn't explicitly differentiate from other update tools like 'update_document' or 'update_sheet'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives. It doesn't mention prerequisites (e.g., needing an existing presentation ID), compare with sibling tools like 'append_slides' for partial updates, or specify scenarios where this full-replacement approach is appropriate versus incremental modifications.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.