Mila
Server Details
Mila is an AI-native collaborative platform for documents, spreadsheets, and slide presentations. The Mila MCP server lets AI assistants create, read, update, and manage office documents programmatically. It exposes 23 MCP tools covering documents, sheets, slides, and servers. Get your API key at https://mila.gg/api-keys.
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.3/5 across 23 of 23 tools scored.
Every tool has a clearly distinct purpose with no ambiguity. Tools are organized by resource type (documents, spreadsheets, presentations) and action (create, get, update, delete, list, append), making it easy to distinguish between them. For example, 'append_rows' is specific to spreadsheets while 'append_to_document' is for documents, avoiding overlap.
Tool names follow a highly consistent verb_noun pattern throughout, such as 'create_document', 'get_sheet', 'update_slide_presentation', and 'list_documents'. This predictability aids in understanding and usage, with no deviations or mixed conventions observed.
With 23 tools, the count is slightly high but reasonable for a server handling multiple document types (documents, spreadsheets, presentations) with full CRUD operations. It covers a broad scope effectively, though it might feel heavy compared to simpler servers.
The tool surface is complete with comprehensive CRUD and lifecycle coverage for documents, spreadsheets, and presentations. It includes creation, retrieval, updating, deletion, listing, and appending operations, with no obvious gaps that would hinder agent workflows in this domain.
Available Tools
23 tools

append_rows (C)
Append one or more rows of data to a spreadsheet (sheet, excel, workbook) tab. Use "rows" for multiple rows or "values" for a single row.
| Name | Required | Description | Default |
|---|---|---|---|
| rows | No | Multiple rows. Each row is an array of values or an object keyed by column letter, e.g. [["Alice", 30], ["Bob", 25]] or [{"A": "Alice", "B": 30}] | |
| tab_id | Yes | Tab ID | |
| values | No | Single row as an array or column-letter object, e.g. ["Alice", 30] or {"A": "Alice", "B": 30} | |
| sheet_id | Yes | Sheet (workbook) ID | |
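Taken together, the schema rows above suggest argument payloads like the following sketch; the sheet and tab IDs are placeholders, not real identifiers:

```python
# Hypothetical append_rows arguments; "sheet-abc" and "tab-1" are placeholder IDs.
multi_row_args = {
    "sheet_id": "sheet-abc",
    "tab_id": "tab-1",
    # "rows": each row is an array of values (or an object keyed by column letter).
    "rows": [["Alice", 30], ["Bob", 25]],
}

single_row_args = {
    "sheet_id": "sheet-abc",
    "tab_id": "tab-1",
    # "values": a single row, here keyed by column letter.
    "values": {"A": "Alice", "B": 30},
}
```

As the description notes, "rows" covers multi-row appends and "values" a single row; the two payloads above are alternatives, not meant to be combined in one call.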
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden but lacks critical behavioral details. It doesn't disclose whether this requires write permissions, if it's idempotent, potential rate limits, or what happens on failure (e.g., partial appends). The mention of 'spreadsheet (sheet, excel, workbook)' adds some context but is insufficient for a mutation tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is brief and front-loaded with the core purpose, followed by a helpful parameter tip. Both sentences earn their place, though it could be slightly more structured (e.g., separating usage guidance from parameter advice).
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a mutation tool with no annotations and no output schema, the description is incomplete. It doesn't cover behavioral aspects like error handling, return values, or side effects, leaving significant gaps in what an agent needs to invoke the tool reliably.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents all parameters thoroughly. The description adds minimal value by clarifying the 'rows' vs 'values' distinction with examples, but doesn't provide additional semantic meaning beyond what's in the schema, meeting the baseline for high coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Append') and resource ('rows of data to a spreadsheet tab'), making the purpose evident. However, it doesn't explicitly differentiate from siblings like 'create_sheet' or 'update_sheet', which could involve similar spreadsheet operations, so it doesn't reach the highest score.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives like 'create_sheet' or 'update_sheet', nor does it mention prerequisites or exclusions. It only offers internal parameter usage tips ('rows' vs 'values'), not contextual usage advice.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
append_slides (C)
Append one or more slides to an existing presentation (slides, powerpoint, deck, keynote).
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | Presentation ID | |
| slide | No | Single slide to append | |
| slides | No | Array of slides to append | |
| position | No | 0-based position to insert at (appends to end if omitted) | |
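A sketch of what the arguments might look like, assuming a placeholder presentation ID and borrowing the slide shape described for create_slide_presentation ("html" plus optional "background" and "notes"):

```python
# Hypothetical append_slides arguments; "pres-123" is a placeholder ID, and
# the slide object shape is assumed from the sibling creation tool.
append_one = {
    "id": "pres-123",
    "slide": {"html": "<h1>Q2 Results</h1>", "notes": "Open with the headline number."},
}

append_many_at_front = {
    "id": "pres-123",
    "slides": [
        {"html": "<h1>Agenda</h1>"},
        {"html": "<h1>Welcome</h1>"},
    ],
    "position": 0,  # 0-based insert position; omitting it appends to the end
}
```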
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full burden. It states the action is 'append' which implies mutation, but doesn't disclose behavioral traits like required permissions, whether changes are reversible, rate limits, or what happens on failure. The description is minimal and lacks critical operational context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence is appropriately concise and front-loaded with the core action. The parenthetical examples ('slides, powerpoint, deck, keynote') add minor value but don't significantly detract from efficiency.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a mutation tool with no annotations and no output schema, the description is incomplete. It doesn't explain what the tool returns, error conditions, or important behavioral aspects. The context signals indicate complexity (nested objects, 4 parameters), but the description doesn't adequately address this complexity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so parameters are fully documented in the schema. The description adds no additional parameter semantics beyond implying slides can be appended. Baseline score of 3 is appropriate when the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Append') and target resource ('slides to an existing presentation'), with examples of presentation types. It distinguishes from creation tools but doesn't explicitly differentiate from similar append operations like 'append_rows' or 'append_to_document'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives like 'create_slide_presentation' for new decks or 'update_slide_presentation' for modifications. It mentions 'existing presentation' but doesn't clarify prerequisites or exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
append_to_document (A)
Append HTML content to the end of an existing document (doc, word, note) without replacing existing content.
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | Document ID | |
| content | Yes | HTML content to append | |
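Since the content parameter takes HTML, a call might look like this sketch (the document ID is a placeholder):

```python
# Hypothetical append_to_document arguments; "doc-456" is a placeholder ID.
args = {
    "id": "doc-456",
    # HTML fragment added after the document's existing content,
    # which is left untouched.
    "content": "<h2>Changelog</h2><p>Added Q2 figures.</p>",
}
```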
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full burden. It mentions the append behavior but doesn't disclose important behavioral traits like: whether this requires specific permissions, what happens if the document doesn't exist, whether the operation is idempotent, or what format the HTML content should be in beyond the schema's basic description.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence that efficiently conveys the core functionality. No wasted words, front-loaded with the main action, and appropriately sized for a straightforward tool.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a mutation tool with no annotations and no output schema, the description is adequate but has gaps. It explains what the tool does but lacks information about error conditions, permissions needed, or what the tool returns. Given the complexity (write operation) and lack of structured metadata, more behavioral context would be helpful.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents both parameters adequately. The description adds minimal value beyond the schema: it mentions 'HTML content', which matches the schema's description, and 'existing document', which relates to the 'id' parameter. Baseline 3 is appropriate when the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Append HTML content'), target resource ('existing document'), and scope ('without replacing existing content'). It distinguishes from siblings like update_document (which likely replaces content) and create_document (which creates new documents).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage when you want to add content to an existing document without overwriting it, but doesn't explicitly state when to use this vs. alternatives like update_document or create_document. No explicit exclusions or prerequisites are mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
create_document (A)
Create a new document. Use this when asked to create, write, draft, or compose a document, doc, note, page, article, or word document. Content should be HTML (e.g. "Hello").
| Name | Required | Description | Default |
|---|---|---|---|
| title | Yes | Document title | |
| content | No | HTML content of the document | |
| server_id | No | Server ID to create in (omit for personal) | |
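A minimal sketch of the arguments, with illustrative values only:

```python
# Hypothetical create_document arguments. server_id is omitted here,
# which (per the schema) creates the document in the personal space.
args = {
    "title": "Meeting Notes",
    # Content is HTML, as the description requires.
    "content": "<h1>Standup</h1><ul><li>Shipped the importer</li></ul>",
}
```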
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden. It states this is a creation tool (implying mutation/write operation) and specifies content format requirements ('Content should be HTML'), which is valuable behavioral context. However, it doesn't mention permission requirements, rate limits, or what happens when server_id is omitted vs provided. The description doesn't contradict any annotations since none exist.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise with only two sentences. The first sentence states the core purpose, and the second provides both usage guidelines and parameter guidance. Every word earns its place with no redundancy or unnecessary elaboration.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a creation tool with no annotations and no output schema, the description does well by covering purpose, usage guidelines, and some behavioral context. However, it could be more complete by mentioning what the tool returns (e.g., document ID or confirmation) or any side effects. Given the 100% schema coverage and clear purpose, it's mostly adequate but has room for improvement on output expectations.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents all three parameters. The description adds some value by specifying that content 'should be HTML' and implying server_id is optional ('omit for personal'), but these are already somewhat covered in the schema descriptions. The description doesn't provide significant additional parameter semantics beyond what the schema offers.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Create a new document') and distinguishes it from siblings by focusing on document creation rather than sheets, slides, or other document types. It explicitly mentions the resource type (document) and provides context about what constitutes a document in this system.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool: 'when asked to create, write, draft, or compose a document, doc, note, page, article, or word document.' This gives clear alternative names and contexts that should trigger this tool selection, helping the agent distinguish from sibling tools like create_sheet or create_slide_presentation.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
create_sheet (A)
Create a new spreadsheet (sheet, excel, workbook) with an initial tab. Use this when asked to create a spreadsheet, table, workbook, tracker, or organize data in rows and columns.
| Name | Required | Description | Default |
|---|---|---|---|
| rows | No | Number of rows (default 100) | |
| cells | No | Initial cell data in A1 notation. Each cell is an object with: value (string|number), and optional format: { bold, italic, underline (booleans), color (text hex e.g. "#FF0000"), bgColor (background hex), fontSize (number), fontFamily (string), align ("left"|"center"|"right"), numberFormat ("currency"|"percentage"|"number"|"date"), decimals (number), currencySymbol (string) }. Example: {"A1": {"value": "Name", "format": {"bold": true, "bgColor": "#1B3A5C", "color": "#FFFFFF"}}} | |
| title | Yes | Workbook title | |
| columns | No | Number of columns (default 26) | |
| tab_name | No | Name of the first tab (default "Sheet 1") | |
| server_id | No | Server ID | |
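The cells parameter is the densest part of this schema; here is a sketch of a payload exercising the documented cell format, with illustrative values only:

```python
# Hypothetical create_sheet arguments using the documented A1-notation cell format.
header = {"bold": True, "bgColor": "#1B3A5C", "color": "#FFFFFF"}

args = {
    "title": "Expense Tracker",
    "tab_name": "January",  # defaults to "Sheet 1" if omitted
    "columns": 4,           # defaults to 26
    "rows": 50,             # defaults to 100
    "cells": {
        "A1": {"value": "Item", "format": header},
        "B1": {"value": "Cost", "format": header},
        "A2": {"value": "Hosting"},
        "B2": {
            "value": 42.5,
            "format": {"numberFormat": "currency", "decimals": 2, "currencySymbol": "$"},
        },
    },
}
```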
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden for behavioral disclosure. While 'Create' implies a write/mutation operation, the description doesn't address important behavioral aspects like permissions required, whether creation is reversible, rate limits, or what happens on success/failure. It mentions 'with an initial tab' which adds some context, but overall lacks critical behavioral information for a creation tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is perfectly concise with two sentences that each earn their place. The first sentence states the core functionality, and the second provides usage guidance. No wasted words or redundant information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a creation tool with no annotations and no output schema, the description provides adequate but incomplete context. It covers the basic purpose and usage scenarios well, but lacks information about behavioral traits, error conditions, and what the tool returns. Given the complexity (6 parameters including nested objects) and absence of structured safety/behavior annotations, the description should do more to compensate.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents all 6 parameters thoroughly. The description doesn't add any parameter-specific information beyond what's in the schema. The baseline score of 3 is appropriate when the schema does the heavy lifting for parameter documentation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool creates a new spreadsheet with an initial tab, using specific verbs ('create') and resources ('spreadsheet', 'sheet', 'workbook'). It distinguishes from some siblings like 'create_document' by specifying spreadsheet creation, but doesn't explicitly differentiate from 'create_sheet_tab' which creates tabs within existing sheets.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit usage guidance with 'Use this when asked to create a spreadsheet, table, workbook, tracker, or organize data in rows and columns.' This gives clear context for when to invoke the tool. However, it doesn't mention when NOT to use it or explicitly name alternatives like 'create_document' for non-spreadsheet documents.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
create_sheet_tab (C)
Add a new tab to a spreadsheet (sheet, excel, workbook).
| Name | Required | Description | Default |
|---|---|---|---|
| name | No | Tab name | |
| rows | No | Number of rows | |
| cells | No | Initial cells in A1 notation. Each cell: { value, format?: { bold, italic, underline, color (text hex), bgColor (background hex), fontSize, fontFamily, align, numberFormat, decimals, currencySymbol } } | |
| columns | No | Number of columns | |
| sheet_id | Yes | Sheet (workbook) ID | |
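The cell format mirrors create_sheet's; a sketch of the arguments, with a placeholder workbook ID:

```python
# Hypothetical create_sheet_tab arguments; "sheet-abc" is a placeholder ID.
args = {
    "sheet_id": "sheet-abc",
    "name": "February",
    # Optional seed data in A1 notation, same cell shape as create_sheet.
    "cells": {"A1": {"value": "Item", "format": {"bold": True}}},
}
```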
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full burden. It states the tool adds a tab but doesn't disclose behavioral traits like required permissions, whether it's idempotent, error handling, or what happens if a tab with the same name exists. 'Add' implies mutation, but details are missing.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence, front-loaded with the core action, zero waste. It efficiently conveys the purpose without unnecessary details.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a mutation tool with no annotations and no output schema, the description is incomplete. It lacks information on return values, error conditions, side effects, and integration with sibling tools. Given the complexity (5 parameters including nested objects), more context is needed.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema fully documents all 5 parameters. The description adds no parameter-specific information beyond implying 'sheet_id' is needed. Baseline 3 is appropriate when the schema handles the semantics, and there are no gaps here for the description to compensate for.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Add a new tab') and resource ('to a spreadsheet'), specifying it applies to sheets, Excel, or workbooks. It distinguishes from siblings like 'create_sheet' (creates entire spreadsheet) and 'append_rows' (adds data to existing tab), but could be more explicit about differentiation from 'update_sheet_tab'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives like 'create_sheet' (for new spreadsheets) or 'update_sheet_tab' (for modifying existing tabs). The description implies usage for adding tabs but lacks explicit context, prerequisites, or exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
create_slide_presentation (B)
Create a new slide presentation (slides, powerpoint, deck, keynote). Use this when asked to create a presentation, slide deck, or slideshow. Each slide has "html" content and optional "background" and "notes".
| Name | Required | Description | Default |
|---|---|---|---|
| data | No | Array of slide objects | |
| theme | No | Theme name (default "default") | |
| title | Yes | Presentation title | |
| server_id | No | Server ID | |
| aspectRatio | No | Aspect ratio (default "16:9") | |
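A sketch of the arguments, following the slide shape the description gives ("html" content plus optional "background" and "notes"); all values are illustrative:

```python
# Hypothetical create_slide_presentation arguments.
args = {
    "title": "Quarterly Review",
    "aspectRatio": "16:9",  # the documented default, shown explicitly here
    "theme": "default",
    "data": [
        {"html": "<h1>Quarterly Review</h1>", "notes": "Introduce the agenda."},
        {"html": "<h2>Revenue</h2>", "background": "#1B3A5C"},
    ],
}
```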
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full burden. It states this is a creation tool but doesn't disclose behavioral traits like authentication requirements, rate limits, whether the operation is idempotent, what happens on failure, or what the output looks like. The description mentions slide structure but doesn't explain creation behavior beyond the basic action.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately concise with two sentences. The first sentence states the purpose, and the second provides usage guidance and some parameter context. There's no wasted text, and information is front-loaded with the core purpose stated first.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a creation tool with 5 parameters, 100% schema coverage, and no output schema, the description provides adequate purpose and usage context but lacks behavioral transparency. Without annotations, the description should ideally cover more about what happens during creation, error conditions, or output expectations. The current description is minimally complete but has clear gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents all 5 parameters thoroughly. The description adds minimal parameter semantics by mentioning 'Each slide has "html" content and optional "background" and "notes"' which partially covers the 'data' parameter structure. This provides some value beyond the schema but doesn't significantly enhance understanding.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Create a new slide presentation' with specific resources mentioned (slides, powerpoint, deck, keynote). It distinguishes from siblings by focusing on creation rather than operations like update, delete, or get. However, it doesn't explicitly differentiate from 'create_document' which might be a similar creation operation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit usage guidance: 'Use this when asked to create a presentation, slide deck, or slideshow.' This gives clear context for when to invoke the tool. However, it doesn't mention when NOT to use it or provide alternatives among sibling tools like 'create_document' or 'create_sheet'.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
delete_document (C)
Permanently delete a document (doc, word, note) by ID.
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | Document ID | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It states 'permanently delete' which implies destructive and irreversible action, but doesn't cover permissions needed, error conditions, or what happens to linked resources. This is a significant gap for a destructive tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence with zero wasted words. It's appropriately sized and front-loaded with the essential information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a destructive deletion tool with no annotations and no output schema, the description is insufficient. It doesn't explain what 'permanently' entails, whether deletion can be undone, what the response looks like, or potential side effects on related data.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents the single 'id' parameter. The description adds no additional parameter context beyond what's in the schema, meeting the baseline for high coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('permanently delete') and resource ('document (doc, word, note) by ID'), making the purpose unambiguous. It doesn't explicitly differentiate from sibling deletion tools like delete_sheet or delete_slide_presentation, but the resource type is specified.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives. The description doesn't mention prerequisites, recovery options, or when to choose deletion over other operations like archiving or updating.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
delete_sheet (Grade: A)
Permanently delete a spreadsheet (sheet, excel, workbook) and all its tabs.
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | Sheet (workbook) ID | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It usefully clarifies that deletion is 'permanent' and affects 'all its tabs', which are important behavioral traits. However, it doesn't mention authentication requirements, rate limits, error conditions, or what happens if the sheet doesn't exist.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence with zero wasted words. It's front-loaded with the most critical information ('Permanently delete') and uses parentheses effectively to clarify terminology. Every word earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a destructive operation with no annotations and no output schema, the description does the minimum viable job. It clearly states the action and scope but lacks important context about permissions, confirmation requirements, error handling, and what (if anything) is returned. The permanence warning is helpful but insufficient for full completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents the single 'id' parameter. The description doesn't add any parameter-specific information beyond what's in the schema (like format examples or where to find the ID). The baseline of 3 is appropriate when the schema provides complete parameter documentation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Permanently delete') and resource ('a spreadsheet (sheet, excel, workbook) and all its tabs'), distinguishing it from sibling tools like delete_sheet_tab (which only deletes a tab) and delete_document (which deletes a different resource type). The inclusion of 'permanently' adds important context about the operation's irreversibility.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance is provided about when to use this tool versus alternatives like delete_sheet_tab (for deleting individual tabs) or delete_document (for deleting documents). The description doesn't mention prerequisites, permissions required, or recovery options after deletion.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
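Since the description advertises no undo or recovery path, a client can compensate with its own confirmation gate before issuing any of the permanent deletions. A sketch of that client-side pattern — the guard and its exception are assumptions, not part of the server:

```python
class ConfirmationRequired(Exception):
    """Raised when a destructive call is attempted without explicit confirmation."""

# The server's descriptions mark these deletions as permanent.
DESTRUCTIVE_TOOLS = {"delete_document", "delete_sheet", "delete_slide_presentation"}

def guarded_arguments(tool: str, arguments: dict, confirmed: bool = False) -> dict:
    """Pass arguments through, but refuse permanent deletions until the
    caller explicitly confirms, since no recovery option is documented."""
    if tool in DESTRUCTIVE_TOOLS and not confirmed:
        raise ConfirmationRequired(f"{tool} is permanent; set confirmed=True to proceed")
    return arguments
```

Read-only tools pass straight through; only the three permanent deletions require the extra flag.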
delete_sheet_tab (Grade: A)
Delete a tab from a spreadsheet (sheet, excel, workbook). Cannot delete the last remaining tab.
| Name | Required | Description | Default |
|---|---|---|---|
| tab_id | Yes | Tab ID | |
| sheet_id | Yes | Sheet (workbook) ID | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses the critical constraint about not deleting the last tab, which is valuable behavioral context. However, it lacks details on permissions needed, whether the deletion is reversible, error handling, or what happens to the sheet structure after deletion.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two concise sentences with zero waste. The first sentence states the purpose, and the second adds crucial behavioral context. It's front-loaded and efficiently structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a destructive tool with no annotations and no output schema, the description is minimally complete. It covers the core purpose and a key constraint, but lacks details on side effects, success/error responses, or prerequisites. Given the complexity of a deletion operation, more context would be beneficial.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents both parameters (sheet_id and tab_id). The description doesn't add any parameter-specific meaning beyond what the schema provides, such as format examples or relationships between parameters. The baseline of 3 is appropriate given the high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Delete a tab') and resource ('from a spreadsheet'), distinguishing it from sibling tools like delete_sheet (which deletes entire sheets) or delete_document (which deletes documents). It precisely defines the operation's scope.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context by stating 'Cannot delete the last remaining tab,' which implicitly guides when NOT to use this tool. However, it doesn't explicitly mention alternatives (e.g., using update_sheet_tab to rename instead) or compare it to other deletion tools like delete_sheet.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
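The last-tab constraint can be mirrored on the client by inspecting a `get_sheet` result before calling `delete_sheet_tab`. A sketch under an assumed response shape — the server documents no output schema, so the `'tabs'` list and per-tab `'id'` field are guesses:

```python
def can_delete_tab(sheet: dict, tab_id: str) -> bool:
    """Client-side check mirroring the server rule that the last remaining
    tab cannot be deleted. Assumes a get_sheet response carries a 'tabs'
    list whose items have an 'id' field (shape undocumented)."""
    tabs = sheet.get("tabs", [])
    ids = [t.get("id") for t in tabs]
    return tab_id in ids and len(tabs) > 1
```

Checking first avoids spending a doomed `delete_sheet_tab` call on a sheet with one tab.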
delete_slide_presentation (Grade: C)
Permanently delete a slide presentation (slides, powerpoint, deck, keynote).
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | Presentation ID | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries the full burden of behavioral disclosure. It states 'Permanently delete', which implies a destructive, irreversible mutation, but doesn't address permissions, confirmation steps, error handling (e.g., if ID is invalid), or what happens on success (e.g., no return value). For a destructive tool, this leaves critical gaps.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence with zero waste. It front-loads the key action ('Permanently delete') and resource, and the parenthetical synonyms add clarity without verbosity. Every word earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's destructive nature, no annotations, and no output schema, the description is incomplete. It doesn't explain what 'permanently' entails (e.g., no recovery), authentication needs, potential side effects, or what to expect after deletion. For a high-stakes operation, this lacks necessary context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with the single parameter 'id' documented as 'Presentation ID'. The description adds no additional parameter semantics beyond implying the ID refers to a slide presentation. This meets the baseline of 3 since the schema adequately covers the parameter.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Permanently delete') and resource ('a slide presentation'), with helpful synonyms in parentheses. It distinguishes from siblings like 'delete_document' or 'delete_sheet' by specifying slide presentations. However, it doesn't explicitly contrast with 'update_slide_presentation' or 'get_slide_presentation' for full sibling differentiation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites (e.g., needing the presentation ID), exclusions (e.g., not for documents or sheets), or comparisons to siblings like 'delete_document' or 'update_slide_presentation'. The agent must infer usage from context alone.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_document (Grade: B)
Get a document (doc, word, note, page) by ID, including its full content (title, HTML body, metadata).
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | Document ID | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It mentions the tool retrieves content but doesn't disclose behavioral traits like error handling (e.g., if ID is invalid), authentication needs, rate limits, or whether it's read-only. This is a significant gap for a tool with no annotation coverage.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the core action ('Get a document by ID') and adds necessary detail ('including its full content'). There is zero waste, making it highly concise and well-structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no annotations and no output schema, the description is incomplete. It doesn't explain what 'full content' entails beyond listing title, HTML body, and metadata, nor does it cover return format, error cases, or other contextual details needed for reliable use by an agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema description coverage is 100%, with the single parameter 'id' documented as 'Document ID'. The description adds that it's used to get a document by ID, which aligns with the schema but doesn't provide additional semantic context (e.g., format examples or where to find IDs). Baseline 3 is appropriate as the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb ('Get') and resource ('a document'), specifying it retrieves by ID and includes full content. It distinguishes from siblings like list_documents (which lists) and update_document (which modifies), but doesn't explicitly differentiate from other get_* tools like get_sheet, which target different resource types.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage when you need a specific document's full content by ID, but doesn't explicitly state when to use this vs. alternatives like list_documents for browsing or update_document for editing. No exclusions or prerequisites are mentioned, leaving some ambiguity for the agent.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
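Since the body comes back as HTML, an agent usually needs to flatten it before reasoning over it. A sketch using the standard-library parser — the response field names (`'title'`, `'body'`) are assumptions, as the server publishes no output schema:

```python
from html.parser import HTMLParser

class _TextExtractor(HTMLParser):
    """Collect the text nodes of an HTML fragment."""
    def __init__(self):
        super().__init__()
        self.chunks = []

    def handle_data(self, data):
        self.chunks.append(data)

def document_text(doc: dict) -> str:
    """Flatten the HTML body of a get_document response to plain text.
    The 'body' field name is assumed; the response shape is undocumented."""
    extractor = _TextExtractor()
    extractor.feed(doc.get("body", ""))
    return " ".join(part.strip() for part in extractor.chunks if part.strip())
```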
get_sheet (Grade: B)
Get a spreadsheet (sheet, excel, workbook) by ID, including all tabs and their cell data in A1 notation.
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | Sheet (workbook) ID | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It mentions the tool retrieves data but does not disclose behavioral traits such as read-only nature, authentication requirements, rate limits, error handling, or data format details beyond 'A1 notation'. This leaves significant gaps for a tool that fetches potentially large datasets.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the purpose and includes key details without redundancy. Every word earns its place, making it easy for an agent to parse quickly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity of fetching spreadsheet data with multiple tabs and cells, and the absence of annotations and output schema, the description is insufficient. It lacks details on return structure, pagination, permissions, or error cases, which are critical for proper tool invocation in a real-world context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The description adds minimal meaning beyond the input schema, which has 100% coverage. It clarifies that 'id' refers to a 'Sheet (workbook) ID', but the schema already describes it as 'Sheet (workbook) ID'. No additional context like ID format or examples is provided, so it meets the baseline for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb ('Get') and resource ('spreadsheet'), specifies the scope ('by ID'), and details what is included ('all tabs and their cell data in A1 notation'). It distinguishes from siblings like 'get_sheet_tab' (which likely gets a single tab) and 'list_sheets' (which lists multiple sheets).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage when needing a specific spreadsheet by ID with full tab data, but does not explicitly state when to use this tool versus alternatives like 'get_sheet_tab' or 'list_sheets'. No exclusions or prerequisites are mentioned, leaving some ambiguity for the agent.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
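Cell data keyed in A1 notation is easy to work with once converted to numeric indices. A small, self-contained converter (standard A1 semantics, independent of this server's response shape):

```python
import re

def a1_to_rowcol(ref: str) -> tuple[int, int]:
    """Convert an A1-notation cell reference, as used by get_sheet and
    get_sheet_tab, to zero-based (row, col) indices, e.g. 'B3' -> (2, 1)."""
    m = re.fullmatch(r"([A-Z]+)(\d+)", ref.upper())
    if not m:
        raise ValueError(f"not an A1 reference: {ref!r}")
    letters, digits = m.groups()
    col = 0
    for ch in letters:  # base-26 with A=1, so 'AA' follows 'Z'
        col = col * 26 + (ord(ch) - ord("A") + 1)
    return int(digits) - 1, col - 1
```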
get_sheet_tab (Grade: C)
Get a single tab from a spreadsheet (sheet, excel, workbook), including all cell data in A1 notation.
| Name | Required | Description | Default |
|---|---|---|---|
| tab_id | Yes | Tab ID | |
| sheet_id | Yes | Sheet (workbook) ID | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full burden. It mentions data format ('A1 notation') but omits critical behavioral details: whether this is a read-only operation, if it requires specific permissions, potential rate limits, error handling, or what happens if IDs are invalid. For a data retrieval tool with zero annotation coverage, this leaves significant gaps.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the core purpose. It avoids redundancy and wastes no words, though it could be slightly more structured by separating usage context from data format details.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no annotations and no output schema, the description is incomplete. It covers the basic action and data format but misses behavioral aspects (e.g., safety, permissions) and output details (e.g., structure of returned cell data). For a tool retrieving tabular data, more context is needed to guide effective use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with both parameters ('sheet_id', 'tab_id') documented in the schema. The description adds no parameter-specific details beyond what the schema provides (e.g., format examples or relationships). Baseline 3 is appropriate as the schema handles parameter documentation adequately.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Get') and resource ('a single tab from a spreadsheet'), including the scope of data returned ('all cell data in A1 notation'). It distinguishes from siblings like 'get_sheet' (which likely returns sheet metadata) by specifying tab-level retrieval with cell data, though it doesn't explicitly contrast with them.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives. It doesn't mention prerequisites (e.g., needing valid IDs), exclusions, or comparisons to siblings like 'get_sheet' (for sheet-level info) or 'list_sheets' (for listing sheets). The description implies usage for retrieving tab data but lacks contextual direction.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_slide_presentation (Grade: B)
Get a slide presentation (slides, powerpoint, deck, keynote) by ID, including all slide data.
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | Presentation ID | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full burden. It states the tool retrieves data ('Get'), implying a read-only operation, but doesn't disclose behavioral traits like authentication needs, rate limits, error conditions, or what 'all slide data' includes (e.g., metadata, content, formatting). This leaves gaps in understanding how the tool behaves in practice.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the core action ('Get a slide presentation') and adds clarifying details ('by ID, including all slide data') without redundancy. Every word earns its place, making it easy to parse quickly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given one parameter with full schema coverage and no output schema, the description is minimally adequate. It clarifies the resource type and scope ('all slide data'), but lacks details on return values, error handling, or operational context (e.g., permissions). For a simple retrieval tool, this is passable but leaves room for improvement in guiding the agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with the single parameter 'id' documented as 'Presentation ID'. The description adds no additional meaning beyond this, such as format examples (e.g., numeric vs. string ID) or where to obtain the ID. Since the schema fully describes the parameter, the baseline score of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb ('Get') and resource ('slide presentation'), and specifies what is retrieved ('including all slide data'). It distinguishes from siblings like list_slides (which lists) and get_document/get_sheet (different resource types). However, it doesn't explicitly differentiate from get_sheet_tab or get_document in terms of resource specificity beyond synonyms like 'slides, powerpoint, deck, keynote'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites (e.g., needing an ID), contrast with list_slides for browsing, or specify use cases like editing versus viewing. Without such context, the agent must infer usage from the tool name alone.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_documents (Grade: A)
List all documents (docs, word, notes, pages). Use this when asked to list, find, or search documents, notes, drafts, or written content. Supports pagination, sorting, and filtering by workspace.
| Name | Required | Description | Default |
|---|---|---|---|
| sort | No | Sort field | |
| limit | No | Max results (1-100, default 50) | |
| order | No | Sort order | |
| offset | No | Pagination offset (default 0) | |
| server_id | No | Filter by server ID, or "personal" | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It mentions 'Supports pagination, sorting, and filtering by workspace,' which adds useful behavioral context beyond basic listing. However, it doesn't disclose important details like rate limits, authentication requirements, or whether this is a read-only operation (though implied by 'list').
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately concise with two sentences. The first states the purpose, the second provides usage guidance and behavioral context. It's front-loaded with the core functionality. Could be slightly more structured but wastes no words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a list tool with 5 parameters, 100% schema coverage, and no output schema, the description is adequate but has gaps. It covers purpose, usage, and some behavioral traits, but doesn't explain return format, error conditions, or provide examples. With no annotations, it should do more to compensate for the missing output schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already fully documents all 5 parameters. The description mentions 'filtering by workspace' which relates to the server_id parameter, but doesn't add significant semantic value beyond what's in the schema. Baseline 3 is appropriate when schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'List all documents (docs, word, notes, pages).' It specifies the verb ('List') and resource ('documents'), though it could be more specific about distinguishing from sibling tools like list_sheets or list_slides. The mention of document types adds some differentiation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear usage context: 'Use this when asked to list, find, or search documents, notes, drafts, or written content.' This gives explicit when-to-use guidance. However, it doesn't mention when NOT to use it or explicitly name alternatives like list_sheets or list_slides for different content types.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
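The documented `limit` (1-100, default 50) and `offset` (default 0) parameters imply a standard pagination loop. A sketch — `call_tool` is an assumed client function that issues the MCP call and returns the page as a list of document dicts:

```python
def list_all_documents(call_tool, server_id=None, page_size=50):
    """Page through list_documents using its limit/offset parameters.
    call_tool(name, arguments) is an assumed client helper; a short page
    signals the end of the collection."""
    results, offset = [], 0
    while True:
        args = {"limit": page_size, "offset": offset}
        if server_id is not None:
            args["server_id"] = server_id  # workspace filter, or "personal"
        page = call_tool("list_documents", args)
        results.extend(page)
        if len(page) < page_size:
            break
        offset += page_size
    return results
```

The same loop works for `list_sheets`, which exposes the same pagination parameters.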
list_servers (Grade: B)
List all workspaces (servers) you have access to.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It states it lists workspaces/servers the user has access to, implying a read-only operation, but doesn't disclose behavioral traits like pagination, rate limits, authentication needs, or what 'access' entails. This is inadequate for a tool with zero annotation coverage.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, clear sentence with no wasted words. It's front-loaded with the core action and resource, making it efficient and easy to parse.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no annotations, no output schema, and the tool's role in a set of sibling tools, the description is incomplete. It lacks context on output format, error handling, or how it fits with other tools, leaving significant gaps for an AI agent to understand its full use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema declares no parameters, so there is nothing for the schema or description to document. The description appropriately adds no parameter details, earning a baseline score of 4 for this context.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('List') and resource ('workspaces/servers you have access to'), making the purpose understandable. However, it doesn't differentiate from sibling tools like list_documents, list_sheets, or list_slides, which are also list operations for different resources.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives. The description doesn't mention prerequisites, context for usage, or comparisons with other list tools in the sibling set, leaving the agent to infer based on resource type alone.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
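A common pairing is calling `list_servers` first, then feeding a workspace's ID into `list_documents`' `server_id` filter. A resolution sketch — the response field names (`'id'`, `'name'`) are assumptions, since no output schema is published; the special `"personal"` value accepted by the filter passes through unchanged:

```python
def resolve_server_id(servers: list[dict], name: str) -> str:
    """Find a workspace ID by name in a list_servers response.
    Field names 'id' and 'name' are assumed (shape undocumented)."""
    if name == "personal":
        return "personal"  # documented special value for the server_id filter
    for server in servers:
        if server.get("name") == name:
            return server["id"]
    raise KeyError(f"no workspace named {name!r}")
```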
list_sheets (Grade: A)
List all spreadsheets (sheets, excel, workbooks) with their tab metadata (no cell data). Use this when asked to list, find, or search spreadsheets, workbooks, tables, or tabular data.
| Name | Required | Description | Default |
|---|---|---|---|
| sort | No | Sort field | |
| limit | No | Max results (1-100, default 50) | |
| order | No | Sort order | |
| offset | No | Pagination offset | |
| server_id | No | Filter by server ID | |
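The parameters above map directly onto an MCP tools/call request. A minimal sketch of the JSON-RPC payload an agent would send, assuming the standard MCP wire format (the server_id value is hypothetical):

```python
import json

# Hypothetical MCP "tools/call" request for list_sheets. Argument names
# (limit, offset, server_id) come from the parameter table above; the
# server_id value is made up for illustration.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "list_sheets",
        "arguments": {
            "limit": 50,             # max results (1-100, default 50)
            "offset": 0,             # pagination offset
            "server_id": "srv_123",  # filter by server ID (hypothetical)
        },
    },
}

payload = json.dumps(request)
```

Omitting limit falls back to the documented default of 50; paging through results means incrementing offset by limit until fewer than limit items come back.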
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full burden. It discloses that the tool lists metadata only (no cell data), which is a key behavioral trait. However, it lacks details on permissions, rate limits, or error handling. The description adds some context but doesn't fully compensate for the absence of annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, zero waste. The first sentence defines purpose and scope, the second provides usage guidelines. It's front-loaded with essential information and appropriately sized for the tool's complexity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no annotations and no output schema, the description is reasonably complete for a list operation. It clarifies the scope (metadata only) and usage context. It could improve by mentioning pagination behavior (implied by offset/limit) or the typical response format, though the parameters cover the key aspects.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents all parameters thoroughly. The description doesn't add any parameter-specific information beyond what's in the schema (e.g., it doesn't explain server_id filtering or default values). Baseline 3 is appropriate when the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('List all spreadsheets') and resource ('spreadsheets, sheets, excel, workbooks') with precise scope ('with their tab metadata, no cell data'). It distinguishes from siblings like get_sheet (which retrieves cell data) and list_documents (which lists documents, not spreadsheets).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states when to use this tool: 'when asked to list, find, or search spreadsheets, workbooks, tables, or tabular data.' It provides clear usage context without naming alternatives; since sibling tools such as get_sheet handle detailed data retrieval, the guidance is sufficient for selection.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_slides (Grade: B)
List all slide presentations (slides, powerpoint, deck, keynote). Use this when asked to list, find, or search presentations, decks, or slideshows.
| Name | Required | Description | Default |
|---|---|---|---|
| sort | No | Sort field | |
| limit | No | Max results (1-100, default 50) | |
| order | No | Sort order | |
| offset | No | Pagination offset | |
| server_id | No | Filter by server ID | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full burden for behavioral disclosure. The description only states what the tool does ('List all slide presentations') without mentioning any behavioral traits like pagination behavior, rate limits, authentication requirements, error conditions, or what 'all' means in practice (e.g., across all servers or just one). This leaves significant gaps for an agent to understand how to properly use this tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise with just two sentences that efficiently communicate purpose and usage guidelines. Every word earns its place with no redundancy or unnecessary elaboration. The structure is front-loaded with the core purpose followed by usage context.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a list/read operation with 5 parameters and no annotations or output schema, the description is insufficient. It doesn't explain what information is returned, how results are structured, whether there are limitations on what 'all' means, or any behavioral aspects like pagination. The agent would need to guess about the tool's behavior and output format.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already fully documents all 5 parameters with descriptions, constraints, and enums. The description adds no parameter-specific information beyond what's in the schema, meeting the baseline expectation when schema coverage is complete.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'List all slide presentations' with specific resource types enumerated (slides, powerpoint, deck, keynote). It uses a specific verb ('List') and identifies the resource, but doesn't explicitly distinguish it from similar sibling tools like 'list_documents' or 'list_sheets'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear usage context: 'Use this when asked to list, find, or search presentations, decks, or slideshows.' This gives explicit guidance on when to invoke the tool based on user requests. However, it doesn't specify when NOT to use it or mention alternatives among the sibling tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
update_document (Grade: C)
Update an existing document (doc, word, note). Supply a new title and/or HTML content to replace the body.
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | Document ID | |
| title | No | New title | |
| content | No | New HTML content (replaces entire document body) | |
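Because content replaces the entire document body, a partial edit must re-send the full HTML. A minimal sketch of building the call arguments, assuming the parameter names from the table (the helper function and document ID are hypothetical):

```python
# Build arguments for update_document. Only the fields being changed are
# included; "content", when present, replaces the entire document body.
def build_update_document_args(doc_id, title=None, content=None):
    args = {"id": doc_id}
    if title is not None:
        args["title"] = title
    if content is not None:
        args["content"] = content  # full-body replacement, not a patch
    return args

# Retitle only: the body is left untouched because "content" is omitted.
args = build_update_document_args("doc_42", title="Q3 Report (final)")
```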
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden but offers minimal behavioral context. It mentions that content 'replaces entire document body' which is useful, but doesn't disclose permission requirements, whether the operation is reversible, rate limits, or what happens to unspecified fields. For a mutation tool with zero annotation coverage, this leaves significant gaps.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured in two sentences with zero waste. It front-loads the core purpose and follows with parameter guidance, though it could be slightly more comprehensive given the lack of annotations.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a mutation tool with no annotations and no output schema, the description is incomplete. It doesn't explain what the tool returns, error conditions, or important behavioral aspects like whether partial updates are allowed. The context signals show 3 parameters with 1 required, but the description doesn't adequately address the mutation's implications.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents all parameters thoroughly. The description adds marginal value by clarifying that title and content are optional ('and/or') and that content replaces the entire body, but doesn't provide additional syntax or format details beyond what the schema provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'update' and resource 'existing document', specifying it works on doc/word/note formats. It distinguishes from siblings like 'create_document' by focusing on updates, but doesn't explicitly differentiate from other update tools like 'update_sheet' beyond the resource type.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives is provided. The description doesn't mention prerequisites (e.g., needing document ID), when not to use it, or how it differs from similar tools like 'append_to_document' or other update operations on different resource types.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
update_sheet (Grade: C)
Update spreadsheet (sheet, excel, workbook) workbook-level properties (currently only title).
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | Sheet (workbook) ID | |
| title | Yes | New title | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full burden. It states this is an update operation, implying mutation, but doesn't disclose behavioral traits like whether changes are reversible, what permissions are needed, if there are rate limits, or what happens on success/failure. The description adds minimal context beyond the basic action.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence with zero waste. It front-loads key information (update action, resource, scope, and current limitation) without unnecessary elaboration. Every word earns its place, making it easy to parse quickly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no annotations and no output schema, the description is incomplete for a mutation tool. It lacks details on behavioral aspects (e.g., permissions, side effects), error handling, and what the tool returns. While the purpose is clear, the overall context is insufficient for safe and effective use by an AI agent without additional assumptions.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with both parameters (id and title) clearly documented in the schema. The description adds marginal value by clarifying that 'title' is a workbook-level property and the only currently updatable one, but doesn't provide additional syntax, format, or constraints beyond what the schema already specifies. Baseline 3 is appropriate given high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Update'), resource ('spreadsheet (sheet, excel, workbook)'), and scope ('workbook-level properties'), with specific mention of the currently supported property ('title'). It distinguishes from siblings like update_sheet_tab (which modifies tabs rather than workbook properties) and update_document (which handles documents rather than spreadsheets). However, it doesn't explicitly contrast with all sibling update tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance is provided on when to use this tool versus alternatives. The description mentions 'currently only title' but doesn't specify when to choose this over other update tools (e.g., update_sheet_tab for tab-level changes) or prerequisites like required permissions. Usage is implied by the purpose but lacks explicit context or exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
update_sheet_tab (Grade: B)
Update a spreadsheet (sheet, excel, workbook) tab: merge cells, rename, change color, or resize the grid. Set a cell value to null to clear it.
| Name | Required | Description | Default |
|---|---|---|---|
| name | No | New tab name | |
| rows | No | New row count | |
| cells | No | Cells to update in A1 notation (null clears a cell). Each cell is an object with: value (string|number), and optional format: { bold, italic, underline (booleans), color (text hex e.g. "#FF0000"), bgColor (background hex), fontSize (number), fontFamily (string), align ("left"|"center"|"right"), numberFormat ("currency"|"percentage"|"number"|"date"), decimals (number), currencySymbol (string) }. Example: {"A1": {"value": "Revenue", "format": {"bold": true, "bgColor": "#1B3A5C", "color": "#FFFFFF"}}} | |
| color | No | Tab color | |
| tab_id | Yes | Tab ID | |
| columns | No | New column count | |
| sheet_id | Yes | Sheet (workbook) ID | |
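The cells parameter is the densest part of this schema. A minimal sketch of an arguments payload, reusing the A1-notation keys and format fields documented in the table (the sheet and tab IDs are hypothetical):

```python
# Cell keys use A1 notation; None (JSON null) clears a cell. The format
# fields (bold, bgColor, numberFormat, ...) come from the schema above.
arguments = {
    "sheet_id": "wkb_1",   # hypothetical workbook ID
    "tab_id": "tab_1",     # hypothetical tab ID
    "cells": {
        "A1": {"value": "Revenue",
               "format": {"bold": True, "bgColor": "#1B3A5C",
                          "color": "#FFFFFF"}},
        "B1": {"value": 1250.5,
               "format": {"numberFormat": "currency", "decimals": 2,
                          "currencySymbol": "$"}},
        "C9": None,        # null clears this cell
    },
}
```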
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries full burden but provides minimal behavioral insight. It mentions that setting a cell value to null clears it, which is useful, but lacks details on permissions, error handling, rate limits, or whether changes are reversible. For a mutation tool, this is inadequate.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two concise sentences with zero waste. The first enumerates key operations, and the second provides a critical behavioral note about clearing cells. Every word earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a mutation tool with 7 parameters, no annotations, and no output schema, the description is insufficient. It lacks information on required permissions, side effects, response format, error conditions, and how it differs from sibling tools like 'update_sheet'.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema fully documents all parameters. The description adds marginal value by clarifying that null clears a cell (implied in schema) and listing high-level operations (merge, rename, etc.), but doesn't elaborate on parameter interactions or usage nuances beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('update a spreadsheet tab') and enumerates specific operations (merge cells, rename, change color, resize grid, clear cells). It distinguishes from siblings like 'update_sheet' (which likely modifies the entire workbook) by focusing on tab-level operations.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this tool versus alternatives like 'update_sheet' or 'create_sheet_tab'. The description lists capabilities but doesn't specify prerequisites, constraints, or typical scenarios for invocation.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
update_slide_presentation (Grade: C)
Update a slide presentation (slides, powerpoint, deck, keynote): title, slide data, theme, or aspect ratio.
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | Presentation ID | |
| data | No | Full replacement array of slide objects | |
| theme | No | New theme | |
| title | No | New title | |
| aspectRatio | No | New aspect ratio | |
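Since data is described as a full replacement array, editing one slide means re-sending every slide. A minimal sketch of that fetch-modify-resend pattern (the slide object shape and IDs are hypothetical; only the top-level argument names come from the table):

```python
# "data" replaces all slides, so unchanged slides must be included too.
# The slide dicts here are illustrative only.
existing_slides = [
    {"title": "Intro"},
    {"title": "Results"},
]

edited = [dict(slide) for slide in existing_slides]  # copy every slide
edited[1]["title"] = "Results (updated)"             # change just one

arguments = {
    "id": "prs_7",        # hypothetical presentation ID
    "data": edited,       # full replacement array
}
```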
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It states 'Update', implying mutation, but lacks details on permissions, whether changes are reversible, rate limits, or response behavior. The schema notes that 'data' is a 'Full replacement array of slide objects', but the description doesn't highlight this critical behavior, leaving gaps in understanding the tool's impact.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the purpose and lists key updatable fields. It avoids redundancy and wastes no words, though it could be slightly more structured by separating usage notes from the core purpose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no annotations and no output schema, the description is incomplete for a mutation tool with 5 parameters. It lacks behavioral details (e.g., side effects, error handling), usage context, and output expectations. While the schema covers parameters well, the description doesn't compensate for the missing annotation and output information, leaving significant gaps for an AI agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents all parameters (id, data, theme, title, aspectRatio). The description adds minimal value by listing updatable fields, but doesn't provide additional context like format examples or constraints beyond what's in the schema. Baseline 3 is appropriate as the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Update') and resource ('slide presentation'), and lists specific updatable fields (title, slide data, theme, aspect ratio). It distinguishes from sibling tools like 'create_slide_presentation' by focusing on modification rather than creation, though it doesn't explicitly contrast with 'update_document' or 'update_sheet' which might handle different resource types.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives. For example, it doesn't mention prerequisites (e.g., needing a presentation ID), compare with 'append_slides' for adding slides without full replacement, or specify when updates are appropriate versus creating new presentations. The description only lists what can be updated, not the context for usage.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
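A minimal sketch of publishing the claim file described above, assuming a local static web root (the webroot path is illustrative):

```python
import json
import pathlib

# Write /.well-known/glama.json under a local web root so it is served
# at https://<your-domain>/.well-known/glama.json.
webroot = pathlib.Path("webroot")  # illustrative path to your site root
well_known = webroot / ".well-known"
well_known.mkdir(parents=True, exist_ok=True)

claim = {
    "$schema": "https://glama.ai/mcp/schemas/connector.json",
    "maintainers": [{"email": "your-email@example.com"}],
}
(well_known / "glama.json").write_text(json.dumps(claim, indent=2))
```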
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail — every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control — enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management — store and rotate API keys and OAuth tokens in one place
Change alerts — get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption — public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics — see which tools are being used most, helping you prioritize development and documentation
Direct user feedback — users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.