Google Sheets MCP Pack

Server Details
| Field | Value |
|---|---|
| Status | Healthy |
| Last Tested | |
| Transport | Streamable HTTP |
| URL | |
| Repository | pipeworx-io/mcp-google_sheets |
| GitHub Stars | 0 |
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.8/5 across 10 of 10 tools scored.
Tools have mostly distinct purposes, but 'ask_pipeworx' overlaps with 'discover_tools' and 'sheets_read' to an unclear degree, since it claims to pick the right tool and return the result itself. The memory tools (remember, recall, forget) are unrelated to spreadsheets, which may cause confusion.
Sheets tools follow a consistent 'sheets_verb' pattern, and memory tools use simple verbs (remember, recall, forget). However, 'ask_pipeworx' and 'discover_tools' break the pattern, each following its own naming convention.
10 tools is appropriate for the domain, though the mix of sheets tools (5) and memory tools (3) plus two general-purpose tools feels slightly heavy. Each tool serves a purpose, so the count is still reasonable.
Sheets coverage includes create, read, append, write, and metadata — lacking update/delete for specific cells or sheets. Memory tools cover CRUD for memories. The 'ask_pipeworx' tool suggests a broader data query capability not covered by the other tools, leaving ambiguity.
Available Tools
10 tools

ask_pipeworx (Grade A)
Ask a question in plain English and get an answer from the best available data source. Pipeworx picks the right tool, fills the arguments, and returns the result. No need to browse tools or learn schemas — just describe what you need. Examples: "What is the US trade deficit with China?", "Look up adverse events for ozempic", "Get Apple's latest 10-K filing".
| Name | Required | Description | Default |
|---|---|---|---|
| question | Yes | Your question or request in natural language | |
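As a sketch, a call needs only the single required field; the question below is taken from the description's own examples.

```json
{
  "question": "What is the US trade deficit with China?"
}
```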
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must carry the burden. It discloses that the tool automatically selects the right tool and fills arguments, which is key behavioral context. However, it does not mention potential limitations or failure modes.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise at three sentences, front-loaded with the key action, and includes examples. A minor improvement would be more structured formatting.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (one parameter, no output schema), the description is complete. It explains what the tool does, how to use it, and provides examples. Sibling tools like sheets_* are distinct, so no additional context is needed.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with a single 'question' parameter described as 'Your question or request in natural language'. The description adds value by emphasizing plain English and providing examples, making the parameter's purpose even clearer.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool accepts natural language questions and returns answers from the best data source, which is specific and distinct from sibling tools like 'discover_tools' or the sheets tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides examples of appropriate usage ('What is the US trade deficit with China?') and implies it handles tool selection, but does not explicitly state when not to use it or name alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
discover_tools (Grade A)
Search the Pipeworx tool catalog by describing what you need. Returns the most relevant tools with names and descriptions. Call this FIRST when you have 500+ tools available and need to find the right ones for your task.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of tools to return (default 20, max 50) | |
| query | Yes | Natural language description of what you want to do (e.g., "analyze housing market trends", "look up FDA drug approvals", "find trade data between countries") | |
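For illustration, a minimal call might look like the following; the query string is borrowed from the schema's own examples, and the limit stays under the stated maximum of 50.

```json
{
  "query": "find trade data between countries",
  "limit": 10
}
```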
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description explains the behavior: it searches by natural language description and returns the most relevant tools with names and descriptions. Although no annotations are provided, the description compensates well by being clear about what it does and its purpose. It does not mention any destructive behavior or side effects, but as a search tool, none are expected.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise: three sentences, each providing essential information. The first sentence states the purpose, the second describes the output, and the third gives usage guidance. No extraneous words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (two parameters, no output schema), the description is complete. It explains what the tool does, when to use it, and how to formulate the query. There is no need for additional details like return format, as the description states it returns 'the most relevant tools with names and descriptions.'
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema already describes both parameters (query and limit) with good detail. The description reinforces the query parameter's usage with examples ('e.g., "analyze housing market trends"'), adding value beyond the schema. However, the schema coverage is 100%, so the baseline is 3; the examples only marginally improve it.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: searching the Pipeworx tool catalog by describing what you need. It uses specific verbs ('search', 'returns') and specifies the resource ('tool catalog'). It distinguishes itself from siblings by advising to call this FIRST when many tools are available.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly advises when to use the tool: 'Call this FIRST when you have 500+ tools available and need to find the right ones for your task.' This provides clear context and implicitly suggests not using other tools without first discovering them.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
forget (Grade A)
Delete a stored memory by key.
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | Memory key to delete | |
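A minimal sketch of a call, assuming a memory was previously stored under this key; the key 'subject_property' is borrowed from the remember tool's schema examples.

```json
{
  "key": "subject_property"
}
```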
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must disclose behavioral traits. It states 'Delete' but does not clarify if deletion is permanent, if it requires confirmation, or what happens if the key does not exist (error vs. silent success). There is no mention of authorization needs or side effects.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
A single, front-loaded sentence that conveys the essential purpose with no superfluous words. It earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simplicity (1 required param, no output schema, no nested objects), the description is adequate but could mention return value or behavior on missing key. It covers the basic operation but lacks completeness for error handling.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% coverage for its single parameter, with a clear description 'Memory key to delete'. The description adds no extra semantics beyond the schema, but the schema itself is sufficient. Baseline 3 is appropriate, with a slight bonus for single-parameter clarity.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description 'Delete a stored memory by key' clearly specifies the action (delete), the resource (stored memory), and the required identifier (key). It is distinct from sibling tools like 'remember' (store) and 'recall' (retrieve).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage when a memory needs to be removed, but provides no guidance on when not to use it, prerequisites (e.g., memory must exist), or alternatives among siblings. The sibling list includes 'recall' and 'remember', but no explicit comparison is made.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
recall (Grade A)
Retrieve a previously stored memory by key, or list all stored memories (omit key). Use this to retrieve context you saved earlier in the session or in previous sessions.
| Name | Required | Description | Default |
|---|---|---|---|
| key | No | Memory key to retrieve (omit to list all keys) | |
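A minimal sketch of the keyed mode; the key 'target_ticker' is borrowed from the remember tool's schema examples.

```json
{
  "key": "target_ticker"
}
```

Per the description, omitting the key entirely (sending an empty argument object) should instead list all stored memories.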
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It discloses that omitting the key lists all memories, which is important. However, it does not mention side effects, limitations, or whether this is a read-only operation. The description is adequate but not detailed.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, front-loaded with the core action, no wasted words. Every sentence adds distinct value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (1 optional parameter, no output schema, no annotations), the description is nearly complete. It could mention that keys are case-sensitive or what happens if a key doesn't exist, but for a retrieval tool this is sufficient.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents the 'key' parameter. The description adds value by explaining the behavior when key is omitted (list all). This goes beyond what the schema provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'retrieve' and the resource 'memory', with explicit mention of two modes: retrieve by key or list all. It distinguishes itself from sibling tools like 'remember' and 'forget' by focusing on retrieval.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explains when to use the tool ('to retrieve context you saved earlier') and when to omit the key ('to list all stored memories'). It does not explicitly state when not to use it, but the context is clear enough.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
remember (Grade A)
Store a key-value pair in your session memory. Use this to save intermediate findings, user preferences, or context across tool calls. Authenticated users get persistent memory; anonymous sessions last 24 hours.
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | Memory key (e.g., "subject_property", "target_ticker", "user_preference") | |
| value | Yes | Value to store (any text — findings, addresses, preferences, notes) | |
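A minimal sketch of a call; the key comes from the schema's own examples, while the value is a hypothetical placeholder.

```json
{
  "key": "target_ticker",
  "value": "AAPL"
}
```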
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Description goes beyond annotations (none provided) by explaining persistence behavior: authenticated users get persistent memory, anonymous sessions last 24 hours. It does not mention any destructive behavior or rate limits, but given the simple nature of the tool, this is sufficient.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three short sentences, each providing essential information: what it does, when to use, and persistence details. No wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has only 2 simple parameters, no output schema, and no annotations, the description is complete. It covers purpose, usage, and behavioral context (persistence). The only minor gap is not mentioning any potential size limits or overwrite behavior.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents both parameters well. The description adds usage context for the value field (any text) but does not add new semantic meaning beyond the schema. Baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states the tool stores a key-value pair in session memory, with a specific verb (store) and resource (session memory). It distinguishes from siblings like 'recall' and 'forget' by explicitly focusing on saving data.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Description explains when to use (save intermediate findings, user preferences, context across calls) and provides context on persistence (authenticated vs anonymous). However, it does not explicitly mention when not to use or alternatives like 'recall' or 'forget'.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
sheets_append (Grade B)
Append new rows to the end of a Google Sheet table. Specify sheet name and row data to add.
| Name | Required | Description | Default |
|---|---|---|---|
| range | Yes | A1 notation range to append after (e.g., "Sheet1!A1") | |
| values | Yes | Array of rows to append | |
| spreadsheet_id | Yes | Spreadsheet ID | |
| value_input_option | No | How to interpret input. USER_ENTERED (default) parses formulas/dates/numbers. RAW stores literal strings. | |
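A hedged example of arguments, assuming 'values' is a 2D array of rows as in the sheets_write schema; the spreadsheet ID and row contents are placeholders, and the range follows the schema's own 'Sheet1!A1' example.

```json
{
  "spreadsheet_id": "YOUR_SPREADSHEET_ID",
  "range": "Sheet1!A1",
  "values": [
    ["2024-06-01", "Office supplies", "42.50"]
  ],
  "value_input_option": "USER_ENTERED"
}
```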
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description bears full responsibility for behavioral disclosure. It states 'Append new rows to the end', which implies non-destructive behavior, but does not detail what happens if the range does not match existing table dimensions, or if rows contain formulas. Acceptable but minimal.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two short, front-loaded sentences that convey the core action and resource. It is concise and efficient, though it could benefit from a brief mention of behavior regarding empty rows or formula parsing.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the 4 parameters and no output schema, the description covers the basic purpose but lacks detail on return values or side effects. For a simple append tool, this is adequate but could be more informative.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so the parameters are already well documented. The description adds no extra meaning beyond what the schema provides; it could, for instance, hint that 'values' must be a 2D array or that a range such as 'Sheet1!A1' is used to locate the table. Baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Append new rows') and the resource (a Google Sheet table). It distinguishes itself from siblings like 'sheets_write' (which overwrites) and 'sheets_create' (which creates new spreadsheets), though it could be more explicit about the difference.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for adding rows to the end of a table, but provides no explicit guidance on when to use this vs 'sheets_write' or other sibling tools. There are no usage examples or caveats.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
sheets_create (Grade A)
Create a new Google Spreadsheet. Optionally set title and initial sheet names. Returns spreadsheet ID and sharing URL.
| Name | Required | Description | Default |
|---|---|---|---|
| title | Yes | Title for the new spreadsheet | |
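A minimal sketch of a call; the title is a hypothetical placeholder.

```json
{
  "title": "Q1 Budget"
}
```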
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It correctly indicates creation (mutating) behavior. However, it does not disclose details like authentication requirements, rate limits, or what happens if the title already exists. A score of 3 is appropriate as it states the basic behavioral trait but lacks depth.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is three short sentences that efficiently convey the tool's purpose, options, and return values. No unnecessary words, earning the highest score.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has only 1 parameter, no output schema, and no annotations, the description covers the essential action and names the return values (spreadsheet ID and sharing URL). However, it mentions optionally setting initial sheet names, which the schema does not expose, leaving a gap an agent could stumble over. A score of 3 indicates adequacy with gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% and the only parameter 'title' is described in the schema as 'Title for the new spreadsheet'. The description adds no further semantic context beyond what the schema provides. Baseline 3 is correct.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Create') and resource ('Google Spreadsheet'). It distinguishes from siblings like 'sheets_read' and 'sheets_write' by focusing on creation. However, it does not mention any scope or uniqueness, but the purpose is specific enough.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies creation use case but provides no guidance on when to use this vs alternatives like 'sheets_get_spreadsheet' or 'sheets_append'. No when-not-to-use or prerequisite information is given.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
sheets_get_spreadsheet (Grade B)
Explore a spreadsheet's structure. Returns title, sheet/tab names, and properties. Use before reading or writing data.
| Name | Required | Description | Default |
|---|---|---|---|
| spreadsheet_id | Yes | Spreadsheet ID | |
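A minimal sketch, with a placeholder spreadsheet ID.

```json
{
  "spreadsheet_id": "YOUR_SPREADSHEET_ID"
}
```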
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description indicates a read-only operation, which is appropriate given no annotations are present. It adds context about what metadata is returned (title, sheets/tabs, properties), but does not disclose side effects, authorization needs, or rate limits. With no annotations, the description carries full burden but provides only moderate detail.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is three short sentences, concise and to the point. It is front-loaded with the main purpose and lists the key properties returned. While concise, it could be slightly more descriptive without losing brevity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has one parameter and no output schema, the description adequately covers the tool's purpose. However, it lacks information about the return format (e.g., whether it returns a structured object), which could be useful for an agent. No context on limitations or edge cases is provided.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema has 100% coverage for the single parameter 'spreadsheet_id', so the schema already documents it. The description adds no additional meaning beyond what the schema provides. Baseline score of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool explores a spreadsheet's structure and returns its metadata: title, sheet/tab names, and properties. Combined with the 'get_spreadsheet' name, this is specific enough to distinguish it from siblings like sheets_read or sheets_append, though it doesn't explicitly differentiate it from other metadata tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for obtaining metadata before performing other operations, but provides no explicit guidance on when to use this vs alternatives like sheets_read. No context on prerequisites or exclusions is given.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
sheets_read (Grade B)
Read data from a Google Sheet range. Specify sheet name and range (e.g., 'A1:C10'). Returns rows as arrays of cell values.
| Name | Required | Description | Default |
|---|---|---|---|
| range | Yes | A1 notation range (e.g., "Sheet1!A1:D10") | |
| spreadsheet_id | Yes | Spreadsheet ID (from the URL) | |
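A minimal sketch using the schema's own range example; the spreadsheet ID is a placeholder, taken from the sheet's URL as the schema notes.

```json
{
  "spreadsheet_id": "YOUR_SPREADSHEET_ID",
  "range": "Sheet1!A1:D10"
}
```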
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description bears full burden. It discloses that the tool reads data and returns rows as arrays, but does not mention read-only safety, rate limits, or behavior for empty ranges or errors.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise, with three sentences that directly state purpose, input format, and output format. No unnecessary words, though it could be slightly more front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given simple parameters and no output schema, the description is adequate but incomplete. It does not explain return format in detail (e.g., how rows are structured) or error handling.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so baseline is 3. The description does not add meaning beyond the schema, as it only restates the purpose. No parameter-specific details are given.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'read' and resource 'Google Sheets range', and specifies that it returns rows as arrays. It is distinct from sibling tools like sheets_append, sheets_write, etc.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies reading data but does not provide explicit guidance on when to use this tool versus alternatives like sheets_get_spreadsheet. No exclusions or prerequisites are mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
sheets_write (Grade B)
Write data to a Google Sheet range, overwriting existing values. Specify sheet name, range (e.g., 'A1:C10'), and row data.
| Name | Required | Description | Default |
|---|---|---|---|
| range | Yes | A1 notation range (e.g., "Sheet1!A1") | |
| values | Yes | Array of rows, each row is an array of cell values | |
| spreadsheet_id | Yes | Spreadsheet ID | |
| value_input_option | No | How to interpret input. USER_ENTERED (default) parses formulas/dates/numbers. RAW stores literal strings. | |
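A hedged example; the spreadsheet ID and cell contents are placeholders. With the default USER_ENTERED option, the '=LEN(A2)' string should be parsed as a formula rather than stored literally, per the schema's note.

```json
{
  "spreadsheet_id": "YOUR_SPREADSHEET_ID",
  "range": "Sheet1!A1",
  "values": [
    ["Name", "NameLength"],
    ["Alice", "=LEN(A2)"]
  ],
  "value_input_option": "USER_ENTERED"
}
```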
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description discloses the overwrite behavior, which is critical for a write tool. However, with no annotations provided, the description should also mention side effects like whether the entire range is cleared before writing, or if only specified cells are overwritten. This gap reduces transparency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise, with two short sentences that cover the core purpose and key behavior. Every word is necessary; no redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description is complete enough for a simple write tool given good schema coverage (100%) and no output schema. However, it lacks information about the return value (e.g., updated cells response) and does not clarify overwrite semantics in detail. This is adequate but not thorough.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so the baseline is 3. The description adds the overwrite behavior and hints at the expected row data, but it does not elaborate on 'value_input_option' beyond what the schema provides. This modest extra value raises the score to 4.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Write' and the resource (a Google Sheet range). It adds 'overwriting existing values', which distinguishes it from its sibling 'sheets_append' (which adds rows rather than overwriting). This provides good clarity, though the contrast with append could be more explicit.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description's 'overwriting existing values' implies when to use this tool over 'sheets_append'. However, it does not explicitly state when not to use it or name alternatives. A more direct comparison to 'sheets_append' would improve this.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Claiming the connector lets you:
- Control your server's listing on Glama, including description and metadata
- Access analytics and receive server usage reports
- Get monitoring and health status updates for your server
- Feature your server to boost visibility and reach more users
For users:
- Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management – store and rotate API keys and OAuth tokens in one place
- Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
- Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.