Google_docs
Server Details
Google Docs MCP Pack — read, create, and edit Google Docs via OAuth.
- Status: Healthy
- Last Tested: —
- Transport: Streamable HTTP
- URL: —
- Repository: pipeworx-io/mcp-google_docs
- GitHub Stars: 0
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.6/5 across 11 of 11 tools scored. Lowest: 2.9/5.
The server mixes Google Docs operations with a memory system (remember/recall/forget) and generic query tools (ask_pipeworx, discover_tools). While the Docs tools are clearly delineated, the memory and query tools may leave an agent unsure which tool to use for data retrieval.
The Docs tools follow a consistent 'docs_<action>' pattern (e.g., docs_create, docs_append_text). However, the memory tools (remember, recall, forget) and generic tools (ask_pipeworx, discover_tools) use different naming conventions, breaking the overall pattern.
A count of 11 tools sits comfortably within the typical 3-15 range. However, the inclusion of memory and generic query tools alongside Docs-specific tools suggests a broader scope than just Google Docs, making the count feel slightly high for a Docs server.
The Docs tools cover core CRUD operations (create, read, append, insert, replace) and text retrieval, which is fairly complete for a document server. Missing operations like delete or formatting are minor gaps. The memory and query tools add extra functionality beyond Docs.
Available Tools
11 tools

ask_pipeworx (A)
Ask a question in plain English and get an answer from the best available data source. Pipeworx picks the right tool, fills the arguments, and returns the result. No need to browse tools or learn schemas — just describe what you need. Examples: "What is the US trade deficit with China?", "Look up adverse events for ozempic", "Get Apple's latest 10-K filing".
| Name | Required | Description | Default |
|---|---|---|---|
| question | Yes | Your question or request in natural language | |
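The listing ships no request example, but under MCP's standard tools/call envelope an invocation would look roughly like this, reusing one of the description's own sample questions (the request id is arbitrary):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "ask_pipeworx",
    "arguments": { "question": "What is the US trade deficit with China?" }
  }
}
```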
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Since no annotations are provided, the description carries the full burden. It discloses that the tool internally selects the best data source and fills arguments, and that it returns a result. However, it does not mention potential latency, source reliability, or what happens if no answer can be found.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise (two sentences plus examples) and front-loaded with the core functionality. Every sentence adds value, and the examples are well-chosen to clarify the tool's scope.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (one parameter, no output schema, no nested objects), the description is sufficiently complete. It explains the input format, the automated behavior, and provides examples. It does not cover edge cases like unsupported questions, but that is acceptable for a tool of this complexity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, and the parameter 'question' has a clear description. The description adds context by specifying natural language input and giving examples, but this largely overlaps with the schema's description. Baseline score of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool accepts natural language questions and returns answers by automatically selecting the best data source. It gives specific examples (trade deficit, adverse events, 10-K filing) that illustrate its purpose and differentiate it from other tools on the server, which are focused on document operations or memory.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explains that users should just ask in plain English without needing to browse tools or learn schemas. It provides usage examples, but it does not explicitly state when not to use this tool (e.g., for very specific structured queries that might be better served by a dedicated tool).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
discover_tools (A)
Search the Pipeworx tool catalog by describing what you need. Returns the most relevant tools with names and descriptions. Call this FIRST when you have 500+ tools available and need to find the right ones for your task.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of tools to return (max 50) | 20 |
| query | Yes | Natural language description of what you want to do (e.g., "analyze housing market trends", "look up FDA drug approvals", "find trade data between countries") | |
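A hypothetical request using the same tools/call envelope; the query string is lifted from the schema's own examples, and the limit is an arbitrary value under the documented max of 50:

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "discover_tools",
    "arguments": { "query": "analyze housing market trends", "limit": 10 }
  }
}
```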
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It discloses the tool returns 'the most relevant tools with names and descriptions', which is useful but lacks details on search algorithm, latency, or side effects. Given the tool is a search, a score of 3 is appropriate as it covers the basic behavioral promise.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is three sentences and front-loads the purpose. The first sentence is the action, the second adds context. One could argue the third ('Call this FIRST...') belongs under usage guidance, but it is necessary and concise. No wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description states it returns tool names and descriptions, which is sufficient. The tool is simple (search with 2 params) and the description covers when to use it and what it returns. Could mention result format or sorting, but for a straightforward search, it's adequate.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so both parameters are documented in the schema. The description does not add any additional meaning beyond the schema descriptions. Baseline 3 is correct.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool searches a catalog and returns relevant tools. It specifies the action ('search'), the resource ('Pipeworx tool catalog'), and the method ('by describing what you need'). The instruction 'Call this FIRST' differentiates it from sibling tools like ask_pipeworx, which is a general query tool.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly states when to use this tool: 'Call this FIRST when you have 500+ tools available and need to find the right ones for your task.' It also implies not to use it when you already know the tool or have few tools, setting a clear context for invocation.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
docs_append_text (B)
Add text to the end of a Google Doc. Use when insertion position doesn't matter.
| Name | Required | Description | Default |
|---|---|---|---|
| text | Yes | Text to append | |
| document_id | Yes | Document ID | |
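For the Docs tools the envelope is identical, so only the arguments objects are sketched from here on. A hypothetical append, with a placeholder document ID and invented text:

```json
{
  "document_id": "1AbCdEfGhIjKlMnOpQrStUv",
  "text": "\n\nAppendix: generated notes."
}
```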
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations are empty, so description carries full burden. It correctly identifies the action as append (non-destructive to existing content), but doesn't mention if the operation is idempotent or if it requires write access.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two short sentences, concise and front-loaded; every word contributes meaning.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given only 2 params, no output schema, and no annotations, the description is adequate but minimal. Lacks details on return value (e.g., success indicator) or side effects.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so baseline is 3. The description adds no extra meaning beyond what schema already provides (text to append, document ID).
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it adds text to the end of a Google Doc, distinguishing it from siblings like docs_insert_text and docs_replace_text.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The only guidance is 'Use when insertion position doesn't matter'; there is no explicit comparison with alternatives like docs_insert_text, and no mention of prerequisites (e.g., the document must exist) or exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
docs_create (B)
Create a new Google Doc with a title. Returns the document ID needed for editing operations.
| Name | Required | Description | Default |
|---|---|---|---|
| title | Yes | Title for the new document | |
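A sketch of the arguments; the title is invented for illustration:

```json
{ "title": "Q3 Planning Notes" }
```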
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so the description must cover behavioral aspects. It states that it creates a doc and returns the document ID, but doesn't mention whether it requires authentication or how duplicate titles are handled.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Extremely concise: two short sentences. No unnecessary words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given it's a creation tool with no output schema or annotations, the description usefully notes that it returns the document ID needed for editing, though it says nothing about failure modes or other side effects.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with only one parameter. Description adds no extra meaning beyond the schema; 'title' is self-explanatory. Baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states it creates a new Google Doc with a title. Verb 'Create' and resource 'Google Doc' are specific. Differentiates from siblings like docs_get, docs_get_text by focusing on creation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit when-to-use or when-not-to-use guidance. However, the name and description imply it's for creating new docs. No alternatives mentioned, but sibling tools suggest other actions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
docs_get (A)
Retrieve a Google Doc by ID. Returns title, formatted body content, and document structure.
| Name | Required | Description | Default |
|---|---|---|---|
| document_id | Yes | Document ID (from the URL) | |
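Hypothetical arguments; per the schema, the placeholder ID would come from the document's URL:

```json
{ "document_id": "1AbCdEfGhIjKlMnOpQrStUv" }
```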
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations exist, so the description carries full burden. It discloses the return data (title, body content, document structure), but does not mention side effects, rate limits, or whether it modifies state. The description is adequate but could be more explicit about read-only behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two short sentences, front-loaded with action and result. Efficient, though 'and document structure' could arguably be dropped if it is redundant with 'formatted body content'.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (one parameter, no output schema, no annotations), the description is mostly sufficient. However, it could mention the output format or note that it returns the full document, not just text, to better inform agent decision-making.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with one parameter 'document_id' described as 'Document ID (from the URL)'. The description adds context by explaining what is returned, complementing the schema. No additional parameter details needed.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Retrieve' and resource 'Google Doc', and lists what is returned ('title, formatted body content, and document structure'). It distinguishes this tool from siblings like docs_get_text, which returns only text, and docs_create, which creates documents.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage by ID but does not explicitly state when to use this tool versus alternatives like docs_get_text. No guidance on prerequisites or when not to use.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
docs_get_text (B)
Extract plain text from a Google Doc without formatting or structure. Use when you need raw text content only.
| Name | Required | Description | Default |
|---|---|---|---|
| document_id | Yes | Document ID | |
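Only the tool name separates this call from docs_get. A full request, again with a placeholder ID:

```json
{
  "jsonrpc": "2.0",
  "id": 3,
  "method": "tools/call",
  "params": {
    "name": "docs_get_text",
    "arguments": { "document_id": "1AbCdEfGhIjKlMnOpQrStUv" }
  }
}
```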
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description is minimal and does not disclose behavioral traits beyond the basic function. With no annotations provided, the description carries the full burden but says nothing about whether the tool works on non-editable docs, or about rate limits or authorization requirements.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two short sentences that directly state the tool's purpose with no unnecessary words. It is well-structured and front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the low complexity (single parameter, no output schema, no nested objects), the description is fairly complete for the core purpose. However, it lacks information about the return format (e.g., string length, encoding) and any error conditions. The absence of an output schema is not compensated by the description.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema has 100% coverage for its single parameter (document_id), and the description adds no additional meaning beyond the schema. Since the schema already fully describes the parameter, a baseline score of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Extract'), the resource ('plain text'), and the source ('a Google Doc'). It distinguishes itself from sibling tools like docs_get (which returns the full document, per its own description) and docs_insert_text/docs_replace_text (which modify content).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The only guidance is 'Use when you need raw text content only'; the description does not name docs_get as the alternative when formatting or metadata is needed, and gives no context about prerequisites or limitations.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
docs_insert_text (B)
Insert text at a specific position in a Google Doc (e.g., position 0 for start, position 50 for middle).
| Name | Required | Description | Default |
|---|---|---|---|
| text | Yes | Text to insert | |
| index | No | Character index to insert at (1 = start of body) | |
| document_id | Yes | Document ID | |
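Hypothetical arguments; note that the schema treats index 1 as the start of the body, despite the 0-based example in the description:

```json
{
  "document_id": "1AbCdEfGhIjKlMnOpQrStUv",
  "text": "Draft: ",
  "index": 1
}
```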
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description should cover behavior. It only states what the tool does, not side effects (e.g., overwrites existing text?), access requirements, or error conditions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single clear sentence. Efficient but could be slightly more informative about behavior without adding length.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a simple insertion tool with no output schema. However, the description's 'position 0 for start' conflicts with the schema's 'index (1 = start of body)', and error cases are not covered, which reduces completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already describes the parameters. The description adds positional examples, but its 0-based examples contradict the schema's 1-based index, and edge cases like an out-of-bounds index are not clarified.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action (insert text) and the target resource (a Google Doc) at a specific index position. It distinguishes this tool from the sibling docs_append_text, which appends without an index.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this tool vs alternatives. The description implies index-based insertion, but doesn't mention when to prefer append over insert or vice versa.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
docs_replace_text (A)
Find and replace all occurrences of text in a Google Doc with new text.
| Name | Required | Description | Default |
|---|---|---|---|
| find | Yes | Text to find | |
| replace | Yes | Replacement text | |
| match_case | No | Case-sensitive match | false |
| document_id | Yes | Document ID | |
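Hypothetical arguments exercising the optional match_case flag; the find/replace strings are invented:

```json
{
  "document_id": "1AbCdEfGhIjKlMnOpQrStUv",
  "find": "ACME Corp",
  "replace": "Acme Corporation",
  "match_case": true
}
```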
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It states that the tool replaces all occurrences, which is clear, but it does not disclose other side effects (e.g., whether it preserves formatting or affects non-text elements). A 3 is appropriate as it conveys basic behavior but lacks detail.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, front-loaded sentence with no wasted words, and it already states the scope (all occurrences).
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (4 parameters, no nested objects, no output schema), the description is adequate but not exhaustive. It doesn't mention return value (e.g., count of replacements) or edge cases (e.g., no match). For a find-and-replace tool with no annotations, it is minimally complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the input schema already documents all parameters. The description does not add meaning beyond what the schema provides (e.g., it doesn't explain 'find' and 'replace' behavior beyond text replacement). Baseline 3 is correct.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description 'Find and replace all occurrences of text in a Google Doc with new text.' uses a specific verb phrase ('find and replace') and resource ('text in a Google Doc'), clearly distinguishing it from sibling tools like docs_append_text (append) or docs_insert_text (insert).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description is generic and does not provide explicit guidance on when to use this tool versus alternatives. It implies usage for find-and-replace operations, but lacks context about when not to use it or any prerequisites (e.g., permissions, document edit access).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
forget (C)
Delete a stored memory by key.
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | Memory key to delete | |
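Hypothetical arguments, deleting a key such as the 'subject_property' example from the remember schema:

```json
{ "key": "subject_property" }
```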
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must fully disclose behavioral traits. It states the action (delete) but does not specify whether deletion is permanent, what happens on missing key, or any side effects. This is a significant gap for a destructive operation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single concise sentence, front-loading the action and resource. No unnecessary words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The tool has one required parameter, no annotations, and no output schema. For a destructive operation, the description should clarify behavior on non-existent keys, success confirmation, or error handling, which are missing.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The description mentions 'by key', which aligns with the required parameter 'key'. Since schema description coverage is 100% and the parameter is self-explanatory, the description adds no additional meaning beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Delete' and the resource 'stored memory', with the qualifier 'by key', which distinguishes it from sibling tools like 'remember' (store) and 'recall' (retrieve).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool vs alternatives (e.g., 'recall' for retrieval, 'remember' for storage). The description implies the tool is for deletion but provides no context on prerequisites or cautionary notes.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
recall (A)
Retrieve a previously stored memory by key, or list all stored memories (omit key). Use this to retrieve context you saved earlier in the session or in previous sessions.
| Name | Required | Description | Default |
|---|---|---|---|
| key | No | Memory key to retrieve (omit to list all keys) | |
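Since key is optional, passing an empty arguments object should list all stored keys, per the schema's note:

```json
{}
```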
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description implies a read-only operation by saying 'retrieve' and 'list', but without annotations, the description carries the burden. It does not disclose if the operation has side effects or requires authentication, but it is straightforward enough.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, no fluff. Each sentence adds value: first explains the core behavior, second explains when to use it. Highly concise and front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple tool with 1 parameter, no output schema, and clear semantics, the description is complete. It explains the key behavior and usage context. No missing information needed for a basic retrieval tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100% (1 parameter fully described). The description adds context about omitting the key to list all, which aligns with the schema's description. No additional semantics beyond what the schema provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'retrieve a previously stored memory by key, or list all stored memories (omit key)', which is a specific verb+resource pair that distinguishes it from siblings like 'remember' (store) and 'forget' (delete).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description says 'Use this to retrieve context you saved earlier in the session or in previous sessions', providing context for when to use it. However, it does not explicitly mention when not to use it or compare to siblings like 'discover_tools'.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
remember (A)
Store a key-value pair in your session memory. Use this to save intermediate findings, user preferences, or context across tool calls. Authenticated users get persistent memory; anonymous sessions last 24 hours.
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | Memory key (e.g., "subject_property", "target_ticker", "user_preference") | |
| value | Yes | Value to store (any text — findings, addresses, preferences, notes) | |
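Hypothetical arguments reusing the schema's own example key; the value is invented:

```json
{
  "key": "subject_property",
  "value": "123 Main St; appraised 2024-05-01"
}
```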
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Since no annotations are provided, the description carries full burden. It discloses persistence behavior (authenticated vs anonymous) and the purpose of memory storage, which adds meaningful behavioral context beyond basic key-value storage.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is three sentences long, front-loads the core action, and includes essential details without extraneous text. It earns its place with concrete examples and persistence info.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple key-value store with no output schema, the description covers purpose, usage context, and persistence behavior. It does not explain return behavior (e.g., confirmation message), but given the tool's simplicity, it is sufficiently complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with clear parameter descriptions. The description adds examples for 'key' (like 'subject_property') and specifies 'value' as 'any text', but does not significantly extend beyond what the schema already provides, so baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Store a key-value pair in your session memory' with a specific verb ('store') and resource ('key-value pair'), and differentiates from siblings like 'recall' (retrieval) and 'forget' (deletion).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides context on when to use this tool ('to save intermediate findings, user preferences, or context across tool calls') and includes persistence details for authenticated vs anonymous sessions, but does not explicitly state when not to use it or name alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
A connector's status is marked unhealthy when Glama cannot successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.