Gmail
Server Details
Gmail MCP Pack
| Field | Value |
|---|---|
| Status | Healthy |
| Last Tested | |
| Transport | Streamable HTTP |
| URL | |
| Repository | pipeworx-io/mcp-gmail |
| GitHub Stars | 0 |
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.8/5 across 10 of 10 tools scored. Lowest: 2.9/5.
The Gmail-specific tools (gmail_get_message, gmail_list_labels, gmail_list_messages, gmail_search, gmail_send) are clearly distinct, but the generic tools like ask_pipeworx, discover_tools, forget, recall, and remember overlap significantly. ask_pipeworx claims to 'pick the right tool' but duplicates the purpose of discover_tools and the Gmail tools. forget, recall, and remember form a memory system that is unrelated to Gmail, creating confusion about the server's primary domain.
The naming is inconsistent: Gmail tools use a gmail_verb_noun pattern (e.g., gmail_get_message), while memory tools use plain verbs (forget, recall, remember). ask_pipeworx and discover_tools follow no clear pattern. This mix of naming conventions makes it hard to predict tool names.
10 tools is a reasonable number, but the tool count is inflated by the inclusion of 5 generic memory/utility tools (ask_pipeworx, discover_tools, forget, recall, remember) that seem out of place in a Gmail server. The actual Gmail functionality is covered by 5 tools, which would be appropriate for a focused server, but the extras dilute the purpose.
The Gmail tools cover basic operations (list, get, search, send) but are missing critical operations like creating labels, managing drafts, deleting messages, or modifying labels. The memory tools (forget, recall, remember) are complete for their own purpose but are unrelated to Gmail, leaving gaps in the Gmail domain.
Available Tools
10 tools

ask_pipeworx (grade A)
Ask a question in plain English and get an answer from the best available data source. Pipeworx picks the right tool, fills the arguments, and returns the result. No need to browse tools or learn schemas — just describe what you need. Examples: "What is the US trade deficit with China?", "Look up adverse events for ozempic", "Get Apple's latest 10-K filing".
| Name | Required | Description | Default |
|---|---|---|---|
| question | Yes | Your question or request in natural language | |
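For orientation, a call to this tool over MCP's standard JSON-RPC transport might look like the sketch below (the envelope is the generic MCP `tools/call` request shape; the question is one of the examples from the description):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "ask_pipeworx",
    "arguments": {
      "question": "What is the US trade deficit with China?"
    }
  }
}
```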
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description discloses that Pipeworx 'picks the right tool, fills the arguments, and returns the result,' indicating autonomous behavior. Since no annotations are provided, the description carries the full burden, and it successfully conveys the tool's automatic orchestration without misleading.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise (four sentences) and front-loaded with the core purpose. It includes examples for clarity. One could argue the first sentence might be more direct, but overall the description is efficient and well-structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (one parameter, no output schema, no nested objects), the description is complete enough. It explains how the tool works and what to expect. It could also mention that the result is a text answer or that the tool may call external APIs, but this is not essential for basic use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100% with one parameter 'question' described as 'Your question or request in natural language.' The description adds value by explaining that the question should be in plain English and providing examples, but the schema already clearly defines the parameter. Thus, the description reinforces but does not significantly extend beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Ask a question in plain English and get an answer from the best available data source.' It uses specific verbs ('ask', 'get an answer') and resources ('best available data source'). The tool is well-distinguished from siblings like discover_tools (which lists tools) and recall (which retrieves memories) by emphasizing natural language querying and automatic tool selection.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear usage guidance: 'No need to browse tools or learn schemas — just describe what you need.' It includes three concrete examples showing typical queries. However, it does not explicitly state when not to use this tool or mention alternatives (e.g., when to use gmail_search instead for email-specific queries).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
discover_tools (grade A)
Search the Pipeworx tool catalog by describing what you need. Returns the most relevant tools with names and descriptions. Call this FIRST when you have 500+ tools available and need to find the right ones for your task.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of tools to return (max 50) | 20 |
| query | Yes | Natural language description of what you want to do (e.g., "analyze housing market trends", "look up FDA drug approvals", "find trade data between countries") | |
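Assuming the same `tools/call` envelope as above, the `params` for a discovery call might look like this sketch (the query is borrowed from the schema's own examples; the limit value is arbitrary but within the documented max of 50):

```json
{
  "name": "discover_tools",
  "arguments": {
    "query": "find trade data between countries",
    "limit": 5
  }
}
```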
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, but description discloses that it performs a search and returns tool names and descriptions. It implies a non-destructive, read-only operation. However, it does not mention edge cases like no results or error handling.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences, all essential: purpose, return value, and usage guidance. No fluff.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description adequately explains what the tool returns ('most relevant tools with names and descriptions'). It is complete for a search tool with simple input parameters.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% and description adds value by explaining that query is a natural language description and gives examples. Limit parameter is documented with defaults and max. Description does not add much beyond schema but examples are helpful.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states it searches the tool catalog by describing what you need, returns relevant tools, and should be called first when many tools are available. The verb 'Search' and resource 'tool catalog' are specific, and it distinguishes itself as a discovery tool.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly says 'Call this FIRST when you have 500+ tools available', providing clear guidance on when to use this tool and its purpose in the workflow.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
forget (grade C)
Delete a stored memory by key.
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | Memory key to delete | |
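A minimal `params` sketch; the key name is borrowed from the `remember` schema's examples and is purely illustrative:

```json
{
  "name": "forget",
  "arguments": {
    "key": "subject_property"
  }
}
```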
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations are absent, so the description must carry the full burden. It only states 'Delete', which implies mutation, but gives no details on side effects, authorization needs, or whether deletion is permanent. The sibling tools 'remember' and 'recall' suggest a memory system, but the description itself lacks behavioral depth.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence, concise and front-loaded. No wasted words, but could be slightly more informative without sacrificing conciseness.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simple tool (1 required param, no output schema), the description is minimally adequate. However, it lacks information about return values, error handling, and behavioral context that would help an agent use it correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with a single parameter 'key' described as 'Memory key to delete'. The description adds no extra meaning beyond the schema, so baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Delete' and the resource 'stored memory by key', which is specific. It distinguishes from sibling tools like 'remember' (store) and 'recall' (retrieve), though it doesn't explicitly contrast them.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool vs alternatives like 'remember' or 'recall'. It doesn't state prerequisites or conditions for deletion, such as whether the key must exist.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
gmail_get_message (grade A)
Fetch full email details by message ID. Returns headers, subject, body text, sender, recipients, attachments, and applied labels.
| Name | Required | Description | Default |
|---|---|---|---|
| format | No | Format of the returned message | full |
| message_id | Yes | The ID of the message to retrieve | |
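An illustrative `params` object; the message ID is a placeholder for a value you would first obtain from gmail_list_messages or gmail_search, and `format` is shown at its documented default:

```json
{
  "name": "gmail_get_message",
  "arguments": {
    "message_id": "MESSAGE_ID_FROM_LIST_OR_SEARCH",
    "format": "full"
  }
}
```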
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It states that the tool returns full message details, which is adequate, but it does not disclose potential rate limits, authentication requirements, or side effects.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, no filler. Front-loaded with purpose, followed by return details. Efficient.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description could mention the return structure or pagination, but for a simple retrieval tool with a good schema it is minimally complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so the baseline is 3. The description adds no additional meaning beyond what the schema provides for the parameters. It lists headers, subject, body, and labels but does not tie them to the format parameter.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it fetches a specific Gmail message by ID, listing the included details. It uses the specific verb 'Fetch' and the resource 'email details', distinguishing it from siblings like gmail_list_messages and gmail_search.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use vs alternatives. Implies use when you have a known message ID, but doesn't mention when to use other tools for listing or searching.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
gmail_list_labels (grade A)
Get all your labels including system folders (INBOX, SENT, TRASH, DRAFTS) and custom labels. Returns label names and IDs for filtering or organizing.
No parameters.
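Since the tool takes no input, a call is just the tool name with an empty arguments object:

```json
{
  "name": "gmail_list_labels",
  "arguments": {}
}
```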
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations are empty, so the description carries the full burden. It discloses that both system and user-created labels are returned. However, it does not mention any side effects, rate limits, or other behavioral traits beyond the basic function.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two concise sentences that convey all necessary information without extraneous words. It is front-loaded with the main purpose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given zero parameters and no output schema, the description is sufficient for the tool's simplicity. It covers what labels are returned. No additional context is needed.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema has no parameters, so there is nothing to explain. The description adds value by clarifying what the tool returns (all labels including system ones), which is useful context beyond the empty schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool lists all labels in the user's Gmail account, specifying both system labels and user-created labels. It differentiates from siblings by focusing solely on label listing, which is distinct from other Gmail tools that deal with messages or sending.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implicitly indicates usage when labels need to be enumerated. No explicit when-not or alternatives are given, but the sibling list shows other Gmail tools for messages and search, making it clear this is for labels only.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
gmail_list_messages (grade B)
List messages in your inbox with optional filtering by label or read status. Returns message IDs, thread IDs, and preview text. Use gmail_search for complex queries like date ranges or attachments.
| Name | Required | Description | Default |
|---|---|---|---|
| query | No | Gmail search query to filter messages (e.g., "from:alice subject:meeting") | |
| page_token | No | Token for fetching the next page of results | |
| max_results | No | Maximum number of messages to return (max 100) | 10 |
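A sketch of `params` for a filtered listing; the query is the schema's own example, and max_results is an arbitrary value under the documented cap of 100. A `page_token` from a previous response would be added to fetch the next page:

```json
{
  "name": "gmail_list_messages",
  "arguments": {
    "query": "from:alice subject:meeting",
    "max_results": 25
  }
}
```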
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations are empty, so the description must convey behavior. It states that it returns message IDs and thread IDs but does not mention whether results are paginated (though page_token is in the schema), rate limits, or that it only accesses the inbox. Adequate but not detailed.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences, straightforward and front-loaded with the main action. Could be slightly more structured, but no wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description should clarify return format; it does say 'message IDs and thread IDs' but lacks detail on the structure (e.g., array of objects). Complexity is low, so this is acceptable.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% and descriptions are clear, so the description adds minimal extra value. It does not elaborate on default behavior when no query is provided or the impact of max_results.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it lists messages in the user's Gmail inbox, with optional filtering and return of IDs. It distinguishes itself from siblings like gmail_get_message (which fetches a single message) and gmail_search (which it explicitly recommends for complex queries).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description does direct agents to gmail_search for complex queries like date ranges or attachments, which is useful cross-tool guidance, but it gives no prerequisites and no guidance on when not to use the tool at all.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
gmail_search (grade A)
Search emails using Gmail query syntax (e.g., 'from:sender@example.com', 'subject:invoice', 'has:attachment', 'after:2024/01/01', 'is:unread'). Returns matching message IDs.
| Name | Required | Description | Default |
|---|---|---|---|
| query | Yes | Gmail search query (e.g., "from:bob@example.com after:2024/01/01 has:attachment") | |
| max_results | No | Maximum number of messages to return (max 100) | 10 |
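For example, combining the operators exactly as the schema's own example shows:

```json
{
  "name": "gmail_search",
  "arguments": {
    "query": "from:bob@example.com after:2024/01/01 has:attachment",
    "max_results": 10
  }
}
```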
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses that the tool supports operators like from:, to:, etc., which is helpful for understanding query capabilities. However, it does not mention whether the search is case-insensitive, whether it respects user permissions, or that it may return partial results if max_results is set. These are minor omissions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences, concise and front-loaded with the core purpose; the inline examples in the first sentence clarify the query syntax, and the second sentence states the return value. It could be slightly more structured (e.g., bullet points) but is still clear.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool is a search with 2 parameters, no output schema, and no nested objects, the description is fairly complete. It explains the query syntax and parameters. The lack of return value description is acceptable since no output schema exists, but mentioning typical fields returned could add value.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so both parameters are described in the schema. The description adds value by listing example operators but does not provide additional semantics beyond the schema. Baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Search emails using Gmail query syntax', which identifies the verb (search) and the resource (emails). It distinguishes the tool from siblings like gmail_list_messages (which lists inbox messages and defers to this tool for complex queries) and gmail_get_message (which retrieves a single message). The mention of Gmail query syntax further clarifies the specific capability.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies when to use this tool (when search syntax is needed) but does not explicitly state when not to use it or provide alternatives. Given siblings like gmail_list_messages and gmail_get_message, explicit guidance on choosing between them would improve this dimension.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
gmail_send (grade C)
Send an email with recipient, subject, and body text. Optionally add CC, BCC, reply-to address, and file attachments.
| Name | Required | Description | Default |
|---|---|---|---|
| cc | No | CC recipients (comma-separated email addresses) | |
| to | Yes | Recipient email address | |
| bcc | No | BCC recipients (comma-separated email addresses) | |
| body | Yes | Email body text (plain text) | |
| subject | Yes | Email subject line | |
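A sketch of a send call; all addresses, the subject, and the body text are invented placeholders, and the `cc` value demonstrates the comma-separated format the schema requires:

```json
{
  "name": "gmail_send",
  "arguments": {
    "to": "alice@example.com",
    "subject": "Quarterly report",
    "body": "Hi Alice, here is the quarterly report summary.",
    "cc": "bob@example.com,carol@example.com"
  }
}
```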
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations are empty, so the description carries the full burden. It does not disclose that sending is a write operation with real-world side effects (the email leaves the account), nor does it mention rate limits, draft handling, irreversibility, or required scopes.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence, no waste. Could add sibling differentiation or behavior notes without becoming verbose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema and no annotations, the description is insufficient. It does not explain return values (message ID, success indication), error conditions (invalid email addresses, attachment limits), or authentication requirements.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so baseline is 3. The description adds no extra meaning beyond the schema; e.g., 'cc' is already described as 'CC recipients (comma-separated email addresses)'. The description does not explain format constraints or optionality beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Send' and the resource 'an email'. It distinguishes the tool from siblings that list or search messages, but could more explicitly contrast with gmail_search or gmail_list_messages.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives like gmail_get_message (reading) or gmail_search (finding messages). No prerequisites or context about authentication or sending limitations are provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
recall (grade A)
Retrieve a previously stored memory by key, or list all stored memories (omit key). Use this to retrieve context you saved earlier in the session or in previous sessions.
| Name | Required | Description | Default |
|---|---|---|---|
| key | No | Memory key to retrieve (omit to list all keys) | |
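An illustrative `params` object, with the key name borrowed from the `remember` schema's examples; per the description, omitting `key` entirely would instead list all stored memories:

```json
{
  "name": "recall",
  "arguments": {
    "key": "target_ticker"
  }
}
```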
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description must disclose behavior. It clearly states that omitting key lists all memories, and that memories persist across sessions. No contradictions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two concise sentences with no wasted words. Front-loaded with action and resource, then clarifies behavior.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Simple tool with one optional param and no output schema. Description covers all needed info for invocation. Could mention return format (e.g., string or JSON), but not critical.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so baseline is 3. The description adds context that omitting the key lists all memories, which is not in schema. No further parameter details needed.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description specifies verb 'retrieve' and resource 'memory', and distinguishes between retrieving a specific key and listing all. It also differentiates from siblings like 'remember' and 'forget' by focusing on retrieval.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states when to use the tool: to 'retrieve context you saved earlier in the session or in previous sessions'. However, it does not mention when not to use it or compare with siblings, though the contrast with 'remember' and 'forget' is implicit.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
remember (grade A)
Store a key-value pair in your session memory. Use this to save intermediate findings, user preferences, or context across tool calls. Authenticated users get persistent memory; anonymous sessions last 24 hours.
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | Memory key (e.g., "subject_property", "target_ticker", "user_preference") | |
| value | Yes | Value to store (any text — findings, addresses, preferences, notes) | |
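A sketch using a key from the schema's examples; the value is an invented placeholder:

```json
{
  "name": "remember",
  "arguments": {
    "key": "user_preference",
    "value": "prefers plain-text email summaries"
  }
}
```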
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, but description discloses behavioral traits: persistence depends on authentication status ('Authenticated users get persistent memory; anonymous sessions last 24 hours'). This adds useful context beyond the input schema.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, front-loaded with the core action. Every sentence adds value without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema and simple parameters, the description covers the essential behavioral aspects (persistence, session type) and usage context. Could mention data size limits or overwrite behavior for full completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100% (both parameters described in schema). The description adds value by clarifying the purpose of the tool but doesn't add new semantic details beyond what the schema already provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Store a key-value pair in your session memory' with a specific verb ('store') and resource ('session memory'). It distinguishes itself from siblings like 'recall' (retrieval) and 'forget' (deletion).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explains when to use the tool: 'save intermediate findings, user preferences, or context across tool calls.' However, it does not explicitly state when not to use it or mention alternatives for different storage needs.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
- Control your server's listing on Glama, including description and metadata
- Access analytics and receive server usage reports
- Get monitoring and health status updates for your server
- Feature your server to boost visibility and reach more users
For users:
- Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management – store and rotate API keys and OAuth tokens in one place
- Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
- Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.
Discussions
No comments yet.