Gong
Server Details
Gong MCP — wraps the Gong API v2 (OAuth)
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
- Repository: pipeworx-io/mcp-gong
- GitHub Stars: 0
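The listing above does not publish the server URL, so the endpoint below is a placeholder. A minimal connection sketch over Streamable HTTP, assuming the official MCP TypeScript SDK (@modelcontextprotocol/sdk):

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

// Placeholder endpoint: the listing does not include the server URL.
const transport = new StreamableHTTPClientTransport(new URL("https://example.com/mcp"));

const client = new Client({ name: "gong-mcp-example", version: "1.0.0" });
await client.connect(transport);

// Enumerate the server's tools; this should match the 10 tools documented below.
const { tools } = await client.listTools();
console.log(tools.map((t) => t.name));
```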
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.8/5 across 10 of 10 tools scored. Lowest: 2.9/5.
Gong tools are clearly distinct: gong_get_call, gong_get_transcript, gong_list_calls, gong_search_calls, and gong_list_users each target a specific Gong resource. The memory tools (remember, recall, forget) and system tools (ask_pipeworx, discover_tools) have separate purposes. Minor potential confusion between ask_pipeworx and discover_tools since both involve finding information, but descriptions clarify their distinct roles.
Gong-specific tools follow a consistent gong_verb_noun pattern (gong_list_calls, gong_get_call, etc.), which is good. However, the memory tools use bare verbs (remember, recall, forget) and the system tools (ask_pipeworx, discover_tools) follow yet another convention, so naming is inconsistent across the set. This inconsistency lowers the score.
With 10 tools, the count is well-scoped for a domain that combines Gong call management, memory operations, and a general query tool. Each tool serves a clear purpose, and the number is manageable for an agent without being overwhelming.
The Gong call lifecycle is well-covered: list, search, get details, get transcript. Update/delete for calls are missing, but those are likely unsupported by Gong's API or not typical use cases. The memory system covers CRUD (create, read, list, delete). ask_pipeworx and discover_tools add generic capabilities. Overall, no critical gaps for common tasks.
Available Tools
10 tools

ask_pipeworx (Grade: A)
Ask a question in plain English and get an answer from the best available data source. Pipeworx picks the right tool, fills the arguments, and returns the result. No need to browse tools or learn schemas — just describe what you need. Examples: "What is the US trade deficit with China?", "Look up adverse events for ozempic", "Get Apple's latest 10-K filing".
| Name | Required | Description | Default |
|---|---|---|---|
| question | Yes | Your question or request in natural language | |
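A minimal sketch of calling this tool through the MCP TypeScript SDK, assuming a client connected as in the sketch under Server Details (the helper name is ours):

```typescript
import type { Client } from "@modelcontextprotocol/sdk/client/index.js";

// `question` is the tool's only parameter; the answer comes back as tool-call content.
async function askPipeworx(client: Client, question: string) {
  return client.callTool({ name: "ask_pipeworx", arguments: { question } });
}

// Example question taken from the description above:
// await askPipeworx(client, "What is the US trade deficit with China?");
```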
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description clearly explains the tool's behavior: it selects the best data source, fills arguments, and returns results. It also mentions it works in plain English and requires no schema knowledge. Since no annotations are provided, the description carries the full burden, and it does so well by disclosing the automated delegation behavior. However, it does not mention limitations or error cases.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise (4 sentences) and front-loaded with the core purpose. It provides examples, which is helpful. The phrase 'No need to browse tools or learn schemas — just describe what you need' partly restates the first sentence, though it does reinforce the tool's intent.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has only one parameter, no output schema, and no annotations, the description is fairly complete. It explains the tool's automated nature and provides usage guidance. It does not mention the return format or error handling, but given the tool's simplicity, this is acceptable.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% coverage with a single parameter 'question' described as 'Your question or request in natural language'. The description adds meaning by explaining that the question should be in plain English and providing examples, but does not add much beyond what the schema already conveys. Baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: answering natural language questions by automatically selecting and invoking the best data source. It uses a specific verb ('Ask') and resource ('answer from best available data source'), and distinguishes itself from sibling tools by emphasizing its ability to handle arbitrary questions without needing to browse tools or learn schemas.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly says to use this tool when you want to 'ask a question in plain English' and notes that it picks the right tool automatically, implying it should be used when the agent doesn't know which specific tool to call. It also provides concrete examples of appropriate questions, but does not explicitly state when not to use it or suggest alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
discover_tools (Grade: A)
Search the Pipeworx tool catalog by describing what you need. Returns the most relevant tools with names and descriptions. Call this FIRST when you have 500+ tools available and need to find the right ones for your task.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of tools to return (default 20, max 50) | |
| query | Yes | Natural language description of what you want to do (e.g., "analyze housing market trends", "look up FDA drug approvals", "find trade data between countries") | |
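A sketch of a discovery call, assuming the same connected client as above:

```typescript
import type { Client } from "@modelcontextprotocol/sdk/client/index.js";

// `limit` defaults to 20 and caps at 50 per the schema.
async function discoverTools(client: Client, query: string, limit = 20) {
  return client.callTool({ name: "discover_tools", arguments: { query, limit } });
}

// Example: await discoverTools(client, "find trade data between countries");
```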
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description fully carries the behavioral disclosure burden. It states that the tool searches by natural language description and returns the most relevant tools. However, it does not mention any side effects, rate limits, or whether it reads or writes data. A score of 4 is appropriate because it clearly describes the core behavior without going into finer details that are not critical for a search tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is very concise: three sentences that front-load the purpose and usage guideline. Every sentence is necessary and earns its place. No redundant or filler content.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given that the tool has 2 simple parameters, no output schema, no nested objects, and is a search/discovery tool, the description is complete enough. It explains what the tool does, when to use it, and how to use the parameters. It could optionally mention that the output includes tool names and descriptions, but that is implied. A 4 is appropriate as it covers all necessary aspects without being overbearing.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the baseline is 3. The description adds value by explaining the query parameter as 'Natural language description of what you want to do' with examples, which goes beyond the schema's minimal description. The limit parameter is also well-explained with defaults and max. This extra context justifies a 4.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Search' and the resource 'Pipeworx tool catalog', explaining that it returns relevant tools with names and descriptions. It distinguishes itself from siblings by positioning itself as the discovery tool to call first when 500+ tools are available, which is not claimed by any sibling.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly tells the agent when to use this tool: 'Call this FIRST when you have 500+ tools available and need to find the right ones for your task.' It provides clear context and an alternative strategy (call first), without needing to list exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
forget (Grade: C)
Delete a stored memory by key.
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | Memory key to delete | |
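A sketch of a deletion call; since the description does not say whether deletion is reversible, the comment treats it as permanent:

```typescript
import type { Client } from "@modelcontextprotocol/sdk/client/index.js";

// Deletes the memory stored under `key`; assume this is permanent.
async function forgetMemory(client: Client, key: string) {
  return client.callTool({ name: "forget", arguments: { key } });
}
```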
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It does not disclose whether deletion is irreversible, whether authentication is required, or whether it affects other memories.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence, no fluff, but could be more informative within the same length.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Simple tool with one param, but lacks any behavioral or usage context. With no annotations and no output schema, more detail would be helpful.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% and the description adds no extra meaning beyond the schema's description of 'key' as 'Memory key to delete'.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description 'Delete a stored memory by key' clearly states the action (delete) and the resource (stored memory). It distinguishes itself from 'remember' (store) and 'recall' (retrieve) among siblings.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use 'forget' versus its siblings 'remember' or 'recall'. No mention of prerequisites or consequences.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
gong_get_call (Grade: A)
Get full details for a specific call by ID. Returns participants, duration, call metadata, engagement metrics, and key moments.
| Name | Required | Description | Default |
|---|---|---|---|
| callId | Yes | The Gong call ID | |
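A sketch of fetching call details, assuming a callId obtained from gong_list_calls or gong_search_calls:

```typescript
import type { Client } from "@modelcontextprotocol/sdk/client/index.js";

// `callId` values come from gong_list_calls or gong_search_calls results.
async function getCall(client: Client, callId: string) {
  return client.callTool({ name: "gong_get_call", arguments: { callId } });
}
```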
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations are empty, so the description carries full burden for behavioral traits. It discloses that the tool returns participants, duration, and metadata, which implies a read-only operation. However, it does not mention any potential side effects, rate limits, or authorization requirements. Given the tool is a simple get-by-id, the description is adequate but not rich.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two short sentences with no wasted words. It front-loads the purpose and key output. Could be slightly improved by adding a brief note on when to use it, but remains efficient.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool is a simple single-parameter get-by-id operation with no output schema, the description is complete enough. It names the primary output fields (participants, duration, metadata). No significant gaps for its complexity level.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with the only parameter callId already described as 'The Gong call ID'. The description adds no additional semantic meaning beyond the schema. Baseline 3 is appropriate since schema does the full job.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Get details' and the resource 'specific Gong call by its ID', and lists the key information returned (participants, duration, metadata). It is distinct from sibling tools like gong_list_calls (which lists calls) and gong_get_transcript (which gets transcript), making it unambiguous.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies when to use this tool (when you have a specific call ID and need call details), but does not explicitly state when not to use it or mention alternatives. It provides basic usage guidance but lacks exclusion criteria or comparison to sibling tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
gong_get_transcript (Grade: B)
Retrieve the full conversation transcript for a call with speaker names, timestamps, and dialogue. Use after gong_get_call to analyze specific conversations.
| Name | Required | Description | Default |
|---|---|---|---|
| callId | Yes | The Gong call ID | |
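A sketch that follows the description's suggested sequence, fetching call details first and the transcript second:

```typescript
import type { Client } from "@modelcontextprotocol/sdk/client/index.js";

// Per the description, fetch call details first, then pull the transcript.
async function getCallWithTranscript(client: Client, callId: string) {
  const details = await client.callTool({ name: "gong_get_call", arguments: { callId } });
  const transcript = await client.callTool({ name: "gong_get_transcript", arguments: { callId } });
  return { details, transcript };
}
```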
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description partially covers behavior: 'Retrieve' signals a read operation, and the output is named ('full conversation transcript' with speakers and timestamps). However, it does not disclose potential length, format, or any restrictions (e.g., access permissions, call type).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, direct and to the point. No unnecessary words. Could combine into one sentence without loss.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given a single required parameter and no output schema, the description adequately explains input and output. However, lacks details about the transcript format or any limitations, which would be helpful for a tool returning a potentially large text.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema covers 100% of parameters with description for callId. Description adds no further meaning beyond the schema; both state 'transcript' and 'call ID'. Baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states it gets a transcript for a Gong call, specifying the resource ('transcript') and action ('get'). Distinguishes from siblings like 'gong_get_call' which returns call details, not transcript. Could explicitly differentiate from gong_list_calls which lists calls.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Beyond the brief 'use after gong_get_call' sequencing hint, there is no guidance on when to prefer this tool over alternatives. No mention of prerequisites (e.g., the call must exist) or context (e.g., only for completed calls).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
gong_list_calls (Grade: A)
List recorded calls from your workspace with optional date filtering. Returns call IDs, dates, participants, duration, and engagement metrics. Supports pagination for large result sets.
| Name | Required | Description | Default |
|---|---|---|---|
| cursor | No | Pagination cursor from a previous response | |
| toDateTime | No | End of date range (ISO 8601) | |
| fromDateTime | No | Start of date range (ISO 8601, e.g. "2024-01-01T00:00:00Z") | |
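A sketch of a paged listing call; the listing publishes no output schema, so how the next-page cursor is surfaced in the response is left to the caller:

```typescript
import type { Client } from "@modelcontextprotocol/sdk/client/index.js";

// Fetch one page of calls for a date range; pass the `cursor` from a
// previous response to retrieve the next page.
async function listCalls(
  client: Client,
  fromDateTime: string,
  toDateTime: string,
  cursor?: string,
) {
  return client.callTool({
    name: "gong_list_calls",
    arguments: { fromDateTime, toDateTime, ...(cursor ? { cursor } : {}) },
  });
}

// Example: await listCalls(client, "2024-01-01T00:00:00Z", "2024-02-01T00:00:00Z");
```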
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the burden. It mentions pagination (cursor-based) but does not disclose behavior like ordering, rate limits, or what happens when no filters are applied (likely returns all calls).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is three sentences, efficient and front-loaded with the core purpose. No wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a list tool with 3 optional parameters and no output schema, the description covers the basics but lacks details on default behavior (e.g., max results) or any caveats about call availability.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, and the description adds context by mentioning 'date range' and 'cursor-based pagination', which aligns with parameters. However, the description does not add much meaning beyond the schema's descriptions; the baseline of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'list' and resource 'recorded calls from your workspace', with optional date filtering and cursor-based pagination. It distinguishes itself from sibling tools like gong_search_calls and gong_get_call by focusing on listing with optional date filters.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description mentions optional date range and pagination, providing clear context for when to use filters. However, it does not explicitly state when not to use this tool or mention alternatives like gong_search_calls for more complex queries.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
gong_list_users (Grade: A)
List all users in your workspace. Returns user names, IDs, email addresses, roles, and activity status.
| Name | Required | Description | Default |
|---|---|---|---|
No parameters | |||
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries full burden. It correctly indicates a read-only list operation. No behavioral traits beyond 'list all users' are disclosed, but that is sufficient for a simple list tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two short sentences, front-loaded and free of unnecessary words. Perfectly concise.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple list tool with no parameters, no output schema, and no nested objects, the description is complete enough. It explains what it does and what resource it returns.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema has no parameters, so description coverage is 100%. The description adds no parameter info because none exist; this is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb ('List') and resource ('all users'), and the scope ('in your workspace'). It distinguishes itself from siblings like gong_list_calls, which lists calls, not users.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for listing users, but does not specify when to use this tool versus alternatives like gong_search_calls. No exclusions or context on when not to use it.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
gong_search_calls (Grade: C)
Search calls by keyword or phrase (e.g., 'pricing', 'objection', 'budget'). Returns matching calls ranked by relevance.
| Name | Required | Description | Default |
|---|---|---|---|
| keyword | Yes | Keyword to search for in calls | |
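A sketch of a keyword search, reusing the example keywords from the description:

```typescript
import type { Client } from "@modelcontextprotocol/sdk/client/index.js";

// Single-keyword search, e.g. "pricing", "objection", or "budget".
async function searchCalls(client: Client, keyword: string) {
  return client.callTool({ name: "gong_search_calls", arguments: { keyword } });
}
```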
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. It states that the tool returns matching calls but does not disclose behavioral details such as whether the search is case-sensitive, matches partial words, or the scope (e.g., date range, user). No mention of rate limits, authentication, or side effects.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Description is two sentences, front-loading the main action. No unnecessary words. Could be slightly more detailed without becoming verbose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the low complexity (1 parameter, no output schema), the description is minimally adequate. It tells the agent what the tool does and what parameter is needed. However, it lacks context on result format, pagination, or limitations, which may be relevant for effective use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with the parameter 'keyword' having a description: 'Keyword to search for in calls'. The tool description adds no further meaning beyond what the schema already provides. Thus baseline score of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a clear verb-resource pair: search calls by keyword or phrase. It specifies the action (search) and resource (calls), and adds the result detail 'Returns matching calls ranked by relevance'. However, it does not differentiate itself from sibling tools like gong_list_calls, which also returns calls but without keyword filtering.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool vs. alternatives like gong_list_calls (which might list all calls without search) or gong_get_call (retrieve a specific call). No when-not-to-use or prerequisites are mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
recall (Grade: A)
Retrieve a previously stored memory by key, or list all stored memories (omit key). Use this to retrieve context you saved earlier in the session or in previous sessions.
| Name | Required | Description | Default |
|---|---|---|---|
| key | No | Memory key to retrieve (omit to list all keys) | |
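A sketch covering both modes, fetching one memory or listing all keys when `key` is omitted:

```typescript
import type { Client } from "@modelcontextprotocol/sdk/client/index.js";

// Omit `key` to list all stored memory keys instead of fetching one value.
async function recallMemory(client: Client, key?: string) {
  return client.callTool({ name: "recall", arguments: key ? { key } : {} });
}
```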
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. It clearly states the tool is for retrieval (non-destructive), and mentions persistence across sessions. It does not describe error behavior (e.g., missing key), but that's minor for a retrieval tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, front-loaded with purpose and immediate usage instruction. Every word adds value; no fluff.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, description explains return value behavior (single memory vs list). Parameter coverage is complete. No output schema needed for this simple retrieval. Could mention case sensitivity or format of key, but not critical.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with a single optional parameter 'key', and the description adds that omitting it lists all keys. This aligns with the schema description; no additional semantics needed beyond what schema provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses specific verbs ('Retrieve', 'list') and resource ('memory by key'), clearly distinguishing the operation from 'remember' (store) and 'forget' (delete). It also covers the variant of omitting the key to list all.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states when to use (retrieve context saved earlier) and the optional omission of key to list all. However, no alternative tool is named, though siblings like 'remember' and 'forget' are implied.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
remember (Grade: A)
Store a key-value pair in your session memory. Use this to save intermediate findings, user preferences, or context across tool calls. Authenticated users get persistent memory; anonymous sessions last 24 hours.
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | Memory key (e.g., "subject_property", "target_ticker", "user_preference") | |
| value | Yes | Value to store (any text — findings, addresses, preferences, notes) | |
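A sketch of a store-then-retrieve round trip; the key name mirrors one of the schema's own examples, and the value is purely illustrative:

```typescript
import type { Client } from "@modelcontextprotocol/sdk/client/index.js";

// Store a finding for later tool calls in this session (or persistently,
// if authenticated).
async function rememberMemory(client: Client, key: string, value: string) {
  return client.callTool({ name: "remember", arguments: { key, value } });
}

// Round trip using a key from the schema's examples:
// await rememberMemory(client, "target_ticker", "AAPL");
// later: await client.callTool({ name: "recall", arguments: { key: "target_ticker" } });
```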
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries the full burden. It discloses persistence differences ('authenticated users get persistent memory; anonymous sessions last 24 hours'), which is crucial behavioral context. No contradictions with annotations since none exist.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise with three sentences: purpose, usage examples, and persistence behavior. Each sentence adds value with no redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simplicity (2 simple params, no output schema, no nested objects), the description covers the key aspects: what it does, how to use it, and persistence behavior. It lacks mention of any size limits or overwrite behavior, but is largely complete for a basic memory store tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema covers both parameters with descriptions. The description does not add additional meaning beyond the schema, and schema coverage is 100%, so baseline of 3 is appropriate. It does not elaborate on value formatting or size limits.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool stores a key-value pair in session memory, specifying the resource ('session memory') and action ('store'). It distinguishes from sibling tools like 'forget' and 'recall' by focusing on storing, not retrieving or deleting.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit use cases ('save intermediate findings, user preferences, or context across tool calls') and differentiates between authenticated and anonymous sessions. However, it does not explicitly state when not to use it or mention alternatives like 'recall' for retrieval.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Claiming this connector lets you:

- Control your server's listing on Glama, including description and metadata
- Access analytics and receive server usage reports
- Get monitoring and health status updates for your server
- Feature your server to boost visibility and reach more users
For users:
- Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management – store and rotate API keys and OAuth tokens in one place
- Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
- Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.