
Server Details

Intercom MCP Pack — contacts, conversations, companies via OAuth.

Status: Healthy
Transport: Streamable HTTP
Repository: pipeworx-io/mcp-intercom
GitHub Stars: 0

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: B

Average score 3.7/5 across all 10 tools scored; lowest tool score 2.8/5.

Server Coherence: C
Disambiguation: 3/5

The set mixes general memory/utility tools (remember, recall, forget, ask_pipeworx, discover_tools) with Intercom-specific tools (ic_*). The general tools could overlap with each other (ask_pipeworx vs discover_tools both relate to finding tools), and the ic_* tools are distinct but few.

Naming Consistency: 2/5

Tool names are inconsistent: Intercom tools follow an ic_verb_noun pattern, while the rest use bare verbs (remember, recall, forget) or unprefixed verb_noun forms (ask_pipeworx, discover_tools). The mix of single-word and underscore-separated names, and of verb styles (ask vs. discover), adds to the inconsistency.

Tool Count: 3/5

10 tools is within a reasonable range, but the set feels divided: 5 Intercom tools and 5 general-purpose tools. The general tools seem to serve a different platform (Pipeworx) rather than Intercom specifically, making the count feel slightly off for a single server.

Completeness: 2/5

For Intercom, the tool surface is incomplete: only read operations are provided (get contact, get conversation, list companies, list conversations, search contacts). Create, update, and delete operations for contacts and conversations are missing, as are tools for notes, tags, or articles. The general tools (ask_pipeworx, discover_tools) seem to compensate but are not Intercom-specific.

Available Tools

10 tools
ask_pipeworx: A

Ask a question in plain English and get an answer from the best available data source. Pipeworx picks the right tool, fills the arguments, and returns the result. No need to browse tools or learn schemas — just describe what you need. Examples: "What is the US trade deficit with China?", "Look up adverse events for ozempic", "Get Apple's latest 10-K filing".

Parameters (JSON Schema)
question (required): Your question or request in natural language
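
For concreteness, here is a minimal tools/call request an MCP client might send for this tool (standard MCP JSON-RPC framing; the question is borrowed from the examples above):

{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "ask_pipeworx",
    "arguments": {
      "question": "What is the US trade deficit with China?"
    }
  }
}
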
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description clearly states that the tool picks the right tool and fills arguments, indicating autonomous behavior. No annotations are provided, so the description carries full burden. It does not disclose potential limitations (e.g., if it can fail, if it requires certain permissions, or if it makes external API calls). However, the description is upfront about its general purpose and does not contradict any annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise (three sentences) and front-loaded with the core purpose. Every sentence adds value: the first states the action, the second explains the mechanism, the third provides concrete examples. No wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (one parameter, no output schema, no annotations), the description is complete enough for an agent to use it correctly. It covers purpose, usage, and examples. The only minor gap is that it doesn't specify the format or reliability of the answer, but this is acceptable for a general-purpose query tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% for the single parameter 'question', with a clear description in the schema. The description adds meaning by explaining the parameter's role in natural language and providing examples. The description effectively adds value beyond the schema by illustrating usage scenarios.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool takes a natural language question and returns an answer from the best data source. It distinguishes itself from other tools on the server (e.g., discover_tools, ic_get_contact) by explicitly saying it picks the right tool and fills arguments, making it a general-purpose query tool rather than a specific data accessor.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit when-to-use guidance: 'just describe what you need' and gives examples of appropriate questions. It implies when not to use it (e.g., if you need to browse tools or learn schemas, you don't need to; this tool does it for you). The alternative is to use other tools directly, but the description positions this as the simpler option.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

discover_tools: A

Search the Pipeworx tool catalog by describing what you need. Returns the most relevant tools with names and descriptions. Call this FIRST when you have 500+ tools available and need to find the right ones for your task.

Parameters (JSON Schema)
limit (optional): Maximum number of tools to return (default 20, max 50)
query (required): Natural language description of what you want to do (e.g., "analyze housing market trends", "look up FDA drug approvals", "find trade data between countries")
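
An illustrative call using one of the schema's own example queries; the limit of 10 is an arbitrary choice within the documented maximum of 50:

{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "discover_tools",
    "arguments": {
      "query": "find trade data between countries",
      "limit": 10
    }
  }
}
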
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries the full burden. It discloses that the tool searches and returns tool names and descriptions, and implies it's a read-only search. However, it does not mention any rate limits or caching behavior, which would warrant a 5.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two sentences: the first clearly states the function, the second gives actionable usage guidance. No wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (2 parameters, no output schema, no nested objects), the description is complete enough. It explains what the tool does, when to use it, and how to use the query parameter. Missing details like default limit or maximum are already in the schema.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so baseline is 3. The description adds a usage example for the query parameter ('analyze housing market trends'), which is helpful but does not provide additional constraints or semantics beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool searches a tool catalog by natural language query, returns relevant tools, and explicitly says to call it first when many tools are available. This distinguishes it from siblings like ask_pipeworx or recall which serve different purposes.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly says 'Call this FIRST when you have 500+ tools available and need to find the right ones for your task.' This provides clear guidance on when to use it versus alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

forget: B

Delete a stored memory by key.

Parameters (JSON Schema)
key (required): Memory key to delete
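
A sketch of the call shape; the key is a hypothetical value reusing one of the example keys from the remember tool's schema:

{
  "jsonrpc": "2.0",
  "id": 3,
  "method": "tools/call",
  "params": {
    "name": "forget",
    "arguments": {
      "key": "subject_property"
    }
  }
}
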
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. It confirms deletion but does not mention whether deletion is irreversible, if confirmation is needed, or if related data is affected. For a destructive operation, more behavioral context is expected.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Extremely concise: six words, no fluff. Every word adds value.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (1 parameter, no output schema, no annotations), the description is too minimal. It fails to mention deletion consequences, error conditions, or the behavior when the key does not exist.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema already covers the single parameter with a clear description. The tool description does not add any new info beyond what the schema provides, but with 100% coverage, baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a strong verb (Delete) and specific resource (stored memory by key), clearly distinguishing it from sibling tools like recall (retrieve) and remember (store).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this vs. other memory tools (e.g., recall for reading, remember for writing). The description implies deletion is for cleanup but does not specify prerequisites or alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

ic_get_contact: A

Get full contact details by ID. Returns name, email, phone, attributes, tags, and conversation history.

Parameters (JSON Schema)
id (required): Contact ID
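
The call shape, with a placeholder ID (the schema does not document the ID format, so <contact-id> stands in for a real Intercom contact ID, e.g. one returned by ic_search_contacts):

{
  "jsonrpc": "2.0",
  "id": 4,
  "method": "tools/call",
  "params": {
    "name": "ic_get_contact",
    "arguments": {
      "id": "<contact-id>"
    }
  }
}
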
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must carry the full burden of behavioral disclosure. The description only states the basic action and required parameter. It does not mention whether the contact must exist, what happens if not found, authentication requirements, or any side effects. This is a significant gap.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence, concise and front-loaded with the essential information. No wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool is simple (single required parameter, no output schema), the description is minimally adequate. It states what the tool does and the required parameter. However, it does not describe the output or any potential errors. For a simple getter, this might be acceptable but could be more complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%: the only parameter 'id' is described as 'Contact ID' in the schema. The description does not add extra meaning beyond what the schema provides. Baseline 3 is appropriate since schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool retrieves a specific Intercom contact by ID. The verb 'Get' and resource 'contact' are precise, and it distinguishes itself from sibling tools like ic_search_contacts (which is for searching) and ic_list_companies (different resource).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description does not explicitly state when to use this tool versus alternatives. However, its purpose is clear and limited to fetching by ID, which implies it is for single-record retrieval. No guidance on when not to use it or mention of alternatives is provided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

ic_get_conversation: B

Get complete conversation thread by ID. Returns all messages, timestamps, participants, and metadata.

Parameters (JSON Schema)
id (required): Conversation ID
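
Same single-ID pattern as ic_get_contact; here <conversation-id> is a placeholder for an ID obtained from ic_list_conversations:

{
  "jsonrpc": "2.0",
  "id": 5,
  "method": "tools/call",
  "params": {
    "name": "ic_get_conversation",
    "arguments": {
      "id": "<conversation-id>"
    }
  }
}
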
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries the burden. It states it returns 'full message thread,' which is useful, but does not disclose any side effects, rate limits, or error conditions. The behavior is implied as a safe read, but not explicitly stated.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence is concise and front-loaded with the key action. No unnecessary words. However, it could be slightly improved by adding a brief note about what the response contains.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has only one parameter and no output schema, the description provides the essential purpose. However, it lacks details about the response format or any prerequisites, which would be helpful for an agent. It is adequate but not thorough.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so baseline is 3. The description adds value by stating 'with full message thread,' which implies the output includes messages, not just conversation metadata. This goes beyond the schema's parameter documentation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states the tool gets a conversation by ID and includes the full message thread, specifying the resource and scope. However, it does not differentiate from sibling tools like ic_list_conversations, which also return conversations but in a different context.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool vs alternatives (e.g., ic_list_conversations for listing vs getting details). The description implies it's for retrieving a single conversation with messages, but no explicit when/when-not or alternative tools mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

ic_list_companies: C

List companies with pagination. Returns company ID, name, website, employee count, and custom attributes.

Parameters (JSON Schema)
page (optional): Page number
per_page (optional): Results per page (default 20)
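
An illustrative first-page request using the documented default page size:

{
  "jsonrpc": "2.0",
  "id": 6,
  "method": "tools/call",
  "params": {
    "name": "ic_list_companies",
    "arguments": {
      "page": 1,
      "per_page": 20
    }
  }
}
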
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations are absent, so the description carries the full burden. It only states the basic action without disclosing behavioral traits such as pagination behavior, rate limits, or whether the list is exhaustive. No output schema exists to compensate.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence that efficiently conveys the tool's purpose without extraneous text. It could benefit from slightly more detail, but it is concise and front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (list operation with two optional parameters), the description is barely adequate. It lacks details on default behavior (e.g., page size), return format, and potential limitations, making it insufficient for an agent to use confidently without external knowledge.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% for both parameters (page, per_page), so the baseline is 3. The description adds no additional meaning beyond what the schema already provides, but no further elaboration is necessary given the simple parameters.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('List') and the resource ('Intercom companies'), making the purpose immediately understandable. However, it does not differentiate from sibling tools like ic_list_conversations, which is acceptable given the distinct resource name.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives. Sibling tools like ic_search_contacts suggest similar functionality for contacts, but no exclusion criteria or context is given.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

ic_list_conversations: C

List conversations with pagination. Returns conversation ID, participants, status, created date, and last message preview.

Parameters (JSON Schema)
per_page (optional): Results per page (default 20)
starting_after (optional): Pagination cursor
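
Note the cursor-based pagination here (starting_after) versus the page numbers used by ic_list_companies. A hypothetical follow-up request, where the placeholder stands in for a cursor presumably returned by the previous call:

{
  "jsonrpc": "2.0",
  "id": 7,
  "method": "tools/call",
  "params": {
    "name": "ic_list_conversations",
    "arguments": {
      "per_page": 20,
      "starting_after": "<cursor-from-previous-response>"
    }
  }
}
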
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations are empty, so the description carries the full burden. It does not disclose pagination behavior (cursor-based), rate limits, or its read-only nature. However, the schema partially fills in the pagination details.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence, no wasted words. Could be slightly improved by including key details like pagination or filtering.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema and no annotations, description should clarify return format, pagination details, and typical use cases. It only states 'List Intercom conversations', which is insufficient for a listing tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% and already describes the parameters. The description adds no additional meaning beyond the schema, so the baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description 'List Intercom conversations' uses a clear verb and resource, but is generic and does not distinguish it from siblings like 'ic_get_conversation' or 'ic_list_companies'. It lacks specificity about scope or filtering.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool vs alternatives like 'ic_get_conversation' for single conversations. No mention of limitations or prerequisites.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

ic_search_contacts: B

Search for contacts by name, email, or custom attributes. Returns contact ID, name, email, and metadata.

Parameters (JSON Schema)
query (required): Search by email, name, or other field
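
A sketch with a hypothetical email query (the schema allows searching by email, name, or other fields):

{
  "jsonrpc": "2.0",
  "id": 8,
  "method": "tools/call",
  "params": {
    "name": "ic_search_contacts",
    "arguments": {
      "query": "jane@example.com"
    }
  }
}
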
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description should disclose behavioral traits. It mentions the resource (contacts including users and leads) but does not state whether the search is exact or fuzzy, what fields are searchable beyond the param hint, or any limits/performance traits.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise with no waste. However, it is too short to add significant value, and the single sentence could be slightly more informative without harming conciseness.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given only 1 parameter and no output schema, the description is minimally adequate. It explains the tool's scope (users and leads) but lacks details on search behavior, result format, or pagination, leaving gaps for effective use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, and the description confirms the query parameter can search by email, name, or other field, adding meaningful context beyond the schema's generic description.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description states the tool searches Intercom contacts, specifying it includes both users and leads, which clearly identifies the resource and scope.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool vs. ic_get_contact (which retrieves a specific contact) or other search/list tools. The description lacks any context for selection.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

recall: A

Retrieve a previously stored memory by key, or list all stored memories (omit key). Use this to retrieve context you saved earlier in the session or in previous sessions.

Parameters (JSON Schema)
key (optional): Memory key to retrieve (omit to list all keys)
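
Because key is optional, an agent can list everything it has stored by sending empty arguments, per the description's omit-key behavior:

{
  "jsonrpc": "2.0",
  "id": 9,
  "method": "tools/call",
  "params": {
    "name": "recall",
    "arguments": {}
  }
}
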
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description discloses key behavior: if key is omitted, it lists all keys; if key is provided, it retrieves that memory. No annotations exist, so the description carries full burden. It could mention that recall is read-only (no side effects), but the listing behavior is clear.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, zero waste. The first sentence states functionality; the second provides usage context. Information is front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, the description does not detail the return format (e.g., string vs structured data). However, for a simple key-value recall tool, this is acceptable. The description covers purpose, usage, and parameter behavior completely.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so the schema already documents the parameter. The description adds value by explaining the conditional behavior (omit to list). It could be more precise about return format, but the semantics are sufficient.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action (retrieve/list) and resource (stored memory) with precise scoping ('by key' vs 'omit key'). It also distinguishes from siblings like 'remember' and 'forget' by specifying retrieval functionality.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly says when to use the tool: 'Retrieve context you saved earlier...'. It also implies when not to use an alternative (e.g., 'remember' for storing, 'forget' for deleting). The guidance is complete and actionable.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

remember: A

Store a key-value pair in your session memory. Use this to save intermediate findings, user preferences, or context across tool calls. Authenticated users get persistent memory; anonymous sessions last 24 hours.

Parameters (JSON Schema)
key (required): Memory key (e.g., "subject_property", "target_ticker", "user_preference")
value (required): Value to store (any text — findings, addresses, preferences, notes)
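
An illustrative call reusing one of the schema's example keys; the stored value is hypothetical:

{
  "jsonrpc": "2.0",
  "id": 10,
  "method": "tools/call",
  "params": {
    "name": "remember",
    "arguments": {
      "key": "target_ticker",
      "value": "AAPL"
    }
  }
}
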
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description bears the full burden. It discloses persistence behavior (authenticated users get persistent memory; anonymous sessions last 24 hours), which goes beyond the basic tool purpose. It does not mention rate limits or data limits.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences: first defines core function, second gives usage examples, third notes persistence behavior. No redundancy, front-loaded with key action and resource.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given simple parameters (2 strings) and no output schema, the description is sufficient. It covers purpose, usage, and persistence. Could mention that the value is limited to text or size constraints.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% (both parameters described). The description adds context on what types of values to store ('findings, addresses, preferences, notes') and example keys, which supplements the schema's basic type info.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool stores a key-value pair in session memory, with specific use cases like saving findings, preferences, or context. It distinguishes itself from sibling tools like 'recall' (retrieval) and 'forget' (deletion).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explains when to use this tool (save intermediate findings, user preferences, context across calls) and implies not to use it for retrieval (handled by 'recall') or deletion (handled by 'forget'). It lacks explicit when-not-to-use scenarios.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
