HubSpot
Server Details
HubSpot MCP Pack
- Status
- Healthy
- Last Tested
- Transport
- Streamable HTTP
- URL
- Repository
- pipeworx-io/mcp-hubspot
- GitHub Stars
- 0
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.6/5 across 12 of 12 tools scored. Lowest: 2.9/5.
Tools are split between generic memory/tool discovery functions and HubSpot-specific CRUD operations, which are distinct. However, ask_pipeworx overlaps conceptually with discover_tools and individual HubSpot tools by offering a natural language interface to any data, creating potential ambiguity about when to use ask_pipeworx vs. specific tools.
The HubSpot tools follow a consistent 'hs_verb_noun' pattern (hs_get_company, hs_list_companies), but the memory tools (remember, recall, forget) and discovery tools (ask_pipeworx, discover_tools) use different naming conventions without prefixes. This mix of prefixes and plain verbs reduces consistency.
With 12 tools, the count is reasonable for a server that combines HubSpot CRM operations (7 tools) with auxiliary memory and discovery functions. It's not overblown, and each tool serves a clear purpose, though the generic ask_pipeworx could arguably replace several tools.
The HubSpot CRM tools cover list and get for contacts, companies, and deals, plus search for contacts, but lack create and update operations, which are common CRM needs. The memory tools provide basic store, retrieve, and delete operations and are complete for their scope. The overall completeness is moderate due to missing HubSpot mutation endpoints.
Available Tools
12 tools
ask_pipeworx (Grade: A)
Ask a question in plain English and get an answer from the best available data source. Pipeworx picks the right tool, fills the arguments, and returns the result. No need to browse tools or learn schemas — just describe what you need. Examples: "What is the US trade deficit with China?", "Look up adverse events for ozempic", "Get Apple's latest 10-K filing".
| Name | Required | Description | Default |
|---|---|---|---|
| question | Yes | Your question or request in natural language | |
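Assuming the server follows the standard MCP `tools/call` request shape, invoking this tool reduces to one required argument. The payload below is a sketch; the request `id` and transport framing are illustrative, and the endpoint URL is server-specific.

```python
import json

# Hypothetical JSON-RPC 2.0 payload for an MCP tools/call request to
# ask_pipeworx. Only the "question" argument is required by the schema.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "ask_pipeworx",
        "arguments": {
            # The single required parameter: a natural-language question.
            "question": "What is the US trade deficit with China?",
        },
    },
}

print(json.dumps(request, indent=2))
```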
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It explains that the tool 'picks the right tool, fills the arguments, and returns the result', which gives insight into its autonomous behavior. However, it does not disclose limitations, such as potential latency, error handling, or whether it can access all data sources. With no annotations present, there is also nothing for the description to contradict.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise: a handful of short sentences that front-load the purpose and close with examples. Every sentence adds value, with no wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simple interface (one string parameter) and no output schema, the description is largely complete. It explains the behavior of routing to the best data source and filling arguments. However, it could mention that results are returned as text or clarify that the tool does not support follow-up questions.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with a single parameter 'question' described as 'Your question or request in natural language'. The description adds context by emphasizing plain English and providing examples, but the schema already adequately describes the parameter. Baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: to answer natural language questions by selecting the best data source and filling arguments. It provides concrete examples like 'What is the US trade deficit with China?' that illustrate usage. This distinguishes it from sibling tools that are either tool-discovery (discover_tools) or specific CRM lookups (hs_get_company, etc.).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly tells users to 'just describe what you need' without browsing tools or learning schemas, implying a high-level query interface. However, it does not explicitly state when not to use this tool (e.g., for direct, structured queries that siblings can handle). The examples help but lack exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
discover_tools (Grade: A)
Search the Pipeworx tool catalog by describing what you need. Returns the most relevant tools with names and descriptions. Call this FIRST when you have 500+ tools available and need to find the right ones for your task.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of tools to return (default 20, max 50) | 20 |
| query | Yes | Natural language description of what you want to do (e.g., "analyze housing market trends", "look up FDA drug approvals", "find trade data between countries") | |
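A minimal sketch of assembling discover_tools arguments, clamping `limit` to the documented bound of 50. The clamp is a client-side guard of our own, not behavior the server documents.

```python
# Build the arguments dict for a discover_tools call. limit defaults
# to 20 server-side, so we only send it when the caller sets one, and
# we keep it within the documented max of 50.
def build_discover_args(query, limit=None):
    args = {"query": query}
    if limit is not None:
        args["limit"] = max(1, min(limit, 50))  # documented max is 50
    return args

args = build_discover_args("find trade data between countries", limit=75)
print(args)  # limit clamped to 50
```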
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description discloses key behavioral traits: it returns 'the most relevant tools with names and descriptions', and it is intended as a discovery step. No annotations are provided, so the description carries the full burden. It does not mention any side effects, permissions, or rate limits, but given the search-only nature, the behavioral transparency is adequate for an AI agent.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is very concise: three sentences that are front-loaded with the core purpose. Every sentence adds value: first sentence states the action, second explains the return, third gives usage guidance. No wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (simple search with 2 parameters, no output schema), the description is complete. It explains what the tool does, how to use it, and when to call it. No missing details that would hinder an AI agent's decision to invoke it correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% coverage, with descriptions for both parameters (query and limit). The description adds value by explaining the usage of the query parameter (natural language description of what you want to do) and implicitly reinforces the limit's role. Since schema coverage is high, the baseline is 3, but the description provides an example query, raising the score to 4.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: to search the Pipeworx tool catalog using a natural language query and return the most relevant tools. It specifies the verb 'search', the resource 'tool catalog', and the action 'describe what you need', which distinguishes it from sibling tools that perform different operations like asking questions, forgetting, remembering, or working with HubSpot entities.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool: 'Call this FIRST when you have 500+ tools available and need to find the right ones for your task.' This tells the agent the priority and context for invocation, effectively differentiating it from the other tools listed as siblings.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
forget (Grade: A)
Delete a stored memory by key.
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | Memory key to delete | |
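The memory lifecycle (remember, recall, forget) can be illustrated with an in-memory stand-in. This is a local sketch, not the server's implementation; the remember/recall parameter names are assumptions, since only forget's schema is shown on this page.

```python
# In-memory stand-in for the remember / recall / forget tools, to show
# the key-based lifecycle. A real client would issue MCP tools/call
# requests instead of touching a local dict.
store = {}

def remember(key, value):
    store[key] = value

def recall(key):
    return store.get(key)

def forget(key):
    store.pop(key, None)  # deleting a missing key is a no-op here

remember("favorite_pipeline", "Q3 Enterprise")
assert recall("favorite_pipeline") == "Q3 Enterprise"
forget("favorite_pipeline")
print(recall("favorite_pipeline"))  # None after deletion
```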
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must disclose behavior. It states the action (delete) and identifies the required parameter (key). However, it doesn't mention side effects (e.g., if memory is permanently removed) or any prerequisites like authorization. The description is adequate but lacks depth.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence with no wasted words. It is front-loaded with the action and resource. Perfect conciseness for a simple tool.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool is simple (1 parameter, no output schema, no annotations), the description is complete enough. It states the action and required input. No output schema means return value doesn't need explanation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the input schema already describes the 'key' parameter as 'Memory key to delete'. The description simply says 'by key', adding minimal value. Baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb 'Delete' and the resource 'stored memory by key'. It clearly distinguishes from sibling tools like 'recall' and 'remember' which imply retrieval and storage, not deletion.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage when you want to delete a memory by key, but it doesn't explicitly state when not to use it or provide alternatives. The sibling tools include 'remember' (store) and 'recall' (retrieve), which offer some context, but no direct comparison.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
hs_get_company (Grade: C)
Fetch a company's full profile by ID. Returns name, domain, industry, revenue, and all custom properties.
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | HubSpot company ID | |
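Since `id` is the only parameter and is required, a caller can validate it before building the call. The guard below is a client-side convention, not server-documented behavior; the numeric ID is a made-up example.

```python
# Validate and assemble hs_get_company call parameters. HubSpot object
# IDs are numeric strings, so we coerce to str before sending.
def build_get_company_params(company_id):
    if not company_id:
        raise ValueError("hs_get_company requires a HubSpot company ID")
    return {"name": "hs_get_company", "arguments": {"id": str(company_id)}}

params = build_get_company_params(512)
print(params["arguments"])  # {'id': '512'}
```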
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations are empty, so the description must cover behavior. It states that the tool fetches a single company by ID and lists the returned fields, but it doesn't mention auth requirements, rate limits, or potential errors.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two short sentences, clear and concise: purpose, then returned fields. No unnecessary information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
There is no output schema, and the tool is simple with one parameter. The description is minimal and lacks details like response format or error conditions.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100% with a clear parameter 'id'. The description does not add extra meaning beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Fetch' and the resource, a single company profile by ID. It distinguishes from sibling tools like hs_list_companies, which lists multiple companies.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this vs other tools. For example, hs_list_companies is an alternative for browsing all companies, but no comparison is given.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
hs_get_contact (Grade: A)
Fetch a contact's full profile by ID. Returns name, email, phone, company, and all custom properties.
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | HubSpot contact ID | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description correctly identifies the tool as a read operation (get). With no annotations provided, the description carries full burden and is adequate but lacks additional context like rate limits, error cases, or required permissions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two concise sentences, front-loaded with the verb 'Fetch', and contains no unnecessary words while listing the returned fields.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool is simple (1 parameter, no output schema, no nested objects), the description is largely complete. It could be improved by noting the return format or potential errors, but it is sufficient for selecting and invoking the tool correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The description does not add meaning beyond the input schema for the 'id' parameter. Since schema coverage is 100% (the schema documents the parameter), a baseline score of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Fetch'), the resource (a contact's full profile), and the identifying mechanism ('by ID'). It distinguishes from sibling tools like hs_list_contacts and hs_search_contacts, which serve different purposes.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage when you need a single contact by ID, but it does not explicitly state when to use this tool over alternatives like hs_search_contacts. No guidance on prerequisites or when not to use it.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
hs_get_deal (Grade: B)
Fetch a deal's full details by ID. Returns deal name, amount, stage, owner, and linked contacts and companies.
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | HubSpot deal ID | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description states it gets a single deal by ID, which implies a read-only operation. Since annotations are empty, the description partially fulfills the transparency burden by indicating it is a retrieval action. However, it does not disclose any behavioral traits like potential errors (e.g., if ID not found), rate limits, or data freshness.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two clear sentences that state the tool's purpose and returned fields without unnecessary words. It is front-loaded and concise.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (one required parameter, no output schema), the description is adequate but not complete. It lists the returned fields but does not mention edge cases such as an unknown ID. However, for a simple retrieval tool, a minimal description can suffice.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% coverage, with one required parameter 'id' described as 'HubSpot deal ID'. The description adds no further semantic meaning beyond the schema. Per guidelines, with high schema coverage, baseline is 3.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Fetch'), the resource (a single HubSpot deal), and the identifier (by ID). It distinguishes from sibling tools like hs_list_deals (which returns multiple deals) and hs_get_company/hs_get_contact (which retrieve different entity types). However, it does not explicitly state that the tool is for retrieval only, which is implied but not explicit.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives. For example, it does not mention that hs_list_deals should be used to retrieve multiple deals or that hs_search_contacts is for searching. The agent would need to infer from sibling tool names.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
hs_list_companies (Grade: C)
Browse all companies in your HubSpot workspace. Returns company IDs, names, domains, and properties. Paginate with limit and after parameters.
| Name | Required | Description | Default |
|---|---|---|---|
| after | No | Pagination cursor from a previous response | |
| limit | No | Maximum number of companies to return (default 10, max 100) | 10 |
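The limit/after parameters suggest cursor pagination. The loop below is a sketch that drains all pages; `call_tool` is a local stub standing in for a real MCP client, and the response shape (`results` plus `paging.next.after`) mirrors HubSpot's list APIs but is an assumption here, since no output schema is published.

```python
# Drain hs_list_companies page by page using the after cursor.
def call_tool(name, arguments):
    # Stub: pretend the workspace holds 25 companies and serve pages
    # of `limit`, returning an opaque `after` cursor while more remain.
    companies = [{"id": str(i)} for i in range(25)]
    start = int(arguments.get("after") or 0)
    limit = arguments.get("limit", 10)
    out = {"results": companies[start:start + limit], "paging": {}}
    if start + limit < len(companies):
        out["paging"] = {"next": {"after": str(start + limit)}}
    return out

all_companies, after = [], None
while True:
    args = {"limit": 10}
    if after:
        args["after"] = after  # cursor from the previous response
    resp = call_tool("hs_list_companies", args)
    all_companies.extend(resp["results"])
    after = resp["paging"].get("next", {}).get("after")
    if after is None:
        break

print(len(all_companies))  # 25 across three pages with this stub
```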
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description must disclose all behavioral traits. It mentions pagination via 'after' cursor and limit, but does not state if the tool is read-only (presumably it is), what happens on errors, or any rate limits. The lack of annotations increases the burden, and the description falls short.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is three short sentences that efficiently convey the purpose, the returned fields, and pagination. It is front-loaded and concise, though it could be slightly more structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the low complexity (2 optional params, no output schema, no nested objects), the description is adequate but not complete. It explains pagination but omits details like default limit, maximum, and the fact that 'after' comes from a previous response (implicit from schema). With sibling tools, more context on when to list vs search would help.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the description adds no new parameter info. Its 'Paginate with limit and after parameters' merely echoes the schema. A baseline of 3 is appropriate since the schema already covers the parameters.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Browse' and the resource 'all companies in your HubSpot workspace'. It also mentions pagination support, which adds specificity. However, it does not distinguish from sibling tools like hs_list_contacts or hs_list_deals, though the resource type is inherently distinct.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives, such as hs_get_company for a single company or hs_search_companies (which doesn't exist but is implied). No exclusions or prerequisites are mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
hs_list_contacts (Grade: B)
Browse all contacts in your HubSpot workspace. Returns contact IDs, names, emails, and properties. Paginate with limit and after parameters.
| Name | Required | Description | Default |
|---|---|---|---|
| after | No | Pagination cursor from a previous response | |
| limit | No | Maximum number of contacts to return (default 10, max 100) | 10 |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations are empty, so description must cover behavioral traits. Mentions pagination (limit, after), which is helpful. But does not disclose if contacts are sorted, or any potential rate limits or data freshness.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three short sentences, concise and front-loaded with the purpose, returned fields, and pagination. No wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given low complexity (2 params, no output schema), description is adequate but could mention return format or default sorting. Completeness is moderate.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so baseline is 3. Description does not add meaning beyond schema for parameters, but the schema itself is clear.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states it lists contacts from HubSpot CRM and mentions pagination support. However, it does not differentiate from sibling tools like hs_search_contacts, which also returns contacts but via search.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives like hs_search_contacts for filtering or hs_get_contact for a single contact. Does not mention when not to use it.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
hs_list_deals (Grade: B)
Browse all deals in your HubSpot workspace. Returns deal IDs, names, amounts, pipeline stages, and close dates. Paginate with limit and after parameters.
| Name | Required | Description | Default |
|---|---|---|---|
| after | No | Pagination cursor from a previous response | |
| limit | No | Maximum number of deals to return (default 10, max 100) | 10 |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds behavioral context about pagination (limit and after cursor) beyond what annotations provide (none). However, it does not disclose default limit, maximum limit, or any side effects. Since annotations are empty, the description carries the burden, but it is adequate.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is three sentences: the first states the core purpose, the second the returned fields, the third pagination. It is concise and front-loaded, but could be slightly more efficient by merging or adding a usage hint.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple listing tool with good schema coverage and no output schema, the description is minimally complete. It explains pagination but does not mention default behavior or response structure. Adequate but not exceptional.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already describes both parameters clearly. The description only mentions pagination generically, adding no new semantic detail beyond what the schema provides. Baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool lists deals from HubSpot CRM and mentions pagination support, which is a key feature. It is distinct from sibling tools like hs_get_deal (single deal) and hs_list_contacts/hs_list_companies (different entities).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for listing deals with pagination, but it does not explicitly state when to use this tool versus alternatives like searching or getting a single deal. No exclusions or prerequisites are mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
hs_search_contacts (Grade: B)
Search contacts by name, email, or custom properties. Use when you need to find specific people in your database.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of results (default 10, max 100) | 10 |
| query | Yes | Search query (e.g., name or email) | |
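Since search and list overlap, a client can route between them on a simple rule: a query means search, no query means browse. This dispatcher is a sketch of one plausible convention, not guidance from the server itself.

```python
# Route contact lookups: hs_search_contacts when a query is given,
# hs_list_contacts otherwise. limit is kept within the documented
# maximum of 100 in both cases.
def build_contact_request(query=None, limit=10):
    capped = min(limit, 100)  # documented max is 100
    if query:
        return {"name": "hs_search_contacts",
                "arguments": {"query": query, "limit": capped}}
    return {"name": "hs_list_contacts", "arguments": {"limit": capped}}

req = build_contact_request("ada@example.com", limit=5)
print(req["name"])  # hs_search_contacts
```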
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the burden. It indicates that matching is against name, email, or custom properties, which implies a search operation. No mention of rate limits, authentication, or side effects. Since it's a search tool, destructive behavior is unlikely, but the description could be more explicit about its read-only nature.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences that are clear and to the point. It front-loads the purpose, mentions the key matching fields, and adds usage guidance. No unnecessary words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a search tool with 2 parameters and no output schema, the description adequately explains what the tool does. However, it could note whether results beyond the limit are retrievable, since the schema exposes no offset or cursor parameter. The lack of output schema means the description doesn't need to explain return values, but it could hint at the response structure.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents both parameters. The description adds that the query matches against name, email, and other properties, which provides some context beyond the schema's generic 'Search query' description. However, it doesn't add details about the 'limit' parameter beyond what the schema says.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool searches HubSpot contacts by a query string and matches against default properties like name and email. It distinguishes itself from sibling tools like hs_get_contact (which likely retrieves a single contact) and hs_list_contacts (which probably lists all contacts).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives. For example, when to use search vs. list vs. get is not mentioned. No exclusion criteria or prerequisites are provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
recall
Retrieve a previously stored memory by key, or list all stored memories (omit key). Use this to retrieve context you saved earlier in the session or in previous sessions.
| Name | Required | Description | Default |
|---|---|---|---|
| key | No | Memory key to retrieve (omit to list all keys) | |
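The two retrieval modes the description mentions can be sketched as request payloads, assuming the standard MCP JSON-RPC `tools/call` envelope; the key value is illustrative.

```python
# Mode 1: retrieve a single memory by key.
recall_one = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {"name": "recall", "arguments": {"key": "subject_property"}},
}

# Mode 2: omit "key" entirely to list all stored memory keys instead.
recall_all = {
    "jsonrpc": "2.0",
    "id": 3,
    "method": "tools/call",
    "params": {"name": "recall", "arguments": {}},
}
```

The mode switch is driven purely by the presence or absence of the optional `key` argument.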
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries the full burden. It describes both behaviors (list vs. retrieve) but does not mention side effects, authorization requirements, or the return format beyond the 'memory' concept. Adequate, but it could be richer.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, front-loaded with purpose, efficient and clear.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Simple tool with one optional parameter, no output schema. Description is sufficient for a straightforward retrieval tool, covering both retrieval modes.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so baseline is 3. Description adds context about 'omit to list all keys' but does not add more detail beyond schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states the tool retrieves memories by key or lists all if key omitted, distinguishing it from sibling tools like 'remember' (store) and 'forget' (delete).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly says to use for retrieving context saved earlier, implying when to use (for recall). Lacks explicit alternatives or when-not-to-use, but sibling names and context provide differentiation.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
remember
Store a key-value pair in your session memory. Use this to save intermediate findings, user preferences, or context across tool calls. Authenticated users get persistent memory; anonymous sessions last 24 hours.
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | Memory key (e.g., "subject_property", "target_ticker", "user_preference") | |
| value | Yes | Value to store (any text — findings, addresses, preferences, notes) | |
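A store operation with this schema can be sketched as a `tools/call` payload, again assuming the standard MCP JSON-RPC envelope; the key/value pair below reuses an example key from the schema and is purely illustrative.

```python
# Persist an intermediate finding so later tool calls can recall it.
store = {
    "jsonrpc": "2.0",
    "id": 4,
    "method": "tools/call",
    "params": {
        "name": "remember",
        "arguments": {
            "key": "target_ticker",   # example key suggested by the schema
            "value": "HUBS",          # any free-text value is accepted
        },
    },
}
```

Per the description, whether this pair outlives the session depends on authentication: persistent for authenticated users, 24 hours for anonymous sessions.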
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It discloses persistence behavior ('Authenticated users get persistent memory; anonymous sessions last 24 hours'), which is crucial for an agent. No contradiction with annotations exists.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences, each adding value: action, use case, persistence distinction. No filler, front-loaded with purpose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simple input schema (2 required string params), no output schema, and no annotations, the description is complete. It covers purpose, usage, and behavioral nuance (persistence). Sibling tools like 'recall' and 'forget' complement it.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so the schema already describes both parameters well. The description adds minimal extra meaning beyond what the schema provides (e.g., examples in value description). Baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's function: 'Store a key-value pair in your session memory.' It specifies the resource (session memory) and the action (store), distinguishing it from siblings like 'recall' and 'forget'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context: 'Use this to save intermediate findings, user preferences, or context across tool calls.' It also differentiates between authenticated (persistent) and anonymous (24-hour) sessions, but does not explicitly mention when not to use it or alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
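Before publishing, it can help to sanity-check the file locally. The validator below is a hypothetical pre-flight sketch based only on the structure shown above; Glama's actual verifier may enforce additional rules.

```python
import json

REQUIRED_SCHEMA = "https://glama.ai/mcp/schemas/connector.json"

def validate_glama_manifest(raw: str) -> list[str]:
    """Return a list of problems found in a /.well-known/glama.json payload."""
    problems = []
    try:
        doc = json.loads(raw)
    except json.JSONDecodeError as exc:
        return [f"invalid JSON: {exc}"]
    if doc.get("$schema") != REQUIRED_SCHEMA:
        problems.append("missing or wrong $schema")
    maintainers = doc.get("maintainers")
    if not isinstance(maintainers, list) or not maintainers:
        problems.append("maintainers must be a non-empty list")
    elif not all(isinstance(m, dict) and "@" in str(m.get("email", ""))
                 for m in maintainers):
        problems.append("each maintainer needs an email")
    return problems

# A well-formed manifest produces no problems; an empty object produces two.
good = '{"$schema": "%s", "maintainers": [{"email": "owner@example.com"}]}' % REQUIRED_SCHEMA
ok_problems = validate_glama_manifest(good)
bad_problems = validate_glama_manifest("{}")
```

Remember that the email in the real file must match your Glama account email, which this local check cannot verify.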
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is marked unhealthy when Glama cannot successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.
Discussions
No comments yet. Be the first to start the discussion!