agent-commerce-mcp
Server Details
Agent Commerce MCP — agent-native A2A storefront. Discovery, Stripe checkout, affiliate program.
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
- Repository: bch1212/agent-commerce-mcp
- GitHub Stars: 0
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
- Full call logging: Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
- Tool access control: Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
- Managed credentials: Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
- Usage analytics: See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.6/5 across 13 of 13 tools scored.
Each tool has a clear, distinct purpose. There is no ambiguity between tools like compare_products, search_products, and get_recommendation, as they operate on different inputs and outputs. Even similar tools like get_affiliate_info and register_affiliate are well-separated by action.
All tool names follow a consistent verb_noun pattern in snake_case (e.g., create_checkout, get_affiliate_info, search_products). This predictability makes it easy for an agent to infer functionality from the name alone.
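The convention praised above can be checked mechanically. A minimal sketch, assuming a simple snake_case verb_noun pattern; the sample list covers only tool names quoted in this review, not the full set of 13:

```python
import re

# Matches lowercase snake_case names with at least two segments,
# e.g. "compare_products" or "get_affiliate_info".
VERB_NOUN = re.compile(r"^[a-z]+(_[a-z0-9]+)+$")

sample = [
    "compare_products", "create_checkout", "get_affiliate_info",
    "search_products", "register_affiliate", "request_partnership",
]
assert all(VERB_NOUN.match(name) for name in sample)
```

A camelCase name like "CompareProducts" would fail this check, which is exactly the kind of inconsistency the pattern guards against.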
13 tools is an appropriate count for an e-commerce and affiliate marketplace server. It covers discovery, comparison, purchasing, affiliate management, and vendor verification without being overwhelming or sparse.
The tool set covers the main lifecycle: product discovery (search, recommendation), comparison, pricing, trial, purchase, and post-purchase (install, cross-sells). Minor gaps like order history or refunds exist, but the core commerce flow is well-supported.
Available Tools
13 tools
compare_products: Compare products (Grade: B)
Side-by-side feature/price matrix for 2-5 products.
| Name | Required | Description | Default |
|---|---|---|---|
| slugs | Yes | Product slugs to compare side-by-side | |
| vs_competitor | No | Optional competitor name for context | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided. Description mentions 'matrix' but lacks details on read-only nature, authentication, rate limits, or side effects. For a tool without annotations, this is insufficient.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
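As a sketch of what such disclosure could look like, here is a hypothetical redefinition of compare_products with an expanded description and MCP-style behavioral annotations (readOnlyHint and friends follow the MCP tool-annotations convention; the minItems/maxItems constraints are assumptions inferred from the "2-5 products" wording):

```python
# Hypothetical tool definition showing the behavioral disclosure the
# review finds missing. Nothing here is taken from the actual server.
compare_products_tool = {
    "name": "compare_products",
    "description": (
        "Side-by-side feature/price matrix for 2-5 products. "
        "Read-only; no authentication required and no side effects."
    ),
    "inputSchema": {
        "type": "object",
        "properties": {
            "slugs": {
                "type": "array",
                "items": {"type": "string"},
                "minItems": 2,   # assumed from the "2-5 products" wording
                "maxItems": 5,
                "description": "Product slugs to compare side-by-side",
            },
            "vs_competitor": {
                "type": "string",
                "description": "Optional competitor name for context",
            },
        },
        "required": ["slugs"],
    },
    "annotations": {
        "readOnlyHint": True,      # tool does not modify any state
        "destructiveHint": False,  # no irreversible effects
        "idempotentHint": True,    # repeated calls return the same matrix
        "openWorldHint": True,     # reads from an external catalog
    },
}
```

With annotations like these in place, an agent can verify safety traits before calling rather than inferring them from prose.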
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence with no waste, directly states purpose and constraints. Efficiently structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
No output schema or annotations. Missing details on return format, data freshness, or comparison criteria beyond features/prices. Incomplete for a tool with no structured behavioral metadata.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% coverage with clear descriptions for both parameters. The description adds 'feature/price matrix' context but does not enhance parameter-specific meaning beyond what schema provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear verb 'compare' with resource 'products', specifies side-by-side feature/price matrix and range of 2-5 products, differentiating from get_pricing or search_products.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit when-to-use or when-not-to-use instructions. The description implies use for comparison but does not mention alternatives like get_recommendation or limitations.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
create_checkout: Create checkout session (Grade: A)
Create a live Stripe (or Gumroad) checkout URL for the buyer. Pass referral_code to credit an affiliate.
| Name | Required | Description | Default |
|---|---|---|---|
| tier | Yes | Tier name (e.g., 'Pro', 'Team') | |
| | Yes | Buyer email — Stripe will send the receipt here | |
| product_slug | Yes | Product slug to buy | |
| referral_code | No | Affiliate referral code that should be credited | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description must fully disclose behavior. It mentions creating a live URL and affiliate credit, but omits critical details: whether it actually charges, idempotency, rate limits, authentication needs, or side effects. For a payment tool, this is insufficient.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, front-loaded with the main action, no redundant words. Every sentence adds information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool involving payments and 4 parameters, the description is too minimal. It doesn't specify return value (though it implies a URL), whether the operation is synchronous, or error scenarios. No output schema makes this gap worse.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with good parameter descriptions. The description adds value by explaining the tool supports both Stripe and Gumroad and clarifying that referral_code credits an affiliate, which connects purpose to parameters.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('create a checkout URL'), the resource ('Stripe or Gumroad'), and distinguishes from sibling tools (e.g., compare_products, get_affiliate_info) by focusing on checkout creation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for creating a checkout URL and mentions referral_code for affiliate credit, but lacks explicit guidance on when to use this tool vs alternatives (e.g., when to use register_affiliate or get_affiliate_info). No when-not or context provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
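For illustration, a create_checkout call might be issued over MCP's JSON-RPC transport like this. This is a sketch under assumptions: the "email" argument name is a guess, since the parameter table above lists a required buyer-email field without showing its exact name, and the slug/tier values are the examples quoted in the schemas:

```python
import json

# Hypothetical tools/call request for create_checkout. The "email" key
# is an assumed name; the real parameter name is not shown on this page.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "create_checkout",
        "arguments": {
            "product_slug": "injectshield",  # example slug from get_pricing's schema
            "tier": "Pro",                   # example tier from the schema docs
            "email": "buyer@example.com",    # assumed field name
            "referral_code": "REF-1234",     # optional: credits an affiliate
        },
    },
}
print(json.dumps(request, indent=2))
```

The server would respond with a live checkout URL; because the tool has no output schema, the exact response shape is not documented.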
get_affiliate_info: Affiliate program info (Grade: A)
Commission rates (15-30%), tracking, and tier perks. Pass product_slug for that product's specific rate.
| Name | Required | Description | Default |
|---|---|---|---|
| product_slug | No | Optional: a specific product to get the affiliate rate for | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full responsibility. It discloses commission ranges, tracking, and tier perks, and that product_slug filters results. However, it does not explicitly state that the tool is read-only or non-destructive, which would be helpful given no annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two concise sentences that are front-loaded: the first gives the general info, the second clarifies the parameter. No wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple informational tool with one optional parameter and no output schema, the description covers the key points: commission rates, tracking, tier perks, and how to get specific rates. It could mention the return format but is otherwise complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage. The description adds value by explaining that product_slug returns that product's specific rate, going beyond the schema's generic description.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it provides commission rates (15-30%), tracking, and tier perks, distinguishing it from sibling tools like register_affiliate and request_partnership. It also specifies that passing product_slug yields a specific rate, making the purpose unambiguous.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for retrieving affiliate info, but does not explicitly state when to use this tool versus alternatives like register_affiliate or request_partnership. It tells when to pass the optional parameter but lacks when-not-to guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_cross_sells: Get cross-sell recommendations (Grade: A)
Given the current product, return related products from the cross-sell graph and the recommended bundle.
| Name | Required | Description | Default |
|---|---|---|---|
| agent_context | No | Optional: extra context about the buyer's stack or goals | |
| current_product | Yes | Product the buyer is currently considering or already owns | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries burden. It discloses that the tool returns cross-sell products and a bundle, but does not mention side effects (none expected), authentication, or rate limits. Adequate but not exhaustive.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence, 18 words, front-loaded with condition 'Given the current product', zero waste.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description explains the return value (related products and bundle). It covers the required input and is complete for a simple tool, though it could differentiate from get_recommendation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with descriptions for both parameters. The description reiterates the role of current_product but adds no new meaning for agent_context beyond the schema. Baseline 3 applies.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description specifies the action ('return') and the resource ('related products from the cross-sell graph and the recommended bundle'), and it distinguishes this tool from siblings like get_recommendation by focusing on cross-sells based on a current product.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
It clearly states when to use ('Given the current product'), but does not explicitly mention when not to use or compare with alternative tools like get_recommendation.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_free_tier: Get free tier access (Grade: B)
Returns instant access details for a product's free tier (signup URL or install command).
| Name | Required | Description | Default |
|---|---|---|---|
| product_slug | Yes | Product to access for free | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of disclosing behavioral traits. It states 'returns instant access details' but omits any information about side effects, authentication requirements, rate limits, or idempotency, leaving the agent with insufficient safety context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, front-loaded sentence that efficiently conveys the core purpose with no extraneous content. Every word earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (one parameter, no output schema, no annotations), the description is adequate but lacks details on the return value structure or error handling. It mentions the output includes a signup URL or install command but does not specify the format or behavior on failure.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100% with the parameter 'product_slug' already described as 'Product to access for free'. The tool description adds no additional meaning, format examples, or constraints beyond the schema, resulting in no added value.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly identifies the verb 'returns' and the resource 'free tier access details', specifying that it provides a signup URL or install command. This distinguishes it from sibling tools like get_pricing or get_mcp_install.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description does not provide any guidance on when to use this tool versus alternatives, nor does it mention prerequisites or exclusions. Context for selection among siblings is absent.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_mcp_install: Get MCP install command (Grade: B)
Exact install command/snippet for a product's MCP server in claude_desktop, claude_code, cursor, cline, or windsurf.
| Name | Required | Description | Default |
|---|---|---|---|
| client | Yes | Target MCP client | |
| product_slug | Yes | Product slug, must be an MCP-enabled product | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description must disclose behavioral traits. It describes the output as an 'install command/snippet' but does not mention side effects, authentication needs, rate limits, or error conditions. It does not confirm that the operation is read-only or non-destructive.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, well-structured sentence with no extraneous words. It is front-loaded with the key action and resource, making it easy for an agent to quickly understand the tool's purpose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description should clarify the format of the returned command/snippet (e.g., is it a string, object, or list?). It also lacks information on error handling, prerequisites (e.g., authentication), and what happens if the product is not MCP-enabled. This leaves the agent guessing about important details.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so the schema already documents both parameters with descriptions. The description adds context about the clients and that the product must be MCP-enabled, but the schema already states 'must be an MCP-enabled product'. The added value is minimal, hence a baseline score of 3.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool returns an exact install command or snippet for a product's MCP server in five named clients. It uses a specific verb ('get') and resource ('MCP install command'), and the purpose is distinct from sibling tools which handle products, pricing, affiliates, etc.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for retrieving install commands for MCP-enabled products in several clients, but it does not explicitly explain when to use this tool versus alternatives. No guidance on prerequisites or scenarios where this tool is inappropriate is provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_pricing: Get pricing (Grade: B)
Full pricing breakdown for a product (tiers, monthly/yearly, affiliate rate).
| Name | Required | Description | Default |
|---|---|---|---|
| tier | No | Specific tier name to highlight | |
| billing | No | | |
| product_slug | Yes | Product slug from the catalog (e.g., 'injectshield') | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description must fully disclose behavioral traits. It only describes the output content ('pricing breakdown') but does not mention read-only status, authentication requirements, rate limits, or side effects. This is insufficient for a tool with no structured annotation support.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
A single sentence of 10 words efficiently conveys the core purpose. There is no redundant or extraneous information; every word serves a purpose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The tool has 3 parameters (1 required) and no output schema. The description partially compensates by outlining the output structure (tiers, monthly/yearly, affiliate rate), but it omits error handling, pagination (if any), and behavioral context like read-only or auth requirements. With no annotations, this is a notable gap, though the core functionality is conveyed.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 67% (product_slug and tier have descriptions; billing has enum). The description adds value by hinting that the billing parameter corresponds to 'monthly/yearly' and the tier parameter filters by 'tiers'. However, it does not explain the behavioral meaning of parameters beyond what the schema already conveys, so the added semantic value is moderate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it provides a 'full pricing breakdown' for a product, specifying the included aspects: tiers, monthly/yearly options, and affiliate rate. This defines the tool's purpose well but does not explicitly differentiate it from siblings like get_affiliate_info or get_free_tier, though the scope is reasonably distinct.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives such as get_affiliate_info (for affiliate-specific pricing) or compare_products (for multi-product comparisons). The description does not explain appropriate use cases or exclusions, leaving the agent without context for selection.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
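A hypothetical argument payload for get_pricing, sketched under assumptions: the "monthly"/"yearly" billing values are inferred from the description's "monthly/yearly" wording, since the schema's actual enum is not shown on this page:

```python
import json

# Assumed arguments for a get_pricing call; only product_slug is required.
args = {
    "product_slug": "injectshield",  # example slug from the schema docs
    "tier": "Pro",                   # optional: tier to highlight
    "billing": "yearly",             # assumed enum value ("monthly"/"yearly")
}
print(json.dumps(args))
```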
get_recommendation: Recommend products for a problem (Grade: A)
Given a problem statement (and optionally a stack/company size), return 1-3 best-fit products with reasoning.
| Name | Required | Description | Default |
|---|---|---|---|
| stack | No | Tech stack or category they already use (e.g., 'OpenAI + Pinecone + Vercel') | |
| problem | Yes | Problem the user is trying to solve, in their own words | |
| company_size | No | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided; the description discloses that the tool returns 1-3 best-fit products with reasoning, but lacks details on how recommendations are determined, privacy, or potential limitations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, clear sentence that efficiently conveys the purpose and inputs. Could be slightly more structured, but no wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema and no annotations, the description covers the main purpose and inputs but does not specify the format of the output (e.g., JSON structure of products and reasoning).
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 67% (2 of 3 parameters have descriptions); the description adds minimal value beyond the schema. It clarifies that 'stack' is an existing tech stack and 'company_size' is optional, but does not explain enum values.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb ('return') and noun ('best-fit products with reasoning'), and clearly distinguishes from sibling tools like search_products (search by criteria) and compare_products (compare specific ones).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description states to use when a user has a problem and optionally provides stack/company size. It does not explicitly say when not to use or mention alternatives, but the context is clear.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_trust_score: Get vendor trust score (AgentTrust) (Grade: A)
Returns Halverson IQ's AgentTrust reputation score plus operational summary.
| Name | Required | Description | Default |
|---|---|---|---|
No parameters
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It does not disclose behavioral traits like safety, auth requirements, or side effects beyond stating it returns data.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
A single, front-loaded sentence that efficiently conveys the tool's purpose without extraneous information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
While the tool is simple (no parameters), the description leaves 'operational summary' vague. Without an output schema, the agent lacks clarity on the return format.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has zero parameters, so schema coverage is 100%. The description adds value by explaining the output includes a reputation score and operational summary, which the schema cannot convey.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action (returns) and the resource (Halverson IQ's AgentTrust reputation score plus operational summary). It distinguishes this tool from siblings like get_pricing or compare_products by focusing on a trust metric.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives. No mention of scenarios, prerequisites, or exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
register_affiliate: Register as an affiliate (Grade: A)
Instantly register an agent or operator as an affiliate. Returns a referral_code for use in create_checkout.
| Name | Required | Description | Default |
|---|---|---|---|
| agent_id | Yes | Stable identifier for the recommending agent (e.g., 'cursor.user-12345') | |
| products | No | Optional: specific products this affiliate plans to promote | |
| operator_email | Yes | Where to send commission payouts | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so the description carries full burden. It indicates a write operation ('instantly register') and return of a referral_code, but does not disclose idempotency, error handling, or prerequisites beyond the schema.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence packed with essential information: action, return value, and usage hint. No wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description correctly mentions the return value. It covers the main purpose and link to sibling tool. Lacks details on duplicate registrations or failure cases, but sufficient for most agents.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with clear descriptions. The description adds minimal extra meaning (e.g., 'commission payouts' context for operator_email). Baseline score of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Describes a specific action ('register an agent or operator as an affiliate') and the return value ('returns a referral_code'). Clearly distinguishes from siblings like 'get_affiliate_info' (which retrieves) and 'create_checkout' (which uses the code).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states the tool is for use before 'create_checkout', providing context for when to use. However, it does not mention when not to use or alternative tools (e.g., if already registered).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
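To illustrate what fuller disclosure could look like, here is a sketch of a definition for this tool that carries MCP behavioral annotations rather than relying on prose alone. The annotation values, and the idempotency claim in particular, are assumptions for illustration, not documented server behavior:

```python
# Sketch: a register_affiliate definition that discloses behavior
# via MCP tool annotations. Values below are assumed, not confirmed.
tool_definition = {
    "name": "register_affiliate",
    "description": (
        "Register an agent or operator as an affiliate and return a "
        "referral_code to pass to create_checkout."
    ),
    "annotations": {
        "readOnlyHint": False,     # creates a record: a write operation
        "destructiveHint": False,  # does not delete or overwrite data
        "idempotentHint": True,    # assumed: re-registering is safe
        "openWorldHint": True,     # interacts with an external service
    },
}
```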
request_partnership: Request a partnership (Grade B)
Submit a partnership proposal (cross-listing, joint bundle, embed, co-marketing). Reviewed in 3 business days.
| Name | Required | Description | Default |
|---|---|---|---|
| agent_id | Yes | Identifier for the proposing agent or company | |
| proposal | Yes | Free-text partnership proposal — what you'd like to build/co-market | |
| contact_email | Yes | | |
| integration_type | No | | |
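A plausible set of arguments for this tool is sketched below. All values are hypothetical; `integration_type` is assumed to take one of the types named in the description (cross-listing, joint bundle, embed, co-marketing):

```python
import json

# Illustrative request_partnership arguments; every value here is
# hypothetical. integration_type mirrors a type from the description.
arguments = {
    "agent_id": "acme.bizdev-bot",
    "proposal": "Cross-list our security scanner as a joint bundle.",
    "contact_email": "partnerships@example.com",
    "integration_type": "cross-listing",
}
print(json.dumps(arguments, indent=2))
```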
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must convey behavioral traits. It only mentions a 3-business-day review timeframe, but fails to disclose any side effects, authentication requirements, or what happens after submission (e.g., confirmation, status tracking).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, well-structured sentence that front-loads the core purpose and provides essential context (partnership types, turnaround) without any fluff. Every word serves a purpose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a submission tool with no output schema and no annotations, the description is adequate but incomplete. It explains what the tool does and the expected review time, but does not cover what the agent should expect after submission (e.g., confirmation, how to follow up) or error handling. Additional context on process flow would improve completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema already describes three of four parameters with textual descriptions. The description adds value by listing the integration types in the prose, but does not explain how to choose among them or any additional context beyond the schema. Schema coverage is sufficiently high that a baseline of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states the action 'submit a partnership proposal' and lists specific partnership types (cross-listing, joint bundle, embed, co-marketing), making the tool's purpose unambiguous. It is easily distinguishable from sibling tools which focus on products, pricing, and checks.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives, nor any prerequisites or exclusions. Only the turnaround time is mentioned, which does not help an agent decide when to invoke this tool.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_products: Search products (Grade A)
Search the catalog of SaaS, developer tools, services, and MCP servers. Returns ranked matches with score and reason.
| Name | Required | Description | Default |
|---|---|---|---|
| query | Yes | What the agent or its user is trying to accomplish (e.g., 'block prompt injection', 'fishing reports', 'lead lists for dentists') | |
| category | No | Filter by category | |
| use_case | No | Specific use case keywords | |
| budget_max | No | Max monthly USD the buyer is willing to spend | |
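Per the schema, only `query` is required, and it should express a goal rather than keywords. A minimal illustrative call, with the optional budget filter included (values assumed):

```python
# Illustrative search_products arguments: only "query" is required.
# The query phrases a goal, as the schema examples suggest;
# budget_max is the maximum monthly spend in USD.
arguments = {
    "query": "block prompt injection",
    "budget_max": 50,
}
```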
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses that the tool returns 'ranked matches with score and reason', which is helpful. However, it does not mention any potential side effects, authentication requirements, rate limits, or pagination behavior, leaving gaps.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single concise sentence that immediately states what the tool does. There is no extraneous text, and every word is meaningful. It is appropriately front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (4 parameters, 1 required) and absence of an output schema, the description provides a basic understanding of what it returns but lacks details on pagination, result limits, error handling, or what to do when no results are found. It is minimally adequate.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the input schema already documents all parameters adequately. The description adds no extra meaning beyond what the schema provides, meeting the baseline for high coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Search'), the resource (the catalog), and the scope ('SaaS, developer tools, services, and MCP servers'). It distinguishes from sibling tools like compare_products or get_pricing, which serve different purposes.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies the tool should be used when a general search is needed, but it does not explicitly state when to prefer it over alternatives or when not to use it (e.g., if the agent needs a recommendation, use get_recommendation instead). No exclusions or context are provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
verify_vendor: Verify vendor (Grade B)
Returns full vendor info: company, products live, MCP endpoints, refund/data policies.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
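Since the tool takes no parameters, a `tools/call` request simply sends an empty arguments object, as in this sketch:

```python
# verify_vendor takes no parameters, so the tools/call request
# carries an empty arguments object.
request = {
    "jsonrpc": "2.0",
    "id": 7,
    "method": "tools/call",
    "params": {"name": "verify_vendor", "arguments": {}},
}
```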
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full burden for behavioral traits. It only describes the return value but does not mention read-only nature, potential side effects, authentication requirements, or rate limits. This lack of transparency is a significant gap for an unannotated tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, well-structured sentence with a colon introducing a list. No unnecessary words; every part adds value. It is front-loaded with the core purpose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given zero parameters and no output schema, the description adequately states what the tool returns. However, it omits how the vendor is selected (e.g., from user context or a previous step) and does not explain the term 'verify' (e.g., does it involve validation?). This leaves some contextual ambiguity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has zero parameters and 100% coverage, so the baseline is 3. The description adds no parameter semantics (correctly, as none exist) but also does not clarify how the vendor is identified (e.g., implicit context). It adds value by describing the output structure.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'returns' and specifies the resource 'full vendor info' along with a detailed list of contents (company, products live, MCP endpoints, refund/data policies). This distinguishes it from sibling tools like compare_products or create_checkout.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. Sibling tools such as get_trust_score or get_free_tier might also provide vendor-related information, but no explicit when-to-use or when-not-to-use context is given.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
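As a rough sketch of the check a verifier could perform against such a file (the helper below is hypothetical, not Glama's actual code):

```python
import json

def maintainer_emails(raw: str) -> list[str]:
    """Parse a /.well-known/glama.json document and return the
    maintainer email addresses it declares."""
    doc = json.loads(raw)
    return [m["email"] for m in doc.get("maintainers", [])]

# Sample document matching the structure shown above.
sample = json.dumps({
    "$schema": "https://glama.ai/mcp/schemas/connector.json",
    "maintainers": [{"email": "your-email@example.com"}],
})
```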
- Control your server's listing on Glama, including description and metadata
- Access analytics and receive server usage reports
- Get monitoring and health status updates for your server
- Feature your server to boost visibility and reach more users
For users:
- Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management – store and rotate API keys and OAuth tokens in one place
- Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
- Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.