Server Details
Search and discover advertiser products through an open marketplace for AI agents.
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
- Repository: nexbid-dev/protocol-commerce
- GitHub Stars: 0
- Server Listing: Nexbid
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Score is being calculated. Check back soon.
Available Tools
5 tools

nexbid_categories (Read-only, Idempotent)
<tool_description> List all available product categories in the Nexbid marketplace with product counts. Optionally filter by country. </tool_description>
<when_to_use> When user wants to explore what is available before searching. Use BEFORE nexbid_search to help narrow down the query. </when_to_use>
<combination_hints> nexbid_categories → nexbid_search with category filter for targeted results. Good starting point for browse intent. </combination_hints>
<output_format> List of categories with product counts. Optionally filtered by country. </output_format>
| Name | Required | Description | Default |
|---|---|---|---|
| geo | No | ISO 3166-1 alpha-2 country code to filter categories | |
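As a concrete illustration of invoking this tool, the sketch below builds a `tools/call` request for `nexbid_categories` with the optional `geo` filter. The tool name and parameter come from the listing above; the JSON-RPC envelope shape is assumed from the MCP specification, and the transport details (Streamable HTTP session handling) are omitted.

```python
import json

def build_tool_call(name, arguments, request_id=1):
    """Build an MCP tools/call JSON-RPC request body (envelope shape assumed from the MCP spec)."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    }

# Browse Swiss categories before searching, per the when_to_use guidance.
request = build_tool_call("nexbid_categories", {"geo": "CH"})
print(json.dumps(request, indent=2))
```

The same helper works for the other tools on this page by swapping the `name` and `arguments`.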
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare readOnlyHint=true and idempotentHint=true. The description adds valuable behavioral context by specifying that results include 'product counts' (specific data characteristics) and confirming the optional filtering capability. It does not mention rate limits or pagination behavior, but effectively complements the safety annotations with data structure expectations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description uses well-structured sections (tool_description, when_to_use, combination_hints, output_format) with zero wasted words. Each sentence earns its place: purpose is front-loaded, workflow guidance is explicit, and output expectations are set efficiently. The XML-like structure enhances scannability.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (1 optional string parameter, no nested objects) and rich annotations (4 hint fields), the description is complete. It compensates for the missing output_schema by including an <output_format> section describing the list structure. The workflow guidance (when to use vs nexbid_search) provides sufficient orchestration context for an AI agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage (the 'geo' parameter is fully documented as 'ISO 3166-1 alpha-2 country code'), the baseline score applies. The description mentions 'Optionally filter by country' which aligns with the schema but does not add additional semantic details, examples, or validation guidance beyond what the schema provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description explicitly states the tool 'List[s] all available product categories in the Nexbid marketplace with product counts' and specifies the optional country filter. It clearly distinguishes from sibling nexbid_search by positioning this as a browse/exploration tool rather than a search tool.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The <when_to_use> section explicitly states to use this 'BEFORE nexbid_search to help narrow down the query' and identifies the specific user intent ('explore what is available'). The <combination_hints> further reinforces the workflow sequence (categories → search with filter), providing clear alternatives and sequencing guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
nexbid_order_status (Read-only, Idempotent)
<tool_description> Check the status of a purchase intent created via nexbid_purchase. </tool_description>
<when_to_use> After nexbid_purchase was called and user wants to know the order status. Requires the intent_id UUID returned by nexbid_purchase. </when_to_use>
<combination_hints> Always follows nexbid_purchase. No other tool needed after this. </combination_hints>
<output_format> Current status (pending/completed/expired), checkout link if still active. </output_format>
| Name | Required | Description | Default |
|---|---|---|---|
| intent_id | Yes | Purchase intent UUID from nexbid_purchase | |
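Since `intent_id` must be the UUID returned by `nexbid_purchase`, a caller can cheaply validate the value before issuing the call. This is a minimal sketch using Python's standard `uuid` module; it checks format only, not whether the intent actually exists server-side.

```python
import uuid

def is_valid_intent_id(value) -> bool:
    """Return True if value parses as a UUID, the format nexbid_order_status expects."""
    try:
        uuid.UUID(str(value))
        return True
    except ValueError:
        return False

print(is_valid_intent_id("550e8400-e29b-41d4-a716-446655440000"))  # True
print(is_valid_intent_id("order-123"))                             # False
```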
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations cover safety profile (readOnly, idempotent). Description adds workflow context (follows purchase creation) and discloses output values ('pending/completed/expired') and conditional fields ('checkout link if still active') not present in structured data.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Uses structured XML-like tags to organize distinct sections (tool_description, when_to_use, combination_hints, output_format). Content is front-loaded and dense, though the tag syntax adds slight verbosity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple single-parameter status check tool, description adequately covers purpose, prerequisites, sequencing, sibling relationships, and output format (enumerating status states and conditional checkout link) despite absence of formal output schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% coverage describing intent_id as 'Purchase intent UUID'. Description adds semantic origin context ('from nexbid_purchase', 'returned by nexbid_purchase'), helping the agent understand data flow from the sibling tool.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description uses specific verb 'Check' with resource 'status of a purchase intent', and explicitly scopes it to intents 'created via nexbid_purchase', clearly distinguishing it from sibling tools like nexbid_purchase (creation) and nexbid_search (discovery).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
<when_to_use> explicitly states the temporal dependency ('After nexbid_purchase was called') and prerequisite ('Requires the intent_id UUID'). <combination_hints> provides clear sequencing guidance ('Always follows nexbid_purchase').
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
nexbid_product (Read-only, Idempotent)
<tool_description> Get detailed product information by ID from the Nexbid marketplace. Returns full product details including price, availability, description, and purchase link. </tool_description>
<when_to_use> When you have a specific product UUID from a previous nexbid_search result. Do NOT use for browsing — use nexbid_search instead. </when_to_use>
<combination_hints> Typically called after nexbid_search to get full details on a specific product. If user wants to buy → follow with nexbid_purchase. </combination_hints>
<output_format> Full product details: name, description, price, currency, availability, brand, category, purchase link. </output_format>
| Name | Required | Description | Default |
|---|---|---|---|
| product_id | Yes | Product UUID | |
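The description says the UUID should come from a previous `nexbid_search` result. The sketch below shows that handoff; the search-hit field names (`id`, `name`) are illustrative assumptions, since the listing does not document the exact search payload shape.

```python
# Hypothetical search-result shape; only the product_id argument name
# is taken from the schema above, the rest is assumed for illustration.
search_hit = {"id": "0f8fad5b-d9cb-469f-a165-70867728950e", "name": "Trail Runner X"}

def to_product_args(hit):
    """Map a prior nexbid_search hit to nexbid_product arguments."""
    return {"product_id": hit["id"]}

print(to_product_args(search_hit))
```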
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare readOnlyHint=true and destructiveHint=false. The description adds valuable behavioral context by detailing the return payload (name, description, price, currency, availability, brand, category, purchase link) in the output_format section, which compensates for the lack of output schema. Does not mention error behaviors (e.g., invalid UUID), preventing a 5.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Uses structured sections (tool_description, when_to_use, combination_hints, output_format) that front-load critical information. Every sentence serves a distinct purpose; no filler content. The format efficiently separates what the tool does from when to use it and what it returns.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple single-parameter lookup tool, the description is comprehensive. It compensates for the missing output schema by explicitly listing all returned fields. Covers sibling relationships (nexbid_search, nexbid_purchase), usage constraints, and return values adequately for agent decision-making.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% (product_id described as 'Product UUID'), establishing a baseline of 3. The description adds workflow context that the ID should come 'from a previous nexbid_search result,' helping the agent understand the parameter's semantic origin beyond just its type/format.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description explicitly states the tool 'Get[s] detailed product information by ID from the Nexbid marketplace' using a specific verb and resource. It clearly distinguishes from sibling nexbid_search by stating 'Do NOT use for browsing — use nexbid_search instead' and specifying it requires 'a specific product UUID from a previous nexbid_search result.'
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Contains an explicit <when_to_use> section stating the prerequisite (having a UUID from previous search) and explicitly naming the alternative tool for browsing (nexbid_search). The <combination_hints> section further clarifies workflow positioning relative to siblings nexbid_search and nexbid_purchase.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
nexbid_purchase
<tool_description> Initiate a purchase for a product found via nexbid_search. Returns a checkout link that the user can click to complete the purchase at the retailer. The agent should present this link to the user for confirmation. </tool_description>
<when_to_use> ONLY after user has expressed clear purchase intent for a specific product. Requires a product UUID from nexbid_search or nexbid_product. ALWAYS confirm with user before calling this tool. </when_to_use>
<combination_hints> nexbid_search (purchase intent) → nexbid_purchase → present checkout link to user. After purchase → nexbid_order_status to check if completed. Use checkout_mode=wallet_pay when the user has a connected wallet with active mandate. </combination_hints>
<output_format> For prefill_link (default): Checkout URL that the user clicks to complete purchase at the retailer. For wallet_pay: Intent ID and status for mandate-based authorization. Include product name and price for user confirmation. </output_format>
| Name | Required | Description | Default |
|---|---|---|---|
| quantity | No | Quantity to purchase | 1 |
| product_id | Yes | Product UUID to purchase | |
| checkout_mode | No | Checkout mode. wallet_pay requires a connected wallet with active mandate. | prefill_link |
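An agent wrapper can encode the two guardrails stated above: always confirm with the user first, and only select `wallet_pay` when a connected wallet with an active mandate exists. This is a sketch of such a wrapper, not Nexbid's own client code; the `user_confirmed` and `has_wallet_mandate` flags are hypothetical inputs the host application would supply.

```python
def build_purchase_args(product_id, quantity=1, user_confirmed=False, has_wallet_mandate=False):
    """Assemble nexbid_purchase arguments while enforcing the listing's guardrails.

    The tool docs say to ALWAYS confirm with the user first, and to use
    checkout_mode=wallet_pay only with a connected wallet and active mandate.
    """
    if not user_confirmed:
        raise ValueError("confirm purchase intent with the user before calling nexbid_purchase")
    mode = "wallet_pay" if has_wallet_mandate else "prefill_link"
    return {"product_id": product_id, "quantity": quantity, "checkout_mode": mode}
```

With `has_wallet_mandate=False` this falls back to the default `prefill_link` mode, which returns a checkout URL for the user to complete at the retailer.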
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate mutation (readOnlyHint=false) and external interaction (openWorldHint=true). Description adds valuable behavioral context: explains handoff to external retailer ('user clicks to complete the purchase at the retailer'), discloses two distinct output formats (prefill_link vs wallet_pay), and clarifies agent responsibility to 'present this link to the user.'
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Structured format with clear XML-like sections (tool_description, when_to_use, combination_hints, output_format). Information is front-loaded and logically organized. No wasted words; every sentence provides actionable guidance for tool selection and invocation.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite no output schema, description thoroughly documents return values in output_format section (checkout URL vs Intent ID). Covers mutation behavior, external retailer handoff, prerequisite workflow, and confirmation requirements. Complete for a purchase initiation tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% coverage establishing baseline 3. Description adds critical workflow context: specifies product_id must come from sibling search/product tools, and explains checkout_mode selection criteria ('when the user has a connected wallet with active mandate') not evident from schema enum alone.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description states 'Initiate a purchase for a product' with specific verb and resource. Explicitly distinguishes scope by requiring product UUID from nexbid_search or nexbid_product siblings, clearly differentiating from browsing tools like nexbid_categories.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Excellent explicit guidance: 'ONLY after user has expressed clear purchase intent' defines when to use. Explicitly names prerequisite tools (nexbid_search/nexbid_product). Includes 'ALWAYS confirm with user before calling' as guardrail. Combination hints map full workflow and specify checkout_mode conditions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
nexbid_search (Read-only, Idempotent)
<tool_description> Search and discover products, recipes AND services in the Nexbid marketplace. Nexbid Agent Discovery — search and discover advertiser products through an open marketplace. Returns ranked results matching the query — products with prices/availability/links, recipes with ingredients/targeting signals/nutrition, and services with provider/location/pricing details. </tool_description>
<when_to_use> Primary discovery tool. Use for any product, recipe or service query. Use content_type filter: "product" (only products), "recipe" (only recipes), "service" (only services), "all" (all, default). For known product IDs use nexbid_product instead. For category overview use nexbid_categories first. </when_to_use>
<intent_guidance> purchase: return top 3, price prominent, include checkout readiness. compare: return up to 10, tabular format, highlight differences. research: return details, specs, availability info. browse: return varied results, suggest categories. For recipes: show cuisine, difficulty, time. </intent_guidance>
<combination_hints> After search with purchase intent → nexbid_purchase for top result. After search with compare intent → nexbid_product for detailed specs. For category exploration → nexbid_categories first, then search within. For multi-turn refinement → pass previous queries in previous_queries array to consolidate search context. Recipe results include targeting signals (occasions, audience, season) useful for contextual ad matching. </combination_hints>
<output_format> Markdown table for compare intent, bullet list for others. Products: product name, price with currency, availability status. Recipes: recipe name, cuisine, difficulty, time, key ingredients, dietary tags. Services: service name, provider, location, price model, duration. </output_format>
| Name | Required | Description | Default |
|---|---|---|---|
| geo | No | ISO 3166-1 alpha-2 country code | CH |
| brand | No | Filter by brand name | |
| query | Yes | Natural language product, recipe, or service query | |
| intent | No | User intent for the search | |
| category | No | Filter by product category | |
| currency | No | Currency for budget filtering | |
| max_results | No | Maximum number of results (1-50) | 10 |
| content_type | No | Filter by content type: product, recipe, service, or all | all |
| budget_max_cents | No | Maximum budget in cents (e.g. 20000 for CHF 200) | |
| budget_min_cents | No | Minimum budget in cents | |
| previous_queries | No | Previous queries in this search session for multi-turn refinement (oldest first, max 10). Example: ["running shoes", "waterproof only"] | |
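Two parameters above deserve a worked example: budgets are passed as integer cents (CHF 200 → 20000), and multi-turn refinement threads prior queries through `previous_queries` (oldest first, capped at 10). The sketch below assembles arguments for a refined, budget-bounded search; it is illustrative only and does not perform the network call.

```python
def chf_to_cents(amount: float) -> int:
    """Budget filters take integer cents: CHF 200 -> 20000."""
    return round(amount * 100)

def refine_search(query, previous_queries=()):
    """Build nexbid_search arguments for a multi-turn refinement.

    previous_queries is oldest-first and capped at 10 entries, per the schema.
    """
    return {"query": query, "previous_queries": list(previous_queries)[-10:]}

args = refine_search("waterproof only", ["running shoes"])
args["budget_max_cents"] = chf_to_cents(200)
print(args)
```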
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare readOnly/idempotent status, while the description adds valuable behavioral context: results are 'ranked,' products include 'prices/availability/links' versus recipes with 'ingredients/targeting signals/nutrition,' and output formats vary by intent (Markdown table for compare, bullets for others). It does not mention rate limits or authentication requirements.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Despite being lengthy, the description is well-structured with clear XML-like sections (tool_description, when_to_use, intent_guidance) that front-load critical information. Every section provides distinct value without redundancy, though the format is more verbose than plain text.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (11 parameters, multiple content types, four intent modes) and lack of output schema, the description comprehensively covers return formats, sibling relationships, parameter interactions, and session management through previous_queries. No critical gaps remain.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage (baseline 3), the description adds significant semantic value through the <intent_guidance> section explaining how each enum value (purchase/compare/research/browse) affects behavior, and <combination_hints> explaining the session-based use of previous_queries for multi-turn refinement.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description explicitly states the tool 'Search[es] and discover[s] products, recipes AND services in the Nexbid marketplace' with specific verbs across three resource types. It clearly distinguishes from siblings by stating 'For known product IDs use nexbid_product instead' and 'For category overview use nexbid_categories first.'
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The <when_to_use> section explicitly designates this as the 'Primary discovery tool' and provides clear alternates: use nexbid_product for known IDs and nexbid_categories for category overviews. The <combination_hints> section further clarifies workflow sequences (e.g., 'After search with purchase intent → nexbid_purchase').
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
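A small script can generate this claim file so the JSON is guaranteed well-formed. This sketch uses only the structure specified above; replace the placeholder email with the one on your Glama account, and serve the output at /.well-known/glama.json on your server's domain.

```python
import json

# Claim-file content exactly as the instructions specify; the email
# below is the documented placeholder, not a real maintainer address.
claim = {
    "$schema": "https://glama.ai/mcp/schemas/connector.json",
    "maintainers": [{"email": "your-email@example.com"}],
}

payload = json.dumps(claim, indent=2)
print(payload)  # write this to /.well-known/glama.json on your domain
```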
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail — every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control — enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management — store and rotate API keys and OAuth tokens in one place
Change alerts — get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption — public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics — see which tools are being used most, helping you prioritize development and documentation
Direct user feedback — users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.