Server Details
Search and discover advertiser products through an open marketplace for AI agents.
| Field | Value |
|---|---|
| Status | Healthy |
| Last Tested | |
| Transport | Streamable HTTP |
| URL | |
| Repository | nexbid-dev/protocol-commerce |
| GitHub Stars | 0 |
| Server Listing | Nexbid |
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4.5/5 across 19 of 19 tools scored.
Most tools have distinct purposes, but there is some overlap between get_product and nexbid_product, which are explicitly described as aliases, and between list_products and nexbid_search with content_type='product'. This could cause confusion, though descriptions help clarify. Other tools like activate, pause, and cancel are well-differentiated for media buy lifecycle management.
Naming is inconsistent. Some tools use verb_noun (e.g., create_media_buy, list_inventory), others carry a brand prefix (e.g., nexbid_search, nexbid_purchase), and some are bare verbs (e.g., activate, pause, cancel). While each name is readable on its own, the lack of a uniform convention reduces predictability across the set.
With 19 tools, the count is on the higher side but reasonable for the dual domains of media buying and marketplace discovery. It covers operations like listing, creating, managing, and reporting, which justifies the number. However, it borders on feeling heavy, especially with overlapping tools like get_product and nexbid_product.
The tool set provides comprehensive coverage for both media buying (create, submit, activate, pause, cancel, track, settle, report, compliance) and marketplace discovery (search, categories, product details, purchase, order status). There are no obvious gaps; workflows are well-supported with clear combination hints, ensuring agents can handle end-to-end tasks without dead ends.
Available Tools
19 tools

activate (annotated)
<tool_description> Activate an approved media buy to start serving. Requires creative to be submitted first. </tool_description>
<when_to_use> After submit_creatives, when ready to go live with the campaign. </when_to_use>
<combination_hints> submit_creatives → activate → track_enriched_snippet (for enriched snippet buys). Can be paused later with pause, or cancelled with cancel. </combination_hints>
<output_format> Activated media buy with won_at timestamp. </output_format>
| Name | Required | Description | Default |
|---|---|---|---|
| media_buy_id | Yes | Media buy UUID to activate | |
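To make the hinted flow concrete, here is a minimal invocation sketch using the official TypeScript MCP SDK over Streamable HTTP. The endpoint URL and the UUID are placeholders (the listing leaves the URL field blank), so treat this as an illustration rather than a published client.

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

// Placeholder endpoint: the listing above leaves the URL field blank.
const transport = new StreamableHTTPClientTransport(new URL("https://nexbid.example/mcp"));
const client = new Client({ name: "nexbid-demo", version: "0.1.0" });
await client.connect(transport);

// Activate an approved buy. Per the tool description, creatives must have
// been submitted via submit_creatives first, or the call will fail.
const result = await client.callTool({
  name: "activate",
  arguments: { media_buy_id: "00000000-0000-0000-0000-000000000000" }, // placeholder UUID
});
console.log(result.content); // expected: activated media buy with a won_at timestamp
```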
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations cover readOnlyHint=false, destructiveHint=false, etc., but the description adds valuable context: it requires 'creative to be submitted first' (a prerequisite), mentions the output format ('Activated media buy with won_at timestamp'), and hints at idempotency through combination flows. It does not contradict annotations and enhances understanding of operational constraints.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured with clear sections (<tool_description>, <when_to_use>, etc.), each sentence is purposeful with zero waste, and it's front-loaded with the core action. It efficiently conveys necessary information without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (a mutation with prerequisites), rich annotations, and no output schema, the description is complete: it explains the purpose, usage context, prerequisites, output format, and relationships with sibling tools, covering all essential aspects for an AI agent to use it correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with the parameter 'media_buy_id' fully documented in the schema. The description does not add extra semantic details about the parameter beyond implying it's for an 'approved media buy', so it meets the baseline of 3 where the schema handles the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description explicitly states the verb 'activate' and the resource 'approved media buy' with the specific purpose 'to start serving'. It distinguishes from siblings like 'pause', 'cancel', and 'submit_creatives' by focusing on initiating service after approval and creative submission.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance: use 'after submit_creatives, when ready to go live with the campaign'. It also mentions alternatives for later actions ('paused later with pause, or cancelled with cancel') and combination hints for enriched snippet buys ('submit_creatives → activate → track_enriched_snippet').
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
cancel (annotated, idempotent)
<tool_description> Cancel a media buy campaign. This is a terminal state — cannot be reactivated. </tool_description>
<when_to_use> When an advertiser wants to permanently stop a campaign. Cannot be undone. Use pause for temporary stops. </when_to_use>
<combination_hints> cancel is terminal. For temporary suspension use pause instead. Remaining budget is released. </combination_hints>
<output_format> Updated media buy with cancelled status. </output_format>
| Name | Required | Description | Default |
|---|---|---|---|
| reason | No | Reason for cancellation | |
| media_buy_id | Yes | Media buy UUID to cancel | |
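Because cancel is terminal while pause is reversible, an agent should branch on the user's intent. A short sketch of that branch follows; it reuses the client from the activate example, and the pause argument shape is an assumption since its schema is not shown in this section.

```typescript
import type { Client } from "@modelcontextprotocol/sdk/client/index.js";
declare const client: Client; // connected as in the activate sketch above

// cancel is terminal; pause is the reversible alternative, so branch on intent.
// The pause argument shape is an assumption (its schema is not tabulated here).
async function stopCampaign(mediaBuyId: string, permanent: boolean, reason?: string) {
  if (permanent) {
    // Cannot be undone; remaining budget is released.
    return client.callTool({ name: "cancel", arguments: { media_buy_id: mediaBuyId, reason } });
  }
  return client.callTool({ name: "pause", arguments: { media_buy_id: mediaBuyId } });
}
```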
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations provide readOnlyHint=false, destructiveHint=false, idempotentHint=true, and openWorldHint=true. The description adds valuable behavioral context beyond annotations: it discloses that cancellation is 'terminal — cannot be reactivated', 'Cannot be undone', and 'Remaining budget is released'. These are critical behavioral traits not covered by annotations. No contradiction with annotations exists.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured with clear sections (<tool_description>, <when_to_use>, <combination_hints>, <output_format>), each containing only essential information. Every sentence earns its place by providing distinct guidance without redundancy.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (destructive terminal operation), rich annotations, and 100% schema coverage, the description provides complete contextual information. It covers purpose, usage guidelines, behavioral implications, and output format, compensating for the lack of output schema by specifying 'Updated media buy with cancelled status'.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with both parameters ('media_buy_id' and 'reason') well-documented in the schema. The description doesn't add any parameter-specific semantics beyond what the schema provides, so it meets the baseline of 3 for high schema coverage without extra value.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Cancel a media buy campaign') and resource ('media buy campaign'), distinguishing it from sibling tools like 'pause' by emphasizing it's a terminal state. The title 'Cancel Campaign' in annotations reinforces this, but the description adds the critical distinction from temporary suspension.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The <when_to_use> section explicitly states when to use this tool ('When an advertiser wants to permanently stop a campaign') and when not to ('Use pause for temporary stops'), with clear alternatives named. The <combination_hints> reinforces this guidance by contrasting with 'pause'.
create_media_buy (annotated)
<tool_description> Create a media buy (bid on publisher inventory). Validates rules, runs auction scoring, and returns approval/rejection status. AdCP-compatible first-price sealed-bid auction. </tool_description>
<when_to_use> When an advertiser wants to place a bid on a publisher inventory slot. Requires inventory_id from list_inventory/get_inventory_item. ALWAYS confirm with user before calling — creates a binding commitment. </when_to_use>
<combination_hints> list_inventory → get_inventory_item → create_media_buy → submit_creatives → activate. If rejected: check rejection_reason and adjust bid/brand/category. Score formula: 0.3bid + 0.3quality + 0.2quality + 0.2context. </combination_hints>
<output_format> Media buy ID, status (approved/rejected), auction score, rejection reason if applicable. </output_format>
| Name | Required | Description | Default |
|---|---|---|---|
| brand | No | Advertiser brand name (for rules check) | |
| category | No | Product category (for rules check) | |
| end_date | No | Campaign end date (ISO 8601) | |
| bid_cents | Yes | Bid amount in cents | |
| start_date | No | Campaign start date (ISO 8601) | |
| campaign_id | No | Parent campaign UUID. When set, the campaign's default pacing + frequency cap cascade into this buy unless explicitly overridden. | |
| budget_cents | Yes | Total budget in cents | |
| creative_ref | No | Creative asset URL or reference | |
| inventory_id | Yes | Target inventory slot UUID | |
| advertiser_id | Yes | Advertiser account UUID | |
| creative_data | No | Creative metadata (JSON) | |
| pricing_model | Yes | Pricing model for the buy | |
| quality_score | No | Quality score 0-1 | 0.5 |
| pacing_strategy | No | Override pacing strategy; falls back to the campaign default | even |
| context_relevance | No | Context relevance 0-1 | 0 |
| frequency_cap_session | No | Per-session impression cap for this buy. NULL defers to the inventory cap; otherwise falls back to the campaign default | 3 |
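Given the binding nature of this tool, the confirm-before-call pattern is worth spelling out. The sketch below assumes the TypeScript MCP SDK; the IDs, the `confirmWithUser` helper, and the `cpm` pricing value are all hypothetical, since the schema names no enum.

```typescript
import type { Client } from "@modelcontextprotocol/sdk/client/index.js";
declare const client: Client; // connected as in the activate sketch above

// Hypothetical helper: a real agent would surface a confirmation prompt here.
declare function confirmWithUser(summary: string): Promise<boolean>;

async function placeBid() {
  const args = {
    advertiser_id: "advertiser-uuid-placeholder",
    inventory_id: "inventory-uuid-placeholder", // from list_inventory / get_inventory_item
    bid_cents: 250, // $2.50
    budget_cents: 50_000, // $500.00 total
    pricing_model: "cpm", // assumed value; the schema names no enum
    quality_score: 0.7, // 0-1, defaults to 0.5
    context_relevance: 0.4, // 0-1, defaults to 0
  };
  // The listing says to ALWAYS confirm first: this creates a binding commitment.
  if (!(await confirmWithUser(`Bid ${args.bid_cents} cents on ${args.inventory_id}?`))) return;

  const result = await client.callTool({ name: "create_media_buy", arguments: args });
  // Returns media buy ID, approved/rejected status, auction score, and a
  // rejection_reason to adjust bid/brand/category against if rejected.
  return result;
}
```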
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate this is a non-readOnly, non-destructive, non-idempotent operation. The description adds valuable behavioral context beyond annotations: it discloses the binding commitment nature, validation rules, auction scoring process, and approval/rejection outcomes. However, it doesn't mention rate limits or authentication requirements.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured with clear sections (<tool_description>, <when_to_use>, <combination_hints>, <output_format>), each containing essential information with zero waste. Every sentence serves a specific purpose in guiding tool usage.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity of a media buy creation tool with 13 parameters and no output schema, the description provides comprehensive context: purpose, usage guidelines, workflow integration, behavioral details (binding commitment, auction mechanics), and output format. It compensates well for the lack of output schema and annotations limitations.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so all parameters are documented in the schema. The description adds some context about parameter usage in the combination hints (e.g., 'adjust bid/brand/category' for rejection handling) and the score formula references quality and context parameters, but doesn't provide significant additional semantic meaning beyond what's in the schema.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Create a media buy'), the resource ('bid on publisher inventory'), and distinguishes it from siblings by specifying it's for creating bids rather than listing, getting, or managing existing buys. It provides detailed context about auction mechanics (AdCP-compatible first-price sealed-bid) that sets it apart.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicit guidance is provided in dedicated sections: 'When an advertiser wants to place a bid on a publisher inventory slot' specifies the use case, 'Requires inventory_id from list_inventory/get_inventory_item' states prerequisites, 'ALWAYS confirm with user before calling — creates a binding commitment' gives critical warnings, and combination hints show workflow alternatives.
get_campaign_report (annotated, read-only, idempotent)
<tool_description> Get aggregated performance report for a media buy. Shows spend, impressions, clicks, conversions with time-series breakdown. </tool_description>
<when_to_use> To check campaign performance metrics after activation. Supports period filtering and granularity control. </when_to_use>
<combination_hints> list_media_buys → get_campaign_report for performance analysis. Pair with get_compliance_status for full campaign overview. </combination_hints>
<output_format> Totals (spend, impressions, clicks, conversions) + time-series breakdown. </output_format>
| Name | Required | Description | Default |
|---|---|---|---|
| period | No | Reporting period | all |
| granularity | No | Report granularity | day |
| media_buy_id | Yes | Media buy UUID | |
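To make the period and granularity controls concrete, a short sketch follows; the UUID is a placeholder and the client setup is the one from the activate example.

```typescript
import type { Client } from "@modelcontextprotocol/sdk/client/index.js";
declare const client: Client; // connected as in the activate sketch above

const report = await client.callTool({
  name: "get_campaign_report",
  arguments: {
    media_buy_id: "00000000-0000-0000-0000-000000000000", // placeholder UUID
    period: "all", // default per the parameter table
    granularity: "day", // default per the parameter table
  },
});
// Totals (spend, impressions, clicks, conversions) plus a per-day breakdown.
console.log(report.content);
```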
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already provide safety hints (readOnlyHint: true, destructiveHint: false), but the description adds valuable context beyond this: it specifies the tool provides aggregated reports with time-series breakdown, and hints at period filtering and granularity control. It doesn't contradict annotations, but could mention more about rate limits or data freshness.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured with clear sections (<tool_description>, <when_to_use>, etc.), front-loaded with the core purpose, and every sentence adds value without redundancy. It's appropriately sized for the tool's complexity.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity, rich annotations covering safety, and no output schema, the description is complete: it explains the purpose, usage, combinations, and output format. It compensates for the lack of output schema by detailing what the tool returns (totals + time-series breakdown).
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents all parameters well. The description adds some context by mentioning 'period filtering and granularity control', which aligns with the 'period' and 'granularity' parameters, but doesn't provide additional semantic details beyond what's in the schema. This meets the baseline for high schema coverage.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('Get aggregated performance report') and resources ('for a media buy'), listing the exact metrics provided (spend, impressions, clicks, conversions). It distinguishes from siblings like 'list_media_buys' by focusing on performance analysis rather than listing.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly states when to use it ('To check campaign performance metrics after activation') and provides combination hints that guide usage relative to alternatives (e.g., 'list_media_buys → get_campaign_report for performance analysis', 'Pair with get_compliance_status for full campaign overview'). This clearly differentiates it from other tools.
get_compliance_status (annotated, read-only, idempotent)
<tool_description> Check nDSG/GDPR/EU AI Act compliance status for a media buy. Verifies privacy-native architecture compliance. </tool_description>
<when_to_use> Before activating a campaign or for compliance audits. Checks: no cookies, no fingerprinting, contextual targeting, data residency, revenue transparency, consent basis, agent transparency. </when_to_use>
<combination_hints> create_media_buy → get_compliance_status → activate (if compliant). Use for regulatory reporting and audit trails. </combination_hints>
<output_format> Overall compliance status + individual check results with details. </output_format>
| Name | Required | Description | Default |
|---|---|---|---|
| media_buy_id | Yes | Media buy UUID to check | |
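The hinted flow (create_media_buy, then get_compliance_status, then activate if compliant) can be expressed as a small gate. This is a sketch under two assumptions called out in the comments: results arrive as JSON text, and the overall flag is named `compliant`, since the server publishes no output schema.

```typescript
import type { Client } from "@modelcontextprotocol/sdk/client/index.js";
declare const client: Client; // connected as in the activate sketch above

// Gate activation on the compliance check, per the hinted flow:
// create_media_buy -> get_compliance_status -> activate (if compliant).
async function activateIfCompliant(mediaBuyId: string) {
  const status = await client.callTool({
    name: "get_compliance_status",
    arguments: { media_buy_id: mediaBuyId },
  });
  // Assumptions: results arrive as JSON text, and the overall flag is named
  // `compliant` -- the server publishes no output schema.
  const first = status.content?.[0];
  const report = first && first.type === "text" ? JSON.parse(first.text) : undefined;
  if (report?.compliant) {
    await client.callTool({ name: "activate", arguments: { media_buy_id: mediaBuyId } });
  }
}
```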
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
While annotations already indicate this is a read-only, non-destructive, idempotent operation, the description adds valuable context about what specific compliance checks are performed (no cookies, fingerprinting, contextual targeting, etc.) and mentions use for regulatory reporting and audit trails, which goes beyond the structured annotations.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured with clear sections (<tool_description>, <when_to_use>, <combination_hints>, <output_format>), each containing focused, essential information without redundancy. Every sentence serves a clear purpose in helping the agent understand and use the tool effectively.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity, comprehensive annotations, and lack of output schema, the description provides strong context about what compliance checks are performed, when to use the tool, workflow positioning, and expected output format. The main gap is the absence of a formal output schema, but the description compensates well with output format details.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the schema already fully documents the single required parameter. The description doesn't add any additional parameter semantics beyond what's in the schema, so the baseline score of 3 is appropriate.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('Check', 'Verifies') and resources ('compliance status for a media buy'), distinguishing it from siblings by focusing on regulatory compliance verification rather than campaign management or reporting functions.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use the tool ('Before activating a campaign or for compliance audits'), includes combination hints showing its place in workflows, and distinguishes it from alternatives by focusing specifically on compliance verification rather than general campaign operations.
get_inventory_item (annotated, read-only, idempotent)
<tool_description> Get detailed information about a specific publisher inventory slot, including rules and active buy count. </tool_description>
<when_to_use> After list_inventory to get full slot details before bidding. Check capacity (active buys vs max_concurrent) before create_media_buy. </when_to_use>
<combination_hints> list_inventory → get_inventory_item → create_media_buy. Shows brand/category allowlists, blocklists, and current utilization. </combination_hints>
<output_format> Full slot details with rules, pricing, capacity, and active buy count. </output_format>
| Name | Required | Description | Default |
|---|---|---|---|
| inventory_id | Yes | Inventory slot UUID | |
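The capacity check that the when_to_use text advises (active buys vs max_concurrent) can be sketched like this. The field names `active_buys` and `max_concurrent` are read off the description and should be treated as assumptions, since no output schema is published.

```typescript
import type { Client } from "@modelcontextprotocol/sdk/client/index.js";
declare const client: Client; // connected as in the activate sketch above

// Capacity check before bidding, as the when_to_use text advises. The field
// names (active_buys, max_concurrent) are assumptions read off the
// description; the server publishes no output schema.
async function hasCapacity(inventoryId: string): Promise<boolean> {
  const res = await client.callTool({
    name: "get_inventory_item",
    arguments: { inventory_id: inventoryId },
  });
  const first = res.content?.[0];
  if (!first || first.type !== "text") return false;
  const slot = JSON.parse(first.text);
  return slot.active_buys < slot.max_concurrent;
}
```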
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already provide readOnlyHint=true, destructiveHint=false, openWorldHint=true, and idempotentHint=true, covering safety and idempotency. The description adds valuable context beyond this: it specifies that the tool shows 'brand/category allowlists, blocklists, and current utilization', which are behavioral traits not covered by annotations. No contradiction with annotations exists.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured with clear sections (<tool_description>, <when_to_use>, <combination_hints>, <output_format>), each containing concise, front-loaded sentences. Every sentence adds value without redundancy, making it efficient and easy to parse.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (read-only, single parameter) and rich annotations (readOnlyHint, openWorldHint, etc.), the description is complete. It covers purpose, usage guidelines, combination hints, and output format, compensating for the lack of an output schema by detailing what information is returned. No gaps are evident for this type of tool.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with the inventory_id parameter fully documented in the schema as 'Inventory slot UUID'. The description does not add any further meaning or details about the parameter beyond what the schema provides, so it meets the baseline of 3 where the schema does the heavy lifting.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Get' and resource 'detailed information about a specific publisher inventory slot', specifying it includes 'rules and active buy count'. It distinguishes from siblings like list_inventory (which lists items) and create_media_buy (which creates buys), making the purpose specific and differentiated.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The when_to_use section explicitly states 'After list_inventory to get full slot details before bidding' and 'Check capacity before create_media_buy', providing clear context and alternatives. The combination_hints further reinforce the workflow with list_inventory → get_inventory_item → create_media_buy, offering explicit guidance on when to use this tool versus others.
get_product (annotated, read-only, idempotent)
<tool_description> Get detailed product information by ID. Alias for nexbid_product. </tool_description>
<when_to_use> When you have a product UUID from list_products or nexbid_search. </when_to_use>
<combination_hints> list_products → get_product → create_media_buy or nexbid_purchase. </combination_hints>
<output_format> Full product details: name, description, price, currency, availability, brand, category, link. </output_format>
| Name | Required | Description | Default |
|---|---|---|---|
| product_id | Yes | Product UUID | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The annotations already provide strong behavioral hints (readOnlyHint: true, idempotentHint: true, etc.), so the bar is lower. The description adds valuable context by specifying the output format details (name, description, price, etc.) and workflow combinations, which goes beyond what annotations alone convey. No contradictions with annotations are present.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured with clear sections (<tool_description>, <when_to_use>, etc.), each sentence is purposeful, and there is no redundant information. It efficiently conveys necessary details without waste, making it easy for an agent to parse and understand.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (1 parameter, 100% schema coverage), rich annotations, and no output schema, the description is complete. It covers purpose, usage guidelines, output format, and workflow hints, providing all needed context for an agent to correctly invoke the tool without gaps.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with the parameter 'product_id' fully documented in the schema as a UUID. The description adds minimal semantic value beyond this, as it only reiterates 'by ID' without providing additional context like format examples or edge cases. This meets the baseline for high schema coverage.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with a specific verb ('Get detailed product information') and resource ('by ID'), and explicitly distinguishes it from its sibling 'nexbid_product' by calling it an alias. This provides precise differentiation from similar tools in the sibling list.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description includes explicit guidance in the <when_to_use> section, specifying when to use this tool ('When you have a product UUID from list_products or nexbid_search') and implicitly when not to use it (e.g., when you don't have a UUID). The <combination_hints> section further clarifies workflow alternatives, making usage context very clear.
list_inventory (annotated, read-only, idempotent)
<tool_description> List available publisher inventory slots for programmatic media buying. Returns ad slots with pricing, rules, and capacity info. </tool_description>
<when_to_use> Before create_media_buy — discover which slots are available. Use to browse publisher ad inventory for campaign planning. </when_to_use>
<combination_hints> list_inventory → get_inventory_item for slot details → create_media_buy to bid. Filter by publisher_id, slot_type, or pricing_model for targeted results. </combination_hints>
<output_format> Inventory slots with: name, type, floor price, pricing models, capacity, status. </output_format>
| Name | Required | Description | Default |
|---|---|---|---|
| geo | No | ISO 3166-1 alpha-2 country code | |
| slot_type | No | Filter by slot type | |
| max_results | No | Maximum results | 20 |
| publisher_id | No | Filter by publisher UUID | |
| pricing_model | No | Filter by pricing model | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already cover key behavioral traits (readOnlyHint: true, destructiveHint: false, idempotentHint: true, openWorldHint: true), so the bar is lower. The description adds valuable context beyond annotations by specifying the tool's role in a workflow ('list_inventory → get_inventory_item for slot details → create_media_buy to bid') and hinting at filtering capabilities, though it doesn't detail rate limits or auth needs. No contradiction with annotations exists.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured with clear sections (<tool_description>, <when_to_use>, <combination_hints>, <output_format>), each containing focused, front-loaded information. Every sentence earns its place by adding value, such as workflow guidance or output details, with no wasted words or redundancy.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (5 parameters, no output schema), the description is complete. It covers purpose, usage guidelines, behavioral context (via annotations and added hints), and output format details, compensating for the lack of an output schema. The annotations provide safety and idempotency info, making this description fully adequate for agent use.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema fully documents all 5 parameters. The description adds minimal parameter semantics beyond the schema, only mentioning in <combination_hints> to 'Filter by publisher_id, slot_type, or pricing_model for targeted results,' which doesn't provide new syntax or format details. This meets the baseline of 3 when the schema does the heavy lifting.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('List available publisher inventory slots') and resources ('ad slots with pricing, rules, and capacity info'). It distinguishes itself from siblings like 'get_inventory_item' (which provides slot details) and 'create_media_buy' (which bids on slots), making the scope and differentiation explicit.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description includes explicit guidance in the <when_to_use> and <combination_hints> sections, stating to use it 'Before create_media_buy — discover which slots are available' and 'Use to browse publisher ad inventory for campaign planning.' It also provides alternatives like filtering by parameters and references sibling tools ('get_inventory_item for slot details'), giving clear context for when and how to use this tool.
list_media_buys (annotated, read-only, idempotent)
<tool_description> List media buys with optional filters. View campaign history for advertisers or publishers. </tool_description>
<when_to_use> To view existing media buys (campaigns). Filter by advertiser, publisher, status, or date. </when_to_use>
<combination_hints> list_media_buys → get_campaign_report for performance data. list_media_buys → get_compliance_status for compliance check. </combination_hints>
<output_format> List of media buys with ID, status, bid, budget, spent, and dates. </output_format>
| Name | Required | Description | Default |
|---|---|---|---|
| status | No | Filter by status | |
| date_to | No | Filter to date (ISO 8601) | |
| date_from | No | Filter from date (ISO 8601) | |
| max_results | No | Maximum results | 20 |
| publisher_id | No | Filter by publisher UUID | |
| advertiser_id | No | Filter by advertiser UUID | |
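The hinted pair list_media_buys followed by get_campaign_report lends itself to a simple loop. In this sketch the `active` status value, the JSON-text result shape, and the per-buy `id` field are all assumptions.

```typescript
import type { Client } from "@modelcontextprotocol/sdk/client/index.js";
declare const client: Client; // connected as in the activate sketch above

// list_media_buys -> get_campaign_report, per the combination hints. The
// `status` value and the shape of the returned list are assumptions.
const buys = await client.callTool({
  name: "list_media_buys",
  arguments: { advertiser_id: "advertiser-uuid-placeholder", status: "active", max_results: 20 },
});
const first = buys.content?.[0];
if (first && first.type === "text") {
  for (const buy of JSON.parse(first.text)) {
    const report = await client.callTool({
      name: "get_campaign_report",
      arguments: { media_buy_id: buy.id }, // `id` field name is an assumption
    });
    console.log(report.content);
  }
}
```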
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already provide readOnlyHint=true, destructiveHint=false, openWorldHint=true, and idempotentHint=true. The description adds valuable context beyond annotations by specifying it's for viewing existing campaigns and listing specific output fields (ID, status, bid, budget, spent, dates). However, it doesn't mention rate limits, authentication requirements, or pagination behavior.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured with clear XML-like sections (<tool_description>, <when_to_use>, <combination_hints>, <output_format>), each containing only essential information. Every sentence earns its place, with no redundant or unnecessary content.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (list operation with 6 optional filters), rich annotations (4 hints), and 100% schema coverage, the description provides complete context. The output format section compensates for the lack of output schema by specifying what fields are returned. The combination hints provide excellent guidance about related tools.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with all 6 parameters well-documented in the schema. The description mentions filtering by advertiser, publisher, status, or date, which aligns with the schema but doesn't add significant semantic value beyond what's already in the parameter descriptions. The baseline of 3 is appropriate when the schema does the heavy lifting.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('list media buys') and resources ('media buys'), and distinguishes it from siblings by specifying it's for viewing existing campaigns rather than creating or modifying them. The <tool_description> section explicitly says 'List media buys with optional filters' and 'View campaign history for advertisers or publishers.'
Does the description explain when to use this tool, when not to, or what alternatives exist?
The <when_to_use> section explicitly states 'To view existing media buys (campaigns)' and lists specific filter criteria. The <combination_hints> section provides clear alternatives by naming two sibling tools (get_campaign_report, get_compliance_status) and specifying when to use them instead (for performance data or compliance checks).
list_products (annotated, read-only, idempotent)
<tool_description> Search for products in the Nexbid marketplace. Alias for nexbid_search with content_type='product'. </tool_description>
<when_to_use> When an agent needs to discover products (not recipes or services). Convenience alias — delegates to nexbid_search internally. </when_to_use>
<combination_hints> list_products → get_product for details → create_media_buy for advertising. For recipes/services use nexbid_search with content_type filter. </combination_hints>
<output_format> Product list with name, price, availability, score, and link. </output_format>
| Name | Required | Description | Default |
|---|---|---|---|
| geo | No | ISO 3166-1 alpha-2 country code | |
| brand | No | Filter by brand name | |
| query | Yes | Natural language product query | |
| category | No | Filter by product category | |
| max_results | No | Maximum results (1-50) | 10 |
| budget_max_cents | No | Maximum budget in cents | |
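A filtered product search might look like the following; all argument values are illustrative, and the client setup is the one from the activate example.

```typescript
import type { Client } from "@modelcontextprotocol/sdk/client/index.js";
declare const client: Client; // connected as in the activate sketch above

// Product discovery with filters; all values are illustrative.
const products = await client.callTool({
  name: "list_products",
  arguments: {
    query: "espresso machine", // required natural-language query
    geo: "CH", // ISO 3166-1 alpha-2
    max_results: 10, // 1-50, default 10
    budget_max_cents: 40_000, // cap at 400.00 in the listing currency
  },
});
console.log(products.content); // name, price, availability, score, link per product
```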
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already provide readOnlyHint=true, destructiveHint=false, openWorldHint=true, and idempotentHint=true, covering safety and idempotency. The description adds useful context beyond this: it discloses that this is an alias that delegates to nexbid_search internally, which helps the agent understand implementation behavior. However, it doesn't mention rate limits or authentication needs, leaving some behavioral aspects uncovered.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured with clear sections (<tool_description>, <when_to_use>, <combination_hints>, <output_format>), each containing only essential information. Every sentence earns its place by providing distinct value, such as clarifying the alias relationship, usage context, workflow hints, and output format.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (search with 6 parameters), rich annotations (covering safety and idempotency), and 100% schema coverage, the description is complete. It adds necessary context like the alias relationship, sibling differentiation, usage guidelines, combination hints, and output format, compensating for the lack of an output schema. No critical gaps remain for agent understanding.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already fully documents all 6 parameters. The description doesn't add any parameter-specific information beyond what's in the schema (e.g., it doesn't explain how 'query' interacts with 'category' or 'brand'). With high schema coverage, the baseline is 3, and the description doesn't compensate with additional param semantics.
Does the description clearly state what the tool does and how it differs from similar tools?
The description explicitly states 'Search for products in the Nexbid marketplace' with a specific verb ('search') and resource ('products'), and distinguishes it from siblings by noting it's an alias for nexbid_search with content_type='product'. This clearly differentiates it from tools like list_inventory or get_product.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance in the <when_to_use> section: 'When an agent needs to discover products (not recipes or services)' and 'For recipes/services use nexbid_search with content_type filter.' It also mentions it's a 'convenience alias' that delegates internally, clarifying when to use this vs. the underlying nexbid_search tool.
nexbid_categories (annotated, read-only, idempotent)
<tool_description> List all available product categories in the Nexbid marketplace with product counts. Optionally filter by country. </tool_description>
<when_to_use> When user wants to explore what is available before searching. Use BEFORE nexbid_search to help narrow down the query. </when_to_use>
<combination_hints> nexbid_categories → nexbid_search with category filter for targeted results. Good starting point for browse intent. </combination_hints>
<output_format> List of categories with product counts. Optionally filtered by country. </output_format>
| Name | Required | Description | Default |
|---|---|---|---|
| geo | No | ISO 3166-1 alpha-2 country code to filter categories | |
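The browse-then-search flow from the combination hints can be sketched as two calls. nexbid_search's parameters are not tabulated in this section; the sketch assumes they mirror list_products plus the content_type filter mentioned earlier.

```typescript
import type { Client } from "@modelcontextprotocol/sdk/client/index.js";
declare const client: Client; // connected as in the activate sketch above

// Browse-then-search, per the combination hints: nexbid_categories ->
// nexbid_search with a category filter. nexbid_search's parameters are
// assumed to mirror list_products plus content_type.
const categories = await client.callTool({
  name: "nexbid_categories",
  arguments: { geo: "DE" }, // optional ISO 3166-1 alpha-2 filter
});
console.log(categories.content); // pick a category from the counts, then:
const hits = await client.callTool({
  name: "nexbid_search",
  arguments: { query: "running shoes", category: "sportswear", content_type: "product" },
});
```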
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare readOnlyHint=true and idempotentHint=true. The description adds valuable behavioral context by specifying that results include 'product counts' (specific data characteristics) and confirming the optional filtering capability. It does not mention rate limits or pagination behavior, but effectively complements the safety annotations with data structure expectations.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description uses well-structured sections (tool_description, when_to_use, combination_hints, output_format) with zero wasted words. Each sentence earns its place: purpose is front-loaded, workflow guidance is explicit, and output expectations are set efficiently. The XML-like structure enhances scannability.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (1 optional string parameter, no nested objects) and rich annotations (4 hint fields), the description is complete. It compensates for the missing output_schema by including an <output_format> section describing the list structure. The workflow guidance (when to use vs nexbid_search) provides sufficient orchestration context for an AI agent.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage (the 'geo' parameter is fully documented as 'ISO 3166-1 alpha-2 country code'), the baseline score applies. The description mentions 'Optionally filter by country' which aligns with the schema but does not add additional semantic details, examples, or validation guidance beyond what the schema provides.
Does the description clearly state what the tool does and how it differs from similar tools?
The description explicitly states the tool 'List[s] all available product categories in the Nexbid marketplace with product counts' and specifies the optional country filter. It clearly distinguishes from sibling nexbid_search by positioning this as a browse/exploration tool rather than a search tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The <when_to_use> section explicitly states to use this 'BEFORE nexbid_search to help narrow down the query' and identifies the specific user intent ('explore what is available'). The <combination_hints> further reinforces the workflow sequence (categories → search with filter), providing clear alternatives and sequencing guidance.
nexbid_order_status (annotated, read-only, idempotent)
<tool_description> Check the status of a purchase intent created via nexbid_purchase. </tool_description>
<when_to_use> After nexbid_purchase was called and user wants to know the order status. Requires the intent_id UUID returned by nexbid_purchase. </when_to_use>
<combination_hints> Always follows nexbid_purchase. No other tool needed after this. </combination_hints>
<output_format> Current status (pending/completed/expired), checkout link if still active. </output_format>
| Name | Required | Description | Default |
|---|---|---|---|
| intent_id | Yes | Purchase intent UUID from nexbid_purchase | |
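Since the status values (pending/completed/expired) come from the output_format note, a polling loop is a natural usage pattern. In this sketch the `status` field name and the 5-second interval are assumptions.

```typescript
import type { Client } from "@modelcontextprotocol/sdk/client/index.js";
declare const client: Client; // connected as in the activate sketch above

// Poll until the intent leaves "pending". The status values come from the
// output_format note (pending/completed/expired); the `status` field name
// and the 5-second interval are assumptions.
async function waitForOrder(intentId: string): Promise<string> {
  for (;;) {
    const res = await client.callTool({
      name: "nexbid_order_status",
      arguments: { intent_id: intentId },
    });
    const first = res.content?.[0];
    const status = first && first.type === "text" ? JSON.parse(first.text).status : "unknown";
    if (status !== "pending") return status; // completed or expired
    await new Promise((resolve) => setTimeout(resolve, 5_000));
  }
}
```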
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations cover safety profile (readOnly, idempotent). Description adds workflow context (follows purchase creation) and discloses output values ('pending/completed/expired') and conditional fields ('checkout link if still active') not present in structured data.
Is the description appropriately sized, front-loaded, and free of redundancy?
Uses structured XML-like tags to organize distinct sections (tool_description, when_to_use, combination_hints, output_format). Content is front-loaded and dense, though the tag syntax adds slight verbosity.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple single-parameter status check tool, description adequately covers purpose, prerequisites, sequencing, sibling relationships, and output format (enumerating status states and conditional checkout link) despite absence of formal output schema.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% coverage describing intent_id as 'Purchase intent UUID'. Description adds semantic origin context ('from nexbid_purchase', 'returned by nexbid_purchase'), helping the agent understand data flow from the sibling tool.
Does the description clearly state what the tool does and how it differs from similar tools?
Description uses specific verb 'Check' with resource 'status of a purchase intent', and explicitly scopes it to intents 'created via nexbid_purchase', clearly distinguishing it from sibling tools like nexbid_purchase (creation) and nexbid_search (discovery).
Does the description explain when to use this tool, when not to, or what alternatives exist?
<when_to_use> explicitly states the temporal dependency ('After nexbid_purchase was called') and prerequisite ('Requires the intent_id UUID'). <combination_hints> provides clear sequencing guidance ('Always follows nexbid_purchase').
nexbid_product (annotated, read-only, idempotent)
<tool_description> Get detailed product information by ID from the Nexbid marketplace. Returns full product details including price, availability, description, and purchase link. </tool_description>
<when_to_use> When you have a specific product UUID from a previous nexbid_search result. Do NOT use for browsing — use nexbid_search instead. </when_to_use>
<combination_hints> Typically called after nexbid_search to get full details on a specific product. If user wants to buy → follow with nexbid_purchase. </combination_hints>
<output_format> Full product details: name, description, price, currency, availability, brand, category, purchase link. </output_format>
| Name | Required | Description | Default |
|---|---|---|---|
| product_id | Yes | Product UUID | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare readOnlyHint=true and destructiveHint=false. The description adds valuable behavioral context by detailing the return payload (name, description, price, currency, availability, brand, category, purchase link) in the output_format section, which compensates for the lack of output schema. Does not mention error behaviors (e.g., invalid UUID), preventing a 5.
Is the description appropriately sized, front-loaded, and free of redundancy?
Uses structured sections (tool_description, when_to_use, combination_hints, output_format) that front-load critical information. Every sentence serves a distinct purpose; no filler content. The format efficiently separates what the tool does from when to use it and what it returns.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple single-parameter lookup tool, the description is comprehensive. It compensates for the missing output schema by explicitly listing all returned fields. Covers sibling relationships (nexbid_search, nexbid_purchase), usage constraints, and return values adequately for agent decision-making.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% (product_id described as 'Product UUID'), establishing a baseline of 3. The description adds workflow context that the ID should come 'from a previous nexbid_search result,' helping the agent understand the parameter's semantic origin beyond just its type/format.
Does the description clearly state what the tool does and how it differs from similar tools?
The description explicitly states the tool 'Get[s] detailed product information by ID from the Nexbid marketplace' using a specific verb and resource. It clearly distinguishes from sibling nexbid_search by stating 'Do NOT use for browsing — use nexbid_search instead' and specifying it requires 'a specific product UUID from a previous nexbid_search result.'
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Contains an explicit <when_to_use> section stating the prerequisite (having a UUID from previous search) and explicitly naming the alternative tool for browsing (nexbid_search). The <combination_hints> section further clarifies workflow positioning relative to siblings nexbid_search and nexbid_purchase.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
nexbid_purchase (A)
<tool_description> Initiate a purchase for a product found via nexbid_search. Returns a checkout link that the user can click to complete the purchase at the retailer. The agent should present this link to the user for confirmation. </tool_description>
<when_to_use> ONLY after user has expressed clear purchase intent for a specific product. Requires a product UUID from nexbid_search or nexbid_product. ALWAYS confirm with user before calling this tool. </when_to_use>
<combination_hints> nexbid_search (purchase intent) → nexbid_purchase → present checkout link to user. After purchase → nexbid_order_status to check if completed. Use checkout_mode=wallet_pay when the user has a connected wallet with active mandate. </combination_hints>
<output_format> For prefill_link (default): Checkout URL that the user clicks to complete purchase at the retailer. For wallet_pay: Intent ID and status for mandate-based authorization. Include product name and price for user confirmation. </output_format>
| Name | Required | Description | Default |
|---|---|---|---|
| quantity | No | Quantity to purchase (default: 1) | |
| product_id | Yes | Product UUID to purchase | |
| checkout_mode | No | Checkout mode. Default: prefill_link. wallet_pay requires a connected wallet with active mandate. |
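As a sketch of how the parameters compose, here is a hypothetical wallet_pay call, assuming the user has already confirmed purchase intent and has a connected wallet with an active mandate; all values are placeholders:

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "nexbid_purchase",
    "arguments": {
      "product_id": "<uuid-from-nexbid_search>",
      "quantity": 1,
      "checkout_mode": "wallet_pay"
    }
  }
}
```

Omitting checkout_mode falls back to the default prefill_link flow, which returns a checkout URL to present to the user instead of an intent ID.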
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate mutation (readOnlyHint=false) and external interaction (openWorldHint=true). Description adds valuable behavioral context: explains handoff to external retailer ('user clicks to complete the purchase at the retailer'), discloses two distinct output formats (prefill_link vs wallet_pay), and clarifies agent responsibility to 'present this link to the user.'
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Structured format with clear XML-like sections (tool_description, when_to_use, combination_hints, output_format). Information is front-loaded and logically organized. No wasted words; every sentence provides actionable guidance for tool selection and invocation.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite no output schema, description thoroughly documents return values in output_format section (checkout URL vs Intent ID). Covers mutation behavior, external retailer handoff, prerequisite workflow, and confirmation requirements. Complete for a purchase initiation tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% coverage establishing baseline 3. Description adds critical workflow context: specifies product_id must come from sibling search/product tools, and explains checkout_mode selection criteria ('when the user has a connected wallet with active mandate') not evident from schema enum alone.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description states 'Initiate a purchase for a product' with specific verb and resource. Explicitly distinguishes scope by requiring product UUID from nexbid_search or nexbid_product siblings, clearly differentiating from browsing tools like nexbid_categories.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Excellent explicit guidance: 'ONLY after user has expressed clear purchase intent' defines when to use. Explicitly names prerequisite tools (nexbid_search/nexbid_product). Includes 'ALWAYS confirm with user before calling' as guardrail. Combination hints map full workflow and specify checkout_mode conditions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
nexbid_search (A · Read-only · Idempotent)
<tool_description> Search and discover products, recipes AND services in the Nexbid marketplace. Nexbid Agent Discovery — search and discover advertiser products through an open marketplace. Returns ranked results matching the query — products with prices/availability/links, recipes with ingredients/targeting signals/nutrition, and services with provider/location/pricing details. </tool_description>
<when_to_use> Primary discovery tool. Use for any product, recipe or service query. Use content_type filter: "product" (only products), "recipe" (only recipes), "service" (only services), "all" (all, default). For known product IDs use nexbid_product instead. For category overview use nexbid_categories first. </when_to_use>
<intent_guidance> purchase: return top 3, price prominent, include checkout readiness. compare: return up to 10, tabular format, highlight differences. research: return details, specs, availability info. browse: return varied results, suggest categories. For recipes: show cuisine, difficulty, time. </intent_guidance>
<combination_hints> After search with purchase intent → nexbid_purchase for top result. After search with compare intent → nexbid_product for detailed specs. For category exploration → nexbid_categories first, then search within. For multi-turn refinement → pass previous queries in the previous_queries array to consolidate search context. Recipe results include targeting signals (occasions, audience, season) useful for contextual ad matching. </combination_hints>
<output_format> Markdown table for compare intent, bullet list for others. Products: product name, price with currency, availability status. Recipes: recipe name, cuisine, difficulty, time, key ingredients, dietary tags. Services: service name, provider, location, price model, duration. </output_format>
| Name | Required | Description | Default |
|---|---|---|---|
| geo | No | ISO 3166-1 alpha-2 country code (default: CH) | |
| brand | No | Filter by brand name | |
| query | Yes | Natural language product, recipe, or service query | |
| intent | No | User intent for the search | |
| category | No | Filter by product category | |
| currency | No | Currency for budget filtering | |
| max_results | No | Maximum number of results (1-50, default: 10) | |
| content_type | No | Filter by content type: product, recipe, service, or all (default) | all |
| budget_max_cents | No | Maximum budget in cents (e.g. 20000 for CHF 200) | |
| budget_min_cents | No | Minimum budget in cents | |
| previous_queries | No | Previous queries in this search session for multi-turn refinement (oldest first, max 10). Example: ["running shoes", "waterproof only"] |
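To illustrate how the filters interact, here is a hypothetical compare-intent search that combines a budget ceiling with multi-turn refinement; the previous_queries values mirror the example given in the schema, and everything else is illustrative:

```json
{
  "jsonrpc": "2.0",
  "id": 3,
  "method": "tools/call",
  "params": {
    "name": "nexbid_search",
    "arguments": {
      "query": "waterproof running shoes",
      "intent": "compare",
      "content_type": "product",
      "geo": "CH",
      "budget_max_cents": 20000,
      "max_results": 10,
      "previous_queries": ["running shoes", "waterproof only"]
    }
  }
}
```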
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare readOnly/idempotent status, while the description adds valuable behavioral context: results are 'ranked,' products include 'prices/availability/links' versus recipes with 'ingredients/targeting signals/nutrition,' and output formats vary by intent (Markdown table for compare, bullets for others). It does not mention rate limits or authentication requirements.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Despite being lengthy, the description is well-structured with clear XML-like sections (tool_description, when_to_use, intent_guidance) that front-load critical information. Every section provides distinct value without redundancy, though the format is more verbose than plain text.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (11 parameters, multiple content types, four intent modes) and lack of output schema, the description comprehensively covers return formats, sibling relationships, parameter interactions, and session management through previous_queries. No critical gaps remain.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage (baseline 3), the description adds significant semantic value through the <intent_guidance> section explaining how each enum value (purchase/compare/research/browse) affects behavior, and <combination_hints> explaining the session-based use of previous_queries for multi-turn refinement.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description explicitly states the tool 'Search[es] and discover[s] products, recipes AND services in the Nexbid marketplace' with specific verbs and three resource types. It clearly distinguishes from siblings by stating 'For known product IDs use nexbid_product instead' and 'For category overview use nexbid_categories first.'
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The <when_to_use> section explicitly designates this as the 'Primary discovery tool' and provides clear alternates: use nexbid_product for known IDs and nexbid_categories for category overviews. The <combination_hints> section further clarifies workflow sequences (e.g., 'After search with purchase intent → nexbid_purchase').
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
pause (A · Idempotent)
<tool_description> Pause an active media buy campaign. Can be reactivated later. </tool_description>
<when_to_use> When an advertiser wants to temporarily stop a running campaign. Only works on active campaigns. </when_to_use>
<combination_hints> activate → pause (temporary) or cancel (permanent). Paused campaigns can be reactivated with activate. </combination_hints>
<output_format> Updated media buy with paused status. </output_format>
| Name | Required | Description | Default |
|---|---|---|---|
| reason | No | Reason for pausing | |
| media_buy_id | Yes | Media buy UUID to pause |
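A minimal sketch of a pause call; the media buy UUID is a placeholder and the reason is illustrative, since the schema leaves it free-form:

```json
{
  "jsonrpc": "2.0",
  "id": 4,
  "method": "tools/call",
  "params": {
    "name": "pause",
    "arguments": {
      "media_buy_id": "<active-media-buy-uuid>",
      "reason": "Creative refresh in progress"
    }
  }
}
```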
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds valuable behavioral context beyond annotations: it specifies that pausing is temporary vs. permanent cancellation, indicates reactivation is possible with the 'activate' tool, and clarifies it only works on active campaigns. While annotations cover idempotency and non-destructive nature, the description provides practical workflow context that helps the agent understand the tool's role in campaign lifecycle management.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured with clear XML-like sections (<tool_description>, <when_to_use>, <combination_hints>, <output_format>), each containing precisely one sentence of essential information. There's no redundancy or wasted words, and the most critical information (purpose) appears first.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (state change operation), rich annotations, and complete schema coverage, the description provides excellent contextual completeness. It covers purpose, usage guidelines, behavioral context, and output format, creating a comprehensive understanding despite the absence of an output schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the schema already documents both parameters thoroughly. The description doesn't add any parameter-specific information beyond what's in the schema. This meets the baseline expectation when schema coverage is complete.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Pause an active media buy campaign') and resource ('media buy campaign'), distinguishing it from siblings like 'cancel' (permanent) and 'activate' (reactivation). It explicitly mentions the temporary nature and reactivation capability.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance in the <when_to_use> section: 'When an advertiser wants to temporarily stop a running campaign' and 'Only works on active campaigns.' The <combination_hints> section further clarifies alternatives: 'activate → pause (temporary) or cancel (permanent)' and 'Paused campaigns can be reactivated with activate.'
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
settle (A)
<tool_description> Settle pending payments for media buys. Supports manual CSV export, Stripe invoice (Phase 2 stub), and x402 micropayments (Phase 2 stub). </tool_description>
<when_to_use> When a publisher wants to collect earned revenue or an advertiser needs to settle outstanding charges. Use method='manual' for CSV export. Stripe and x402 are stubs (Phase 2). </when_to_use>
<combination_hints> get_campaign_report → settle (after verifying amounts). Filter by media_buy_id, publisher_id, or period. </combination_hints>
<output_format> Settlement totals (gross, platform fee, net), entry count, and method-specific data (CSV for manual). </output_format>
| Name | Required | Description | Default |
|---|---|---|---|
| method | Yes | Settlement method | |
| period_end | No | Period end (ISO 8601) | |
| media_buy_id | No | Settle specific media buy | |
| period_start | No | Period start (ISO 8601) | |
| publisher_id | No | Settle all for a publisher |
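As a sketch, here is a hypothetical manual CSV settlement scoped to one publisher over a calendar month; the publisher UUID and period dates are placeholders, and the listing does not document the exact ID format:

```json
{
  "jsonrpc": "2.0",
  "id": 5,
  "method": "tools/call",
  "params": {
    "name": "settle",
    "arguments": {
      "method": "manual",
      "publisher_id": "<publisher-uuid>",
      "period_start": "2025-01-01T00:00:00Z",
      "period_end": "2025-01-31T23:59:59Z"
    }
  }
}
```

Per the when_to_use guidance, method='manual' is the only non-stub path today; the Stripe and x402 methods are Phase 2 stubs.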
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds valuable behavioral context beyond what annotations provide. While annotations indicate this is a non-readonly, non-destructive, non-idempotent operation with open-world data, the description reveals implementation details about Phase 2 stubs for Stripe and x402 methods, specifies that manual method produces CSV exports, and mentions filtering capabilities by media_buy_id, publisher_id, or period. This provides practical implementation context that annotations don't cover.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is perfectly structured with clear sections (<tool_description>, <when_to_use>, <combination_hints>, <output_format>) that make information easy to find. Every sentence earns its place by providing essential guidance, and the content is front-loaded with the core purpose. There's zero wasted text or redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (payment settlement with multiple methods), the description provides complete context despite having no output schema. It covers purpose, usage scenarios, method-specific behaviors, parameter filtering strategies, combination with other tools, and output format details. The annotations provide safety context, and the description fills in all remaining gaps about how the tool actually behaves in practice.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the baseline would be 3, but the description adds meaningful context about parameter usage. It explains that 'manual' method is for CSV export while other methods are Phase 2 stubs, and provides filtering guidance ('Filter by media_buy_id, publisher_id, or period') that helps understand how parameters work together. This adds semantic value beyond the schema's technical descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('settle pending payments for media buys') and distinguishes it from siblings by focusing on payment settlement rather than campaign management, inventory listing, or other operations. It identifies the exact resource being acted upon (pending payments) and the action (settling).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use the tool ('When a publisher wants to collect earned revenue or an advertiser needs to settle outstanding charges'), when not to use certain methods ('Stripe and x402 are stubs (Phase 2)'), and alternatives for different scenarios ('Use method='manual' for CSV export'). It also includes combination hints that show sequencing with other tools ('get_campaign_report → settle').
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
submit_creatives (A · Idempotent)
<tool_description> Submit or update creative assets for an existing media buy. Required before activation. </tool_description>
<when_to_use> After create_media_buy returns approved status. Upload creative before activating. </when_to_use>
<combination_hints> create_media_buy (approved) → submit_creatives → activate. Creative types: banner, native, snippet, video, text. </combination_hints>
<output_format> Updated media buy with creative info attached. </output_format>
| Name | Required | Description | Default |
|---|---|---|---|
| creative_ref | No | Creative asset URL | |
| media_buy_id | Yes | Media buy UUID | |
| creative_data | No | Creative metadata (JSON) | |
| creative_type | Yes | Creative type (e.g. "banner", "native", "snippet") |
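A hypothetical banner submission; the URL is a placeholder, and the creative_data fields shown are assumptions, since the schema only says it accepts JSON metadata:

```json
{
  "jsonrpc": "2.0",
  "id": 6,
  "method": "tools/call",
  "params": {
    "name": "submit_creatives",
    "arguments": {
      "media_buy_id": "<approved-media-buy-uuid>",
      "creative_type": "banner",
      "creative_ref": "https://example.com/creatives/banner-300x250.png",
      "creative_data": { "width": 300, "height": 250 }
    }
  }
}
```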
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds valuable behavioral context beyond annotations: it specifies this is 'required before activation' (a workflow constraint) and lists creative types (banner, native, etc.) that help understand what can be submitted. Annotations already cover idempotency, non-destructive nature, and open-world status, so the description appropriately supplements rather than contradicts them.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured with clear sections (<tool_description>, <when_to_use>, etc.), each containing only essential information. Every sentence serves a distinct purpose without redundancy, and the information is front-loaded with the core purpose stated first.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a mutation tool with rich annotations (idempotent, open-world) but no output schema, the description provides good context: it explains the workflow position, prerequisites, creative types, and output format. However, it doesn't detail error conditions or specific permission requirements, leaving some gaps in completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the schema already documents all 4 parameters thoroughly. The description doesn't add significant parameter semantics beyond what's in the schema, though it does imply creative_type values through the creative types list. This meets the baseline expectation when schema coverage is complete.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('submit or update creative assets') and identifies the target resource ('for an existing media buy'). It distinguishes from siblings like 'create_media_buy' by specifying it works on existing buys, and from 'activate' by indicating it's required before activation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool: 'After create_media_buy returns approved status' and 'Upload creative before activating.' It also names the alternative tool 'activate' in the combination hints, clearly positioning this as a prerequisite step in the workflow.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
track_enriched_snippet (A)
<tool_description> Track delivery of an enriched snippet and bill the advertiser. Creates a ledger entry and decrements the media buy budget. </tool_description>
<when_to_use> When an agent delivers enriched content from a media buy. Each delivery is billed by snippet tier: basic: 10¢, standard: 50¢, rich: 150¢, premium: 300¢ (CHF centimes). </when_to_use>
<combination_hints> activate (enriched_snippet buy) → track_enriched_snippet per delivery. get_campaign_report shows cumulative tracking. </combination_hints>
<output_format> Event ID, charged amount, platform fee, net payout, remaining budget. </output_format>
| Name | Required | Description | Default |
|---|---|---|---|
| agent_id | No | Agent identifier | |
| data_fields | Yes | Data fields delivered in the snippet | |
| media_buy_id | Yes | Media buy UUID | |
| signals_used | No | Publisher signals that triggered this snippet (e.g. nx_category:food). Stored as JSONB on the impression row. | |
| snippet_type | Yes | Enriched snippet tier | |
| brand_content_id | No | Brand content UUID that was delivered. When provided, the snippet is also recorded in enriched_snippet_impressions for analytics. |
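A sketch of tracking one standard-tier delivery; the UUID is a placeholder, and the array shapes for data_fields and signals_used are assumptions (the listing does not show the full input schema), though the nx_category:food signal comes from the parameter description itself:

```json
{
  "jsonrpc": "2.0",
  "id": 7,
  "method": "tools/call",
  "params": {
    "name": "track_enriched_snippet",
    "arguments": {
      "media_buy_id": "<active-media-buy-uuid>",
      "snippet_type": "standard",
      "data_fields": ["price", "availability"],
      "signals_used": ["nx_category:food"]
    }
  }
}
```

If this call succeeded, a standard-tier delivery would charge 50¢ (CHF centimes) against the remaining budget, per the tier table above.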
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations provide readOnlyHint=false, destructiveHint=false, etc., but the description adds valuable behavioral context beyond annotations: it explains billing amounts per snippet tier (10¢ to 300¢ in CHF centimes), mentions budget decrement and ledger creation, and notes platform fees in the output; these details are not covered by structured annotations, though it doesn't address rate limits or auth needs.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured with clear sections (<tool_description>, <when_to_use>, etc.), front-loading key info, and each sentence adds value (e.g., billing details, combination hints). It's slightly verbose but efficient overall, with no wasted content.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (billing, ledger updates) and lack of output schema, the description provides excellent completeness: it explains the tool's purpose, usage context, behavioral traits (pricing, budget impact), output format, and sibling relationships, covering all necessary aspects for an AI agent to use it correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents all parameters well. The description adds minimal parameter semantics beyond the schema, such as implying 'snippet_type' ties to billing tiers, but doesn't elaborate on 'data_fields' or 'agent_id' usage. Baseline 3 is appropriate given high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('track delivery', 'bill the advertiser') and resources ('enriched snippet', 'media buy budget'), and distinguishes it from siblings like 'activate' or 'get_campaign_report' by focusing on billing and ledger creation after delivery.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The <when_to_use> section explicitly states when to use this tool ('When an agent delivers enriched content from a media buy') and provides pricing tiers, while <combination_hints> clarifies its relationship with siblings like 'activate' (for initiating buys) and 'get_campaign_report' (for viewing results), offering clear alternatives and context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}
The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes. Claiming a listing lets you:
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.