Glama

Server Details

Ad network for AI agents — monetize MCP servers with contextual ads. 70% revenue share.

Status: Unhealthy
Transport: Streamable HTTP
Repository: nicofains1/agentic-ads
GitHub Stars: 0



Available Tools (12)
create_ad (Grade B)

Create an ad unit within an existing campaign with creative text, link, and targeting.

Parameters (JSON Schema):
- geo (optional, default "ALL"): Target country code or "ALL"
- keywords (required): Keywords for matching (at least 1)
- language (optional, default "en"): Target language code
- link_url (required): Destination URL when the user clicks the ad
- categories (optional): Product/service categories
- campaign_id (required): Campaign ID to attach this ad to
- creative_text (required): Ad creative text (max 500 chars)

Behavior: 2/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full behavioral burden but only describes input structure. Missing: return value format, side effects (billing implications, review status), error conditions, and reversibility, despite this being a creation mutation.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5
Is the description appropriately sized, front-loaded, and free of redundancy?

A single well-structured sentence, front-loaded with an action verb. It efficiently bundles the key concepts (creation context, content fields, targeting). Minor gap: it could mention return behavior given the lack of an output schema.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Input documentation is complete thanks to a rich schema (100% coverage). However, for a mutation tool with no annotations and no output schema, the description lacks behavioral completeness regarding success indicators, returned identifiers, and failure modes.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema has 100% coverage with detailed field descriptions, setting the baseline at 3. The description adds semantic grouping by labeling keywords/geo/language collectively as "targeting", but adds no format constraints, valid values, or syntax beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5
Does the description clearly state what the tool does and how it differs from similar tools?

The description uses the specific verb "Create" with a clear resource ("ad unit") and scope ("within an existing campaign"). It effectively distinguishes this tool from its sibling create_campaign by specifying that it creates units within campaigns, not campaigns themselves.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5
Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides implied guidance through "within an existing campaign", indicating a prerequisite workflow, but lacks explicit when-to-use guidance, prerequisites (e.g. "requires campaign_id from list_campaigns"), or named alternatives such as update_campaign.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
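Based on the parameters listed above, a create_ad payload might be assembled as in the following sketch. The helper name, the example campaign ID, and the client-side checks are illustrative assumptions, not part of the server's API:

```python
def build_create_ad_args(campaign_id, creative_text, link_url, keywords,
                         geo="ALL", language="en", categories=None):
    """Assemble arguments for the create_ad tool (hypothetical helper)."""
    if not keywords:
        raise ValueError("keywords requires at least one entry")
    if len(creative_text) > 500:
        raise ValueError("creative_text is limited to 500 characters")
    args = {
        "campaign_id": campaign_id,
        "creative_text": creative_text,
        "link_url": link_url,
        "keywords": keywords,
        "geo": geo,          # defaults mirror the schema: "ALL"
        "language": language,  # and "en"
    }
    if categories:
        args["categories"] = categories
    return args

# Example payload; "cmp_123" is a made-up ID from a prior create_campaign call.
args = build_create_ad_args(
    campaign_id="cmp_123",
    creative_text="Try Acme CLI, the fastest way to ship.",
    link_url="https://example.com/acme",
    keywords=["cli", "devtools"],
)
```

The checks only enforce constraints stated in the schema (at least one keyword, 500-character creative limit); anything else would need server-side confirmation.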

create_campaign (Grade C)

Create a new advertising campaign with budget, pricing model, and objective.

Parameters (JSON Schema):
- name (required): Campaign name
- end_date (optional): Campaign end date (ISO format)
- objective (required): Campaign objective
- bid_amount (required): Bid amount per event (CPM: per 1000 impressions, CPC: per click, CPA: per conversion)
- start_date (optional): Campaign start date (ISO format)
- daily_budget (optional): Daily budget cap in USD
- total_budget (required): Total budget in USD
- pricing_model (required): Pricing model

Behavior: 2/5
No annotations are provided, so the description carries the full burden. It states only that a campaign is created, omitting return value details (critical for a creation tool lacking an output schema), side effects (billing activation), date constraints, and idempotency behavior. No disclosure of auth requirements or rate limits.

Conciseness: 4/5
A single sentence, front-loaded with an action verb. No wasted words or redundant phrases. Efficient structure, though arguably too brief for the tool's complexity. Each element (action, resource, key fields) earns its place.

Completeness: 2/5
Inadequate for a resource-creation tool with 8 parameters and no output schema. Missing: return value specification (e.g. campaign ID), prerequisite workflows (wallet registration/verification), behavioral constraints (date validation), and relationships to sibling tools (the create_ad dependency). It relies entirely on the schema for parameter documentation.

Parameters: 3/5
Schema coverage is 100%, establishing a baseline of 3. The description mentions "budget" ambiguously (it could be daily or total) without clarifying the relationship between the daily_budget and total_budget parameters. It does not explain the optional date fields or bid_amount currency considerations beyond what the schema enumerates.

Purpose: 4/5
States a specific verb ("Create") and resource ("advertising campaign"), and hints at scope with "new" to distinguish from update_campaign. Mentions the key constituent fields (budget, pricing model, objective), giving concrete context. Falls short of explicitly naming sibling alternatives or lifecycle prerequisites.

Usage Guidelines: 2/5
Provides no guidance on when to use this versus update_campaign, nor prerequisites such as wallet registration (the sibling tool register_wallet suggests this may be required). Silent on sequencing (e.g. whether create_ad should follow) and campaign lifecycle implications.
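A create_campaign payload following the parameter list above could be built as below. The helper name and example values are hypothetical; only the field names, required/optional split, and ISO date format come from the schema:

```python
def build_create_campaign_args(name, objective, pricing_model, bid_amount,
                               total_budget, daily_budget=None,
                               start_date=None, end_date=None):
    """Assemble arguments for create_campaign (hypothetical helper).

    Required per the schema: name, objective, pricing_model, bid_amount,
    total_budget. bid_amount is per event: per 1000 impressions (CPM),
    per click (CPC), or per conversion (CPA).
    """
    args = {
        "name": name,
        "objective": objective,
        "pricing_model": pricing_model,
        "bid_amount": bid_amount,
        "total_budget": total_budget,  # USD
    }
    if daily_budget is not None:
        args["daily_budget"] = daily_budget  # optional USD cap
    if start_date is not None:
        args["start_date"] = start_date  # ISO format, e.g. "2026-03-01"
    if end_date is not None:
        args["end_date"] = end_date
    return args

# Example: a CPC campaign with a daily cap; values are illustrative only.
campaign_args = build_create_campaign_args(
    "Spring launch", "conversions", "CPC",
    bid_amount=0.25, total_budget=100.0, daily_budget=10.0,
)
```

Note that, as the evaluation points out, the tool's description does not state what is returned, so the campaign ID needed by create_ad would have to be read from whatever the server actually responds with.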

get_ad_guidelines (Grade A)

Get formatting guidelines for how to present sponsored ads naturally in agent responses.

Parameters (JSON Schema): none

Behavior: 2/5
No annotations are provided, so the description carries the full disclosure burden. While "Get" implies read-only behavior, the description lacks details on caching behavior, authentication requirements, whether guidelines are static or dynamic, rate limits, and response format. It fails to disclose behavioral traits beyond the basic operation type.

Conciseness: 5/5
A single efficient sentence with no redundant words. Front-loaded with the action verb "Get", immediately followed by the resource and contextual purpose. Every clause earns its place.

Completeness: 3/5
Adequate for a zero-parameter retrieval tool, but lacking richness given that there is no output schema and no annotations. The description conceptually explains the return value ("formatting guidelines") but does not specify its format (JSON, markdown, plain text?), its structure, or whether guidelines vary by campaign or are global. Missing the behavioral context expected when annotations are absent.

Parameters: 4/5
The input schema contains zero parameters. Per the rubric, 0 parameters warrants a baseline score of 4. The description implies that no filtering or configuration is needed to retrieve guidelines, which aligns with the empty schema.

Purpose: 5/5
The description uses the specific verb "Get" with a clear resource ("formatting guidelines") and scope ("for how to present sponsored ads naturally in agent responses"). It effectively distinguishes this tool from siblings like create_ad, search_ads, and update_campaign by focusing on retrieval of display rules rather than ad creation or management.

Usage Guidelines: 3/5
Provides implied usage context through "for how to present sponsored ads naturally in agent responses", indicating retrieval when preparing to display ads. However, it lacks explicit guidance on when to call this versus alternatives (before creating ads? per campaign?) and does not state prerequisites or trigger conditions.

get_campaign_analytics (Grade B)

Get performance metrics for a campaign (impressions, clicks, conversions, spend).

Parameters (JSON Schema):
- campaign_id (required): Campaign ID to get analytics for

Behavior: 2/5
No annotations are provided, so the description carries the full disclosure burden. While "Get" implies read-only, critical behavioral context is missing: time range coverage (lifetime vs. windowed), data freshness (real-time vs. delayed), pagination, and error handling for invalid campaign IDs.

Conciseness: 5/5
A single sentence efficiently conveys the purpose and enumerates the return values in a parenthetical. Zero redundancy; high information density per word.

Completeness: 4/5
Adequately compensates for the missing output schema by listing the specific metrics returned (impressions, clicks, conversions, spend). Missing only behavioral scope (time windows) to be fully complete for a single-parameter analytics tool.

Parameters: 3/5
The schema has 100% description coverage for the single required campaign_id parameter. The description omits parameter discussion, but the schema is self-documenting. The baseline score is warranted given complete schema coverage.

Purpose: 4/5
Clear verb "Get" with the specific resource "performance metrics", enumerating the specific metric types (impressions, clicks, conversions, spend). Implicitly distinguishes from list_campaigns by focusing on analytics data rather than listing campaigns, though explicit sibling differentiation is absent.

Usage Guidelines: 2/5
Provides no guidance on when to use this versus sibling alternatives (e.g. list_campaigns for metadata vs. analytics), nor prerequisites like campaign ownership, status requirements, or privacy restrictions.

get_developer_earnings (Grade A)

Get earnings summary for the authenticated developer: total revenue, event counts, per-campaign breakdown, and period-based earnings (24h, 7d, 30d, all-time).

Parameters (JSON Schema): none

Behavior: 3/5
No annotations are provided, so the description carries the full burden. It mentions "authenticated developer", implying auth requirements, and details the return data structure (24h, 7d, 30d, and all-time periods; per-campaign breakdown). Missing: cache behavior, real-time vs. delayed data, empty-result handling, and rate limits.

Conciseness: 5/5
A single well-structured sentence, front-loaded with an action verb. The colon efficiently introduces the enumerated return values. No redundant words; the period-based earnings are listed in logical chronological order.

Completeness: 4/5
Appropriately complete for a simple read operation. Compensates for the missing output schema by detailing the return data structure (period breakdowns, per-campaign granularity). Minor gap: it could explicitly confirm the authentication mechanism or the behavior when no earnings exist.

Parameters: 4/5
The input schema contains zero parameters (an empty object). Per the guidelines, 0 parameters sets a baseline of 4. The description correctly implies that no filtering parameters are needed (it returns all data for the authenticated developer).

Purpose: 5/5
Excellent clarity: the specific verb "Get", the resource "earnings summary", the scope "authenticated developer", and a comprehensive list of returned data points (revenue, event counts, per-campaign breakdown, period-based earnings). Distinguishes itself from the sibling get_campaign_analytics by focusing on financial earnings rather than performance metrics.

Usage Guidelines: 3/5
Provides implicit guidance through the specificity of the returned data (earnings vs. analytics), but lacks explicit when-to-use guidance or comparisons to alternatives like request_withdrawal or get_campaign_analytics. No mention of prerequisites or error states.

get_verification_status (Grade A)

Check the verification status of a conversion event (verified, pending, or rejected).

Parameters (JSON Schema):
- event_id (required): The event_id returned by report_event

Behavior: 4/5
No annotations are provided, so the description carries the full burden. It successfully discloses the behavioral domain by listing the possible return values (verified, pending, rejected). However, it omits other behavioral traits, such as whether this requires polling, rate limits, or the fact that status transitions are asynchronous.

Conciseness: 5/5
A single sentence with zero waste. Front-loaded with the action "Check the verification status", efficiently packing the enum values into a parenthetical. Every word earns its place.

Completeness: 4/5
For a simple one-parameter tool without an output schema, the description adequately covers the essential contract by defining the return value domain (three statuses). Minor gap: it could explicitly state the asynchronous relationship to report_event in the description text rather than relying solely on the parameter schema.

Parameters: 3/5
The schema has 100% description coverage (event_id is described as "returned by report_event"), establishing a baseline of 3. The main description adds minimal parameter-specific semantics beyond "of a conversion event", relying on the schema to carry the semantic load.

Purpose: 5/5
The description uses the specific verb "Check" with the resource "verification status" and distinguishes itself from siblings (e.g. create_ad, report_event) by being a read-only status lookup. The parenthetical enumeration of states (verified, pending, rejected) further clarifies the scope.

Usage Guidelines: 3/5
Provides implied usage context by referencing "conversion event" and listing the status values, but lacks explicit workflow guidance such as "use after report_event to poll for asynchronous verification" or when to prefer this over other analytics tools.
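The polling workflow the evaluation alludes to (report_event first, then check status until it leaves "pending") can be sketched as below. `call_tool` stands in for whatever MCP client call your framework provides, and the attempt/delay numbers are arbitrary; only the three status values come from the tool's description:

```python
import time

def wait_for_verification(call_tool, event_id, attempts=10, delay_s=5.0):
    """Poll get_verification_status until a terminal state or attempts run out."""
    for _ in range(attempts):
        result = call_tool("get_verification_status", {"event_id": event_id})
        if result["status"] in ("verified", "rejected"):
            return result["status"]  # terminal states
        time.sleep(delay_s)  # still "pending": back off and retry
    return "pending"  # caller decides whether to keep waiting
```

A fixed delay is used here for simplicity; since the tool documents no rate limits, a real client might prefer exponential backoff.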

list_campaigns (Grade B)

List all campaigns for the authenticated advertiser with summary stats.

Parameters (JSON Schema):
- status (optional): Filter by campaign status

Behavior: 3/5
Without annotations, the description carries the full burden. It adds "authenticated advertiser" (auth context) and "summary stats" (return data type), but omits pagination behavior, rate limits, and result-set bounds for a "list all" operation.

Conciseness: 5/5
A single efficient sentence with no filler. Information density is high, with zero wasted words.

Completeness: 3/5
Adequate for a simple one-parameter list tool. Mentions "summary stats" to compensate for the missing output schema, though it should ideally note pagination or a maximum result count for a "list all" operation.

Parameters: 3/5
The schema has 100% description coverage ("Filter by campaign status"), so the baseline is 3. The description implies optional filtering by stating "List all", aligning with the optional status parameter, but adds no syntax details.

Purpose: 4/5
Clear verb "List" and resource "campaigns", with the scope "for the authenticated advertiser". "Summary stats" distinguishes it from the sibling get_campaign_analytics, though no explicit comparison is made.

Usage Guidelines: 2/5
No explicit guidance on when to use this versus get_campaign_analytics (detailed data) or search_ads. No mention of prerequisites or when-not-to-use conditions.

register_wallet (Grade A)

Register a wallet address for receiving on-chain conversion payouts. Optionally provide a signature to prove ownership (EIP-191).

Parameters (JSON Schema):
- signature (optional): Optional EIP-191 signature of a challenge message to prove wallet ownership
- timestamp (optional): Timestamp used in the challenge message (required if a signature is provided)
- wallet_address (required): Ethereum wallet address (0x...)

Behavior: 3/5
No annotations are provided, so the description carries the full disclosure burden. It provides valuable technical context about the EIP-191 signature standard and mentions on-chain payouts, but lacks critical behavioral details: idempotency (can you update or re-register?), side effects (does this replace an existing wallet?), authentication requirements, and return value structure.

Conciseness: 5/5
Two well-structured sentences with zero waste. The first delivers the core action and purpose immediately; the second efficiently covers the optional advanced feature (the signature) without clutter. Perfectly front-loaded.

Completeness: 3/5
Given no output schema and no annotations, and considering that this is a mutation tool in a financial workflow (with siblings like request_withdrawal), the description should explain success indicators, return values, or next steps. It covers the "what" and "how" but not the "what happens after" or workflow integration.

Parameters: 4/5
With 100% schema description coverage, the baseline is 3. The description adds meaningful semantic value by explaining that the optional signature is specifically to "prove ownership", providing intent context beyond the schema's technical description. It also implicitly clarifies the relationship between timestamp and signature.

Purpose: 5/5
The description uses a specific verb ("Register") with a specific resource ("wallet address") and clarifies the exact purpose ("receiving on-chain conversion payouts"). It clearly distinguishes this tool from its sibling request_withdrawal (which implies taking money out) by focusing on the setup/receiving side.

Usage Guidelines: 3/5
While the description implies that this is a prerequisite for receiving payouts, it lacks explicit guidance on when to use this tool versus alternatives, workflow ordering (e.g. "register before requesting withdrawal"), or prerequisites. The usage is implied but never stated.
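The schema's conditional rule (timestamp is required whenever a signature is provided) can be enforced client-side as in this sketch. The helper name and the address-format check are assumptions; the server's own validation may differ:

```python
import re

def build_register_wallet_args(wallet_address, signature=None, timestamp=None):
    """Assemble arguments for register_wallet (hypothetical helper)."""
    # "0x..." per the schema; the 40-hex-digit length check is a
    # client-side assumption about Ethereum addresses.
    if re.fullmatch(r"0x[0-9a-fA-F]{40}", wallet_address) is None:
        raise ValueError("wallet_address must be a 0x-prefixed hex address")
    # Schema rule: timestamp is required if a signature is provided.
    if signature is not None and timestamp is None:
        raise ValueError("timestamp is required when a signature is provided")
    args = {"wallet_address": wallet_address}
    if signature is not None:
        args["signature"] = signature  # EIP-191 proof of ownership
        args["timestamp"] = timestamp
    return args
```

Omitting both optional fields registers the address without an ownership proof, which the description permits.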

report_event (Grade A)

Report an ad event (impression, click, or conversion). For on-chain verified campaigns, conversions require tx_hash and chain_id.

Parameters (JSON Schema):
- ad_id (required): The ad_id from search_ads results
- tx_hash (optional): Transaction hash for on-chain verified conversions
- chain_id (optional): Chain ID for on-chain verified conversions (e.g. 8453 for Base)
- metadata (optional): Additional event metadata
- event_type (required): Type of event
- context_hash (optional): Hash of the message containing the ad (for verification)

Behavior: 3/5
No annotations are provided, so the description carries the full burden. It successfully discloses the on-chain verification domain and cross-chain requirements. However, it omits standard behavioral traits: idempotency (can events be reported twice?), side effects, success confirmation, and the rate limits one would expect for a mutation tool.

Conciseness: 5/5
Two sentences, zero waste. Front-loaded with the primary action ("Report an ad event"), followed by specific conditional guidance. Every word earns its place; no redundancy with the schema field descriptions.

Completeness: 4/5
Adequately covers the hybrid on-chain/off-chain complexity and the conditional parameter requirements. Minor gap: it does not elaborate on the expected metadata structure or the context_hash verification flow, though the schema descriptions partially cover this. No output schema exists, so return values need not be explained.

Parameters: 4/5
Schema coverage is 100%, establishing a baseline of 3. The description adds critical semantic value by specifying the conditional requirement logic for tx_hash and chain_id (required only for conversions on on-chain verified campaigns), which the schema cannot express through static descriptions alone.

Purpose: 5/5
Uses the specific verb "Report" with the resource "ad event" and enumerates the specific event types (impression, click, conversion). The action clearly distinguishes this tool from its analytics and CRUD siblings (get_campaign_analytics, create_ad) through its distinct reporting verb.

Usage Guidelines: 4/5
Provides an explicit conditional requirement: "For on-chain verified campaigns, conversions require tx_hash and chain_id." This clearly signals when specific parameters become mandatory. It lacks explicit when-not-to-use guidance and named sibling alternatives, preventing a 5.
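The conditional requirement praised above (tx_hash and chain_id become mandatory only for conversions on on-chain verified campaigns) translates directly into a payload check. The `on_chain_verified` flag is a hypothetical piece of caller-side state, since the tool itself does not expose it:

```python
def build_report_event_args(ad_id, event_type, on_chain_verified=False,
                            tx_hash=None, chain_id=None,
                            context_hash=None, metadata=None):
    """Assemble arguments for report_event (hypothetical helper)."""
    if event_type not in ("impression", "click", "conversion"):
        raise ValueError("event_type must be impression, click, or conversion")
    # Per the description: conversions on on-chain verified campaigns
    # must carry tx_hash and chain_id (e.g. 8453 for Base).
    if event_type == "conversion" and on_chain_verified and not (tx_hash and chain_id):
        raise ValueError("on-chain verified conversions require tx_hash and chain_id")
    args = {"ad_id": ad_id, "event_type": event_type}
    optional = {"tx_hash": tx_hash, "chain_id": chain_id,
                "context_hash": context_hash, "metadata": metadata}
    args.update({k: v for k, v in optional.items() if v is not None})
    return args
```

For plain impressions and clicks the optional fields can simply be left out.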

request_withdrawalAInspect

Request a withdrawal of earned USDC.e to your registered Base wallet. Minimum $1.00, maximum $50.00 per request. Rate limited to 1 per hour.

ParametersJSON Schema
NameRequiredDescriptionDefault
amountNoAmount in USD to withdraw. If omitted, withdraws full available balance (up to $50).
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full disclosure burden and successfully communicates critical behavioral traits: financial bounds ($1-$50), rate limiting (1/hour), the specific token standard (USDC.e), and the target network (Base). It implies an asynchronous 'request' model rather than instant transfer, though it omits details on failure states, reversibility, or confirmation timing.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description consists of three tightly constructed sentences with zero redundancy: the first establishes the operation and destination, the second states financial limits, and the third declares rate limits. Every sentence earns its place, with the most critical information (the withdrawal action) front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given this is a single-parameter financial mutation tool with no output schema and no annotations, the description provides sufficient context by specifying currency, network, amount constraints, and rate limits. It adequately covers the tool's scope, though it could be improved by mentioning the prerequisite registration step explicitly or describing the expected return value.

Parameters: 3/5

The input schema has 100% description coverage (the 'amount' parameter is fully documented with type, bounds, and omission behavior), establishing a baseline of 3. The description text reinforces the $1-$50 constraints but does not add semantic meaning beyond what the schema already provides regarding parameter usage or format.

Purpose: 5/5

The description uses a specific verb phrase ('Request a withdrawal'), identifies the exact resource ('earned USDC.e'), and specifies the destination ('registered Base wallet'). It clearly distinguishes this financial operation from its campaign/advertising siblings (create_ad, update_campaign, etc.) by focusing on fund withdrawal rather than marketing asset management.

Usage Guidelines: 4/5

The description provides explicit operational constraints that guide usage: minimum $1.00, maximum $50.00 per request, and rate limiting of 1 per hour. It implies a prerequisite ('registered Base wallet') which signals the need for prior wallet registration, though it does not explicitly reference the register_wallet sibling tool or specify when to check earnings first.

search_ads (Grade: B)

Search for relevant ads matching a user intent/context. Returns ranked sponsored suggestions.

Parameters (JSON Schema)

| Name          | Required | Description                                                                                  | Default |
|---------------|----------|----------------------------------------------------------------------------------------------|---------|
| geo           | No       | Country code (e.g. "US")                                                                     | —       |
| query         | No       | Natural language intent (e.g. "best running shoes")                                          | —       |
| category      | No       | Product/service category                                                                     | —       |
| keywords      | No       | Explicit keywords to match against                                                           | —       |
| language      | No       | Language code                                                                                | en      |
| max_results   | No       | Max ads to return                                                                            | —       |
| min_relevance | No       | Minimum relevance score (0-1). Ads below this threshold are excluded. Default 0 (return all). | 0       |
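Since every parameter here is optional, a caller typically supplies only the signals it has and lets server defaults cover the rest. A minimal sketch of assembling the `arguments` object, assuming the parameter names from the schema above (the helper itself and its omit-if-unset policy are illustrative, not part of the server):

```python
def build_search_ads_args(query=None, geo=None, category=None, keywords=None,
                          language="en", max_results=None, min_relevance=None):
    """Assemble the arguments object for a search_ads tools/call.

    All parameters are optional per the schema; language defaults to "en".
    Unset values are omitted so server-side defaults apply.
    """
    if min_relevance is not None and not (0.0 <= min_relevance <= 1.0):
        raise ValueError("min_relevance must be within 0-1")
    raw = {
        "query": query, "geo": geo, "category": category,
        "keywords": keywords, "language": language,
        "max_results": max_results, "min_relevance": min_relevance,
    }
    # Drop unset keys so the wire payload stays minimal.
    return {k: v for k, v in raw.items() if v is not None}

args = build_search_ads_args(query="best running shoes", geo="US",
                             min_relevance=0.4)
```

With the inputs above, `args` carries only `query`, `geo`, the default `language`, and `min_relevance`; how the server combines query, category, and keywords (AND vs OR) is not documented, which is exactly the gap the Completeness score flags.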
Behavior: 3/5

With no annotations provided, the description carries the full burden. It successfully discloses that results are 'ranked' and 'sponsored,' indicating commercial content and ordering behavior. However, it omits rate limits, pagination behavior, what happens when no matches exist, or the structure/format of returned suggestions.

Conciseness: 4/5

Two efficient sentences with no redundant words. Front-loaded with the core action. However, given the tool has 7 parameters and no output schema, the extreme brevity may leave agents under-informed rather than optimally concise.

Completeness: 3/5

Minimum viable coverage for a 7-parameter search tool. Mentions the return concept ('ranked sponsored suggestions') which partially compensates for missing output schema, but lacks detail on result structure, filtering logic (AND vs OR across parameters), or error conditions.

Parameters: 3/5

Schema description coverage is 100%, so the schema fully documents all 7 parameters (geo, query, category, etc.). The description adds the conceptual framing ('user intent/context') which helps interpret the query/category/keywords relationship, meeting the baseline expectation when schema coverage is high.

Purpose: 4/5

Clear verb ('Search') and resource ('ads'). The phrase 'matching a user intent/context' helps distinguish this from simple listing operations, and 'ranked sponsored suggestions' specifies the nature of the results. However, it doesn't explicitly differentiate from list_campaigns (which returns campaign structures vs ad content).

Usage Guidelines: 2/5

No guidance provided on when to use search_ads versus list_campaigns or get_campaign_analytics. No mention of prerequisites (e.g., whether campaigns must be active first) or when search is preferred over direct retrieval.

update_campaign (Grade: A)

Update an existing campaign: modify name, objective, budget, bid, or status (pause/resume)

Parameters (JSON Schema)

| Name         | Required | Description                                   | Default |
|--------------|----------|-----------------------------------------------|---------|
| name         | No       | New campaign name                             | —       |
| status       | No       | New status (active to pause, paused to resume) | —       |
| end_date     | No       | New end date (ISO format)                     | —       |
| objective    | No       | New objective                                 | —       |
| bid_amount   | No       | New bid amount                                | —       |
| start_date   | No       | New start date (ISO format)                   | —       |
| campaign_id  | Yes      | Campaign ID to update                         | —       |
| daily_budget | No       | New daily budget cap in USD                   | —       |
| total_budget | No       | New total budget in USD                       | —       |
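Only `campaign_id` is required, which suggests (but, as the Behavior score notes, does not guarantee) partial-update semantics: send just the fields you want changed. A sketch of a client-side guard under that assumption; the helper, its field whitelist, and the example campaign ID are illustrative, not part of the server:

```python
def build_update_campaign_args(campaign_id, **changes):
    """Assemble arguments for update_campaign; only campaign_id is required.

    Allowed field names mirror the schema above. Unknown keys are rejected
    so a typo doesn't silently no-op (a client-side guard, not documented
    server behavior).
    """
    allowed = {"name", "status", "objective", "bid_amount",
               "daily_budget", "total_budget", "start_date", "end_date"}
    unknown = set(changes) - allowed
    if unknown:
        raise ValueError(f"unknown fields: {sorted(unknown)}")
    return {"campaign_id": campaign_id, **changes}

# Pause a campaign and lower its daily cap in one call.
args = build_update_campaign_args("cmp_123", status="paused",
                                  daily_budget=10.0)
```

Whether omitted fields are truly left untouched, and whether changes take effect immediately, is exactly what the description leaves unstated.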
Behavior: 2/5

No annotations provided, so description carries full burden. Provides minimal behavioral context beyond the operation type—no disclosure of partial vs full update semantics, validation rules, side effects, or timing of changes. Only behavioral hint is parenthetical '(pause/resume)' for status values.

Conciseness: 5/5

Single sentence, front-loaded with action ('Update an existing campaign'), followed by colon-delimited field list. Zero redundancy—every word conveys specific information about scope or mappable parameters.

Completeness: 3/5

Adequate for a standard CRUD operation given excellent schema coverage (100%) documents all 9 parameters. However, as a mutation tool with no annotations or output schema, lacks behavioral specifics (e.g., 'partial update', 'changes immediate') that would complete the contract.

Parameters: 3/5

Schema has 100% description coverage, establishing baseline of 3. Description adds value by semantically grouping 'daily_budget' and 'total_budget' under 'budget', and mapping 'bid_amount' to 'bid'. However, does not clarify parameter relationships (e.g., start_date vs end_date constraints) or optional vs required nature beyond what schema provides.

Purpose: 5/5

Description uses specific verb 'Update' with resource 'campaign' and explicitly qualifies 'existing' to distinguish from sibling 'create_campaign'. Lists specific mutable fields (name, objective, budget, bid, status) providing clear scope.

Usage Guidelines: 3/5

Implies usage context by specifying 'existing campaign' and enumerating modifiable fields, which distinguishes it from 'create_campaign'. However, lacks explicit when-to-use guidance, prerequisites (e.g., campaign_id source), or warnings about mutability constraints.
