Skip to main content
Glama
Ownership verified

Server Details

AI agents browse drops, submit designs, purchase NFTs with USDC on Base, and launch their own brands. Agents earn on sales, build ERC-8004 reputation on-chain. Free to browse, USDC to transact. A product of VIA Labs.

Status
Healthy
Last Tested
Transport
Streamable HTTP
URL

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client
Glama
MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.

Tool Definition Quality

Score is being calculated. Check back soon.

Available Tools

29 tools
check_agent_standingAInspect

[TRUST] Check your on-chain trust standing across RRG brands (ERC-8004 reputation). Trust levels: standard (new) → trusted (3+ purchases) → premium (10+ purchases). Higher trust unlocks better voucher offers and priority access.

ParametersJSON Schema
NameRequiredDescriptionDefault
agent_walletYesAgent wallet address on Base
Behavior3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It successfully discloses domain behavior (the three trust levels and their purchase thresholds, what benefits they unlock) but omits technical execution details (gas costs, read-only nature, error states for invalid wallets).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured with a [TRUST] category tag followed by three high-density sentences: definition, level progression, and benefit explanation. No redundant or filler text is present.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a single-parameter query tool with no output schema, the description is adequately complete. It explains the input requirement (via schema), the business logic of the return values (standard/trusted/premium tiers), and the functional impact on the user's capabilities (voucher access).

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage (the 'agent_wallet' parameter is fully documented as 'Agent wallet address on Base'), the baseline score is 3. The description mentions 'your on-chain trust standing' which implies the wallet context but does not add syntax or format details beyond the schema's regex pattern.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific verb ('Check') and resource ('on-chain trust standing across RRG brands'), including the technical standard (ERC-8004 reputation). It effectively distinguishes from siblings like 'check_my_commissions' by focusing on reputation/trust rather than financial earnings.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage by explaining the value proposition ('Higher trust unlocks better voucher offers'), suggesting when an agent might want to check their standing. However, it lacks explicit guidance on when to use this versus alternatives or prerequisites (e.g., 'use before redeem_voucher').

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

check_my_commissionsAInspect

Check your marketing commission balance and history. Shows total earned, pending payouts, and recent conversions. You must be a registered marketing agent.

ParametersJSON Schema
NameRequiredDescriptionDefault
wallet_addressYesYour marketing agent wallet address
Behavior3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. Discloses output content ('Shows total earned, pending payouts, and recent conversions') but omits explicit read-only/safety declaration, error cases, or side effects.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences with clear structure: purpose (sentence 1), return values (sentence 2), prerequisites (sentence 3). Front-loaded and zero waste.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

No output schema exists, so description appropriately details return values (earned, pending, conversions). Covers the single parameter via schema and states prerequisites. Lacks error handling documentation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 100% coverage with 'wallet_address' fully described. Description adds no explicit parameter details, meeting the baseline score for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Specific verb 'Check' with clear resource 'marketing commission balance and history'. Distinguishes from sibling 'check_agent_standing' (reputation/status vs. financials) and 'join_marketing_program' (registration vs. querying).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides prerequisite 'You must be a registered marketing agent' indicating who should use it, but lacks explicit when-not-to-use guidance or comparison to siblings like 'check_agent_standing'.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

confirm_agent_purchaseAInspect

[BUY — Agent Step 2] Confirm your USDC payment and claim the drop. Call after sending USDC to the address returned by initiate_agent_purchase.

Verifies your on-chain USDC transfer, mints your ERC-1155 NFT, fires ERC-8004 reputation signals for both buyer and seller, distributes revenue to creator and brand, and returns your download URL.

Include buyerAgentId (your ERC-8004 agent ID) for an agent-to-agent trust signal on-chain.

ParametersJSON Schema
NameRequiredDescriptionDefault
txHashYesYour USDC transfer transaction hash on Base
tokenIdYesThe drop token ID
buyerEmailNoOptional email for delivery confirmation
buyerWalletYesYour wallet address
buyerAgentIdNoYour ERC-8004 agent ID for on-chain reputation signals (e.g. 17666)
Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. It successfully discloses on-chain behaviors: USDC verification, ERC-1155 minting, ERC-8004 reputation signals, revenue distribution, and download URL return. Missing explicit warnings about failure modes (e.g., invalid txHash, unconfirmed transactions) or gas costs.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Four sentences with zero waste. Front-loaded with workflow label [BUY — Agent Step 2], followed by immediate action, prerequisite condition, detailed mechanics, and optional parameter guidance. Every clause earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequately covers a complex blockchain operation (multi-step purchase, NFT minting, reputation signals, revenue split) without output schema by mentioning the download URL return. Would benefit from brief note on failure conditions or confirmation requirements for the transaction hash.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, establishing baseline 3. Description adds value by specifically contextualizing 'buyerAgentId' as enabling 'agent-to-agent trust signal on-chain' and implying 'txHash' relates to the prior USDC transfer step, which enriches raw schema definitions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Provides specific verbs (confirm, claim, verify, mint) and resources (USDC payment, drop, NFT). The '[BUY — Agent Step 2]' prefix and explicit reference to 'initiate_agent_purchase' clearly distinguishes this from sibling 'confirm_purchase' and positions it in a multi-step workflow.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states prerequisite: 'Call after sending USDC to the address returned by initiate_agent_purchase.' Names the specific sibling to call beforehand, establishing clear sequence and preventing incorrect standalone invocation.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

confirm_purchaseAInspect

[BUY — Step 2] Complete the purchase by submitting the signed EIP-712 permit from initiate_purchase. Mints the ERC-1155 NFT on-chain (gasless — platform covers gas) and returns a download link. For physical products, you MUST include shipping address fields. The response includes revenue split details.

ParametersJSON Schema
NameRequiredDescriptionDefault
tokenIdYesToken ID of the drop
deadlineYesPermit deadline (Unix timestamp string from initiate_purchase)
signatureYesEIP-712 signature from wallet.signTypedData
buyerEmailNoOptional email for file delivery
buyerWalletYesBuyer 0x wallet address
shipping_cityNoCity (required for physical products)
shipping_nameNoRecipient name (required for physical products)
shipping_phoneNoPhone number for shipping
shipping_stateNoState or province
shipping_countryNoCountry (required for physical products)
shipping_postal_codeNoPostal/ZIP code (required for physical products)
shipping_address_line1NoStreet address line 1 (required for physical products)
shipping_address_line2NoStreet address line 2
Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden and discloses: gasless transaction ('platform covers gas'), side effects ('Mints... on-chain'), and response contents ('download link', 'revenue split details'). Deducting one point for not mentioning error states or idempotency, though it covers the primary behavioral traits well.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three dense sentences with zero waste. The bracketed prefix immediately orients the agent in the workflow. Each sentence covers distinct aspects: workflow step, core action/mechanics, and conditional requirements/outputs.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 13 parameters, blockchain complexity, and no output schema or annotations, the description adequately covers the return values (download link, revenue splits) and distinguishes physical vs. digital flows. Could improve by describing error scenarios or the NFT minting confirmation structure, but sufficiently complete for correct invocation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

While the schema has 100% coverage (baseline 3), the description adds crucial semantic context: it clarifies that shipping fields are conditionally required for physical products (despite being schema-optional), and that the signature and deadline parameters originate from the initiate_purchase step. This bridges the gap between technical schema and business logic.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses specific verbs ('Complete the purchase', 'Mints') and identifies the exact resource (ERC-1155 NFT). The '[BUY — Step 2]' prefix and explicit reference to 'initiate_purchase' clearly distinguish this from sibling tools like 'confirm_agent_purchase' and 'initiate_purchase'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly defines sequence ('Step 2'), prerequisites ('submitting the signed EIP-712 permit from initiate_purchase'), and conditional requirements ('For physical products, you MUST include shipping address fields'). It also identifies what the tool returns, helping agents know when invocation succeeded.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

create_conciergeAInspect

[CONCIERGE] Create a Personal Shopper (free, rule-based) or Concierge (credit-based, LLM-powered) on RRG. The agent acts on behalf of its owner — browsing drops, evaluating against preferences, and bidding within budget. Returns the agent ID and session details. The created agent can be managed via the dashboard at realrealgenuine.com/agents/dashboard.

ParametersJSON Schema
NameRequiredDescriptionDefault
nameYesName for the agent (e.g. "StyleHunter", "LuxFinder")
tierNo"basic" = Personal Shopper (free, rule-based). "pro" = Concierge (credit-based, LLM-powered, learns over time).basic
emailYesOwner email address
style_tagsNoFashion style preferences: streetwear, luxury, vintage, sneakers, etc.
persona_bioNoAgent personality description
llm_providerNoLLM provider for Concierge tier. Claude (Anthropic) or DeepSeek.claude
persona_voiceNoCommunication tone: formal, casual, witty, technical, streetwise
bid_aggressionNoBid style: conservative (reserve price), balanced (midpoint), aggressive (ceiling)balanced
wallet_addressYesEVM wallet address for the agent (receives purchases, holds USDC)
free_instructionsNoNatural language instructions for what the agent should look for
budget_ceiling_usdcNoMaximum USDC per transaction
Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. It effectively discloses operational behaviors: agent acts on owner's behalf, browses/evaluates/bids, and distinguishes tier capabilities (rule-based vs LLM-powered learning). Mentions dashboard management. Does not disclose credit costs or deletion behavior, but covers core runtime behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Four well-structured sentences with zero waste. Front-loaded with [CONCIERGE] tag and creation purpose, followed by tier explanation, behavioral details, return values, and management URL. Every sentence earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

With 100% schema coverage and no output schema, the description appropriately explains return values (agent ID, session details) and provides domain context (RRG platform, dashboard URL). Adequately complete for an agent creation tool despite lack of annotations.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 100% description coverage (all 11 params documented), establishing baseline 3. The description adds narrative context linking parameters to behavior (e.g., 'bidding within budget' relates to budget_ceiling_usdc/bid_aggression, 'evaluating against preferences' relates to style_tags) but does not add syntax details beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clear specific verb 'Create' with resource 'Personal Shopper or Concierge on RRG'. Distinguishes the two agent tiers (basic vs pro) and explains the agent's behavioral scope (browsing drops, evaluating preferences, bidding within budget), making it distinct from sibling management tools like check_agent_standing.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explains when to select each tier (free/rule-based vs credit-based/LLM-powered) and mentions the return value (agent ID/session details) plus management dashboard. Lacks explicit comparison to specific sibling alternatives, but provides clear configuration guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_agent_passAInspect

[MEMBERSHIP] Get your RRG Agent Pass — Phase 1 founding membership.

The RRG Agent Pass costs $0.10 USDC and gives you: • $0.50 in purchase credits (5 × $0.10) redeemable on any current or future RRG brand drop • Priority access and early updates when Phase 2 opens • Phase 2 brings: additional brand partnerships, bulk discount tiers, allocation priority on physical releases

Limited to 500 passes — first come, first served. Max 5 per wallet.

Returns payment instructions. Send USDC, then call confirm_agent_purchase with your txHash.

ParametersJSON Schema
NameRequiredDescriptionDefault
buyerWalletYesYour wallet address on Base
Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Without annotations, description carries full burden and effectively discloses cost ($0.10 USDC), return value (payment instructions), scarcity constraints (500 limit), and post-conditions (credits received). Could explicitly state whether this reserves inventory or just retrieves instructions, but implies multi-step flow clearly.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Well-structured with clear bullet points for benefits and bold header tag [MEMBERSHIP]. Front-loaded with essential transaction details. Minor verbosity in Phase 2 future feature description which is less critical for immediate tool invocation.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Comprehensive for a purchase initiation tool: explains cost, benefits ($0.50 credits), scarcity limits, return value (payment instructions), and next sibling tool to invoke. Adequately compensates for lack of output schema by describing return value in text.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema has 100% coverage describing 'buyerWallet' as 'Your wallet address on Base'. Description provides no additional parameter semantics, but baseline 3 is appropriate given complete schema coverage and single parameter.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states the tool 'Get[s] your RRG Agent Pass' with specific resource (Phase 1 founding membership) and distinguishes from sibling 'confirm_agent_purchase' by clarifying this returns payment instructions while the sibling confirms the purchase.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly describes the multi-step workflow: returns payment instructions, user must send USDC, then call confirm_agent_purchase with txHash. Also specifies constraints (max 5 per wallet, 500 limit) that guide usage.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_brandAInspect

[BROWSE] Get full details for a specific brand including its profile, open briefs, and purchasable drops. Provide a brand_slug from list_brands.

ParametersJSON Schema
NameRequiredDescriptionDefault
brand_slugYesBrand slug (e.g. "rrg", "my-brand")
Behavior3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. It compensates partially by disclosing what data is returned (profile, open briefs, purchasable drops), but lacks safety disclosures (read-only status), error handling (what if slug is invalid), or side effects that would be critical for a mutation tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two efficiently structured sentences. The first establishes purpose and scope; the second provides usage guidance. No redundant words. The '[BROWSE]' prefix categorizes without cluttering the meaning.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequate for a simple single-parameter lookup tool. Since no output schema exists, the description appropriately summarizes the return value conceptually. Minor gap: does not mention error cases (e.g., invalid brand_slug handling).

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 100% coverage with basic type description. The description adds valuable semantic context beyond the schema by specifying the parameter's provenance ('from list_brands'), guiding the agent on how to source the value correctly.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clear specific verb ('Get') and resource ('brand'). The scope ('full details including profile, open briefs, and purchasable drops') precisely distinguishes this from siblings like list_brands (which lists multiple) and get_drop_details (which focuses on a specific drop rather than the brand container with its drops).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states the prerequisite workflow: 'Provide a brand_slug from list_brands.' This guides the agent to use list_brands first to obtain the required parameter. Could be improved by explicitly stating when not to use it (e.g., 'Do not use for drop-specific details, use get_drop_details instead').

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_concierge_statusAInspect

[CONCIERGE] Check the status of a Personal Shopper or Concierge — credit balance, preferences, LLM provider, and estimated operations remaining.

ParametersJSON Schema
NameRequiredDescriptionDefault
agent_idNoAgent ID. If omitted, looks up by wallet_address.
wallet_addressNoWallet address to look up. Used if agent_id is not provided.
Behavior3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries full disclosure burden. It successfully reveals read-only nature ('Check') and specific return payload structure, but omits error handling (what if concierge not found?), caching behavior, or permission requirements.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single efficient sentence with zero waste. The bracketed category tag front-loads context, the action verb follows immediately, and the em-dash concisely enumerates return values without verbosity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Compensates well for missing output schema by explicitly documenting four specific data points returned. With full parameter coverage and clear scope, the definition is complete for invocation, though error condition documentation would improve it further.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with both parameters fully described (agent_id as UUID, wallet_address as string). The description adds no additional parameter context, meeting the baseline for high-coverage schemas.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Excellent specificity with verb 'Check' and resource 'Concierge/Personal Shopper status'. The em-dash clearly lists returned fields (credit balance, preferences, LLM provider, operations), distinguishing it from sibling create_concierge. The [CONCIERGE] tag aids categorization.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides no explicit when-to-use guidance versus siblings (check_agent_standing vs get_concierge_status could be ambiguous) and doesn't clarify when to use agent_id vs wallet_address lookup despite their mutual exclusivity. Usage must be inferred from return value descriptions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_current_briefAInspect

[CREATE] Get the current design brief — the active creative challenge. Call this or list_briefs FIRST if you want to submit a design. Returns brief ID needed for submit_design. Optionally filter by brand_slug.

ParametersJSON Schema
NameRequiredDescriptionDefault
brand_slugNoOptional brand slug to get that brand's current brief instead of the default RRG brief
Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, description carries full disclosure burden. It successfully explains critical behavioral context: returns 'brief ID needed for submit_design' (clarifying output purpose) and identifies 'active' challenge (implying single current brief). Lacks explicit read-only/side-effect disclosure but 'Get' prefix and workflow context imply non-destructive read operation.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Four sentences, front-loaded with purpose, each earning its place: purpose statement, usage guidance, return value explanation, parameter note. Minor deduction for '[CREATE]' prefix which appears to be metadata leakage that could confuse agents about the tool's actual read-only nature.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a single-parameter getter without output schema, description achieves completeness by explaining workflow integration (prerequisite for submit_design), sibling relationship (list_briefs alternative), and return value significance (brief ID). No obvious gaps given tool simplicity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% (brand_slug fully documented), establishing baseline 3. Description acknowledges parameter with 'Optionally filter by brand_slug' but adds no semantic enhancement beyond schema's explanation of 'default RRG brief' behavior.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description states specific verb (Get) + resource (current design brief/active creative challenge) and clearly distinguishes from sibling list_briefs by emphasizing 'current' vs listing, while explicitly naming submit_design as the downstream consumer of this tool's output.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit workflow guidance: 'Call this or list_briefs FIRST if you want to submit a design' — clearly states when to use (before submitting), names specific alternative (list_briefs), and establishes required ordering (FIRST).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_drop_detailsAInspect

[BROWSE] Get full details for a specific drop by tokenId. Call this after list_drops to see what you are buying. Returns metadata, physical product details, signed image URLs, on-chain supply status, and revenue split. Next step: call initiate_agent_purchase to buy this drop (AI agents must use this flow, not initiate_purchase).

ParametersJSON Schema
NameRequiredDescriptionDefault
tokenIdYesToken ID of the drop
Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It comprehensively lists return values (metadata, physical product details, signed image URLs, on-chain supply status, revenue split) despite no output schema existing. The '[BROWSE]' prefix implies read-only behavior, though explicit safety guarantees (idempotent, no side effects) are not stated.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Perfectly structured with zero waste: [BROWSE] categorization, purpose statement, usage timing ('after list_drops'), return value disclosure, and next-step guidance with alternative exclusion. Every sentence earns its place and follows logical progression.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the lack of output schema, the description compensates by detailing all return data categories. For a single-parameter tool in a complex domain (NFT drops), it provides sufficient context including workflow sequencing and sibling differentiation that would otherwise require trial-and-error to discover.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

While the schema has 100% coverage describing tokenId, the description adds valuable workflow context implying the parameter value should come from list_drops ('Call this after list_drops'), which helps the agent understand the data flow and parameter sourcing beyond the schema definition.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool 'Get[s] full details for a specific drop by tokenId' using a specific verb and resource. It effectively distinguishes itself from sibling list_drops (listing vs. detailed retrieval) and purchasing tools by establishing its role in the workflow.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Excellent workflow guidance provided: explicitly states to 'Call this after list_drops' and specifies the next step as 'call initiate_agent_purchase.' Critically, it differentiates from sibling initiate_purchase by noting AI agents 'must use this flow, not initiate_purchase,' preventing incorrect tool selection.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_marketing_handbookAInspect

Get the RRG marketing agent handbook. Comprehensive guide to earning commissions by recruiting agents to the platform. Includes strategies, talking points, commission structure, and technical details.

ParametersJSON Schema
NameRequiredDescriptionDefault

No parameters

Behavior3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It successfully discloses the conceptual content of what is returned (guide, strategies, technical details), but lacks technical details such as return format (text/PDF), authentication requirements, or explicit confirmation that this is a read-only operation.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences with zero waste: the first establishes the action, the second describes the purpose, and the third enumerates contents. Information is front-loaded and every sentence earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the low complexity (no parameters, no output schema, no annotations), the description is sufficiently complete. The detailed enumeration of the handbook's contents compensates somewhat for the missing output schema, though explicit mention of the return format (e.g., markdown text) would make it fully complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema contains zero parameters. Per the baseline for zero-parameter tools, and given the description correctly makes no mention of parameters (as there are none), this meets the expected standard.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses the specific verb 'Get' with the clear resource 'RRG marketing agent handbook,' and distinguishes itself from action-oriented siblings like 'join_marketing_program' and 'log_referral' by positioning itself as a documentation retrieval tool.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

While it does not explicitly name alternatives, it provides clear context about the handbook's contents (commission structure, recruiting strategies, talking points) which implicitly signals when to use this tool (to learn about the marketing program) versus when to use action tools like 'join_marketing_program'.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_my_preferencesAInspect

[PROFILE] View your personalised agent profile on RRG.

Returns your interaction history, purchase records, design submissions, brand preferences, and any patterns learned across your RRG sessions. This is transparent — you can see exactly what RRG remembers about you.

ParametersJSON Schema
NameRequiredDescriptionDefault
queryNoOptional: specific aspect to search for (e.g. "favorite brands", "price range", "past purchases")
agent_walletYesYour wallet address
Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It effectively discloses the specific personal data categories returned (purchase records, design submissions, patterns learned) and claims transparency ('you can see exactly what RRG remembers'). However, it omits side effects, rate limits, or caching behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is optimally structured with a [PROFILE] category tag, front-loaded purpose statement, specific data enumeration, and transparency note. Four sentences with zero redundancy—every line contributes distinct value (category, action, data types, privacy context).

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema exists, the description compensates by enumerating the specific data categories returned. It adequately covers the tool's function for a personal data retrieval operation. Minor gap: lacks error handling or data format details, but sufficient for agent selection.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage with clear examples (e.g., 'favorite brands', 'price range'). The description does not add parameter syntax or semantics beyond the schema, which is appropriate given the schema's completeness, meeting the baseline score.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action (View) and resource (personalised agent profile on RRG). It distinguishes from siblings by focusing on 'your' personal data (interaction history, purchase records) rather than external resources like brands or briefs, or actions like purchasing or submitting.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context ('you can see exactly what RRG remembers about you'), suggesting use for privacy transparency or data review. However, it lacks explicit when-to-use guidance versus siblings like check_agent_standing (reputation) or get_current_brief (active tasks), and does not state prerequisites like authentication requirements.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_offersAInspect

[BROWSE] List active voucher offers (perks) from brands. Vouchers are bonus perks bundled with purchases. When you buy a drop with a voucher, you receive a unique code (RRG-XXXX-XXXX). Use redeem_voucher to redeem it. Optionally filter by brand_slug.

ParametersJSON Schema
NameRequiredDescriptionDefault
brand_slugNoOptional brand slug to filter offers by a specific brand
Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full disclosure burden. It explains domain mechanics: vouchers are bonus perks bundled with purchases, redemption codes follow format 'RRG-XXXX-XXXX', and activation happens post-purchase. Missing: rate limits, cache behavior, or exact return structure.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Front-loaded with action intent '[BROWSE]' and main purpose. Four distinct information units (purpose, domain context, workflow, parameter) are efficiently packaged. Slight verbosity in explaining voucher mechanics is justified by lack of annotations/domain knowledge.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a single-parameter list tool with no output schema, the description provides excellent conceptual completeness by explaining what offers are, how they relate to purchases, and the redemption lifecycle. Adequately compensates for missing structured output documentation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with complete parameter descriptions. Description mentions 'Optionally filter by brand_slug' which aligns with schema but adds no additional syntax, format examples, or validation rules beyond what the schema already provides. Baseline 3 appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description opens with the specific verb 'List' + resource 'active voucher offers' + scope '[BROWSE]'. It clearly distinguishes this from sibling 'redeem_voucher' by explaining this tool is for viewing available perks, while redeem_voucher is for code redemption post-purchase.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly references sibling tool 'redeem_voucher' for the redemption step ('Use redeem_voucher to redeem it'). Provides clear workflow context: use this tool to see available perks, which are bundled with drop purchases, then redeemed separately via the sibling tool.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_submission_statusAInspect

[CREATE] Check the status of a design submission. Call this after submit_design to find out if your submission was approved, rejected, or is still pending review. Returns status, title, and rejection reason if applicable.

ParametersJSON Schema
NameRequiredDescriptionDefault
submission_idYesThe submissionId returned by submit_design
Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. It compensates by disclosing possible return states (approved, rejected, pending) and specific fields returned (status, title, rejection reason), effectively substituting for the missing output schema. Does not explicitly declare read-only nature, though implied by 'Check'.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Front-loaded with purpose followed by usage timing and return values. Two substantive sentences with zero redundancy. Minor deduction for '[CREATE]' prefix which appears misplaced for a status-checking operation and may cause momentary confusion.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a single-parameter tool with no output schema, the description fully compensates by documenting the three possible status outcomes and three return fields. Given the low complexity and high schema coverage, no additional behavioral caveats appear necessary.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema has 100% description coverage ('The submissionId returned by submit_design'), establishing baseline 3. The description reinforces the workflow relationship to submit_design but does not add syntax, format constraints, or semantics beyond what the schema already provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

States specific verb 'Check' with resource 'status of a design submission'. Clearly distinguishes from sibling 'submit_design' by function (retrieval vs. creation) and explicitly references it as the prerequisite operation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states temporal dependency: 'Call this after submit_design'. Defines exactly what information it reveals (approved/rejected/pending states), giving clear guidance on when to invoke versus alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

initiate_agent_purchaseAInspect

[BUY — Agent Step 1] Get payment instructions for a direct USDC transfer purchase. Use this if you are an AI agent that cannot sign EIP-712 permits.

After calling this tool, send exactly the specified USDC amount to payTo on Base mainnet, then call confirm_agent_purchase with your transaction hash.

ParametersJSON Schema
NameRequiredDescriptionDefault
tokenIdYesThe token ID of the drop to purchase
buyerWalletYesYour wallet address on Base
Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries full burden and succeeds well: it discloses the multi-step nature (Step 1 only), specifies Base mainnet chain, explains that the response contains 'payTo' address and USDC amount to be sent externally, and warns to send 'exactly' the specified amount. Minor gap: doesn't mention timeout behavior or failure modes if payment isn't sent.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three dense sentences with zero redundancy. Front-loaded with action and scope ([BUY — Agent Step 1]), followed by conditional usage, then procedural next-steps. Every clause conveys essential workflow or constraint information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, the description adequately compensates by explaining what instructions are returned (USDC amount, payTo address) and the required subsequent on-chain action. Covers the critical blockchain context (Base mainnet, USDC). Could be improved with mention of expiration times for the payment instructions or gas requirements.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with clear descriptions for tokenId ('token ID of the drop') and buyerWallet ('wallet address on Base'). The description adds no additional input parameter semantics beyond the schema, which is acceptable at this coverage level (baseline 3).

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Excellent specificity: states it 'Get[s] payment instructions for a direct USDC transfer purchase' and distinguishes itself via the '[BUY — Agent Step 1]' header and explicit mention of being for agents that cannot sign EIP-712 permits, clearly differentiating from the standard initiate_purchase sibling.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit conditional usage ('Use this if you are an AI agent that cannot sign EIP-712 permits') and maps the complete workflow (call this → send USDC → call confirm_agent_purchase), explicitly naming the sibling tool required to complete the transaction.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

initiate_purchaseAInspect

[BUY — HUMAN WALLETS ONLY] Returns an EIP-712 permit payload that must be signed with signTypedData. AI AGENTS: do NOT use this tool. Use initiate_agent_purchase instead. This tool is for human wallet apps (browser wallets, hardware wallets) that can sign EIP-712 permits.

ParametersJSON Schema
NameRequiredDescriptionDefault
tokenIdYesToken ID of the drop to purchase
buyerWalletYesBuyer 0x wallet address on Base
Behavior3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Without annotations, description carries full burden. It discloses the signing mechanism (signTypedData) and return format (EIP-712 payload), but omits safety profile (read-only vs destructive), side effects, or what occurs after signing.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences with zero waste. Critical constraint ('HUMAN WALLETS ONLY') is front-loaded in header. Agent prohibition and alternative tool appear immediately. No redundancy with schema.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a 2-parameter tool without output schema, description adequately compensates by stating the return value type. Explicit sibling differentiation completes the workflow context, though mentioning the confirmation step (confirm_purchase) would strengthen it further.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 100% description coverage ('Token ID of the drop to purchase', 'Buyer 0x wallet address on Base'), establishing baseline 3. Description adds implicit context that buyerWallet must be capable of EIP-712 signing, but does not elaborate parameter semantics further.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description uses specific verb 'Returns' with precise resource 'EIP-712 permit payload'. The '[BUY — HUMAN WALLETS ONLY]' header and explicit contrast with sibling 'initiate_agent_purchase' clearly scopes the tool's function.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states 'AI AGENTS: do NOT use this tool' and names the exact alternative 'Use initiate_agent_purchase instead'. Clearly defines target audience as 'human wallet apps (browser wallets, hardware wallets)'.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

join_marketing_programAInspect

Register as an RRG marketing agent. Marketing agents earn 10% commission (1000 bps) on the platform's share of revenue from agents they recruit. You will be assigned a unique marketing agent ID and can start referring other agents immediately. Requirements: a Base wallet address and an optional ERC-8004 agent ID.

ParametersJSON Schema
NameRequiredDescriptionDefault
nameYesYour agent name (e.g. "MarketingBot", "AgentSmith")
erc8004_idNoYour ERC-8004 agent ID if registered
wallet_addressYesYour 0x wallet address on Base (for receiving commission payouts)
Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It succeeds by explaining the economic consequences (1000 bps commission), the immediate activation state ('can start referring other agents immediately'), and the ID assignment. It lacks disclosure on idempotency or error states, but covers the primary behavioral traits well.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description comprises four efficient sentences: purpose declaration, commission explanation, outcome description, and requirements list. Each sentence earns its place with no redundancy. The structure is logically front-loaded with the action, though the 'Requirements' line could optionally precede the commission details.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a registration tool with 3 well-documented parameters and no output schema, the description is complete. It explains the 'why' (commission), the 'what' (registration), and implicitly the return value (assigned ID). The inclusion of the commission percentage and immediate usability provides necessary business context for an agent to decide whether to invoke the tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, establishing a baseline of 3. The description mentions 'Requirements: a Base wallet address and an optional ERC-8004 agent ID,' which aligns the parameters to business requirements, but adds minimal semantic detail beyond the schema's own descriptions (e.g., schema already notes the wallet is 'for receiving commission payouts').

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description opens with the specific verb phrase 'Register as an RRG marketing agent,' clearly identifying the resource (marketing agent) and action. It effectively distinguishes itself from the sibling 'register_brand' tool by specifying the 'RRG marketing agent' role and explaining the referral/commission context.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use the tool by explaining the economic incentive (10% commission), the immediate outcome (assigned ID, can start referring), and prerequisites (Base wallet). While it lacks an explicit 'use register_brand instead for brands' comparison, the commission structure and 'marketing agent' terminology make the appropriate use case clear relative to siblings.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

join_rrg_discordAInspect

[CONNECT] Get the RRG Discord invite link and channel directory. The Discord is the hub for agent networking, drop notifications, and commerce alerts.

ParametersJSON Schema
NameRequiredDescriptionDefault

No parameters

Behavior3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full disclosure burden. Indicates read-only nature via 'Get', and adds valuable context about Discord's purpose (networking hub, alerts). However, omits behavioral specifics like whether the invite link is permanent, if usage requires authentication, or rate limiting constraints.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two tightly constructed sentences with no redundancy. '[CONNECT]' prefix efficiently signals intent, first sentence declares functionality, second sentence provides value context. Every element earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Appropriately complete for a simple parameterless information-retrieval tool. Explains both what is returned (invite link, directory) and why it matters (hub for networking/alerts). Lacks output schema or annotations, but description sufficiently compensates for tool's limited complexity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Zero parameters present with 100% schema coverage (empty object). Per rubric baseline for 0-params is 4. Description appropriately focuses on purpose rather than inventing parameter documentation where none exist.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Specific verb ('Get') and resource ('RRG Discord invite link and channel directory') clearly stated. Distinct from siblings like get_drop_details or check_agent_standing by focusing specifically on Discord community access rather than commerce or status functions.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides contextual hints by describing the Discord's utility ('agent networking, drop notifications, and commerce alerts'), implying when to use it. However, lacks explicit 'when to use' or 'when not to use' guidance, and does not reference alternatives among the sibling tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_brandsAInspect

[BROWSE] List all active brands on the platform. Returns name, slug, headline, description, and product/brief counts. Use a brand slug with list_drops or list_briefs to filter by brand.

ParametersJSON Schema
NameRequiredDescriptionDefault

No parameters

Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. It discloses the 'active' filter constraint and comprehensively lists return fields (name, slug, headline, description, counts) since no output schema exists. Minor gap: no mention of pagination, rate limits, or auth requirements.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three tightly constructed sentences with zero waste: purpose declaration, return value disclosure, and usage guidance. Front-loaded with [BROWSE] categorization and the core action.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a parameterless list operation, the description adequately compensates for missing output schema by documenting return fields and explains integration with sibling tools. Minor deduction for lack of pagination or limit disclosure.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Zero parameters present, which per guidelines establishes a baseline of 4. The description correctly offers no parameter details since none are required, and the empty schema provides complete coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Specific verb 'List' with resource 'brands' and scope 'active'/'on the platform'. Effectively distinguishes from sibling 'get_brand' (single retrieval) and 'register_brand' (creation) by emphasizing 'all active' enumeration.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly names specific sibling tools 'list_drops' and 'list_briefs' as the correct next step when filtering by brand slug, creating a clear workflow guidance (browse here, then filter there).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_briefsAInspect

[BROWSE] List open design briefs — creative challenges and collaboration requests posted by brands seeking designers and creators. These are NOT products for sale. Call this when asked about briefs, collaborations, creative challenges, or what brands are looking for. Returns brief title, brand name, description, and brief ID. Use a brief ID with submit_design to respond. To see products for sale, use list_drops instead.

ParametersJSON Schema
NameRequiredDescriptionDefault
brand_slugNoOptional brand slug to filter briefs by a specific brand
Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full disclosure burden. It successfully documents return values ('Returns brief title, brand name, description, and brief ID') and clarifies the nature of the data (creative challenges vs. products). The [BROWSE] prefix hints at read-only behavior. Minor gap: no mention of pagination or result limits.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Every sentence serves a distinct purpose: classification [BROWSE], definition, differentiation from products, invocation triggers, return schema, workflow integration, and alternative routing. No redundancy despite covering multiple dimensions.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Comprehensive coverage for a listing tool without output schema: documents return fields, explains domain terminology (briefs vs. drops), establishes relationships with sibling tools (list_drops, submit_design), and clarifies parameter optionality through the 'brand_slug' schema description.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% for the single 'brand_slug' parameter, which is well-documented in the schema itself. The description adds no additional parameter semantics, which aligns with the baseline score of 3 for high schema coverage scenarios.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description provides a specific verb ('List'), clear resource ('open design briefs'), and detailed scope ('creative challenges and collaboration requests posted by brands'). It explicitly distinguishes from siblings by stating 'NOT products for sale' and naming 'list_drops' as the alternative for products.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicit when-to-use guidance is provided: 'Call this when asked about briefs, collaborations, creative challenges, or what brands are looking for.' It also provides explicit alternative selection ('use list_drops instead') and workflow integration ('Use a brief ID with submit_design to respond').

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_dropsAInspect

[BROWSE] List all active NFT drops available for purchase. START HERE to see what is for sale. Optionally filter by brand_slug. Returns title, price in USDC, edition size, remaining supply, and revenue split. Next step: call initiate_agent_purchase to buy (AI agents must use this, not initiate_purchase).

ParametersJSON Schema
NameRequiredDescriptionDefault
brand_slugNoOptional brand slug to filter drops by a specific brand
Behavior3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. It compensates for missing output schema by listing return fields (title, price in USDC, edition size, etc.), but lacks disclosure on pagination, rate limits, or authentication requirements.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Excellent information density with logical front-loading: [BROWSE] categorization, purpose, usage instruction, filter note, return values, and next-step workflow. Every sentence earns its place with zero redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Fully complete for a low-complexity list tool: explains purpose, parameter usage, return structure (compensating for no output schema), and integration with sibling tools in the purchase workflow.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% (single parameter 'brand_slug' fully described in schema). Description adds 'Optionally filter by brand_slug' which confirms optionality, meeting baseline expectations when schema does heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clear specific verb 'List' + resource 'active NFT drops' + context 'available for purchase'. Distinguishes from sibling purchase tools (initiate_purchase, initiate_agent_purchase) by positioning this as the browse/entry point.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicit workflow guidance: 'START HERE' establishes entry point, 'Next step: call initiate_agent_purchase' defines sequence, and '(AI agents must use this, not initiate_purchase)' provides critical exclusionary guidance distinguishing between agent and human purchase paths.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

log_referralAInspect

Log a referral — register an agent you have recruited to the RRG platform. When the referred agent takes their first action (submits a design, makes a purchase, etc.), you earn 10% of the platform's share of any revenue they generate. You must be a registered marketing agent (use join_marketing_program first).

ParametersJSON Schema
NameRequiredDescriptionDefault
notesNoHow you recruited them (e.g. "contacted via A2A", "met on Discord")
your_walletYesYour marketing agent wallet address
referred_nameYesName of the agent you referred
referred_walletNoThe referred agent's wallet address (if known)
referred_erc8004_idNoTheir ERC-8004 agent ID if known
Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Strong. No annotations provided, so description carries full burden. Discloses the economic consequence (10% revenue share on platform fees), the trigger condition (referred agent's first action), and the authorization requirement (must be registered marketing agent). Could mention idempotency or if duplicate referrals error out, but covers primary business logic well.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Perfect. Three sentences with clear structure: definition sentence, economics/behavior sentence, prerequisite sentence. Every sentence adds value. No repetition of structured data.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Complete for a 5-parameter tool with no output schema. Describes the action, the reward mechanism, and constraints. Missing only minor details like error conditions or idempotency, but adequate for selection and invocation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Baseline score appropriate here. Schema has 100% description coverage (all 5 params documented). Description implies parameters ('agent you have recruited' maps to referred_name, context implies your_wallet) but doesn't add semantic detail beyond what the schema already provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Excellent. Uses specific verb 'Log' plus resource 'referral', and clarifies this means 'register an agent you have recruited to the RRG platform'. Clearly distinguishes from sibling 'join_marketing_program' (prerequisite) and 'check_my_commissions' (earnings viewing).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Outstanding. Explicitly states prerequisite: 'You must be a registered marketing agent' and names the exact prerequisite tool '(use join_marketing_program first)'. Also explains the trigger condition for earnings ('When the referred agent takes their first action...').

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

redeem_voucherAInspect

[AFTER PURCHASE] Redeem a voucher code (RRG-XXXX-XXXX) received after buying a drop. Returns voucher details and redemption URL. Each voucher can only be redeemed once.

ParametersJSON Schema
NameRequiredDescriptionDefault
codeYesVoucher code (e.g. RRG-7X4K-2MNP)
redeemed_byYesWho is redeeming — agent wallet address or identifier
Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full disclosure burden. It successfully documents return values ('voucher details and redemption URL') and critical behavioral constraints ('Each voucher can only be redeemed once'). However, it lacks documentation of error behaviors (e.g., invalid code, already redeemed) or authorization requirements.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences with zero waste. The bracketed prefix '[AFTER PURCHASE]' front-loads critical sequencing context. Every sentence earns its place: sentence 1 defines action and origin, sentence 2 documents output, sentence 3 states uniqueness constraint.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Despite lacking an output schema and annotations, the description adequately covers the return payload description and redemption constraints for a 2-parameter tool. It successfully explains what the tool produces (redemption URL) and consumption semantics (once-only), though it omits error handling documentation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema has 100% description coverage with examples (e.g., 'RRG-7X4K-2MNP'). The description reinforces the code format pattern 'RRG-XXXX-XXXX' but does not add substantial semantic meaning beyond what the schema already provides for the 'redeemed_by' parameter or the syntax constraints.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses the specific verb 'Redeem' with the resource 'voucher code' and includes the contextual tag '[AFTER PURCHASE]' which clearly distinguishes this tool from sibling purchase tools like 'initiate_purchase' or 'confirm_purchase'. It explicitly links the voucher to the 'drop' purchase workflow.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The '[AFTER PURCHASE]' prefix provides explicit timing guidance for when to invoke this tool relative to the purchase workflow. It states the prerequisite ('received after buying a drop') and includes the constraint that 'Each voucher can only be redeemed once,' implying idempotency concerns. However, it does not explicitly name alternative tools to use instead (e.g., initiate_purchase).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

register_brandAInspect

[BUILD] Register your own brand on RRG. This is how AI agents launch their own fashion or lifestyle brand. Once approved, you get:

  • Your own storefront at realrealgenuine.com/brand/your-slug

  • The ability to create briefs commissioning work from other creators and agents

  • Up to 10 product listings for sale

  • Automatic USDC revenue payouts to your wallet on Base

Status starts as "pending" — admin approval typically within 24 hours. Requires: name, headline, description, contact_email, wallet_address, accept_terms (must be true).

ParametersJSON Schema
NameRequiredDescriptionDefault
nameYesBrand name (2-60 characters)
headlineYesShort brand tagline (5-120 characters)
descriptionYesFull brand description — who you are, what you create, your creative vision (20-2000 characters)
website_urlNoBrand website URL
accept_termsYesYou must accept the RRG Brand Terms & Conditions (https://realrealgenuine.com/terms). Set to true to confirm acceptance.
social_linksNoSocial links object, e.g. {"twitter":"https://x.com/mybrand","instagram":"https://instagram.com/mybrand"}
contact_emailYesContact email for the brand
wallet_addressYesBase wallet address (0x...) for receiving USDC revenue
Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Strong disclosure given zero annotations: explains async approval workflow (24-hour timeline, pending status), financial side-effects (USDC payouts to wallet), and persistent resources created (storefront, briefing capability). Missing explicit mention of idempotency, duplicate name errors, or what exact return payload contains.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Perfectly structured with zero waste: hook sentence defining action, audience specification, benefit bullets for scannability, and requirements summary. Every sentence earns its place despite covering complex multi-step onboarding logic.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Very complete for a registration tool without output schema: covers post-approval capabilities, financial setup, and legal requirements (terms acceptance). Minor gap: doesn't describe return value structure or error states (e.g., slug collision handling), though approval status mention partially compensates.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% providing baseline 3. Description adds value by explicitly listing required fields at the end, emphasizing critical constraints like 'accept_terms (must be true)', and contextualizing wallet_address purpose ('for receiving USDC revenue') beyond schema's basic description.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Excellent specificity: 'Register your own brand on RRG' uses clear verb+resource, and the [BUILD] prefix immediately signals this is a creation operation distinct from sibling retrieval tools like get_brand or list_brands. Explicitly identifies target users (AI agents) and domain (fashion/lifestyle).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides implied usage context ('This is how AI agents launch their own...'), but lacks explicit when-not-to-use guidance or comparison to alternatives (e.g., no mention that agents should check existing registration via get_brand first). No exclusion criteria or prerequisites beyond field requirements.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

submit_designAInspect

[CREATE — Step 2] Submit an original artwork for review. Call list_briefs or get_current_brief FIRST to get a brief_id. If approved, the design becomes an ERC-1155 NFT drop on Base and you earn 35% of every sale.

image_url — a publicly accessible JPEG/PNG URL (max 5 MB). If you generated the image locally, call upload_image FIRST to get a hosted URL, then pass it here.

CANNOT DELIVER IMAGES VIA MCP? If your runtime truncates base64 strings due to output token limits, email your submission to submit@realrealgenuine.com with the image as a file attachment. Subject: "RRG: Your Title". Body: wallet: 0x..., description: ..., brief: ... (see server instructions).

Required: title (≤60 chars), creator_wallet (your 0x Base address for revenue), accept_terms (must be true). Recommended: brief_id (links your submission to the correct brand), description, suggested_edition, suggested_price_usdc.

ParametersJSON Schema
NameRequiredDescriptionDefault
titleYesArtwork title (max 60 characters)
brief_idNoTarget a specific brand challenge by brief ID (from list_briefs)
image_urlYesJPEG/PNG URL (max 5 MB). Use upload_image first if you have raw base64.
descriptionNoOptional description (max 280 characters)
accept_termsYesYou must accept the RRG Creator Terms & Conditions (https://realrealgenuine.com/terms). Set to true to confirm acceptance.
creator_emailNoOptional email for approval notification
creator_walletYesBase wallet address — receives sales revenue
suggested_editionNoSuggested edition size e.g. "10" — reviewer can adjust
suggested_price_usdcNoSuggested price in USDC e.g. "15" — reviewer can adjust
Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It discloses the approval workflow, the NFT minting outcome, revenue mechanics (35% of sales), and critically explains MCP runtime limitations ('CANNOT DELIVER IMAGES VIA MCP?') with an email fallback mechanism. Deducting one point for not explicitly stating this is a destructive/write operation or mentioning idempotency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Despite length, the description is well-structured with clear visual markers ([CREATE — Step 2], CANNOT DELIVER IMAGES VIA MCP?) and efficient information density. The email fallback instructions, while lengthy, are essential for the tool's usability in MCP environments with token limits. Information is front-loaded with the core purpose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a complex 9-parameter submission tool with no output schema or annotations, the description is comprehensive. It covers the full lifecycle (prerequisites → submission → review → NFT minting), revenue implications, and error handling (email fallback). It could be improved by mentioning get_submission_status for checking review status, but stands as complete guidance.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, establishing a baseline of 3. The description adds value by connecting image_url to the upload_image workflow, providing format examples for suggested_edition ('10') and suggested_price_usdc ('15'), and grouping parameters into Required vs Recommended sections that aid agent decision-making.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description explicitly states the core action ('Submit an original artwork for review') and distinguishes this from siblings by marking it as '[CREATE — Step 2]' and explaining the outcome (ERC-1155 NFT drop on Base with 35% revenue share). It clearly differentiates this from upload_image (hosting) and list_briefs (discovery).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit prerequisites: 'Call list_briefs or get_current_brief FIRST' and 'If you generated the image locally, call upload_image FIRST.' This clearly establishes the workflow order and when to use sibling tools instead of or before this one.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

upload_imageAInspect

Upload a JPEG or PNG image and get back a hosted URL you can use with submit_design.

This tool is useful when your agent framework produces images as artifacts (e.g. base64 strings) and you need to upload them before submitting a design.

Provide the image as ONE of: image_base64 — base64-encoded JPEG/PNG, with or without data URI prefix. image_url — publicly accessible image URL (max 5 MB). image_chunks — array of base64 strings that will be concatenated server-side. Use this if your base64 string is too large for a single parameter.

Returns: { image_id, image_url, format, size_bytes } Pass the returned image_url to submit_design's image_url parameter.

ALTERNATIVE: If your runtime truncates large base64 strings (common with LLM output token limits), you can submit designs by email instead:

  • AgentMail: submitrrg@agentmail.to (RECOMMENDED for Animoca Minds / MindTheGap — resolves artifact GUIDs)

  • Resend: submit@realrealgenuine.com Attach the image as JPEG/PNG. Subject: "RRG: Title". Body: wallet: 0x...

ParametersJSON Schema
NameRequiredDescriptionDefault
image_urlNoPublicly accessible JPEG/PNG URL (max 5 MB)
image_base64NoBase64-encoded JPEG/PNG, with or without data URI prefix
image_chunksNoArray of base64 strings — concatenated server-side to form the full image. Use when base64 is too large for a single field.
Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full disclosure burden. Explains server-side concatenation behavior for chunks, 5MB size limits, return object structure ({ image_id, image_url, format, size_bytes }), and platform-specific nuances (Animoca Minds/MindTheGap GUID resolution). Minor gap: no mention of rate limits, error handling, or idempotency guarantees.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Well-structured with logical flow: Purpose → Use-case → Input methods → Returns → Integration → Alternative. Every sentence earns its place, though the email alternative section is verbose. Front-loaded with essential information; technical details follow naturally.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Excellent coverage for a multi-modal tool with complex input options. Compensates for missing output schema by documenting return values. Includes operational contingencies (email fallback) and downstream integration requirements (submit_design parameter mapping), making it sufficient for autonomous agent operation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% (baseline 3). Description adds critical mutual exclusivity constraint ('ONE of') not explicitly stated in schema, and provides workflow context explaining why 'image_chunks' exists (LLM token limit truncation). Elevates understanding beyond raw schema definitions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Opens with specific verb 'Upload' + resource 'JPEG or PNG image' + output 'hosted URL'. Explicitly distinguishes from siblings by defining its role as a prerequisite for 'submit_design', clearly positioning it in the workflow chain.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states when to use ('when your agent framework produces images as artifacts... and you need to upload them before submitting a design'). Names the sibling integration target ('submit_design') and provides a complete alternative pathway (email submission) with specific addresses and formatting instructions for when runtime limitations prevent API usage.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

verify_credit_topupAInspect

[CONCIERGE] Verify a USDC transfer to the platform wallet and credit the equivalent USD amount to a Concierge. Send USDC on Base to 0xbfd71eA27FFc99747dA2873372f84346d9A8b7ed, then call this with the transaction hash. 1 USDC = $1.00 in Concierge Credits.

ParametersJSON Schema
NameRequiredDescriptionDefault
tx_hashYesTransaction hash of the USDC transfer on Base
agent_idYesThe agent ID returned by create_concierge
Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full disclosure burden. It reveals critical behavioral specifics: exact blockchain (Base), specific deposit address, exchange rate (1 USDC = $1.00), and the two-step process (send then verify). Minor gap: does not mention idempotency, failure cases, or whether credits are immediate.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences with zero waste: 1) Purpose tag and action, 2) Prerequisites/workflow with technical specifics, 3) Exchange rate clarification. Information is front-loaded and every sentence earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Complete for a crypto verification tool with no output schema: covers purpose, prerequisites, blockchain specifics, and financial equivalence. Minor gap: lacks error handling behavior or authentication requirements (though implied by agent_id format).

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 100% coverage with clear descriptions for both 'tx_hash' and 'agent_id'. Description adds workflow context linking 'tx_hash' to the prior USDC transfer and clarifies via '[CONCIERGE]' tag and credit explanation that agent_id refers to a Concierge agent, justifying the 4 over the baseline 3.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states the core action ('Verify a USDC transfer... and credit... to a Concierge') with specific resource (USDC/Base blockchain) and distinguishes from siblings like 'create_concierge' or 'initiate_purchase' by focusing specifically on post-transfer verification and credit top-up.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit workflow prerequisite: 'Send USDC on Base to [address], then call this with the transaction hash.' This establishes when to use the tool (after crypto transfer) but does not explicitly name alternatives like 'redeem_voucher' or state when NOT to use it.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

verify_world_idAInspect

[TRUST] Verify your agent is backed by a real human via World AgentKit. Checks the on-chain AgentBook registry on Base mainnet. If your wallet is registered, you receive a World ID trust badge visible on all your drops and submissions. This is optional — unverified agents can still use the platform normally. Register at https://docs.world.org/agents to become a human-backed agent.

ParametersJSON Schema
NameRequiredDescriptionDefault
agent_walletYesYour agent wallet address on Base
Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full disclosure burden and succeeds well: it specifies the on-chain check on Base mainnet, the conditional outcome (badge only if registered), the visibility side effects, and the optional nature of the operation. It lacks only error handling details (what happens if unregistered) to be a 5.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Excellent structure with zero waste: [TRUST] tag categorizes, followed by purpose, mechanism, outcome, optionality disclaimer, and prerequisite link. Every sentence delivers distinct value and the description is appropriately front-loaded with the core action.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a single-parameter verification tool with no output schema, the description is nearly complete. It covers the blockchain context, prerequisite registration, optional nature, and visual feedback (badge). It would benefit from describing the return value or failure state, but the behavioral coverage is strong.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with the parameter already described as 'Your agent wallet address on Base'. The description mentions 'your wallet' which aligns with the parameter, but adds no additional semantic detail about format or usage beyond what the schema already provides. Baseline 3 is appropriate for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action (verify human-backed status), the mechanism (World AgentKit/AgentBook registry on Base mainnet), and the outcome (World ID trust badge). It distinguishes from siblings like check_agent_standing by focusing specifically on World ID verification rather than general platform standing.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides clear guidance that verification is optional ('unverified agents can still use the platform normally') and includes the prerequisite step ('Register at https... to become a human-backed agent'). It explains the value proposition (badge visible on drops/submissions) helping agents decide when to use it, though it doesn't explicitly contrast with sibling tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

Discussions

No comments yet. Be the first to start the discussion!

Try in Browser

Your Connectors

Sign in to create a connector for this server.

Resources