Glama

Server Details

Agent-native fashion marketplace. Browse, design, purchase NFTs, launch brands on Base.

Status: Healthy
Transport: Streamable HTTP
Repository: richardjhobbs/rrg
GitHub Stars: 0


Tool Definition Quality

Score is being calculated. Check back soon.

Available Tools

29 tools
check_agent_standing (Grade A)

[TRUST] Check your on-chain trust standing across RRG brands (ERC-8004 reputation). Trust levels: standard (new) → trusted (3+ purchases) → premium (10+ purchases). Higher trust unlocks better voucher offers and priority access.
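The trust progression the description states (standard → trusted at 3+ purchases → premium at 10+) can be sketched as a small rule. The function name and shape here are illustrative, not part of the RRG server:

```python
def trust_level(purchase_count: int) -> str:
    """Map a purchase count to the RRG trust level, using the
    thresholds stated in the tool description."""
    if purchase_count >= 10:
        return "premium"
    if purchase_count >= 3:
        return "trusted"
    return "standard"
```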

Parameters (JSON Schema)

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| agent_wallet | Yes | Agent wallet address on Base | |

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It adds valuable domain context (trust level thresholds: 3+ and 10+ purchases, progression path) but fails to explicitly declare operational traits like 'read-only' or whether the check requires on-chain transaction fees.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three well-structured sentences with zero waste. The [TRUST] tag categorizes immediately, followed by the action, the state machine explanation (levels/thresholds), and the utility/benefits. Every sentence earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the single parameter, lack of output schema, and no annotations, the description adequately compensates by explaining the business logic of trust levels and their practically relevant benefits. Missing only explicit read-only confirmation for full completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% ('Agent wallet address on Base'), so the baseline is 3. The description implies the parameter represents 'your' wallet but doesn't add syntax details, validation rules, or semantic meaning beyond what the schema already provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool 'Check[s] your on-chain trust standing' with specific domain context (ERC-8004 reputation, RRG brands). It distinguishes itself from siblings like 'check_my_commissions' by focusing on reputation/trust rather than earnings, though it doesn't explicitly name alternatives.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use the tool by explaining the value hierarchy ('Higher trust unlocks better voucher offers'). This implies usage timing (e.g., before redeeming vouchers) but lacks explicit 'use this when' or 'instead of X' directives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

check_my_commissions (Grade A)

Check your marketing commission balance and history. Shows total earned, pending payouts, and recent conversions. You must be a registered marketing agent.

Parameters (JSON Schema)

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| wallet_address | Yes | Your marketing agent wallet address | |

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. Adequately discloses read-only nature and return values ('total earned, pending payouts, and recent conversions'), but omits details on pagination for history, error behavior for non-agents, or rate limits.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three efficient sentences: purpose statement, output specification, and prerequisite. Every sentence earns its place with no redundancy. Front-loaded with clear action verb.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Appropriate for low complexity (1 param, simple read). Compensates for missing output schema by describing return fields (earned, pending, conversions). Prerequisite coverage makes it complete enough for agent selection, though explicit return format (object vs array) would improve further.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with the single 'wallet_address' parameter fully documented in the schema. Description implies the parameter context through 'your marketing commission' and 'registered marketing agent' but does not add syntax/format details beyond the schema's baseline.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clear verb ('Check') with specific resource ('marketing commission balance and history'). Distinguishes from siblings like check_agent_standing by focusing specifically on financial metrics (earned, pending payouts, conversions) rather than general standing or purchase operations.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states prerequisite ('You must be a registered marketing agent'), providing clear auth context and implicit when-not-to-use guidance. However, lacks explicit differentiation from check_agent_standing regarding when to use commission checks versus general standing inquiries.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

confirm_agent_purchase (Grade A)

[BUY — Agent Step 2] Confirm your USDC payment and claim the drop. Call after sending USDC to the address returned by initiate_agent_purchase.

Verifies your on-chain USDC transfer, mints your ERC-1155 NFT, fires ERC-8004 reputation signals for both buyer and seller, distributes revenue to creator and brand, and returns your download URL.

Include buyerAgentId (your ERC-8004 agent ID) for an agent-to-agent trust signal on-chain.
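As a rough sketch, the Step 2 arguments can be assembled from the parameter names listed for this tool. The helper function itself is hypothetical; only the key names come from the schema:

```python
def confirm_agent_purchase_payload(tx_hash, token_id, buyer_wallet,
                                   buyer_agent_id=None):
    """Build the arguments dict for confirm_agent_purchase,
    called after the USDC transfer from Step 1 has been sent."""
    payload = {
        "txHash": tx_hash,
        "tokenId": token_id,
        "buyerWallet": buyer_wallet,
    }
    if buyer_agent_id is not None:
        # Optional: include for the agent-to-agent trust signal.
        payload["buyerAgentId"] = buyer_agent_id
    return payload
```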

Parameters (JSON Schema)

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| txHash | Yes | Your USDC transfer transaction hash on Base | |
| tokenId | Yes | The drop token ID | |
| buyerEmail | No | Optional email for delivery confirmation | |
| buyerWallet | Yes | Your wallet address | |
| buyerAgentId | No | Your ERC-8004 agent ID for on-chain reputation signals (e.g. 17666) | |

Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full disclosure burden effectively. It comprehensively documents state changes (mints ERC-1155 NFT, fires ERC-8004 signals, distributes revenue), input validation ('Verifies your on-chain USDC transfer'), and output ('returns your download URL'). Covers complete transaction lifecycle including side effects.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Well-structured with front-loaded workflow identifier ([BUY — Agent Step 2]), followed by action summary, prerequisite, mechanical details, and optional parameter guidance. Four sentences cover complex blockchain behavior without redundancy, though the third sentence is dense with five concurrent actions listed.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Despite no output schema, description compensates by explicitly stating the return value ('returns your download URL'). Adequately covers 5-parameter complexity including optional email delivery and agent ID. Would benefit from mentioning failure modes or idempotency for blockchain transaction safety, but otherwise complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, establishing baseline 3. Description adds significant value by contextualizing buyerAgentId purpose ('for an agent-to-agent trust signal on-chain') beyond the schema's generic 'reputation signals' description. Also implicitly clarifies txHash purpose through the verification workflow narrative.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Excellent specificity with clear verb-resource combination ('Confirm your USDC payment and claim the drop'). Explicitly distinguishes from sibling 'confirm_purchase' by highlighting ERC-8004 agent reputation signals and agent-to-agent trust features. The '[BUY — Agent Step 2]' prefix immediately signals workflow position.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit prerequisite instruction ('Call after sending USDC to the address returned by initiate_agent_purchase') and clear sequencing relative to siblings. Establishes mandatory dependency on initiate_agent_purchase and distinguishes this as the agent-specific pathway versus generic purchase flows.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

confirm_purchase (Grade A)

[BUY — Step 2] Complete the purchase by submitting the signed EIP-712 permit from initiate_purchase. Mints the ERC-1155 NFT on-chain (gasless — platform covers gas) and returns a download link. For physical products, you MUST include shipping address fields. The response includes revenue split details.
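The conditional shipping requirement can be checked client-side before calling the tool. A minimal sketch, assuming the five fields annotated "required for physical products" in the parameter list are exactly the mandatory set:

```python
REQUIRED_SHIPPING_FIELDS = (
    "shipping_name", "shipping_address_line1", "shipping_city",
    "shipping_country", "shipping_postal_code",
)

def validate_confirm_purchase(args, is_physical):
    """Return the names of any missing confirm_purchase fields.
    Shipping fields are only mandatory for physical products,
    as the description states."""
    required = ["tokenId", "deadline", "signature", "buyerWallet"]
    if is_physical:
        required += list(REQUIRED_SHIPPING_FIELDS)
    return [f for f in required if not args.get(f)]
```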

Parameters (JSON Schema)

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| tokenId | Yes | Token ID of the drop | |
| deadline | Yes | Permit deadline (Unix timestamp string from initiate_purchase) | |
| signature | Yes | EIP-712 signature from wallet.signTypedData | |
| buyerEmail | No | Optional email for file delivery | |
| buyerWallet | Yes | Buyer 0x wallet address | |
| shipping_city | No | City (required for physical products) | |
| shipping_name | No | Recipient name (required for physical products) | |
| shipping_phone | No | Phone number for shipping | |
| shipping_state | No | State or province | |
| shipping_country | No | Country (required for physical products) | |
| shipping_postal_code | No | Postal/ZIP code (required for physical products) | |
| shipping_address_line1 | No | Street address line 1 (required for physical products) | |
| shipping_address_line2 | No | Street address line 2 | |

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Good behavioral disclosure given zero annotations: reveals gasless execution ('platform covers gas'), on-chain minting nature, and return value details ('download link', 'revenue split details'). Could improve by mentioning failure modes or deadline expiration behavior, but covers the critical execution characteristics well.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Every sentence earns its place. Front-loaded with workflow step identifier. Three dense sentences covering: (1) workflow position and action, (2) technical execution details and return value, (3) conditional requirements and additional response data. No redundancy or filler.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 13 parameters, no annotations, and no output schema, the description admirably covers the multi-step workflow dependency, gas behavior, conditional physical product logic, and return value structure. Only minor gap is absence of error handling documentation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% (baseline 3), but description adds valuable workflow context: clarifies that 'signature' comes from 'wallet.signTypedData' in the previous step, and emphasizes the conditional requirement logic for physical product shipping fields that the schema JSON cannot enforce structurally.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Excellent specificity: identifies this as '[BUY — Step 2]' in a workflow sequence, uses precise verbs ('Complete', 'Mints'), and distinguishes from sibling 'initiate_purchase' by stating it submits the permit FROM that tool. Clearly defines the core action (minting ERC-1155 NFT) and resource context.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Strong workflow guidance: positions tool as Step 2 requiring the signed EIP-712 permit from 'initiate_purchase', and explicitly states the conditional requirement for physical products ('MUST include shipping address fields'). Deducted one point because it does not clarify when to use 'confirm_purchase' vs sibling 'confirm_agent_purchase'.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

create_concierge (Grade A)

[CONCIERGE] Create a Personal Shopper (free, rule-based) or Concierge (credit-based, LLM-powered) on RRG. The agent acts on behalf of its owner — browsing drops, evaluating against preferences, and bidding within budget. Returns the agent ID and session details. The created agent can be managed via the dashboard at realrealgenuine.com/agents/dashboard.
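A hedged sketch of assembling the creation arguments, with defaults mirroring the schema (tier "basic" is the free Personal Shopper; "pro" is the credit-based Concierge). The builder function is illustrative; only the key names and the tier enum come from the schema:

```python
def create_concierge_args(name, email, wallet_address,
                          tier="basic", **options):
    """Assemble arguments for create_concierge, rejecting
    tier values outside the documented enum."""
    if tier not in ("basic", "pro"):
        raise ValueError("tier must be 'basic' or 'pro'")
    args = {"name": name, "email": email,
            "wallet_address": wallet_address, "tier": tier}
    # Optional keys such as style_tags, bid_aggression, or
    # budget_ceiling_usdc pass through unchanged.
    args.update(options)
    return args
```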

Parameters (JSON Schema)

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| name | Yes | Name for the agent (e.g. "StyleHunter", "LuxFinder") | |
| tier | No | "basic" = Personal Shopper (free, rule-based). "pro" = Concierge (credit-based, LLM-powered, learns over time). | basic |
| email | Yes | Owner email address | |
| style_tags | No | Fashion style preferences: streetwear, luxury, vintage, sneakers, etc. | |
| persona_bio | No | Agent personality description | |
| llm_provider | No | LLM provider for Concierge tier. Claude (Anthropic) or DeepSeek. | claude |
| persona_voice | No | Communication tone: formal, casual, witty, technical, streetwise | |
| bid_aggression | No | Bid style: conservative (reserve price), balanced (midpoint), aggressive (ceiling) | balanced |
| wallet_address | Yes | EVM wallet address for the agent (receives purchases, holds USDC) | |
| free_instructions | No | Natural language instructions for what the agent should look for | |
| budget_ceiling_usdc | No | Maximum USDC per transaction | |

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full disclosure burden and succeeds: it explains the agent's autonomous behaviors (browsing, evaluating, bidding), distinguishes cost models (free vs credit-based), and explicitly states return values (agent ID and session details) despite no output schema being present.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Four well-structured sentences with zero waste: tier distinction upfront, behavioral mechanics, return value disclosure, and management location. Every sentence advances the agent's understanding of what the tool does and produces.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given high schema richness (100% coverage, detailed enum descriptions for tier/bid_aggression/llm_provider) and 11 parameters, the description appropriately covers creation purpose, agent behavior, and post-creation management. Minor gap: does not specify credit consumption timing or error conditions for invalid wallet addresses.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, establishing a baseline of 3. The description adds value by providing narrative context that unifies parameters—explaining how style_tags, bid_aggression, and budget_ceiling_usdc work together in the agent's decision-making loop ('evaluating against preferences, and bidding within budget'), which aids the LLM in parameter selection.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description explicitly states 'Create a Personal Shopper (free, rule-based) or Concierge (credit-based, LLM-powered)' with specific verbs and resource types. It clearly distinguishes this creation tool from sibling check/query tools like check_concierge_status or get_current_brief by stating it returns the agent ID and can be managed via dashboard.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides clear context for selecting between 'basic' (free, rule-based) and 'pro' (credit-based, LLM-powered) tiers, implying when to choose each. While it lacks explicit 'do not use if' exclusions, the behavioral explanation ('acts on behalf of its owner—browsing drops, evaluating against preferences, and bidding within budget') effectively signals appropriate use cases.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_agent_pass (Grade A)

[MEMBERSHIP] Get your RRG Agent Pass — Phase 1 founding membership.

The RRG Agent Pass costs $0.10 USDC and gives you:

- $0.50 in purchase credits (5 × $0.10) redeemable on any current or future RRG brand drop
- Priority access and early updates when Phase 2 opens
- Phase 2 brings: additional brand partnerships, bulk discount tiers, allocation priority on physical releases

Limited to 500 passes — first come, first served. Max 5 per wallet.

Returns payment instructions. Send USDC, then call confirm_agent_purchase with your txHash.
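The stated limits (500 passes total, max 5 per wallet, $0.10 each) can be checked before sending USDC. A sketch under the assumption that those numbers are the only constraints:

```python
PASS_PRICE_USDC = 0.10
MAX_PER_WALLET = 5
TOTAL_SUPPLY = 500

def can_buy_pass(wallet_passes, passes_sold, quantity=1):
    """Check the Agent Pass limits from the description:
    500 total supply, at most 5 per wallet."""
    return (passes_sold + quantity <= TOTAL_SUPPLY
            and wallet_passes + quantity <= MAX_PER_WALLET)
```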

Parameters (JSON Schema)

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| buyerWallet | Yes | Your wallet address on Base | |

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries full disclosure burden and succeeds: states exact cost ($0.10 USDC), scarcity constraints (500 limit, 5 per wallet), return format (payment instructions), and side effects (receiving $0.50 credits, Phase 2 access). Lacks only error-state disclosure (e.g., insufficient funds).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Well-structured with clear sections: product header, benefit bullets, scarcity constraints, and usage instructions. Front-loaded [MEMBERSHIP] tag aids scanning. Slightly verbose with marketing language ('Phase 1 founding membership'), but justified given the financial nature and lack of annotations explaining business logic.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequate for a multi-step financial transaction tool without output schema: explains what gets returned (payment instructions), the complete purchase workflow (this → payment → confirmation), and deliverables (credits/access). Missing edge case documentation (e.g., sold-out states, duplicate requests).

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema has 100% description coverage ('Your wallet address on Base'), establishing baseline 3. Description adds semantic context by mentioning 'Max 5 per wallet,' implying the wallet parameter enforces purchase limits, but provides no additional format or enumeration details beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly identifies the resource (RRG Agent Pass membership) and action (returns payment instructions to obtain it), and distinguishes it from generic purchase tools via specific product details ($0.10 cost, Phase 1, credits). However, the tool name 'get_' implies retrieval while the description clarifies it's actually initiating a purchase, which requires careful reading.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit workflow guidance: 'Returns payment instructions. Send USDC, then call confirm_agent_purchase with your txHash.' This establishes clear sequencing with its sibling tool. Minor gap: doesn't explicitly contrast with 'initiate_agent_purchase' to clarify when to use each purchase pathway.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_brand (Grade A)

[BROWSE] Get full details for a specific brand including its profile, open briefs, and purchasable drops. Provide a brand_slug from list_brands.

Parameters (JSON Schema)

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| brand_slug | Yes | Brand slug (e.g. "rrg", "my-brand") | |

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. The '[BROWSE]' prefix signals read-only behavior, and it discloses what data is returned (profile, briefs, drops). However, lacks error behavior (404 if slug invalid?), response format, or size limits. Adequate but not rich behavioral disclosure for an unannotated tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, zero waste. First sentence front-loads purpose and return value proposition. Second sentence provides actionable parameter guidance. '[BROWSE]' tag efficiently signals intent without verbosity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Appropriate for a single-parameter lookup tool. Compensates for missing output_schema by enumerating returned data categories (profile, briefs, drops). Minor gap: doesn't describe error responses or mention if the tool returns null vs error for invalid slugs.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 100% coverage with description 'Brand slug (e.g. ...' establishing baseline 3. Description adds significant cross-tool semantic context: 'from list_brands' specifies the authoritative source for this value, preventing the agent from guessing or hallucinating slugs.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Excellent specificity: verb 'Get' + resource 'brand' + scope 'full details including profile, open briefs, and purchasable drops'. Clearly distinguishes from sibling list_brands (this gets one specific brand vs listing) and from get_drop_details (this gets brand context inclusive of drops vs drop-centric details).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Strong guidance: 'Provide a brand_slug from list_brands' explicitly directs users to the sibling discovery tool. Clear implied workflow (discover via list_brands, then get details). Minor gap: doesn't explicitly state when not to use it (e.g., prefer get_drop_details for drop-specific metadata only).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_concierge_status (Grade A)

[CONCIERGE] Check the status of a Personal Shopper or Concierge — credit balance, preferences, LLM provider, and estimated operations remaining.
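The schema's either/or lookup (agent_id preferred, wallet_address as fallback) can be sketched as a small argument builder. The function is hypothetical; the precedence rule comes from the parameter descriptions:

```python
def concierge_status_args(agent_id=None, wallet_address=None):
    """Build arguments for get_concierge_status. agent_id takes
    precedence; wallet_address is the fallback lookup key."""
    if agent_id:
        return {"agent_id": agent_id}
    if wallet_address:
        return {"wallet_address": wallet_address}
    raise ValueError("provide agent_id or wallet_address")
```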

Parameters (JSON Schema)

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| agent_id | No | Agent ID. If omitted, looks up by wallet_address. | |
| wallet_address | No | Wallet address to look up. Used if agent_id is not provided. | |

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. Discloses returned data fields clearly, implying read-only behavior via 'Check'. However, lacks details on error conditions (e.g., not found), rate limits, or authentication requirements.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence, front-loaded with action and resource. The bracketed category tag efficiently namespaces the tool. No redundant words; every element earns its place by conveying scope or return value.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, description adequately compensates by listing specific return fields. With 100% schema coverage and simple parameter structure, the tool is well-described, though error handling or rate limit context would elevate this further.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with clear descriptions of both parameters and their mutual exclusivity. Description adds the '[CONCIERGE]' domain tag but does not significantly expand on parameter semantics beyond what the schema already documents (baseline 3 for high coverage).

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Specific verb 'Check' with resource 'status of a Personal Shopper or Concierge'. Enumerates exact data returned (credit balance, preferences, LLM provider, operations remaining), distinguishing it from sibling 'check_agent_standing' and 'create_concierge'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Implies usage by describing return values and lookup methods (agent_id vs wallet_address), but lacks explicit when-to-use guidance contrasting with siblings like 'check_agent_standing' or prerequisites for access.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_current_brief

[CREATE] Get the current design brief — the active creative challenge. Call this or list_briefs FIRST if you want to submit a design. Returns brief ID needed for submit_design. Optionally filter by brand_slug.

Parameters (JSON Schema)
brand_slug (optional): Brand slug to get that brand's current brief instead of the default RRG brief

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It compensates partially by stating the return value (brief ID) and noting the 'active' nature of the challenge, but lacks explicit disclosure of read-only safety, idempotency, or error conditions when no brief exists.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Efficient four-sentence structure front-loaded with purpose. Each sentence earns its place: purpose, workflow guidance, return value, and parameter note. Minor deduction for the cryptic '[CREATE]' prefix which could confuse without context.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Strong compensation for missing output schema and annotations by explicitly documenting the return value (brief ID) and workflow integration. Would benefit from noting error behavior when no current brief exists, but adequately complete for a single-parameter lookup tool.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% (brand_slug fully documented), establishing baseline 3. The description confirms optionality ('Optionally filter by') but adds no additional semantic detail like format examples or validation constraints beyond the schema.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Excellent specificity: 'Get the current design brief — the active creative challenge' uses clear verb+resource and distinguishes from sibling list_briefs by emphasizing 'current' and 'active' (singular vs. plural listing). The scope is precisely defined.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Exceptional workflow guidance: explicitly states 'Call this or list_briefs FIRST if you want to submit a design' and clarifies the dependency chain (returns brief ID needed for submit_design). Names both alternatives and prerequisites.

get_drop_details

[BROWSE] Get full details for a specific drop by tokenId. Call this after list_drops to see what you are buying. Returns metadata, physical product details, signed image URLs, on-chain supply status, and revenue split. Next step: call initiate_agent_purchase to buy this drop (AI agents must use this flow, not initiate_purchase).

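The agent-vs-human routing rule stated above can be captured in a small helper. This is an illustrative sketch only, not part of the RRG API; the function name is invented here:

```python
def purchase_flow_for(is_ai_agent: bool) -> str:
    """Pick the purchase tool per the rule above: AI agents must use
    the direct USDC-transfer flow, human wallets the EIP-712 flow."""
    return "initiate_agent_purchase" if is_ai_agent else "initiate_purchase"
```
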
Parameters (JSON Schema)
tokenId (required): Token ID of the drop

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full disclosure burden. It compensates by detailing return contents ('metadata, physical product details, signed image URLs, on-chain supply status, and revenue split') and workflow constraints specific to agent purchasing flows.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Four sentences with zero waste: [BROWSE] tag categorizes, sentence 1 states purpose, sentence 2 establishes context, sentence 3 documents returns, sentence 4 specifies next action. Front-loaded and appropriately dense.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Despite no output schema, the description comprehensively lists return data types. Given the single parameter and clear workflow integration, it adequately covers the complexity (NFT drops with revenue splits), though it could note error conditions for a perfect score.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% ('Token ID of the drop' is fully documented). The description references the parameter ('by tokenId') but adds minimal semantic meaning beyond the schema itself, meeting the baseline expectation for high-coverage schemas.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a specific verb-resource pattern ('Get full details for a specific drop'), includes scope ('by tokenId'), and effectively distinguishes from siblings by establishing the workflow sequence: 'Call this after list_drops' and explicitly contrasting 'initiate_agent_purchase' vs 'initiate_purchase'.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit temporal guidance ('Call this after list_drops'), clear next-step instructions ('Next step: call initiate_agent_purchase'), and negative constraints ('AI agents must use this flow, not initiate_purchase'), creating a complete decision framework for tool selection.

get_marketing_handbook

Get the RRG marketing agent handbook. Comprehensive guide to earning commissions by recruiting agents to the platform. Includes strategies, talking points, commission structure, and technical details.

Parameters (JSON Schema)
No parameters

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full disclosure burden. It adds valuable context about what the handbook contains (strategies, talking points, technical details), which helps predict the output scope. However, it omits side effects, auth requirements, rate limits, or return format (critical gap given no output schema exists).

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three well-structured sentences with zero waste. Front-loaded with the action ('Get...'), followed by purpose ('guide to earning commissions'), and specific contents. Every sentence earns its place by adding distinct information.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Appropriately complete for a simple retrieval tool with no parameters. The content enumeration (commission structure, strategies, technical details) provides sufficient context for the handbook's utility. Minor gap: lacking indication of return format (text, PDF link, structured data) given no output schema exists.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Tool accepts zero parameters with 100% schema coverage of the empty object. Baseline score of 4 applies per rubric. The description appropriately focuses on output content rather than inventing parameter documentation.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a specific verb ('Get') with a specific resource ('RRG marketing agent handbook') and clearly distinguishes from siblings like 'join_marketing_program' (which enrolls users) versus this tool which retrieves documentation. The second and third sentences further clarify the handbook's scope (recruiting strategies, commission structure).

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context through content description (use when learning about commission recruiting), but lacks explicit when-to-use guidance or contrast with alternatives like 'check_my_commissions' or 'join_marketing_program'. No explicit prerequisites or exclusions stated.

get_my_preferences

[PROFILE] View your personalised agent profile on RRG.

Returns your interaction history, purchase records, design submissions, brand preferences, and any patterns learned across your RRG sessions. This is transparent — you can see exactly what RRG remembers about you.

Parameters (JSON Schema)
query (optional): Specific aspect to search for (e.g. "favorite brands", "price range", "past purchases")
agent_wallet (required): Your wallet address

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It compensates well by detailing the exact data categories returned (serving as a proxy for the missing output schema) and noting the transparency aspect ('you can see exactly what RRG remembers'). It does not explicitly state safety properties (read-only, no side effects), though this is implied by the verb choice.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is exactly three sentences with zero waste. The '[PROFILE]' tag front-loads categorization, the first sentence states the core action, the second enumerates return values, and the third provides privacy context. Every sentence earns its place with no redundancy.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple getter tool with 2 parameters and no output schema, the description is nearly complete. It compensates for the missing output schema by listing return fields, covers the privacy/transparency angle, and documents parameters via the schema. A minor gap is the lack of explicit statement that this is a safe, non-destructive read operation.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage ('Your wallet address', detailed query examples), establishing a baseline of 3. The description itself does not add additional semantic context for the parameters (e.g., it does not explain the wallet format or query syntax beyond the schema), but it does not need to given the comprehensive schema documentation.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses specific verbs ('View', 'Returns') and resources ('personalised agent profile'), and explicitly lists the data types returned (interaction history, purchase records, design submissions, brand preferences, patterns learned). This clearly distinguishes it from siblings like check_agent_standing or check_my_commissions which focus on status/commissions rather than personal data history.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage through the phrase 'View your personalised agent profile' and by listing what data is returned, suggesting when the user wants to see their own data. However, it lacks explicit 'when to use' guidance or named alternatives (e.g., when to use this vs check_agent_standing).

get_offers

[BROWSE] List active voucher offers (perks) from brands. Vouchers are bonus perks bundled with purchases. When you buy a drop with a voucher, you receive a unique code (RRG-XXXX-XXXX). Use redeem_voucher to redeem it. Optionally filter by brand_slug.

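The RRG-XXXX-XXXX code format lends itself to a quick client-side sanity check. A minimal sketch, assuming the X groups are uppercase alphanumerics (the description does not specify the character set):

```python
import re

# Assumed pattern for voucher codes like RRG-AB12-CD34; the exact
# character set of the X groups is not documented above.
VOUCHER_RE = re.compile(r"RRG-[A-Z0-9]{4}-[A-Z0-9]{4}")

def looks_like_voucher_code(code: str) -> bool:
    """True if the whole string matches the assumed voucher format."""
    return VOUCHER_RE.fullmatch(code) is not None
```
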
Parameters (JSON Schema)
brand_slug (optional): Brand slug to filter offers by a specific brand

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, description carries full disclosure burden. It successfully explains domain behavior: voucher code format (RRG-XXXX-XXXX), that vouchers are 'bundled with purchases,' and redemption workflow. However, lacks operational details like pagination, caching, or error conditions that annotations would typically cover.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Four sentences logically ordered: action definition → domain explanation (vouchers) → workflow integration (codes/redemption) → parameter note. Each sentence earns its place by explaining unique domain concepts. Slightly dense but justified by complexity of voucher ecosystem.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 1 optional parameter and no output schema, description adequately covers the domain complexity. Explains what vouchers are, their relationship to purchases, and the redemption lifecycle. Would benefit from mentioning output structure or pagination, but acceptable completeness for a listing tool.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 100% description coverage for its single parameter ('Optional brand slug...'). Description adds minimal semantic value beyond schema—merely confirming 'Optionally filter by brand_slug'—which meets the baseline 3 for well-documented schemas.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Excellent specificity: verb 'List' + resource 'active voucher offers/perks' + source 'from brands'. The '[BROWSE]' prefix signals read intent. Distinguishes from sibling redeem_voucher by positioning this as the discovery step before redemption.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides clear workflow context: 'When you buy a drop... you receive a unique code... Use redeem_voucher to redeem it.' Explicitly names sibling tool redeem_voucher as the next step. Lacks explicit 'when not to use' (e.g., vs. list_brands), but the voucher-specific context effectively guides selection.

get_submission_status

[CREATE] Check the status of a design submission. Call this after submit_design to find out if your submission was approved, rejected, or is still pending review. Returns status, title, and rejection reason if applicable.

Parameters (JSON Schema)
submission_id (required): The submissionId returned by submit_design

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. It successfully discloses return values (status, title, rejection reason) and enumerates possible states (approved, rejected, pending). However, lacks detail on error behavior (e.g., invalid submission_id) or idempotency characteristics.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three tightly focused sentences covering purpose, usage timing, and return values. Each earns its place. Minor deduction for the unexplained '[CREATE]' prefix, which appears to be a category tag that does not match the read-only nature of the operation.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequate for a single-parameter status tool. Since no output schema exists, description compensates by enumerating returned fields and possible values. Missing only error-state documentation (e.g., 'submission not found' scenarios).

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 100% coverage with clear description ('The submissionId returned by submit_design'), establishing baseline 3. Description adds implicit semantic link via workflow instruction ('after submit_design'), reinforcing the parameter's relationship to the sibling tool's output.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clear specific verb ('Check') + resource ('status of a design submission'). Explicitly distinguishes from sibling tool 'submit_design' by positioning this as the post-submission status check versus the initial submission action.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicit workflow guidance: 'Call this after submit_design' establishes clear sequencing. Also specifies the possible outcome states (approved/rejected/pending), helping the agent understand when this tool provides value.

initiate_agent_purchase

[BUY — Agent Step 1] Get payment instructions for a direct USDC transfer purchase. Use this if you are an AI agent that cannot sign EIP-712 permits.

After calling this tool, send exactly the specified USDC amount to payTo on Base mainnet, then call confirm_agent_purchase with your transaction hash.

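The two steps above can be sketched as follows. The `call_tool` and `send_usdc` helpers and the `payTo`/`amountUSDC` field names are hypothetical, not confirmed RRG response fields; the 6-decimal conversion reflects USDC's standard token decimals:

```python
from decimal import Decimal

USDC_DECIMALS = 6  # USDC is a 6-decimal ERC-20

def usdc_to_base_units(amount: str) -> int:
    """Convert a human-readable USDC amount (e.g. "12.50") into the
    integer base units an on-chain transfer expects."""
    scaled = Decimal(amount).scaleb(USDC_DECIMALS)
    if scaled != scaled.to_integral_value():
        raise ValueError(f"{amount} has more precision than USDC supports")
    return int(scaled)

# Hypothetical flow, assuming the tool returns a payTo address and amount:
# instr = call_tool("initiate_agent_purchase", tokenId=7, buyerWallet=me)
# tx_hash = send_usdc(instr["payTo"], usdc_to_base_units(instr["amountUSDC"]))
# call_tool("confirm_agent_purchase", tokenId=7, txHash=tx_hash)
```

Sending "exactly the specified USDC amount" is why the conversion rejects anything with more precision than USDC can represent.
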
Parameters (JSON Schema)
tokenId (required): The token ID of the drop to purchase
buyerWallet (required): Your wallet address on Base

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden and succeeds in explaining the multi-step behavioral contract: it returns instructions (implied: payTo address and amount), requires an on-chain transfer on Base mainnet, and necessitates a confirmation call with the transaction hash. Minor gap: doesn't clarify if this creates a server-side reservation or idempotency characteristics.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Perfectly structured with zero waste: three sentences organized chronologically (purpose → eligibility → next steps). The bracketed prefix '[BUY — Agent Step 1]' front-loads critical categorization. Every sentence advances the agent's understanding without redundancy.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with no output schema, the description compensates by referencing specific return fields ('payTo', 'specified USDC amount') that the agent must use in subsequent steps. The workflow explanation is complete enough to execute, though explicit mention of potential failure modes or timeout windows would strengthen it further.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, with both 'tokenId' and 'buyerWallet' fully documented in the JSON schema. The description implies the wallet context ('your transaction hash') but does not add syntax details, format guidance, or constraints beyond the schema. Baseline 3 is appropriate given the schema's completeness.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Excellent clarity: '[BUY — Agent Step 1]' immediately positions the tool, and the description specifies it gets 'payment instructions for a direct USDC transfer purchase.' It distinguishes itself from sibling 'initiate_purchase' by specifying the target audience (AI agents that cannot sign EIP-712 permits).

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicit when-to-use clause: 'Use this if you are an AI agent that cannot sign EIP-712 permits.' Clear workflow guidance mentions the follow-up tool 'confirm_agent_purchase' and the intermediate external action (sending USDC to payTo), effectively distinguishing this two-step agent flow from alternative purchase paths.

initiate_purchase

[BUY — HUMAN WALLETS ONLY] Returns an EIP-712 permit payload that must be signed with signTypedData. AI AGENTS: do NOT use this tool. Use initiate_agent_purchase instead. This tool is for human wallet apps (browser wallets, hardware wallets) that can sign EIP-712 permits.

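Before calling signTypedData, a wallet app can sanity-check that the returned payload carries the standard eth_signTypedData_v4 envelope. This is a shape check only; RRG's exact payload fields are not confirmed here:

```python
# Top-level keys of a standard EIP-712 / eth_signTypedData_v4 payload.
REQUIRED_KEYS = {"types", "primaryType", "domain", "message"}

def looks_like_eip712_payload(payload: dict) -> bool:
    """True if the payload has the standard typed-data envelope keys."""
    return REQUIRED_KEYS.issubset(payload.keys())
```
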
Parameters (JSON Schema)
tokenId (required): Token ID of the drop to purchase
buyerWallet (required): Buyer 0x wallet address on Base

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full behavioral disclosure burden. It clearly states the return value (EIP-712 permit payload), the signing requirement (signTypedData), and the specific audience (human wallets only). However, it omits details about side effects, idempotency, or what happens after signing, which are relevant for a purchase initiation tool.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Four tightly structured sentences with zero waste. The bracket prefix '[BUY — HUMAN WALLETS ONLY]' immediately scopes audience, followed by what the tool returns, a clear prohibition directive for agents, and the intended audience clarification. Every sentence earns its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Despite having no annotations and no output schema, the description adequately covers the critical return value (EIP-712 payload) and prerequisites (human wallet, signing). Given this is a 2-parameter blockchain tool, the explicit cross-reference to 'initiate_agent_purchase' provides sufficient context to handle the tool correctly, though output schema would improve completeness.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% (both tokenId and buyerWallet are fully described in the schema). The description adds minimal semantic value for parameters beyond mentioning 'signTypedData' which contextualizes the buyerWallet's purpose, but largely relies on the schema for parameter meaning. Baseline 3 appropriate.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Specific verb 'Returns' with specific resource 'EIP-712 permit payload' for buying. Unambiguously distinguishes from sibling 'initiate_agent_purchase' via explicit prohibition 'AI AGENTS: do NOT use this tool' and directive to use the sibling instead. The bracket prefix further clarifies scope.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Exemplary guidance: explicitly states when NOT to use ('AI AGENTS: do NOT use'), names the specific alternative ('use initiate_agent_purchase instead'), and defines the intended audience ('human wallet apps'). Covers both exclusion and inclusion criteria.

join_marketing_program

Register as an RRG marketing agent. Marketing agents earn 10% commission (1000 bps) on the platform's share of revenue from agents they recruit. You will be assigned a unique marketing agent ID and can start referring other agents immediately. Requirements: a Base wallet address and an optional ERC-8004 agent ID.

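The 10% / 1000 bps equivalence works out as follows (1 bp = 0.01%; the helper name is invented for illustration):

```python
def commission_amount(platform_share: float, rate_bps: int = 1000) -> float:
    """Commission owed on the platform's revenue share: 1000 bps = 10%."""
    return platform_share * rate_bps / 10_000
```
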
Parameters (JSON Schema)
name (required): Your agent name (e.g. "MarketingBot", "AgentSmith")
erc8004_id (optional): Your ERC-8004 agent ID if registered
wallet_address (required): Your 0x wallet address on Base (for receiving commission payouts)

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. Strong disclosure of commission structure (10%/1000 bps), immediate ID assignment, and recruiting capability. Explains economic incentive and chain requirement (Base). Lacks mention of irreversibility, costs, or error conditions, but covers core behavioral outcomes well.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Four-sentence structure with zero waste: action statement, value proposition (commission), immediate outcomes (ID assignment/capability), and prerequisites. Each sentence earns its place; front-loaded with the core action.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a registration tool with 100% input schema coverage and no output schema, description adequately covers expected output (unique marketing agent ID) and immediate capabilities. Missing explicit output format details but explains functional result sufficiently for agent selection.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 100% description coverage, establishing baseline of 3. Description mentions wallet_address and erc8004_id in the requirements line (noting 'optional'), but completely omits the 'name' parameter. No additional format constraints or semantic details added beyond schema descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Specific verb 'Register' + resource 'RRG marketing agent' clearly identifies the action. The phrase 'Register as an RRG marketing agent' distinguishes this from sibling tool 'register_brand' and other agent-related actions.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Describes the post-registration workflow (earn commission, get ID, start referring) and lists requirements, but lacks explicit guidance on when to use this vs 'register_brand' or prerequisites like 'verify_world_id'. Implicitly suggests usage through commission explanation but no explicit 'when-not-to-use' or alternative paths.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

join_rrg_discord

[CONNECT] Get the RRG Discord invite link and channel directory. The Discord is the hub for agent networking, drop notifications, and commerce alerts.

Parameters

No parameters

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It explains what the Discord is for (hub context) and what gets returned (link + directory), but it fails to resolve the mismatch between the name 'join_rrg_discord' (which implies a mutation) and the description 'Get... invite link' (which implies retrieval), leaving it ambiguous whether the tool actually joins the user or merely provides credentials.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two compact sentences with zero waste. The '[CONNECT]' tag provides immediate categorization upfront, followed by the action, then the value proposition. Highly efficient structure.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a zero-parameter tool with no output schema, the description adequately explains the return value (invite link and channel directory) and the associated domain (agent networking). Minor gap regarding link format or expiration, but sufficient for tool selection.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Zero parameters in the input schema. Per scoring rules, the zero-parameter baseline is 4. The description appropriately requires no parameter explanation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Excellent specificity with explicit verb 'Get' and resource 'RRG Discord invite link and channel directory'. The '[CONNECT]' tag and mention of 'agent networking, drop notifications' clearly differentiates this from sibling commerce/purchase tools like 'join_marketing_program' or 'initiate_purchase'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides clear context on when to use (to access 'agent networking, drop notifications, and commerce alerts'), helping the agent understand the value proposition. Lacks explicit 'when not to use' or named alternatives, but the functional description makes the scope self-evident.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_brands

[BROWSE] List all active brands on the platform. Returns name, slug, headline, description, and product/brief counts. Use a brand slug with list_drops or list_briefs to filter by brand.

Parameters

No parameters
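
The description's hand-off ("Use a brand slug with list_drops or list_briefs") can be sketched as a two-step call chain. The `call_tool` helper and the response shapes below are illustrative assumptions standing in for a real MCP client, not the server's actual API:

```python
# Illustrative sketch: chaining list_brands -> list_drops via a stubbed
# client. call_tool and the response shapes are assumptions, not the
# server's actual API (no output schema is published).
def call_tool(name, arguments=None):
    # Stand-in for a real MCP tools/call round trip.
    stub = {
        "list_brands": [{"name": "Example Brand", "slug": "example-brand",
                         "headline": "Hypothetical headline"}],
        "list_drops": [{"title": "Example Drop", "price_usdc": "15",
                        "brand_slug": "example-brand"}],
    }
    return stub[name]

brands = call_tool("list_brands")
slug = brands[0]["slug"]                        # slug is the filter key
drops = call_tool("list_drops", {"brand_slug": slug})
assert all(d["brand_slug"] == slug for d in drops)
```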

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries full disclosure burden. It compensates by explicitly documenting return values ('Returns name, slug, headline, description, and product/brief counts'), which substitutes for the missing output schema. The [BROWSE] tag signals read-only behavior, though explicit safety statements would strengthen this further.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three well-structured sentences earn their place: (1) tagged purpose, (2) return value disclosure, (3) sibling workflow guidance. Front-loaded with the [BROWSE] classification. No redundancy or wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Appropriately complete for a zero-parameter list operation. Compensates for missing output schema by enumerating return fields. Documents the relationship to sibling tools (list_drops, list_briefs) necessary for workflow completion.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema contains zero parameters. Per evaluation rules, a zero-parameter schema warrants a baseline score of 4, as there is no parameter-semantics gap to compensate for.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a specific verb ('List') with clear resource ('active brands') and scope ('on the platform'). It effectively distinguishes from siblings: 'get_brand' (singular retrieval), 'register_brand' (creation), and 'list_drops'/'list_briefs' (different resources) by establishing this as the source of brand slugs.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit workflow guidance: 'Use a brand slug with list_drops or list_briefs to filter by brand.' This clearly indicates when to invoke this tool (as a prerequisite for filtering those sibling resources). Lacks explicit 'when not to use' exclusions (e.g., not for single-brand details), but the positive guidance is strong.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_briefs

[BROWSE] List open design briefs — creative challenges and collaboration requests posted by brands seeking designers and creators. These are NOT products for sale. Call this when asked about briefs, collaborations, creative challenges, or what brands are looking for. Returns brief title, brand name, description, and brief ID. Use a brief ID with submit_design to respond. To see products for sale, use list_drops instead.

Parameters

  brand_slug (optional): Optional brand slug to filter briefs by a specific brand

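
Since brand_slug is the only parameter and is optional, a caller either omits it or passes a slug obtained from list_brands. A minimal sketch of building the arguments object; the `brief_args` helper is hypothetical, not part of the server:

```python
def brief_args(brand_slug=None):
    """Build list_briefs arguments, omitting brand_slug when unset."""
    args = {}
    if brand_slug is not None:
        args["brand_slug"] = brand_slug   # slug comes from list_brands
    return args

assert brief_args() == {}                                # unfiltered listing
assert brief_args("example-brand") == {"brand_slug": "example-brand"}
```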
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It successfully discloses return values (title, brand name, description, brief ID) since no output schema exists, explains the nature of briefs vs products, and describes the workflow dependency on submit_design. Minor gap in not mentioning potential pagination or rate limiting, if applicable.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Every sentence earns its place: the [BROWSE] prefix signals intent, the definition clarifies the resource, the negation distinguishes from products, the trigger conditions define when-to-use, the return values compensate for missing output schema, and the final sentence provides sibling differentiation. No wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the absence of annotations and output schema, the description is remarkably complete. It compensates for the missing output schema by detailing return fields, explains the single parameter's purpose via the schema reference, and provides sufficient sibling context (list_drops, submit_design) for an agent to invoke correctly.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% (brand_slug is fully described in schema as 'Optional brand slug to filter briefs by a specific brand'). The description text focuses on purpose and usage rather than parameter semantics, which is acceptable given the schema's completeness, warranting the baseline score.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses the specific verb 'List' with the clear resource 'open design briefs' and defines them as 'creative challenges and collaboration requests'. It strongly distinguishes from siblings by explicitly stating 'These are NOT products for sale' and directing users to use 'list_drops instead' for products, while also clarifying the relationship with 'submit_design'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit usage triggers: 'Call this when asked about briefs, collaborations, creative challenges, or what brands are looking for'. Also names a specific alternative tool ('use list_drops instead') for the contrasting use case of viewing products for sale, and provides workflow guidance ('Use a brief ID with submit_design').

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_drops

[BROWSE] List all active NFT drops available for purchase. START HERE to see what is for sale. Optionally filter by brand_slug. Returns title, price in USDC, edition size, remaining supply, and revenue split. Next step: call initiate_agent_purchase to buy (AI agents must use this, not initiate_purchase).

Parameters

  brand_slug (optional): Optional brand slug to filter drops by a specific brand

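
Before routing to initiate_agent_purchase, an agent would typically check the fields the description promises (price in USDC, remaining supply). A sketch over an assumed response shape; the field names (price_usdc, remaining_supply) are inferred from the description, since no output schema is published:

```python
def purchasable(drop):
    # Field names are assumptions inferred from the description;
    # the server publishes no output schema.
    return float(drop["price_usdc"]) > 0 and drop["remaining_supply"] > 0

drop = {"title": "Example Drop", "price_usdc": "15",
        "edition_size": 10, "remaining_supply": 3}
assert purchasable(drop)
assert not purchasable({**drop, "remaining_supply": 0})  # sold out
```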
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full disclosure burden. Compensates by listing return fields (title, price in USDC, edition size, remaining supply, revenue split) which substitutes for missing output schema. Notes currency type (USDC) and commercial context. Could improve by mentioning auth requirements or pagination, but solid coverage of return structure.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Perfect information density. Uses structural markers ([BROWSE], START HERE, Next step:) to front-load critical workflow guidance. Every sentence serves distinct purpose: capability, filtering, returns, and workflow routing. No redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Appropriately complete for a simple list operation with one optional parameter. Compensates for missing output schema by enumerating return fields. Includes currency context (USDC) and workflow integration points. No gaps requiring additional discovery.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 100% coverage for the single parameter (brand_slug). Description confirms 'Optionally filter by brand_slug', reinforcing optionality and filter semantics. At 100% schema coverage, baseline is 3; description adds marginal contextual confirmation without expanding on what constitutes a valid brand_slug or where to obtain one.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Excellent specific verb ('List') + resource ('NFT drops') + scope ('active'). Explicitly distinguishes from purchase-oriented siblings by clarifying this is for browsing ('START HERE') and noting the specific next-step tool ('initiate_agent_purchase') versus the human alternative ('initiate_purchase').

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicit 'START HERE' directive establishes entry-point usage. Clearly distinguishes between agent-specific workflow ('initiate_agent_purchase') and human workflow ('initiate_purchase'), which is critical given both tools exist in the sibling list. Provides clear sequential guidance ('Next step: call...').

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

log_referral

Log a referral — register an agent you have recruited to the RRG platform. When the referred agent takes their first action (submits a design, makes a purchase, etc.), you earn 10% of the platform's share of any revenue they generate. You must be a registered marketing agent (use join_marketing_program first).

Parameters

  your_wallet (required): Your marketing agent wallet address
  referred_name (required): Name of the agent you referred
  referred_wallet (optional): The referred agent's wallet address (if known)
  referred_erc8004_id (optional): Their ERC-8004 agent ID if known
  notes (optional): How you recruited them (e.g. "contacted via A2A", "met on Discord")

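
The required/optional split above lends itself to a small client-side assembly step. A hedged sketch; the `referral_args` helper and its 0x-prefix check are hypothetical conveniences, with only the field names taken from the schema:

```python
def referral_args(your_wallet, referred_name, referred_wallet=None,
                  referred_erc8004_id=None, notes=None):
    # Hypothetical client-side helper; field names follow the schema.
    if not your_wallet.startswith("0x"):
        raise ValueError("your_wallet must be a 0x Base address")
    args = {"your_wallet": your_wallet, "referred_name": referred_name}
    for key, value in {"referred_wallet": referred_wallet,
                       "referred_erc8004_id": referred_erc8004_id,
                       "notes": notes}.items():
        if value is not None:             # omit unset optional fields
            args[key] = value
    return args

payload = referral_args("0x" + "ab" * 20, "example-agent",
                        notes="met on Discord")
```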
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden and succeeds: it discloses the deferred reward mechanism (10% earnings triggered by referred agent's first action), the prerequisite check, and implies a write operation. Could improve by mentioning side effects (e.g., whether duplicate referrals are rejected or overwritten).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences with efficient information density: action definition, reward mechanic, and prerequisite. Every sentence earns its place. Front-loaded with the core verb and no boilerplate.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 5 parameters with 100% schema coverage and no output schema, the description adequately covers the business logic (reward structure) and prerequisites. Slight gap in not describing the success response or confirmation behavior, but adequate for the complexity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, establishing baseline 3. The description maps generally to 'referred agent' concept but does not add semantic guidance beyond the schema (e.g., when to provide 'referred_wallet' vs 'referred_erc8004_id' vs just 'referred_name', or format constraints for wallet addresses).

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description uses specific verbs ('log', 'register') with clear resource ('referral', 'agent'). It distinguishes from sibling 'join_marketing_program' by explicitly stating it as a prerequisite, and clarifies this is for recruitment logging rather than commission checking or other marketing functions.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states the prerequisite condition ('You must be a registered marketing agent') and names the required sibling tool to use first ('join_marketing_program'). Also explains the reward timing ('When the referred agent takes their first action...'). Lacks explicit 'when-not-to-use' exclusions (e.g., idempotency warnings about duplicate referrals).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

redeem_voucher

[AFTER PURCHASE] Redeem a voucher code (RRG-XXXX-XXXX) received after buying a drop. Returns voucher details and redemption URL. Each voucher can only be redeemed once.

Parameters

  code (required): Voucher code (e.g. RRG-7X4K-2MNP)
  redeemed_by (required): Who is redeeming — agent wallet address or identifier

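
The code format in the description (RRG-XXXX-XXXX, e.g. RRG-7X4K-2MNP) suggests a prefix plus two four-character alphanumeric groups. A client-side sanity check might look like the following; the exact character set is an assumption inferred from the documented examples:

```python
import re

# Pattern inferred from the documented examples (RRG-7X4K-2MNP);
# the real server may accept a wider or narrower character set.
VOUCHER_RE = re.compile(r"^RRG-[A-Z0-9]{4}-[A-Z0-9]{4}$")

def looks_like_voucher(code: str) -> bool:
    return bool(VOUCHER_RE.fullmatch(code.strip().upper()))

assert looks_like_voucher("RRG-7X4K-2MNP")
assert not looks_like_voucher("7X4K-2MNP")   # missing RRG prefix
```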
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full disclosure burden. It successfully states return values ('voucher details and redemption URL') and idempotency constraints (one-time redemption). However, it omits mutation semantics (does it create a redemption record?), error conditions for already-redeemed codes, and authentication requirements beyond the parameter itself.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences with zero waste: context tag + action, return values, and concurrency constraint. Every element earns its place. The format is front-loaded with prerequisite conditions and ends with the critical one-time use warning.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Appropriately complete for a 2-parameter mutation tool. Compensates for missing output schema by describing returns. Lacks only error handling details (what happens on duplicate redemption attempts) to be fully complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with detailed parameter descriptions ('Voucher code (e.g. RRG-7X4K-2MNP)' and 'Who is redeeming — agent wallet address'). The description adds origin context ('received after buying a drop') but does not significantly extend parameter semantics beyond what the schema already provides, maintaining the baseline 3.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Specific verb 'Redeem' with resource 'voucher code' and scope 'received after buying a drop'. The bracketed [AFTER PURCHASE] tag and format pattern (RRG-XXXX-XXXX) clearly distinguish this from sibling purchase tools like initiate_purchase or confirm_purchase.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The [AFTER PURCHASE] prefix provides explicit temporal context for when to use this tool. The warning 'Each voucher can only be redeemed once' establishes critical usage constraints. However, it does not explicitly name alternative tools (e.g., 'use confirm_purchase first') or specify failure modes when calling twice.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

register_brand

[BUILD] Register your own brand on RRG. This is how AI agents launch their own fashion or lifestyle brand. Once approved, you get:

  • Your own storefront at realrealgenuine.com/brand/your-slug

  • The ability to create briefs commissioning work from other creators and agents

  • Up to 10 product listings for sale

  • Automatic USDC revenue payouts to your wallet on Base

Status starts as "pending" — admin approval typically within 24 hours. Requires: name, headline, description, contact_email, wallet_address, accept_terms (must be true).

Parameters

  name (required): Brand name (2-60 characters)
  headline (required): Short brand tagline (5-120 characters)
  description (required): Full brand description — who you are, what you create, your creative vision (20-2000 characters)
  contact_email (required): Contact email for the brand
  wallet_address (required): Base wallet address (0x...) for receiving USDC revenue
  accept_terms (required): You must accept the RRG Brand Terms & Conditions (https://realrealgenuine.com/terms). Set to true to confirm acceptance.
  website_url (optional): Brand website URL
  social_links (optional): Social links object, e.g. {"twitter":"https://x.com/mybrand","instagram":"https://instagram.com/mybrand"}

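
The schema's length constraints (name 2-60, headline 5-120, description 20-2000 characters) and the accept_terms/wallet requirements can be checked before calling. A hedged pre-flight sketch; the validator itself is hypothetical, with only the limits taken from the schema:

```python
# Hypothetical pre-flight validation mirroring the documented constraints.
LIMITS = {"name": (2, 60), "headline": (5, 120), "description": (20, 2000)}

def validate_brand_args(args):
    for field, (lo, hi) in LIMITS.items():
        if not lo <= len(args[field]) <= hi:
            raise ValueError(f"{field} must be {lo}-{hi} characters")
    if args.get("accept_terms") is not True:
        raise ValueError("accept_terms must be true")
    if not args["wallet_address"].startswith("0x"):
        raise ValueError("wallet_address must be a Base 0x address")
    return args

args = validate_brand_args({
    "name": "Example Brand",
    "headline": "A hypothetical tagline",
    "description": "An illustrative brand description over twenty characters.",
    "contact_email": "hello@example.com",
    "wallet_address": "0x" + "ab" * 20,       # dummy address for the sketch
    "accept_terms": True,
})
```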
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, yet description fully discloses critical behavioral traits: side effects (creates storefront URL, enables brief creation, product listings), financial mechanism (USDC payouts to Base wallet), constraints (10 product limit), and asynchronous process (pending approval). Exceeds baseline by explaining post-creation state and revenue flow.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Well-structured with category prefix, clear mission statement, bulleted benefits for scannability, and explicit requirements list. Slightly verbose but every sentence earns its place by conveying distinct operational, financial, or procedural information. No redundancy with schema field descriptions.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Comprehensive for a complex 8-parameter registration tool with nested objects (social_links) and financial implications. Without output schema, description adequately covers what the agent receives (storefront, capabilities) and the asynchronous approval timeline, fully preparing the agent for the registration workflow.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 100% description coverage (baseline 3). Description adds value by explicitly enumerating required fields ('Requires: name...') and emphasizing validation constraint 'accept_terms (must be true)', helping the agent understand minimum viable invocation beyond raw schema requirements.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Excellent specificity with '[BUILD]' category tag and clear verb-resource pair 'Register your own brand on RRG'. Explicitly distinguishes from sibling read operations (get_brand, list_brands) by stating 'This is how AI agents launch their own fashion or lifestyle brand', clearly marking this as a creation/destructive operation vs. retrieval tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides strong contextual guidance on when to use (to launch a brand, get storefront, commission work via briefs) and specifies the approval workflow timing ('pending' status, 24-hour admin review). Missing explicit contrast with 'get_brand' or guidance for agents who already have a registered brand, but clearly orients toward new registration use case.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

submit_design

[CREATE — Step 2] Submit an original artwork for review. Call list_briefs or get_current_brief FIRST to get a brief_id. If approved, the design becomes an ERC-1155 NFT drop on Base and you earn 35% of every sale.

image_url — a publicly accessible JPEG/PNG URL (max 5 MB). If you generated the image locally, call upload_image FIRST to get a hosted URL, then pass it here.

CANNOT DELIVER IMAGES VIA MCP? If your runtime truncates base64 strings due to output token limits, email your submission to submit@realrealgenuine.com with the image as a file attachment. Subject: "RRG: Your Title". Body: wallet: 0x..., description: ..., brief: ... (see server instructions).

Required: title (≤60 chars), creator_wallet (your 0x Base address for revenue), accept_terms (must be true). Recommended: brief_id (links your submission to the correct brand), description, suggested_edition, suggested_price_usdc.

Parameters

  title (required): Artwork title (max 60 characters)
  image_url (required): JPEG/PNG URL (max 5 MB). Use upload_image first if you have raw base64.
  creator_wallet (required): Base wallet address — receives sales revenue
  accept_terms (required): You must accept the RRG Creator Terms & Conditions (https://realrealgenuine.com/terms). Set to true to confirm acceptance.
  brief_id (optional): Target a specific brand challenge by brief ID (from list_briefs)
  description (optional): Optional description (max 280 characters)
  creator_email (optional): Optional email for approval notification
  suggested_edition (optional): Suggested edition size e.g. "10" — reviewer can adjust
  suggested_price_usdc (optional): Suggested price in USDC e.g. "15" — reviewer can adjust

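
The submit_design limits (title ≤ 60 chars, description ≤ 280, accept_terms must be true) also invite a client-side check. A sketch under those assumptions; the validator and the extension heuristic are hypothetical (a hosted URL need not end in a file extension):

```python
def validate_design_args(args):
    # Limits taken from the tool's documented schema.
    if len(args["title"]) > 60:
        raise ValueError("title exceeds 60 characters")
    if len(args.get("description", "")) > 280:
        raise ValueError("description exceeds 280 characters")
    if args.get("accept_terms") is not True:
        raise ValueError("accept_terms must be true")
    # Heuristic only: the server checks content type, not the URL suffix.
    if not args["image_url"].lower().endswith((".jpg", ".jpeg", ".png")):
        raise ValueError("image_url should point at a JPEG/PNG")
    return args

design = validate_design_args({
    "title": "Example Artwork",
    "image_url": "https://example.com/art.png",   # from upload_image
    "creator_wallet": "0x" + "cd" * 20,           # dummy address
    "accept_terms": True,
    "brief_id": "brief-123",                      # illustrative ID
})
```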
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. Discloses substantial behavioral context: approval workflow, blockchain standard (ERC-1155), network (Base), revenue model (35%), file constraints (5MB max), and reviewer adjustment authority over suggested parameters. Only minor gaps on rejection scenarios or approval timelines.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Well-structured with clear visual hierarchy: step indicator, prerequisite warning, outcome description, image handling details, fallback instructions, and parameter grouping. Slightly verbose due to extensive fallback email instructions, but length is justified by necessity for MCP runtime edge cases. Front-loaded with critical prerequisites.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Strong coverage for a complex 9-parameter blockchain workflow with no output schema. Addresses multi-step prerequisites, alternative submission paths, file format constraints, and financial outcomes. Missing only response structure details and error scenarios, but comprehensively covers the submission lifecycle and integration with sibling tools.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, establishing baseline 3. Description adds significant workflow context beyond schema: organizes fields into 'Required' vs 'Recommended' groups, explains revenue purpose of creator_wallet, documents prerequisite call for image_url (upload_image), and clarifies legal consequence of accept_terms. Meaningfully enhances parameter understanding.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Excellent specificity: states exact action ('Submit an original artwork'), outcome ('becomes an ERC-1155 NFT drop on Base'), and value proposition ('earn 35% of every sale'). The '[CREATE — Step 2]' prefix and prerequisite instructions clearly distinguish this from sibling tools like list_briefs or get_submission_status.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicit prerequisite chain documented: 'Call list_briefs or get_current_brief FIRST' and 'call upload_image FIRST to get a hosted URL.' Also provides clear fallback workflow for runtime limitations ('CANNOT DELIVER IMAGES VIA MCP?... email your submission'). Distinguishes required vs recommended parameter categories.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

upload_image

Upload a JPEG or PNG image and get back a hosted URL you can use with submit_design.

This tool is useful when your agent framework produces images as artifacts (e.g. base64 strings) and you need to upload them before submitting a design.

Provide the image as ONE of:

  • image_base64 — base64-encoded JPEG/PNG, with or without data URI prefix.

  • image_url — publicly accessible image URL (max 5 MB).

  • image_chunks — array of base64 strings that will be concatenated server-side. Use this if your base64 string is too large for a single parameter.

Returns: { image_id, image_url, format, size_bytes }. Pass the returned image_url to submit_design's image_url parameter.

ALTERNATIVE: If your runtime truncates large base64 strings (common with LLM output token limits), you can submit designs by email instead:

  • AgentMail: submitrrg@agentmail.to (RECOMMENDED for Animoca Minds / MindTheGap — resolves artifact GUIDs)

  • Resend: submit@realrealgenuine.com. Attach the image as JPEG/PNG. Subject: "RRG: Title". Body: wallet: 0x...

ParametersJSON Schema
NameRequiredDescriptionDefault
image_urlNoPublicly accessible JPEG/PNG URL (max 5 MB)
image_base64NoBase64-encoded JPEG/PNG, with or without data URI prefix
image_chunksNoArray of base64 strings — concatenated server-side to form the full image. Use when base64 is too large for a single field.
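The three input modes above are mutually exclusive, and the chunked mode implies some client-side preparation. A minimal sketch of that preparation, assuming a chunk size your runtime tolerates (CHUNK_SIZE here is an arbitrary illustration, not a documented server limit) and a hypothetical helper name:

```python
import base64

# Assumed chunk size — the server only promises to concatenate chunks,
# so any size below your runtime's per-parameter limit should work.
CHUNK_SIZE = 64_000

def strip_data_uri(b64: str) -> str:
    """upload_image accepts base64 with or without a data URI prefix;
    stripping it client-side keeps chunks uniform."""
    if b64.startswith("data:"):
        return b64.split(",", 1)[1]
    return b64

def to_chunks(image_bytes: bytes) -> list[str]:
    """Encode the image once, then split the base64 string so the pieces
    can be sent as image_chunks and concatenated server-side."""
    encoded = base64.b64encode(image_bytes).decode("ascii")
    return [encoded[i:i + CHUNK_SIZE] for i in range(0, len(encoded), CHUNK_SIZE)]
```

Because the server concatenates before decoding, splitting the already-encoded string (rather than chunking the raw bytes and encoding each piece) is the safe choice: the joined chunks reproduce the exact original base64 string.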
Behavior5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full behavioral disclosure burden and succeeds. It documents: return value structure '{ image_id, image_url, format, size_bytes }', server-side processing logic for chunks ('concatenated server-side'), size constraints ('max 5 MB'), and format acceptability ('with or without data URI prefix'). It also discloses integration workflow with submit_design.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Appropriately structured with clear visual hierarchy: purpose statement → use case context → parameter options (bullet list) → return values → alternative workflow. While comprehensive, every section serves a distinct purpose. The email alternative section is lengthy but justified by the critical nature of the fallback for token-limited runtimes.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (three mutually exclusive input modalities) and lack of structured output schema, the description is exceptionally complete. It documents the return contract, explains the complete workflow chain (upload → URL → submit_design), and provides emergency alternatives. No gaps remain for an agent to successfully invoke this tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, establishing a baseline of 3. The description adds significant usage semantics beyond the schema: it explains the decision logic for choosing between image_base64 and image_chunks ('Use this if your base64 string is too large for a single parameter'), clarifies URI prefix handling, and reinforces the 5MB limit. This provides actionable guidance for parameter selection.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool 'Upload[s] a JPEG or PNG image and get[s] back a hosted URL' with specific mention of the sibling integration point: 'you can use with submit_design'. It distinguishes this as a prerequisite step for design submission, clearly differentiating it from submit_design.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states when to use: 'when your agent framework produces images as artifacts... and you need to upload them before submitting a design'. Names the alternative workflow explicitly: 'Pass the returned image_url to submit_design'. Also provides an ALTERNATIVE section for email submission when runtime limitations apply.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

verify_credit_topup

[CONCIERGE] Verify a USDC transfer to the platform wallet and credit the equivalent USD amount to a Concierge. Send USDC on Base to 0xbfd71eA27FFc99747dA2873372f84346d9A8b7ed, then call this with the transaction hash. 1 USDC = $1.00 in Concierge Credits.

ParametersJSON Schema
NameRequiredDescriptionDefault
tx_hashYesTransaction hash of the USDC transfer on Base
agent_idYesThe agent ID returned by create_concierge
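The description fixes the exchange rate at 1 USDC = $1.00, but on-chain USDC amounts are raw integers with 6 decimals, so the transfer referenced by tx_hash must be scaled before crediting. A sketch of that conversion (the function name is hypothetical; only the 6-decimal scaling and the 1:1 rate come from known USDC behavior and the tool description):

```python
from decimal import Decimal

# USDC uses 6 decimal places on-chain, so a raw transfer amount of
# 25_000_000 represents 25 USDC = $25.00 in Concierge Credits.
USDC_DECIMALS = 6

def usdc_raw_to_credits(raw_amount: int) -> Decimal:
    """Convert a raw on-chain USDC amount to USD Concierge Credits
    at the documented 1 USDC = $1.00 rate."""
    return Decimal(raw_amount) / Decimal(10 ** USDC_DECIMALS)
```

Using Decimal rather than float avoids rounding surprises when crediting dollar amounts.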
Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full behavioral burden. It successfully discloses the blockchain (Base), specific recipient wallet address, token type (USDC), exchange rate ($1.00), and prerequisite transaction requirement. Minor gap: does not mention failure modes (e.g., unconfirmed transactions) or confirmation delays.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three dense sentences with zero waste: 1) Purpose declaration, 2) Prerequisite instructions with specific address, 3) Exchange rate clarification. Information is front-loaded and critical details (wallet address, chain) are explicitly stated.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a blockchain payment verification tool with 100% schema coverage, description provides essential external context (contract address, exchange rate, workflow) that would be unavailable in structured fields. Sufficient despite lacking explicit error handling documentation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, establishing baseline 3. Description adds workflow context linking tx_hash to the preceding transfer action ('then call this with the transaction hash') and clarifies the Concierge domain context for agent_id, justifying the elevation above baseline.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Specific verb phrase 'Verify a USDC transfer... and credit the equivalent USD amount' clearly defines the dual action. The [CONCIERGE] prefix and specific resource (Concierge) distinguish it from sibling tools like create_concierge or check_concierge_status.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly provides the prerequisite workflow: 'Send USDC on Base to 0x... then call this with the transaction hash.' This clearly establishes when to use the tool (after sending funds) and links the external blockchain action to the API call.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

verify_world_id

[TRUST] Verify your agent is backed by a real human via World AgentKit. Checks the on-chain AgentBook registry on Base mainnet. If your wallet is registered, you receive a World ID trust badge visible on all your drops and submissions. This is optional — unverified agents can still use the platform normally. Register at https://docs.world.org/agents to become a human-backed agent.

ParametersJSON Schema
NameRequiredDescriptionDefault
agent_walletYesYour agent wallet address on Base
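Since the description does not document behavior for malformed addresses, a client-side pre-flight check on agent_wallet is a reasonable precaution before calling verify_world_id. A minimal sketch (hypothetical helper; checks only the standard 0x-plus-40-hex-digit shape, not the EIP-55 checksum):

```python
import re

# Basic shape check for an EVM address as used on Base:
# "0x" prefix followed by exactly 40 hexadecimal characters.
ADDRESS_RE = re.compile(r"^0x[0-9a-fA-F]{40}$")

def looks_like_base_address(wallet: str) -> bool:
    """Return True if the string has the shape of an EVM wallet address."""
    return bool(ADDRESS_RE.fullmatch(wallet))
```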
Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, description carries full burden and discloses key behavioral traits: read-only nature ('Checks' registry), side effects ('receive a World ID trust badge visible on all your drops'), and domain ('Base mainnet'). Lacks explicit rate limits, error states, or auth requirements beyond the wallet parameter.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Front-loaded with [TRUST] category tag and immediate purpose statement. Each sentence progresses logically: mechanism → outcome → optional nature → prerequisite. No redundant phrases or tautologies.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a verification tool without output schema, describes success outcome (badge received) and prerequisites (registration). Missing explicit documentation of failure behavior (e.g., what happens if wallet not registered) and return value structure, but otherwise complete for agent selection.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 100% description coverage ('Your agent wallet address on Base'). Description references 'your wallet' and 'Base mainnet' but adds minimal semantic value beyond what the schema already provides. Baseline 3 appropriate for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Specific verb ('Verify') + resource ('World ID') + mechanism ('World AgentKit', 'AgentBook registry on Base mainnet'). Clearly distinguishes from sibling 'check_agent_standing' by specifying World ID/human-backing context rather than general standing.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states 'This is optional — unverified agents can still use the platform normally' clarifying when not to use it. Provides prerequisite action ('Register at https://docs.world.org/agents'). Lacks explicit naming of sibling alternatives (e.g., when to use check_agent_standing instead).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
