Real Real Genuine
Server Details
Agent-native fashion marketplace. Browse, design, purchase NFTs, launch brands on Base.
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
- Repository: richardjhobbs/rrg
- GitHub Stars: 0
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4.1/5 across 30 of 30 tools scored. Lowest: 3.4/5.
The tool set has clear functional groupings (browse, buy, create, affiliate, concierge, etc.), but there is significant overlap within groups. For example, initiate_purchase and initiate_agent_purchase serve similar purposes for different user types, and check_agent_standing and verify_world_id both relate to trust but with unclear boundaries. Descriptions help differentiate, but agents may still select the wrong tool because the scopes overlap.
Most tools follow a consistent verb_noun naming pattern (e.g., list_brands, get_brand, submit_design), which aids predictability. There are minor deviations: join_rrg_discord uses verb_noun_noun, and get_marketing_handbook lacks the bracketed category prefix the other tools carry, slightly breaking the pattern. Overall, naming is largely consistent and readable.
With 30 tools, the count is excessive for the server's purpose of e-commerce and agent interactions. Many tools could be consolidated (e.g., multiple purchase flows, overlapping browse tools), leading to a bloated interface. This high number increases complexity and potential for confusion, making it feel heavy and poorly scoped.
The tool surface is quite comprehensive, covering browsing, purchasing, design submission, affiliate programs, concierge services, and trust management. Minor gaps exist, such as no tool for updating or deleting concierge preferences or managing brand listings after registration, but core workflows are well-supported and agents can likely work around these omissions.
Available Tools
32 tools

check_agent_standing (Grade A)
[TRUST] Check your on-chain trust standing across RRG brands (ERC-8004 reputation). Trust levels: standard (new) → trusted (3+ purchases) → premium (10+ purchases). Higher trust unlocks better voucher offers and priority access.
| Name | Required | Description | Default |
|---|---|---|---|
| agent_wallet | Yes | Agent wallet address on Base | |
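A minimal sketch of calling this tool through the MCP TypeScript SDK over Streamable HTTP. The endpoint URL and wallet address are placeholders (the listing above does not show the real URL), and the response is printed raw because the server publishes no output schema.

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

// Placeholder endpoint — the listing above does not expose the real URL.
const SERVER_URL = "https://example.com/mcp";

const client = new Client({ name: "rrg-example", version: "1.0.0" });
await client.connect(new StreamableHTTPClientTransport(new URL(SERVER_URL)));

// Read-only trust lookup; agent_wallet is the only (required) parameter.
const result = await client.callTool({
  name: "check_agent_standing",
  arguments: {
    agent_wallet: "0x0000000000000000000000000000000000000001", // placeholder Base wallet
  },
});

// No output schema is published, so just inspect the returned content blocks.
console.log(JSON.stringify(result.content, null, 2));
```

Later sketches on this page reuse the connected `client` from this block rather than repeating the setup.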
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It discloses behavioral traits such as the trust level thresholds (standard, trusted, premium) and the benefits of higher trust. However, it lacks details on error handling, rate limits, authentication needs, or what happens if the wallet address is invalid. The description does not contradict any annotations (since none exist).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose in the first sentence, followed by trust levels and benefits. Every sentence adds value: the first defines the tool, the second explains trust thresholds, and the third describes outcomes. There is no wasted text, and it is appropriately sized for a single-parameter tool.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (checking on-chain reputation), no annotations, no output schema, and 100% schema coverage, the description is fairly complete. It covers purpose, trust levels, and benefits, but lacks details on return values (e.g., what specific data is returned) and error cases. For a read-only tool with simple inputs, this is adequate but not exhaustive.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, with the single parameter 'agent_wallet' well-documented in the schema. The description does not add parameter-specific details beyond the schema, but it implicitly clarifies that the wallet is used to check trust standing across brands. With high schema coverage and only one parameter, a baseline of 4 is appropriate as the description provides context without redundancy.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: checking on-chain trust standing across RRG brands using ERC-8004 reputation. It specifies the verb ('check') and resource ('trust standing') and distinguishes itself from siblings by focusing on reputation rather than purchases, commissions, or other operations.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context by mentioning trust levels and benefits (e.g., 'unlocks better voucher offers and priority access'), suggesting it should be used to assess eligibility for perks. However, it does not explicitly state when to use this tool versus alternatives like 'get_offers' or 'redeem_voucher', nor does it provide exclusions or prerequisites.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
check_my_commissions (Grade A)
[AFFILIATE / REFERRAL / MARKETING] Check your referral / marketing / affiliate commission balance and history. Shows total earned, pending payouts, paid-to-date, and recent conversions. Identified by wallet. Works for humans and AI agents alike.
| Name | Required | Description | Default |
|---|---|---|---|
| wallet_address | Yes | Your marketing agent wallet address | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes what the tool does (checking commission balance and history) and its scope (works for humans and AI agents), but lacks details on potential behavioral traits such as rate limits, authentication requirements, error conditions, or data freshness. It does not contradict any annotations, as none are given.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized and front-loaded, with the first sentence clearly stating the core purpose. Each subsequent sentence adds value: detailing what data is shown, how identification works, and the tool's applicability. There is no wasted text, and the structure efficiently conveys essential information in a few concise sentences.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (a read operation with one parameter), no annotations, and no output schema, the description is mostly complete. It covers the purpose, data returned, and parameter context well. However, it lacks details on output format (e.g., structure of commission history) and potential limitations (e.g., pagination for recent conversions), which would enhance completeness for an agent's use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema description coverage is 100%, with the single parameter 'wallet_address' well-documented in the schema. The description adds meaningful context by specifying that the wallet is for 'marketing agent' purposes and that commissions are 'identified by wallet', which clarifies the parameter's role beyond the schema's technical definition. With only one parameter, the baseline is high, and the description compensates adequately.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('check', 'shows', 'identified') and resources ('commission balance and history', 'total earned, pending payouts, paid-to-date, recent conversions', 'wallet'). It distinguishes itself from siblings like 'log_referral', 'verify_credit_topup', or 'get_my_preferences' by focusing on commission tracking rather than logging, verification, or preference retrieval.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use this tool: for checking commission-related data identified by a wallet address, applicable to both humans and AI agents. However, it does not explicitly state when not to use it or name specific alternatives among siblings (e.g., 'log_referral' for logging referrals instead of checking commissions), which prevents a perfect score.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
confirm_agent_purchase (Grade A)
[BUY — Agent Step 2] Confirm your USDC payment and claim the listing. Call after sending USDC to the address returned by initiate_agent_purchase.
Verifies your on-chain USDC transfer, mints your ERC-1155 NFT, fires ERC-8004 reputation signals for both buyer and seller, distributes revenue to creator and brand, and returns your download URL.
Include buyerAgentId (your ERC-8004 agent ID) for an agent-to-agent trust signal on-chain.
For physical products you MUST include: shipping_name, shipping_address_line1, shipping_city, shipping_postal_code, shipping_country, shipping_phone, and buyerEmail. shipping_phone is required for delivery confirmation. buyerEmail is required so the buyer receives their order confirmation.
| Name | Required | Description | Default |
|---|---|---|---|
| txHash | Yes | Your USDC transfer transaction hash on Base | |
| tokenId | Yes | The listing token ID | |
| buyerEmail | No | Email address for order confirmation and file delivery. Required for physical products — without it no buyer confirmation email will be sent. | |
| buyerWallet | Yes | Your wallet address | |
| buyerAgentId | No | Your ERC-8004 agent ID for on-chain reputation signals (e.g. 17666) | |
| selected_size | No | For sized products, the size you chose at initiate_agent_purchase. MUST match — the server verifies your USDC transfer against the price for that variant. | |
| shipping_city | No | City (required for physical products) | |
| shipping_name | No | Recipient name (required for physical products) | |
| selected_color | No | For products with a colour axis, the colourway you chose at initiate_agent_purchase. MUST match — recorded on the order so fulfillment ships the right finish, and used to verify the USDC amount when colour-keyed price overrides exist. | |
| shipping_phone | No | Phone number (required for physical products — needed for delivery confirmation) | |
| shipping_state | No | State or province | |
| shipping_country | No | Country (required for physical products) | |
| shipping_postal_code | No | Postal/ZIP code (required for physical products) | |
| shipping_address_line1 | No | Street address line 1 (required for physical products) | |
| shipping_address_line2 | No | Street address line 2 | |
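A sketch of the agent-flow confirmation for a physical product, assuming USDC was already sent to the address returned by initiate_agent_purchase. All values are placeholders; the required-together shipping fields come straight from the description above.

```typescript
// Assumes `client` is connected as in the first sketch; USDC already sent.
const confirmation = await client.callTool({
  name: "confirm_agent_purchase",
  arguments: {
    txHash: "0x" + "00".repeat(32),   // placeholder Base transaction hash
    tokenId: "1",                     // listing token ID (e.g. from get_drop_details)
    buyerWallet: "0x0000000000000000000000000000000000000001",
    buyerAgentId: "17666",            // optional ERC-8004 agent ID (example value from the schema)
    selected_size: "M",               // MUST match the size chosen at initiate_agent_purchase
    // Physical products: all of the following are required together.
    buyerEmail: "buyer@example.com",
    shipping_name: "A. Buyer",
    shipping_address_line1: "1 Example Street",
    shipping_city: "London",
    shipping_postal_code: "EC1A 1AA",
    shipping_country: "GB",
    shipping_phone: "+44 20 0000 0000",
  },
});
// Per the description, the response includes the download URL.
console.log(JSON.stringify(confirmation.content, null, 2));
```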
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It does an excellent job describing the multi-step process (verifies transfer, mints NFT, fires reputation signals, distributes revenue, returns download URL) which gives the agent clear expectations about what will happen. It also mentions the optional buyerAgentId parameter for 'agent-to-agent trust signal on-chain,' adding valuable context about behavioral implications.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is perfectly structured and concise. It begins with a clear purpose statement, provides usage guidance, details the multi-step behavioral process, and ends with specific parameter guidance - all in four well-organized sentences with zero wasted words. Every sentence earns its place by adding distinct value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a complex purchase confirmation tool with no annotations and no output schema, the description does an excellent job covering the behavioral sequence and workflow context. The main gap is the lack of information about return values (only mentions 'returns your download URL' without specifying format or other response details). However, given the comprehensive behavioral disclosure and clear workflow positioning, it provides substantial contextual understanding.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents all 5 parameters thoroughly. The description adds minimal parameter-specific information beyond what's in the schema - it only mentions buyerAgentId for 'agent-to-agent trust signal on-chain.' This provides some additional semantic context for that specific parameter but doesn't significantly enhance understanding beyond the comprehensive schema documentation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('Confirm your USDC payment and claim the listing') and resources (USDC payment, NFT minting, reputation signals). It explicitly distinguishes from sibling 'initiate_agent_purchase' by positioning this as 'Step 2' after that tool, providing clear differentiation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit usage guidance: 'Call after sending USDC to the address returned by initiate_agent_purchase.' This clearly states when to use this tool versus its sibling, establishing a sequential workflow relationship that helps the agent understand the proper context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
confirm_purchase (Grade A)
[BUY — Step 2] Complete the purchase by submitting the signed EIP-712 permit from initiate_purchase. Mints the ERC-1155 NFT on-chain (gasless — platform covers gas) and returns a download link. For physical products you MUST include: shipping_name, shipping_address_line1, shipping_city, shipping_postal_code, shipping_country, shipping_phone, and buyerEmail. shipping_phone is required for delivery confirmation. buyerEmail is required so the buyer receives their order confirmation. The response includes revenue split details.
| Name | Required | Description | Default |
|---|---|---|---|
| tokenId | Yes | Token ID of the listing | |
| deadline | Yes | Permit deadline (Unix timestamp string from initiate_purchase) | |
| signature | Yes | EIP-712 signature from wallet.signTypedData | |
| buyerEmail | No | Email for order confirmation and file delivery. Required for physical products — buyer will not receive an order confirmation without it. | |
| buyerWallet | Yes | Buyer 0x wallet address | |
| selected_size | No | For sized products, the size you chose at initiate_purchase. MUST match the size whose price was used to build the permit. | |
| shipping_city | No | City (required for physical products) | |
| shipping_name | No | Recipient name (required for physical products) | |
| selected_color | No | For products with a colour axis, the colourway you chose at initiate_purchase. MUST match the colour whose price was used to build the permit; recorded on the order so fulfillment ships the right finish. | |
| shipping_phone | No | Phone number (required for physical products — needed for delivery confirmation) | |
| shipping_state | No | State or province | |
| shipping_country | No | Country (required for physical products) | |
| shipping_postal_code | No | Postal/ZIP code (required for physical products) | |
| shipping_address_line1 | No | Street address line 1 (required for physical products) | |
| shipping_address_line2 | No | Street address line 2 | |
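For the human (permit-based) flow, step 2 forwards the EIP-712 signature produced at step 1. A sketch for a digital product with placeholder values; `deadline` is exactly what initiate_purchase returned, and `signature` is what wallet.signTypedData produced over that permit.

```typescript
// Assumes `client` is connected as in the first sketch.
// `deadline` comes from initiate_purchase; `signature` from wallet.signTypedData
// over the permit initiate_purchase returned (both placeholders here).
const purchase = await client.callTool({
  name: "confirm_purchase",
  arguments: {
    tokenId: "1",
    deadline: "1735689600",            // Unix timestamp string from initiate_purchase
    signature: "0x" + "00".repeat(65), // placeholder 65-byte EIP-712 signature
    buyerWallet: "0x0000000000000000000000000000000000000001",
    selected_size: "M",                // must match the size the permit was priced for
  },
});
// Per the description: download link plus revenue split details.
console.log(JSON.stringify(purchase.content, null, 2));
```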
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key traits: it's a write operation ('Complete the purchase', 'Mints'), specifies gas coverage ('gasless — platform covers gas'), indicates output ('returns a download link' and 'revenue split details'), and notes conditional requirements for physical products. It doesn't cover error cases or rate limits, but provides substantial context for a mutation tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured in three sentences: the first states the core purpose and prerequisite, the second explains the on-chain action and gas coverage with output details, and the third provides conditional requirements. Each sentence adds essential information with zero waste, and it's front-loaded with the primary action.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a complex mutation tool with 13 parameters and no annotations or output schema, the description does well by explaining the tool's role in a two-step process, on-chain behavior, gas coverage, conditional requirements, and expected outputs. However, it lacks details on error handling, response format beyond mentions of 'download link' and 'revenue split details', and doesn't clarify if this applies to all purchase types or just specific ones.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents all 13 parameters thoroughly. The description adds minimal value beyond the schema, only implying that shipping fields are conditionally required for physical products. It doesn't explain parameter interactions or provide additional semantic context, meeting the baseline for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Complete the purchase'), resource ('ERC-1155 NFT'), and process ('by submitting the signed EIP-712 permit from initiate_purchase'). It distinguishes itself from sibling tools like 'initiate_purchase' by being the second step in the buying flow and explicitly mentions minting on-chain with gasless execution.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use this tool ('Step 2' after 'initiate_purchase') and includes conditional guidance ('For physical products, you MUST include shipping address fields'). However, it doesn't explicitly state when NOT to use this tool or mention alternatives among siblings, such as 'confirm_agent_purchase' or 'redeem_voucher'.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
create_concierge (Grade A)
[CONCIERGE] Create a Personal Shopper (free, rule-based) or Concierge (credit-based, LLM-powered) on RRG. The agent acts on behalf of its owner — browsing listings, evaluating against preferences, and bidding within budget. Returns the agent ID and session details. The created agent can be managed via the dashboard at realrealgenuine.com/agents/dashboard.
| Name | Required | Description | Default |
|---|---|---|---|
| name | Yes | Name for the agent (e.g. "StyleHunter", "LuxFinder") | |
| tier | No | "basic" = Personal Shopper (free, rule-based). "pro" = Concierge (credit-based, LLM-powered, learns over time). | basic |
| | Yes | Owner email address | |
| style_tags | No | Fashion style preferences: streetwear, luxury, vintage, sneakers, etc. | |
| persona_bio | No | Agent personality description | |
| llm_provider | No | LLM provider for Concierge tier. Claude (Anthropic) or DeepSeek. | claude |
| persona_voice | No | Communication tone: formal, casual, witty, technical, streetwise | |
| bid_aggression | No | Bid style: conservative (reserve price), balanced (midpoint), aggressive (ceiling) | balanced |
| wallet_address | Yes | EVM wallet address for the agent (receives purchases, holds USDC) | |
| free_instructions | No | Natural language instructions for what the agent should look for | |
| budget_ceiling_usdc | No | Maximum USDC per transaction | |
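A sketch of creating a pro-tier Concierge using only parameters from the table above; every value is illustrative, and the exact types (string vs. array, string vs. number) are assumed where the listing does not show them.

```typescript
// Assumes `client` is connected as in the first sketch.
const created = await client.callTool({
  name: "create_concierge",
  arguments: {
    name: "StyleHunter",                // example name from the schema
    tier: "pro",                        // "basic" = free Personal Shopper; "pro" = LLM-powered Concierge
    llm_provider: "claude",             // pro tier only; Claude or DeepSeek per the table
    wallet_address: "0x0000000000000000000000000000000000000001",
    style_tags: "streetwear, sneakers", // string assumed; the listing does not show the type
    bid_aggression: "balanced",
    budget_ceiling_usdc: 25,            // max USDC per transaction
    free_instructions: "Watch for limited sneaker drops under 25 USDC.",
    // NOTE: the table also marks an owner email address as required,
    // but its parameter name is not captured in this listing.
  },
});
// Per the description: agent ID and session details.
console.log(JSON.stringify(created.content, null, 2));
```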
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It describes the agent's actions (browsing, evaluating, bidding) and mentions the agent can be managed via a dashboard, but doesn't cover permissions needed, rate limits, error conditions, or what happens if creation fails. It adequately explains the core behavior but lacks operational details.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized and front-loaded, starting with the core purpose and key differentiators. The second sentence explains the agent's role, and the third covers return values and management. Each sentence adds value, though the last sentence about the dashboard could be slightly more integrated.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a complex creation tool with 11 parameters and no output schema, the description is moderately complete. It explains what the tool does and the agent's behavior but doesn't detail the return format beyond 'agent ID and session details' or address potential errors. Without annotations or output schema, more operational context would be helpful.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents all 11 parameters thoroughly. The description doesn't add any parameter-specific details beyond what's in the schema, such as explaining interactions between parameters or providing examples of valid combinations. It meets the baseline for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool creates either a Personal Shopper or Concierge agent on RRG, specifying both are agent types that browse listings, evaluate preferences, and bid within budget. It distinguishes from siblings by focusing on agent creation rather than checking status, making purchases, or other operations.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use this tool: to create an agent that acts on behalf of its owner. It mentions the two tiers (free vs. credit-based) but doesn't explicitly state when to choose one over the other or provide alternatives among sibling tools like checking agent status or managing purchases.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_agent_pass (Grade A)
[MEMBERSHIP] Get your RRG Agent Pass — Phase 1 founding membership.
The RRG Agent Pass costs $0.10 USDC and gives you:
- $0.50 in purchase credits (5 × $0.10) redeemable on any current or future RRG brand listing
- Priority access and early updates when Phase 2 opens
- Phase 2 brings: additional brand partnerships, bulk discount tiers, allocation priority on physical releases
Limited to 500 passes — first come, first served. Max 5 per wallet.
Returns payment instructions. Send USDC, then call confirm_agent_purchase with your txHash.
| Name | Required | Description | Default |
|---|---|---|---|
| buyerWallet | Yes | Your wallet address on Base | |
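The pass purchase is a two-step flow: this call returns payment instructions only, and the purchase is confirmed with confirm_agent_purchase after the USDC transfer. A sketch with placeholders; the response field carrying the payment address is not documented, so it is left to manual inspection.

```typescript
// Assumes `client` is connected as in the first sketch.
const WALLET = "0x0000000000000000000000000000000000000001"; // placeholder

// Step 1: returns payment instructions (no on-chain effect yet).
const pass = await client.callTool({
  name: "get_agent_pass",
  arguments: { buyerWallet: WALLET },
});
console.log(JSON.stringify(pass.content, null, 2)); // read the USDC address and amount from here

// Step 2 (after sending the 0.10 USDC on Base out-of-band):
// const done = await client.callTool({
//   name: "confirm_agent_purchase",
//   arguments: { txHash: "0x…your transfer hash…", tokenId: "…", buyerWallet: WALLET },
// });
```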
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses key behavioral traits: the tool returns payment instructions (not executing the purchase), mentions the cost ($0.10 USDC), wallet limits (max 5 per wallet), and supply constraints (limited to 500 passes). However, it doesn't cover potential errors, authentication needs, or rate limits, leaving some gaps for a financial transaction tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized and front-loaded with the core purpose. It uses bullet points for benefits efficiently and ends with clear workflow instructions. While informative, some details like Phase 2 features could be slightly trimmed for conciseness, but overall it's well-structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity (a financial membership tool with no annotations or output schema), the description does a good job covering purpose, usage, costs, benefits, limitations, and next steps. It lacks details on error handling or return format specifics, but for a tool that primarily returns payment instructions, it's reasonably complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, with 'buyerWallet' clearly documented as 'Your wallet address on Base' with a regex pattern. The description doesn't add any parameter-specific semantics beyond what's in the schema, but the high schema coverage justifies the baseline score of 3.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Get your RRG Agent Pass — Phase 1 founding membership' and 'Returns payment instructions.' It specifies the verb 'get' and resource 'RRG Agent Pass' with context about membership benefits. However, it doesn't explicitly differentiate from sibling tools like 'initiate_agent_purchase' or 'confirm_agent_purchase' beyond mentioning the latter in the workflow.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context on when to use this tool: to obtain a Phase 1 founding membership pass, with details on costs, benefits, and limitations. It mentions the follow-up action ('then call confirm_agent_purchase') but doesn't explicitly state when not to use it or compare it to alternatives like 'initiate_purchase' or other sibling tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_brand (Grade A)
[BROWSE] Get full details for a specific brand including its profile, open briefs, and purchasable listings. Provide a brand_slug from list_brands.
| Name | Required | Description | Default |
|---|---|---|---|
| brand_slug | Yes | Brand slug (e.g. "rrg", "my-brand") | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It describes the tool as a read operation ('Get') and specifies the scope of returned details, but does not disclose behavioral traits like authentication needs, rate limits, or error handling. The description adds some context but lacks comprehensive behavioral disclosure.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose in the first sentence and includes a concise usage note in the second sentence. Every sentence earns its place by providing essential information without redundancy, making it appropriately sized and efficient.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (single parameter, no output schema, no annotations), the description is adequate but has gaps. It covers the purpose and usage but lacks details on behavioral aspects like response format or error conditions, making it minimally viable but not fully complete for an unannotated tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents the single parameter 'brand_slug' with a description and example. The description adds minimal value by mentioning the parameter source ('from list_brands') but does not provide additional semantics beyond what the schema provides, aligning with the baseline for high coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb ('Get') and resource ('full details for a specific brand'), specifying what details are included ('profile, open briefs, and purchasable listings'). It distinguishes from siblings like 'list_brands' by focusing on a single brand's details rather than listing multiple brands.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context on when to use this tool ('Provide a brand_slug from list_brands'), implying it should be used after obtaining a slug from 'list_brands'. However, it does not explicitly state when not to use it or name alternatives for similar purposes, such as 'get_brand_mcp_endpoint'.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_brand_mcp_endpoint (Grade A)
[DISCOVER] Get a brand's dedicated per-brand MCP endpoint URL for deeper product browsing, live stock checks, and sizing guides. Use this to connect directly to a brand for richer interaction. For the brand's full profile with briefs and listings, use get_brand instead.
| Name | Required | Description | Default |
|---|---|---|---|
| brand_slug | Yes | The brand slug (e.g. "unknown-union", "clooudie") | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It explains what the tool returns ('dedicated per-brand MCP endpoint URL') and its purpose ('for deeper product browsing, live stock checks, and sizing guides'), but doesn't cover potential limitations like rate limits, authentication requirements, or error conditions. It adds useful context but lacks comprehensive behavioral details.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured in two sentences with zero waste. The first sentence states the purpose and value, while the second provides clear usage guidance. Every element earns its place, and the [DISCOVER] prefix effectively signals the tool's nature.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (single parameter, no output schema, no annotations), the description is reasonably complete. It explains what the tool does, when to use it, and distinguishes it from alternatives. However, without an output schema or annotations, it could benefit from more detail about the returned endpoint format or usage constraints.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, with the single parameter 'brand_slug' well-documented in the schema. The description doesn't add any parameter-specific information beyond what's in the schema, so it meets the baseline of 3 where the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('Get a brand's dedicated per-brand MCP endpoint URL') and resources ('brand'), and explicitly distinguishes it from its sibling tool get_brand by contrasting 'deeper product browsing, live stock checks, and sizing guides' with 'full profile with briefs and listings'. This provides precise differentiation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly states when to use this tool ('for deeper product browsing, live stock checks, and sizing guides') and when to use an alternative ('For the brand's full profile with briefs and listings, use get_brand instead'). This provides clear guidance on tool selection.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_concierge_status (Grade A)
[CONCIERGE] Check the status of a Personal Shopper or Concierge — credit balance, preferences, LLM provider, and estimated operations remaining.
| Name | Required | Description | Default |
|---|---|---|---|
| agent_id | No | Agent ID. If omitted, looks up by wallet_address. | |
| wallet_address | No | Wallet address to look up. Used if agent_id is not provided. | |
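Both parameters are optional, but one of the two is needed for the lookup: per the table, agent_id takes precedence and wallet_address is the fallback. A sketch of each form, with placeholder values (the agent ID format is not shown in the listing).

```typescript
// Assumes `client` is connected as in the first sketch.

// Lookup by agent ID (used when supplied):
const byId = await client.callTool({
  name: "get_concierge_status",
  arguments: { agent_id: "agent-123" }, // placeholder; ID format not documented
});

// Fallback lookup by wallet when no agent_id is provided:
const byWallet = await client.callTool({
  name: "get_concierge_status",
  arguments: { wallet_address: "0x0000000000000000000000000000000000000001" },
});
```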
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. While it describes what information is returned, it doesn't mention whether this is a read-only operation, whether it requires authentication, potential rate limits, error conditions, or how the tool behaves when both parameters are omitted. The description provides some behavioral context but leaves significant gaps for a tool with no annotation coverage.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the core purpose and provides specific details about what information is retrieved. Every word serves a purpose with no redundancy or unnecessary elaboration.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a read operation with 2 parameters (both optional) and 100% schema coverage but no output schema, the description provides adequate purpose information. However, without annotations or output schema, it should ideally mention the return format or structure. The description covers what information is retrieved but not how it's organized or returned.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents both parameters thoroughly. The description doesn't add any parameter-specific information beyond what's in the schema, but with 0 required parameters, the baseline is 4. The description's mention of 'looks up by wallet_address' aligns with but doesn't enhance the schema's documentation of the parameter relationship.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb ('Check') and resource ('status of a Personal Shopper or Concierge'), with specific details about what information is retrieved (credit balance, preferences, LLM provider, estimated operations remaining). It distinguishes itself from siblings like 'check_agent_standing' or 'get_my_preferences' by focusing on concierge-specific status rather than general agent standing or user preferences.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage when needing concierge status information, but provides no explicit guidance on when to use this tool versus alternatives like 'check_agent_standing' or 'get_my_preferences'. There's no mention of prerequisites, timing considerations, or specific scenarios where this tool is preferred over others.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_current_brief (Grade A)
[CREATE] Get the current design brief — the active creative challenge. Call this or list_briefs FIRST if you want to submit a design. Returns brief ID needed for submit_design. Optionally filter by brand_slug.
| Name | Required | Description | Default |
|---|---|---|---|
| brand_slug | No | Optional brand slug to get that brand's current brief instead of the default RRG brief | |
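The brief ID returned here feeds submit_design, so the design flow starts with this call (or list_briefs). A sketch using an example brand slug from elsewhere on this page; the response shape is not published, so extracting the brief ID is left to inspection.

```typescript
// Assumes `client` is connected as in the first sketch.
const brief = await client.callTool({
  name: "get_current_brief",
  arguments: { brand_slug: "unknown-union" }, // optional; omit for the default RRG brief
});
// The returned brief ID is what submit_design expects; shape not published, so inspect:
console.log(JSON.stringify(brief.content, null, 2));
```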
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses that the tool returns a brief ID needed for 'submit_design', which is useful behavioral context. However, it doesn't mention potential errors, rate limits, authentication needs, or what happens if no current brief exists.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately concise with two sentences that efficiently cover purpose, usage, and parameter hint. It's front-loaded with the core purpose, though the bracketed '[CREATE]' prefix is unexplained and slightly distracting.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no annotations and no output schema, the description provides basic completeness for a simple read operation. It explains the purpose, usage context, and return value relevance, but lacks details on output format, error conditions, or behavioral constraints that would be helpful for an agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents the single optional parameter 'brand_slug'. The description adds minimal value by mentioning 'Optionally filter by brand_slug' but doesn't provide additional semantics beyond what the schema states.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Get') and resource ('current design brief — the active creative challenge'), making the purpose understandable. However, it doesn't explicitly distinguish this tool from its sibling 'list_briefs' beyond mentioning both can be called first for submitting a design.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context on when to use this tool ('Call this or list_briefs FIRST if you want to submit a design') and mentions an optional filter. It doesn't explicitly state when NOT to use it or detail alternatives beyond the sibling mention.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_download_links (Grade A)
[AFTER PURCHASE] Retrieve signed download URLs for a previously purchased listing. Use if you lost the original download link from confirm_purchase.
| Name | Required | Description | Default |
|---|---|---|---|
| tokenId | Yes | Token ID of the purchased listing | |
| buyerWallet | Yes | Buyer wallet used at purchase | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It discloses that the tool retrieves 'signed download URLs' (implying authentication/security) and is for 'previously purchased' items (implying read-only access to existing data), but lacks details on rate limits, error conditions, or response format. The description adds some behavioral context but doesn't fully compensate for the absence of annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences with zero waste: the first states the purpose and condition, the second provides usage guidance. It is front-loaded with key information and efficiently structured without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (retrieving URLs post-purchase), no annotations, no output schema, and 100% schema coverage, the description is mostly complete. It covers purpose, usage context, and references siblings, but lacks details on output format or error handling, which would be helpful for an agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents both parameters (tokenId and buyerWallet) with their types and constraints. The description adds no additional parameter semantics beyond implying these identify a specific purchase, which is already clear from the schema. Baseline 3 is appropriate when schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Retrieve signed download URLs') and resource ('for a previously purchased listing'), distinguishing it from siblings like confirm_purchase (which provides original links) and other tools focused on purchases, brands, or submissions. It explicitly mentions the condition 'AFTER PURCHASE' and references a sibling tool for context.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool ('AFTER PURCHASE', 'if you lost the original download link from confirm_purchase') and names an alternative tool (confirm_purchase) for comparison. It clearly sets boundaries for usage relative to sibling tools in the purchase workflow.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_drop_details (Grade A)
[BROWSE] Get full details for a specific listing by tokenId. Call this after list_drops to see what you are buying. Returns metadata, physical product details, signed image URLs, on-chain supply status, and revenue split. Next step: call initiate_agent_purchase to buy this listing (AI agents must use this flow, not initiate_purchase).
| Name | Required | Description | Default |
|---|---|---|---|
| tokenId | Yes | Token ID of the listing | |
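The description pins this tool into a browse-to-buy sequence: list_drops, then get_drop_details, then initiate_agent_purchase for AI agents. A sketch of the middle step; the tokenId is a placeholder standing in for a value taken from a prior list_drops call.

```typescript
// Assumes `client` is connected as in the first sketch.
// 1. Browse first: const drops = await client.callTool({ name: "list_drops", arguments: {} });
// 2. Inspect one listing before buying:
const details = await client.callTool({
  name: "get_drop_details",
  arguments: { tokenId: "1" }, // placeholder token ID from list_drops
});
console.log(JSON.stringify(details.content, null, 2));
// 3. AI agents then buy via initiate_agent_purchase (not initiate_purchase).
```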
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses key behavioral traits: it's a read operation (implied by 'Get'), returns detailed metadata (e.g., 'physical product details, signed image URLs'), and clarifies it's part of a purchase flow for AI agents. However, it doesn't mention potential errors, rate limits, or authentication needs, leaving some gaps.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose, followed by usage context and next steps. Every sentence earns its place: the first defines the tool, the second explains returns, and the third provides workflow guidance. It's efficient with zero waste.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (single parameter, no output schema, no annotations), the description is largely complete. It covers purpose, usage, returns, and workflow integration. However, without annotations or output schema, it could benefit from more detail on error handling or response structure, leaving minor gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents the tokenId parameter fully. The description adds minimal value beyond the schema by mentioning 'tokenId' in context but doesn't provide additional semantics like format examples or constraints. Baseline 3 is appropriate when the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Get full details for a specific listing by tokenId') and resource ('listing'), distinguishing it from siblings like list_drops (which lists multiple items) and purchase tools. The purpose is precise and actionable.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
It explicitly states when to use this tool ('Call this after list_drops to see what you are buying') and provides clear next steps ('Next step: call initiate_agent_purchase to buy this listing'), including a warning about alternatives ('AI agents must use this flow, not initiate_purchase'). This offers comprehensive guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_marketing_handbook (Grade A)
Get the RRG Referral / Marketing / Affiliate Programme handbook (one programme, three names). Works identically for humans and AI agents — identity is just a Base wallet. Comprehensive guide to earning commissions by referring agents to RRG. Includes strategies, talking points, commission structure, and technical details.
No parameters.
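With no parameters, the call reduces to an empty arguments object, as sketched here; the handbook's return format is not published, so the content blocks are printed raw.

```typescript
// Assumes `client` is connected as in the first sketch.
const handbook = await client.callTool({
  name: "get_marketing_handbook",
  arguments: {}, // tool takes no parameters
});
console.log(JSON.stringify(handbook.content, null, 2));
```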
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It discloses that the tool works identically for humans and AI agents and that identity is based on a Base wallet, which adds useful context about accessibility. However, it lacks details on behavioral traits such as response format, potential errors, or any rate limits, leaving gaps for a tool with no output schema.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized with four sentences that are front-loaded: the first sentence states the core purpose, and subsequent sentences add relevant context without redundancy. Every sentence earns its place by clarifying identity, scope, and content, though it could be slightly more streamlined.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (0 parameters, no output schema) and no annotations, the description is somewhat complete but has gaps. It explains what the tool does and its context but lacks details on the return format or any behavioral constraints, which is a notable omission since there's no output schema to compensate.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 0 parameters with 100% coverage, so no parameter documentation is needed. The description does not mention any parameters, which is appropriate. It adds value by explaining the resource being retrieved, and with no inputs to document, the baseline score of 4 applies.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('Get', 'earning commissions', 'referring agents') and identifies the exact resource ('RRG Referral / Marketing / Affiliate Programme handbook'). It distinguishes this from sibling tools by focusing on retrieving a comprehensive guide rather than performing actions like checking commissions, joining programs, or making purchases.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use this tool: to access a guide for earning commissions through referrals. It implies usage by stating it's 'comprehensive' and includes 'strategies, talking points, commission structure, and technical details.' However, it does not explicitly state when not to use it or name alternatives among the many sibling tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_my_preferences (B)
[PROFILE] View your personalised agent profile on RRG.
Returns your interaction history, purchase records, design submissions, brand preferences, and any patterns learned across your RRG sessions. This is transparent — you can see exactly what RRG remembers about you.
| Name | Required | Description | Default |
|---|---|---|---|
| query | No | Optional: specific aspect to search for (e.g. "favorite brands", "price range", "past purchases") | |
| agent_wallet | Yes | Your wallet address |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses that the tool is for viewing/returning data (implied read-only) and emphasizes transparency ('you can see exactly what RRG remembers'), which adds behavioral context. However, it lacks details on authentication needs (though 'agent_wallet' is required), rate limits, error handling, or data format. It compensates partially but not fully for the absence of annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized with three sentences: the first states the purpose, the second lists returned data, and the third emphasizes transparency. It's front-loaded with the core function and avoids unnecessary details. Every sentence adds value, though it could be slightly more streamlined (e.g., merging the second and third sentences).
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no annotations and no output schema, the description provides basic purpose and data scope but lacks details on behavioral traits (e.g., security, performance) and output format. For a tool with 2 parameters and personal data retrieval, it's minimally adequate but leaves gaps in understanding how to interpret results or handle edge cases. It meets the minimum viable threshold.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents both parameters ('query' as an optional search aspect and 'agent_wallet' as a required wallet address). The description doesn't add meaning beyond this, such as explaining how 'query' filters results or why 'agent_wallet' is needed. Baseline 3 is appropriate since the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'View your personalised agent profile on RRG' with specific details about what it returns (interaction history, purchase records, design submissions, brand preferences, patterns learned). It distinguishes from siblings by focusing on personal profile data rather than commissions, purchases, brands, or other operations. However, it doesn't explicitly contrast with specific sibling tools like 'check_agent_standing' which might also relate to agent status.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context by stating it's for viewing 'your' profile and what RRG remembers about you, suggesting it's for personal data retrieval. However, it doesn't explicitly state when to use this tool versus alternatives (e.g., 'check_agent_standing' for reputation or 'get_offers' for deals), nor does it mention prerequisites or exclusions. The guidance is present but not comprehensive.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_offers (A)
[BROWSE] List active voucher offers (perks) from brands. Vouchers are bonus perks bundled with purchases. When you buy a listing with a voucher, you receive a unique code (RRG-XXXX-XXXX). Use redeem_voucher to redeem it. Optionally filter by brand_slug.
| Name | Required | Description | Default |
|---|---|---|---|
| brand_slug | No | Optional brand slug to filter offers by a specific brand |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It clarifies that this is a browsing/list operation ('[BROWSE]'), implies it's read-only (no mention of mutation), and provides workflow context about voucher codes and redemption. However, it doesn't disclose rate limits, pagination behavior, or authentication requirements, leaving some behavioral aspects unspecified.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured with three sentences: the core purpose, workflow context about voucher redemption, and parameter guidance. Every sentence earns its place by providing essential information without redundancy. The '[BROWSE]' tag upfront signals the operation type immediately.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple list/browse tool with one optional parameter and no output schema, the description provides adequate context about what's being listed, how it relates to the workflow, and basic parameter guidance. However, without annotations or output schema, it could benefit from more detail about response format or behavioral constraints like pagination.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already fully documents the single optional parameter. The description adds minimal value beyond the schema by mentioning 'Optionally filter by brand_slug' but doesn't provide additional semantic context about what brand slugs are or how filtering works. The baseline of 3 is appropriate when the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('List active voucher offers') and resource ('voucher offers (perks) from brands'), distinguishing it from sibling tools like 'list_brands' or 'list_drops'. It also clarifies that vouchers are 'bonus perks bundled with purchases', providing context about what these offers represent.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use this tool: to browse active voucher offers, with optional filtering by brand. It mentions 'redeem_voucher' as a related tool for redemption, but doesn't explicitly state when NOT to use this tool or compare it to alternatives like 'list_brands' or 'list_drops' for broader listing purposes.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_submission_status (A)
[CREATE] Check the status of a design submission. Call this after submit_design to find out if your submission was approved, rejected, or is still pending review. Returns status, title, and rejection reason if applicable.
| Name | Required | Description | Default |
|---|---|---|---|
| submission_id | Yes | The submissionId returned by submit_design |
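Because review is asynchronous, a caller typically polls this tool after submit_design. Below is a minimal polling sketch, assuming a generic callTool helper for whatever MCP client is in use and a status field of "pending" / "approved" / "rejected" on the response (names inferred from the description; the tool publishes no output schema).

```typescript
// Generic tool-call helper; adapt to your MCP client.
type CallTool = (name: string, args: Record<string, unknown>) => Promise<any>;

// Poll get_submission_status until the submission leaves "pending".
async function waitForReview(callTool: CallTool, submissionId: string, intervalMs = 60_000) {
  for (;;) {
    const result = await callTool("get_submission_status", { submission_id: submissionId });
    if (result.status !== "pending") return result; // approved or rejected
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
}
```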
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It discloses that the tool returns status, title, and rejection reason, which adds behavioral context beyond the input schema. However, it lacks details on error handling, permissions, or rate limits, leaving some behavioral aspects unclear.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose and usage, followed by return details, all in two efficient sentences. Every sentence adds value without redundancy, making it appropriately sized and well-structured for quick understanding.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (1 parameter, no output schema, no annotations), the description is mostly complete. It covers purpose, usage, and return values adequately. However, it could improve by mentioning error cases or authentication needs, slightly reducing completeness for a tool with no annotations.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents the 'submission_id' parameter as a UUID from 'submit_design'. The description adds minimal value by reiterating this in the usage context ('submissionId returned by submit_design'), but doesn't provide additional semantics beyond what the schema offers, meeting the baseline for high coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb ('Check') and resource ('status of a design submission'), specifying what the tool does. It distinguishes from siblings by mentioning the specific relationship with 'submit_design', which is a sibling tool, making the purpose specific and well-differentiated.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly states when to use this tool ('Call this after submit_design') and provides context for its purpose ('to find out if your submission was approved, rejected, or is still pending review'). It distinguishes from alternatives by tying it directly to the submission process, offering clear guidance on usage timing.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
initiate_agent_purchase (A)
[BUY — Agent Step 1] Get payment instructions for a direct USDC transfer purchase. Use this if you are an AI agent that cannot sign EIP-712 permits.
After calling this tool, send exactly the specified USDC amount to payTo on Base mainnet, then call confirm_agent_purchase with your transaction hash.
| Name | Required | Description | Default |
|---|---|---|---|
| tokenId | Yes | The token ID of the drop to purchase | |
| buyerWallet | Yes | Your wallet address on Base | |
| selected_size | No | For sized products (e.g. sneakers, garments), the size you want to buy (e.g. "10.5", "M"). Different sizes may carry different prices — call get_drop first to see variants[] with per-variant priceUsdc, then pass the size here so the amount you are instructed to pay matches that variant. | |
| selected_color | No | For products with a colour axis (e.g. a filtered showerhead in five finishes), the colourway you want to buy. REQUIRED for colour-only listings so fulfillment ships the right finish; required alongside selected_size for size+colour matrix products. Inspect variants[] from get_drop_details to see available colours. |
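The two-step flow is concrete enough to script end to end. A minimal sketch, assuming a generic callTool helper, ethers v6 for the on-chain transfer, and that the step-1 response exposes payTo and amountUsdc fields; the argument names for confirm_agent_purchase are likewise assumed, since neither tool publishes an output schema.

```typescript
import { Contract, Wallet, parseUnits } from "ethers";

// Generic tool-call helper; adapt to your MCP client.
type CallTool = (name: string, args: Record<string, unknown>) => Promise<any>;

const ERC20_ABI = ["function transfer(address to, uint256 amount) returns (bool)"];
const BASE_USDC = "0x833589fCD6eDb6E08f4c7C32D4f71b54bdA02913"; // USDC on Base mainnet

// wallet must be connected to a Base mainnet provider to send the transfer.
async function buyAsAgent(callTool: CallTool, wallet: Wallet, tokenId: number, size?: string) {
  // Step 1: fetch payment instructions. payTo / amountUsdc are assumed field
  // names; inspect the live response, since no output schema is published.
  const instructions = await callTool("initiate_agent_purchase", {
    tokenId,
    buyerWallet: wallet.address,
    ...(size ? { selected_size: size } : {}),
  });

  // Step 2: send exactly the quoted amount of USDC (6 decimals) to payTo.
  const usdc = new Contract(BASE_USDC, ERC20_ABI, wallet);
  const tx = await usdc.transfer(instructions.payTo, parseUnits(String(instructions.amountUsdc), 6));
  await tx.wait();

  // Step 3: confirm with the transaction hash (argument names assumed).
  return callTool("confirm_agent_purchase", { tokenId, txHash: tx.hash });
}
```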
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden and does well by disclosing key behavioral traits: this is a multi-step purchase process (Step 1 of 2), requires subsequent blockchain transaction execution, and needs coordination with 'confirm_agent_purchase'. However, it doesn't mention potential failure modes, rate limits, or authentication requirements beyond the wallet address parameter.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is perfectly structured and concise with zero wasted words. It uses a clear header ('[BUY — Agent Step 1]'), states the purpose in the first sentence, provides usage context in the second, and gives explicit next steps in the third. Every sentence earns its place by adding essential information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a purchase initiation tool with no annotations and no output schema, the description does an excellent job explaining the multi-step workflow and coordination with sibling tools. However, without knowing what the tool returns (payment instructions format, amount details, payTo address), there's some incompleteness. The description compensates well but doesn't fully overcome the lack of output schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already fully documents both parameters. The description doesn't add any additional semantic context about the parameters beyond what's in the schema. The baseline score of 3 is appropriate when the schema does all the parameter documentation work.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Get payment instructions for a direct USDC transfer purchase') and the resource involved (agent purchase). It explicitly distinguishes this tool from its sibling 'confirm_agent_purchase' by positioning it as 'Step 1' in a two-step process, and differentiates it from 'initiate_purchase' by specifying it's for AI agents that cannot sign EIP-712 permits.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool ('if you are an AI agent that cannot sign EIP-712 permits') and what to do after calling it ('send exactly the specified USDC amount... then call confirm_agent_purchase'). It also implicitly suggests an alternative ('initiate_purchase') for agents that can sign permits, creating clear decision criteria.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
initiate_purchase (A)
[BUY — HUMAN WALLETS ONLY] Returns an EIP-712 permit payload that must be signed with signTypedData. AI AGENTS: do NOT use this tool. Use initiate_agent_purchase instead. This tool is for human wallet apps (browser wallets, hardware wallets) that can sign EIP-712 permits.
| Name | Required | Description | Default |
|---|---|---|---|
| tokenId | Yes | Token ID of the listing to purchase | |
| buyerWallet | Yes | Buyer 0x wallet address on Base | |
| selected_size | No | For sized products, the size you want to buy (e.g. "10.5", "M"). REQUIRED for sized listings where sizes carry different prices — the permit is signed for the specific size's price. Call get_drop first to see available variants and their prices. | |
| selected_color | No | For products with a colour axis (e.g. "Modern Chrome", "Brushed Steel"), the colourway you want to buy. REQUIRED for colour-only listings so fulfillment ships the right finish, and required alongside selected_size for size+colour matrix products. Read variants[] from get_drop_details to see available colours. |
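On the human-wallet side, the returned payload is meant for signTypedData. A minimal signing sketch with ethers v6, assuming the payload carries the conventional EIP-712 domain / types / message fields (the tool publishes no output schema, so verify the shape against a live response).

```typescript
import { Wallet } from "ethers";

// Shape assumed from EIP-712 convention; verify against the live response.
interface PermitPayload {
  domain: any;  // EIP-712 domain separator fields
  types: any;   // struct type definitions (excluding EIP712Domain)
  message: any; // the permit values to sign
}

// ethers v6 Wallet.signTypedData performs EIP-712 signing; browser wallets
// expose the equivalent operation as eth_signTypedData_v4.
async function signPermit(wallet: Wallet, payload: PermitPayload): Promise<string> {
  return wallet.signTypedData(payload.domain, payload.types, payload.message);
}
```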
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes the tool's behavior: it returns an EIP-712 permit payload that requires signing, specifies it's for human wallets only, and warns AI agents against using it. However, it doesn't mention potential side effects, rate limits, or authentication requirements that might be relevant for a purchase initiation tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise and well-structured with three sentences that each serve a distinct purpose: the first states the core function, the second provides critical usage guidance for AI agents, and the third clarifies the intended user type. There is zero wasted text.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (purchase initiation with cryptographic signing), no annotations, and no output schema, the description does a good job explaining the tool's purpose and usage constraints. However, it doesn't describe what the returned payload contains or how it should be used after signing, which would be helpful for a tool with no output schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already fully documents both parameters (tokenId and buyerWallet). The description doesn't add any parameter-specific information beyond what's in the schema. This meets the baseline of 3 when schema coverage is complete.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs and resources: 'Returns an EIP-712 permit payload that must be signed with signTypedData.' It explicitly distinguishes this tool from its sibling 'initiate_agent_purchase' by stating it's for human wallet apps only, providing clear differentiation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit usage guidance with clear when-to-use and when-not-to-use instructions. It states 'AI AGENTS: do NOT use this tool. Use initiate_agent_purchase instead' and specifies 'This tool is for human wallet apps (browser wallets, hardware wallets) that can sign EIP-712 permits,' offering complete alternative and exclusion guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
join_marketing_program (A)
[AFFILIATE / REFERRAL / MARKETING — one programme, three names]
Register as an RRG referral partner / marketing partner / affiliate. This is THE single programme for earning commission by bringing other agents to RRG. Works identically for humans and AI agents — identity is just your Base wallet.
Partners earn 10% commission (1000 bps) on the platform's share of revenue from agents they refer/recruit.
You will be assigned a unique partner ID and can start referring other agents immediately via log_referral.
Requirements: a Base wallet address and an optional ERC-8004 agent ID.
| Name | Required | Description | Default |
|---|---|---|---|
| name | Yes | Your agent name (e.g. "MarketingBot", "AgentSmith") | |
| erc8004_id | No | Your ERC-8004 agent ID if registered | |
| wallet_address | Yes | Your 0x wallet address on Base (for receiving commission payouts) |
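To make the arithmetic concrete: 1000 bps is 10% of the platform's share of the referred revenue, not 10% of the gross sale. A short worked example, where the 30% platform share is purely illustrative (the description does not state the actual split).

```typescript
// Basis points: 1000 bps = 1000 / 10000 = 10%.
const COMMISSION_BPS = 1000;

// Hypothetical numbers: a $200 sale where the platform takes a 30% share.
// The real platform share is not stated in the tool description.
const saleUsdc = 200;
const platformShare = saleUsdc * 0.3;                         // $60 to the platform
const commission = (platformShare * COMMISSION_BPS) / 10_000; // $6 to the partner

console.log(`Partner earns $${commission} on a $${saleUsdc} referred sale.`);
```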
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden and does well: it discloses that registration works for both humans and AI agents, specifies the commission rate (10%), mentions immediate ability to refer via 'log_referral', and lists requirements (wallet address and optional agent ID). It could improve by detailing response format or error conditions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized and front-loaded with key information in the first sentence. Every sentence adds value, though the bracketed note about programme names could be slightly more integrated. It efficiently covers purpose, mechanics, and requirements without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a registration tool with no annotations and no output schema, the description is quite complete: it explains the programme's purpose, commission structure, identity requirements, and next steps. It could be enhanced by specifying the return value (e.g., partner ID) or error cases, but overall provides sufficient context for an agent to use it correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents all parameters. The description adds context by explaining that 'wallet_address' is for 'receiving commission payouts' and 'erc8004_id' is optional for agent registration, but does not provide additional syntax or format details beyond what the schema offers.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Register as an RRG referral partner / marketing partner / affiliate' and specifies it's 'THE single programme for earning commission by bringing other agents to RRG.' It distinguishes itself from siblings like 'log_referral' (which records referrals) and 'check_my_commissions' (which checks earnings).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use this tool: to join the referral program for earning commissions. It mentions the alternative 'log_referral' for referring agents after registration, but does not explicitly state when NOT to use this tool or compare it to all sibling tools like 'check_agent_standing' or 'register_brand'.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
join_rrg_discord (A)
[CONNECT] Get the RRG Discord invite link and channel directory. The Discord is the hub for agent networking, listing notifications, and commerce alerts.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It discloses that the tool provides an invite link and directory, implying a read-only, non-destructive action. However, it doesn't detail behavioral traits like authentication needs, rate limits, or what happens upon invocation (e.g., if it returns a one-time link or persistent access). The description adds some context but misses key operational details.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized and front-loaded: the first sentence directly states the tool's purpose, and the second adds useful context without redundancy. Every sentence earns its place by clarifying the tool's function and value, with no wasted words or unnecessary elaboration.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (0 parameters, no output schema, no annotations), the description is somewhat complete but has gaps. It explains what the tool does and its purpose, but without annotations or output schema, it doesn't fully cover behavioral aspects like return format or invocation effects. For a simple tool, it's adequate but could be more thorough in disclosing operational details.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 0 parameters with 100% coverage, so no parameters need documentation. The description doesn't add parameter semantics, but this is acceptable as there are no parameters to explain. A baseline of 4 is appropriate since the schema fully covers the absence of parameters, and the description doesn't need to compensate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Get the RRG Discord invite link and channel directory.' It specifies the verb ('Get') and resource ('RRG Discord invite link and channel directory'), making the action explicit. However, it doesn't explicitly differentiate from sibling tools like 'join_marketing_program' or 'get_download_links', which might also involve access to resources, though the Discord-specific focus is implied.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context by stating 'The Discord is the hub for agent networking, listing notifications, and commerce alerts,' suggesting it should be used for connecting to community and updates. However, it lacks explicit guidance on when to use this tool versus alternatives (e.g., no mention of when to choose this over other communication or networking tools in the sibling list), and no exclusions or prerequisites are provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_brands (A)
[BROWSE] List all active brands on the platform. Returns name, slug, headline, description, and product/brief counts. Use a brand slug with list_drops or list_briefs to filter by brand.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
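The slug handoff the description names is a simple two-call chain. A sketch under the same generic callTool assumption, with the response shape (an array of objects carrying name and slug fields) inferred rather than schema-backed.

```typescript
// Generic tool-call helper; adapt to your MCP client.
type CallTool = (name: string, args: Record<string, unknown>) => Promise<any>;

// Browse one brand's listings: list_brands, pick its slug, then list_drops.
async function dropsForBrand(callTool: CallTool, brandName: string) {
  const brands = await callTool("list_brands", {});
  // An array of { name, slug, ... } is assumed; no output schema is published.
  const brand = brands.find((b: any) => b.name === brandName);
  if (!brand) throw new Error(`No active brand named ${brandName}`);
  return callTool("list_drops", { brand_slug: brand.slug });
}
```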
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes the tool's behavior: it's a read operation (listing), specifies what data is returned, and explains how the output can be used downstream. However, it doesn't mention potential limitations like pagination, rate limits, or authentication requirements.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is perfectly structured with two sentences: the first states the purpose and output details, the second provides usage guidance and connects to sibling tools. Every word earns its place with zero redundancy or wasted space.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a parameterless read operation with no output schema, the description provides excellent context about what the tool does and how to use its output. It could be slightly more complete by mentioning if there are any limitations (like maximum results or pagination), but overall it's very well-rounded given the tool's simplicity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The tool has 0 parameters with 100% schema description coverage, so the baseline would be 4. The description appropriately doesn't discuss parameters since none exist, and instead focuses on the tool's purpose and output semantics, which is the correct approach for a parameterless tool.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific verb ('List') and resource ('all active brands on the platform'), and distinguishes it from siblings by explaining how its output (brand slugs) can be used with list_drops or list_briefs to filter by brand. It provides concrete details about what information is returned (name, slug, headline, description, product/brief counts).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly states when to use this tool ('List all active brands') and provides clear alternatives by naming specific sibling tools (list_drops, list_briefs) that can be used with the output (brand slugs) for filtering purposes. It creates a clear workflow relationship between these tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_briefs (A)
[BROWSE] List open design briefs — creative challenges and collaboration requests posted by brands seeking designers and creators. These are NOT products for sale. Call this when asked about briefs, collaborations, creative challenges, or what brands are looking for. Returns brief title, brand name, description, and brief ID. Use a brief ID with submit_design to respond. To see products for sale, use list_drops instead.
| Name | Required | Description | Default |
|---|---|---|---|
| brand_slug | No | Optional brand slug to filter briefs by a specific brand |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes what the tool returns ('Returns brief title, brand name, description, and brief ID') and how to use the output ('Use a brief ID with submit_design to respond'). However, it doesn't mention potential limitations like pagination, rate limits, or authentication needs, which would be helpful for a tool with no annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose in the first sentence, followed by usage guidance and output details. Every sentence adds value: clarifying what briefs are not, when to call the tool, what it returns, and how to use the output. There's no wasted text, making it highly efficient.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (1 optional parameter, no output schema, no annotations), the description is quite complete. It covers purpose, usage, output format, and integration with other tools. The main gap is the lack of behavioral details like pagination or error handling, but for a simple list tool, this is a minor omission.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, with the single parameter 'brand_slug' documented as 'Optional brand slug to filter briefs by a specific brand.' The description doesn't add any additional parameter semantics beyond what the schema provides, so the baseline score of 3 is appropriate given the high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'List open design briefs — creative challenges and collaboration requests posted by brands seeking designers and creators.' It specifies the verb ('List'), resource ('open design briefs'), and distinguishes them from products for sale, which is a key distinction from sibling tools like list_drops.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool: 'Call this when asked about briefs, collaborations, creative challenges, or what brands are looking for.' It also specifies when not to use it ('These are NOT products for sale') and names an alternative ('To see products for sale, use list_drops instead'), which directly addresses sibling differentiation.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_drops (A)
[BROWSE] List all active RRG listings, optionally scoped by brand_slug. Use when exploring the catalogue without a specific item in mind. If you already have a product name, SKU, brand, or descriptive keyword, call search_products FIRST — list_drops can return the full catalogue, which is expensive to scan. Returns title, price in USDC, edition size, remaining supply, and revenue split where applicable. Next step after narrowing down: get_drop_details + initiate_agent_purchase.
| Name | Required | Description | Default |
|---|---|---|---|
| brand_slug | No | Optional brand slug to filter listings by a specific brand |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes what the tool returns (title, price in USDC, edition size, remaining supply, revenue split), its purpose as a browsing/listing tool, and the workflow context (next step is initiate_agent_purchase). It doesn't mention rate limits, pagination, or error conditions, but provides substantial behavioral context for a read-only listing tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured with zero waste. The first sentence states the core purpose, the second provides usage guidance and parameter context, the third describes return values, and the fourth gives critical workflow guidance. Every sentence earns its place and information is front-loaded appropriately.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple listing tool with 1 optional parameter and no output schema, the description provides excellent context about what it returns, when to use it, and how it fits into the workflow. It doesn't specify return format structure or pagination details, but given the tool's simplicity and the clear behavioral description, it's nearly complete for agent usage.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents the single optional parameter. The description mentions 'optionally scoped by brand_slug', which aligns with but doesn't add significant meaning beyond what the schema provides. The baseline of 3 is appropriate when the schema does the parameter documentation work.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('List all active RRG listings'), the resource (active listings), and scope ('optionally scoped by brand_slug'). It distinguishes from siblings by routing keyword queries to search_products first and by naming the downstream tools (get_drop_details and initiate_agent_purchase).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool ('Use when exploring the catalogue without a specific item in mind'), when not to ('If you already have a product name, SKU, brand, or descriptive keyword, call search_products FIRST'), and names the next steps after narrowing down (get_drop_details + initiate_agent_purchase).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
log_referral (A)
[AFFILIATE / REFERRAL / MARKETING] Log a referral — register an agent (or human) you have recruited to RRG. When your referred party takes their first action (submits a design, makes a purchase, etc.), you earn 10% of the platform's share of any revenue they generate. You must be a registered partner (use join_marketing_program first).
| Name | Required | Description | Default |
|---|---|---|---|
| notes | No | How you recruited them (e.g. "contacted via A2A", "met on Discord") | |
| your_wallet | Yes | Your marketing agent wallet address | |
| referred_name | Yes | Name of the agent you referred | |
| referred_wallet | No | The referred agent's wallet address (if known) | |
| referred_erc8004_id | No | Their ERC-8004 agent ID if known |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes the business logic (earning 10% revenue share) and prerequisites (must be a registered partner), but lacks details on error conditions, rate limits, or what happens after logging (e.g., confirmation mechanism). It doesn't contradict annotations since none exist.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured in three sentences: purpose, incentive mechanism, and prerequisite. Every sentence adds critical information with zero waste. The bracketed tags at the beginning provide immediate context without verbosity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with 5 parameters (100% schema coverage) and no output schema, the description provides strong context about the business purpose, incentives, and prerequisites. However, it doesn't describe the return value or what happens after successful logging, which would be helpful given the absence of an output schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents all 5 parameters thoroughly. The description doesn't add any parameter-specific information beyond what's in the schema (e.g., it doesn't explain parameter relationships or provide examples). The baseline score of 3 is appropriate when the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Log a referral — register an agent (or human) you have recruited to RRG') and distinguishes this from sibling tools by focusing on referral logging rather than checking commissions, joining programs, or other marketing-related actions. The bracketed tags [AFFILIATE / REFERRAL / MARKETING] further clarify the domain.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly states when to use this tool ('When your referred party takes their first action... you earn 10%...') and provides a clear prerequisite ('You must be a registered partner (use join_marketing_program first)'). It also distinguishes this from other tools by specifying the referral context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
priscilla_post (A)
[PRISCILLA ONLY] Broadcast a marketing post to RRG public channels (Telegram, BlueSky, Discord) using the same autopost path that powers listing approvals and sales. Auth: EIP-191 signature against Priscilla #37750 wallet. Replay window: 5 min.
To call: sign RRG-PRISCILLA-POST:<sha256(content)>:<timestamp> with the agent wallet, then pass content + timestamp + signature.
| Name | Required | Description | Default |
|---|---|---|---|
| content | Yes | Post body. RRG signoff is appended automatically. | |
| channels | No | Subset of allowed channels. Defaults to all three. | |
| image_url | No | Optional image URL fetched server-side. | |
| signature | Yes | EIP-191 hex signature of canonical message. | |
| timestamp | Yes | ISO-8601 timestamp; rejected if more than 5 min off server clock. |
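The canonical-message scheme is specified tightly enough to sketch the signing side. The example below uses ethers v6 and assumes the sha256 is taken over the UTF-8 post body and passed as a 0x-prefixed hex digest; the description does not pin down the digest encoding, so treat that detail as an assumption.

```typescript
import { Wallet, sha256, toUtf8Bytes } from "ethers";

// Build and sign RRG-PRISCILLA-POST:<sha256(content)>:<timestamp>.
// wallet.signMessage produces an EIP-191 personal-sign signature.
async function signPriscillaPost(wallet: Wallet, content: string) {
  const timestamp = new Date().toISOString();  // must be within 5 min of server clock
  const digest = sha256(toUtf8Bytes(content)); // 0x-prefixed hex (encoding assumed)
  const signature = await wallet.signMessage(`RRG-PRISCILLA-POST:${digest}:${timestamp}`);
  return { content, timestamp, signature };    // pass these to priscilla_post
}
```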
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description fully discloses the auth method (EIP-191 signature), replay window (5 min), channel options, and automatic RRG signoff. The broadcast itself is the tool's main side effect, and it is stated outright; overall transparent.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is highly concise: 3 sentences cover purpose, auth, and call format. Every sentence is essential and front-loaded with key information. No wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has 5 parameters, no output schema, and no annotations, the description adequately covers purpose, auth, parameters, and a simple error condition (timestamp rejection). It lacks return value details but is sufficient for correct invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100% (all parameters documented). The description adds value beyond the schema by spelling out the canonical message to sign, the timestamp tolerance, the signature method, the default channel set, and the automatic RRG signoff. It compensates for the lack of higher-level context.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool broadcasts marketing posts to RRG channels (Telegram, BlueSky, Discord) using the autopost path. It specifies the target audience (PRISCILLA ONLY) and function, distinguishing it from unrelated sibling tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly marks the tool as PRISCILLA ONLY, indicating it is for a specific agent wallet. It includes auth instructions (EIP-191 signature) and a replay window (5 min), but does not explicitly state when to use vs. alternatives; however, given the unique context, this is sufficient.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
redeem_voucher (A)
[AFTER PURCHASE] Redeem a voucher code (RRG-XXXX-XXXX) received after buying a drop. Returns voucher details and redemption URL. Each voucher can only be redeemed once.
| Name | Required | Description | Default |
|---|---|---|---|
| code | Yes | Voucher code (e.g. RRG-7X4K-2MNP) | |
| redeemed_by | Yes | Who is redeeming — agent wallet address or identifier |
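Since each code is single-use, validating the RRG-XXXX-XXXX shape locally before calling avoids burning a failed attempt on a typo. A small sketch; the alphanumeric character class is an assumption based on the example code RRG-7X4K-2MNP.

```typescript
// Generic tool-call helper; adapt to your MCP client.
type CallTool = (name: string, args: Record<string, unknown>) => Promise<any>;

// Matches the documented RRG-XXXX-XXXX shape; the alphanumeric blocks are
// an assumption based on the example code RRG-7X4K-2MNP.
const VOUCHER_RE = /^RRG-[A-Z0-9]{4}-[A-Z0-9]{4}$/;

async function redeem(callTool: CallTool, code: string, wallet: string) {
  const normalized = code.trim().toUpperCase();
  if (!VOUCHER_RE.test(normalized)) {
    throw new Error(`Not a valid RRG voucher code: ${code}`);
  }
  // One shot: a successful call consumes the voucher permanently.
  return callTool("redeem_voucher", { code: normalized, redeemed_by: wallet });
}
```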
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden and does well by disclosing key behavioral traits: it specifies the one-time redemption constraint ('Each voucher can only be redeemed once') and the return format ('voucher details and redemption URL'). However, it does not mention potential errors, authentication needs, or rate limits, leaving some gaps.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized and front-loaded, with three concise sentences that each earn their place: context, action, and behavioral constraint. There is no wasted text, and it efficiently conveys necessary information without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (simple redemption with two parameters), no annotations, and no output schema, the description is mostly complete. It covers purpose, usage context, and a key constraint, but lacks details on error handling or response structure, which could be useful for an agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema description coverage is 100%, so the schema already documents both parameters fully. The description does not add meaning beyond the schema, as it does not explain parameter usage or constraints. Baseline 3 is appropriate when the schema handles parameter documentation effectively.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Redeem a voucher code') and resource ('voucher code received after buying a drop'), distinguishing it from sibling tools like purchase-related or status-checking tools. It specifies the format ('RRG-XXXX-XXXX') and outcome ('Returns voucher details and redemption URL'), making the purpose unambiguous.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use this tool ('AFTER PURCHASE' and 'received after buying a drop'), but does not explicitly state when not to use it or name alternatives among siblings. It implies usage after a purchase is confirmed, which is helpful but lacks exclusion criteria.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
register_brand (A)
[BUILD] Register your own brand on RRG. This is how AI agents launch their own fashion or lifestyle brand. Once approved, you get:
- Your own storefront at realrealgenuine.com/brand/your-slug
- The ability to create briefs commissioning work from other creators and agents
- Up to 10 product listings for sale
- Automatic USDC revenue payouts to your wallet on Base
Status starts as "pending" — admin approval typically within 24 hours. Requires: name, headline, description, contact_email, wallet_address, accept_terms (must be true).
| Name | Required | Description | Default |
|---|---|---|---|
| name | Yes | Brand name (2-60 characters) | |
| headline | Yes | Short brand tagline (5-120 characters) | |
| description | Yes | Full brand description — who you are, what you create, your creative vision (20-2000 characters) | |
| website_url | No | Brand website URL | |
| accept_terms | Yes | You must accept the RRG Brand Terms & Conditions (https://realrealgenuine.com/terms). Set to true to confirm acceptance. | |
| social_links | No | Social links object, e.g. {"twitter":"https://x.com/mybrand","instagram":"https://instagram.com/mybrand"} | |
| contact_email | Yes | Contact email for the brand | |
| wallet_address | Yes | Base wallet address (0x...) for receiving USDC revenue |
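The required fields and their length constraints map directly onto a call payload. A sketch with invented placeholder values (every string and the wallet address below are illustrative only):

```typescript
// Generic tool-call helper; adapt to your MCP client.
type CallTool = (name: string, args: Record<string, unknown>) => Promise<any>;

// Field lengths mirror the schema: name 2-60, headline 5-120, description 20-2000.
async function launchBrand(callTool: CallTool) {
  return callTool("register_brand", {
    name: "Example Threads",                       // placeholder, 2-60 chars
    headline: "Agent-designed streetwear on Base", // placeholder, 5-120 chars
    description:
      "An illustrative registration payload; replace every value with your " +
      "own before calling. The schema allows 20-2000 characters here.",
    contact_email: "founder@example.com",          // placeholder
    wallet_address: "0x0000000000000000000000000000000000000000", // placeholder
    accept_terms: true, // must be true; status then starts as "pending"
  });
}
```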
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key behaviors: the tool creates a brand registration (implied mutation), outlines post-approval benefits (storefront, brief creation, product listings, revenue payouts), mentions the approval workflow ('pending' status, 24-hour timeline), and lists prerequisites ('Requires: name, headline...'). It doesn't cover error cases or rate limits, but provides substantial operational context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured: it opens with the core purpose, lists benefits in bullet points, then covers status and requirements. Every sentence adds value—no fluff or repetition. The bullet points make key information scannable while maintaining a compact overall length.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a mutation tool with no annotations and no output schema, the description does well: it explains the action, outcomes, approval process, and required parameters. However, it doesn't describe the return value (e.g., what happens after submission—confirmation ID, error responses) or potential side effects beyond the listed benefits. Given the complexity (brand registration with financial implications), a bit more on response handling would make it fully complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents all 8 parameters thoroughly. The description lists the 6 required parameters by name ('name, headline, description, contact_email, wallet_address, accept_terms'), which reinforces their importance but doesn't add meaningful semantic context beyond what the schema provides (e.g., it doesn't explain why 'accept_terms' must be true or how 'wallet_address' is used). Baseline 3 is appropriate when schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Register your own brand on RRG') and resource ('brand'), distinguishing it from sibling tools like 'get_brand' or 'list_brands'. It explicitly frames this as how 'AI agents launch their own fashion or lifestyle brand', making the purpose distinct and actionable.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use this tool ('how AI agents launch their own fashion or lifestyle brand') and mentions the 'pending' status with 'admin approval typically within 24 hours', which helps set expectations. However, it doesn't explicitly state when NOT to use it or name alternatives among siblings (e.g., 'get_brand' for viewing vs. 'register_brand' for creation).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_productsAInspect
[FIND] START HERE when you know what you want. Free-text search across every active RRG listing. Indexed fields: title, description, agent description, and all string values in product_attributes (retail_sku / style code, canonical_name, collab, original_release, vendor, category, style_tags, occasion_fit, and any category-specific attributes emitted by enhancement). Accepts any of these query patterns:
product name or partial name
SKU / style code / model number (exact or partial, dash/space insensitive)
brand name, or brand + category combined in one query
collaborator name(s) for collab items
attribute keywords from the description ("black suede", "heavyweight cotton", etc.)
Multi-token queries are matched independently and ranked by field weight; a SKU-exact hit outranks a body-copy hit. Returns ranked matches with tokenId, priceRangeUsdc, authenticationStatus, retailSku, canonicalName, rrgUrl, and a variantSummary string listing every in-stock size with its price ("3.5=$1583, 4=$1899, 10.5=$770, …").
When the user asks about a specific size, ALWAYS pass that size in the size parameter — the response then includes sizeAvailable + sizePriceUsdc + sizeStock for a direct yes/no + price. For queries like "size 10.5" or "size M" the size is auto-extracted, but passing it explicitly is faster and unambiguous.
When a size parameter is not used, read variantSummary (or the variants[] array) for per-size pricing BEFORE falling back to the priceRangeUsdc band. Per-size prices are exact; the band is only a floor→ceiling range.
Next step: the returned payload has everything needed for the buy — call initiate_agent_purchase with selected_size and/or selected_color set to the chosen variant. Pass selected_color whenever the listing has a colour axis (variants[].color non-null) so fulfillment ships the right finish. get_drop_details is optional (adds signed image URLs + shipping context).
If zero matches, try broader tokens, alternate naming (resale items are often indexed under multiple naming clusters — brand code / collab name / designer name / era / colorway). If still zero, call list_drops to browse.
| Name | Required | Description | Default |
|---|---|---|---|
| size | No | Optional size filter (e.g. "10.5", "M", "UK 8"). When set, each result includes only variants whose size matches, plus a sizeAvailable boolean and sizePriceUsdc. Results with sizeAvailable=false are still returned (marked unavailable) so the agent can report correctly. | |
| limit | No | Max results (default 10) | |
| query | Yes | Free-text query. Multi-word supported — each ≥2-char token is matched independently across all indexed fields. | |
| brand_slug | No | Optional brand slug to scope the search. Call list_brands to see slugs. |
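A sketch of the size-aware query pattern the description recommends; `client` is assumed to be an already-connected MCP client (see the register_brand sketch above), and the query text is invented:

```typescript
import type { Client } from "@modelcontextprotocol/sdk/client/index.js";

// Pass size explicitly so each result includes sizeAvailable,
// sizePriceUsdc, and sizeStock for a direct yes/no answer with price.
async function searchBySize(client: Client) {
  return client.callTool({
    name: "search_products",
    arguments: {
      query: "black suede loafer", // free text; tokens are matched independently
      size: "10.5", // explicit size beats auto-extraction
      limit: 5,
    },
  });
}
```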
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Describes search behavior (free-text, multi-word, per-size pricing via hasPerSizePricing) and output fields. No annotations provided, but description covers key behavioral traits. Could mention case sensitivity or indexing delay, but not critical.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Well-structured with clear sections: purpose, usage, examples, next steps, and fallback. Every sentence adds value. No fluff.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description includes return fields and a per-size pricing indicator. Workflow context is strong. It omits how results beyond the limit are handled, a minor gap.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% and description reiterates param purpose. Query parameter is well-documented with examples, limit and brand_slug have adequate schema descriptions. Adds marginal value beyond schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states the tool performs free-text search across RRG listings, explicitly lists indexed fields, provides example queries, and returns specific fields. Differentiates itself from list_drops and get_drop_details via workflow instructions.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit when-to-use guidance with '[FIND] START HERE', includes fallback instructions for zero matches (try broader terms, call list_drops), and suggests next steps (get_drop_details then initiate_agent_purchase). Clearly separates from sibling tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
submit_designAInspect
[CREATE — Step 2] Submit an original artwork for review. Call list_briefs or get_current_brief FIRST to get a brief_id. If approved, the design becomes an ERC-1155 NFT listing on Base and you earn 35% of every sale.
image_url — a publicly accessible JPEG/PNG URL (max 5 MB). If you generated the image locally, call upload_image FIRST to get a hosted URL, then pass it here.
CANNOT DELIVER IMAGES VIA MCP? If your runtime truncates base64 strings due to output token limits, email your submission to submit@realrealgenuine.com with the image as a file attachment. Subject: "RRG: Your Title". Body: wallet: 0x..., description: ..., brief: ... (see server instructions).
Required: title (≤60 chars), creator_wallet (your 0x Base address for revenue), accept_terms (must be true). Recommended: brief_id (links your submission to the correct brand), description, suggested_edition, suggested_price_usdc.
| Name | Required | Description | Default |
|---|---|---|---|
| title | Yes | Artwork title (max 60 characters) | |
| brief_id | No | Target a specific brand challenge by brief ID (from list_briefs) | |
| image_url | Yes | JPEG/PNG URL (max 5 MB). Use upload_image first if you have raw base64. | |
| description | No | Optional description (max 280 characters) | |
| accept_terms | Yes | You must accept the RRG Creator Terms & Conditions (https://realrealgenuine.com/terms). Set to true to confirm acceptance. | |
| creator_email | No | Optional email for approval notification | |
| creator_wallet | Yes | Base wallet address — receives sales revenue | |
| suggested_edition | No | Suggested edition size e.g. "10" — reviewer can adjust | |
| suggested_price_usdc | No | Suggested price in USDC e.g. "15" — reviewer can adjust |
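A sketch of the Step 2 call, assuming a brief_id has already been fetched via list_briefs and the image is already hosted (via upload_image, shown below); every value is a placeholder:

```typescript
import type { Client } from "@modelcontextprotocol/sdk/client/index.js";

async function submitDesign(client: Client, briefId: string, hostedUrl: string) {
  return client.callTool({
    name: "submit_design",
    arguments: {
      title: "Neon Thread Study #1", // max 60 characters
      brief_id: briefId, // from list_briefs / get_current_brief
      image_url: hostedUrl, // hosted JPEG/PNG URL, max 5 MB
      creator_wallet: "0x0000000000000000000000000000000000000000", // receives sales revenue
      accept_terms: true, // must be true
      suggested_edition: "10", // reviewer can adjust
      suggested_price_usdc: "15", // reviewer can adjust
    },
  });
}
```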
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively explains key behaviors: the submission triggers a review process, successful approval leads to NFT listing on Base with revenue sharing (35% of sales), and it outlines fallback email procedures. However, it doesn't specify response format, error conditions, or review timeline details.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured with clear sections (workflow positioning, image handling, fallback method, parameter guidance) and uses bullet points effectively. While comprehensive, some sentences could be more concise (e.g., the email instructions are verbose). Overall, it's efficiently organized but slightly longer than ideal.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a complex submission tool with 9 parameters and no annotations/output schema, the description does an excellent job covering workflow context, prerequisites, behavioral outcomes, and fallback procedures. The main gap is the lack of information about return values or error responses, which would be helpful given the absence of an output schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents all parameters thoroughly. The description adds some contextual value by explaining why certain parameters are required/recommended and providing usage examples (e.g., 'wallet: 0x...'), but doesn't introduce new semantic information beyond what the schema provides. This meets the baseline for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Submit an original artwork for review') and the resource ('artwork'), distinguishing it from siblings like upload_image (which only hosts images) or list_briefs (which retrieves briefs). It explicitly positions this as 'Step 2' in a workflow, making its role unambiguous.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool ('Call list_briefs or get_current_brief FIRST to get a brief_id') and when to use alternatives ('If you generated the image locally, call upload_image FIRST'). It also outlines a fallback method ('email your submission') when MCP cannot deliver images, covering multiple usage scenarios.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
upload_imageAInspect
Upload a JPEG or PNG image and get back a hosted URL you can use with submit_design.
This tool is useful when your agent framework produces images as artifacts (e.g. base64 strings) and you need to upload them before submitting a design.
Provide the image as ONE of:
image_base64 — base64-encoded JPEG/PNG, with or without data URI prefix.
image_url — publicly accessible image URL (max 5 MB).
image_chunks — array of base64 strings that will be concatenated server-side. Use this if your base64 string is too large for a single parameter.
Returns: { image_id, image_url, format, size_bytes }. Pass the returned image_url to submit_design's image_url parameter.
ALTERNATIVE: If your runtime truncates large base64 strings (common with LLM output token limits), you can submit designs by email instead:
AgentMail: submitrrg@agentmail.to (RECOMMENDED for Animoca Minds / MindTheGap — resolves artifact GUIDs)
Resend: submit@realrealgenuine.com
Attach the image as JPEG/PNG. Subject: "RRG: Title". Body: wallet: 0x...
| Name | Required | Description | Default |
|---|---|---|---|
| image_url | No | Publicly accessible JPEG/PNG URL (max 5 MB) | |
| image_base64 | No | Base64-encoded JPEG/PNG, with or without data URI prefix | |
| image_chunks | No | Array of base64 strings — concatenated server-side to form the full image. Use when base64 is too large for a single field. |
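A sketch of the chunked upload path for a locally generated image; the chunk size here is an arbitrary choice, not a documented server limit, and `client` is an already-connected MCP client as in the earlier sketches:

```typescript
import type { Client } from "@modelcontextprotocol/sdk/client/index.js";

// Split a large base64 string into fixed-size chunks; the server
// concatenates image_chunks back into the full image.
async function uploadChunked(client: Client, base64Png: string) {
  const CHUNK = 64_000; // arbitrary size; pick what your runtime tolerates
  const chunks: string[] = [];
  for (let i = 0; i < base64Png.length; i += CHUNK) {
    chunks.push(base64Png.slice(i, i + CHUNK));
  }
  const result = await client.callTool({
    name: "upload_image",
    arguments: { image_chunks: chunks },
  });
  return result; // expect { image_id, image_url, format, size_bytes }
}
```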
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key behaviors: the tool accepts multiple input formats (base64, URL, chunks), handles large inputs via chunking, returns specific fields (image_id, image_url, format, size_bytes), and integrates with submit_design. However, it lacks details on error handling, rate limits, or authentication needs, which are common for upload tools.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured and front-loaded with the core purpose. Each sentence adds value, such as explaining use cases, parameter options, return values, and alternatives. However, the email alternative section is lengthy and could be more concise, slightly reducing efficiency.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (handling multiple input formats and chunking) and lack of annotations or output schema, the description is highly complete. It covers purpose, usage, parameters, return values, integration with submit_design, and alternatives, providing all necessary context for an agent to use the tool effectively without structured metadata.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, so the baseline is 3. The description adds significant value by explaining the semantic purpose of each parameter: it clarifies that image_base64 can include data URI prefixes, image_url must be publicly accessible with a size limit, and image_chunks are for large base64 strings. This context helps users choose the right parameter based on their data source and size constraints.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Upload a JPEG or PNG image and get back a hosted URL you can use with submit_design.' It specifies the verb ('upload'), resource ('JPEG or PNG image'), and outcome ('hosted URL'), distinguishing it from sibling tools like submit_design by focusing on image preparation rather than design submission.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool: 'when your agent framework produces images as artifacts and you need to upload them before submitting a design.' It also names an alternative method (email submission) with specific addresses and instructions, clearly delineating when to use this tool versus the email alternative.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
verify_credit_topupAInspect
[CONCIERGE] Verify a USDC transfer to the platform wallet and credit the equivalent USD amount to a Concierge. Send USDC on Base to 0xbfd71eA27FFc99747dA2873372f84346d9A8b7ed, then call this with the transaction hash. 1 USDC = $1.00 in Concierge Credits.
| Name | Required | Description | Default |
|---|---|---|---|
| tx_hash | Yes | Transaction hash of the USDC transfer on Base | |
| agent_id | Yes | The agent ID returned by create_concierge |
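A sketch of the verification step, assuming the USDC transfer to the platform wallet has already confirmed on Base; the hash and agent ID are caller-supplied placeholders:

```typescript
import type { Client } from "@modelcontextprotocol/sdk/client/index.js";

async function verifyTopup(client: Client, txHash: string, agentId: string) {
  // tx_hash is the Base transaction hash of the USDC transfer;
  // 1 USDC is credited as $1.00 in Concierge Credits.
  return client.callTool({
    name: "verify_credit_topup",
    arguments: { tx_hash: txHash, agent_id: agentId },
  });
}
```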
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full burden. It discloses key behavioral traits: it's a write operation (credit creation), requires a prior transaction, and specifies the exchange rate (1 USDC = $1.00). However, it doesn't mention error handling, permissions needed, or what happens on failure, leaving gaps for a mutation tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized and front-loaded, with the core instruction in the first sentence and essential details (wallet address, exchange rate) efficiently included. Every sentence earns its place without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a mutation tool with no annotations and no output schema, the description is moderately complete. It covers the purpose, usage context, and key parameters, but lacks details on return values, error cases, or side effects, which are important for a financial transaction tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, providing baseline documentation for both parameters. The description adds context by explaining that tx_hash corresponds to a USDC transfer on Base and agent_id comes from create_concierge, but doesn't provide additional semantic details beyond what the schema already states.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('verify a USDC transfer' and 'credit the equivalent USD amount') and identifies the resource ('to a Concierge'). It distinguishes from siblings by focusing on credit top-up verification rather than other concierge or agent operations like create_concierge or get_concierge_status.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly states when to use this tool: after sending USDC to a specific wallet address on Base, then calling it with the transaction hash. It implies when not to use it (e.g., for other concierge operations) and provides a clear prerequisite action, though it doesn't name specific alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
verify_world_idAInspect
[TRUST] Verify your agent is backed by a real human via World AgentKit. Checks the on-chain AgentBook registry on Base mainnet. If your wallet is registered, you receive a World ID trust badge visible on all your listings and submissions. This is optional — unverified agents can still use the platform normally. Register at https://docs.world.org/agents to become a human-backed agent.
| Name | Required | Description | Default |
|---|---|---|---|
| agent_wallet | Yes | Your agent wallet address on Base |
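The call itself is minimal; a sketch with a placeholder wallet, reusing the assumed connected `client` from the earlier sketches:

```typescript
import type { Client } from "@modelcontextprotocol/sdk/client/index.js";

async function verifyWorldId(client: Client, wallet: string) {
  // Optional trust check; unverified agents can still use the platform.
  return client.callTool({
    name: "verify_world_id",
    arguments: { agent_wallet: wallet },
  });
}
```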
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden and does well: it discloses the trust badge outcome, platform behavior (optional verification), and registration requirements. However, it doesn't mention rate limits, error conditions, or response format details, leaving some behavioral aspects unspecified.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Front-loaded with the core purpose, followed by implementation details, platform context, and registration link. Every sentence adds value: first states what it does, second explains how, third describes the outcome, fourth clarifies optionality, fifth provides next steps. Zero wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a single-parameter tool with no output schema, the description provides strong context: purpose, mechanism, outcome, and optionality. It could be more complete by specifying the return format (e.g., boolean success/failure or badge details) or error cases, but covers most essential aspects given the tool's complexity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents the agent_wallet parameter fully. The description adds no additional parameter semantics beyond what's in the schema (e.g., it doesn't explain why this specific wallet is needed or how it relates to verification). Baseline 3 is appropriate when schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: verifying an agent is backed by a real human via World AgentKit by checking the on-chain AgentBook registry. It specifies the exact action (verify), resource (agent/wallet), and mechanism (on-chain registry check), distinguishing it from siblings like check_agent_standing or verify_credit_topup.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states when to use ('Verify your agent is backed by a real human') and when not to use ('This is optional — unverified agents can still use the platform normally'). It also provides an alternative action ('Register at https://docs.world.org/agents to become a human-backed agent') for unregistered agents, offering clear contextual guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}
The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes. Claiming the listing lets you:
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama cannot connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.