Resonance Reward Agent
Server Details
Find shopping deals, earn cashback, and redeem rewards across retail, dining, and travel brands.
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Score is being calculated. Check back soon.
Available Tools
45 tools

rsnc_agent_best_deals (Quality: A)
Find the best cashback deals and rewards matching a shopping intent. Searches brands by category, ranks by reward value, and optionally personalizes results based on user balances. Use this when a user wants to shop, eat, travel, or game and wants the best rewards.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum deals to return. Defaults to 10. | |
| budget | No | Optional max perk cost to consider. | |
| intent | Yes | What the user wants to do — e.g. "buy running shoes", "get coffee", "book a hotel", "play games". Matched against brand categories and perk descriptions. | |
| userId | No | Optional user ID (email/wallet). When provided, results include personalized data: current balance, affordable perks, and "you can afford this NOW" flags. | |
| category | No | Direct category filter (retail, dining, travel, gaming). If provided, overrides intent-based category matching. | |
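Assuming the standard MCP JSON-RPC envelope (which this page does not document), a call with the parameters above might be assembled like this; the intent and user ID values are illustrative:

```python
import json

# Illustrative "tools/call" payload for rsnc_agent_best_deals.
# Argument names come from the parameter table; the JSON-RPC
# envelope follows generic MCP conventions, not this server's docs.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "rsnc_agent_best_deals",
        "arguments": {
            "intent": "buy running shoes",    # required free-text intent
            "limit": 5,                       # optional, defaults to 10
            "userId": "shopper@example.com",  # optional, enables personalization
        },
    },
}
print(json.dumps(request, indent=2))
```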
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Without annotations, the description carries full burden. It discloses ranking behavior ('ranks by reward value') and personalization effects, but omits safety characteristics (read-only vs. destructive), rate limits, or detailed output structure that annotations would typically cover.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three well-structured sentences with zero waste: purpose front-loaded in sentence 1, behavioral details in sentence 2, and usage context in sentence 3. Every sentence earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Lacks output schema and description of return values (deal structure, fields). While input parameters and ranking logic are covered, the absence of annotations and output specification leaves gaps for an agent understanding the full contract.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema coverage, parameters are well-documented in the schema itself. The description adds cross-parameter context (how userId triggers personalization, how intent relates to categories) but does not significantly augment individual parameter semantics beyond the schema definitions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool finds and ranks cashback deals by reward value, with specific actions (searches, ranks, personalizes). However, it does not explicitly differentiate from siblings like 'browse_perks' or 'compare_cashback', though the ranking behavior implies distinction.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides clear when-to-use guidance ('Use this when a user wants to shop, eat, travel, or game and wants the best rewards'). Lacks explicit exclusions or named alternatives, but covers the primary use case contexts effectively.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
rsnc_agent_brand_analytics (Quality: A)
Deep analytics for a specific brand: daily performance trends, customer velocity, event activity, and program health metrics. Use this to understand how a brand's reward program is performing.
| Name | Required | Description | Default |
|---|---|---|---|
| period | No | Time period for analytics. Defaults to "30d". | |
| brandId | Yes | The brand identifier. | |
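Since the "30d" default lives server-side, a client may want to make the effective period explicit for logging. A minimal sketch; the helper is illustrative, not part of the server API:

```python
def analytics_args(brand_id, period=None):
    """Build arguments for rsnc_agent_brand_analytics.

    Mirrors the documented server-side default of "30d" so the
    effective period is always explicit in client logs.
    """
    return {"brandId": brand_id, "period": period or "30d"}

print(analytics_args("brand_123"))        # period falls back to "30d"
print(analytics_args("brand_123", "7d"))  # caller override
```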
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description must carry the full burden of behavioral disclosure. It successfully describes what metrics are returned (daily trends, velocity, event activity, health), but fails to disclose operational traits like read-only safety, error handling when brandId is invalid, data volume limits, or whether results are real-time versus cached.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description consists of two efficiently structured sentences with no waste. It is front-loaded with the core capability ('Deep analytics for a specific brand') followed by specific metrics and a clear use-case statement. Every word earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has only 2 parameters (simple structure) and no output schema, the description adequately compensates by listing the specific metric categories returned (trends, velocity, activity, health). It is sufficiently complete for an analytics retrieval tool, though it could improve by describing the return data structure or format.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100% (both brandId and period are well-documented in the schema), establishing a baseline of 3. The description mentions 'daily performance trends,' which loosely hints at the time-period functionality, but adds no specific guidance on parameter syntax, validation rules, or the default period behavior beyond what's in the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly identifies the tool provides 'deep analytics for a specific brand' and enumerates specific metrics covered (daily performance trends, customer velocity, event activity, program health metrics). However, it does not explicitly differentiate from similar siblings like rsnc_agent_brand_health or rsnc_agent_network_analytics, which could cause selection ambiguity.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description includes 'Use this to understand how a brand's reward program is performing,' which provides positive guidance on when to use the tool. However, it lacks negative guidance (when not to use) and does not name alternatives like rsnc_agent_brand_health for simpler health checks, leaving users to infer appropriate usage.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
rsnc_agent_brand_audience (Quality: B)
Get a brand's user persona distribution: archetype breakdown, engagement levels, platform split, and archetype drift over time.
| Name | Required | Description | Default |
|---|---|---|---|
| brandId | Yes | The brand identifier. | |
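With only one required argument, the main client-side failure mode is an empty brandId. A sketch of guarding against that; the validation is illustrative, and the server's own error behavior is not documented here:

```python
def audience_call(brand_id):
    # rsnc_agent_brand_audience takes a single required argument.
    # Guarding client-side avoids a round trip on an obviously
    # invalid call; the check itself is illustrative hygiene.
    if not brand_id:
        raise ValueError("brandId is required")
    return {
        "name": "rsnc_agent_brand_audience",
        "arguments": {"brandId": brand_id},
    }

print(audience_call("brand_42"))
```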
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, yet the description fails to disclose operational characteristics (read-only status, data freshness, rate limits). While it lists the returned data components, it omits the safety profile and execution costs that agents need when selecting among 40+ tools.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence efficiently front-loads the action ('Get') and uses a colon-delimited list to specify return components. No redundant phrases; every word contributes to understanding scope and outputs.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Compensates for missing output schema by listing specific return fields, which is necessary for agent comprehension. However, lacking annotations and behavioral context (safety, side effects), and with no usage guidance among numerous siblings, coverage is minimally adequate but incomplete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so the schema fully documents the single 'brandId' parameter. The description adds no syntax details, format examples, or constraints beyond the schema's baseline documentation, warranting the baseline score.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Uses specific verb 'Get' with clear resource 'brand's user persona distribution' and enumerates four specific output components (archetype breakdown, engagement levels, platform split, archetype drift). However, it does not explicitly differentiate from similar siblings like 'rsnc_agent_brand_analytics' or 'rsnc_agent_perk_audience'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides no guidance on when to prefer this tool over sibling alternatives such as 'rsnc_agent_brand_analytics' or 'rsnc_agent_user_persona'. No prerequisites, filters, or exclusions are mentioned despite the crowded tool namespace.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
rsnc_agent_brand_health (Quality: B)
Get a program health assessment for a brand with data-driven recommendations. Analyzes event performance, perk utilization, and customer engagement to suggest optimizations.
| Name | Required | Description | Default |
|---|---|---|---|
| brandId | Yes | The brand identifier. | |
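Because no output schema is published, any parsing of the health assessment has to be defensive. The "recommendations" key below is a guess for illustration only, not a documented field:

```python
def extract_recommendations(result):
    # Defensive parsing: the tool promises "data-driven
    # recommendations", but the field name and shape are not
    # documented, so tolerate anything unexpected.
    if not isinstance(result, dict):
        return []
    recs = result.get("recommendations", [])
    return [r for r in recs if isinstance(r, str)]

sample = {"recommendations": ["Boost perk visibility", 42]}
print(extract_recommendations(sample))  # non-string entries dropped
```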
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Without annotations, the description carries the full burden. It implies a read-only operation ('Get,' 'suggest optimizations') but fails to disclose computational cost, caching behavior, whether the assessment is real-time or cached, or the structure/format of the returned health assessment.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two tightly constructed sentences: the first establishes the action and output, the second details the analytical methodology. No redundant phrases or tautologies; every clause adds specific information about scope or behavior.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
While the description covers the analytical scope adequately for a single-parameter tool, it omits the return value structure despite the absence of an output schema. Given no annotations and no output schema, additional context on response format or health score interpretation would be expected.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage for the single 'brandId' parameter, the baseline is met. The description references 'for a brand' but adds no syntactic constraints, validation rules, or examples beyond what the schema already provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves a 'program health assessment' and lists specific analytical domains (event performance, perk utilization, customer engagement). It implies differentiation from sibling analytics tools by emphasizing 'data-driven recommendations' and 'suggest optimizations,' though it could more explicitly contrast with rsnc_agent_brand_analytics.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides no guidance on when to select this tool versus siblings like rsnc_agent_brand_analytics, rsnc_agent_brand_info, or rsnc_agent_brand_audience. No prerequisites or conditions for use are mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
rsnc_agent_brand_info (Quality: B)
Get details on a brand's reward program: what actions earn cashback (purchases, signups, reviews, referrals), how much you earn per action, and any bonus opportunities.
| Name | Required | Description | Default |
|---|---|---|---|
| brandId | Yes | The brand identifier. Use rsnc_agent_list_brands to discover brands by category. | |
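The schema's cross-reference implies a two-step flow: discover a brandId via rsnc_agent_list_brands, then fetch the program. A sketch of that plan; the list_brands arguments and the placeholder brandId are assumptions:

```python
def plan_brand_info(category):
    # Two-step flow suggested by the brandId schema note. The
    # category argument to list_brands is inferred from its
    # "discover brands by category" description, not documented here.
    return [
        {"name": "rsnc_agent_list_brands",
         "arguments": {"category": category}},
        {"name": "rsnc_agent_brand_info",
         "arguments": {"brandId": "<id from step 1>"}},
    ]

for step in plan_brand_info("retail"):
    print(step["name"])
```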
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full disclosure burden. It explains what data is returned (reward structure, bonus opportunities), but lacks explicit safety indicators (read-only vs mutation), error conditions, or rate limiting. 'Get details' implies read-only, but explicit confirmation would help given the absence of annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence with optimal front-loading: action first ('Get details...'), followed by colon-separated specifics (actions, amounts, bonuses). Examples in parentheses add value without verbosity. Zero redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given single parameter, simple structure, and no output schema, the description adequately covers return value semantics by enumerating what details are retrieved (actions, rates, bonuses). Could improve by noting return format, but sufficient for tool complexity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% (brandId fully described). The main description adds no parameter-specific guidance, but the schema adequately documents the identifier and cross-references rsnc_agent_list_brands for discovery, meeting baseline expectations for high-coverage schemas.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear verb-resource combination ('Get details on a brand's reward program') with specific scope (cashback actions, rates, bonuses). Distinguishes from siblings like brand_analytics or brand_audience by focusing specifically on reward program mechanics, though it could more explicitly contrast with rsnc_agent_brand_perks.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to prefer this over sibling tools like rsnc_agent_brand_perks or rsnc_agent_check_brand. The schema mentions rsnc_agent_list_brands for discovery, but this is a prerequisite, not usage guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
rsnc_agent_brand_perks (Quality: B)
Browse a brand's available rewards — discounts, free products, exclusive access, experiences, and more. See what's redeemable and what each costs.
| Name | Required | Description | Default |
|---|---|---|---|
| brandId | Yes | The brand identifier. Use rsnc_agent_list_brands to discover brands by category. | |
| category | No | Optional category filter for perks. | |
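Since category is optional, a careful client omits it entirely rather than sending a null. An illustrative helper, not part of the server API:

```python
def perks_args(brand_id, category=None):
    # Include the optional category filter only when set, so the
    # payload never carries explicit nulls for omitted filters.
    args = {"brandId": brand_id}
    if category is not None:
        args["category"] = category
    return args

print(perks_args("brand_7"))
print(perks_args("brand_7", "dining"))
```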
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It implies read-only behavior through verbs 'Browse' and 'See', and hints at return content ('what each costs'), but lacks explicit safety guarantees, rate limits, or details about the returned data structure since no output schema exists.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two efficiently constructed sentences. The first sentence front-loads the core purpose with concrete examples; the second clarifies the utility (cost visibility). Every word earns its place with no redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simple 2-parameter schema with full coverage, the description adequately covers the core purpose. However, with no output schema and no annotations, it should more explicitly describe the return structure (e.g., list of perks with pricing) rather than just implying it through 'See what each costs'.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, establishing a baseline of 3. The description itself does not discuss parameters or their semantics, but this is acceptable since the schema comprehensively documents both brandId (with cross-reference to list_brands) and the optional category filter.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly identifies the action 'Browse' and resource 'a brand's available rewards', with helpful examples (discounts, free products, exclusive access). However, it fails to distinguish from the sibling tool 'rsnc_agent_browse_perks', which likely has overlapping functionality and could confuse agent selection.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus similar siblings (e.g., rsnc_agent_browse_perks, rsnc_agent_my_rewards) or prerequisites. While the input schema mentions using rsnc_agent_list_brands to discover brandId, this critical usage path is absent from the main description.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
rsnc_agent_brand_rankings (Quality: B)
See which brands offer the most rewards, have the most active programs, or the best cashback rates. Useful for finding where to shop for maximum value.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Number of brands to rank. Defaults to 10, max 10. | |
| category | No | Filter by category (e.g. "retail", "dining", "travel", "gaming"). | |
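The limit is documented as "Defaults to 10, max 10", so values above 10 are wasted. A client-side clamp, sketched as an illustration:

```python
MAX_LIMIT = 10  # documented: "Defaults to 10, max 10"

def rankings_args(limit=None, category=None):
    # Clamp limit to the documented maximum and drop unset
    # optionals so the server applies its own defaults.
    args = {}
    if limit is not None:
        args["limit"] = min(limit, MAX_LIMIT)
    if category is not None:
        args["category"] = category
    return args

print(rankings_args(25, "travel"))  # limit clamped to 10
```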
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. It successfully identifies the ranking dimensions (rewards, active programs, cashback) but omits behavioral details like pagination behavior, sort order logic, or whether rankings are real-time vs cached.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two efficient sentences with zero redundancy. Front-loaded with the core action. Second sentence adds genuine value by contextualizing the use case rather than repeating the tool name.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Appropriate for low complexity (2 optional params) and high schema coverage. Lacks output schema, so description could benefit from hinting at return structure (e.g., 'returns ranked list'), but adequately covers the input-to-purpose mapping.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, establishing baseline 3. The description does not add parameter-specific semantics (e.g., noting that category filters the ranking scope), but schema adequately documents both optional parameters.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear specific action ('See') with concrete resource ('brands') and criteria ('most rewards', 'active programs', 'cashback rates'). However, it does not explicitly differentiate from similar sibling tools like `compare_brands`, `leaderboard`, or `best_deals`.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides implied context for use ('Useful for finding where to shop for maximum value'), indicating when it might be helpful. Lacks explicit 'when not to use' guidance or named alternatives from the extensive sibling list.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
rsnc_agent_browse_perks (Quality: B)
Search for deals, discounts, and rewards across all brands or filter by category and budget. Find cashback offers, free products, exclusive access, and experiences from retail, dining, travel, and more.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of perks to return. Defaults to 20. | |
| brandId | No | Scope to a specific brand. If omitted, returns deals from all active brands. Use rsnc_agent_list_brands to find brands. | |
| category | No | Filter perks by category. | |
| maxPrice | No | Maximum reward cost to include in results. | |
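All four parameters are optional, so a thin wrapper that drops unset filters keeps call sites tidy. A sketch; the helper name and pythonic argument names are ours, not the server's:

```python
def browse_args(limit=None, brand_id=None, category=None, max_price=None):
    # Map pythonic names to the documented argument keys and drop
    # anything left unset; all four parameters are optional.
    raw = {"limit": limit, "brandId": brand_id,
           "category": category, "maxPrice": max_price}
    return {k: v for k, v in raw.items() if v is not None}

print(browse_args(category="travel", max_price=50))
```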
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so the description carries full disclosure burden. While it lists perk types found, it fails to disclose behavioral traits: read-only status, pagination behavior, result sorting, or output structure. It does not mention that all parameters are optional.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two well-structured sentences with zero waste. Front-loaded with the action verb 'Search', followed by scope and filters, then specific examples of perk types. Every sentence earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a 4-parameter search tool with 100% schema coverage, but lacks output schema description or behavioral context given the absence of annotations. No mention of default sorting or result cardinality beyond the limit parameter.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, establishing a baseline of 3. The description adds semantic framing by mapping maxPrice to 'budget' and brandId to 'brands', but largely echoes schema content rather than adding significant syntactic or usage detail beyond the structured definitions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Uses specific verbs (Search, Find) and clearly identifies the resource (deals, discounts, rewards). The phrase 'across all brands' effectively signals broad scope, implicitly distinguishing it from sibling tools like rsnc_agent_brand_perks, though it does not explicitly name alternatives.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides implied usage context ('across all brands or filter by category') and the schema references rsnc_agent_list_brands, but the description itself lacks explicit when-to-use guidance contrasting it with siblings like rsnc_agent_suggest_perks or rsnc_agent_best_deals.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
rsnc_agent_check_brand (Quality: A)
Check if a brand or website is on the Resonance cashback network. Returns their reward program details if found, or information about how to join if not.
| Name | Required | Description | Default |
|---|---|---|---|
| query | Yes | Brand name, domain, or URL to look up (e.g. "nike.com", "Starbucks"). | |
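The query accepts a brand name, domain, or URL. Reducing a full URL to its hostname before the call can make lookups more consistent; this preprocessing is illustrative, not a documented requirement:

```python
from urllib.parse import urlparse

def normalize_query(query):
    # Reduce a full URL to its hostname; pass brand names and bare
    # domains through with surrounding whitespace stripped.
    if "://" in query:
        return urlparse(query).netloc or query
    return query.strip()

print(normalize_query("https://nike.com/shoes"))  # "nike.com"
print(normalize_query(" Starbucks "))             # "Starbucks"
```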
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. It successfully discloses the dual-return behavior (reward details vs. join information) based on existence check. However, lacks disclosure of potential rate limits, auth requirements, or cache behavior expected for network lookup tools.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with zero waste: first states purpose upfront, second clarifies return behavior. Every word earns its place; appropriately sized for a single-parameter lookup tool.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
No output schema exists, but description compensates by detailing both possible return states (found vs not found). Given the tool's simplicity (1 param, no nesting) and clear sibling differentiation, the description provides sufficient context for invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with the query parameter fully documented ('Brand name, domain, or URL'). Description mentions 'brand or website' which aligns with but does not extend beyond schema specifications. Baseline score appropriate for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Specifies exact action (check network membership) and resource (brand/website on Resonance cashback network). The phrase 'on the Resonance cashback network' clearly distinguishes it from siblings like rsnc_agent_brand_info (general info) and rsnc_agent_list_brands (enumeration).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Implies usage context through conditional return description ('if found... if not'), but lacks explicit when-to-use guidance relative to rsnc_agent_brand_info or rsnc_agent_onboard_brand. No explicit alternatives or exclusions named.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
rsnc_agent_claim_reward (A)
Claim a reward for a valuable agent action. Rewards are paid from the Resonance network fund with daily and weekly rate caps. Discovery earns 100 RSNC, onboarding earns 500 RSNC.
| Name | Required | Description | Default |
|---|---|---|---|
| rewardType | Yes | Type of action to claim reward for. | |
| brandAddress | Yes | The brand address associated with this action. |
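To make the table concrete, here is a minimal Python sketch of assembling this tool's argument payload. The field names come from the parameter table and the RSNC amounts from the description above; the `build_claim_args` helper and the wallet address value are hypothetical illustrations, not part of the server's API.

```python
# Sketch: assemble and sanity-check arguments for rsnc_agent_claim_reward.
# Field names come from the parameter table; the wallet address is illustrative.
REWARD_AMOUNTS = {"discovery": 100, "onboarding": 500}  # RSNC per claim, per the description

def build_claim_args(reward_type: str, brand_address: str) -> dict:
    if reward_type not in REWARD_AMOUNTS:
        raise ValueError(f"unknown rewardType: {reward_type!r}")
    return {"rewardType": reward_type, "brandAddress": brand_address}

args = build_claim_args("discovery", "0xBrandWallet")  # would earn 100 RSNC
```

Validating the enum client-side catches a bad `rewardType` before it counts against the daily or weekly rate caps.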
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. Discloses payment source (Resonance network fund), specific amounts, and rate limiting (daily/weekly caps). Missing idempotency, error cases on cap exceedance, and return structure details.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences with zero waste: purpose (sentence 1), constraints/source (sentence 2), parameter mapping (sentence 3). Well front-loaded with no redundancy.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Appropriately complete for a 2-parameter financial claim tool without output schema. Covers action, economics (caps, amounts), and fund source. Would benefit from mentioning success/failure return pattern or cap enforcement behavior.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% so baseline is 3. Description adds significant semantic value by mapping 'discovery' and 'onboarding' enum values to specific RSNC amounts (100 vs 500), which helps agent understand parameter impact beyond schema descriptions.
Does the description clearly state what the tool does and how it differs from similar tools?
Specific verb 'Claim' + resource 'reward' with clear scope (RSNC network fund). Distinguishes from sibling 'redeem_perk' by specifying token amounts (100/500 RSNC) and from 'my_rewards' by using active 'claim' vs passive viewing.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Lists valid reward types (discovery, onboarding) and mentions rate caps implying usage limits, but lacks explicit guidance on when to call vs siblings like 'onboard_brand' or 'redeem_perk', and doesn't state prerequisites (e.g., 'call after completing onboarding').
rsnc_agent_compare_brands (A)
Compare 2-5 brands side-by-side across all performance metrics: velocity, customers reached, transactions, LTZ distributed, event diversity, and perk utilization. Returns a structured comparison matrix with per-metric winners and insights.
| Name | Required | Description | Default |
|---|---|---|---|
| period | No | Time period for comparison. Defaults to "30d". | |
| brandIds | Yes | Array of 2-5 brand identifiers to compare. |
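The 2-5 brand constraint is easy to violate, so a client-side guard is worth sketching. The `build_compare_args` helper and the brand identifiers below are hypothetical; only the field names and the "30d" default come from the table.

```python
# Sketch: guard enforcing the documented 2-5 brand constraint before
# rsnc_agent_compare_brands is invoked. Identifiers are illustrative.
def build_compare_args(brand_ids: list, period: str = "30d") -> dict:
    if not 2 <= len(brand_ids) <= 5:
        raise ValueError("brandIds must contain between 2 and 5 identifiers")
    return {"brandIds": brand_ids, "period": period}

args = build_compare_args(["brand-a", "brand-b", "brand-c"])
```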
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. It compensates by disclosing the output structure ('structured comparison matrix with per-metric winners and insights') and specific metrics included in the comparison. Does not explicitly declare read-only status or rate limits, though 'Compare' implies safe read operation. Solid disclosure of return format given lack of output schema.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two well-structured sentences with no waste. First sentence covers functionality and metrics; second sentence covers return value. Front-loaded with key information (the 2-5 brand constraint and specific metrics). Metric list is long but necessary for clarity in this domain.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Simple 2-parameter tool with 100% schema coverage and no nesting. Description adequately compensates for missing output schema by describing the comparison matrix return structure. Given the rich sibling ecosystem (40+ tools), provides sufficient context to locate this tool's specific niche, though explicit sibling differentiation would improve completeness.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with both parameters fully documented. Description reinforces the brandIds constraints (2-5 items) and adds semantic context by listing what metrics the brands will be compared on, explaining the purpose of the comparison beyond the schema's 'Array of 2-5 brand identifiers to compare'. Baseline 3 appropriate since the schema does the heavy lifting.
Does the description clearly state what the tool does and how it differs from similar tools?
Description uses specific verb 'Compare' with clear resource 'brands' and explicit scope '2-5 brands side-by-side'. Lists six specific performance metrics (velocity, customers reached, transactions, LTZ distributed, event diversity, perk utilization) that clearly distinguish this from siblings like rsnc_agent_brand_analytics (single-brand) and rsnc_agent_compare_cashback (different domain).
Does the description explain when to use this tool, when not to, or what alternatives exist?
Implicitly indicates usage by specifying '2-5 brands' constraint and describing the output format ('structured comparison matrix'), which helps identify when multi-brand analysis is needed. However, lacks explicit guidance on when to use this versus querying individual brands via rsnc_agent_brand_analytics and comparing manually, or when the comparison might be unnecessary.
rsnc_agent_compare_cashback (A)
Compare cashback rates across brands for a specific purchase amount. Shows exactly how much you earn at each brand — side-by-side "Buy $100 at Nike = $8 back vs Adidas = $3 back". The key decision tool for purchase routing.
| Name | Required | Description | Default |
|---|---|---|---|
| brandIds | No | Brands to compare. Or omit and use category to auto-discover. | |
| category | No | Category to auto-discover brands for comparison (retail, dining, travel, gaming). | |
| purchaseAmount | Yes | Purchase amount in USD to calculate exact cashback. |
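The table says to pass `brandIds`, or omit them and pass `category` for auto-discovery; what happens when both are provided is undocumented, so this hypothetical sketch accepts exactly one. The positivity check on `purchaseAmount` is an assumption, as is the `build_cashback_args` name itself.

```python
# Sketch: argument assembly for rsnc_agent_compare_cashback.
# Exactly one of brand_ids / category is accepted, since precedence
# when both are present is undocumented.
def build_cashback_args(purchase_amount: float,
                        brand_ids=None, category=None) -> dict:
    if purchase_amount <= 0:
        raise ValueError("purchaseAmount must be a positive USD amount")
    if (brand_ids is None) == (category is None):
        raise ValueError("provide either brandIds or category, not both")
    args = {"purchaseAmount": purchase_amount}
    if brand_ids is not None:
        args["brandIds"] = brand_ids
    else:
        args["category"] = category
    return args

args = build_cashback_args(100.0, category="retail")
```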
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. It effectively discloses output format via the side-by-side example and clarifies this calculates earnings rather than executing payments. However, it omits safety properties (read-only vs. destructive), authentication requirements, or rate limiting that annotations would typically cover.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences with zero waste: purpose statement, concrete output example, and usage context. The example is front-loaded with actionable detail. Every sentence earns its place with no redundancy.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a 3-parameter tool without output schema or annotations, the description compensates well by illustrating the side-by-side comparison output format. However, it lacks guidance on error handling (e.g., invalid brandIds) or behavior when both category and brandIds are provided.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% (all three parameters have complete descriptions), establishing baseline 3. The description mentions 'purchase amount' and implies brand selection but doesn't add syntax details, validation constraints (e.g., max 10 brands), or usage patterns beyond what the schema already documents.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear specific verb ('Compare') with explicit resource ('cashback rates across brands') and scope ('for a specific purchase amount'). The concrete example ('Buy $100 at Nike = $8 back vs Adidas = $3 back') precisely distinguishes this from generic brand comparison siblings like rsnc_agent_compare_brands.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Positions the tool as 'The key decision tool for purchase routing', establishing when to use it (when deciding where to purchase). However, it doesn't explicitly contrast with rsnc_agent_route_purchase (execution) or rsnc_agent_compare_brands (general attributes), leaving some ambiguity about the exact workflow sequence.
rsnc_agent_create_event (A)
Create a new reward event for a brand. Events define what user actions earn cashback. The agent validates the configuration against the brand's data maturity before creating.
| Name | Required | Description | Default |
|---|---|---|---|
| brandId | Yes | The brand identifier (wallet address). | |
| eventType | Yes | The type of user action that triggers this reward event. | |
| maxClaims | No | Maximum total claims allowed. 0 = unlimited. Defaults to 0. | |
| customName | No | Required when eventType is "custom". Human-readable name for the custom event. | |
| rewardAmount | Yes | RSNC tokens awarded per claim. Must be greater than 0. | |
| cooldownHours | No | Hours between claims per user. Defaults to 24. | |
| detectionMethod | No | How the event is detected. Defaults to "webhook". |
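This tool has the richest cross-field rule in the set (customName is only required for the "custom" event type), so a sketch of applying the table's defaults may help. The `"purchase"` event type and the `build_event_args` helper are hypothetical; the schema's eventType enum is not reproduced on this page.

```python
# Sketch: fill the table's defaults and enforce its cross-field rule
# for rsnc_agent_create_event. Values are illustrative.
def build_event_args(brand_id: str, event_type: str, reward_amount: int,
                     custom_name: str = None, max_claims: int = 0,
                     cooldown_hours: int = 24,
                     detection_method: str = "webhook") -> dict:
    if reward_amount <= 0:
        raise ValueError("rewardAmount must be greater than 0")
    if event_type == "custom" and not custom_name:
        raise ValueError('customName is required when eventType is "custom"')
    args = {
        "brandId": brand_id,
        "eventType": event_type,
        "rewardAmount": reward_amount,
        "maxClaims": max_claims,            # 0 = unlimited
        "cooldownHours": cooldown_hours,
        "detectionMethod": detection_method,
    }
    if custom_name:
        args["customName"] = custom_name
    return args

args = build_event_args("0xBrandWallet", "purchase", 50)
```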
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Adds valuable behavioral context that validation occurs against 'data maturity' before creation, which is not in the schema. However, as a write operation with no annotations, it omits critical behavioral details: success/failure responses, whether creation is idempotent, side effects, or what the return value contains.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three efficient sentences with zero waste. The structure is front-loaded with the action (Create), followed by conceptual context (what events do), and ends with behavioral guardrails (validation). Every sentence earns its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a creation tool with 7 parameters and rich schema coverage. However, given no output schema and no annotations, the description should disclose what constitutes successful creation (return format, confirmation IDs) or error conditions, which are absent.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, establishing a baseline of 3. The description adds conceptual context ('user actions earn cashback') that helps interpret the rewardAmount and eventType parameters, but does not elaborate on syntax, constraints, or relationships between parameters beyond what the schema provides.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear verb ('Create') and resource ('reward event'). The phrase 'Create a new' distinguishes this from sibling rsnc_agent_update_event, though it does not explicitly mention the distinction between events and perks (rsnc_agent_create_perk) or when to choose between creation and processing (rsnc_agent_process_event).
Does the description explain when to use this tool, when not to, or what alternatives exist?
Mentions a prerequisite ('validates the configuration against the brand's data maturity'), implying this should only be called when configuration is ready. However, it lacks explicit guidance on when to use this versus rsnc_agent_suggest_events or rsnc_agent_update_event, or what happens if validation fails.
rsnc_agent_create_perk (A)
Create a new perk collection for a brand. Perks are rewards users can claim with earned RSNC tokens — discounts, free products, VIP access, and more.
| Name | Required | Description | Default |
|---|---|---|---|
| name | Yes | Display name for the perk. | |
| tags | No | Optional tags for search and filtering. | |
| brandId | Yes | The brand identifier (wallet address). | |
| category | No | Optional perk category (e.g. "discount", "freebie", "vip"). | |
| maxSupply | Yes | Total supply available. Must be greater than 0. | |
| maxPerUser | No | Maximum redemptions per user. Defaults to 1. | |
| description | Yes | Description of what the user receives. | |
| priceInPoints | Yes | RSNC cost to redeem this perk. Must be greater than 0. |
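A sketch of the required fields and defaults, mirroring the constraints stated in the table. The perk name, description, and the `build_perk_args` helper are illustrative inventions.

```python
# Sketch: required fields, positivity constraints, and defaults for
# rsnc_agent_create_perk, as stated in the parameter table.
def build_perk_args(name: str, brand_id: str, description: str,
                    max_supply: int, price_in_points: int,
                    max_per_user: int = 1, category: str = None,
                    tags: list = None) -> dict:
    if max_supply <= 0:
        raise ValueError("maxSupply must be greater than 0")
    if price_in_points <= 0:
        raise ValueError("priceInPoints must be greater than 0")
    args = {"name": name, "brandId": brand_id, "description": description,
            "maxSupply": max_supply, "priceInPoints": price_in_points,
            "maxPerUser": max_per_user}
    if category:
        args["category"] = category
    if tags:
        args["tags"] = tags
    return args

args = build_perk_args("10% Off", "0xBrandWallet",
                       "10% discount on one order", 500, 250)
```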
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It successfully explains domain semantics (perks as token-based rewards) but lacks operational details like permission requirements, reversibility, idempotency, or what the creation returns. 'Create' implies mutation but doesn't disclose side effects or failure modes.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with zero waste. Front-loaded with the action ('Create a new perk collection'), followed by clarifying domain context. Every word earns its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Complete for an 8-parameter creation tool with 100% schema coverage. Explains the domain model sufficiently to use the tool. Minor gap: no output schema exists, and description doesn't indicate what gets returned on success (e.g., the created perk ID).
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, establishing baseline 3. The description mentions concepts mapping to core parameters (price in points, supply) but doesn't add syntactic details, constraints, or examples beyond what the schema already documents.
Does the description clearly state what the tool does and how it differs from similar tools?
Excellent specificity: 'Create a new perk collection for a brand' provides clear verb (Create), resource (perk collection), and scope (for a brand). The naming and action clearly distinguish it from sibling tools like rsnc_agent_update_perk and rsnc_agent_browse_perks.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides implied usage by explaining what perks are (rewards users claim with RSNC tokens) and examples (discounts, VIP access), helping agents understand when to use it. However, lacks explicit when-not guidance or named alternatives (e.g., vs updating existing perks).
rsnc_agent_estimate_roi (A)
Estimate the return on investment of adding a Resonance cashback program to a business. Takes industry and business metrics to project engagement and retention impact.
| Name | Required | Description | Default |
|---|---|---|---|
| industry | Yes | Business industry category. | |
| monthlyCustomers | No | Estimated monthly customers. Used to project reward costs and impact. | |
| averageOrderValue | No | Average transaction value in USD. Used to estimate cashback costs. |
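Only `industry` is required; the optional metrics should be attached only when known, rather than sent as nulls. A hypothetical sketch (helper name and values are illustrative):

```python
# Sketch: rsnc_agent_estimate_roi arguments; optional metrics are
# included only when the caller actually has them.
def build_roi_args(industry: str, monthly_customers: int = None,
                   average_order_value: float = None) -> dict:
    args = {"industry": industry}
    if monthly_customers is not None:
        args["monthlyCustomers"] = monthly_customers
    if average_order_value is not None:
        args["averageOrderValue"] = average_order_value
    return args

args = build_roi_args("dining", monthly_customers=1200)
```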
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It adds valuable context about what gets calculated ('project engagement and retention impact'), but omits key behavioral traits like whether results are persisted, rate limits, or the output format/structure.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with zero waste. The first sentence front-loads the core purpose, while the second efficiently covers inputs and behavioral scope. Every word earns its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 100% schema coverage and no output schema, the description adequately covers the input requirements and functional purpose. However, with no annotations and no output schema, it lacks completeness regarding the return value structure (e.g., whether it returns monetary values, percentages, or a detailed report).
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, establishing a baseline of 3. The description broadly acknowledges the parameters ('Takes industry and business metrics') but does not add semantic details beyond what the schema already provides (e.g., it does not clarify valid ranges or input relationships).
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Estimate the return on investment') and resource ('adding a Resonance cashback program'), distinguishing it from sibling analytics tools. However, it could more explicitly contrast with 'brand_analytics' or 'network_analytics' siblings.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The phrase 'adding a Resonance cashback program' provides implied guidance that this is for pre-implementation/prospective analysis. However, it lacks explicit when-to-use instructions or contrast with the many sibling analytics tools (e.g., brand_analytics, perk_analytics).
rsnc_agent_event_performance (B)
See how each reward event is performing at a brand: which actions drive the most engagement, claim rates, and reward efficiency per event type.
| Name | Required | Description | Default |
|---|---|---|---|
| period | No | Time period for analytics. Defaults to "30d". | |
| brandId | Yes | The brand identifier. |
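A minimal guard sketch for this two-parameter tool. The 7d/30d/90d period values are taken from the schema commentary later in this review and should be treated as assumptions, as should the helper name.

```python
# Sketch: rsnc_agent_event_performance arguments with a period guard.
# Assumed valid periods per the review's schema note.
VALID_PERIODS = {"7d", "30d", "90d"}

def build_performance_args(brand_id: str, period: str = "30d") -> dict:
    if period not in VALID_PERIODS:
        raise ValueError(f"period must be one of {sorted(VALID_PERIODS)}")
    return {"brandId": brand_id, "period": period}

args = build_performance_args("0xBrandWallet")
```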
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full disclosure burden. The verb 'See' implies a read-only analytics operation, and it lists specific return metrics (engagement, claim rates, reward efficiency). However, it does not explicitly confirm the read-only nature, mention data freshness, pagination limits, or required permissions that would be necessary for a complete behavioral picture without annotations.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence with efficient colon structure that front-loads the primary action. Every clause earns its place: the first part defines the operation, the second specifies the metrics. Slightly repetitive ('each reward event' vs 'per event type') but generally tight and readable.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simple 2-parameter input schema and lack of output schema, the description adequately covers what data is returned by enumerating the performance metrics (engagement, claim rates, efficiency). It appropriately delegates parameter details to the well-documented schema, making it complete enough for tool selection despite missing output format specifics.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, establishing a baseline of 3. The description mentions 'at a brand' which reinforces the brandId parameter, but adds no additional context about the period parameter (7d/30d/90d) or valid formats beyond what the schema already provides.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses specific verbs ('See') and resource ('reward event performance'), clarifying the scope with 'at a brand' and 'per event type'. It implicitly distinguishes from siblings like rsnc_agent_brand_analytics and rsnc_agent_perk_analytics by focusing specifically on reward events and listing metrics (engagement, claim rates, efficiency). However, it lacks explicit comparison to sibling tools.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explains what the tool does but provides no guidance on when to use it versus alternatives. For example, it does not clarify when to choose this over rsnc_agent_brand_analytics or rsnc_agent_process_event, nor does it mention prerequisites like needing an existing brandId from prior steps.
rsnc_agent_leaderboard (B)
See top earners and most active customers for a brand. Useful for social proof or finding the most rewarding brands to shop at.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of entries to return. Defaults to 10. | |
| metric | No | Leaderboard ranking metric. Defaults to rsnc_earned. | |
| period | No | Time period for the leaderboard. Defaults to all_time. | |
| brandId | Yes | The brand identifier. Use rsnc_agent_list_brands to discover brands by category. |
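A sketch applying the table's defaults (`limit` 10, `metric` rsnc_earned, `period` all_time); only `brandId` is required. The positivity check on `limit` and the helper name are assumptions.

```python
# Sketch: rsnc_agent_leaderboard arguments with documented defaults.
def build_leaderboard_args(brand_id: str, limit: int = 10,
                           metric: str = "rsnc_earned",
                           period: str = "all_time") -> dict:
    if limit <= 0:
        raise ValueError("limit must be positive")  # assumed constraint
    return {"brandId": brand_id, "limit": limit,
            "metric": metric, "period": period}

args = build_leaderboard_args("0xBrandWallet", limit=5)
```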
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It conceptually describes the returned data ('top earners', 'most active customers') but omits technical behavioral details like pagination, caching, real-time vs. delayed data, or the actual response structure. No safety hazards are mentioned.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with efficient front-loading: first defines the function, second provides use cases. No wasted words. The only inefficiency is the weak verb 'See' in place of a more precise action verb.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the 4 parameters with 100% schema coverage, the description adequately covers the conceptual model. However, with no output schema and no annotations, it should ideally describe the return structure or data format. The missing 'title' field is also noteworthy.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the baseline is 3. The description itself adds no parameter-specific guidance beyond what's in the schema (e.g., it doesn't explain the relationship between 'metric' values and the 'top earners' concept, or provide examples for the enum values).
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves 'top earners and most active customers for a brand' (specific resource + scope). However, it uses the weak verb 'See' rather than 'Retrieve' or 'Get', and fails to differentiate from similar sibling tools like 'rsnc_agent_brand_rankings' or 'rsnc_agent_user_stats'.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides implied usage context ('Useful for social proof or finding the most rewarding brands'), but lacks explicit guidance on when to use this versus 'brand_rankings' or 'user_stats' alternatives. No prerequisites or exclusions are mentioned.
rsnc_agent_list_brands (A)
Search and browse brands offering cashback, rewards, and deals. Filter by category (retail, dining, travel, gaming) to find relevant offers for shopping, booking, or purchasing.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of brands to return. Defaults to 50. | |
| offset | No | Pagination offset. Defaults to 0. | |
| category | No | Filter brands by category (e.g. "retail", "gaming"). |
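Since this is the pagination-driven discovery tool, a sketch of generating successive argument sets using the documented defaults (`limit` 50, `offset` 0) may clarify how the offset advances. Each yielded dict represents the arguments for one call; the generator name and "retail" value are illustrative.

```python
# Sketch: successive rsnc_agent_list_brands argument sets, stepping the
# pagination offset by the page size each time.
def brand_page_args(category: str = None, limit: int = 50):
    offset = 0
    while True:
        args = {"limit": limit, "offset": offset}
        if category:
            args["category"] = category
        yield args
        offset += limit

pages = brand_page_args("retail")
first, second = next(pages), next(pages)
```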
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. While 'search and browse' implies read-only access, it omits critical behavioral details: return payload structure (no output schema), pagination behavior explanation, or rate limiting—essential for a discovery tool.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two efficiently structured sentences front-load the purpose. Minor redundancy between 'Search' and 'browse', but trailing clause effectively communicates use-case context.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for basic discovery but gaps remain: no output schema means return structure should be described, and given 40+ sibling tools, stronger differentiation from rsnc_agent_brand_info and rsnc_agent_best_deals would improve selection accuracy.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema coverage (baseline 3), the description adds value by enumerating specific category examples ('retail, dining, travel, gaming') beyond the schema's limited examples, clarifying valid filter values.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses specific verbs ('Search and browse') with clear resource scope ('brands offering cashback, rewards, and deals'), distinguishing it from sibling tools like rsnc_agent_brand_info (specific details) or rsnc_agent_claim_reward (actions).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Implies usage context ('find relevant offers for shopping, booking, or purchasing') but lacks explicit when-to-use guidance versus alternatives like rsnc_agent_brand_info or rsnc_agent_check_brand in the crowded 40+ tool namespace.
rsnc_agent_manage_keys (Grade A)
Manage API keys for brands you onboarded. Rotate, revoke, or check status. Never exposes secrets — credentials are accessed through the partner portal.
| Name | Required | Description | Default |
|---|---|---|---|
| action | Yes | Key management action. | |
| brandId | Yes | Brand wallet address. |
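A client-side argument builder makes the review's points concrete. The action names (rotate, revoke, status) come from the tool description; the 0x-prefix check on `brandId` is an assumption based on the wallet-address format used elsewhere in this toolset, not documented server validation.

```python
# Illustrative pre-flight validation for rsnc_agent_manage_keys arguments.
VALID_ACTIONS = {"rotate", "revoke", "status"}  # enum per the review notes

def build_manage_keys_args(action: str, brand_id: str) -> dict:
    # Guard destructive actions client-side: revoke is presumably
    # irreversible and rotate invalidates old keys.
    if action not in VALID_ACTIONS:
        raise ValueError(f"unknown action: {action}")
    if not brand_id.startswith("0x"):
        raise ValueError("brandId must be a 0x-prefixed wallet address")
    return {"action": action, "brandId": brand_id}

args = build_manage_keys_args("status", "0xabc123")
```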
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. Adds valuable security context ('Never exposes secrets') but fails to disclose mutation side effects (revoke is irreversible, rotate invalidates old keys) or auth requirements.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences with zero waste: purpose first, actions second, security guarantee third. Every sentence earns its place with no repetition of title or schema details.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a 2-parameter tool with 100% schema coverage, but gaps exist: no annotations declaring safety/destructiveness, no output schema, and missing return value description for status checks.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with parameter descriptions present. Description reinforces enum values (rotate, revoke, status) but adds no syntax, format, or semantic details beyond schema definitions.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear specific verb 'Manage' + resource 'API keys' with scope constraint 'for brands you onboarded'. Lists concrete actions (rotate, revoke, status) and distinguishes from sibling onboard/analytics tools.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Implies prerequisite via 'brands you onboarded' but lacks explicit when-to-use guidance or named alternatives (e.g., does not mention to use rsnc_agent_onboard_brand first). No exclusion criteria stated.
rsnc_agent_my_rewards (Grade A)
Check your agent's RSNC reward earnings, activity stats, and remaining rate cap allowances. Shows how much the agent has earned for discovering brands, onboarding partners, and driving network growth.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full disclosure burden. Compensates partially by detailing specific return data categories (earnings, stats, rate cap allowances, activity breakdown). However, fails to explicitly confirm read-only safety, idempotency, or that it doesn't modify agent state—important given the 'rewards' context implies potential mutation risk.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two well-structured sentences with zero waste. Front-loaded with core function ('Check your agent's...'), followed by clarifying elaboration (breakdown of earnings sources). Every word earns its place; appropriate length for simple read tool.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
No output schema exists, but description compensates effectively by enumerating four distinct data categories returned (earnings, activity stats, rate cap allowances, earnings breakdown by activity type). Sufficient for agent to understand tool value. Minor gap: doesn't specify time period scope (current vs lifetime earnings).
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Zero parameters required. Per calibration guidelines, 0 params baseline is 4. Description appropriately requires no additional parameter explanation since the tool operates on implicit agent context.
Does the description clearly state what the tool does and how it differs from similar tools?
Excellent specificity: uses concrete verb 'Check' with clear resource 'agent's RSNC reward earnings, activity stats, and remaining rate cap allowances.' Explicitly targets 'agent' scope, distinguishing from sibling user_balance tools, and 'check' distinguishes from action-oriented siblings like claim_reward or onboard_brand.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides implied usage through descriptive content—indicates this retrieves reward data broken down by 'discovering brands, onboarding partners, and driving network growth.' However, lacks explicit when-to-use guidance vs alternatives like claim_reward or network_analytics, and doesn't state prerequisites or timing considerations.
rsnc_agent_network_analytics (Grade A)
Get real-time network analytics: total users, brands, rewards distributed, trending categories, and growth metrics across the entire Resonance network.
| Name | Required | Description | Default |
|---|---|---|---|
| period | No | Time period for analytics. Defaults to "30d". |
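Since `period` is the only parameter, a minimal argument helper is enough to show the documented default. The "30d" fallback mirrors the schema; any other period values are assumptions, because the schema's enum is not reproduced in this review.

```python
# Hypothetical argument helper for rsnc_agent_network_analytics.
# Omitting period yields the documented default of "30d".
def analytics_args(period: str = "30d") -> dict:
    return {"period": period}
```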
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. Adds 'real-time' data freshness trait and lists return metrics (compensating for missing output schema). However, omits: read-only safety confirmation, rate limits, aggregation methodology, or latency expectations. 'Get' implies read-only but explicit confirmation would help given no annotations.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single, dense sentence with optimal structure: Verb ('Get') + qualifier ('real-time') + resource + colon-delimited return value specification + scope qualifier. Every clause earns its place. Appropriate length for complexity level.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description admirably documents return values via the metrics list (users, brands, rewards, categories, growth). Scope is clear. Missing only: safety confirmation (read-only status) which would be valuable given lack of annotations, and differentiation from other network-* siblings.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% (period parameter fully documented with enum and default). Description does not mention the parameter, but with high schema coverage, baseline 3 is appropriate per rubric. Description implies temporal scope via 'real-time' but doesn't add syntax/format details beyond schema.
Does the description clearly state what the tool does and how it differs from similar tools?
Excellent specificity: verb 'Get' + resource 'network analytics' + scope 'across the entire Resonance network'. The colon-separated list (users, brands, rewards, etc.) precisely defines the data resources accessed, distinguishing this from sibling brand_analytics (single entity) and user-specific tools.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Implies usage through scope ('entire Resonance network') but lacks explicit when-to-use guidance versus siblings like rsnc_agent_network_stats, rsnc_agent_network_trending, or rsnc_agent_network_info. No mention of when NOT to use (e.g., for single-brand analytics use brand_analytics instead).
rsnc_agent_network_flows (Grade B)
See how users flow between brands across the network. Shows which brands are emitters (users leave to redeem elsewhere) vs attractors (users come to redeem). All data is anonymous and category-level.
| Name | Required | Description | Default |
|---|---|---|---|
| period | No | Time period for flow analysis. Defaults to "30d". |
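The emitter/attractor distinction in the description can be made concrete with a small sketch. The field names (`inflow`, `outflow`) are assumptions, since the tool publishes no output schema; only the classification concept itself comes from the description.

```python
# Sketch of how an agent might classify the flow data this tool
# is described as returning. Field names are hypothetical.
def classify(brand: dict) -> str:
    # An "emitter" loses more users to redemption elsewhere than it
    # attracts; an "attractor" is the opposite case.
    if brand["outflow"] > brand["inflow"]:
        return "emitter"
    if brand["inflow"] > brand["outflow"]:
        return "attractor"
    return "balanced"
```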
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Without annotations, the description carries the full disclosure burden and partially succeeds by stating data constraints (anonymous, category-level) and defining behavioral classifications (emitters/attractors). However, it omits return format details, pagination, or whether results include raw counts vs percentages.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences efficiently structure the information: purpose statement, specific differentiator (emitter/attractor concept), and data constraints. No redundant text, though the term 'category-level' could be ambiguous without additional context.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the single optional parameter and lack of output schema, the description adequately covers the conceptual model (network flows). However, for a complex analytical tool with many network-related siblings, it lacks integration context—such as whether this aggregates data from rsnc_agent_brand_analytics or operates independently.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage for the single 'period' parameter, the baseline score applies. The description does not add param-specific usage guidance (e.g., when to use 7d vs 90d), but the schema is complete enough that additional description text is unnecessary.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly defines the tool's specific function—analyzing user flows between brands—and introduces key domain concepts (emitters vs attractors) that distinguish it from generic network analytics siblings like rsnc_agent_network_analytics or rsnc_agent_network_stats.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance is provided on when to use this versus the numerous sibling network analysis tools (e.g., rsnc_agent_network_analytics, rsnc_agent_brand_analytics), nor are prerequisites or filtering limitations mentioned beyond the optional period parameter.
rsnc_agent_network_info (Grade B)
Returns info about the Resonance rewards network — a cross-brand cashback and perks marketplace covering retail, dining, travel, gaming, and more.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full disclosure burden. The verb 'Returns' implies a safe read-only operation, and the em-dash elaboration adds useful business context about what the Resonance network is. However, it fails to disclose what specific information is returned (configuration, metadata, statistics), any rate limiting, or whether the data is static or real-time.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence with no filler. The em-dash construction appropriately front-loads the core action ('Returns info about the Resonance rewards network') and uses the second clause for domain elaboration that aids agent understanding without clutter.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
While the description successfully characterizes the business domain (cashback marketplace categories), it lacks an output schema and fails to compensate by describing the return value structure or content. Given the existence of many specific network-sibling tools, the description should clarify what distinguishes this general 'info' from specialized analytics, stats, or flows data.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema contains zero parameters, which per the calibration guidelines establishes a baseline score of 4. The description correctly implies no input filtering is needed by describing a general information retrieval about the entire network.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb ('Returns') and resource ('info about the Resonance rewards network') and adds valuable domain context describing the marketplace coverage (retail, dining, travel, gaming). However, it does not distinguish from sibling tools like `rsnc_agent_network_analytics`, `rsnc_agent_network_stats`, or `rsnc_agent_network_trending`, which is critical given the crowded namespace.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus the numerous sibling network-specific tools (e.g., network_analytics, network_stats, network_flows). It omits any 'when-not-to-use' advice or explicit alternatives, leaving the agent to guess whether this returns general metadata or specific metrics.
rsnc_agent_network_stats (Grade A)
Returns how many brands, users, and rewards are active — useful for understanding the scope of available deals and cashback offers.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of disclosure. It states what data is returned (active counts) but omits behavioral details such as whether data is real-time or cached, rate limiting, authentication requirements, or the response structure format.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description consists of a single efficient sentence with an em-dash separating the function declaration from the use case. Every word earns its place—no redundancy or filler. The structure front-loads the action ('Returns') before the utility clause.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the low complexity (zero parameters, no output schema, no annotations), the description adequately explains the tool's purpose and return intent. While it could benefit from hinting at the return data structure (object with count fields), the absence of an output schema means this is not strictly required for a complete understanding of when to invoke the tool.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The tool accepts zero parameters (confirmed by empty properties object). According to calibration guidelines, zero-parameter tools receive a baseline score of 4. The description appropriately does not invent parameter requirements, allowing the empty schema to speak for itself.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses specific verbs ('Returns') and identifies the exact resources being counted (brands, users, rewards). It clarifies that this provides aggregate counts of active entities, distinguishing it from sibling tools that retrieve individual records or detailed analytics like rsnc_agent_network_analytics.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides contextual guidance by stating it's 'useful for understanding the scope of available deals and cashback offers,' implying when to invoke it. However, it lacks explicit guidance on when NOT to use it or which sibling tools (e.g., rsnc_agent_network_analytics vs rsnc_agent_network_info) to use for more detailed breakdowns.
rsnc_agent_network_trending (Grade C)
See what's trending across the Resonance network: hottest perk categories, fastest growing brand categories, and overall network health metrics.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of trending items per category. Defaults to 10. | |
| period | No | Time period for trending data. Defaults to "7d". |
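Because both parameters are optional with documented defaults (limit 10, period "7d"), a client can merge caller overrides onto the defaults before issuing the call. This merge is a client-side sketch, not documented server behavior.

```python
# Illustrative default handling for rsnc_agent_network_trending arguments.
DEFAULTS = {"limit": 10, "period": "7d"}  # documented defaults

def trending_args(**overrides) -> dict:
    # Caller-supplied values win; anything omitted keeps its default.
    return {**DEFAULTS, **overrides}
```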
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description must carry full behavioral disclosure burden. While it conceptually describes what data is monitored, it fails to disclose whether this is read-only, what the return structure looks like (critical given no output schema exists), or any side effects/costs.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence efficiently packs three specific data domains. Well-structured with colon-delimited list. No wasted words, though front-loading the resource type before the colon could be marginally clearer.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a simple 2-parameter tool with 100% schema coverage, but insufficient given the lack of output schema. Should describe the return format or structure since no output_schema exists to document it.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with clear descriptions for both limit ('Maximum number...per category') and period. The main description adds context that trending data covers three distinct categories, which helps interpret how the limit parameter applies, but does not elaborate on parameter syntax or validation rules beyond the schema.
Does the description clearly state what the tool does and how it differs from similar tools?
States specific action ('See what's trending') and identifies three concrete data categories returned (hottest perk categories, fastest growing brand categories, network health metrics). Implicitly distinguishes from siblings by emphasizing 'trending' nature, though it doesn't explicitly contrast with rsnc_agent_network_analytics or rsnc_agent_network_stats.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides no explicit guidance on when to use this versus sibling tools like rsnc_agent_network_analytics or rsnc_agent_network_stats. No indication of prerequisites, rate limits, or selection criteria.
rsnc_agent_next_goal (Grade B)
Calculate the fastest path to a user's next attainable reward. Shows exactly how many actions (purchases, reviews, referrals) are needed to afford a specific perk. Creates urgency and drives engagement. Requires authentication.
| Name | Required | Description | Default |
|---|---|---|---|
| userId | Yes | User identifier (email or wallet address). | |
| brandId | No | Optional: focus on a specific brand. If omitted, finds the best goal across all brands the user is active with. | |
| targetPerkId | No | Optional: calculate path to a specific perk. If omitted, finds the most attainable perk. |
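The required/optional split above suggests a builder that includes the optional keys only when set, so the server's documented "omitted" behaviors (best goal across all brands, most attainable perk) can take effect. The helper below is a hypothetical client-side sketch.

```python
# Hypothetical argument builder for rsnc_agent_next_goal.
# Only userId is required; brandId and targetPerkId are sent
# only when explicitly provided.
def next_goal_args(user_id: str, brand_id=None, target_perk_id=None) -> dict:
    args = {"userId": user_id}
    if brand_id is not None:
        args["brandId"] = brand_id
    if target_perk_id is not None:
        args["targetPerkId"] = target_perk_id
    return args
```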
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. Mentions 'Requires authentication' explicitly. 'Calculate' implies read-only operation, but doesn't confirm lack of side effects or mutation. Ambiguous phrase 'Creates urgency' could imply notifications (unclear if tool sends alerts or merely returns data that creates urgency). No disclosure of rate limits, error cases, or return format despite missing output schema.
Is the description appropriately sized, front-loaded, and free of redundancy?
Four sentences with front-loaded purpose. However, 'Creates urgency and drives engagement' is marketing fluff that wastes space without helping the agent select or invoke the tool. Otherwise efficient.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a calculation tool: mentions what information is returned (action counts needed). However, with no output schema and no annotations, significant gaps remain: no description of response structure, error scenarios (e.g., user has no active brands), or pagination behavior.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, establishing baseline 3. Description adds semantic context by listing example actions (purchases, reviews, referrals) not present in schema, helping explain what constitutes the 'path'. Doesn't detail parameter syntax beyond schema.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear specific verb ('Calculate') and resource ('path', 'reward'). Distinguishes from execution siblings like 'claim_reward' or 'redeem_perk' by focusing on planning/calculation. Lists specific action types (purchases, reviews, referrals) clarifying scope.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Implies usage via optional parameter descriptions in schema (find best goal vs specific perk), but description lacks explicit when-to-use guidance and doesn't distinguish from recommendation tools like 'suggest_perks' or 'user_recommendations'. 'Requires authentication' is the only explicit constraint mentioned.
rsnc_agent_onboard_brand (Grade A)
Register a new brand on the Resonance cashback network. Creates their account, generates credentials (accessible via partner portal), and sends a welcome email. Requires agent authentication with onboarding permission.
| Name | Required | Description | Default |
|---|---|---|---|
| website | No | Brand website URL. | |
| category | No | Industry category (retail, dining, travel, gaming, other). | |
| description | No | Short description of the business. | |
| brandAddress | Yes | Brand's public wallet address (Ethereum format, 0x-prefixed). | |
| businessName | Yes | Brand/business name. | |
| contactEmail | Yes | Contact email — becomes their portal login. |
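Since this is a mutation with three required fields, a pre-flight check before calling it is worthwhile. The required/optional split below mirrors the parameter table; the Ethereum-address and email checks are simplified assumptions, not the server's actual validation.

```python
import re

# Sketch of pre-flight validation for rsnc_agent_onboard_brand arguments.
def validate_onboard_args(args: dict) -> list:
    errors = []
    # Required fields per the parameter table.
    for field in ("brandAddress", "businessName", "contactEmail"):
        if field not in args:
            errors.append(f"missing required field: {field}")
    # 0x-prefixed, 40-hex-digit address (simplified Ethereum format check).
    addr = args.get("brandAddress", "")
    if addr and not re.fullmatch(r"0x[0-9a-fA-F]{40}", addr):
        errors.append("brandAddress must be a 0x-prefixed Ethereum address")
    # contactEmail becomes the portal login, so sanity-check it.
    if "contactEmail" in args and "@" not in args["contactEmail"]:
        errors.append("contactEmail does not look like an email address")
    return errors
```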
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full disclosure burden. It effectively reveals side effects (account creation, credential generation, welcome email sent) and security requirements (agent auth with specific permission). Missing details on idempotency or conflict handling when brand already exists.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences with zero waste: sentence 1 states purpose, sentence 2 lists behavioral effects (account/credentials/email), sentence 3 states prerequisites. Information is front-loaded and appropriately sized for the tool complexity.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a mutation tool with 6 parameters, no annotations, and no output schema, the description adequately covers primary effects and auth needs. Would benefit from mentioning return values (e.g., confirmation status) or error scenarios (duplicate brand detection).
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% description coverage, establishing baseline 3. The description adds narrative context that credentials are 'accessible via partner portal' and implies contactEmail becomes login (mentioned in schema), but does not explicitly elaborate parameter formats or validation beyond schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses specific verb 'Register' with resource 'brand' and scope 'Resonance cashback network'. It clearly distinguishes from sibling query tools like check_brand, list_brands, and brand_info by emphasizing 'new' brand onboarding versus existing brand operations.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states the prerequisite 'Requires agent authentication with onboarding permission', which is critical for agent selection. However, it lacks explicit guidance on alternatives (e.g., 'if brand exists, use check_brand instead') or when not to use the tool.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
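The idempotency and conflict-handling gaps flagged above could be closed with MCP behavioral annotations plus one extra description sentence. A minimal sketch, assuming the tool is named `rsnc_agent_onboard_brand` and that a duplicate brand produces a conflict error — neither detail is confirmed by the listing, and the annotation field names follow the MCP tool-annotations spec:

```python
# Hypothetical tool definition showing how annotations could offload
# disclosure from prose. All behavioral claims here are assumptions.
onboard_brand_definition = {
    "name": "rsnc_agent_onboard_brand",  # assumed name
    "description": (
        "Register a new brand in the Resonance cashback network. "
        "Creates an account, generates portal credentials, and sends a "
        "welcome email to contactEmail. Fails with a conflict error if "
        "the brand already exists (not idempotent)."
    ),
    "annotations": {
        "readOnlyHint": False,     # mutation: creates an account
        "destructiveHint": False,  # additive, does not overwrite data
        "idempotentHint": False,   # repeat calls raise a conflict
        "openWorldHint": True,     # sends an email outside the system
    },
}

assert onboard_brand_definition["annotations"]["readOnlyHint"] is False
```

With annotations carrying the behavioral flags, the description can spend its tokens on conflict handling and return values instead.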
rsnc_agent_perk_analytics (grade B)
Analyze perk collection performance: redemption rates, supply status, which perks are most popular. Helps brands optimize their reward catalog.
| Name | Required | Description | Default |
|---|---|---|---|
| brandId | Yes | The brand identifier. | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. While 'analyze' implies read-only access, the description doesn't confirm this, nor does it describe the return format, data freshness, pagination behavior, or any cost/query limit implications that an analytics tool typically requires.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The two-sentence structure is efficient and front-loaded with specific capabilities (metrics analyzed) before the value proposition. However, given the lack of annotations and output schema, the description could have used this space to disclose behavioral traits or return structure instead of ending with generic marketing language.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a single-parameter analytical tool, the description adequately identifies the analysis dimensions. However, given the absence of an output schema and annotations, it leaves significant gaps by not describing what data structure is returned, whether metrics are real-time or cached, or any filtering limitations beyond the implied brand scope.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage for the single `brandId` parameter, the schema adequately documents inputs. The description adds contextual meaning by indicating this ID is used to analyze that specific brand's perk performance, meeting the baseline expectation when the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly identifies the analytical purpose with specific metrics (redemption rates, supply status, popularity) and the resource (perk collection performance). It implicitly distinguishes from sibling tools like `brand_analytics` (broader scope) and `perk_audience` (different focus) by specifying performance analysis, though it doesn't explicitly name alternatives.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage through the value proposition ('Helps brands optimize their reward catalog'), suggesting when to use it (for optimization decisions). However, it lacks explicit guidance on when to prefer this over `perk_intelligence` or `brand_perks`, and doesn't mention prerequisites like needing an existing perk catalog.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
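The return-structure gap flagged above is exactly what an output schema would close. A minimal sketch of what one could look like for this tool — every field name here is hypothetical, not taken from the server's actual responses:

```python
# Hypothetical output schema for rsnc_agent_perk_analytics.
# Field names and enum values are illustrative assumptions.
perk_analytics_output_schema = {
    "type": "object",
    "properties": {
        "redemptionRate": {
            "type": "number",
            "description": "Claimed / issued, in the range 0..1.",
        },
        "supplyStatus": {
            "type": "string",
            "enum": ["healthy", "low", "depleted"],
        },
        "topPerks": {
            "type": "array",
            "items": {
                "type": "object",
                "properties": {
                    "perkId": {"type": "string"},
                    "claims": {"type": "integer"},
                },
            },
        },
    },
    "required": ["redemptionRate", "supplyStatus"],
}

assert "redemptionRate" in perk_analytics_output_schema["properties"]
```

Even a schema this small would let an agent know the call is a read that returns aggregates, answering the read-only ambiguity noted above.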
rsnc_agent_perk_audience (grade A)
Get the persona breakdown of users who claimed a specific perk: archetypes, engagement levels, and platform distribution.
| Name | Required | Description | Default |
|---|---|---|---|
| brandId | Yes | The brand identifier. | |
| collectionId | Yes | The perk collection ID. | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Without annotations, the description carries the full disclosure burden. It adds valuable behavioral context by specifying exactly what data is returned (archetypes, engagement levels, platform distribution), compensating for the missing output schema. However, it omits operational details like rate limits, pagination behavior, whether data is real-time or cached, and permission requirements.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Optimal single-sentence structure. Front-loaded action verb 'Get,' colon-separated list clarifying the three output dimensions, zero redundancy. Every word contributes to understanding the scope and return value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (2 flat parameters) and lack of output schema, the description appropriately compensates by detailing the three returned data categories. Sufficient for invocation decisions, though could mention if results are aggregated or time-bounded.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema coverage (brandId and collectionId both documented), the baseline is 3. The description adds minimal technical semantics beyond the schema, but provides domain context by referencing 'specific perk,' which anchors the abstract 'collectionId' to the business concept of a perk collection.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Excellent specificity: 'Get' (verb) + 'persona breakdown' (resource) + 'users who claimed a specific perk' (scope). The phrase distinguishes this from sibling rsnc_agent_brand_audience (broader brand audience) and rsnc_agent_user_persona (individual vs. aggregate perk claimers). The three data dimensions listed (archetypes, engagement levels, platform distribution) precisely define the output.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Implies usage context through the phrase 'users who claimed a specific perk,' suggesting this is for post-claim analytics. However, lacks explicit guidance on when to choose this over rsnc_agent_perk_analytics or rsnc_agent_brand_audience, and omits prerequisites like whether the brandId/collectionId combo must be valid/accessible.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
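The three output dimensions the description lists imply a response shape an agent can plan around. A sketch of the call and a plausible aggregate response — the IDs and response layout are assumptions, since the tool publishes no output schema:

```python
# Hypothetical call and response for rsnc_agent_perk_audience.
# IDs are made up; the response mirrors the description's three
# dimensions (archetypes, engagement levels, platform distribution).
request = {
    "tool": "rsnc_agent_perk_audience",
    "arguments": {"brandId": "brand_123", "collectionId": "col_456"},
}

example_response = {
    "archetypes": {"deal_hunter": 0.42, "loyalist": 0.33, "casual": 0.25},
    "engagementLevels": {"high": 0.30, "medium": 0.45, "low": 0.25},
    "platformDistribution": {"ios": 0.5, "android": 0.4, "web": 0.1},
}

# If the data is aggregated, each dimension should be a distribution
# summing to ~1.0 — a useful client-side sanity check.
for dimension in example_response.values():
    assert abs(sum(dimension.values()) - 1.0) < 1e-9
```

Whether results are aggregated this way, time-bounded, or per-user is the open question the review above raises.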
rsnc_agent_perk_intelligence (grade A)
Get intelligence on a brand's perks: heat scores, conversion funnels, supply depletion rates, and retention impact. Helps optimize perk catalogs.
| Name | Required | Description | Default |
|---|---|---|---|
| brandId | Yes | The brand identifier. | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. Lists specific data points returned (heat scores, funnels, depletion rates, retention impact), which discloses what the intelligence comprises. However, lacks information on data freshness, caching, rate limits, error behaviors, or side effects.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences totaling 20 words. First sentence front-loads specific capabilities and return values; second states value proposition. Zero redundancy or filler content. Every sentence earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a single-parameter read operation with no output schema, the description adequately compensates by listing the specific intelligence metrics returned. Sufficient for the tool's complexity level, though data freshness or pagination notes would enhance completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% coverage with 'brandId' described as 'The brand identifier.' Description references 'a brand's perks' implying the parameter context but adds no syntax or format details beyond schema. With complete schema coverage, baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear verb ('Get intelligence') and resource ('brand's perks'), with specific metrics listed (heat scores, conversion funnels, supply depletion rates, retention impact). Distinguishes from generic analytics siblings by specifying these particular intelligence metrics, though could explicitly differentiate from 'perk_analytics' or 'brand_analytics'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides implied usage via 'Helps optimize perk catalogs,' suggesting when to invoke the tool. However, lacks explicit guidance on when to prefer this over siblings like 'rsnc_agent_perk_analytics' or 'rsnc_agent_brand_analytics', and no contraindications or prerequisites mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
rsnc_agent_process_bulk (grade B)
Record multiple qualifying actions in one request — batch purchases, signups, or engagement events across brands. Requires authentication.
| Name | Required | Description | Default |
|---|---|---|---|
| events | Yes | Array of events to process. | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden. It mentions 'Requires authentication' but fails to disclose critical bulk operation behaviors: atomicity (all-or-nothing vs partial success), batch size limits, rate limiting, idempotency, or what constitutes a 'qualifying' action versus a rejected one.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Extremely efficient two-sentence structure. First sentence establishes scope and examples; second states authentication requirement. No redundancy, optimally front-loaded with the essential batch-processing concept.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Input requirements are fully covered by schema and description. However, given this is a complex bulk write operation with no output schema or annotations, the description should disclose behavioral outcomes (success indicators, failure modes, partial batch rejection) which are absent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100% (events array and all nested properties are fully documented). The description adds no parameter-specific syntax or semantics beyond the schema, which aligns with the baseline score of 3 for high-coverage schemas.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear verb 'Record' with specific resource 'qualifying actions' and concrete examples (purchases, signups, engagement). The terms 'multiple', 'batch', and 'in one request' effectively distinguish this from the sibling rsnc_agent_process_event which implies single-event processing, though it could explicitly name the alternative.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Implies usage context through 'batch' and 'multiple', suggesting when to use this over single-event tools. However, lacks explicit when-to-use/when-not-to-use guidance or explicit comparison to rsnc_agent_process_event for cases with only one event.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
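The atomicity gap flagged above matters operationally. A sketch of how an agent might defensively handle a bulk call, assuming — the description does not confirm this — that the server returns per-event results rather than failing the whole batch:

```python
# Hypothetical bulk payload and per-event result handling for
# rsnc_agent_process_bulk. Event shape and result fields are assumptions.
events = [
    {"userId": "a@example.com", "brandId": "brand_1", "eventType": "purchase"},
    {"userId": "b@example.com", "brandId": "brand_2", "eventType": "signup"},
]

def split_results(events, results):
    """Pair each submitted event with its result and separate failures,
    so the agent can retry only the rejected events."""
    ok, failed = [], []
    for event, result in zip(events, results):
        (ok if result.get("status") == "credited" else failed).append(event)
    return ok, failed

# Simulated per-event results: one success, one rejection.
results = [{"status": "credited", "reward": 1.50}, {"status": "rejected"}]
ok, failed = split_results(events, results)
assert len(ok) == 1 and failed[0]["eventType"] == "signup"
```

If the batch were instead all-or-nothing, this retry logic would be wrong — which is precisely why the description should state which behavior applies.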
rsnc_agent_process_event (grade B)
Record a qualifying action (purchase, signup, review, referral) to earn cashback for a user. Returns the reward amount credited. Requires authentication.
| Name | Required | Description | Default |
|---|---|---|---|
| userId | Yes | User email address. | |
| brandId | Yes | The brand identifier. Use rsnc_agent_list_brands to discover brands by category. | |
| metadata | No | Optional metadata to attach to the event. | |
| eventType | Yes | The type of event to track (e.g. "purchase", "review", "referral"). | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Discloses authentication requirement and return value ('Returns the reward amount credited'), helpful given no output schema and no annotations. However, misses side effects, idempotency, error modes, or criteria for 'qualifying' actions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences, zero waste. Front-loaded with action verb, followed by return value and auth requirement. No redundancy with schema or annotations.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a 4-parameter tool with no annotations, covering core purpose, auth, and return value. However, fails to resolve ambiguity with overlapping sibling tools or explain what data belongs in the nested metadata object.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% description coverage, establishing baseline 3. Description adds 'signup' to event type examples (schema lists purchase/review/referral) but doesn't elaborate on the optional 'metadata' object structure or required parameter interactions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear specific purpose: 'Record a qualifying action... to earn cashback' with concrete examples (purchase, signup, review, referral). However, it doesn't distinguish from similar siblings like 'rsnc_agent_create_event' or 'rsnc_agent_process_bulk'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this versus alternatives (create_event, process_bulk, route_purchase) which sound functionally overlapping. Only notes the authentication requirement without workflow context or prerequisites.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
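The undocumented `metadata` object noted above is where an agent has to guess. A sketch of a plausible single-event call — the metadata keys are illustrative assumptions, and the response shape is inferred from "Returns the reward amount credited":

```python
# Hypothetical call and response for rsnc_agent_process_event.
# metadata keys and response fields are assumptions.
request = {
    "tool": "rsnc_agent_process_event",
    "arguments": {
        "userId": "shopper@example.com",  # schema: user email address
        "brandId": "brand_123",           # discover via rsnc_agent_list_brands
        "eventType": "purchase",
        "metadata": {"orderId": "ord_789", "amountUsd": 40.0},  # assumed keys
    },
}

# Per the description, the response includes the reward amount credited:
example_response = {"rewardCredited": 2.00, "currency": "USD"}
assert example_response["rewardCredited"] > 0
```

Documenting which metadata keys affect reward calculation (if any) would remove the main first-attempt failure risk here.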
rsnc_agent_redeem_perk (grade A)
Claim a reward for a user — redeem a discount, freebie, or exclusive offer using their earned cashback. Returns a confirmation code. Requires authentication.
| Name | Required | Description | Default |
|---|---|---|---|
| perkId | Yes | The ID of the perk to redeem. | |
| userId | Yes | User identifier (email or wallet address). | |
| brandId | Yes | The brand identifier. Use rsnc_agent_list_brands to discover brands by category. | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries the full burden and successfully discloses: return value (confirmation code), authentication requirement, and side effect (consumes earned cashback). It does not mention reversibility or rate limits, but covers the critical behavioral traits.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two compact sentences with zero redundancy. Front-loaded action ('Claim a reward'), followed by mechanism, return value, and auth requirement. Every clause delivers distinct information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequately compensates for missing output schema by describing the confirmation code return. Covers authentication and mutation side effects (cashback deduction). Lacks error handling or prerequisite details (e.g., minimum cashback balance), but sufficient for a 3-parameter mutation tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema achieves 100% description coverage for all 3 parameters. The description provides no additional parameter-specific semantics (e.g., format examples, validation rules), warranting the baseline score for high-coverage schemas.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
States specific verb (redeem/claim), resource (discount/freebie/offer), and mechanism (using earned cashback). However, it does not distinguish from sibling tool `rsnc_agent_claim_reward`, which has nearly synonymous naming.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Mentions authentication requirement but lacks explicit when-to-use guidance or differentiation from `rsnc_agent_claim_reward` or `rsnc_agent_browse_perks`. The schema references `rsnc_agent_list_brands`, but the description itself provides no workflow context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
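Because the description is silent on minimum-balance prerequisites, a careful agent would guard the call itself. A sketch of that client-side precaution — the balance rule and response fields are assumptions, not server behavior:

```python
# Hypothetical redeem flow for rsnc_agent_redeem_perk. The affordability
# guard is the agent's own precaution; the tool does not document one.
def can_afford(balance_usd, perk_cost_usd):
    """Client-side guard before redeeming: skip calls that would fail
    on insufficient earned cashback."""
    return balance_usd >= perk_cost_usd

arguments = {
    "perkId": "perk_42",
    "userId": "shopper@example.com",  # email or wallet address
    "brandId": "brand_123",
}

# Per the description, a successful call returns a confirmation code
# (format assumed here):
example_response = {"confirmationCode": "RSNC-ABC123"}

assert can_afford(10.0, 7.5)
assert example_response["confirmationCode"].startswith("RSNC-")
```

Whether a failed redemption leaves the cashback balance untouched is the reversibility question the review above leaves open.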
rsnc_agent_request_info (grade A)
Request information about adding a Resonance cashback program to a business. Captures contact details for the partnerships team.
| Name | Required | Description | Default |
|---|---|---|---|
| website | No | Business website URL. | |
| industry | No | Business industry category. | |
| businessName | Yes | Name of the business. | |
| contactEmail | Yes | Contact email for the business owner or decision maker. | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries the full burden. It discloses the write nature ('Captures contact details') and destination ('partnerships team'), which is essential context. However, it omits idempotency behavior, response format, and what confirmation the user receives after submission.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with zero waste: first states purpose, second discloses the capture mechanism. Front-loaded and appropriately sized for a 4-parameter tool.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple lead-capture tool without output schema, the description adequately covers intent (cashback program inquiry), audience (businesses), and outcome (partnerships team notification). Minor gap: does not mention what return value or confirmation the caller receives.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% (all 4 params described), so baseline is 3. The description mentions 'Captures contact details' which loosely aligns with contactEmail/businessName, but adds no format constraints, validation rules, or semantic relationships beyond the schema definitions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Excellent specificity: 'Request information' (verb) + 'about adding a Resonance cashback program' (resource/scope). The phrase distinguishes from sibling rsnc_agent_onboard_brand by clarifying this is an information inquiry/capture step rather than the actual onboarding process.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Implies usage context ('adding a Resonance cashback program' suggests early-stage business interest), but lacks explicit guidance on when to use vs rsnc_agent_onboard_brand or what prerequisites exist. No 'when not to use' specified.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
rsnc_agent_route_purchase (grade A)
Find the best brand to buy from based on cashback and rewards. Given a shopping intent, compares all matching brands and ranks them by total reward value — cashback earned, perks redeemable, and effective savings. The core purchase intelligence tool for agentic commerce.
| Name | Required | Description | Default |
|---|---|---|---|
| intent | Yes | What the user wants to buy — e.g. "running shoes", "coffee", "hotel in NYC". | |
| userId | No | Optional user ID for personalized routing — factors in existing balances and redeemable perks. | |
| category | No | Direct category filter (retail, dining, travel, gaming). | |
| purchaseAmount | No | Expected purchase amount in USD. Used to calculate exact cashback and savings. | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden. It effectively explains the ranking algorithm (cashback + perks + savings) but omits operational details like whether this is a read-only query, if it consumes API quotas, or what the return structure looks like (ranked list vs single recommendation). No mention of destructive side effects or caching behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three tightly constructed sentences with no repetition. Front-loaded with the core action ('Find the best brand'). The final sentence ('The core purchase intelligence tool...') borders on marketing fluff but serves the functional purpose of positioning among 40+ sibling tools. No wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the 4-parameter input with 100% schema coverage and no output schema, the description adequately covers the tool's purpose and mechanism. While it would benefit from hinting at the output structure (e.g., 'returns ranked brand recommendations'), the combination of the tool name ('route_purchase') and the ranking description provides sufficient context for invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, establishing a baseline of 3. The description adds value by explaining how parameters interact: purchaseAmount is 'used to calculate exact cashback' and userId 'factors in existing balances.' It also maps the 'intent' parameter to the natural language concept of 'shopping intent,' adding semantic clarity beyond the schema's basic 'what the user wants to buy.'
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Excellent specificity: states exact action ('Find the best brand'), selection criteria ('based on cashback and rewards'), and ranking logic ('ranks them by total reward value'). The phrase 'core purchase intelligence tool' distinguishes it from sibling analytics tools (e.g., rsnc_agent_brand_analytics) by positioning it as the primary recommendation engine for transactions.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides implicit context ('Given a shopping intent'), suggesting use when a user wants to make a purchase. However, lacks explicit guidance on when to use this versus sibling tools like rsnc_agent_compare_brands or rsnc_agent_best_deals, which also involve comparison and discovery. No mention of prerequisites or exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
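The missing output hint ("returns ranked brand recommendations") can be illustrated concretely. A sketch of the call and a ranked response an agent might expect — the response field names are assumptions, since the tool publishes no output schema:

```python
# Hypothetical call and ranked response for rsnc_agent_route_purchase.
# Response fields are assumptions.
arguments = {
    "intent": "running shoes",
    "purchaseAmount": 120.0,          # USD; enables exact cashback math
    "userId": "shopper@example.com",  # optional personalization
}

example_response = [
    {"brandId": "brand_a", "totalRewardUsd": 9.60, "cashbackPct": 8.0},
    {"brandId": "brand_b", "totalRewardUsd": 6.00, "cashbackPct": 5.0},
]

# The description promises ranking by total reward value, so the list
# should be sorted descending:
rewards = [brand["totalRewardUsd"] for brand in example_response]
assert rewards == sorted(rewards, reverse=True)
```

One sentence in the description stating this ranked-list shape would resolve the read-only and return-structure questions raised above.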
rsnc_agent_stack_deals (grade A)
Calculate the optimal deal stack for a specific purchase at a specific brand. Combines base cashback earned + best redeemable perk + any active promotions to show total savings. Use after route_purchase to maximize value at the chosen brand.
| Name | Required | Description | Default |
|---|---|---|---|
| userId | Yes | User ID to check redeemable perks against their balance. | |
| brandId | Yes | The brand to optimize the deal for. | |
| purchaseAmount | Yes | Purchase amount in USD. | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Moderate: With no annotations, description carries full burden. It explains the calculation components (cashback + perk + promotions) but fails to clarify if this is a pure calculation/simulation or if it applies/reduces the user's perk balance. Given sibling 'redeem_perk' exists, this distinction is critical but missing.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Perfect: Three sentences with zero waste. Front-loaded purpose ('Calculate...'), followed by mechanism ('Combines...'), followed by usage context ('Use after...'). No redundancy with schema or sibling names.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Strong for input side (3 params, 100% schema coverage). Given no output schema, the description partially compensates by mentioning 'show total savings', though it could specify whether the return is a dollar amount, a percentage, or a breakdown object. Sufficient but not exhaustive.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Good: With 100% schema coverage, baseline is 3. Description adds value by explaining the conceptual relationship between parameters—'stack for a specific purchase at a specific brand' clarifies why brandId, purchaseAmount, and userId must be provided together as a transaction context.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Excellent specificity: 'Calculate the optimal deal stack' uses a concrete verb with clear resource (deal stack) and scope (specific purchase at specific brand). Distinguishes from general deal tools like 'best_deals' by emphasizing the stacking calculation aspect.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Excellent: Explicitly states 'Use after route_purchase' establishing clear workflow sequencing. References specific sibling tool 'route_purchase' by name and explains the value proposition ('maximize value'), giving agents precise context on when to invoke.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
rsnc_agent_suggest_events (A)
Get AI-recommended reward event configurations for a brand. Analyzes what event types, reward amounts, and cooldowns work best in the brand's category based on network-wide patterns.
| Name | Required | Description | Default |
|---|---|---|---|
| brandId | Yes | The brand identifier. | |
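As a sketch of how an agent would invoke this single-parameter tool, the following builds an MCP `tools/call` request per the JSON-RPC shape the protocol uses. The `brandId` value is purely illustrative; real IDs would come from a brand-listing tool.

```python
import json

# Hypothetical MCP "tools/call" request for rsnc_agent_suggest_events.
# The brandId value is illustrative, not a real brand.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "rsnc_agent_suggest_events",
        "arguments": {"brandId": "brand_coffeehouse"},  # the tool's only (required) parameter
    },
}

print(json.dumps(request, indent=2))
```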
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. Adds valuable context about AI methodology (analyzes network-wide patterns, category benchmarks) but omits safety properties (read-only status, latency, rate limits) and return structure details.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two well-structured sentences with zero waste. First establishes action and resource, second explains methodology and scope. Information density is high with no filler.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
With no output schema, description effectively details conceptual return values (event types, reward amounts, cooldowns). Missing explicit behavioral safety notes given lack of annotations, but adequate for a single-parameter recommendation tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% coverage with clear 'brandId' description. Description implies the brand context ('for a brand') but adds no syntax or format details beyond schema. Baseline 3 appropriate for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Excellent specificity with concrete verbs ('Get', 'Analyzes') and resource ('reward event configurations'). Clearly distinguishes from sibling 'create_event' (recommendation vs. execution) and 'suggest_perks' (events vs. perks) through scope description.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Implies usage through 'AI-recommended' and analysis description, but lacks explicit guidance on when to prefer this over create_event/update_event or workflow integration (e.g., 'call this before creating events').
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
rsnc_agent_suggest_perks (B)
Get AI-recommended perk configurations for a brand. Suggests perk types, pricing, and supply limits based on audience composition and balance distribution patterns.
| Name | Required | Description | Default |
|---|---|---|---|
| brandId | Yes | The brand identifier. | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full disclosure burden. It explains the recommendation logic (based on audience composition and balance patterns) but does not clarify operational traits such as whether this operation is read-only, idempotent, computationally expensive, or if it has rate limits. The term 'Get' implies safety, but explicit confirmation is absent.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description consists of two efficient sentences with zero waste. The first establishes the core action and resource, while the second elaborates on specific outputs and algorithmic inputs without redundancy. It is appropriately front-loaded and sized for quick comprehension.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a single-parameter tool without an output schema, the description adequately covers the functional scope by enumerating what constitutes a 'perk configuration' (types, pricing, limits) and the data driving recommendations. It lacks only explicit side-effect or cost disclosures, which would be necessary for a complete operational picture given the absence of annotations.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% for the single 'brandId' parameter. The description adds business context that the brand ID drives analysis of 'audience composition and balance distribution patterns,' which adds semantic meaning beyond the schema's technical 'brand identifier' label. With high schema coverage, this meets the baseline expectation of augmenting the raw schema definition.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool 'Get[s] AI-recommended perk configurations' and specifies the outputs (perk types, pricing, supply limits) and inputs (audience composition, balance distribution). However, it does not explicitly differentiate from siblings like 'browse_perks' or 'create_perk', which is critical given the dense perk-related toolset.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explains what the tool does but provides no guidance on when to use it versus alternatives such as 'rsnc_agent_create_perk', 'rsnc_agent_browse_perks', or 'rsnc_agent_brand_perks'. Given the many sibling tools with overlapping domains, the lack of explicit selection criteria is a significant gap.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
rsnc_agent_update_event (A)
Update an existing reward event's configuration. Changes to reward amounts are validated against performance data to ensure sufficient data for evaluation.
| Name | Required | Description | Default |
|---|---|---|---|
| brandId | Yes | The brand identifier (wallet address). | |
| enabled | No | Enable or disable the event. | |
| eventId | Yes | The event identifier to update. | |
| maxClaims | No | New maximum total claims allowed. 0 = unlimited. | |
| rewardAmount | No | New RSNC tokens awarded per claim. | |
| cooldownHours | No | New cooldown period in hours between claims per user. | |
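Since only `brandId` and `eventId` are required, a partial-update call should carry just the fields being changed. A minimal argument-builder sketch, with illustrative values and a hypothetical helper name:

```python
# Hypothetical partial-update arguments for rsnc_agent_update_event.
# Only fields that should change are included; per the schema,
# maxClaims=0 means unlimited claims.
def build_update_event_args(brand_id, event_id, **changes):
    allowed = {"enabled", "maxClaims", "rewardAmount", "cooldownHours"}
    unknown = set(changes) - allowed
    if unknown:
        raise ValueError(f"unknown fields: {unknown}")
    return {"brandId": brand_id, "eventId": event_id, **changes}

# Raise the per-claim reward and remove the claim cap.
args = build_update_event_args("0xBrandWallet", "evt_42", rewardAmount=25, maxClaims=0)
```

Validating field names client-side catches typos before the request ever reaches the server, which matters for a mutation tool with no published output schema.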
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description adds valuable behavioral context about validation logic ('validated against performance data to ensure sufficient data'). However, it omits safety-critical details like whether updates are reversible, what happens on validation failure, or required permissions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two tightly constructed sentences with zero waste. The first establishes the operation, the second adds the validation constraint. Every word earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
With six parameters and 100% schema coverage, the description covers the core purpose adequately. However, for a mutation tool without output schema or annotations, it should disclose the return value structure or success/failure indicators to be complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, establishing a baseline of 3. The description mentions 'reward amounts' which maps to rewardAmount param, but this merely restates schema information without adding syntax details, edge cases, or relationships between parameters (e.g., that changing rewardAmount triggers validation).
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly identifies the verb (Update) and resource (reward event's configuration), distinguishing it from sibling create_event and process_event tools. It could be elevated to a 5 by explicitly contrasting with create_event or clarifying this is for configuration edits versus state changes.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The mention that 'Changes to reward amounts are validated against performance data' provides implicit guidance about prerequisites, suggesting performance data should exist first. However, it lacks explicit when-to-use/when-not-to-use rules or comparison to alternatives like create_event.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
rsnc_agent_update_perk (C)
Update a perk collection's configuration. Can adjust pricing, supply, and active status.
| Name | Required | Description | Default |
|---|---|---|---|
| brandId | Yes | The brand identifier (wallet address). Used for permission and scope checks. | |
| isActive | No | Enable or disable the perk. | |
| maxSupply | No | New total supply. | |
| description | No | New description text. | |
| collectionId | Yes | The perk collection ID to update. | |
| priceInPoints | No | New RSNC cost to redeem. | |
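Because the description does not say how unspecified fields are treated, a cautious client would send only the fields it intends to change. A sketch of a drop-unset helper, with hypothetical naming and illustrative values:

```python
# Hypothetical helper for rsnc_agent_update_perk: collect optional fields
# and drop any left as None, so the request carries only real changes.
# Field names follow the schema above; values are illustrative.
def build_update_perk_args(brand_id, collection_id, is_active=None,
                           max_supply=None, description=None, price_in_points=None):
    optional = {
        "isActive": is_active,
        "maxSupply": max_supply,
        "description": description,
        "priceInPoints": price_in_points,
    }
    args = {"brandId": brand_id, "collectionId": collection_id}
    # "is not None" (rather than truthiness) keeps isActive=False intact.
    args.update({k: v for k, v in optional.items() if v is not None})
    return args

# Lower the redemption price; everything else stays untouched.
args = build_update_perk_args("0xBrandWallet", "perk_7", price_in_points=150)
```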
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must carry the full burden of behavioral disclosure. While it identifies this as a mutation ('Update'), it fails to disclose whether partial updates are supported, what happens to unspecified fields, idempotency characteristics, or permission requirements beyond the schema's mention of scope checks.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is exactly two sentences with zero redundancy. The first sentence establishes the primary action and target resource, while the second enumerates the configurable aspects. Every word earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
As a mutation tool with no output schema and no annotations, the description leaves significant gaps. It should disclose whether the update is atomic, how to unset fields (if possible), error handling patterns, or whether the operation triggers side effects like invalidating existing claims. The current description is insufficient for safe invocation of a write operation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the schema already documents all six parameters adequately. The description maps the abstract concepts ('pricing, supply, active status') to the schema fields but adds no additional validation rules, format constraints, or interdependency logic beyond what the schema provides. This meets the baseline for high-coverage schemas.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly identifies the action ('Update'), resource ('perk collection's configuration'), and specific updatable fields ('pricing, supply, and active status'). While it doesn't explicitly name sibling tools like 'create_perk' to differentiate, the verb 'Update' and context make the scope distinguishable.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description lists what can be updated but provides no guidance on when to use this tool versus alternatives like 'create_perk' or 'rsnc_agent_update_event'. There is no mention of prerequisites (e.g., requiring an existing collectionId) or exclusion criteria.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
rsnc_agent_user_balance (A)
Check how much cashback a user has earned at a specific brand and how much they can spend on rewards. Requires authentication.
| Name | Required | Description | Default |
|---|---|---|---|
| userId | Yes | User identifier (email address or wallet address). | |
| brandId | Yes | The brand identifier. Use rsnc_agent_list_brands to discover brands by category. | |
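The schema accepts either an email address or a wallet address as `userId`. A deliberately loose client-side sanity check (the patterns are illustrations, not full validators):

```python
import re

# Hypothetical pre-flight check before calling rsnc_agent_user_balance:
# userId may be an email address or a 0x-prefixed wallet address.
def looks_like_user_id(value):
    email = re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", value)
    wallet = re.fullmatch(r"0x[0-9a-fA-F]{40}", value)
    return bool(email or wallet)

args = {"userId": "shopper@example.com", "brandId": "brand_coffeehouse"}
assert looks_like_user_id(args["userId"])
```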
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. Successfully discloses authentication requirement. However, omits other behavioral traits like return format structure, rate limits, or explicit read-only nature despite being a 'Check' operation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two efficiently structured sentences with no filler. First sentence states the core functionality, second states the authentication requirement. Front-loaded and appropriately sized for a simple query tool.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a simple two-parameter query tool. Describes the specific data returned (cashback amount and spendable balance) compensating partially for missing output schema. Could benefit from explicit differentiation from similar user-query siblings.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with detailed descriptions for userId and brandId. The description mentions 'at a specific brand' reinforcing the brandId context and implies user via 'a user', but does not add substantial semantics beyond what the schema provides. Baseline 3 appropriate for high-coverage schemas.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clear verb 'Check' and specific resource scope (cashback earned and spendable rewards at a specific brand). Accurately conveys the dual information retrieval purpose. Does not explicitly differentiate from sibling user-portfolio tools like rsnc_agent_user_stats, though the specific-brand scope is distinctive.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Includes explicit prerequisite 'Requires authentication'. The schema cross-references rsnc_agent_list_brands for brand discovery. However, lacks explicit guidance on when to use this versus rsnc_agent_user_portfolio or rsnc_agent_my_rewards among the many user-focused siblings.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
rsnc_agent_user_persona (A)
Get a user's behavioral profile: earn patterns, persona archetype, tag/brand affinities, price preferences, and network position. Requires HMAC authentication.
| Name | Required | Description | Default |
|---|---|---|---|
| userId | Yes | The user identifier (email or wallet address). | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Discloses authentication requirements and hints at return value structure via the enumerated fields. However, with no annotations provided, the description omits critical safety characteristics (read-only status, privacy sensitivity of behavioral data, error modes) that the agent needs to handle the tool responsibly.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single efficient sentence with colon-separated list. Front-loaded action verb, zero filler words. Authentication requirement placed appropriately at end as prerequisite.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Compensates partially for missing output schema by listing five specific return value categories. However, given sensitive behavioral profiling context and absence of annotations/output schema, description should disclose privacy scope, error behaviors, or data freshness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% (userId fully documented). The description does not elaborate on the parameter format or semantics, but baseline 3 is appropriate as the schema carries the full load adequately.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
States specific verb ('Get') and resource ('user's behavioral profile'), and enumerates distinct content domains (earn patterns, archetype, affinities) that differentiate it from quantitative siblings like user_stats or user_balance. Lacks explicit comparative differentiation statement.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides authentication prerequisite ('Requires HMAC authentication'), but lacks explicit guidance on when to select this vs. sibling user tools (user_stats, user_recommendations) or what behavioral questions this answers best.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
rsnc_agent_user_portfolio (A)
Get a unified view of a user's rewards across ALL brands — total balance, per-brand breakdown, active streaks, and the best perk they can afford right now. The cross-brand loyalty dashboard. Requires authentication.
| Name | Required | Description | Default |
|---|---|---|---|
| userId | Yes | User identifier (email address or wallet address). | |
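With no published output schema, a consumer should read the result defensively. The field names below are assumptions inferred from the description ('total balance, per-brand breakdown, active streaks'), not documented keys:

```python
# Hypothetical defensive read of a rsnc_agent_user_portfolio result.
# All field names here are assumptions; .get() guards against any of
# them being absent from the actual response.
result = {
    "totalBalance": 1200,
    "perBrand": [{"brandId": "brand_coffeehouse", "balance": 300}],
    "activeStreaks": [],
}

total = result.get("totalBalance", 0)
streaks = result.get("activeStreaks", [])
per_brand = {row["brandId"]: row["balance"] for row in result.get("perBrand", [])}
```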
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It compensates partially by detailing the conceptual return structure (what data points are returned) and noting authentication requirements. However, it omits operational details such as whether the operation is read-only, idempotent, or subject to rate limiting.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the action ('Get a unified view') and scope, uses an em-dash efficiently to list specific data components, and ends with a clear classification ('The cross-brand loyalty dashboard') and prerequisite ('Requires authentication'). The tagline is slightly redundant but contributes to conceptual framing.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the absence of an output schema and annotations, the description adequately compensates by enumerating the specific data components returned (balance, breakdown, streaks, affordable perks). For a single-parameter read operation, this provides sufficient conceptual completeness to infer the tool's utility.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% coverage with the userId parameter fully documented as 'User identifier (email address or wallet address).' Since the schema is self-explanatory, the baseline score of 3 applies; the description provides implicit semantic context by referring to 'user's rewards' but does not add parameter-specific constraints or formatting details beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves a 'unified view of a user's rewards across ALL brands' with specific outputs listed (total balance, per-brand breakdown, active streaks, best affordable perk). The emphasis on 'ALL brands' and 'cross-brand loyalty dashboard' effectively distinguishes it from sibling user-specific tools like rsnc_agent_user_balance or rsnc_agent_my_rewards that likely focus on narrower data sets.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description notes 'Requires authentication,' establishing a prerequisite, and implies usage context via 'cross-brand loyalty dashboard.' However, it lacks explicit guidance on when to choose this over siblings (e.g., 'use this instead of user_balance when you need streak data') or when not to use it.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
rsnc_agent_user_recommendations (A)
Get personalized perk recommendations for a user, scored by persona affinity and affordability. Requires HMAC authentication.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of recommendations to return. Defaults to 5. | |
| userId | Yes | The user identifier (email or wallet address). | |
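Since `limit` is optional and defaults to 5 server-side, omitting it and sending `limit=5` should be equivalent. A small argument-builder sketch with illustrative values:

```python
# Hypothetical argument builder for rsnc_agent_user_recommendations.
# limit is only included when the caller wants something other than
# the documented server-side default of 5.
def build_recommendation_args(user_id, limit=None):
    args = {"userId": user_id}
    if limit is not None:
        args["limit"] = limit
    return args

default_call = build_recommendation_args("shopper@example.com")            # rely on default (5)
explicit_call = build_recommendation_args("shopper@example.com", limit=3)  # override
```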
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Without annotations, the description carries the full burden. It successfully discloses the scoring methodology (persona affinity + affordability) and authentication requirements. However, it omits whether this is idempotent, rate-limited, or what happens if the user has no recommendations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with no waste. Front-loaded with the core action and resource, followed by authentication requirements. Every word earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the lack of output schema, the description could better prepare the agent by hinting at return structure (e.g., ranked list of perks with scores). The scoring methodology mention helps, but completeness is limited by absence of usage context versus siblings.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the schema fully documents both userId and limit parameters. The description adds no additional parameter semantics (formats, examples, constraints) beyond what the schema provides, meeting the baseline expectation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
States specific verb 'Get' and resource 'personalized perk recommendations'. The 'scored by persona affinity and affordability' clause effectively differentiates this from sibling tools like browse_perks or suggest_perks by describing the ranking methodology.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Mentions 'Requires HMAC authentication' which is a critical prerequisite for usage. However, fails to specify when to use this versus similar siblings like suggest_perks or browse_perks, or what inputs are required for successful personalization.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
rsnc_agent_user_stats (A)
Get a user's complete reward profile for a brand: total earned, total redeemed, active streaks, and achievements. Useful for recommending next actions. Requires authentication.
| Name | Required | Description | Default |
|---|---|---|---|
| userId | Yes | User identifier (email address or wallet address). | |
| brandId | Yes | The brand identifier. Use rsnc_agent_list_brands to discover brands by category. | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It successfully notes the authentication requirement and outlines the return payload structure (earned amounts, streaks, achievements), but omits operational details like rate limits, caching behavior, or side effects.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is optimally structured with three efficient sentences: the first defines the action and return payload, the second states the use case, and the third notes the auth requirement. No filler content; every clause earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (2 string parameters, no output schema, no annotations), the description adequately covers the essential context: it explains what 'stats' encompasses, mentions the auth constraint, and suggests a use case. Minor gap in not contrasting with sibling user-query tools.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% description coverage (userId and brandId are fully documented in the schema), establishing a baseline of 3. The description mentions 'for a brand' which aligns with the brandId parameter but adds no additional semantic detail beyond what the schema already provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves a 'user's complete reward profile for a brand' using a specific verb ('Get') and enumerates the exact data elements returned (total earned, redeemed, streaks, achievements). However, it fails to differentiate from sibling user-data tools like rsnc_agent_user_balance, rsnc_agent_user_portfolio, or rsnc_agent_my_rewards.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides an implied usage context ('Useful for recommending next actions') and a prerequisite ('Requires authentication'), but lacks explicit guidance on when to use this tool versus alternatives such as rsnc_agent_user_balance or rsnc_agent_user_recommendations.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
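Taken together, the dimensions above suggest what a strong description looks like: a specific verb and resource, an enumeration of returned data, explicit differentiation from sibling tools, usage guidance, and the auth requirement. The sketch below shows one hypothetical rewrite along those lines (the sibling tool names come from the review text; the wording itself is an illustrative suggestion, not the server's actual description), plus a simple check mirroring the review criteria.

```python
# A hypothetical improved description for the brand-profile tool,
# assembled to satisfy the review dimensions discussed above.
IMPROVED_DESCRIPTION = (
    "Get a user's complete reward profile for a single brand: total earned, "
    "total redeemed, current streaks, and achievements. "
    "Use this when recommending next actions for one brand; "
    "use rsnc_agent_user_balance for a quick balance check and "
    "rsnc_agent_user_portfolio for a cross-brand overview. "
    "Requires authentication."
)

def covers_dimensions(desc: str) -> bool:
    """Rough heuristic mirroring the review dimensions: a leading action
    verb, explicit differentiation from a sibling tool, and a stated
    auth requirement."""
    has_verb_and_resource = desc.startswith("Get")
    differentiates = "rsnc_agent_user_balance" in desc
    states_auth = "Requires authentication" in desc
    return has_verb_and_resource and differentiates and states_auth
```

The point is not the heuristic itself but the shape of the description: each clause answers one of the questions the rubric asks.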
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}
The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
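Before publishing, you can sanity-check the file locally. The sketch below is an assumption about the shape of the check Glama performs (a syntactically valid JSON document whose maintainers list contains your account email); the exact server-side verification is not documented here.

```python
import json

def validate_glama_json(raw: str, account_email: str) -> bool:
    """Return True if the payload parses as JSON and lists
    account_email among its maintainers."""
    data = json.loads(raw)
    maintainers = data.get("maintainers", [])
    return any(m.get("email") == account_email for m in maintainers)

# The structure shown in the claiming instructions above.
payload = """{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}"""
```

Serve the validated file at `/.well-known/glama.json` on your server's domain; verification fails if the maintainer email does not match your Glama account.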
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail — every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control — enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management — store and rotate API keys and OAuth tokens in one place
Change alerts — get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption — public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics — see which tools are being used most, helping you prioritize development and documentation
Direct user feedback — users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.