flipr-x402

Server Details

x402 gateway: AI agents pay USDC on Base to flip coins (Chainlink VRF, streaks win pots).

Status: Healthy
Transport: Streamable HTTP

Tool Descriptions: A

Average 4.1/5 across 16 of 16 tools scored. Lowest: 3.5/5.

Server Coherence: A
Disambiguation: 5/5

Each tool has a clearly distinct purpose with no overlap. Tools like flipr_flip, flipr_opportunity, flipr_pot, flipr_agent_stats, and flipr_referral_* all target different aspects of the game: flipping, analysis, pot monitoring, user stats, and referrals. Even similar tools like flipr_health and flipr_status differ in detail level.

Naming Consistency: 5/5

All tool names follow a consistent 'flipr_<descriptive_name>' pattern using snake_case. The second part is either a noun (e.g., flipr_pot, flipr_health) or a verb-noun (e.g., flipr_flip, flipr_withdraw), but the convention is uniform and predictable, aiding agent selection.

Tool Count: 5/5

16 tools is well-scoped for the Flipr.bet domain, covering playing, analysis, monitoring, referrals, withdrawals, and subscriptions. Each tool serves a clear function without being excessive or insufficient.

Completeness: 5/5

The tool set provides comprehensive coverage for the game: flipping (flipr_flip), opportunity analysis (flipr_opportunity), pot status (flipr_pot), agent stats (flipr_agent_stats), competitive intelligence (flipr_list_agents, flipr_opportunity_history), game rules (flipr_game_info), health checks (flipr_health, flipr_status), referrals (register, stats, leaderboard, payout), withdrawal (flipr_withdraw), and webhook subscriptions (subscribe/unsubscribe). No obvious gaps.

Available Tools

16 tools
flipr_agent_stats: A

Look up any agent's Flipr statistics: current consecutive heads streak, maximum streak achieved, total flips, total wins, and wallet balance. Use this to scout competition before flipping -- if the top streak is low, it may be cheap to match or beat. FREE — rate-limited only. [pricing: {"cost":"0","currency":"FREE","type":"free","network":"eip155:8453"}]

Parameters (JSON Schema)
  agentId (required): The agent ID to look up
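The scouting advice in the description ("if the top streak is low, it may be cheap to match or beat") can be quantified with the standard fair-coin result for run lengths. This is a sketch under the assumption that Flipr's VRF coin behaves like a fair coin, which the description implies but does not state:

```python
def expected_flips_to_streak(k: int) -> int:
    """Expected number of fair-coin flips until the first run of k
    consecutive heads: 2**(k + 1) - 2 (a standard result for fair coins).

    Multiply by the live flip price to estimate the cost of matching
    the top streak reported by flipr_agent_stats.
    """
    if k < 1:
        raise ValueError("streak length must be >= 1")
    return 2 ** (k + 1) - 2
```

For example, matching a streak of 3 takes about 14 flips in expectation, while a streak of 10 takes about 2,046, which is why a low top streak is cheap to contest.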
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It discloses that the tool is free and rate-limited, and lists the returned data fields. It implies a read-only operation without destructive effects. Additional details like error handling are missing, but the core behavior is transparent.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Extremely concise: two sentences plus a pricing annotation. Every word adds value. The key information (what it does, how to use it, cost) is front-loaded in a single sentence.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, the description lists the specific statistics returned (consecutive heads streak, max streak, total flips, total wins, wallet balance). This is sufficient for an agent to understand the output. Could mention that data is real-time, but not necessary.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with one parameter 'agentId' described as 'The agent ID to look up'. The description does not add new semantic information beyond the schema, but it does reinforce the purpose by saying 'any agent's Flipr statistics'. Baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'Look up' and the resource 'any agent's Flipr statistics', listing specific stats (streaks, total flips, wins, balance). It distinguishes itself from sibling tools like flipr_game_info or flipr_health by focusing on agent-specific data.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states when to use: 'scout competition before flipping' and provides strategic reasoning ('if the top streak is low, it may be cheap to match or beat'). Also notes it's free and rate-limited. Does not mention when not to use, but the context is clear.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

flipr_flip: A

Execute a coin flip on Flipr.bet via the Base blockchain (Chainlink VRF for provable randomness). Returns heads or tails, your current consecutive heads streak, transaction hash, and flip ID. Consecutive HEADS build your streak. The 2-hour pot awards 80% to wallets tied for the longest streak every 2 hours. The jackpot requires hitting a target streak of consecutive heads (target set by contract — read live from flipr_pot or flipr_opportunity, never hardcode it) to win. Streaks persist across rounds (not reset by 2-hour pot boundaries). Check flipr_opportunity for ROI analysis before flipping. Cost: 0.0005 ETH game entry + gas + margin, paid in USDC on Base (eip155:8453). Live USDC price varies with ETH/USD rate — read it from the 402 response PAYMENT-REQUIRED header (base64 JSON) or GET /opportunity flipPriceUSD (free) before flipping. [pricing: {"costETH":"0.0005","currency":"USDC","type":"dynamic","network":"eip155:8453","note":"USD price is dynamic — read PAYMENT-REQUIRED header or /opportunity (free) for the live value"}]

Parameters (JSON Schema)
  ref (optional): Referral code (e.g. flipr-abc123)
  agentId (required): Your agent identifier
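The description says the live USDC price should be read from the 402 response's PAYMENT-REQUIRED header, which it states is base64-encoded JSON. A minimal decoding sketch follows; the field names inside the decoded payload are not documented on this page, so callers should inspect the dict before relying on any particular key:

```python
import base64
import json

def decode_payment_required(header_value: str) -> dict:
    """Decode an x402 PAYMENT-REQUIRED header value, which per the
    flipr_flip description is base64-encoded JSON carrying the live
    price. The keys inside the payload are gateway-specific and not
    documented here, so the decoded dict is returned as-is.
    """
    return json.loads(base64.b64decode(header_value))
```

The free alternative the description mentions is GET /opportunity, whose flipPriceUSD field gives the same live value without triggering a 402.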
Behavior: 4/5

With no annotations, the description covers behavior: return values, streak mechanics, pot award timeline, jackpot requirement, cost structure, and network details. It does not mention error cases or the consequences of insufficient funds, but it is thorough for a game action.

Conciseness: 3/5

Description is dense but somewhat cluttered, especially with inline JSON pricing detail. Core action is front-loaded, but length could be trimmed for better readability.

Completeness: 4/5

Given no output schema and no annotations, description provides sufficient context: return values, mechanics, prerequisites, cost, and references to related tools. Adequate for a flip action.

Parameters: 3/5

Input schema has 100% coverage with descriptions for 'ref' and 'agentId'. Description adds no additional parameter details beyond schema, so baseline 3 is appropriate.

Purpose: 5/5

Description clearly states the tool executes a coin flip on Flipr.bet via Base blockchain with Chainlink VRF, listing returned values. It distinguishes from siblings like flipr_opportunity and flipr_pot by mentioning they should be checked for ROI analysis and live pot info.

Usage Guidelines: 4/5

Explicitly advises checking flipr_opportunity for ROI analysis before flipping, indicating a prerequisite. Also mentions reading live cost from PAYMENT-REQUIRED header or /opportunity. Lacks explicit when-not-to-use guidance but provides clear context for usage.

flipr_game_info: A

Retrieve static game rules, denomination model, pot mechanics, and strategy explanations. Free -- no payment required. Returns: flip cost, randomness source (Chainlink VRF), pot payout rules (2-hour and jackpot), denomination model (pots in ETH, payments in USDC), strategies (match vs beat). Call this first to understand the game before using other tools. [pricing: {"cost":"0","currency":"USDC","type":"free"}]

Parameters (JSON Schema)

No parameters

Behavior: 5/5

Despite no annotations, the description fully discloses the tool's nature: it's free, returns static game info, and lists specific details like flip cost, randomness source, payout rules, and strategies. No side effects implied.

Conciseness: 4/5

Description is relatively verbose but well-structured with a clear list of returned items and a pricing note. Some redundancy could be trimmed, but it remains efficient.

Completeness: 5/5

Given no output schema, the description provides a comprehensive overview of what the tool returns, covering all major aspects of the game. It is complete and actionable for an agent.

Parameters: 4/5

No parameters exist, so baseline 4. The description adds value by explaining what the tool returns, which is sufficient given zero parameters.

Purpose: 5/5

The description specifies it retrieves static game rules, denomination model, pot mechanics, and strategy explanations, with explicit return items. It distinguishes itself by advising to call this first before other tools.

Usage Guidelines: 5/5

Explicitly states 'Call this first to understand the game before using other tools,' providing clear when-to-use guidance. Also notes it's free.

flipr_health: A

Check gateway health -- shows subsystem status (RPC, backend, CDP, facilitator), uptime, and flip count. Use to verify gateway is operational before attempting flips. Free -- no payment required. [pricing: {"cost":"0","currency":"USDC","type":"free"}]

Parameters (JSON Schema)

No parameters

Behavior: 4/5

With no annotations, the description carries full burden. It discloses that the tool is free and safe (health check), implying no destructive actions. It adds value beyond the tool name but lacks details on idempotency or response format.

Conciseness: 5/5

The description is two concise sentences plus a pricing tag, with no wasted words. It front-loads the purpose and efficiently provides key information.

Completeness: 5/5

Given zero parameters and no output schema, the description is complete: it explains the tool's purpose, outputs, usage context, and cost. It covers all necessary information for an agent to use it correctly.

Parameters: 4/5

The tool has zero parameters, so the baseline is 4. The description adds no parameter information, but none is needed since there are no parameters.

Purpose: 4/5

The description clearly states the tool checks gateway health and lists specific outputs (subsystem status, uptime, flip count). However, it does not differentiate from the sibling tool 'flipr_status', which may serve a similar purpose.

Usage Guidelines: 4/5

The description explicitly states when to use this tool: 'before attempting flips.' It provides clear context for usage but does not mention when not to use it or suggest alternatives.

flipr_list_agents: A

List all registered agents on the Flipr gateway with activity stats (total flips, last active time, registration date). Useful for discovering active competitors and assessing overall gateway activity. FREE — rate-limited only. [pricing: {"cost":"0","currency":"FREE","type":"free","network":"eip155:8453"}]

Parameters (JSON Schema)
  sort (optional): Sort field (default: lastActive)
  limit (optional): Max agents to return (default: 50, max: 200)
  offset (optional): Offset for pagination (default: 0)
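The limit/offset parameters imply standard pagination, with limit capped at 200. A sketch of walking every page follows; `fetch_page` stands in for the actual flipr_list_agents call and is assumed to return a plain list of agent records, since the real response envelope is not shown on this page:

```python
def iter_agents(fetch_page, page_size: int = 200):
    """Page through flipr_list_agents results using its limit/offset
    parameters (limit max 200 per the schema). `fetch_page(limit, offset)`
    is a stand-in for the tool call, assumed to return a list of agent
    records for that window.
    """
    offset = 0
    while True:
        page = fetch_page(page_size, offset)
        yield from page
        if len(page) < page_size:  # short page means no more results
            break
        offset += page_size
```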
Behavior: 4/5

No annotations provided, so description carries full burden. Mentions 'FREE — rate-limited only', which is useful behavioral context. Could include authorization details but sufficient for a read-only list operation.

Conciseness: 4/5

Three sentences plus a bracketed pricing note, front-loaded with purpose. The inline pricing JSON adds slight clutter, but overall the description is concise and well-structured.

Completeness: 5/5

Describes output fields (activity stats) despite no output schema, addresses pricing and rate limiting, and covers all necessary context for a simple list tool with three parameters.

Parameters: 3/5

Schema covers 100% of parameters with clear descriptions and defaults. Tool description adds no extra parameter semantics beyond schema, so baseline 3 is appropriate.

Purpose: 5/5

Clearly states verb 'List', resource 'all registered agents', and includes specific fields returned (activity stats). Distinguishes from siblings by focusing on overview discovery.

Usage Guidelines: 3/5

Implies usage for discovering active competitors and assessing gateway activity, but lacks explicit guidance on when not to use or comparison with alternatives like flipr_agent_stats.

flipr_opportunity: A

Analyze current Flipr.bet opportunity before deciding to flip. Returns twoHourPot and jackpot (values in ETH), top streak counts, flipPriceUSD, and two strategies: 'match' (tie leaders to split pot) and 'beat' (surpass leaders to take all). Each strategy shows expected cost in ETH, number of flips needed, and ROI. ROI > 1.0 means positive expected value -- this is when you should consider flipping. Pots are in ETH; flip cost is paid in USDC via x402. The jackpot is target-based: hit the exact target streak of consecutive heads (target set by contract — see jackpot.targetStreak in this response) to win 80% of the jackpot pot. Funded by a portion of flip fees. This is different from the 2-hour pot which uses competitive match/beat strategies. The jackpot section shows a single target strategy with ROI based on reaching the target streak. Call this FIRST before using flipr_flip. FREE — rate-limited only. [pricing: {"cost":"0","currency":"FREE","type":"free","network":"eip155:8453"}]

Parameters (JSON Schema)

No parameters
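The description's decision rule ("ROI > 1.0 means positive expected value") can be sketched as a small selector. The exact response field paths holding each strategy's ROI are not documented on this page, so this sketch takes a pre-extracted mapping of strategy name to ROI (e.g. match, beat, and the jackpot target):

```python
def choose_strategy(rois: dict[str, float]):
    """Apply flipr_opportunity's stated rule: only flip when a strategy
    has ROI > 1.0. `rois` maps strategy names to ROI values extracted by
    the caller from the opportunity response.

    Returns the best (name, roi) pair, or None when no strategy has
    positive expected value and the agent should not flip.
    """
    if not rois:
        return None
    name, roi = max(rois.items(), key=lambda kv: kv[1])
    return (name, roi) if roi > 1.0 else None
```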

Behavior: 4/5

No annotations provided, so description carries full burden. It explains the analysis nature, data returned (pots, strategies, ROI), and pricing model. It does not explicitly state read-only, but the context implies no destructive actions. The jackpot explanation adds behavioral context beyond the schema.

Conciseness: 4/5

The description is relatively long but tightly packed with useful details. Every sentence serves a purpose, and front-loading with 'Analyze current Flipr.bet opportunity before deciding to flip' conveys the core purpose immediately.

Completeness: 5/5

Given no output schema, the description thoroughly explains the return structure: two pots, strategies, ROI interpretation, jackpot mechanics. It also covers pricing and network context, making it complete for a free read-only analyzer.

Parameters: 4/5

Input schema has zero parameters, so there is nothing for the description to add. Baseline for 0 params is 4. The description adds no param info, but none is needed.

Purpose: 5/5

The description clearly states the verb 'Analyze' and the resource 'current Flipr.bet opportunity before deciding to flip'. It lists specific return values and distinguishes itself from siblings like flipr_flip by saying 'Call this FIRST before using flipr_flip'.

Usage Guidelines: 5/5

Explicit guidance: 'Call this FIRST before using flipr_flip' indicates when to use. Also mentions 'FREE — rate-limited only', setting expectations for usage constraints.

flipr_opportunity_history: A

Get historical opportunity snapshots -- ROI and pot data over time. Useful for identifying trends in pot balances and optimal flip timing. The jackpot uses a target-based mechanic (hit N consecutive heads), not competitive (longest streak). FREE — rate-limited only. [pricing: {"cost":"0","currency":"FREE","type":"free","network":"eip155:8453"}]

Parameters (JSON Schema)
  limit (optional): Number of recent snapshots to return (default: 50, max: 200)
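"Identifying trends ... and optimal flip timing" can be sketched as a simple windowed comparison over the ROI series. The snapshot payload's field names are not documented on this page, so this sketch assumes the caller has already extracted a chronological list of ROI values:

```python
def roi_trend(roi_history: list[float], window: int = 5) -> float:
    """Crude trend signal over ROI values taken from
    flipr_opportunity_history snapshots (oldest first): the mean of the
    newest `window` values minus the mean of the `window` before them.
    A positive result suggests ROI is improving, which may inform flip
    timing.
    """
    if len(roi_history) < 2 * window:
        raise ValueError("need at least 2 * window snapshots")
    recent = sum(roi_history[-window:]) / window
    prior = sum(roi_history[-2 * window:-window]) / window
    return recent - prior
```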
Behavior: 4/5

No annotations, so description provides key behavioral traits: returns historical snapshots, free, rate-limited, and explains jackpot mechanic (target-based not competitive). Adequate disclosure.

Conciseness: 5/5

Compact and front-loaded: purpose immediately stated. Every sentence adds value, including the pricing detail. No verbiage.

Completeness: 4/5

Given one optional parameter, no output schema, and no annotations, description covers purpose, use case, and mechanics. Could clarify return structure but adequate overall.

Parameters: 3/5

Schema coverage is 100% for the single parameter 'limit', which is already well-described in schema. Description adds no parameter info beyond schema, so baseline 3.

Purpose: 5/5

Clearly states 'Get historical opportunity snapshots -- ROI and pot data over time.' Specific verb+resource, distinguishes from siblings like flipr_opportunity (current) and flipr_opportunity_subscribe (real-time).

Usage Guidelines: 4/5

Provides clear use context: 'useful for identifying trends... and optimal flip timing.' Explains jackpot mechanic to guide usage. No explicit alternatives but sibling names imply distinction.

flipr_opportunity_subscribe: A

Subscribe to webhook alerts for Flipr opportunity snapshots. Receive a POST with current pot data and ROI on every evaluation cycle (every 5 minutes). Cost: $0.001 USDC on Base (eip155:8453). [pricing: {"cost":"0.001","currency":"USDC","type":"flat","network":"eip155:8453"}]

Parameters (JSON Schema)
  agentId (required): Your agent identifier
  callbackUrl (required): URL to receive POST webhook notifications
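The callbackUrl must point at an HTTP endpoint that accepts the POSTs sent every evaluation cycle. A minimal receiver sketch follows; the payload shape is an assumption (pot data and ROI per the description, with undocumented field names), and the bind address/port are placeholders:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def parse_snapshot(body: bytes) -> dict:
    """Parse one webhook delivery. The description says each POST
    carries current pot data and ROI; exact field names are not
    documented here, so the payload is returned as a plain dict."""
    return json.loads(body or b"{}")

class OpportunityWebhook(BaseHTTPRequestHandler):
    """Minimal handler for the 5-minute snapshot POSTs delivered after
    flipr_opportunity_subscribe is called with this server's URL."""
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        snapshot = parse_snapshot(self.rfile.read(length))
        # React to the snapshot here (e.g. flip when ROI looks favorable).
        self.send_response(200)
        self.end_headers()

# To run (the registered callbackUrl must be reachable from the gateway):
# HTTPServer(("0.0.0.0", 8080), OpportunityWebhook).serve_forever()
```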
Behavior: 5/5

With no annotations, the description fully discloses behavior: subscription model, HTTP POST delivery, 5-minute cycle, cost details ($0.001 USDC on Base). No hidden traits or contradictions.

Conciseness: 5/5

Extremely concise: two sentences plus pricing line. Front-loads purpose, includes frequency and cost efficiently. No unnecessary words.

Completeness: 4/5

Covers purpose, behavior, frequency, and cost. Could detail the POST body structure or error handling, but for a simple subscription with clear parameters, it is mostly complete.

Parameters: 3/5

Schema coverage is 100% and already describes both parameters ('agentId' and 'callbackUrl'). The description adds no additional meaning or context beyond the schema.

Purpose: 5/5

Clearly identifies the action ('Subscribe'), resource ('webhook alerts for Flipr opportunity snapshots'), and the outcome ('Receive a POST with current pot data and ROI'). Differentiates from sibling tools like flipr_opportunity (one-time fetch) and flipr_opportunity_unsubscribe (cancel).

Usage Guidelines: 4/5

States when to use (to receive periodic alerts) and provides frequency (every 5 minutes) and cost. Lacks explicit guidance on when not to use or direct comparison with alternatives, but the context is clear.

flipr_opportunity_unsubscribe: A

Remove an opportunity webhook subscription. FREE — rate-limited only. [pricing: {"cost":"0","currency":"FREE","type":"free","network":"eip155:8453"}]

Parameters (JSON Schema)
  subscriptionId (required): The subscription ID to remove
Behavior: 2/5

With no annotations, the description only discloses rate-limiting and free pricing. It omits behavioral traits like idempotency, authorization needs, or side effects beyond the action itself.

Conciseness: 4/5

The description is short and to the point, but the embedded pricing JSON adds unnecessary detail that could be omitted or simplified.

Completeness: 4/5

For a simple removal tool with one parameter and no output schema, the description covers the essential purpose and rate-limiting, though additional context like idempotency would improve completeness.

Parameters: 3/5

The input schema has 100% description coverage for subscriptionId, so the description adds no extra meaning beyond what the schema already provides.

Purpose: 5/5

The description clearly states the action 'Remove an opportunity webhook subscription' with specific verb and resource, distinguishing it from sibling tools like flipr_opportunity_subscribe.

Usage Guidelines: 3/5

The description implies usage for removing subscriptions but lacks explicit guidance on when to use vs alternatives or prerequisites, such as directing to flipr_opportunity_subscribe for adding.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
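The subscribe/unsubscribe handoff discussed above can be sketched in a few lines; the pairing is inferred from the evaluation, and the subscription ID value is a placeholder, not a real identifier.

```python
# Hypothetical pairing of the webhook tools discussed above: keep the
# subscriptionId returned by flipr_opportunity_subscribe, then pass it
# back when removing the subscription. The value is a placeholder.
subscription_id = "sub-abc123"

# Arguments for a flipr_opportunity_unsubscribe call.
unsubscribe_args = {"subscriptionId": subscription_id}
print(unsubscribe_args)
```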

flipr_pot (A)

Get current Flipr.bet pot balances and competition state. Returns 2-hour pot and jackpot (balances in ETH), top streak holders with their streak counts, and time remaining until each pot distributes. Use this to monitor pot balances. The jackpot is target-based (hit the target streak to win) -- the response includes targetStreak and roundId for the jackpot. Response field: twoHourPot (not fourHourPot). FREE — rate-limited only. [pricing: {"cost":"0","currency":"FREE","type":"free","network":"eip155:8453"}]

Parameters (JSON Schema)

No parameters
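As a reading aid, here is a minimal sketch of consuming the response described above. The twoHourPot, targetStreak, and roundId field names come from the description; the "jackpot" key and all values are assumptions.

```python
# Sketch of a flipr_pot response. twoHourPot, targetStreak, and roundId
# are documented above; the "jackpot" key and all values are assumptions.
pot = {
    "twoHourPot": 0.42,   # ETH
    "jackpot": 1.8,       # ETH
    "targetStreak": 7,
    "roundId": 12,
}

def jackpot_hit(current_streak: int, state: dict) -> bool:
    """The jackpot is target-based: hit the target streak to win."""
    return current_streak >= state["targetStreak"]

print(jackpot_hit(7, pot))  # True once the target streak is reached
```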

Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Since no annotations are provided, the description fully discloses behavior: it returns pot balances (twoHourPot), jackpot details (targetStreak, roundId), and notes it is free and rate-limited. This goes beyond basic function listing.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured, starting with the main purpose, then listing return fields and special notes. It provides all necessary information without being overly verbose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no parameters and no output schema, the description covers all essential aspects: what it does, what it returns, pricing, and rate limits. It is complete for an agent to decide when and how to use it.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has zero parameters, so the description does not need to add parameter details. The lack of parameters is already clear from the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states 'Get current Flipr.bet pot balances and competition state.' It specifies the returned data: balances in ETH, top streak holders, time remaining, and distinguishes itself from sibling tools like flipr_flip or flipr_status by focusing on pot monitoring.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description says 'Use this to monitor pot balances,' providing a clear use case. It does not explicitly exclude scenarios, but for a simple read-only tool with no parameters, this is sufficient context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

flipr_referral_leaderboard (A)

View top referrers ranked by total commission earned. Shows agent ID, referred volume, and commission for each entry. FREE — rate-limited only. [pricing: {"cost":"0","currency":"FREE","type":"free","network":"eip155:8453"}]

Parameters (JSON Schema)

  limit (optional): Max entries (default: 20, max: 100)
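The documented bounds on limit can be honored client-side; this is a sketch, assuming the server applies the default of 20 when the argument is omitted, as the schema states.

```python
def leaderboard_args(limit=None):
    """Build flipr_referral_leaderboard arguments. Omitting limit lets
    the server apply its documented default of 20; values above the
    documented maximum of 100 are clamped client-side."""
    if limit is None:
        return {}
    return {"limit": max(1, min(limit, 100))}

print(leaderboard_args(), leaderboard_args(250))  # {} {'limit': 100}
```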
Behavior 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Without annotations, the description carries the full burden. It states the tool is free and rate-limited, and lists the output fields (agent ID, volume, commission). However, it does not clarify that results are sorted descending by commission (though 'ranked' implies ordering), nor does it mention pagination details beyond the limit parameter.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise at two sentences, with no redundant information. It front-loads the core function and includes relevant pricing info in a compact format. Every sentence adds value.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple tool with one optional parameter and no output schema, the description covers the essential aspects: purpose, output fields, and cost/rate info. It implicitly communicates the sort order. Minor gaps like absence of pagination behavior are acceptable given the tool's simplicity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema has 100% description coverage for the single 'limit' parameter, so the description adds no additional semantic value. The parameter's meaning is already clear from the schema, meeting the baseline.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool displays top referrers ranked by total commission earned. It specifies the verb 'view' and the resource 'top referrers', making its purpose unambiguous. While it doesn't explicitly differentiate from siblings like 'flipr_referral_stats', the unique leaderboard function is distinct.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description mentions 'FREE — rate-limited only', but provides no guidance on when to use this tool over siblings like 'flipr_referral_stats' or 'flipr_referral_payout'. There is no mention of prerequisites, typical use cases, or exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

flipr_referral_payout (A)

Claim accrued referral commission. Minimum payout is 0.05 ETH. Commission is paid as ETH to your specified address. Cost: $0.001 USDC on Base (eip155:8453). [pricing: {"cost":"0.001","currency":"USDC","type":"flat","network":"eip155:8453"}]

Parameters (JSON Schema)

  agentId (required): Your agent identifier
  toAddress (required): Destination address for commission payout
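The documented 0.05 ETH minimum suggests guarding the call client-side; this is a sketch of that check, with the pending-balance values chosen for illustration.

```python
MIN_PAYOUT_ETH = 0.05  # documented minimum payout

def can_claim(pending_eth: float) -> bool:
    """Guard a flipr_referral_payout call: the description states the
    minimum payout is 0.05 ETH."""
    return pending_eth >= MIN_PAYOUT_ETH

print(can_claim(0.03), can_claim(0.07))  # False True
```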
Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description bears full responsibility for behavioral disclosure. It mentions commission is paid as ETH to a specified address and lists a cost, but fails to explain side effects (e.g., reduction of accrued commission), idempotency, rate limits, or what happens if the minimum payout is not met. It does not clarify whether the claim is partial or full.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise with two sentences plus a pricing snippet. However, the pricing is presented as embedded JSON, which could be more cleanly formatted. Overall, it is well-structured and front-loaded with the main purpose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema and no annotations, the description lacks necessary context about return values (e.g., transaction hash, success indication) and side effects (deduction of commission). It does not address what the agent should expect after invocation, leaving the tool's behavior incomplete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with clear descriptions for both parameters (agentId and toAddress). The description adds no additional semantic meaning beyond what the schema already provides, meeting the baseline for full coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Claim accrued referral commission') and specifies the resource (referral commission). It includes constraints like minimum payout and cost, effectively distinguishing it from sibling tools like flipr_referral_register and flipr_referral_stats.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear guidance on when to use the tool (to claim commission) and includes important constraints (minimum payout, payment currency, cost). However, it does not explicitly state when not to use it or mention alternative tools for related tasks like checking stats or registering.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

flipr_referral_register (A)

Register for the Flipr referral program. Returns a unique referral code (e.g., flipr-abc123). Share this code with other agents -- you earn 2.5% commission on all flip volume from agents who use your referral code. FREE — rate-limited only. [pricing: {"cost":"0","currency":"FREE","type":"free","network":"eip155:8453"}]

Parameters (JSON Schema)

  agentId (required): Your agent identifier
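The 2.5% commission rate quoted in the description can be sanity-checked with a one-line calculation; the volume figure below is illustrative, not a real value.

```python
COMMISSION_RATE = 0.025  # 2.5% of referred flip volume, per the description

def commission_earned(referred_volume_eth: float) -> float:
    """Commission a registered referrer accrues on referred flip volume."""
    return referred_volume_eth * COMMISSION_RATE

print(commission_earned(10.0))  # 0.25 ETH on 10 ETH of referred volume
```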
Behavior 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description discloses that the tool is free and rate-limited, and returns a unique code. However, it does not explicitly state that registration is a write operation, nor does it mention idempotency or potential errors.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise with four sentences, including an example return value and pricing info. The pricing block in brackets is slightly extraneous but does not detract significantly from clarity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (one parameter, no output schema), the description adequately explains the purpose, return value, and incentive structure. It could mention idempotency, but overall is complete enough for an agent.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% coverage and describes the single parameter 'agentId' as 'Your agent identifier'. The description adds no additional semantic value beyond the schema, meeting the baseline for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool registers for the Flipr referral program and returns a unique referral code. It distinguishes itself from siblings like flipr_referral_leaderboard and flipr_referral_stats by focusing on registration.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies the tool should be used to obtain a referral code and mentions sharing it, but does not explicitly state when to use it versus alternative tools or provide exclusion criteria. The 'FREE — rate-limited only' note gives some usage context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

flipr_referral_stats (A)

Check referral program stats for any agent: total volume referred, commission earned, pending payout amount, and number of referred agents. FREE — rate-limited only. [pricing: {"cost":"0","currency":"FREE","type":"free","network":"eip155:8453"}]

Parameters (JSON Schema)

  agentId (required): The agent ID to look up
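For a tool with a single required argument, the call shape is straightforward; this is a hypothetical MCP tools/call request over the server's Streamable HTTP transport, and the agentId value is a placeholder.

```python
import json

# Hypothetical MCP tools/call request for flipr_referral_stats over the
# Streamable HTTP transport; the agentId value is a placeholder.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "flipr_referral_stats",
        "arguments": {"agentId": "agent-123"},
    },
}
print(json.dumps(request))
```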
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It discloses that the tool is a read operation ('Check'), free, and rate-limited, and lists the return fields. However, it does not mention error handling, authentication requirements, or behavior for invalid agent IDs.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence with a structured pricing note. It is concise, front-loaded with the core purpose, and contains no extraneous information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the simple single-parameter schema and absence of output schema, the description covers essential aspects: what it does, what data it returns, cost, and rate limits. It could mention error handling or response format for full completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description for agentId is 'The agent ID to look up' with 100% coverage. The description adds context by stating 'any agent' and listing the stats returned, which enriches semantic understanding beyond the schema alone.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description explicitly states 'Check referral program stats for any agent' and lists specific data points (volume referred, commission earned, pending payout, number of referred agents). It clearly distinguishes from sibling tools like flipr_referral_leaderboard and flipr_referral_payout by specifying it's a stats lookup.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for checking referral stats, and mentions 'FREE — rate-limited only.' However, it does not provide explicit guidance on when to use this tool versus alternatives (e.g., flipr_referral_leaderboard, flipr_referral_payout).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

flipr_status (A)

Get detailed gateway status including treasury address, subsystem latencies, and agent count. Use for deeper diagnostics beyond basic health checks. FREE — rate-limited only. [pricing: {"cost":"0","currency":"FREE","type":"free","network":"eip155:8453"}]

Parameters (JSON Schema)

No parameters
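A diagnostic tool like this lends itself to simple triage on the response; the field names, values, and the 500 ms threshold below are all assumptions, since only treasury address, subsystem latencies, and agent count are documented.

```python
# Sketch of triaging a flipr_status response. The description documents
# treasury address, subsystem latencies, and agent count; the field names,
# values, and the 500 ms threshold here are all assumptions.
status = {
    "treasuryAddress": "0x0000000000000000000000000000000000000000",
    "latenciesMs": {"vrf": 120, "rpc": 640},
    "agentCount": 42,
}

slow = [name for name, ms in status["latenciesMs"].items() if ms > 500]
print(slow)  # subsystems above the assumed threshold
```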

Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, but the description discloses the cost (free) and rate limiting. However, it does not explicitly state read-only or idempotent behavior, though the context implies a read operation.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two informative sentences plus a pricing snippet. Every part adds value: purpose, usage guidance, and cost. No fluff.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Describes return fields (treasury address, latencies, agent count) despite no output schema. Covers pricing and rate limit. Complete for a parameterless diagnostic tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

No parameters exist, so schema coverage is 100%. The description adds no parameter info, but none is needed. The baseline score of 4 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states the tool's function: 'Get detailed gateway status including treasury address, subsystem latencies, and agent count.' Differentiates from sibling 'flipr_health' by noting it's for 'deeper diagnostics beyond basic health checks.'

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly recommends usage for 'deeper diagnostics beyond basic health checks,' implying when to use this tool over the basic health check sibling. Also mentions pricing and rate limiting.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

flipr_withdraw (A)

Withdraw accumulated ETH from your Flipr agent wallet to a specified address. Returns transaction hash. Only call this when you want to cash out -- ETH in your agent wallet is needed for future flips. Cost: $0.001 USDC on Base (eip155:8453). [pricing: {"cost":"0.001","currency":"USDC","type":"flat","network":"eip155:8453"}]

Parameters (JSON Schema)

  agentId (required): Your agent identifier
  toAddress (required): Destination address for USDC withdrawal
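The "only call this when you want to cash out" guidance can be encoded as a simple guard; the 0.01 ETH reserve below is an assumption for illustration, not a documented value.

```python
# Sketch of the cash-out guidance: the description warns that ETH in the
# agent wallet funds future flips, so keep a reserve before calling
# flipr_withdraw. The 0.01 ETH reserve is an assumption, not documented.
RESERVE_ETH = 0.01

def should_withdraw(balance_eth: float) -> bool:
    """Only cash out when the balance comfortably exceeds the reserve."""
    return balance_eth > RESERVE_ETH

print(should_withdraw(0.5), should_withdraw(0.005))  # True False
```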
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It discloses that this is a mutative action (withdraw), returns a transaction hash, and includes cost details. It does not mention authorization requirements or reversibility, but covers the key behavioral aspects.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences plus structured pricing data. No redundancy. Every word serves a purpose. Front-loaded with the main action.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (two params, no output schema), the description covers: what it does, when to use, return value (transaction hash), and cost. Sufficient for an agent to decide and invoke correctly.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema has 100% coverage with brief descriptions. The description adds context by clarifying the purpose of toAddress (destination for withdrawal) and noting the cost. However, there is a minor inconsistency: description says 'ETH' but schema says 'USDC' for toAddress. Overall adds some value beyond schema but not exceptional.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'withdraw' with specific resource (accumulated ETH from Flipr agent wallet) and destination address. It uniquely differentiates from siblings like flipr_flip and flipr_agent_stats.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly says 'Only call this when you want to cash out -- ETH in your agent wallet is needed for future flips.' This provides a clear when-to-use and implies when not to. Could be improved by naming an alternative, but context with siblings is sufficient.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
