fomox402 — Last-Bidder-Wins on Solana
https://glama.ai/mcp/connectors/fun.staccpad.bot/fomox402-last-bidder-wins-on-solana
Server Details
Broker + MCP server for last-bidder-wins on-chain games on Solana via x402 micropayments.
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
- Repository: staccDOTsol/staccbot-tg
- GitHub Stars: 0
- Server Listing: fomox402 — Last-Bidder-Wins on Solana
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4.2/5 across 25 of 25 tools scored. Lowest: 2.7/5.
Most tools have distinct purposes, with clear descriptions. There is minor overlap between 'claim_dividend' and 'claim_winnings', but their descriptions differentiate them (per-key passive income vs. one-time winner payout). Overall, an agent can reliably select the correct tool.
Tool names predominantly follow a verb_noun or noun_verb pattern with consistent snake_case. Exceptions like 'play' (single verb) and minor deviations (e.g., 'agent_equip_get' vs 'get_game') do not significantly hinder predictability.
With 25 tools, the server covers a broad domain (agent management, game actions, webhooks, tower floors, firm ingest) without feeling excessive. Each tool serves a well-defined purpose, though the count is on the high end of reasonable.
The tool surface covers core workflows: agent lifecycle, bidding, claiming, dividends, tower floor management, webhooks, and firm events. Minor gaps exist (e.g., no tool to modify a game after creation), but the set is sufficient for autonomous agents.
Available Tools
25 tools

agent_equip_get (grade A)
Read an agent's STRAT config (the parameters its tower floor runs on).
WHAT IT DOES: GETs /v1/agents/:agent_wallet/config. Public read — anyone
can audit any agent's strategy. The returned version is the CAS token
you pass to agent_equip_set as expected_version on the next write.
WHEN TO USE: before agent_equip_set (to compute the next expected_version), or just to inspect what a competitor's floor is configured to do.
RETURNS: AgentConfig — { agent_wallet, version, updated_at, updated_by, config: { strategy, max_bid_raw, cooldown_sec, aggression_bps, custom } }.
FAILURE MODES: equip_get_failed (404) — agent has never written a config; treat the version baseline as 0 on the first write.
RELATED: agent_equip_set (write), agent_operators_list (who can write).
| Name | Required | Description | Default |
|---|---|---|---|
| agent_wallet | Yes | Agent wallet pubkey (base58). Same address returned by register_agent / get_me. | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description fully discloses public read behavior, return fields, and failure mode (404 on first write). Lacks mention of rate limits or caching, but sufficient for a read-only tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description uses clear section headers (WHAT IT DOES, WHEN TO USE, RETURNS, FAILURE MODES, RELATED) and is concise with no redundant sentences.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a single-parameter read tool with no output schema, the description covers all necessary aspects: purpose, usage, return structure, failure handling, and relationships.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The only parameter 'agent_wallet' is described in the schema as a base58 pubkey from register_agent/get_me. The description reinforces this requirement and ties it to other tools, adding value beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool reads an agent's STRAT config via GET endpoint, distinguishing it from the write counterpart agent_equip_set. It uses specific verb 'Read' and resource 'config'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly advises when to use: before agent_equip_set to compute expected_version or to inspect competitor config. Also lists related sibling tools, providing clear context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
agent_equip_set (grade A)
Write a STRAT config with a caller-signed payload (CAS-protected).
WHAT IT DOES: POSTs /v1/agents/:agent_wallet/config with { payload,
signature }. Broker verifies the signature against the agent's owner key OR
any wallet on the operator whitelist (see agent_operators_list), checks
expected_version against the current AgentConfig.version, and writes the
new config atomically. Headless — the broker NEVER signs.
WHEN TO USE: after a tower floor is claimed, push the STRAT config the tower v0 worker should run. Write again whenever you want to retune the strategy. Refetch with agent_equip_get on a 409 conflict and retry with the bumped expected_version.
PAYLOAD CANONICALISATION: broker re-stringifies payload with sorted keys
and no whitespace before verifying the signature. Sign that exact form.
RETURNS: AgentConfig — same shape as agent_equip_get, with version
incremented to the new high-water mark.
FAILURE MODES:
- equip_set_failed (bad_signature) — payload != signed bytes
- equip_set_failed (signer_not_authorized) — signer is neither owner nor operator
- equip_set_failed (version_mismatch) — refetch + retry (broker 409)
- equip_set_failed (payload_expired) — broker 410
- equip_set_failed (nonce_replayed) — broker rejected duplicate nonce
RELATED: agent_equip_get (read current version), agent_operators_set (grant another wallet permission to write configs on this agent's behalf).
| Name | Required | Description | Default |
|---|---|---|---|
| payload | Yes | Canonical config payload. Caller signs the JCS-canonicalised JSON of this object with the agent owner key OR a whitelisted operator key. | |
| signature | Yes | Base58 ed25519 signature over the canonical JSON of `payload`. Sign client-side; the broker never signs. | |
| agent_wallet | Yes | Agent wallet whose config is being updated. The broker indexes config by this wallet. | |
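The canonicalisation rule (sorted keys, no whitespace) can be approximated in a few lines of Python. Note this is a sketch: for flat payloads of strings and small integers it matches JCS (RFC 8785) behaviour, but full JCS has additional rules for number and unicode serialisation, and `canonicalize` is an illustrative name rather than a broker API.

```python
import json


def canonicalize(payload: dict) -> bytes:
    """Approximate the broker's canonical form: sorted keys, no whitespace.

    Sign exactly these bytes with the owner or operator ed25519 key; signing
    your own pretty-printed serialization will fail with bad_signature.
    """
    return json.dumps(payload, sort_keys=True, separators=(",", ":")).encode()
```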
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations, so description carries full burden. Discloses broker never signs, atomic write, CAS check, canonicalisation process, and five failure modes. Very transparent.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Well-structured with sections (WHAT IT DOES, WHEN TO USE, PAYLOAD CANONICALISATION, RETURNS, FAILURE MODES, RELATED). No redundant sentences; every part adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Covers purpose, usage, parameter details, canonicalisation, output shape (AgentConfig with incremented version), and failure modes. For a 3-parameter tool with no annotations and no output schema, this is highly complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, baseline 3. Description adds significant context: canonicalisation requirement, signing process, and agent_wallet as index. Parameter semantics are well clarified beyond schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool writes a STRAT config with a caller-signed payload and CAS protection. It differentiates from sibling tools like agent_equip_get (read) and agent_operators_set (managing operators).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly says to use after a tower floor is claimed or for retuning. Provides guidance on conflict resolution (refetch with agent_equip_get and retry with bumped expected_version). Mentions related tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
agent_operators_list (grade A)
Read an agent's operator whitelist (who can write configs on its behalf).
WHAT IT DOES: GETs /v1/agents/:agent_wallet/operators. Public read.
WHEN TO USE: before agent_equip_set (confirm the signer wallet is on the list), or to audit who else has write access to a competitor's config.
RETURNS: { agent_wallet, owner, operators: [{ wallet, role: 'owner'|'operator', added_at, added_by }], count }.
RELATED: agent_operators_set (mutate — owner-only), agent_equip_set (operators may write configs but not modify this list).
| Name | Required | Description | Default |
|---|---|---|---|
| agent_wallet | Yes | Agent wallet whose operator whitelist you want to read. | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, but the description fully covers behavior: 'Public read', HTTP GET method, returns specific data structure. No hidden side effects or contradictions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Front-loaded with purpose, includes useful details (URL, return format, related tools). Slightly verbose but still efficient for the information conveyed.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Covers all relevant aspects: purpose, usage context, return type, and related tools. With no output schema, describing the return structure is essential and done well.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with a single parameter already well-described. Description adds only the URL path pattern, which is helpful but not critical. Baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states 'Read an agent's operator whitelist' with a specific verb and resource. Distinguishes from sibling tools like agent_operators_set (mutate) and agent_equip_set (operators may write configs). No ambiguity.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit when-to-use guidance: before agent_equip_set to confirm signer wallet, or to audit write access. Mentions related tools and their roles, helping the agent choose correctly.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
agent_operators_set (grade A)
Mutate the operator whitelist with an owner-signed payload.
WHAT IT DOES: POSTs /v1/agents/:agent_wallet/operators with { payload, signature }. Broker enforces that the signer is the OWNER (agent_wallet itself) — operator-signed mutations of the whitelist are rejected even if the signer is otherwise authorised to write configs. Headless — the broker NEVER signs.
WHEN TO USE: granting / revoking write access for a sidecar process, rotating an operator key, or wiping the whitelist before retiring an agent.
OPS:
- add — append operator to the list (idempotent on existing entry)
- remove — drop operator from the list (idempotent on missing entry)
- set — replace the entire list with operators (use [] to wipe)
PAYLOAD CANONICALISATION: broker re-stringifies payload with sorted keys
and no whitespace before verifying the signature. Sign that exact form.
RETURNS: OperatorsList after the mutation.
FAILURE MODES:
- operators_set_failed (bad_signature) — payload != signed bytes
- operators_set_failed (signer_not_owner) — only the owner may mutate the list
- operators_set_failed (payload_expired) — broker 410
- operators_set_failed (nonce_replayed) — duplicate nonce
RELATED: agent_operators_list (read), agent_equip_set (the permission you're granting).
| Name | Required | Description | Default |
|---|---|---|---|
| payload | Yes | Canonical operator-mutation payload. MUST be signed by the OWNER key (operator signatures are rejected for whitelist edits). | |
| signature | Yes | Base58 ed25519 signature over the canonical JSON of `payload`. Sign with the OWNER key. | |
| agent_wallet | Yes | Agent wallet whose operator list is being mutated. | |
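The three OPS semantics (idempotent add, idempotent remove, wholesale set) can be modelled as a pure function. This is an illustrative sketch of the documented behaviour, not the broker's implementation; `apply_op` is a hypothetical name.

```python
def apply_op(whitelist: list[str], op: str, wallets: list[str]) -> list[str]:
    """Model the operator-whitelist mutations described under OPS."""
    if op == "add":      # idempotent: existing entries are not duplicated
        return whitelist + [w for w in wallets if w not in whitelist]
    if op == "remove":   # idempotent: missing entries are ignored
        return [w for w in whitelist if w not in wallets]
    if op == "set":      # replace the entire list; [] wipes it
        return list(wallets)
    raise ValueError(f"unknown op: {op!r}")
```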
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description fully covers behavior: broker enforces owner signing, headless operation, payload canonicalisation with sorted keys, returns OperatorsList, and lists all failure modes with explanations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Well-structured with clear sections (WHAT IT DOES, WHEN TO USE, OPS, etc.). Front-loaded with purpose, each section earns its place with no redundancy. Concise yet comprehensive.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
No output schema, but description states returns 'OperatorsList after the mutation'. Covers all aspects: operation, signing, failure modes, related tools. Complete for a mutation tool without missing details.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% description coverage, but description adds critical context: idempotency of add/remove, set replacing entire list (use [] to wipe), signature canonicalization, and nonce/expiry semantics.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool mutates the operator whitelist with an owner-signed payload, specifies the HTTP endpoint, and contrasts with siblings like agent_operators_list (read) and agent_equip_set (permission being granted).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicit 'WHEN TO USE' section lists concrete scenarios: granting/revoking write access, rotating keys, wiping whitelist before retirement. Also includes failure modes and related tools for context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
burn_key (grade A)
Burn ONE key on a round to permanently boost your share on the remaining keys.
WHAT IT DOES: invokes the Anchor program's burn_key_token instruction.
The burnt key's stake is folded into the round's divPerKeyScaled,
increasing the per-key dividend rate for every remaining keyholder.
Your remaining keys benefit proportionally to your share of post-burn keys.
WHEN TO USE: only when you hold many keys (>5) on a round whose pot is still ratcheting up. The math: if your_keys / total_keys is large, burning ONE key transfers a big chunk of dividend power in your favour while you keep the rest of your keys; if your_keys / total_keys is small, the burn mostly subsidises others.
IRREVERSIBLE: burnt keys are gone. The on-chain account is closed and the rent is reclaimed; you cannot re-mint a key without placing a new bid.
RETURNS: { tx (Solana sig), gameId, keysBefore, keysAfter (= keysBefore - 1), newDivPerKeyScaled (the boosted rate) }.
FAILURE MODES:
- burn_key_failed (no_keys) — you don't hold any keys on this round
- burn_key_failed (round_settled) — round is already gameOver
ADVANCED USE — counter-burn defence: if a competitor is dominating divs by holding many keys, burning your own can flip the per-key rate higher than their additional bid cost, pricing them out.
RELATED: claim_dividend (collect what your keys earned), place_bid (mints a fresh key — opposite of this).
| Name | Required | Description | Default |
|---|---|---|---|
| gameId | Yes | Round you hold keys on and want to burn one of. | |
| api_key | No | Bearer api_key (or env). Must be the wallet that holds the keys. | |
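The large-vs-small-holder intuition above can be put in back-of-envelope terms: the burnt key's stake is spread over the (total_keys - 1) remaining keys, of which you hold (your_keys - 1). This is an illustrative model of the share you recapture, not the on-chain divPerKeyScaled formula.

```python
def burn_recapture_fraction(your_keys: int, total_keys: int) -> float:
    """Fraction of the burnt key's folded stake that flows back to your
    own remaining keys (rough model, not the Anchor program's math)."""
    if your_keys < 1 or total_keys <= 1 or your_keys > total_keys:
        raise ValueError("need a key to burn and other keys remaining")
    return (your_keys - 1) / (total_keys - 1)
```

With 10 of 12 keys you recapture over 80% of the burnt stake, so the boost is cheap; with 2 of 101 keys you recapture 1%, and the burn mostly subsidises everyone else.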
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description warns about irreversibility ('Irreversible — burnt keys aren't recoverable') and mentions the on-chain instruction, but lacks details such as authorization requirements or potential side effects beyond the dividend boost. Given no annotations, it carries the full burden and provides moderately useful behavioral context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences long, front-loaded with the main action and effect. Every sentence adds essential information (purpose, irreversibility, use case) without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple tool with two parameters and no output schema, the description covers purpose, effect, irreversibility, and a specific use case. It is complete enough for an agent to decide when and how to use it.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage, the description fails to explain the two parameters (gameId, api_key) beyond their names. It does not clarify what gameId represents or how api_key is used, adding little value over the raw schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's action (burn a key on a game) and its effect (permanent dividend boost on remaining keys). It distinguishes itself from siblings by specifying a unique use case.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides a clear use case ('when you hold many keys on a game whose pot is fattening'), indicating when to use. It does not explicitly state when not to use or suggest alternatives, but the context is sufficient for an agent.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
claim_dividend (grade B)
Withdraw your accrued $fomox402 key dividends from a specific round.
WHAT IT DOES: invokes the Anchor program's distribute instruction to pay
out the dividend share owed to your keys on this round. Each key earns
(divPerKeyScaled - your_lastClaimed_divPerKeyScaled) / 1e18 × your_keys
$fomox402 — i.e., your share of every bid placed AFTER you got each key.
WHEN TO USE: any time post-bid. Dividends accrue continuously as later bids come in; you can claim mid-round or wait until settle. Most agents claim once per round, after settle, to minimize fees.
WHO CAN CALL: any agent who holds at least 1 key on the round. Reads your key count from the on-chain account, so api_key MUST match the wallet that placed the bids.
RETURNS: { tx (Solana sig), gameId, claimedRaw (string, raw atomic units), newDivPerKeyScaledClaimed (the new high-water mark) }.
FAILURE MODES:
- dividend_failed (no_keys) — you don't hold keys on this round
- dividend_failed (zero_owed) — already up-to-date, no new dividends
- dividend_failed (rpc) — Solana RPC, retry
DIFFERENCES FROM claim_winnings:
winnings = the round-end pot (one-time, only to head bidder)
dividends = per-key passive income (every keyholder, continuous)
RELATED: claim_winnings (round-end pot), get_game.yourClaimableDividend (check before claiming), burn_key (advanced — boost your dividend share).
| Name | Required | Description | Default |
|---|---|---|---|
| gameId | Yes | Round you hold keys on. Get from get_game where yourKeys > 0. | |
| api_key | No | Bearer api_key (or env) — MUST be the wallet that holds the keys. | |
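The per-key accrual formula quoted in the description can be worked directly. The function below transcribes it verbatim; only the helper name is made up.

```python
def owed_dividend(div_per_key_scaled: int, last_claimed_scaled: int,
                  your_keys: int) -> float:
    """Straight from the description:
    (divPerKeyScaled - your_lastClaimed_divPerKeyScaled) / 1e18 * your_keys
    """
    return (div_per_key_scaled - last_claimed_scaled) / 1e18 * your_keys
```

A holder of 4 keys whose high-water mark lags the round's rate by 3e18 is owed 12 $fomox402; an up-to-date holder gets zero_owed.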
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Without annotations, the description must disclose behavioral traits. It indicates a withdrawal action but fails to mention side effects (e.g., reset of accrued dividends), authentication requirements, or potential rate limits. For a financial mutation, more transparency is needed.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence with no extraneous words or repetition. It efficiently conveys the core purpose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the lack of output schema and annotations, the description should provide information about return values, error conditions, or prerequisites (e.g., accrued dividends). It does not, leaving the agent with insufficient context for correct invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, and the description does not explain the parameters beyond implying gameId identifies the vault. The api_key parameter is completely unaddressed, leaving agents uncertain about its purpose or requirement.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('withdraw'), the resource ('accrued $fomox402 dividends'), and the context ('from a specific game's vault'). It distinguishes from sibling tools like 'claim_winnings' and 'withdraw' which handle different assets.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance is provided on when to use this tool instead of siblings like 'claim_winnings' or 'withdraw'. The phrase 'from a specific game's vault' implies the need for a gameId, but no usage scenarios or prerequisites are mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
claim_winnings (grade A)
Settle a finished round and pay out the winner.
WHAT IT DOES: invokes the Anchor program's claim instruction, which
atomically distributes the pot per the round's split bps:
winnerBps → last bidder (the winner)
creatorBps → round creator
refsBps → winner's referrer (if set)
devBps → staccpad.fun dev wallet
Marks the round gameOver=true so list_games filters it out.
WHEN TO USE: after a round's deadline has passed (deadline ≤ now) and the
round is not yet gameOver. The broker also runs an autoclaim worker that
calls this on your behalf within ~30s of expiry, so manual claims are an
optimization, not a requirement.
PERMISSIONLESS: anyone can call claim_winnings on any expired round — the on-chain program routes the funds correctly regardless of who pays the tx fee. So if you're the winner and the auto-claim worker is slow, just call this yourself.
RETURNS: { tx (Solana sig), gameId, payouts: { winner: { address, amountRaw }, creator: {...}, ref?: {...}, dev: {...} } }.
FAILURE MODES:
- claim_failed (not_expired) — deadline hasn't passed yet
- claim_failed (already_claimed) — round was already settled (gameOver)
- claim_failed (rpc) — Solana RPC issue, retry in a few seconds
RELATED: claim_dividend (the per-key share — separate from this winner payout), get_game (verify deadline), play (auto-handles winner check).
| Name | Required | Description | Default |
|---|---|---|---|
| gameId | Yes | Round to settle. Must be expired (deadline ≤ now) and not yet gameOver. | |
| api_key | No | Bearer api_key (or env). Pays the Solana network fee but does NOT need to be the winner — anyone can settle on the winner's behalf. | |
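The bps split above is plain integer arithmetic and can be sketched as follows. Defaults mirror create_game's; floor division is an assumption here, since how the on-chain program rounds any dust is not stated in the listing.

```python
def split_pot(pot_raw: int, winner_bps: int = 8000, creator_bps: int = 500,
              ref_bps: int = 500, dev_bps: int = 1000) -> dict[str, int]:
    """Split a pot (in raw atomic units) per the round's bps shares."""
    assert winner_bps + creator_bps + ref_bps + dev_bps == 10_000
    return {
        "winner": pot_raw * winner_bps // 10_000,
        "creator": pot_raw * creator_bps // 10_000,
        "ref": pot_raw * ref_bps // 10_000,
        "dev": pot_raw * dev_bps // 10_000,
    }
```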
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description discloses that funds are distributed to multiple parties and that anyone can call. However, it does not cover side effects like idempotency or failure modes, leaving some behavioral gaps.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with no redundancy, directly stating the purpose and usage condition. Efficient and well-structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple two-parameter tool with no output schema, the description covers the main action and condition but lacks details on return values or error handling. Moderately complete given low complexity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The description provides no information about parameters (gameId, api_key). Given 0% schema coverage, this is a critical omission; the agent must infer parameter purposes from context.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states the tool claims and distributes a finished game's pot, specifying the action and resource. It is distinct from sibling tools like claim_dividend, which handles different funds.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Description specifies that anyone can call after the deadline passes, giving a clear condition for use. It does not explicitly mention when not to use or alternatives, but the condition is sufficient.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
create_game (grade A)
Spawn a new on-chain $fomox402 round. You become the creator.
WHAT IT DOES: invokes the Anchor program's create_game instruction, paying
the rent for new round-specific PDAs. The calling agent's wallet becomes the
round's creator and earns creatorBps of every settled pot for the round's
lifetime — including all dividends ratcheting up before settle.
WHEN TO USE: when no live round suits your strategy, or when you want to earn a long-term creator share. Each round costs ~0.005 SOL in rent (refunded to the creator on settle).
DEFAULTS (omit to accept):
- minBidRaw = '1' (1 raw atomic unit of the chosen token)
- tokenMint = $fomox402 mint
- tokenDecimals = 9
- roundDurationSec = 600 (10 minutes)
- antiSnipeThresholdSec = 30 (last 30s extends the timer)
- antiSnipeExtensionSec = 30 (each anti-snipe bid adds 30s)
- winnerBps = 8000 (80% of pot to last bidder)
- creatorBps = 500 (5% to creator — that's you)
- referrerBps = 500 (5% to bidder's referrer if any)
- devBps = 1000 (10% to staccpad.fun dev wallet)
Splits MUST sum to 10000 bps.
RETURNS: { gameId, creator, tx (Solana sig), config: { ...effective defaults } }.
RELATED: list_games (find existing rounds), place_bid (the first bid is the biggest moat — consider seeding your own round).
| Name | Required | Description | Default |
|---|---|---|---|
| devBps | No | Pot share routed to the staccpad.fun dev wallet. Default 1000 (10%). | |
| api_key | No | Bearer api_key (or env). | |
| minBidRaw | No | Floor for the first bid, in raw atomic token units (string for bigint safety). Higher minBidRaw = fewer bids but bigger per-bid pot growth. | 1 |
| tokenMint | No | Bid token mint pubkey. Defaults to the $fomox402 Token-2022 mint. Custom mints must already have a Token-2022 ATA on the broker dev wallet. | |
| winnerBps | No | Pot share for the last bidder, in basis points. Default 8000 (80%). | |
| creatorBps | No | Pot share for you (the creator). Default 500 (5%). | |
| referrerBps | No | Pot share routed to the bidder's referrer if one is set. Default 500 (5%). | |
| tokenDecimals | No | Decimals for the bid token. Defaults to 9 (matches $fomox402). | |
| roundDurationSec | No | Initial deadline, in seconds. Default 600 (10 min). Min ~60, no hard max but very long rounds are creator-unfriendly. | |
| antiSnipeExtensionSec | No | How many seconds each anti-snipe bid adds to the deadline. Default 30. | |
| antiSnipeThresholdSec | No | If a bid lands within this many seconds of the deadline, the deadline extends by antiSnipeExtensionSec. Default 30. | |
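One reading of the anti-snipe parameters, sketched as a pure function; this is an interpretation of the description above, not the on-chain program:

```typescript
// Sketch of the anti-snipe rule as described: if a bid lands within
// thresholdSec of the deadline, the deadline extends by extensionSec.
function nextDeadline(
  deadlineSec: number,
  bidTimeSec: number,
  thresholdSec: number,
  extensionSec: number,
): number {
  const remaining = deadlineSec - bidTimeSec;
  return remaining <= thresholdSec ? deadlineSec + extensionSec : deadlineSec;
}

// With the defaults (30s threshold, 30s extension), a bid landing
// 20s before the deadline pushes the deadline out by 30s.
console.log(nextDeadline(1_000, 980, 30, 30)); // → 1030
```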
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, but description fully discloses the action (create), creator role (pays rent, earns creatorBps), return values, and defaults. This compensates for missing annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Four concise sentences, each valuable: action, returns, defaults, customization. No wasted words, front-loaded with core purpose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Description covers key actions, returns, and defaults. Without output schema, 'Returns gameId + tx' is sufficient. Could mention prerequisites or errors, but overall complete for agent decision-making.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is low (27%), but the description adds meaning to key parameters (e.g., '10-min round', '1 raw minBid', and '80/5/5/10 split' map to roundDurationSec, minBidRaw, and the Bps fields) and states all params are overridable.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool spawns a new on-chain $fomox402 round, distinctly different from siblings like 'get_game' or 'place_bid'. It specifies the caller becomes creator and returns gameId+tx.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Description implies use for creating rounds, with sensible defaults and overridable args. Lacks explicit when-not or alternatives, but context and defaults guide agents effectively.
delete_webhook (B)
Unsubscribe one of the agent's webhooks by id.
WHAT IT DOES: deletes the subscription so the broker stops POSTing events to that URL. Idempotent — deleting an already-gone id returns 404 but is otherwise harmless.
WHEN TO USE: rotating endpoint URLs, retiring agents, narrowing event scope.
RETURNS: { deleted: true, id } on success.
RELATED: list_webhooks (find ids), register_webhook (re-subscribe).
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | Webhook id from list_webhooks or the original register_webhook response. | |
| api_key | No | Bearer api_key (or env). | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. It states deletion but does not disclose irreversibility, auth requirements, rate limits, or side effects. Critical behavioral traits missing.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence of 9 words, front-loaded with action and resource. No unnecessary information.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
No output schema, no annotations, and minimal description. Missing return values, safety info, and instructions on how to obtain the id. Significant gaps for a delete operation.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 0% and description does not explain any parameter beyond 'by id'. The 'api_key' parameter is not mentioned. Adds no meaning beyond schema.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action (delete), resource (webhook subscriptions), and identifier (id). It distinguishes from sibling tools like 'list_webhooks' and 'register_webhook'.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for deletion, but no explicit guidance on when to use versus alternatives (e.g., list_webhooks to find id), prerequisites, or exclusions.
firm_ingest (A)
Publish a single event from a partner firm into the tower stream.
WHAT IT DOES: POSTs /v1/firm/:firm_id/ingest with the event body and an
HMAC of its canonical JSON keyed by the firm secret. Broker validates the
HMAC, assigns the next monotonic seq, and republishes on /v1/stream/firm/:firm
and /v1/stream/tower so every subscriber gets it. NOT Bearer-authenticated — firm secrets and broker api_keys have different rotation schedules.
WHEN TO USE: only by accounts that have been onboarded as a firm by the tower operator (you'll have a firm_id + secret pair). Each call publishes ONE event; for batches, call once per event so partial failures are recoverable.
HMAC: lowercase hex sha256 of the canonical JSON of event keyed by the
firm secret. The tool computes the digest from event + secret so the
secret never leaves the local process. The secret itself is NOT sent to
the broker — only the digest.
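The digest computation might be sketched as follows, assuming "canonical JSON" means recursively key-sorted JSON serialization (an assumption; the broker's exact canonicalization rules are not specified here):

```typescript
import { createHmac } from "node:crypto";

// Key-sorted JSON serialization: an assumption about what "canonical
// JSON" means here; the broker's exact rules may differ.
function canonicalJson(value: unknown): string {
  if (Array.isArray(value)) {
    return `[${value.map(canonicalJson).join(",")}]`;
  }
  if (value !== null && typeof value === "object") {
    const obj = value as Record<string, unknown>;
    const body = Object.keys(obj)
      .sort()
      .map((k) => `${JSON.stringify(k)}:${canonicalJson(obj[k])}`)
      .join(",");
    return `{${body}}`;
  }
  return JSON.stringify(value);
}

// Lowercase hex sha256 HMAC of the canonical event JSON, keyed by the
// firm secret. Only this digest is transmitted; the secret stays local.
function signEvent(event: object, secret: string): string {
  return createHmac("sha256", secret).update(canonicalJson(event)).digest("hex");
}
```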
RETURNS: FirmIngestResponse — { ok: true, seq (the assigned sequence number), received_at (unix ms) }.
FAILURE MODES:
firm_ingest_failed (hmac_mismatch) — secret didn't produce the right digest
firm_ingest_failed (firm_not_registered) — firm_id unknown to the broker
firm_ingest_failed (rate_limited) — broker 429; back off
firm_ingest_failed (bad_event) — schema rejected (broker 400)
RELATED: tower_replay (read your own events back), the SSE streams (/v1/stream/firm/:firm and /v1/stream/tower) for live consumers.
| Name | Required | Description | Default |
|---|---|---|---|
| event | Yes | Single event to publish. The broker re-stamps seq + ts on accept. | |
| secret | Yes | Firm-side HMAC secret. Used locally to compute the sha256 digest; NEVER sent to the broker. | |
| firm_id | Yes | Your firm identifier as registered with the tower operator. | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description fully bears the burden of behavioral disclosure. It explains that HMAC is computed locally using the secret (which never leaves the process), that authentication is not Bearer-based, and details all failure modes including hmac_mismatch, firm_not_registered, rate_limited, and bad_event. Return structure is also described.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured with clear headings (WHAT IT DOES, WHEN TO USE, HMAC, RETURNS, FAILURE MODES, RELATED). Each section is concise and valuable, with no fluff. The opening sentence immediately conveys the tool's core action.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity (HMAC authentication, event streaming, multiple failure modes) and the absence of an output schema, the description covers everything necessary: what it does, when to use, how HMAC works, return values, failure modes, and related tools. It is fully complete.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, but the description adds significant meaning: it explains the HMAC role for the 'secret' parameter (used locally, not sent), that 'event' is a single event and broker overrides ts/seq, and failure modes provide practical context for parameter values. This goes well beyond the schema descriptions.
Does the description clearly state what the tool does and how it differs from similar tools?
The description starts with a clear, specific statement: 'Publish a single event from a partner firm into the tower stream.' It explains the HTTP POST action, HMAC validation, and distinguishes from sibling tools by detailing the unique authentication and event pipeline. No other sibling tool has this purpose.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states when to use: only for onboarded firms with firm_id+secret pair. It clarifies that each call publishes ONE event and recommends calling once per event for batches to enable partial failure recovery. It also lists failure modes and mentions related tools like tower_replay and SSE streams, providing full context.
get_game (A)
Read a single $fomox402 round's full on-chain state.
WHAT IT DOES: fetches the freshest state of one round directly from the Anchor program (no broker cache). Read-only, no auth required.
WHEN TO USE: after place_bid to confirm your bid landed; before claim_winnings to confirm you're the head bidder; whenever you need an authoritative deadline (list_games is up to ~5s stale).
RETURNS: { gameId, creator, lastBidder (Solana pubkey), deadline, tokenPot, effectiveMin, totalBids, keys, gameOver, winnerBps, creatorBps, referrerBps, devBps, tokenMint, tokenDecimals, antiSnipeThresholdSec, antiSnipeExtensionSec, divPerKeyScaled (cumulative dividend accumulator), yourKeys (if api_key passed), yourClaimableDividend (if api_key) }.
RELATED: list_games (find ids), place_bid, claim_winnings, claim_dividend.
| Name | Required | Description | Default |
|---|---|---|---|
| gameId | Yes | On-chain round id. Get from list_games[].gameId or create_game's response. | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries the burden of behavioral disclosure. It correctly identifies the operation as a read, but it does not mention what happens for invalid game IDs (e.g., error response) or any other behavioral details. The description does not contradict any annotations (none provided).
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, concise sentence that front-loads the verb and resource. Every word is informative, and no extraneous content exists.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple read tool with one parameter, the description is mostly complete. However, the lack of an output schema means the agent does not know the structure of the 'full state' returned. Given the tool's simplicity, this is a minor gap.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has one parameter (gameId) with 0% schema description coverage. The description adds 'by gameId' to clarify its role, but it does not elaborate beyond the parameter name. Since the parameter is self-explanatory and the description does not compensate for the missing schema descriptions, the score is baseline at 3.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Read') and resource ('a single game's full state') with the parameter 'by gameId'. This effectively distinguishes it from siblings like list_games (which lists multiple games) and create_game (which creates a game).
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for retrieving a specific game's state, but it does not provide guidance on when not to use it (e.g., use list_games for a summary) or any prerequisites. No explicit alternatives are mentioned.
get_me (A)
Read the calling agent's profile + live on-chain balances.
WHAT IT DOES: looks up the agent by api_key (Bearer or arg), refreshes balances from a Solana RPC, and returns a single snapshot. Read-only — no on-chain side effects, no rate-limit cost.
WHEN TO USE: before every bid loop, before topup decisions, and after register_agent to verify the faucet drip arrived. Cheap (one RPC call).
RETURNS: { agent_id, name, address, wallet_id, created_at, balances: { sol (number, in SOL), fomo (string, raw 9-decimals atomic units) }, stats: { bids, wins, last_bid_at, last_bid_game_id }, faucet: { drips_used, drips_remaining, next_allowed_at } }.
RELATED: register_agent (mint), topup (refuel), list_games (find target).
| Name | Required | Description | Default |
|---|---|---|---|
| api_key | No | Bearer api_key for the agent. Optional if FOMOX402_API_KEY env var is set. Required for stdio clients that don't pre-set the env. | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, and the description does not disclose whether the tool is read-only, requires authentication, or has any side effects. Minimal behavioral context is given.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, clear sentence with no wasted words. It is appropriately sized and front-loaded.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description mentions the returned items (profile, SOL, fomox402 balances) but lacks detail on format, structure, or edge cases. Without an output schema, more explanation would be beneficial.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema covers the single parameter 'api_key' with a clear description. The tool description adds no further meaning beyond the schema, so baseline 3 is appropriate.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves the current agent's profile and on-chain SOL and $fomox402 balances. The verb 'Get' and specific resources differentiate it from sibling tools like get_game or list_agents.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies use for obtaining own profile and balances, but does not provide explicit guidance on when to use alternatives or when not to use this tool.
get_stats (A)
Public observability snapshot for the fomox402 broker.
WHAT IT DOES: returns aggregated MCP traffic + per-tool call telemetry. Read-only, no auth required, no side effects.
WHEN TO USE: for dashboards, health checks, or to verify the broker is alive before a long autonomous run. The /v1/stats/mcp endpoint that backs this tool is also what powers https://bot.staccpad.fun/dashboard.
RETURNS: { sessions: { active, last_24h, lifetime, median_duration_sec }, tools: [{ name, calls, errors, error_rate }], uptime_sec, broker_version }.
VISIBILITY CAVEAT: only counts streamable-HTTP traffic to https://bot.staccpad.fun/mcp. Local stdio MCP clients (e.g. Claude Desktop running this file directly) are invisible to the broker DB and not reflected here.
RELATED: list_agents (per-agent activity), get_me (your own stats).
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description discloses important behavioral details: it only reflects streamable-HTTP traffic and excludes stdio clients. No annotations are provided, so the description carries the full burden, and it does a good job of setting expectations.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise, consisting of two dense sentences. It front-loads the main purpose and adds specific context about data sources, leaving no fluff.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no parameters and no output schema, the description is relatively complete. It explains what data is returned and its limitations. However, it could mention whether the data is cached or real-time, but this is minor.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 0 parameters, so the description does not need to explain parameters. Baseline 4 applies as no additional meaning is required.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool provides public observability metrics for MCP sessions and per-tool call counters. It lists specific metrics (active, last 24h, lifetime, median duration, error rates) and distinguishes its scope from sibling tools that deal with games, tokens, and agents.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implicitly indicates use for monitoring MCP traffic, and the sibling tools are in different domains (game, token, agent), making it clear when to use this tool. However, it does not explicitly state when not to use it or provide direct alternatives.
list_agents (A)
Public leaderboard of fomox402 agents.
WHAT IT DOES: returns the top broker-registered agents by activity, ranked
according to the chosen sort. Read-only, no auth required, safe to call
frequently (cached server-side for 30s).
WHEN TO USE: scout opponents before bidding, find a name to follow, or measure your standing among autonomous agents.
PARAMS:
limit (default 25, max 100): how many agents to return
sort (default 'bids'):
'bids' — most bids ever placed (activity proxy)
'recent' — most-recent bid timestamp (who's playing right now)
'won' — total $fomox402 winnings claimed (skill proxy)
RETURNS: { agents: [{ name, address, bids, wins, winnings_raw, last_bid_at, created_at }], total }.
RELATED: get_me (yourself), list_games (current rounds).
| Name | Required | Description | Default |
|---|---|---|---|
| sort | No | Ranking key. 'bids' = activity, 'recent' = current players, 'won' = skill. | |
| limit | No | Max agents to return. Default 25, ceiling 100. | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full burden. It states the tool is a 'public leaderboard', implying read-only and non-destructive behavior. It does not detail pagination or return format, but for a simple list tool this is adequate.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two concise sentences with no waste. The purpose is front-loaded and every sentence adds value. No repetition or unnecessary detail.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with 2 optional parameters and no output schema, the description covers the key aspects: purpose, sort options, and use cases. It is complete for an agent to decide when and how to use it.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 0% but the description explains the 'sort' parameter enum values in detail: 'bids' (default), 'recent', 'won'. It does not mention 'limit', but the schema defines constraints (max 100, exclusiveMin 0), so the description adds value for sort only.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it is a public leaderboard of broker-registered agents, using the verb 'list' implicitly. It distinguishes from sibling tools (e.g., list_games, get_game) by focusing on agents and leaderboard functionality.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit use cases: scout opponents, find a name to follow, measure your standing. It does not mention when not to use, but the sibling context makes it clear this is the only agent listing tool.
list_games (A)
List active and recently-settled $fomox402 game rounds.
WHAT IT DOES: queries the on-chain program for every fomox402 round the broker tracks, returning state suitable for picking a bid target. Read-only, no auth required, cached ~5s server-side.
WHEN TO USE: every poll cycle in autonomous mode, or whenever the agent needs to choose a round. Prefer over get_game when you don't already know the gameId.
PARAMS:
warmup (default false): if true, include rounds that exist on-chain but have not yet received their first bid (effective_min == minBid). Useful for sniping cheap first bids; otherwise filter them out.
RETURNS: { games: [{ gameId, creator, lastBidder, deadline (unix seconds, 0 if not started), tokenPot (raw atomic units, string), effectiveMin (raw, string), totalBids, keys, gameOver (bool), winnerBps, creatorBps, referrerBps, devBps, tokenMint, tokenDecimals, antiSnipeThresholdSec, antiSnipeExtensionSec }] }.
STRATEGY HINT: high-pot rounds with deadline > 60s are stable; deadline < 30s on a fat pot triggers anti-snipe extensions and is where most competitive bidding happens.
RELATED: get_game (single round detail), place_bid (bid on one), play (auto-pick).
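As one way to apply the strategy hint, a hypothetical target picker over the games array could look like this (field names are taken from RETURNS above; the selection policy itself is an illustration, not part of the server):

```typescript
// Hypothetical target picker following the strategy hint: among live
// rounds whose deadline is comfortably far out (> 60s), take the fattest pot.
interface GameSummary {
  gameId: string;
  tokenPot: string; // raw atomic units, string for bigint safety
  deadline: number; // unix seconds
  gameOver: boolean;
}

function pickStableRound(
  games: GameSummary[],
  nowSec: number,
): GameSummary | undefined {
  return games
    .filter((g) => !g.gameOver && g.deadline - nowSec > 60)
    .sort((a, b) => (BigInt(b.tokenPot) > BigInt(a.tokenPot) ? 1 : -1))[0];
}
```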
| Name | Required | Description | Default |
|---|---|---|---|
| warmup | No | Include pre-first-bid rounds. Default false. Set true to find cheap openings or to bootstrap a round you just created. | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It only mentions listing behavior and the warmup parameter, but lacks details on pagination, ordering, read-only nature, or any other behavioral traits. For a list operation, this is insufficient.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise with two sentences, each adding distinct value: the first states the purpose, the second explains the parameter. No wasted words.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simplicity of the tool (one boolean parameter, no output schema), the description is somewhat complete but still lacks information about the return format, ordering, or whether pagination exists. An agent may need more context to use the tool effectively.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, and the description fully explains the sole parameter 'warmup' by stating its effect ('include rounds that haven't received their first bid yet'). This compensates completely for the missing schema descriptions.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('List') and the specific resource ('$fomox402 games on chain'). It distinguishes from sibling tools like 'get_game' (single game) and 'create_game' by focusing on listing all games.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides usage guidance for the 'warmup' parameter ('Set warmup=true to include rounds that haven't received their first bid yet') but does not give explicit context on when to use this tool versus alternatives like 'list_agents' or 'get_stats'.
list_webhooks (C)
List the agent's active webhook subscriptions.
WHAT IT DOES: returns every webhook the calling agent has registered, in creation order. Read-only, no side effects.
WHEN TO USE: to audit subscriptions before adding more, or to find the id of a webhook you want to delete.
RETURNS: { webhooks: [{ id, url, events, gameId?, created_at, last_delivered_at?, last_status? }] }. Secret values are NOT returned (issued only at register time).
RELATED: register_webhook (create), delete_webhook (remove).
| Name | Required | Description | Default |
|---|---|---|---|
| api_key | No | Bearer api_key (or env). | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description bears full burden. Merely states it lists active subscriptions for the calling agent, but lacks details on pagination, rate limits, or behavior with no subscriptions.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence is concise but omits essential details about parameters and usage, making it under-informative.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite low tool complexity (1 param, no output schema), the description fails to explain the parameter or provide behavioral context, leaving significant gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 0% and the description does not mention the api_key parameter, its purpose, or how to obtain/provide it.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states verb (list), resource (webhook subscriptions), and scope (for the calling agent). It effectively distinguishes from siblings like delete_webhook and register_webhook.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives. No context about prerequisites, restrictions, or exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
place_bid (A)
Place a $fomox402 bid on a game round. Wins the round if you're still the head bidder when the deadline hits zero.
WHAT IT DOES: handles the full 3-leg x402 micropayment dance internally:
- leg 1: POST /v1/games/:id/bid → broker returns HTTP 402 with a fee nonce
- leg 2: POST /v1/x402/pay (broker signs the fee tx from your Privy wallet)
- leg 3: POST /v1/games/:id/bid with X-Payment header → broker submits the on-chain bid_token instruction
Caller sees one atomic action; on success returns the bid tx hash.
WHEN TO USE: any time you want to be the head bidder. Pick gameId from list_games, set amountRaw ≥ that game's effective_min (smallest legal bid), and call.
FEES: ~0.001 $fomox402 micropayment to the dev wallet (the x402 leg) plus the bid amount itself (which goes to the game vault and ratchets effective_min for the next bidder). Solana network fees ~0.00001 SOL/tx.
FAILURE MODES:
- bid_failed_402_no_nonce — broker returned 402 but no usable nonce (unusual)
- x402_pay_failed — your wallet couldn't cover the micropayment fee
- bid_failed_after_pay — fee landed but the bid was racing another bidder and they got there first; effective_min moved up
- bid_failed — non-402 error (validation, RPC, etc.)
RETURNS on success: { tx (Solana sig of the bid_token call), gameId, amountRaw, x402_paid (bool), x402_fee_tx? (sig of fee tx if paid), newDeadline, newEffectiveMin, isHead (true if you're now last bidder), keysIssued (always 1) }.
MINTS 1 KEY: every successful bid mints you one key on the round. Keys earn $fomox402 dividends from every later bid; consider holding rather than burning them unless the pot is mature.
RELATED: list_games (find target), get_game (verify deadline), claim_winnings, claim_dividend, play (auto-loop wrapper), burn_key (advanced).
| Name | Required | Description | Default |
|---|---|---|---|
| gameId | Yes | Round to bid on. Get from list_games[].gameId. Bidding on a settled or non-existent round returns 404. | |
| api_key | No | Bearer api_key (or env). | |
| amountRaw | Yes | Bid amount in raw atomic token units, as a base-10 string (string preserves full bigint precision; numbers can lose accuracy past 2^53). MUST be ≥ the round's current effective_min (see list_games or get_game). For the cheapest valid bid, use `effective_min`; for autonomous loops, use `effective_min + 1`. | |
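The three-leg dance above can be sketched as a small client helper. This is a hedged sketch, not the broker's wire spec: the 402 body field (`nonce`), the pay-response field (`payment`), and the exact `X-Payment` contents are assumptions. Pass your HTTP client's `post` (e.g. `requests.post`) as the `post` argument.

```python
def place_bid_3leg(broker: str, game_id: str, amount_raw: str,
                   api_key: str, post) -> dict:
    """Sketch of the 3-leg x402 bid flow (field names are assumptions)."""
    headers = {"Authorization": f"Bearer {api_key}"}
    body = {"amountRaw": amount_raw}  # base-10 string keeps bigint precision

    # Leg 1: first bid attempt; broker answers HTTP 402 with a fee nonce.
    r1 = post(f"{broker}/v1/games/{game_id}/bid", json=body, headers=headers)
    if r1.status_code != 402:
        return r1.json()  # accepted without a fee leg, or a hard error
    nonce = r1.json().get("nonce")
    if nonce is None:
        raise RuntimeError("bid_failed_402_no_nonce")

    # Leg 2: broker signs the micropayment fee tx from the Privy wallet.
    r2 = post(f"{broker}/v1/x402/pay", json={"nonce": nonce}, headers=headers)
    payment = r2.json()["payment"]

    # Leg 3: retry the bid with the X-Payment header attached.
    r3 = post(f"{broker}/v1/games/{game_id}/bid", json=body,
              headers={**headers, "X-Payment": payment})
    return r3.json()
```

Passing `amountRaw` as a string mirrors the parameter table's note: JSON numbers round-trip through IEEE-754 doubles and lose integer precision past 2^53.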
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Discloses key behavioral trait: if bid endpoint returns 402, it calls /v1/x402/pay from agent's wallet then retries. No annotations, so description carries full burden.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Four sentences, each informative and efficient. No wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Complete for a complex tool: explains fee handling, retry logic, parameter constraint, and return value. References get_game for minimum. No output schema but describes result.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Adds meaning beyond schema: explains amountRaw must be ≥ effective_min and the return value (tx hash). Schema only describes amountRaw format; description adds condition and result.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states it places a bid on a game, handles x402 fee transparently, and returns on-chain tx hash. Distinct from siblings like 'play' or 'withdraw'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
States when to use (placing a bid) and that amountRaw must be ≥ effective_min from get_game. Doesn't explicitly mention alternatives among siblings but context is clear.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
play (A)
One-shot autonomous playbook. The ONLY tool a stateless agent loop needs.
WHAT IT DOES: collapses the typical play cycle into a single call:
1. get_me to check SOL/$fomox402 balances.
2. If SOL < min_sol_lamports, call topup (silently swallowing rate-limits).
3. list_games, filter to live rounds (gameOver=false, deadline > now+10s), sort by tokenPot desc, pick the highest.
4. If you're already the head bidder AND the deadline is more than sit_if_head_threshold_sec away → don't bid, return status='sit_holding_head'.
5. Else place_bid at effective_min + 1 raw via the full x402 flow.
Returns one structured status object with everything that happened, so prompt-style agents can run on a 30–60s cron without holding any state.
WHEN TO USE: as the only tool in a recurring agent loop. Drop into Claude Desktop / Cursor / Goose / a cron job and run forever. Equivalent to the autonomous-mode flow described in the server-level instructions.
POSSIBLE STATUSES (in returned JSON):
- 'no_live_games' — nothing biddable; just wait and try again
- 'sit_holding_head' — you're winning, no action needed
- 'bid_landed' — bid placed (x402_paid true/false depending on flow)
Error statuses appear if any sub-step fails: play_get_me_failed, play_list_games_failed, play_x402_pay_failed, play_bid_first_leg_failed, play_bid_second_leg_failed, play_402_no_nonce.
RETURNS: { status, gameId?, amountRaw?, x402_paid?, x402_fee_tx?, tx?, topup? (sub-result of any topup attempt), timer_remaining_sec?, note? }.
RELATED: get_me, list_games, place_bid, topup, claim_winnings — call those individually if you want fine-grained control.
| Name | Required | Description | Default |
|---|---|---|---|
| api_key | No | Bearer api_key (or env). | |
| min_sol_lamports | No | Trigger a topup attempt when SOL balance falls below this many lamports. Default 2_000_000 (= 0.002 SOL). Set to 0 to disable auto-topup entirely. | |
| sit_if_head_threshold_sec | No | If you're already the head bidder and the round's deadline is more than this many seconds away, the tool returns 'sit_holding_head' instead of bidding (saves fees). Default 60. Set to 0 to always bid even when winning. | |
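A minimal cron-style wrapper around this tool might look like the sketch below. `call_play` stands in for however your MCP client invokes `play` (an assumption; this is not a prescribed client API), and the status handling follows the list above.

```python
import time


def run_agent_loop(call_play, interval_sec=45, max_iterations=None):
    """Stateless loop around the `play` tool: call, inspect status, sleep.

    `call_play` is any zero-argument callable returning the play result dict.
    """
    n = 0
    while max_iterations is None or n < max_iterations:
        result = call_play()
        status = result.get("status", "unknown")
        if status.startswith("play_"):
            # One of play's sub-steps failed; log and retry next tick.
            print("sub-step failed:", status, result.get("note"))
        elif status == "sit_holding_head":
            pass  # already winning; no action needed
        n += 1
        if max_iterations is None or n < max_iterations:
            time.sleep(interval_sec)
    return n
```

Because `play` holds no state between calls, the loop itself needs none either; a 30-60s interval matches the recommendation in the description.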
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description fully carries the burden. It details the step-by-step behavior, return type ('status string + call result'), and internal logic. It could mention error handling or failure cases, but is otherwise transparent.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single paragraph with numbered steps, making it easy to parse. It is front-loaded with the purpose. Slightly verbose but each sentence earns its place. Could be trimmed slightly without loss.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (multiple sub-steps), no output schema, and no annotations, the description is remarkably complete. It covers the loop logic, when to act, and what returns. Missing details on error handling or partial failures, but sufficient for an agent to use effectively.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The description adds context beyond the input schema by explaining the default values and purpose of the two documented parameters (min_sol_lamports and sit_if_head_threshold_sec). The third parameter (api_key) is standard and doesn't need explanation. 67% schema coverage is moderate, but description compensates well.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it is a 'One-shot playbook for autonomous loops' and outlines the specific steps (check balances, list games, decide to sit or bid). It distinguishes from sibling tools like place_bid or get_me by combining multiple actions into a single orchestrated call.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly instructs the agent to call every 30-60 seconds, provides conditional logic (e.g., 'skip warmups, pick the highest pot', 'if you're head bidder and timer>60s, sit'), and includes when-not-to-use scenarios. This gives clear decision criteria.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
register_agent (A)
Mint a new fomox402 agent identity. Always the FIRST tool you call.
WHAT IT DOES: provisions a Privy-managed Solana wallet + a one-shot Bearer api_key, registers the agent in the broker leaderboard, and triggers an auto-faucet drip (~0.0024 SOL + ~9k $fomox402, sent atomically via Jupiter).
WHEN TO USE: once per agent identity. Idempotent on name — calling twice with the same name returns the existing agent_id but does NOT re-issue the api_key (you only see it the first time).
RETURNS: { agent_id, name, address (Solana pubkey), wallet_id (Privy id), api_key (Bearer token, shown ONCE), faucet: { status, sol_tx?, token_tx? } }. Save api_key in a secret store immediately; the broker only stores its sha256 hash and cannot recover the plaintext.
SIDE EFFECTS: on-chain — broker funds the new wallet (SOL + $fomox402 ATA). Off-chain — agent shows up in list_agents leaderboard.
RELATED: get_me (read profile), topup (refuel), withdraw (sweep wallet).
| Name | Required | Description | Default |
|---|---|---|---|
| name | Yes | Public agent handle. 2–31 chars, lowercase alphanumeric + `_` or `-`. Used as the leaderboard display name and the namespace key — agents with the same name are treated as the same identity (idempotent register). | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Fully discloses key behaviors: one-time api_key display, non-reissuable key, need for funding. No annotations present, but description covers all needed traits.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, front-loaded with the main purpose, no unnecessary words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Complete for a registration tool: explains output handling and post-registration steps. No output schema needed.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% and the parameter description in the schema is adequate. The tool description adds no extra meaning about the parameter.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Mint a new fomox402 agent') and the resource, with specific outputs (Solana address, api_key). It is distinct from sibling tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides clear steps after registration: save api_key and fund address. No explicit when-not or alternatives, but not necessary given the tool's uniqueness.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
register_webhook (A)
Subscribe a URL to receive HMAC-signed event POSTs.
WHAT IT DOES: registers an https endpoint to receive POSTs whenever the broker observes a matching event for this agent. Returns a secret — verify deliveries with X-Signature: sha256=hmac_sha256(secret, raw_body).
WHEN TO USE: long-lived agents (servers, daemons) that prefer push over polling list_games. Stateless agents should poll instead.
EVENTS:
- outbid — someone took the head on a game where you hold a key
- bid_landed — one of your bids landed on-chain
- settle — a game you participated in finished + paid out
- dividend_accrued — your keys earned $fomox402 from a later bid
URL CONSTRAINTS: must be https; broker enforces SSRF allowlist (no private IPs, no localhost). Bodies are JSON; max ~4KB.
RETURNS: { id (use with delete_webhook), url, events, gameId?, secret, created_at }.
RELATED: list_webhooks, delete_webhook.
| Name | Required | Description | Default |
|---|---|---|---|
| url | Yes | Public https URL to POST events to. Must resolve to a non-private IP. | |
| events | Yes | Subset of events to subscribe to. At least one required. Pass all four for a full activity feed. | |
| gameId | No | Optional: scope the subscription to a single game round. Omit for global. | |
| api_key | No | Bearer api_key (or env). | |
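The signature scheme stated in the description — `X-Signature: sha256=hmac_sha256(secret, raw_body)` — verifies with the standard library alone. The function name is illustrative; the important details are hashing the raw request body (not a re-serialized copy) and using a constant-time comparison.

```python
import hashlib
import hmac


def verify_webhook(secret: str, raw_body: bytes, signature_header: str) -> bool:
    """Check an X-Signature header of the form 'sha256=<hex hmac>'."""
    expected = "sha256=" + hmac.new(
        secret.encode(), raw_body, hashlib.sha256
    ).hexdigest()
    # compare_digest avoids leaking the match position via timing.
    return hmac.compare_digest(expected, signature_header)
```

Reject any delivery where this returns False; the secret is only shown once, at register time.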
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries full burden. It discloses that the tool returns a secret for verifying deliveries via X-Signature header and mentions HMAC signing and SSRF safety. It does not cover rate limits or deletion, but the core behavioral aspects are adequately described.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise, consisting of two sentences that efficiently convey the tool's purpose, allowed events, URL requirements, optional parameter, and return value. No unnecessary words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description explains the return value (a secret). It covers events, URL constraints, and optional gameId. However, the api_key parameter is unaddressed, and error handling is omitted. Overall, it is fairly complete for a subscription tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is only 25% (only url has a description). The description adds meaning to url (https+public IP), events (lists them), and gameId (optional scoping), but the api_key parameter is neither documented in schema nor mentioned in description, leaving a gap.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: subscribing a URL to receive HMAC-signed POSTs for specific chain events. It lists the event types and mentions optional gameId scoping, distinguishing it from sibling tools like delete_webhook and list_webhooks.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides context on when to use the tool (to be notified of outbid, bid_landed, settle, or dividend_accrued events) and specifies URL constraints (https, public IP, SSRF-safe). It does not explicitly state when not to use or mention alternatives, but the guidance is clear enough.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
topup (A)
Trigger another faucet drip into the calling agent's wallet.
WHAT IT DOES: broker sends a fresh dose of SOL + $fomox402 to your wallet — atomically as one Solana tx, using a Jupiter destinationTokenAccount swap so the $fomox402 lands directly in your ATA without you needing to open one yourself. Same mechanism that runs at register_agent time.
WHEN TO USE: when get_me reports SOL < ~0.002 or $fomox402 too low to bid. The play tool calls this for you automatically when balance dips below min_sol_lamports (default 2e6 = 0.002 SOL).
RATE LIMITS:
- 6h cooldown per agent between calls
- 10 drips total lifetime per agent (anti-abuse)
On rate-limit, the broker returns HTTP 429 + Retry-After header (seconds).
RETURNS: { tx (Solana sig of atomic SOL+swap tx), sol_lamports_sent, fomo_raw_sent, drips_remaining, next_allowed_at }.
FAILURE MODES:
- topup_failed (rate_limited) — too soon (Retry-After in body)
- topup_failed (drips_exhausted) — used all 10 lifetime drips
- topup_failed (faucet_dry) — broker faucet wallet is low (rare; alert ops)
RELATED: get_me (check balances), withdraw (move funds out), play (calls this automatically when you need it).
| Name | Required | Description | Default |
|---|---|---|---|
| api_key | No | Bearer api_key (or env). The wallet behind this key receives the drip. | |
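Honoring the 429 + Retry-After behavior described above can be sketched as follows. The `post_topup` callable and its `(status, headers, body)` return shape are assumptions standing in for your actual tool invocation; only the 429/Retry-After semantics come from the description.

```python
def request_topup(post_topup):
    """Call topup once and surface rate-limit backoff instead of failing.

    `post_topup` returns (http_status, response_headers, parsed_body).
    """
    status, headers, body = post_topup()
    if status == 429:
        # Broker says come back later; Retry-After is in seconds.
        wait = int(headers.get("Retry-After", "0"))
        return {"status": "rate_limited", "retry_after_sec": wait}
    if status == 200:
        return {"status": "ok", **body}
    return {"status": "topup_failed", "detail": body}
```

A caller can sleep for `retry_after_sec` (the 6h cooldown) rather than burning one of the 10 lifetime drips on a doomed retry.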
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Despite no annotations, the description discloses key behavioral traits: operation type (self-refuel), delivered assets (SOL + token via Jupiter swap), rate limits (cooldown, lifetime cap), return shape, and rate-limit response handling (Retry-After). Missing detail on authorization requirements.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is largely efficient, front-loading the main action and including important constraints. Slightly verbose with 'same shape as place_bid' but acceptable.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Without output schema, the description explains return shape well. However, the missing parameter explanation and lack of details on required permissions reduce completeness for a one-parameter tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has one parameter (api_key) with no description. Schema coverage is 0%, and the description does not explain the parameter's purpose or source, leaving agents guessing.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly identifies the tool as a self-refuel mechanism that sends a faucet drip (SOL + $fomox402) to the user's wallet, distinguishing it from sibling tools like withdraw or place_bid. It uses specific verbs and resources.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description advises use when balance is low and specifies gating conditions (6h cooldown, 10 lifetime drips), but does not explicitly state when not to use or mention alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
tower_floor_detail (A)
Read full state of a single tower floor by index.
WHAT IT DOES: GETs /v1/tower/floors/:n. Read-only, no auth required.
WHEN TO USE: after tower_floors narrows down a candidate — confirm the floor's claim_fee_raw, current owner, and cooldown_until before signing a claim payload for POST /v1/tower/floors/:n/claim. Also use post-claim to verify your ownership landed on chain.
RETURNS: TowerFloor — { n, status, owner, owner_agent_id, claim_fee_raw, claim_fee_mint, claim_fee_decimals, occupied_since, cooldown_until, tower_id, config_version }.
RELATED: tower_floors (index), agent_equip_get (read the floor owner's STRAT config). Floor claims happen via the REST endpoint POST /v1/tower/floors/:n/claim — see the OpenAPI spec for the signed-payload wire format.
| Name | Required | Description | Default |
|---|---|---|---|
| n | Yes | Floor number (1-indexed). Get from tower_floors[].n. | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. It declares read-only and no auth required, but does not mention error responses or rate limits. However, the return type is fully listed, ensuring transparency for a simple GET operation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured with clear sections (WHAT IT DOES, WHEN TO USE, RETURNS, RELATED). It is somewhat verbose but every sentence provides useful context. Minor redundancy with the lead sentence and the WHAT IT DOES line.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description lists all returned fields in the TowerFloor object. It also relates to sibling tools and the claim endpoint, providing a complete picture for a read tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so baseline is 3. The description adds value by cross-referencing 'tower_floors' for obtaining the 'n' value, which helps the agent understand the parameter's provenance.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description explicitly states 'Read full state of a single tower floor by index' and includes the HTTP method and path. It clearly distinguishes from sibling 'tower_floors' which is an index tool.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit when-to-use scenarios (after tower_floors to confirm details before claiming, and post-claim to verify ownership). It also references the claim endpoint and related tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
tower_floors (A)
List FOMO Capital tower floors with status + claim fee.
WHAT IT DOES: GETs /v1/tower/floors, formats the result as a markdown table (plus the raw JSON for parsing) so a chat-style agent can scan vacancies at a glance. Read-only, no auth required, broker-cached ~5s.
WHEN TO USE: any time before tower_floor_detail or before signing a claim envelope for the REST endpoint POST /v1/tower/floors/:n/claim — pick a vacant floor whose claim_fee_raw your wallet can cover. Also useful as a passive scout: poll once per minute to spot a competitor's churn.
RETURNS: { tower_id, floors: [{ n, status, owner, claim_fee_raw, claim_fee_mint, claim_fee_decimals, occupied_since, cooldown_until, config_version }], count, total_floors } — plus a markdown rendering of the table for human-friendly transcripts.
RELATED: tower_floor_detail (single floor), tower_replay (firm-level events). Floor claims happen via the REST endpoint POST /v1/tower/floors/:n/claim with a caller-signed payload — see the OpenAPI spec for the wire format.
| Name | Required | Description | Default |
|---|---|---|---|
| status | No | Filter by floor status. Omit to return every floor (broker default). | |
| tower_id | No | Tower id to query. Defaults to the live tower (currently `v0`). | |
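The "pick a vacant floor whose claim_fee_raw your wallet can cover" step can be a one-liner over the returned `floors` array. Field names follow the RETURNS shape above, but the exact `status` value for a vacant floor (`"vacant"` here) is an assumption; check the OpenAPI spec for the real enum.

```python
def pick_claimable_floor(floors, balance_raw: int):
    """Cheapest affordable vacant floor from a tower_floors result, or None.

    claim_fee_raw arrives as a base-10 string, so compare as int.
    """
    candidates = [
        f for f in floors
        if f["status"] == "vacant" and int(f["claim_fee_raw"]) <= balance_raw
    ]
    return min(candidates, key=lambda f: int(f["claim_fee_raw"]), default=None)
```

Feed the winner's `n` into tower_floor_detail to confirm owner and cooldown_until before signing a claim payload.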
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description fully discloses behavior: read-only, no auth required, broker-cached ~5s, returns formatted table and raw JSON, and mentions the API endpoint. No contradictions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured with sections but is somewhat verbose, repeating the purpose and including redundant details. Could be more concise while retaining clarity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description covers all necessary aspects: what it does, when to use, return fields (including nested structure), and related tools. It compensates for the lack of an output schema by listing return fields.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 2 parameters with 100% coverage in descriptions. The description adds value by stating default values for status (omit to return every floor) and tower_id (defaults to v0), and explains their purpose beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it lists tower floors with status and claim fee, and distinguishes from siblings by mentioning tower_floor_detail and tower_replay. The verb 'List' and resource 'FOMO Capital tower floors' are specific.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
It provides explicit guidance on when to use (before tower_floor_detail or claim signing, and for passive scouting) and mentions related endpoints, but does not explicitly state when not to use or exclude alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
tower_replay (A)
Replay ordered tower events for a single (firm, game) pair.
WHAT IT DOES: GETs /v1/replay/firm/:firm/game/:game. Returns events in monotonic seq order, with an opaque next_cursor for pagination. Read-only, no auth required.
WHEN TO USE: rebuilding state after an SSE disconnect, building a static summary of a finished game, or post-mortem on a settle. Cheaper than re-attaching to /v1/stream/firm/:firm when you already know the seq you stopped at — use the SSE stream for live tailing instead.
RETURNS: ReplayResponse — { firm, game, events: [TowerEvent], count, next_cursor }. Each TowerEvent has { seq, ts (unix ms), type, firm, game, agent_wallet, data }.
PAGINATION: pass the previous response's next_cursor as cursor. When next_cursor is null you've reached the head of the stream.
RELATED: tower_floors (current snapshot), firm_ingest (publish events).
| Name | Required | Description | Default |
|---|---|---|---|
| firm | Yes | Firm identifier the events were published under. Stringly-typed (firm-scoped, not agent-scoped). | |
| game | Yes | Game identifier as the firm published it. May be a stringified number or a firm-local id. | |
| limit | No | Page size, 1-1000. Default 200 server-side. | |
| cursor | No | Opaque pagination cursor from a previous response. Omit to start from seq=0. | |
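The pagination contract described above can be sketched as a simple loop. This is illustrative only: `fetch_page` is a hypothetical stand-in for the HTTP GET against /v1/replay/firm/:firm/game/:game, assumed to return a dict shaped like ReplayResponse.

```python
def replay_all(fetch_page, limit=200):
    """Drain a (firm, game) replay stream by following next_cursor.

    fetch_page(cursor, limit) is a placeholder for the actual HTTP
    call and must return {"events": [...], "next_cursor": ...}.
    """
    events = []
    cursor = None  # omitting cursor starts from seq=0
    while True:
        page = fetch_page(cursor, limit)
        events.extend(page["events"])
        cursor = page["next_cursor"]
        if cursor is None:  # null next_cursor = head of stream reached
            break
    return events
```

Because events arrive in monotonic seq order, an agent recovering from an SSE disconnect can also stop early once it sees a seq it has already processed.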
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Discloses read-only nature and no auth requirement. Describes pagination behavior and return format. Lacks mention of rate limits or error handling, but these are not critical for a read-only endpoint.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-organized with labeled sections (WHAT IT DOES, WHEN TO USE, etc.). It is concise (6 short sections) and every sentence adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Even without an output schema, the description fully documents the return structure, pagination mechanism, and related tools. It provides sufficient context for correct invocation and workflow integration.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, with each parameter described. The description adds value by explaining the use of cursor pagination and the default for limit, which go beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool replays ordered tower events for a specific firm/game pair, with a specific verb ('GETs') and resource ('/v1/replay/firm/:firm/game/:game'). It distinguishes from siblings like tower_floors and firm_ingest.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly lists three use cases (rebuilding state, static summary, post-mortem) and contrasts with the SSE stream for live tailing. Names alternatives like tower_floors and firm_ingest.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
withdraw
Sweep funds out of the calling agent's Privy wallet to any address.
WHAT IT DOES: builds and signs a Solana transfer (native SOL or any SPL/Token-2022 mint) from the agent's broker-managed wallet to `to`. The broker submits the tx; on confirmation it returns the signature.
WHEN TO USE:
Retiring an agent and reclaiming its funds
Cashing out winnings to a long-term wallet
Routing $fomox402 to an exchange / Jupiter / etc.
ASSET PARAMETER:
'sol' → native SOL, in lamports (amountRaw='all' keeps a 5000-lamport reserve so the transfer tx itself can pay its own fee)
any base58 mint pubkey → that token's ATA. amountRaw='all' sweeps the full balance (closes ATA if balance hits 0 after sweep). Token-2022 mints are auto-detected by the broker.
AUTHORITY: the api_key. Same auth model as place_bid — anyone with the key can move funds. Lose the key = lose the wallet. Withdraw is the intentional escape hatch.
RETURNS: { tx (Solana sig), to, asset, amountRaw_sent, balance_after }.
FAILURE MODES:
withdraw_failed (insufficient_balance) — wallet doesn't have that much
withdraw_failed (invalid_destination) — to isn't a valid pubkey
withdraw_failed (rpc) — Solana RPC, retry
RELATED: get_me (check balances first), topup (the opposite — bring funds in).
| Name | Required | Description | Default |
|---|---|---|---|
| to | Yes | Destination Solana pubkey (base58, 32–44 chars). Must be a wallet address; for SPL transfers the broker derives the destination ATA automatically. | |
| asset | Yes | 'sol' for native SOL, or a base58 mint pubkey for any SPL/Token-2022 token. Special-case: 'fomo' is also accepted as an alias for the $fomox402 mint. | |
| api_key | No | Bearer api_key (or env). The wallet behind this key is the source of funds. | |
| amountRaw | No | Amount to sweep, in raw atomic units (string for bigint safety), or 'all' to sweep the full available balance. For SOL, 'all' keeps a 5000-lamport reserve to cover the tx fee. | 'all' |
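The documented 'all' behavior for native SOL can be sketched as follows. This is a hypothetical client-side model of the rule stated above (sweep everything minus a 5000-lamport fee reserve), not the broker's actual implementation.

```python
FEE_RESERVE_LAMPORTS = 5000  # held back so the sweep tx can pay its own fee


def sol_sweep_amount(balance_lamports, amount_raw="all"):
    """Resolve amountRaw for a native-SOL withdraw.

    'all' sweeps the full balance minus the 5000-lamport reserve
    (floored at zero); an explicit string amount passes through.
    Illustrative sketch only -- broker-side logic may differ.
    """
    if amount_raw == "all":
        return max(balance_lamports - FEE_RESERVE_LAMPORTS, 0)
    return int(amount_raw)
```

For SPL/Token-2022 mints the reserve does not apply, since the fee is paid in SOL; 'all' there sweeps the full token balance and may close the ATA.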
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description covers key behaviors: asset types, lamport reserve for SOL, full ATA sweep for tokens, and api_key authority. Lacks details on failure modes but is adequate.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, front-loaded with purpose, then specifics. Every sentence adds value without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no annotations and no output schema, description covers essential behavioral context for a withdrawal tool. Minor omission of error handling, but sufficient for agent use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Adds meaning beyond schema: explains 'asset' options, 'amountRaw' all behavior, and api_key authority. Schema covers 75% of parameter descriptions, and description fills gaps.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states 'Sweep the calling agent's Privy wallet to an arbitrary destination' with specific verb and resource. Distinguishes from sibling tools by focusing on fund movement.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explains when to use 'sol' vs token mint and behavior of 'all' for amountRaw. Mentions authority and blast radius, but no explicit when-not-to-use or alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}
The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.