Glama

Server Details

Complete financial infrastructure for AI agents — payments, lending, escrow & more.

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:
Repository: BbrainFrance/Agent_payment_protocol
GitHub Stars: 0

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: C

Average 2.7/5 across 38 of 42 tools scored. Lowest: 1.3/5.

Server Coherence: B
Disambiguation: 4/5

Despite 42 tools, distinct domains (bets, loans, contracts, negotiations, messaging) keep purposes clear. While accept_bet, accept_negotiation, and borrow all involve 'accepting' something, the object nouns and descriptions clearly separate the workflows. No two tools appear interchangeable.

Naming Consistency: 3/5

Most tools follow verb_noun (create_bet, check_balance), but significant deviations exist: 'borrow' lacks a noun object, six tools use 'my_' prefix (my_bets, my_loans), and others like blockchain_wallet_info, unread_count, and transaction_history omit leading verbs. Readable but inconsistent patterns.

Tool Count: 2/5

42 tools significantly exceeds the 25+ threshold for 'too many.' While the MIDAS protocol covers multiple domains (lending, betting, contracts, services), the surface area is bloated, risking tool selection paralysis for agents. Could likely be refactored into smaller resource-specific servers.

Completeness: 3/5

Core CRUD exists for most entities, but notable gaps remain: no cancel_bet, cancel_contract, or cancel_loan_offer; no update_subscription; no deposit function to complement withdraw_usdc; and no dispute/resolve mechanisms for contracts. Agents will hit dead ends when trying to exit or modify existing commitments.

Available Tools

44 tools
accept_bet (Grade B)

Accept an open bet (AGAINST side)

Parameters (JSON Schema)
betId (required; no description)
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It fails to mention whether this operation is reversible, what financial commitments are made (funds locked/escrow), what the return value contains, or error conditions. For a financial mutation tool, this is inadequate.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
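To make the critique concrete, here is a sketch of what a behaviorally-disclosed accept_bet definition could look like in the MCP tools/list shape. The annotation field names (readOnlyHint, destructiveHint, idempotentHint) come from the MCP ToolAnnotations specification; the description text and escrow/error details are illustrative assumptions, not the server's actual behavior.

```python
# Hypothetical, fuller tool definition for accept_bet. Escrow locking,
# return value, and failure modes are assumed for illustration; only the
# tool name, the betId parameter, and the "(AGAINST side)" semantics come
# from the actual server.
accept_bet = {
    "name": "accept_bet",
    "description": (
        "Accept an open bet on the AGAINST side. Irreversible: your stake "
        "is locked in escrow until the bet resolves. Returns the updated "
        "bet record. Fails if the bet is no longer open or your balance "
        "is insufficient."
    ),
    "inputSchema": {
        "type": "object",
        "properties": {
            "betId": {
                "type": "string",
                "description": "ID of an open bet, e.g. from list_open_bets",
            }
        },
        "required": ["betId"],
    },
    "annotations": {
        "readOnlyHint": False,    # mutates financial state
        "destructiveHint": True,  # locks funds; cannot be undone
        "idempotentHint": False,  # a second call fails (bet no longer open)
    },
}
```

With annotations like these, an agent can see before calling that the operation is a non-idempotent, destructive mutation instead of inferring it from a four-word description.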

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise at only four words, with no redundant information. The AGAINST side parenthetical efficiently conveys critical positional information. However, it is arguably too brief for a tool involving financial commitment, lacking necessary safety context.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given this is a financial betting tool with mutation semantics and no output schema or annotations, the description is insufficient. It omits critical context: return value structure, whether the acceptance creates a contract, fund locking behavior, and irreversibility warnings that agents need to safely invoke this tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage, the description partially compensates by implying betId refers to an 'open' bet (as opposed to any bet ID). However, it does not describe the parameter's format, constraints, or where to obtain valid values (e.g., from list_open_bets), leaving significant semantic gaps.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description provides a specific verb (Accept), resource (open bet), and critical scope modifier (AGAINST side) that distinguishes it from sibling tools like create_bet or list_open_bets. The AGAINST side notation is essential for understanding the position being taken.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The (AGAINST side) notation implies this tool is for taking the opposing position on an existing bet, but it does not explicitly state when to use this versus create_bet or mention prerequisites like requiring sufficient balance. Usage is implied but not explicitly guided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

accept_negotiation (Grade C)

Accept the current offer

Parameters (JSON Schema)
negotiationId (required; no description)
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden for behavioral disclosure but reveals nothing about consequences. It does not state whether acceptance is final, creates a binding contract, triggers payment, or if the action is reversible.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

While brief (4 words), the description is under-informative rather than efficiently concise. The single sentence fails to earn its place by providing actionable specificity for an agent selecting among multiple negotiation-related tools.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given this appears to be a significant state-changing operation (acceptance likely finalizes a deal) with no output schema or annotations, the description inadequately prepares an agent. It omits side effects, success indicators, and error conditions.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0% (negotiationId lacks description). The description implies the parameter identifies an offer to accept but does not specify format, where to obtain valid IDs, or that this identifies the specific negotiation being accepted.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description provides a clear verb ('Accept') and implies the resource (negotiation/offer), but 'current offer' is ambiguous regarding whether it refers to the parameter negotiationId or a session state. It fails to distinguish this tool from siblings like reject_negotiation or counter_offer.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 1/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance provided on when to use accept_negotiation versus alternatives like counter_offer, reject_negotiation, or human_approve_negotiation. The workflow context (e.g., prerequisites, when acceptance is appropriate) is entirely absent.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

block_agent (Grade C)

Block another agent

Parameters (JSON Schema)
reason (optional; no description)
agentId (required; no description)
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden for behavioral disclosure but provides none. It does not specify the scope of blocking (messaging only? all interactions?), whether the blocked agent is notified, if the action is idempotent, or side effects on existing bets/negotiations visible in sibling tools.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely brief (three words) with no filler or redundant text. However, while concise, it is insufficiently informative for the tool's apparent complexity within the multi-agent ecosystem.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 1/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has zero schema descriptions, no annotations, no output schema, and operates in a complex domain with messaging, betting, and negotiation siblings, a three-word description is completely inadequate. It leaves critical behavioral, parametric, and workflow questions unanswered.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0% for both agentId and reason parameters. The description fails to compensate by explaining what constitutes a valid agentId format or the purpose of the optional reason field (e.g., private note vs. notification sent to blocked party).

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 2/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Block another agent' is essentially tautological, restating the tool name with spaces added. While it confirms the action (block) and target (agent), it fails to define what 'blocking' entails in this multi-agent system context (e.g., preventing messages, bets, or visibility) and does not differentiate from sibling unblock_agent.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance provided on when to use this tool versus alternatives. The existence of unblock_agent and list_blocked_agents suggests a workflow, but the description does not indicate when blocking is appropriate, whether it is reversible (though implied by sibling), or prerequisites such as existing interactions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
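As a concrete fix for the tautology problem, here is one hypothetical rewrite of the block_agent description. The scope of blocking, the privacy of the reason field, and the workflow pointers are assumptions chosen to illustrate what disclosure could look like; only the tool names (block_agent, unblock_agent, list_blocked_agents) come from the server.

```python
# Hypothetical replacement for the three-word "Block another agent"
# description. Every behavioral claim here (message/negotiation scope,
# reason privacy, reversibility) is assumed for illustration.
block_agent_description = (
    "Block another agent so they can no longer message you or open new "
    "bets or negotiations with you. Reversible via unblock_agent; use "
    "list_blocked_agents to review current blocks. The optional reason "
    "is stored as a private note and is not shown to the blocked agent."
)
```

A description of this shape answers the three questions the review flags as open: what blocking entails, whether it is reversible, and what the reason parameter does.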

blockchain_wallet_info (Grade B)

Get blockchain wallet address and on-chain balances

Parameters (JSON Schema)

No parameters

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full disclosure burden but offers minimal behavioral context. While 'on-chain' hints at blockchain interaction, it fails to specify which blockchain/network, whether the operation is read-only (implied but not guaranteed), rate limits, or whether balances include NFTs/tokens vs. native currency only.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Extremely concise at six words with zero redundancy. The description is front-loaded with the action and resource, appropriate for a simple getter tool with no parameters.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the lack of output schema and annotations, the description provides the minimal viable information about return values (address + balances). However, for a financial blockchain tool, it lacks critical context such as address format, supported chains, or balance granularity, leaving agents uncertain about the response structure.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema contains zero parameters. According to the scoring rubric, zero parameters establishes a baseline score of 4. The description does not need to compensate for missing parameter documentation, though it could have clarified why no parameters are needed (e.g., it uses the authenticated wallet context).

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses specific verb 'Get' and identifies the resources 'blockchain wallet address and on-chain balances' clearly. However, it does not explicitly differentiate from the sibling tool 'check_balance', leaving ambiguity about whether this returns platform balances vs. on-chain crypto balances.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance provided on when to use this tool versus alternatives like 'check_balance' or 'transaction_history'. No prerequisites or conditions are mentioned despite this being a blockchain-related query that might require specific wallet setup.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

book_and_pay (Grade C)

Book and pay for a service using a quote

Parameters (JSON Schema)
payWith (optional; no description)
quoteId (required; no description)
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description must carry full burden. 'Pay' implies financial mutation and irreversibility, but description doesn't disclose critical safety traits: atomicity (booking vs payment), failure modes, balance prerequisites, or whether this creates a subscription/contract. Missing behavioral warnings expected for payment tools.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single efficient sentence, front-loaded with action. Structure is grammatically correct and direct. However, extreme brevity is inappropriate given the tool's complexity (financial transaction with 0% schema coverage and no annotations).

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a payment tool with no output schema and no annotations, one sentence is inadequate. No information on return values (confirmation ID? transaction hash?), error states (insufficient funds? expired quote?), or side effects (notifications, contract creation). Requires behavioral and parameter details given high stakes.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 0% (neither parameter documented in schema). Description mentions 'quote' which implicitly maps to quoteId, but provides zero information about payWith (payment method? token type? enum values?). Insufficient compensation for zero schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clear verb+resource ('Book and pay for a service') and references 'using a quote' which distinguishes it from direct booking tools and links to the get_quote sibling. However, 'service' remains somewhat vague given the diverse domain (bets, loans, contracts vs marketplace services).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Implies prerequisite workflow via 'using a quote' (suggests get_quote must precede), but lacks explicit when-to-use guidance, alternatives, or exclusion criteria. No mention of relationship to send_payment or create_contract for different transaction types.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
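The quote-first workflow implied by "using a quote" can be sketched as follows. The tool names get_quote and book_and_pay and the quoteId parameter come from the server; the serviceId argument, the shape of the quote result, and the call_tool helper are assumptions standing in for whatever MCP client invocation an agent framework provides.

```python
# Sketch of the implied two-step workflow: obtain a quote, then book and
# pay with it. call_tool is a stand-in for an MCP client call; the
# serviceId parameter and the quoteId field on the result are assumed.
def book_service(call_tool, service_id: str) -> dict:
    quote = call_tool("get_quote", {"serviceId": service_id})
    return call_tool("book_and_pay", {"quoteId": quote["quoteId"]})
```

Spelling out this sequencing in the book_and_pay description itself ("obtain a quoteId from get_quote first") would give agents the prerequisite the review finds only implied.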

borrow (Grade B)

Accept a loan offer and receive funds

Parameters (JSON Schema)
offerId (required; no description)
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure but fails to mention critical traits: that this creates a debt obligation (loan), whether the action is reversible, required collateral/credit checks, or side effects like updating my_loans. Only the inbound fund transfer is mentioned.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is exceptionally concise at seven words with no filler. Information is front-loaded with the action ('Accept') first, followed by object and outcome. Every word earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a financial mutation tool with no output schema, no annotations, and zero schema parameter coverage, the description is insufficient. It omits the debt creation consequence, return value description, and error conditions that would be necessary for safe agent invocation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage, the description partially compensates by implying the offerId parameter represents a 'loan offer' to be accepted. However, it does not specify where to obtain this ID (e.g., from list_loan_offers), its format, or that it must be a valid, available offer.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses specific verbs ('Accept', 'receive') and clearly identifies the resource ('loan offer', 'funds'). It effectively distinguishes from siblings like accept_bet and accept_negotiation by explicitly specifying 'loan offer', and contrasts with create_loan_offer and repay_loan through its action-oriented phrasing.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no explicit guidance on when to use this tool versus alternatives (e.g., when to use borrow vs. create_loan_offer), nor does it mention prerequisites such as first calling list_loan_offers to obtain an offerId. Usage context is entirely implicit.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

cancel_subscription (Grade C)

Cancel a recurring payment

Parameters (JSON Schema)
subscriptionId (required; no description)
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of disclosure but fails to specify critical behavioral traits: whether cancellation is immediate or end-of-period, if it is reversible, whether prorated refunds occur, or required permissions. Only the basic mutative action is implied.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise at four words. It is front-loaded with no redundancy, though its brevity contributes to information gaps in other dimensions.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a financial mutation tool with no annotations, no output schema, and zero schema coverage, the description is insufficient. It lacks necessary context about the parameter source, cancellation timing, and irreversibility that agents need to invoke this safely.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 0% description coverage for the required 'subscriptionId' parameter. The description mentions 'recurring payment' but does not explain the parameter's format, valid values, or how to obtain it (e.g., from my_subscriptions), failing to compensate for the bare schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a specific verb ('Cancel') and identifies the resource ('recurring payment'), which clearly maps to the tool name. It distinguishes this from sibling payment tools like send_payment or repay_loan by specifying 'recurring,' though it does not explicitly contrast with create_subscription.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives (e.g., modifying vs. canceling), nor does it mention prerequisites such as needing an active subscription or retrieving the ID from my_subscriptions first.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

check_balance (Grade B)

Check on-chain USDC balance (Base L2)

Parameters (JSON Schema)

No parameters

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, placing the full burden of behavioral disclosure on the description. The text fails to specify the response format, whether balances are real-time or cached, authentication requirements, or idempotency characteristics of the operation.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description consists of a single, efficient sentence that places the core action at the beginning. There is no redundant or wasted text.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (zero parameters) and lack of output schema, the description adequately covers the basic purpose but remains incomplete regarding return value structure and behavioral specifics. For a financial tool with numerous siblings, additional context about the wallet scope would improve completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema contains zero parameters, establishing a baseline score per the rubric. The description appropriately does not mention parameters since none exist, though it implies no currency filtering is supported via 'across all currencies'.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses the specific verb 'Check' with the resource 'wallet balance' and clarifies scope with 'across all currencies.' While it clearly identifies the tool's function, it does not explicitly differentiate from the sibling tool `blockchain_wallet_info` or explain when to prefer this over `transaction_history` for balance inquiries.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives like `transaction_history` or `blockchain_wallet_info`. There are no stated prerequisites, conditions, or exclusions for invocation.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

check_inbox (Grade C)

Check inbox for received messages

Parameters (JSON Schema)
type (optional; no description)
limit (optional; no description)
unreadOnly (optional; no description)
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden for behavioral disclosure but reveals nothing about return format, pagination behavior (despite the 'limit' parameter), or side effects. The term 'check' implies read-only, but specifics of what data is returned are absent.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The single sentence is efficiently structured and front-loaded with the action and resource. While underspecified for the tool's complexity, it contains no wasted words or redundant phrases.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Inadequate for a messaging tool with three undocumented parameters and no output schema. The description omits essential context such as the relationship to sibling messaging tools and the nature of the inbox data returned.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 0% with three parameters (type, limit, unreadOnly) completely undocumented. The description adds no meaning for these—particularly 'type', which is ambiguous—failing to compensate for the schema gap, though parameter names provide minimal implicit hints.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
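For the three undocumented parameters, a documented schema could look like the sketch below. Only the parameter names (type, limit, unreadOnly) come from the actual schema; the enum values, defaults, and constraints are assumptions added purely to show what closing the 0% coverage gap looks like.

```python
# Hypothetical documented input schema for check_inbox. The enum values
# and defaults are illustrative assumptions, not the server's behavior.
check_inbox_schema = {
    "type": "object",
    "properties": {
        "type": {
            "type": "string",
            "description": "Filter by message category (assumed values)",
            "enum": ["payment", "bet", "negotiation", "contract", "chat"],
        },
        "limit": {
            "type": "integer",
            "description": "Maximum number of messages to return",
            "minimum": 1,
            "default": 20,
        },
        "unreadOnly": {
            "type": "boolean",
            "description": "If true, return only unread messages",
            "default": False,
        },
    },
    "required": [],
}
```

Even without touching the tool description, per-parameter descriptions like these would resolve the ambiguity around 'type' that the review singles out.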

Purpose: 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description states the basic action (check) and resource (inbox) but remains vague about scope and return value. Given siblings like 'read_message' and 'unread_count', it fails to clarify whether this returns message summaries, IDs, or full content—critical ambiguity for an agent selecting tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides no guidance on when to use this versus 'read_message', 'unread_count', or 'send_message'. An agent cannot determine from this description whether to use check_inbox for listing messages versus reading specific ones.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

check_reputation (Grade C)

Check reputation score of an agent

Parameters (JSON Schema)
agentId (required; no description)
Behavior2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full disclosure burden but reveals nothing about behavioral traits: it doesn't state this is read-only, doesn't describe the score format/range (numeric? 0-100?), and doesn't mention auth requirements or 'agent not found' error behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is brief (6 words) and front-loaded, but given the complete absence of schema documentation and annotations, this brevity constitutes under-specification rather than efficient communication.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Despite having only one parameter, the tool operates in a complex domain (financial/contract platform per siblings). With 0% schema coverage, no output schema, and no annotations, the description should explain the reputation scoring system and return values, but provides none of this context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 0% (agentId lacks description). While the phrase 'of an agent' implicitly references the parameter, the description adds no semantics about agentId format, valid values, or where to obtain agent identifiers, failing to compensate for the schema gap.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description states a clear verb ('Check') and resource ('reputation score of an agent'), establishing basic purpose. However, it fails to distinguish scope from siblings like 'my_profile' (which may contain reputation) or clarify whether this queries external agents versus self.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives, nor does it mention prerequisites (e.g., needing to know the agentId first) or trust thresholds for reputation scores.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

counter_offer (C)

Send a counter-offer in an active negotiation

Parameters (JSON Schema)
Name           Required  Description  Default
offer          Yes
negotiationId  Yes

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. It implies the negotiation must be 'active' (state requirement), but discloses no information about side effects, idempotency, authentication requirements, or what happens to the negotiation state after sending.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Extremely concise at seven words with no redundancy. However, given the complexity of the nested object parameter and lack of schema documentation, this brevity under-serves the agent's information needs.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Inadequate for a tool with a completely undefined nested object (offer) and no output schema or annotations. The description omits critical details about valid counter-offer formats, required fields within the offer object, and return behavior.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage, the description minimally compensates by semantically linking 'counter-offer' to the 'offer' parameter and 'negotiation' to 'negotiationId'. However, it fails to describe the expected structure or content of the open-ended 'offer' object (additionalProperties: {}).

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
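
To illustrate the fix, here is a hedged sketch of what a tightened `offer` schema could look like. The field names (amount, currency, terms) are hypothetical illustrations, not the server's actual contract; the point is replacing `additionalProperties: {}` with described, constrained properties.

```python
# Hypothetical replacement for counter_offer's open-ended 'offer' object.
offer_schema = {
    "type": "object",
    "properties": {
        "amount": {
            "type": "number",
            "description": "Proposed amount, in whole currency units",
            "exclusiveMinimum": 0,
        },
        "currency": {
            "type": "string",
            "description": "ISO 4217 code, e.g. 'USD'",
        },
        "terms": {
            "type": "string",
            "description": "Free-text terms attached to the counter-offer",
        },
    },
    "required": ["amount", "currency"],
    "additionalProperties": False,  # reject unknown keys instead of accepting anything
}
```

With a closed schema like this, an agent can construct a valid counter-offer on the first attempt instead of guessing at the object's shape.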

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

States a specific action ('Send') and resource ('counter-offer') and constrains context ('active negotiation'). However, it does not explicitly differentiate from sibling tools like accept_negotiation or reject_negotiation, relying solely on the tool name for distinction.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides no guidance on when to use this tool versus siblings (accept_negotiation, reject_negotiation, start_negotiation). No mention of prerequisites, failure modes, or workflow sequencing.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

create_bet (C)

Create a bet with escrow

Parameters (JSON Schema)
Name         Required  Description  Default
amount       Yes
currency     Yes
condition    Yes
oracleType   No
description  Yes

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries full disclosure burden. While 'escrow' hints at funds being locked, it fails to explain critical behavioral traits: what conditions trigger escrow release, how oracleType works, whether creation is reversible/cancellable, or what triggers the bet resolution.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

At five words for a complex financial mutation tool with five undocumented parameters, the description is inappropriately terse. It front-loads the core action but sacrifices necessary complexity, resulting in under-specification rather than effective conciseness.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 1/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Completely inadequate for a financial tool with zero schema documentation, no annotations, no output schema, and escrow mechanics. The description omits risk implications, oracle selection, counterparty requirements, and resolution flow essential for safe agent operation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 1/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 0% and the description compensates with zero parameter guidance. No explanation of 'condition' vs 'description', valid 'currency' values, 'amount' precision, or the optional 'oracleType' parameter semantics.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb (Create) and resource (bet) and adds specific mechanism context (escrow). However, it does not explicitly distinguish from sibling tool 'accept_bet' or clarify when to create versus accept existing bets.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 1/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance provided on when to use this tool versus alternatives like 'accept_bet', prerequisites such as requiring sufficient balance, or the betting workflow sequence (create → accept → fulfill).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

create_contract (C)

Create a conditional contract with escrow

Parameters (JSON Schema)
Name            Required  Description  Default
title           Yes
currency        Yes
conditions      Yes
description     Yes
deadlineDays    No
escrowAmount    Yes
counterpartyId  Yes

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries full disclosure burden. While it mentions 'escrow' implying funds are locked, it fails to specify when escrow is released, whether the contract is immediately active or pending, cancellation policies, or the structure of the returned contract object. It also omits the critical lifecycle fact that the contract requires a counterparty signature before it takes effect.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single six-word sentence. While not verbose, it is inappropriately concise for a complex 7-parameter financial operation with nested condition objects. It under-specifies rather than efficiently summarizes, leaving critical behavioral and parameter context unaddressed.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the high complexity (financial escrow, conditional logic, 7 parameters, 0% schema coverage, no output schema, no annotations), the description is inadequate. It omits the contract lifecycle (creation → signing → fulfillment), the nature of conditions, and return value structure that an agent would need to use this tool correctly.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, requiring the description to compensate. While the words 'conditional' and 'escrow' loosely map to the conditions and escrowAmount parameters, the description fails to explain the counterpartyId requirement, the deadlineDays limitation, the expected currency format, or the structure of condition objects (id, type, params).

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a specific verb ('Create') and identifies the resource ('conditional contract') along with a key distinguishing feature ('with escrow'). It sufficiently distinguishes the tool from siblings like create_bet or create_loan_offer by specifying the contract type, though it could explicitly contrast with the signing workflow implied by sibling sign_contract.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives (e.g., create_bet for simple wagers), nor does it mention prerequisites like requiring sufficient balance to cover the escrowAmount, or that the resulting contract requires counterparty signature via sign_contract before activation.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

create_loan_offer (C)

Create a loan offer with interest and collateral

Parameters (JSON Schema)
Name               Required  Description  Default
amount             Yes
currency           Yes
durationDays       Yes
interestRate       Yes
collateralPercent  Yes

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of disclosure. While 'Create' implies a write operation, the description fails to explain side effects, offer lifecycle, acceptance mechanics, or irreversibility of published offers.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The single sentence is efficiently structured with the verb and resource front-loaded. While minimal, there is no redundant or wasted text; every word conveys meaningful content.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 5 required parameters with zero schema documentation, zero annotations, and no output schema, the eight-word description is insufficient for a financial tool involving collateral and interest rates. Missing critical context like currency format, value constraints, and offer visibility.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 0% and the description only implicitly references 2 of 5 parameters ('interest' and 'collateral'). It does not document 'amount', 'currency', 'durationDays', expected formats (e.g., currency codes), or valid ranges, leaving critical parameters undocumented.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly identifies the core action (Create) and resource (loan offer), mentioning key attributes (interest, collateral). It sufficiently distinguishes from siblings like 'borrow' (taking loans) and 'list_loan_offers' (querying), though it doesn't explicitly clarify the lender perspective.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance provided on when to use this tool versus 'borrow' or 'create_contract', nor any warnings about financial risks, prerequisites, or expiration behavior of offers.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

create_subscription (C)

Set up a recurring payment

Parameters (JSON Schema)
Name         Required  Description  Default
amount       Yes
currency     Yes
frequency    Yes
recipientId  Yes

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden but discloses almost nothing. It does not mention authorization requirements, idempotency, what happens on insufficient funds, or that this creates a persistent subscription record visible in my_subscriptions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

While the single sentence is efficient per word, the description suffers from under-specification rather than true conciseness. A four-parameter financial mutation tool requires substantially more detail than five words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Grossly incomplete for a financial tool with four required parameters and no output schema. Missing: parameter formats, error conditions, side effects, return value structure, and relationships to sibling subscription management tools.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 1/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 0% with four undocumented parameters. The description adds no parameter context whatsoever—critical omissions include amount units (cents vs whole currency), currency format (ISO codes), recipientId format, and frequency implications.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a specific verb ('Set up') and resource ('recurring payment'), clarifying the basic intent. However, it fails to distinguish from siblings like send_payment (one-time vs recurring) or position itself relative to cancel_subscription/my_subscriptions.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance provided on when to use this versus one-time payment tools, prerequisites (sufficient balance, recipient validation), or lifecycle management (subscriptions created here can be viewed via my_subscriptions and cancelled via cancel_subscription).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
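
As a sketch of what the scores above are asking for, here is one way create_subscription's description could be rewritten with a front-loaded purpose, parameter semantics, and explicit sibling guidance. The wording and parameter conventions (ISO 4217 codes, whole currency units) are assumptions for illustration, not the server's documented behavior.

```python
# Hypothetical rewritten description for create_subscription.
CREATE_SUBSCRIPTION_DESCRIPTION = (
    "Set up a recurring payment to another agent. "
    "amount is in whole currency units; currency is an ISO 4217 code; "
    "frequency is the billing interval; recipientId identifies the payee. "
    "For a one-time transfer use send_payment instead. "
    "Created subscriptions appear in my_subscriptions and can be stopped "
    "with cancel_subscription."
)
```

A description in this shape addresses the Parameters, Purpose, and Usage Guidelines gaps at once, at a cost of roughly fifty tokens.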

discover_services (C)

Search available services and products in the MIDAS marketplace

Parameters (JSON Schema)
Name      Required  Description  Default
limit     No
category  No
currency  No

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. While 'Search' implies a read-only operation, the description does not confirm safety characteristics, explain pagination behavior (despite the presence of a `limit` parameter), or describe the return format.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description consists of a single nine-word sentence with the action verb front-loaded. No redundant language is present, though the extreme brevity contributes to information gaps given the lack of supporting metadata.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Inadequate for a tool with three undocumented parameters and no output schema. Critical context is missing regarding what fields are returned, how pagination behaves, or what valid inputs are accepted for the category and currency parameters.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema has 0% description coverage for its three parameters (limit, category, currency). The description fails to compensate by explaining parameter semantics, valid values for category filters, or expected currency formats (e.g., ISO codes).

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a specific verb ('Search') with clear resource ('services and products') and scope ('MIDAS marketplace'). However, it does not explicitly distinguish from sibling tool `get_service`, which likely retrieves specific services by ID rather than listing available ones.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit when-to-use guidance is provided. The description fails to indicate when to use this broad discovery search versus the specific `get_service` tool, nor does it mention prerequisites or filtering strategies.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

fulfill_condition (C)

Mark a condition as fulfilled

Parameters (JSON Schema)
Name         Required  Description  Default
contractId   Yes
conditionId  Yes

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It implies a write operation ('Mark') but discloses nothing about side effects, reversibility, idempotency, required permissions, or what happens after fulfillment.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The single sentence is front-loaded and wastes no words, but represents under-specification rather than effective conciseness. Appropriate length for the content provided, though the content is insufficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a mutation tool affecting contract state with no output schema and no annotations, the description is inadequate. It fails to explain the return value, error conditions, or the relationship between the condition and contract entities.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, requiring the description to compensate. It mentions 'condition' which hints at conditionId, but completely omits contractId despite it being a required parameter. No explanation of ID formats or where to obtain them.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 2/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Mark a condition as fulfilled' essentially restates the function name (tautology) and fails to specify that this operates on contract conditions, despite the contractId parameter and sibling tools like create_contract/sign_contract. It does not distinguish when to use this versus sign_contract or other state-changing tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance provided on when to use this tool versus alternatives, prerequisites (e.g., must be a contract party), or workflow context (e.g., whether this triggers payments or is reversible).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_quote (C)

Request a price quote for a service

Parameters (JSON Schema)
Name       Required  Description  Default
guests     No
nights     No
currency   No
serviceId  Yes

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It does not clarify whether 'requesting' a quote creates a persistent record, consumes resources, has side effects, requires specific permissions, or how long the quote remains valid.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The single sentence is efficient and front-loaded with no wasted words. However, it is undersized for the complexity of a 4-parameter tool with zero schema documentation, though this shortcoming is captured in other dimensions.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has 4 parameters with no schema descriptions, no annotations, and no output schema, the seven-word description is inadequate. It fails to explain the return value (what constitutes a 'quote'), side effects, or the relationship between input parameters and pricing.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage, the description fails to compensate for the undocumented parameters. While 'service' implies the purpose of 'serviceId', it provides no context for 'guests', 'nights', or 'currency'—critical parameters for calculating a price quote.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a specific verb ('Request') and resource ('price quote for a service'), making the basic purpose clear. However, it fails to distinguish from sibling tools like 'get_service' (which likely retrieves service details) or 'discover_services' (which finds available services).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives like 'book_and_pay' or 'get_service'. It does not mention prerequisites (e.g., needing to discover services first) or when not to use it.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_service (C)

Get details of a specific service

Parameters (JSON Schema)
Name       Required  Description  Default
serviceId  Yes

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It fails to disclose error behavior (what happens if serviceId is invalid?), whether this is idempotent, rate limits, or what format the details are returned in. Only the basic read-only nature is implied by 'Get'.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is brief (6 words), but brevity here results in underspecification rather than efficiency. The single sentence fails to earn its place by omitting critical context about the parameter and sibling relationships.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the rich domain implied by siblings (bets, loans, contracts), the description inadequately explains what 'service' means here. Without an output schema, it should at least clarify the return value type or structure, which it does not.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage, the description must compensate but adds nothing about the serviceId parameter. It does not explain the ID format, where to obtain it (e.g., from discover_services), or validation constraints.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description states the basic action ('Get details') and resource ('service'), but remains vague about what constitutes a 'service' in this ecosystem (given siblings like discover_services, create_contract, etc.) and what specific details are returned. It does not differentiate from discover_services.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when to use this tool versus siblings like discover_services. The required serviceId parameter implies you need an ID first, but the description does not state that you should use discover_services to find IDs before calling this tool.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
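
One way to encode such guidance is directly in the description string. A hypothetical rewrite for a get_service_details-style tool (the return fields named here are assumptions for illustration, not the server's actual output):

```python
# Hypothetical rewrite of a terse tool description, adding explicit
# "use X before/instead of Y" guidance for agents.
TERSE = "Get details about a service"

IMPROVED = (
    "Get full details (pricing, provider, terms) for a single service. "
    "Use discover_services first to find a serviceId; use this tool, "
    "not discover_services, when you already have the ID."
)

# The improved version names its sibling tool and states an ordering,
# which the terse one does not.
assert "discover_services" in IMPROVED and "discover_services" not in TERSE
print(len(TERSE.split()), len(IMPROVED.split()))  # 5 28
```

The rewrite is longer, but by this report's own rubric, explicit sibling contrast and parameter provenance are worth the extra tokens.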

human_approve_negotiation (B)

Approve a negotiation exceeding the automatic threshold

Parameters (JSON Schema):
- negotiationId (required)
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. While it mentions the threshold mechanism, it fails to disclose critical mutation details: what state the negotiation enters after approval, whether this triggers binding commitments or payments, required permissions, or reversibility. For a high-stakes approval operation, this is inadequate.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence of seven words with no filler. It front-loads the verb ('Approve') and immediately qualifies the scope. Every word earns its place; no restructuring could improve information density.

Completeness: 3/5

Given the simple input schema (single string parameter) and lack of output schema, the description provides minimal viable context for identifying the correct tool. However, for a significant mutation operation (negotiation approval), it lacks explanation of outcomes, side effects, or return values, leaving agents uncertain about what happens after invocation.

Parameters: 2/5

The schema has 0% description coverage for the single 'negotiationId' parameter. The description mentions 'a negotiation' but doesn't explicitly map to the parameter name, describe its expected format (UUID? string pattern?), or clarify how to obtain valid IDs. With zero schema coverage, the description must compensate but doesn't adequately do so.

Purpose: 4/5

The description clearly states the action (Approve) and resource (negotiation) with a specific scope condition (exceeding the automatic threshold). This effectively distinguishes it from sibling 'accept_negotiation' by implying manual intervention for exceptional cases. However, it doesn't explicitly acknowledge the 'human' aspect from the tool name or explicitly contrast with automatic acceptance workflows.

Usage Guidelines: 3/5

The phrase 'exceeding the automatic threshold' provides implicit guidance on when to use this tool (when automatic approval is insufficient). However, it lacks explicit 'when to use vs when not to use' guidance and doesn't name specific alternatives like 'accept_negotiation' or 'counter_offer' from the sibling list.

list_blocked_agents (B)

List all agents you have blocked

Parameters (JSON Schema): none

Behavior: 2/5

No annotations are provided, so the description carries the full burden of behavioral disclosure. While 'List' implies a read-only operation, the description does not explicitly confirm safety, disclose pagination behavior, rate limits, or what happens when no agents are blocked (empty array vs error).

Conciseness: 5/5

The description is a single, efficient sentence with no redundant words. It is appropriately sized for a zero-parameter tool, front-loading the key action and scope without verbosity.

Completeness: 3/5

Given the low complexity (zero parameters) but absence of both annotations and output schema, the description is minimally viable but incomplete. It fails to describe the return structure or explain how this tool fits into the broader blocking workflow with sibling tools.

Parameters: 4/5

The input schema contains zero parameters. Per the scoring guidelines, zero-parameter tools receive a baseline score of 4, as there are no parameter semantics to clarify beyond what the schema (empty object) already indicates.

Purpose: 4/5

The description provides a clear verb ('List') and specific resource ('agents you have blocked'), making the tool's function immediately understandable. However, it lacks explicit differentiation from siblings like 'block_agent' or 'unblock_agent' regarding when to use each.

Usage Guidelines: 2/5

The description provides no guidance on when to use this tool versus alternatives, nor does it suggest workflows (e.g., using this to retrieve agent IDs before calling 'unblock_agent'). It simply states the function without contextual usage advice.

list_loan_offers (C)

Browse available loan offers

Parameters (JSON Schema):
- limit (optional)
- currency (optional)

Behavior: 2/5

No annotations are provided, forcing the description to carry full behavioral disclosure. While 'browse' implies read-only access, the description omits pagination behavior, return format (what constitutes an offer?), rate limits, and whether results are filtered by user eligibility or show all public offers.

Conciseness: 4/5

Extremely concise at four words. The phrase is front-loaded with the core action, though the extreme brevity leaves no room to document the two undocumented parameters or distinguish the tool from its siblings in the description text.

Completeness: 2/5

Insufficient given zero schema descriptions, missing annotations, and no output schema. The description lacks parameter documentation, behavioral specifics (pagination, filtering), and explicit differentiation from sibling loan tools that an agent would need to select correctly.

Parameters: 1/5

Schema coverage is 0% with no descriptions on the 'limit' (pagination) or 'currency' (filtering) parameters. The description fails to compensate for this gap by explaining parameter purpose, valid formats, or default behaviors.
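
A sketch of what documented `limit` and `currency` parameters could look like. The bounds, default, and currency format here are illustrative assumptions, not the server's actual constraints:

```python
# Hypothetical documented schema for list_loan_offers' two parameters.
# Formats and defaults are assumed for illustration only.
loan_offers_schema = {
    "type": "object",
    "properties": {
        "limit": {
            "type": "integer",
            "minimum": 1,
            "maximum": 100,
            "default": 20,
            "description": "Maximum number of offers to return.",
        },
        "currency": {
            "type": "string",
            "description": (
                "Optional ISO 4217 code (e.g. 'USD') to filter offers; "
                "omit to list offers in all currencies."
            ),
        },
    },
    "required": [],
}

for name, spec in loan_offers_schema["properties"].items():
    print(name, "->", spec["description"])
```

With descriptions, defaults, and bounds in the schema itself, the one-line tool description can stay terse without scoring 1/5 on this dimension.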

Purpose: 4/5

States specific verb 'Browse' and resource 'loan offers'. The term 'available' implicitly distinguishes from sibling 'my_loans' (which returns the user's active loans), though it does not explicitly contrast with other loan-related tools like 'borrow' or 'create_loan_offer'.

Usage Guidelines: 2/5

Provides no explicit guidance on when to select this tool versus 'my_loans', 'borrow', or 'create_loan_offer', nor does it mention any prerequisites such as authentication requirements or market availability conditions.

list_open_bets (C)

List bets available to accept

Parameters (JSON Schema):
- limit (optional)
- currency (optional)

Behavior: 2/5

No annotations provided, so description carries full burden of behavioral disclosure. Fails to mention read-only nature, pagination behavior (despite 'limit' parameter), filtering logic, or what constitutes an 'open' bet. Minimal behavioral context provided.

Conciseness: 4/5

Single sentence is front-loaded with the core action. No wasted words or redundant phrases. However, given the lack of annotations and schema documentation, the description is undersized rather than perfectly concise.

Completeness: 2/5

With 0% schema coverage, no annotations, and no output schema, the description should explain the filtering logic (currency param), pagination (limit param), and return format. Currently incomplete for the tool's complexity.

Parameters: 1/5

Schema description coverage is 0% for both 'limit' and 'currency' parameters. Description adds zero information about parameter semantics, units, valid formats, or constraints. Complete failure to compensate for undocumented schema.

Purpose: 4/5

States a specific verb (List) and resource (bets available to accept). Implicitly distinguishes from sibling 'my_bets' (which would return the user's own bets) and 'create_bet' by focusing on discoverable/open bets. Clear but could explicitly contrast with 'accept_bet' to clarify this is discovery vs action.

Usage Guidelines: 2/5

Provides no guidance on when to use this tool versus alternatives like 'my_bets' or prerequisites for using it. Does not mention that this should be called before 'accept_bet' to discover bet IDs.
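
The implied discovery-then-action ordering can be sketched with stand-in functions. These are purely illustrative stubs, not this server's API; real calls would go through an MCP client:

```python
# Illustrative stand-ins for the list_open_bets -> accept_bet ordering.
def list_open_bets(limit=10, currency=None):
    # Stubbed marketplace data; a real call would hit the server.
    bets = [{"betId": "bet-1", "currency": "USD"},
            {"betId": "bet-2", "currency": "EUR"}]
    if currency:
        bets = [b for b in bets if b["currency"] == currency]
    return bets[:limit]

def accept_bet(bet_id):
    # Stubbed mutation; a real call would commit funds.
    return {"betId": bet_id, "status": "accepted"}

# Discovery first: an agent cannot guess valid betId values.
open_bets = list_open_bets(currency="EUR")
result = accept_bet(open_bets[0]["betId"])
print(result)  # {'betId': 'bet-2', 'status': 'accepted'}
```

A single sentence of usage guidance in the description ("call list_open_bets first to obtain a betId for accept_bet") would convey this ordering to agents directly.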

my_bets (B)

View all your bets

Parameters (JSON Schema): none

Behavior: 2/5

With no annotations provided, the description carries the full burden of behavioral disclosure. While 'View' implies a read-only operation, the description lacks critical context about what 'all' encompasses (active, settled, cancelled bets), pagination behavior, or return structure when no bets exist.

Conciseness: 4/5

The description is extremely brief at four words ('View all your bets'), with the verb front-loaded and zero redundancy. While efficient, the extreme brevity contributes to under-specification given the domain complexity.

Completeness: 2/5

Given the rich betting ecosystem indicated by 40+ siblings (including create_bet, accept_bet, fulfill_condition) and the absence of an output schema, the description inadequately prepares the agent for the response structure or the specific scope of 'your bets' versus public listings.

Parameters: 4/5

The input schema contains zero parameters, establishing a baseline of 4 per evaluation rules. The description requires no parameter clarification since there are no inputs to document.

Purpose: 4/5

The description states the core action ('View') and resource ('your bets') clearly, identifying this as a personal retrieval operation. However, it fails to explicitly differentiate from the sibling tool 'list_open_bets', leaving ambiguity about whether this returns all bet states or just active ones.

Usage Guidelines: 2/5

The description provides no guidance on when to use this tool versus alternatives like 'list_open_bets' or 'my_contracts', nor does it mention prerequisites such as authentication requirements. It merely states what the tool does without contextual usage boundaries.

my_contracts (C)

View all your contracts

Parameters (JSON Schema): none

Behavior: 2/5

With no annotations provided, the description carries the full burden of behavioral disclosure. While 'View' implies read-only access, the description does not explicitly confirm safety, disclose pagination behavior for large contract sets, or describe what data structure is returned.

Conciseness: 4/5

The description is extremely concise at four words with zero redundancy. However, it borders on underspecification given the tool ecosystem complexity, preventing a perfect score.

Completeness: 2/5

Given the absence of an output schema and the presence of 30+ sibling tools with overlapping domains (bets, loans, negotiations), the description should explain what differentiates a 'contract' and what fields are returned. Currently inadequate for the contextual complexity.

Parameters: 4/5

The input schema contains zero parameters, establishing a baseline score of 4 per the rubric. No additional parameter semantics are required or provided.

Purpose: 3/5

The description provides a clear verb ('View') and resource ('contracts'), but fails to distinguish from siblings like 'my_bets', 'my_loans', or 'my_negotiations' in this complex agreement-management ecosystem. The scope 'all your' is specified, but without clarification on what constitutes a contract versus other agreement types.

Usage Guidelines: 2/5

No guidance provided on when to use this versus 'create_contract', 'sign_contract', or other personal listing tools like 'my_negotiations'. No prerequisites or exclusion criteria mentioned despite the rich sibling toolset.

my_loans (B)

View all loans as lender and borrower

Parameters (JSON Schema): none

Behavior: 3/5

No annotations are provided, so the description carries the full disclosure burden. It establishes the read-only nature via 'View' and adds context that results include both lending and borrowing sides. However, it omits return format details, pagination behavior, or whether historical/completed loans are included—information needed absent an output schema.

Conciseness: 5/5

Extremely efficient at seven words with zero redundancy. The key scope modifier ('as lender and borrower') directly qualifies the resource within the single sentence.

Completeness: 3/5

Given zero parameters (low complexity) but no annotations and no output schema, the description provides minimum viable orientation. However, it should ideally disclose whether the returned loans are active, historical, or both, and hint at the data structure returned.

Parameters: 4/5

The input schema contains zero parameters, which per guidelines sets a baseline of 4. The description appropriately requires no parameter clarification.

Purpose: 4/5

The description provides a clear verb ('View') and resource ('loans'), and clarifies scope by specifying 'as lender and borrower' (indicating dual perspective). However, it does not explicitly differentiate from sibling 'list_loan_offers' (marketplace listings vs. personal loans), preventing a 5.

Usage Guidelines: 2/5

No explicit guidance on when to use this tool versus alternatives like 'list_loan_offers', 'borrow', or 'repay_loan'. The word 'my' in the tool name implies personal scope, but the description text offers no when/when-not conditions.

my_negotiations (B)

View all your negotiations

Parameters (JSON Schema): none

Behavior: 3/5

With no annotations provided, the description carries the full burden. The verb 'View' successfully signals read-only behavior, addressing safety concerns. However, it omits other behavioral traits like pagination limits, whether 'all' includes archived/cancelled negotiations, cache behavior, or return value structure.

Conciseness: 4/5

The description is extremely concise at only four words. It is appropriately front-loaded with the action verb, and there is no waste. However, it borders on underspecification given the rich ecosystem of sibling negotiation tools.

Completeness: 3/5

For a zero-parameter tool without output schema, the description minimally suffices. However, given the complex negotiation workflow implied by siblings (start, accept, counter, reject), the description should clarify what subset of negotiations are returned (e.g., pending approval vs. completed) to be fully complete.

Parameters: 4/5

The input schema contains zero parameters. Per the scoring guidelines, zero-parameter tools receive a baseline score of 4, as there are no parameter semantics to clarify beyond the schema itself.

Purpose: 4/5

The description uses the specific verb 'View' and resource 'your negotiations', clearly distinguishing this read-only listing tool from action-oriented siblings like accept_negotiation, counter_offer, and reject_negotiation. However, it lacks scope clarification (e.g., active vs. historical negotiations) that would make it fully specific.

Usage Guidelines: 2/5

The description provides no explicit when-to-use guidance, prerequisites, or alternative selection criteria. While the 'View' verb implicitly contrasts with mutating negotiation tools, there is no guidance on workflow (e.g., whether to call this before accept_negotiation) or filtering capabilities.

my_profile (C)

Get your agent profile

Parameters (JSON Schema): none

Behavior: 2/5

No annotations are provided, so the description carries the full burden. It fails to disclose what data the profile contains, whether it includes sensitive information, caching behavior, or the response structure. The word 'Get' implies read-only, but no other behavioral traits are documented.

Conciseness: 3/5

While appropriately brief at four words, the description suffers from under-specification rather than efficient conciseness. In a complex ecosystem with 40+ sibling tools, this brevity fails to earn its place by providing insufficient discriminative information.

Completeness: 2/5

Given the rich domain of financial, betting, and messaging tools (siblings like 'my_contracts', 'check_balance', 'check_reputation'), the description inadequately specifies what profile data is returned, leaving ambiguity about overlap with other tools.

Parameters: 4/5

The tool has zero parameters (empty schema), establishing the baseline score of 4 per the rubric. The description implicitly confirms no inputs are required, matching the schema.

Purpose: 3/5

The description provides a basic verb ('Get') and resource ('agent profile'), but remains vague about what constitutes the 'profile' and does not differentiate from sibling tools like 'register_agent' or 'check_reputation'. It avoids being a pure tautology but lacks specificity.

Usage Guidelines: 2/5

No guidance provided on when to use this versus 'register_agent', 'check_reputation', or other identity-related tools. Given the extensive list of financial and messaging siblings, explicit usage context is absent.

my_subscriptions (C)

View all subscriptions

Parameters (JSON Schema): none

Behavior: 2/5

With zero annotations provided and no output schema, the description carries the full burden of behavioral disclosure. While 'View' implies a read-only operation, the description fails to disclose what data is returned, pagination behavior, or whether the results include active, expired, or pending subscriptions.

Conciseness: 3/5

The description is extremely brief at three words, which is technically concise, but underspecified for a tool lacking parameters and output schema. It is front-loaded with the verb, but the brevity constitutes under-specification rather than efficient information density given the missing behavioral context.

Completeness: 3/5

For a zero-parameter read tool without output schema, the description minimally suffices but leaves significant gaps. It does not clarify the subscription model (service subscriptions vs. bet subscriptions), return format, or filtering options that might be relevant given the sibling tool ecosystem.

Parameters: 4/5

The input schema contains zero parameters and has 100% description coverage trivially. Per evaluation rules, zero parameters establishes a baseline score of 4, as there are no parameter semantics requiring elaboration in the description.

Purpose: 3/5

The description states the verb (View) and resource (subscriptions), but uses 'all' which ambiguously suggests global scope rather than personal scope implied by the 'my_subscriptions' tool name. It distinguishes from siblings like create_subscription and cancel_subscription via the verb, but the scoping confusion slightly weakens clarity.

Usage Guidelines: 2/5

The description provides no guidance on when to use this tool versus alternatives like create_subscription or cancel_subscription, nor does it mention any prerequisites or conditions for use.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

my_transaction_limits (B)

Check your current transaction limits based on reputation tier

Parameters (JSON Schema)

No parameters

Behavior3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full disclosure burden. It adds useful behavioral context by noting limits are derived from 'reputation tier' (dynamic/scoped behavior). However, it does not explicitly confirm read-only safety, describe the returned data structure, or mention side effects.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
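
The 'reputation tier' dependency could be made concrete in a single extra sentence. As a purely hypothetical sketch (the tier names and USDC figures below are invented, not taken from the server), the kind of lookup an agent would expect behind this tool is simple:

```python
# Hypothetical mapping of reputation tiers to transaction limits.
# Tier names and USDC figures are invented for illustration only.
TIER_LIMITS = {
    "bronze": {"per_transaction_usdc": 100, "daily_usdc": 500},
    "silver": {"per_transaction_usdc": 1_000, "daily_usdc": 5_000},
    "gold": {"per_transaction_usdc": 10_000, "daily_usdc": 50_000},
}

def limits_for_tier(tier: str) -> dict:
    """Return the limits for a tier, falling back to the lowest tier."""
    return TIER_LIMITS.get(tier, TIER_LIMITS["bronze"])

print(limits_for_tier("silver")["daily_usdc"])  # 5000
```

A description that named the returned fields (per-transaction cap, daily cap) would close most of the completeness gap noted below.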

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence (9 words), front-loaded with action verb. No redundancy or filler. Every word earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Appropriate for a parameter-less lookup tool, but lacks description of output structure (what specific limits: daily, per-transaction, etc.). Given no output schema exists, the description should ideally characterize the returned limits to be complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema has 0 parameters. With no parameters to document, baseline score is 4 per evaluation rules.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clear verb ('Check') and resource ('transaction limits'). The phrase 'based on reputation tier' effectively distinguishes this from sibling 'check_balance' (which shows funds, not limits) and 'check_reputation' (which shows the tier score, not derived limits). However, the distinction is implied rather than explicitly articulated.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when to use this versus alternatives (e.g., when to check limits before executing 'send_payment' or 'withdraw_usdc'). It describes what the tool does, not when to invoke it.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

read_message (C)

Mark a message as read

Parameters (JSON Schema)
messageId (required)
Behavior2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With zero annotations provided, the description carries full disclosure burden for a mutation operation. While 'Mark as read' implies state change, it fails to specify idempotency, error handling for invalid messageIds, reversibility, or side effects beyond the schema.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
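
Idempotency is the key undisclosed trait here. A minimal sketch of the behavior an agent would want confirmed before calling (the in-memory store and handler are hypothetical, not the server's implementation):

```python
# Hypothetical in-memory message store; the real server's semantics
# for repeat calls and invalid ids are exactly what the description
# leaves unspecified.
messages = {"msg-1": {"read": False}, "msg-2": {"read": False}}

def mark_read(message_id: str) -> bool:
    """Mark a message as read; return True if the state changed.

    Raises KeyError for unknown ids; whether the real tool errors or
    silently succeeds on an invalid messageId is undocumented.
    """
    msg = messages[message_id]
    changed = not msg["read"]
    msg["read"] = True
    return changed

assert mark_read("msg-1") is True   # first call flips the flag
assert mark_read("msg-1") is False  # repeat call is a no-op: idempotent
```

One clause in the description ("safe to call repeatedly; errors on unknown ids") would settle both open questions.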

Conciseness4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The five-word sentence is efficiently structured and front-loaded with no redundancy. However, extreme brevity becomes a liability given the complete absence of supporting documentation in annotations or schema descriptions.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Inadequate for a mutation tool with no annotations and no output schema. The description lacks behavioral details, usage context, and parameter guidance necessary for correct agent invocation, leaving critical gaps in the contract.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters1/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 0% and the description offers no parameter guidance. It mentions 'a message' generically but provides no semantic context for messageId (format, source, constraints) beyond what the property name and type imply.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Mark a message as read' uses specific verb-noun structure that clarifies the intent better than the potentially ambiguous tool name ('read_message' could imply retrieval). However, it fails to explicitly distinguish this state-update operation from siblings like check_inbox or send_message.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides no guidance on when to use this tool versus check_inbox for retrieval, nor prerequisites for obtaining the messageId. No mention of alternatives or exclusion criteria.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

register_agent (A)

Register a new agent on MIDAS Protocol (no auth needed)

Parameters (JSON Schema)
name (required)
ownerName (required)
ownerEmail (required)
webhookUrl (optional)
description (optional)
Behavior3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full disclosure burden. It successfully communicates the authentication requirement ('no auth needed'), but omits other critical behavioral context like idempotency, whether registration is permanent, rate limits, or what identifiers are returned.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence of nine words with zero redundancy. The parenthetical auth note is efficiently placed. Every clause earns its place by conveying protocol scope, operation type, and security prerequisite.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 5 undocumented parameters (0% schema coverage), no output schema, and zero annotations, the description is insufficient. It explains the operation intent but leaves critical gaps regarding parameter usage and return values needed for successful invocation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 0% description coverage (5 parameters, 3 required). The description fails to compensate by explaining parameter semantics (e.g., purpose of 'webhookUrl', relationship between 'name' and 'ownerName', required vs optional fields). Only the action is described, not the inputs.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
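
To make the 0% figure concrete, here is a hypothetical version of the register_agent input schema with every property described. The property names come from the parameter table above; the description strings are invented for illustration:

```python
# Hypothetical register_agent schema with full description coverage.
# Property names match the parameter table; description text is invented.
SCHEMA = {
    "type": "object",
    "required": ["name", "ownerName", "ownerEmail"],
    "properties": {
        "name": {"type": "string", "description": "Public display name of the agent."},
        "ownerName": {"type": "string", "description": "Human owner responsible for the agent."},
        "ownerEmail": {"type": "string", "description": "Contact email for the owner."},
        "webhookUrl": {"type": "string", "description": "Optional HTTPS endpoint for event notifications."},
        "description": {"type": "string", "description": "Optional free-text summary of the agent's purpose."},
    },
}

def description_coverage(schema: dict) -> float:
    """Fraction of properties carrying a non-empty description."""
    props = schema["properties"]
    documented = sum(1 for p in props.values() if p.get("description"))
    return documented / len(props)

assert description_coverage(SCHEMA) == 1.0  # vs. 0.0 in the actual schema
```

Per-property descriptions like these would lift this dimension without lengthening the tool description itself.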

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description provides a specific verb ('Register'), resource ('agent'), and scope ('MIDAS Protocol'). It clearly distinguishes from siblings like 'block_agent' (manages existing), 'my_profile' (views existing), or 'discover_services' (different resource).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides one prerequisite constraint ('no auth needed') but lacks explicit guidance on when to use this versus alternatives (e.g., when to register new vs use existing agent identities) and includes no 'when-not' exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

reject_negotiation (D)

Reject a negotiation

Parameters (JSON Schema)
negotiationId (required)
Behavior1/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, and the description discloses no behavioral traits. It doesn't state whether rejection is permanent, if it notifies the counterparty, or what the resulting state of the negotiation is.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

While brief, the three-word description represents under-specification rather than efficient communication. It wastes no words but fails to earn its place by adding value beyond the tool name.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness1/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the single undocumented parameter, lack of annotations, no output schema, and presence of negotiation-related siblings, the description is inadequate. It omits critical context about the negotiation lifecycle and rejection consequences.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters1/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage for the required negotiationId parameter, the description fails to compensate by explaining what identifier format is expected or how to obtain a valid negotiation ID.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose2/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Reject a negotiation' is a tautology that restates the tool name (reject_negotiation) without adding specificity about scope or implications. It fails to distinguish from siblings like accept_negotiation or counter_offer.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines1/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance provided on when to reject versus accept or counter-offer, nor any mention of prerequisites or conditions under which this action is appropriate.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

repay_loan (C)

Make a repayment on an active loan

Parameters (JSON Schema)
amount (required)
loanId (required)
Behavior2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden of disclosure. Fails to mention this is a destructive financial transaction (funds leave account), whether it is irreversible, what currency/denomination is used, or handling of overpayment/underpayment. Critical safety context missing.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Extremely terse at seven words. Front-loaded and efficient, with no redundancy. However, given the tool's financial risk and zero schema annotations, this conciseness represents under-specification rather than clarity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Without output schema or annotations, and with only a single sentence for a financial transaction tool, the description is inadequate. Missing: currency details, confirmation behavior, idempotency guarantees, error conditions, and relationships to loan lifecycle (can you overpay? what confirms completion?).

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 0% description coverage. While 'repayment' loosely implies the 'amount' parameter and 'active loan' implies 'loanId', the description fails to specify the amount's currency/unit (dollars, cents, wei), whether partial payments are allowed, or the loanId format (UUID, address, etc.). Insufficient compensation for zero schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
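
The partial-versus-full question could be settled in one sentence. A hedged sketch of the ledger logic an agent currently has to guess at (all behavior here is assumed, not documented by the tool; real money code would use Decimal, not float):

```python
# Hypothetical loan ledger. Whether the real tool allows partial or
# over-payment is undocumented; this sketch assumes partial payments
# are allowed and overpayment is rejected.
loans = {"loan-7": {"outstanding": 250.0, "status": "active"}}

def repay_loan(loan_id: str, amount: float) -> dict:
    loan = loans[loan_id]
    if loan["status"] != "active":
        raise ValueError("loan is not active")
    if amount <= 0 or amount > loan["outstanding"]:
        raise ValueError("amount must be positive and at most the outstanding balance")
    loan["outstanding"] -= amount
    if loan["outstanding"] == 0:
        loan["status"] = "repaid"  # full repayment closes the loan
    return loan

repay_loan("loan-7", 100.0)          # partial repayment
assert loans["loan-7"]["outstanding"] == 150.0
repay_loan("loan-7", 150.0)          # final repayment
assert loans["loan-7"]["status"] == "repaid"
```

Naming the unit of 'amount' and the overpayment policy in the description would remove the largest ambiguities.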

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

States specific verb ('repay'/'make repayment') and resource ('loan'). Implicitly distinguishes from sibling 'borrow' (which creates debt) and 'create_loan_offer' (which initiates lending), though it could explicitly clarify this is for settling existing debt versus creating it.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides no guidance on prerequisites (e.g., must have active loan via 'my_loans' first), when to use versus 'borrow' or other financial tools, or consequences of partial vs full repayment. No mention of alternative actions if repayment fails.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

send_message (C)

Send a direct message to another agent

Parameters (JSON Schema)
body (required)
subject (required)
metadata (optional)
recipientId (required)
Behavior2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With zero annotations provided, the description carries full burden for behavioral disclosure but only states the basic action. It omits delivery guarantees, error handling (e.g., invalid recipientId), idempotency, rate limits, and whether messages are persisted or ephemeral.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The single sentence is efficient and front-loaded without redundancy. However, given the tool has 4 parameters (including a nested metadata object) and no output schema, the description is underspecified rather than optimally concise.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Inadequate for a messaging tool in a complex multi-agent system (siblings include bets, loans, negotiations). Omits critical context: the messaging workflow relationship to check_inbox/read_message, the purpose of the metadata field, and expected behavior in failure scenarios.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema coverage, the description provides minimal semantic context: 'to another agent' implies recipientId is an agent identifier, and 'message' implies subject/body content. However, the metadata object parameter is completely undocumented, and explicit parameter mapping is absent.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clear verb 'Send' and resource 'direct message to another agent' makes the tool's function unambiguous. However, it lacks differentiation from siblings like read_message or send_payment, preventing a score of 5.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides no guidance on when to use this versus asynchronous messaging, broadcast tools, or negotiation workflows available in the sibling tools. No prerequisites (e.g., agent registration) or exclusion criteria are mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

send_payment (C)

Send USDC to another agent on Base L2 (gas paid in USDC via Circle Paymaster). All payments are on-chain.

Parameters (JSON Schema)
amount (required)
reason (optional)
toAgentId (required)
Behavior2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description must carry full behavioral disclosure. It fails to mention transaction finality, fees, balance validation, confirmation requirements, or failure modes—critical for financial operations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence with no redundancy, appropriately front-loaded. However, such brevity is ill-suited to the high-stakes domain, and no parameter documentation compensates for it.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

A financial tool with 3 parameters, 0% schema coverage, no annotations, and no output schema requires an extensive description. The current description lacks validation rules, currency constraints, success behavior, and safety warnings.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 0% description coverage and the description provides no parameter semantics: nothing on the amount's unit or precision (whole USDC vs. base units), the toAgentId format, or the purpose of the optional 'reason' field.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
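
The unit question matters because USDC is a 6-decimal token on-chain, so whether 'amount' means whole USDC or base units changes a transfer by a factor of a million. A sketch of the conversion (the 6-decimal figure is standard for USDC; the helper itself is illustrative, not part of this server):

```python
from decimal import Decimal

USDC_DECIMALS = 6  # USDC uses 6 decimal places on EVM chains

def usdc_to_base_units(amount: str) -> int:
    """Convert a human-readable USDC amount to integer base units."""
    units = Decimal(amount) * (10 ** USDC_DECIMALS)
    if units != units.to_integral_value():
        raise ValueError("amount has more than 6 decimal places")
    return int(units)

assert usdc_to_base_units("1.5") == 1_500_000
assert usdc_to_base_units("0.000001") == 1  # smallest representable unit
```

A single clause in the description ("amount is in whole USDC, up to 6 decimal places") would eliminate this ambiguity.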

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clear verb 'Send' and resource 'payment' with target 'another agent'. However, it fails to distinguish from sibling tools like repay_loan, book_and_pay, or withdraw_usdc which also involve value transfers.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this versus alternatives (e.g., repay_loan for debts, book_and_pay for services), prerequisites like sufficient balance, or error handling expectations.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

set_webhook (C)

Configure webhook URL for real-time notifications

Parameters (JSON Schema)
webhookUrl (required)
Behavior2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden but fails to disclose critical behaviors: what events trigger notifications, whether this replaces existing webhooks, or the significance of passing null (to clear/disable).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single efficient sentence with no redundancy. Front-loaded with the action verb. However, given the lack of structured documentation elsewhere, extreme brevity becomes a liability rather than a virtue.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Inadequate for a configuration tool in a complex ecosystem (bets, loans, negotiations). Missing: event types sent to webhook, retry policies, authentication headers, and the null-clearing behavior implied by the anyOf schema.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 0% description coverage. While the description mentions 'webhook URL' corresponding to the parameter name, it fails to explain that null disables the webhook or what URI format is expected, leaving the parameter semantics under-documented.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
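
The anyOf schema implies webhookUrl accepts either a URL string or null. A sketch of the semantics the description leaves implicit (treating null as "disable" is inferred from the schema, not confirmed by the server; the validation rules are invented):

```python
from urllib.parse import urlparse

# Hypothetical stored configuration.
config = {"webhook_url": None}

def set_webhook(webhook_url):
    """Set or clear the webhook. None is assumed to disable delivery."""
    if webhook_url is not None:
        parsed = urlparse(webhook_url)
        if parsed.scheme != "https" or not parsed.netloc:
            raise ValueError("webhookUrl must be an absolute https URL")
    config["webhook_url"] = webhook_url
    return config

set_webhook("https://example.com/hooks/midas")
assert config["webhook_url"] == "https://example.com/hooks/midas"
set_webhook(None)  # clearing, per the anyOf null branch
assert config["webhook_url"] is None
```

Stating the null-clearing behavior and the replace-vs-append semantics in the description would resolve the two biggest unknowns.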

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

States the core action (configure webhook URL) and purpose (real-time notifications). Clear distinction from siblings which focus on transactions, bets, and messaging rather than infrastructure configuration.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use versus alternatives, no mention of prerequisites (e.g., URL validation requirements), and no indication of how this interacts with other notification mechanisms.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

sign_contract (C)

Sign a contract as counterparty to activate it

Parameters (JSON Schema)
contractId (required)
Behavior2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden but only mentions 'activate it' regarding state change. It fails to disclose critical behavioral traits: whether signing is irreversible, binding obligations created, side effects (e.g., notifications sent), or required permissions/authority.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The single sentence is front-loaded with the action verb and contains no redundant or wasted words. However, given the complete lack of annotations and schema descriptions, the description is arguably underspecified rather than optimally concise.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a high-stakes operation like contract signing with zero annotations, no output schema, and undocumented parameters, the eight-word description is insufficient. It lacks the safety warnings (irreversibility), lifecycle context, and post-conditions necessary for an agent to invoke this tool responsibly.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0% (contractId lacks description). The description mentions 'contract' generally but does not explain the contractId parameter semantics, format, or how to obtain it (e.g., from my_contracts or create_contract response).

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description provides a specific verb ('Sign'), resource ('contract'), and scope ('as counterparty to activate it'). The 'counterparty' phrase effectively distinguishes this from sibling create_contract, clarifying this tool is for the receiving party rather than the initiator.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The phrase 'as counterparty' implies usage context (when you are the invited party, not the creator), but lacks explicit workflow guidance such as prerequisites (e.g., contract must be created first) or exclusion criteria (e.g., do not use if already signed).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

start_negotiation (D)

Start a negotiation with another agent

Parameters (JSON Schema)
offer (required)
subject (required)
counterpartyId (required)
expiresInHours (optional)
Behavior1/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, yet the description discloses no behavioral traits. It omits what happens upon invocation (e.g., notification sent to counterparty), state persistence, expiration mechanics, or whether this creates a binding obligation.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

While brief at six words, this represents under-specification rather than efficient conciseness. The single sentence fails to front-load critical context for a four-parameter tool with nested objects.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness1/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Completely inadequate for the complexity: four parameters (one a nested object with additionalProperties), no annotations, no output schema, and no explanation of the negotiation lifecycle or return behavior.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters1/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage, the description must compensate for four undocumented parameters including a complex nested 'offer' object with arbitrary properties. It provides no guidance on expected offer structure, counterpartyId format, or expiration behavior.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
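
Given the free-form 'offer' object (additionalProperties), a single worked example call would do much of the missing documentation's job. A hypothetical payload builder (the four top-level keys match the schema; the offer's inner fields and the 72-hour default are invented):

```python
# Hypothetical start_negotiation payload. Only the four top-level keys
# come from the schema; everything else is illustrative.
DEFAULT_EXPIRES_IN_HOURS = 72  # invented default; the real one is undocumented

def build_negotiation(counterparty_id, subject, offer, expires_in_hours=None):
    if not offer:
        raise ValueError("offer must be a non-empty object")
    return {
        "counterpartyId": counterparty_id,
        "subject": subject,
        "offer": offer,
        "expiresInHours": expires_in_hours or DEFAULT_EXPIRES_IN_HOURS,
    }

payload = build_negotiation(
    "agent-42",
    "Data-labeling contract",
    {"price_usdc": 120, "deliverable": "10k labeled rows"},
)
assert payload["expiresInHours"] == 72
```

An example offer like this in the description would tell agents what shape the nested object is expected to take.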

Purpose3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description states the basic action (Start) and resource (negotiation), but fails to clarify the scope or domain of negotiation compared to siblings like create_bet, create_loan_offer, or create_contract. It doesn't explain what distinguishes a 'negotiation' from these specific contract types.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines1/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides no guidance on when to use this tool versus alternatives like create_bet or create_loan_offer, nor does it mention prerequisites such as the counterparty's existence or required relationship state.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

transaction_history (D)

Get transaction history

Parameters (JSON Schema)
limit (optional)
offset (optional)
Behavior1/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden for behavioral disclosure but offers none. It fails to clarify read-only safety (critical in financial context), pagination behavior (despite limit/offset parameters), return format, or whether results are sorted.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

While the three-word description is technically concise, it suffers from under-specification rather than efficient information density. The structure is clear (verb + object) but lacks the necessary context to earn its place as useful documentation.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 1/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the rich financial domain with numerous sibling tools (bets, loans, contracts, subscriptions, payments) and undescribed pagination parameters, the description is completely inadequate. It should clarify scope boundaries and output format.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 1/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0% for both parameters (limit, offset). The description adds no compensatory information about pagination semantics, default values, or valid ranges for these numeric parameters.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
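For comparison, a minimal sketch of documented pagination parameters follows; the bounds and defaults are assumptions for illustration, not values the server declares.

```python
# Illustrative input schema for transaction_history with documented
# pagination semantics. Property names match the tool; the descriptions,
# defaults, and ranges are hypothetical.
TRANSACTION_HISTORY_SCHEMA = {
    "type": "object",
    "properties": {
        "limit": {
            "type": "integer",
            "minimum": 1,
            "maximum": 100,
            "default": 20,
            "description": "Maximum number of transactions to return (1-100).",
        },
        "offset": {
            "type": "integer",
            "minimum": 0,
            "default": 0,
            "description": "Number of transactions to skip, for pagination.",
        },
    },
    "required": [],
}
```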

Purpose: 2/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Get transaction history' is essentially a tautology that restates the tool name (transaction_history → Get transaction history). While it identifies the verb and resource, it fails to distinguish from siblings like my_bets, my_contracts, my_loans, or check_balance, leaving ambiguity about what transaction types are included.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 1/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance provided on when to use this versus specific history tools (my_bets, my_contracts) versus check_balance. No mention of prerequisites, filtering capabilities, or whether this aggregates all financial activity or specific payment types.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

unblock_agent (Grade: C)

Unblock a previously blocked agent

Parameters (JSON Schema)

Name      Required   Description   Default
agentId   Yes
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. While 'previously blocked' hints at a state precondition, the description lacks disclosure of failure modes (e.g., what happens if agentId is not currently blocked), reversibility, or side effects.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence, front-loaded with the action. No verbosity or wasted words. However, given the complete lack of annotations and schema descriptions, this extreme brevity contributes to under-specification.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with only one parameter and simple boolean state toggle, the description is minimally viable. However, with no output schema, no annotations, and 0% schema coverage, it omits error handling and success indicators that would complete the contract.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0% (agentId has no description), so the description must compensate. It implicitly references the parameter by mentioning 'agent', but provides no format details, validation rules, or explicit mapping to agentId. Merely repeating the resource name is insufficient compensation for zero schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a specific verb ('Unblock') and clear resource ('agent'), with 'previously blocked' adding scope context. However, it does not explicitly reference sibling tool 'block_agent' to clarify the inverse relationship.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance provided on when to use this versus alternatives, nor prerequisites (e.g., checking blocked status first with 'list_blocked_agents'). The description merely states what the tool does, not when to invoke it.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

unread_count (Grade: B)

Get number of unread messages

Parameters (JSON Schema)

No parameters

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. While 'Get' implies read-only behavior, it doesn't confirm this won't modify message state, specify the return format (integer vs object), or indicate which message types are counted (inbox vs negotiations vs bets).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Extremely compact at five words with no filler. Front-loaded with action and target. However, given zero annotations and no output schema, the extreme brevity leaves critical behavioral gaps that an additional sentence could address.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Without output schema, the description should specify the return value structure (e.g., 'Returns integer count'). In a multi-domain server (bets, loans, negotiations), it also fails to specify which subsystem's messages are being counted.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
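A one-line output schema would close most of this gap. A sketch follows, in which the 'count' key and the claim that all message types are aggregated are assumptions, not server facts.

```python
# Hypothetical output schema for unread_count. The envelope key 'count'
# and the all-types aggregation described here are illustrative only.
UNREAD_COUNT_OUTPUT_SCHEMA = {
    "type": "object",
    "properties": {
        "count": {
            "type": "integer",
            "minimum": 0,
            "description": "Total unread messages across all message types.",
        },
    },
    "required": ["count"],
}
```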

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The tool takes zero parameters, which establishes the baseline score of 4. The description does not explicitly acknowledge the parameter-free nature (e.g., 'Returns total count without filtering'), but none is needed given the schema's clarity.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clear verb ('Get') and resource ('unread messages') combination states exactly what the tool retrieves. However, it fails to differentiate from sibling tool 'check_inbox', which may also provide message counts or status updates.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance provided on when to use this lightweight count operation versus 'check_inbox' (which likely returns message contents) or 'read_message'. No prerequisites or conditions mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

withdraw_usdc (Grade: C)

Withdraw USDC to an external Base network address

Parameters (JSON Schema)

Name        Required   Description   Default
amount      Yes
toAddress   Yes
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It identifies the Base network but fails to mention critical financial operation traits: irreversibility, transaction fees, confirmation times, minimum withdrawal amounts, or address validation requirements.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The single sentence is efficiently structured with no wasted words. Information is front-loaded with the action verb. However, given the complexity and lack of supporting documentation, extreme brevity becomes a liability rather than a virtue.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a high-risk financial withdrawal tool with zero annotations, zero schema descriptions, and no output schema, the description is dangerously incomplete. It omits safety warnings, error behaviors, and return value expectations that are essential for an irreversible blockchain operation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%. The description implies 'toAddress' is the external destination and 'amount' refers to USDC, but does not specify units (e.g., base units vs. dollars), address format requirements (checksum), or validation constraints. Insufficient compensation for the undocumented schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
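A sketch of the two clarifications the description could make: address shape (Base is an EVM chain, so addresses are 0x-prefixed 20-byte hex strings) and amount units (USDC uses 6 decimal places on EVM chains). The helper names are hypothetical, and full EIP-55 mixed-case checksum validation is omitted for brevity.

```python
import re
from decimal import Decimal

# Shape check only; a production validator would also verify the
# EIP-55 mixed-case checksum before sending funds.
EVM_ADDRESS_RE = re.compile(r"^0x[0-9a-fA-F]{40}$")

def looks_like_base_address(addr: str) -> bool:
    """True if addr has the shape of an EVM (Base) address."""
    return EVM_ADDRESS_RE.fullmatch(addr) is not None

def usdc_to_base_units(amount: str) -> int:
    """Convert a decimal USDC amount to on-chain base units (6 decimals)."""
    return int(Decimal(amount) * 10**6)
```

Stating which of these conventions the tool actually follows (dollars vs. base units, checksummed vs. lowercase addresses) is exactly the compensatory documentation the bare schema lacks.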

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb (Withdraw), resource (USDC), and destination scope (external Base network address). It distinguishes from potential internal transfers by specifying 'external' and identifies the blockchain network, though it does not explicitly differentiate from the sibling 'send_payment' tool.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives like 'send_payment', nor are prerequisites mentioned (e.g., sufficient balance, Base network compatibility). The description lacks when-not-to-use conditions or explicit alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

x402_pay (Grade: A)

Pay an external API or service via the x402 protocol (HTTP 402). MIDAS handles payment signing and settlement automatically using your Base wallet USDC.

Parameters (JSON Schema)

Name      Required   Description   Default
url       Yes
body      No
method    No
headers   No
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full disclosure burden and succeeds well: it explicitly states that MIDAS handles signing and settlement automatically (a critical behavioral detail), specifies the currency/asset (USDC), and identifies the blockchain rail (Base). It could be improved by noting that payments are irreversible or by mentioning error scenarios.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single, dense sentence with zero waste. Front-loads the core action, parenthetically clarifies the protocol, and efficiently packs in settlement mechanism and currency details. Every clause earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequate for understanding the tool's purpose and mechanism, but incomplete given complexity: payment tool with financial risk, 4 undocumented parameters, nested objects, and no annotations. Missing parameter semantics prevent proper invocation without additional inference.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Critical gap: schema has 0% description coverage, and the description fails to compensate by explaining any of the 4 parameters (url, method, body, headers). While 'external API' implies url usage, there is no explanation of what body/method/headers represent or that url is the payment endpoint. For a financial tool, this lack of parameter context is significant.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
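A hedged sketch of the parameter documentation that would close the gap; the property descriptions and the default method below are assumptions, not facts about the server.

```python
# Illustrative documented schema for x402_pay's four parameters.
# Property names match the tool; everything else is hypothetical.
X402_PAY_SCHEMA = {
    "type": "object",
    "properties": {
        "url": {
            "type": "string",
            "description": "The x402-enabled endpoint that responds with "
                           "HTTP 402 Payment Required.",
        },
        "method": {
            "type": "string",
            "enum": ["GET", "POST", "PUT", "DELETE"],
            "default": "GET",
            "description": "HTTP method for the request.",
        },
        "body": {
            "type": "object",
            "description": "Optional JSON request body, forwarded as-is.",
        },
        "headers": {
            "type": "object",
            "description": "Optional extra HTTP headers for the request.",
        },
    },
    "required": ["url"],
}
```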

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Excellent specificity: states the action (Pay), target resource (external API or service), and specific mechanism/protocol (x402/HTTP 402). Clearly distinguishes from sibling tools like send_payment (P2P) and book_and_pay (specific commerce flow) by focusing on API/service payments.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides strong contextual signals by specifying 'external API' and 'x402 protocol' which implicitly guides selection for machine-to-machine payment scenarios. Lacks explicit comparison to alternatives (e.g., doesn't state 'use this instead of send_payment for APIs'), but the scope is distinct enough to infer usage.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
