Skip to main content
Glama

Server Details

Financial infrastructure for AI agents: wallets, USDC transfers, lending, jobs on Polygon

Status
Healthy
Last Tested
Transport
Streamable HTTP
URL
Repository
WirterNow/ai-agent-bank-mcp-server
GitHub Stars
0

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client
Glama
MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.

Tool Definition Quality

Score is being calculated. Check back soon.

Available Tools

35 tools
accept_barterAInspect

Accept a barter offer — commit to delivering the requested capability in exchange for the offerer's capability. Both parties earn +3 reputation and +3 CC on completion. Requires api_key.

ParametersJSON Schema
NameRequiredDescriptionDefault
api_keyYesYour api_key from register_agent
agent_idYesYour agent UUID
barter_idYesUUID of the barter offer to accept
Behavior3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes the core action (acceptance leading to commitment) and outcomes (+3 reputation and +3 CC on completion), which are valuable behavioral details. However, it lacks information about error conditions, timing, or what happens if the barter is already accepted.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose, followed by outcome details and a prerequisite. Every sentence earns its place: the first defines the action, the second specifies rewards, and the third states an authentication requirement. No wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with no annotations and no output schema, the description covers the basic action and outcomes adequately but lacks details on error handling, response format, or integration with sibling tools. Given the complexity of a barter acceptance operation, more context about the transaction lifecycle would be helpful.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all three parameters. The description adds no additional parameter semantics beyond what's in the schema (e.g., it doesn't explain format or validation rules for barter_id). Baseline 3 is appropriate when the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('accept a barter offer'), the resource involved ('barter offer'), and the outcome ('commit to delivering the requested capability in exchange for the offerer's capability'). It distinguishes from siblings like 'create_barter_offer' or 'confirm_barter_delivery' by focusing on acceptance rather than creation or completion.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives like 'accept_swap' or 'negotiate_job'. It mentions 'Requires api_key' as a prerequisite but doesn't explain the context for accepting barter offers versus other transaction types available in the sibling tool list.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

accept_swapCInspect

Accept an open P2P token swap offer. Requires api_key.

ParametersJSON Schema
NameRequiredDescriptionDefault
api_keyYesYour api_key from register_agent
swap_idYesUUID of the swap to accept
agent_idYesUUID of the agent accepting the swap
Behavior2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure but fails to state that this is a destructive/write operation, whether accepting is irreversible, what happens to the underlying tokens (immediate transfer?), or any fees/risk involved.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The single sentence is front-loaded and efficient with no redundant words. However, given the lack of annotations and output schema, the extreme brevity contributes to informational gaps rather than earning a perfect score for structure.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a financial mutation tool with no annotations and no output schema, the description is incomplete. It fails to disclose side effects (token transfers), success indicators, or error conditions that an agent would need to invoke this safely.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the input parameters are fully documented in the schema itself ('UUID of the swap to accept', 'UUID of the agent accepting the swap'). The description adds no additional parameter semantics, meeting the baseline expectation for high-coverage schemas.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a specific verb ('Accept') and resource ('P2P token swap offer') that clearly distinguishes this from sibling 'create_swap' by implying consumption of existing offers versus creation. However, it does not explicitly clarify the relationship between accepting and creating swaps.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives like 'create_swap' or 'transfer', nor does it mention prerequisites such as requiring an existing open swap offer or sufficient token balance to fulfill the trade.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

analyze_opportunityBInspect

Analyze a specific financial opportunity (job, swap, loan, or service) and get an AI risk/reward assessment before committing. No api_key required.

ParametersJSON Schema
NameRequiredDescriptionDefault
agent_idNoYour agent UUID (optional, for personalized assessment based on your reputation)
opportunity_idYesUUID of the opportunity to analyze
opportunity_typeYesType: job, swap, loan, or service
Behavior2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It mentions 'No api_key required,' which is useful context, but lacks details on behavioral traits such as rate limits, authentication needs (beyond api_key), what the assessment entails, or potential side effects. For a tool that provides risk/reward analysis, more transparency on output format or limitations would be beneficial.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise with two sentences that efficiently convey purpose and a key behavioral note ('No api_key required'). It is front-loaded with the main function, though it could be slightly more structured by separating usage context into a distinct guideline.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of financial opportunity analysis and the lack of annotations and output schema, the description is incomplete. It does not explain what the AI assessment returns, potential errors, or how it integrates with sibling tools. For a tool with no structured output information, more detail on expected results is needed to be fully helpful.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all three parameters. The description does not add any meaning beyond the schema, such as explaining how 'opportunity_type' values map to real-world scenarios or clarifying the 'agent_id' usage. Baseline score of 3 is appropriate as the schema handles parameter documentation adequately.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Analyze a specific financial opportunity... and get an AI risk/reward assessment before committing.' It specifies the resource (financial opportunity) and verb (analyze/assess), but does not explicitly differentiate from sibling tools like 'assess_credit' or 'portfolio_summary', which might also involve risk assessment.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context ('before committing') and mentions 'No api_key required,' which provides some guidance. However, it does not explicitly state when to use this tool versus alternatives like 'assess_credit' or 'portfolio_summary,' nor does it outline prerequisites or exclusions beyond the implied pre-commitment scenario.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

assess_creditBInspect

Assess an agent's CBC credit limit based on reputation and earning history. No api_key required.

ParametersJSON Schema
NameRequiredDescriptionDefault
agent_idYesUUID of the agent to assess
Behavior3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It successfully discloses return values (credit limit, available credit, performance metrics) and assessment criteria, but fails to explicitly state whether this is a read-only operation, if it triggers any side effects, or performance characteristics like caching.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description consists of two efficient sentences: the first establishes the action and resource, the second specifies return values. There is no redundant or filler text; every phrase conveys necessary information about function or output.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a single-parameter tool without output schema, the description adequately covers the core function and return data. However, given the financial context and presence of sibling mutation tools (borrow, transfer), the absence of safety annotations or explicit read-only status creates a gap in contextual completeness for an AI agent making tool selection decisions.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage for its single 'agent_id' parameter ('UUID of the agent to assess'). The description references 'an agent's' but adds no semantic details beyond what the schema already provides about the parameter format or validation rules, warranting the baseline score.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool 'Assess[es] an agent's Capability-Backed Collateral (CBC) credit limit' using specific criteria (reputation and earning history). However, it does not explicitly differentiate this from siblings like 'get_balance' (current funds) or 'borrow' (action to take a loan), leaving potential ambiguity about when to check credit limits versus balances.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It does not mention that this should be used before invoking 'borrow' or 'borrow_capability' to verify available credit, nor does it indicate any prerequisites or conditions where assessment might fail.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

borrowAInspect

Borrow tokens from the LendingPool with reputation-based collateral. Auto-selects the USDC pool. Requires api_key.

ParametersJSON Schema
NameRequiredDescriptionDefault
amountYesAmount to borrow
api_keyYesYour api_key from register_agent
urgencyNoUrgency level 1-10 (adds interest premium)
agent_idYesUUID of the borrowing agent
collateral_tokenYesToken symbol for collateral: WMATIC or WETH
collateral_amountYesAmount of collateral to deposit
Behavior3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It successfully adds context about reputation-based collateral and auto-pool-selection, but fails to mention critical financial behaviors: that this creates a debt position, locks collateral, or that the 'urgency' parameter increases interest costs (described only in schema).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description consists of two efficient sentences with zero redundancy. The first establishes the core operation and mechanism; the second provides critical auto-selection behavior. Every word earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given this is a complex financial transaction tool with no annotations and no output schema, the description provides minimum viable context for invocation but lacks safety-critical completeness. It omits warnings about locked collateral, interest obligations, or the irreversible nature of the transaction that would be essential for an unannotated borrowing tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

While the input schema has 100% coverage (baseline 3), the description adds valuable semantic context by explaining the auto-selection of the USDC pool. This clarifies why there is no 'borrow_token' parameter in the schema and frames the collateral as reputation-based, adding meaning beyond the raw parameter definitions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Borrow tokens'), target ('LendingPool'), mechanism ('reputation-based collateral'), and specific behavior ('Auto-selects the USDC pool'). However, it does not explicitly distinguish itself from the sibling tool 'borrow_capability' (assessment vs. action).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides implied usage context by stating it 'Auto-selects the USDC pool,' indicating this should be used when seeking USDC specifically. However, it lacks explicit guidance on when to use this versus 'transfer' or whether to call 'borrow_capability' first as a prerequisite.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

borrow_capabilityAInspect

Borrow tokens against future earning potential (CBC). Requires api_key.

ParametersJSON Schema
NameRequiredDescriptionDefault
tokenNoToken symbol (defaults to USDC)
amountYesAmount to borrow (must be within available credit)
api_keyYesYour api_key from register_agent
agent_idYesUUID of the borrowing agent
max_repayment_daysNoMax repayment period in days (default 90)
Behavior3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden. It discloses the core behavioral trait (automatic income-share repayment) and state change (creates a Capability Pledged Agreement), but omits safety considerations, error conditions, reversibility, or prerequisites that would be critical for a financial operation.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two efficiently structured sentences with zero waste. The first sentence front-loads the core action and unique collateral type (CBC), while the second explains the repayment mechanism. Every word earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a financial tool creating debt obligations, the description adequately explains the core concept but leaves operational gaps. It does not specify what is returned upon success, what happens if future earnings fail to materialize, or how this interacts with the 'assess_credit' sibling that likely must precede it.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, establishing a baseline of 3. The description adds semantic context by linking 'amount' to 'future earning potential' and explaining the income-share nature of 'max_repayment_days', but does not elaborate on parameter formats or validation rules beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action (borrow tokens) and resource (future earning potential/CBC). It effectively distinguishes from the sibling 'borrow' tool by specifying the unique income-share repayment mechanism and Capability Pledged Agreement structure.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

While the description explains the mechanism (automatic income-share repayment from future job earnings), it does not explicitly compare this to the sibling 'borrow' tool or state prerequisites (such as requiring 'assess_credit' first). Usage is implied through mechanism description but lacks explicit when-to-use guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

borrow_ccBInspect

Borrow Capability Credits against your reputation — no collateral required. CC is repaid automatically from job earnings. 1 CC = 1 Cognitive Work Unit. Requires api_key.

ParametersJSON Schema
NameRequiredDescriptionDefault
amountYesCC to borrow (min 1, max = your credit limit)
api_keyYesYour api_key from register_agent
agent_idYesYour agent UUID
term_daysNoRepayment deadline in days (default 30, max 90)
auto_repayNoAuto-repay from job earnings (default true, recommended)
Behavior3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden. It discloses key behavioral traits: the borrowing mechanism (against reputation), repayment method (automatic from job earnings), and authentication requirement (api_key). However, it lacks details on rate limits, error conditions, what happens if earnings are insufficient for repayment, or the exact return format.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized (3 sentences) and front-loaded with the core purpose. Each sentence adds value: borrowing mechanism, repayment terms, and authentication. There's no wasted text, though it could be slightly more structured for readability.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a financial transaction tool with 5 parameters and no annotations or output schema, the description provides adequate context about the borrowing operation and repayment mechanism. However, it lacks details about return values, error handling, and specific business rules (e.g., what determines credit limits, consequences of default). The absence of output schema increases the need for more completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all 5 parameters thoroughly. The description adds minimal parameter semantics beyond what's in the schema - it mentions the api_key requirement but doesn't provide additional context about parameter interactions or business logic. Baseline 3 is appropriate when schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Borrow Capability Credits') and resource ('against your reputation'), explaining that CC represents Cognitive Work Units. However, it doesn't explicitly differentiate from sibling tools like 'borrow' or 'borrow_capability', which appear to be related alternatives.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context by stating 'no collateral required' and 'CC is repaid automatically from job earnings', suggesting this is for agents needing credit against future earnings. However, it doesn't provide explicit guidance on when to use this versus alternatives like 'borrow' or 'borrow_capability', nor does it mention prerequisites beyond the api_key requirement.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

concierge_chatBInspect

Ask the AI concierge questions about the platform, APIs, smart contracts, and how to use features. No api_key required.

ParametersJSON Schema
NameRequiredDescriptionDefault
messageYesQuestion or message for the concierge
agent_idNoOptional agent UUID for context
Behavior2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure but offers minimal details. It does not clarify if the concierge maintains conversation state, what format responses take, whether it can access user-specific data, or any rate limiting—critical gaps for an AI chat tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence with no redundant words. It front-loads the action ('Ask') and immediately qualifies the scope, making it easy to scan and understand.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

While the description covers the basic purpose for a simple 2-parameter tool, it is minimally viable given the lack of output schema and annotations. For a conversational AI tool in a complex domain (smart contracts, borrowing), the absence of response format details or capability boundaries leaves agents under-informed.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, establishing a baseline of 3. The description elaborates slightly on the 'message' parameter by listing valid topics (platform, APIs, etc.), but does not add details about the 'agent_id' parameter's context requirements or how these parameters interact with the concierge's behavior.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a specific verb ('Ask') and resource ('AI concierge') and clarifies the scope ('platform, APIs, smart contracts, and how to use features'). It effectively distinguishes this informational tool from operational siblings like 'borrow', 'transfer', and 'create_swap' by framing it as a Q&A interface rather than an action executor.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description lists appropriate topics (platform, APIs, smart contracts, features) which implicitly guides when to use the tool, but lacks explicit guidance on when NOT to use it (e.g., for executing transactions) or alternatives like 'negotiate_job' for complex negotiations.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

confirm_barter_deliveryAInspect

Confirm you have delivered your side of a barter. When both parties confirm, the barter completes and both agents earn +3 reputation and +3 CC. Requires api_key.

ParametersJSON Schema
NameRequiredDescriptionDefault
api_keyYesYour api_key from register_agent
agent_idYesYour agent UUID
barter_idYesUUID of the barter
deliverableYesURL, hash, or description of what you delivered
Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes the action (confirmation of delivery), outcome (barter completion with reputation/CC rewards), and prerequisite ('Requires api_key'). It doesn't mention error conditions, rate limits, or idempotency, leaving some behavioral aspects uncovered.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise (two sentences) and front-loaded with the core purpose. Every word earns its place: the first sentence states the action, the second covers outcomes and prerequisites. There's zero wasted text or redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a mutation tool with no annotations and no output schema, the description does well by explaining the action, outcome, and prerequisites. However, it doesn't describe what happens on failure, whether the action is idempotent, or what the return value might be. Given the complexity of a barter confirmation, these gaps prevent a perfect score.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all four parameters thoroughly. The description doesn't add any parameter-specific details beyond what's in the schema (e.g., it doesn't clarify the format of 'deliverable' or relationships between parameters). The baseline of 3 is appropriate when the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Confirm you have delivered your side of a barter') and resource ('barter'), distinguishing it from siblings like 'create_barter_offer' or 'list_barter_offers'. It explicitly mentions the outcome ('the barter completes') and reputation/CC rewards, making the purpose unambiguous.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool ('When both parties confirm'), implying it's part of a multi-step barter process. However, it doesn't explicitly state when NOT to use it or name alternatives (e.g., what to do if delivery fails), which prevents a perfect score.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

create_barter_offerAInspect

Offer your capabilities in exchange for another agent's capabilities. No USDC required — pure skill-for-skill barter. Perfect for new agents with no balance. Requires api_key.

ParametersJSON Schema
NameRequiredDescriptionDefault
api_keyYesYour api_key from register_agent
agent_idYesYour agent UUID
expires_daysNoDays until offer expires (default 30)
want_quantityNoNumber of units you want (default 1)
offer_quantityNoNumber of units you offer (default 1)
want_capabilityYesCapability label you want in return
offer_capabilityYesCapability label you offer, e.g. 'research-report'
want_descriptionYesPlain English description of what you want
offer_descriptionYesPlain English description of what you'll deliver
Behavior3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions 'Requires api_key' (an authentication need) and implies a creation/mutation action ('offer'), but lacks details on rate limits, error conditions, what happens upon success (e.g., offer visibility), or whether the action is reversible. It adds some value but leaves significant gaps for a mutation tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured in three sentences: purpose, context/benefits, and requirement. Each sentence adds value without redundancy. It could be slightly more front-loaded by leading with the core action, but it's well-sized and wastes no words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (mutation with 9 parameters, no annotations, no output schema), the description is moderately complete. It covers purpose, context, and a key requirement, but lacks details on behavioral outcomes, error handling, or what the tool returns. For a mutation tool with rich parameters, more behavioral context would be beneficial.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all 9 parameters thoroughly. The description adds no additional parameter semantics beyond what's in the schema (e.g., no examples, format clarifications, or interdependencies). The baseline of 3 is appropriate when the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Offer your capabilities in exchange for another agent's capabilities.' It specifies the verb ('offer'), resource ('capabilities'), and context ('skill-for-skill barter'), distinguishing it from payment-based tools. However, it doesn't explicitly differentiate from sibling tools like 'create_swap' or 'create_job', which may involve similar exchange concepts.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool: 'No USDC required — pure skill-for-skill barter. Perfect for new agents with no balance.' This explicitly positions it as an alternative to monetary transactions. It doesn't mention specific alternatives like 'create_swap' or exclusions, but the context is sufficient for informed selection.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

create_jobAInspect

Create a service-for-crypto escrow job via the ServiceEscrow smart contract (ERC-8183). Set amount=0 for negotiable jobs. Requires api_key.

ParametersJSON Schema
NameRequiredDescriptionDefault
tokenNoToken symbol (defaults to USDC)
amountYesPayment amount in tokens (0 for negotiable)
api_keyYesYour api_key from register_agent
deadlineYesISO 8601 deadline datetime string
descriptionYesJob description
client_agent_idYesUUID of the agent creating the job
Behavior3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Mentions the escrow context and smart contract standard (ERC-8183), hinting at on-chain behavior. However, with no annotations provided, the description fails to disclose critical behavioral traits like whether funds are immediately locked, what authorization is required, or the transaction confirmation behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Extremely efficient two-sentence structure. First sentence establishes purpose and technical context; second provides actionable configuration guidance. No redundant or filler content.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequate for a creation tool with complete schema coverage, but gaps remain given the complexity of smart contract interaction and absence of output schema. Missing: return value description (job ID, transaction hash), prerequisite conditions, or post-creation state.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

While schema coverage is 100% (baseline 3), the description adds valuable business context by framing the job as an 'escrow' and reinforcing the negotiable job pattern, which helps the agent understand the semantic intent behind the amount and description fields.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states the specific action (Create), resource (service-for-crypto escrow job), and mechanism (ServiceEscrow smart contract, ERC-8183). However, the mention of 'negotiable jobs' creates ambiguity regarding sibling tool 'negotiate_job' without clarifying which tool to use for negotiation workflows.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides a concrete usage tip ('Set amount=0 for negotiable jobs') that helps configure the tool correctly. Lacks explicit guidance on when to use this tool versus the 'negotiate_job' sibling or prerequisites like token balances.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

create_swapCInspect

Create a P2P token swap offer on the P2PSwap smart contract. Requires api_key.

ParametersJSON Schema
NameRequiredDescriptionDefault
api_keyYesYour api_key from register_agent
creator_idYesUUID of the agent creating the swap
offer_tokenYesToken symbol or address being offered (e.g. USDC)
offer_amountYesAmount of offer token
request_tokenYesToken symbol or address wanted (e.g. WETH)
request_amountYesAmount of requested token
Behavior2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure but only mentions the smart contract context without explaining critical blockchain-specific behaviors. It fails to disclose that offered funds are likely locked in escrow, whether the offer expires, what data is returned upon creation, or that the operation requires gas fees and is irreversible.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description consists of a single, efficient sentence that immediately states the action and target resource without redundant phrases or filler content. It successfully front-loads the critical information and avoids verbosity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

As a financial blockchain tool with no output schema or annotations, the description lacks essential context about the offer lifecycle, return values (such as offer IDs), failure modes, and fund custody mechanics. The minimal one-sentence description is insufficient for the complexity of a smart contract interaction that locks user funds.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage for all five parameters (creator_id, offer_token, offer_amount, request_token, request_amount), providing clear semantics for each field. Since the schema fully documents the parameters, the description does not need to add additional parameter context, meeting the baseline expectation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses the specific verb 'Create' and identifies the resource as a 'P2P token swap offer on the P2PSwap smart contract', clearly indicating the action and scope. While it distinguishes implicitly from sibling tools like `accept_swap` and `transfer` through distinct naming, it does not explicitly differentiate when to use this versus alternatives.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives such as `transfer` (for direct sends) or `accept_swap` (for taking existing offers). It also omits prerequisites such as token approvals or balance requirements necessary for creating swap offers.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

estimate_cc_priceAInspect

Estimate the fair CC price for a task using CWU (Cognitive Work Unit) as the valuation basis. Use BEFORE posting a job (to know what to offer) or receiving an offer (to know whether to accept). 1 CC = 1 CWU = F(1,000,003) mod (10⁹+7) = 986,892,585.

ParametersJSON Schema
NameRequiredDescriptionDefault
agent_idNoOptional — your agent UUID for personalized market context
task_typeYesType: code_review, research, data_analysis, debugging, writing, translation, simple_qa, other
task_descriptionNoBrief description of the work
estimated_minutesYesEstimated compute/work time in minutes
Behavior3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It discloses that this is an estimation tool (not an actual transaction), which is helpful behavioral context. However, it doesn't mention potential limitations like accuracy, dependencies on market data, or error handling, leaving gaps in behavioral understanding for a tool with financial implications.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized with two sentences: one stating the purpose and valuation basis, and another providing usage guidelines. The mathematical formula for CC/CWU is included but could be considered slightly dense; however, it's essential context. Overall, it's front-loaded and efficient with minimal waste.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (financial estimation with a custom formula) and lack of annotations or output schema, the description does a good job explaining the core concept (CWU basis) and usage timing. However, it doesn't describe the output format or potential error cases, which would be helpful for an agent to interpret results correctly.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all parameters thoroughly. The description doesn't add any parameter-specific information beyond what's in the schema (e.g., it doesn't explain how 'task_type' or 'estimated_minutes' affect the calculation). Baseline 3 is appropriate when the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with a specific verb ('estimate') and resource ('fair CC price for a task'), and distinguishes it from siblings by specifying it uses CWU as the valuation basis. It explicitly mentions what CC and CWU represent, providing unique context not found in sibling tools like 'get_market_rates' or 'analyze_opportunity'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use the tool: 'Use BEFORE posting a job (to know what to offer) or receiving an offer (to know whether to accept).' This clearly defines the timing and context for usage, distinguishing it from other tools that might handle actual transactions or market analysis.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

find_matching_jobsAInspect

Find jobs that match your agent's specific capabilities. Better than browsing all jobs — gets you directly relevant work opportunities. No api_key required.

ParametersJSON Schema
NameRequiredDescriptionDefault
agent_idYesUUID of your agent
capabilitiesNoOverride capabilities to search for (optional, uses agent's registered capabilities by default)
Behavior3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It adds useful context: 'No api_key required' indicates authentication behavior, and 'gets you directly relevant work opportunities' hints at filtering logic. However, it lacks details on rate limits, error handling, or output format, leaving gaps for a mutation/query tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is highly concise and front-loaded: three short sentences with zero waste. Each sentence adds value—stating the purpose, comparing to alternatives, and noting authentication—without redundancy. It's efficiently structured for quick comprehension.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (2 parameters, no output schema, no annotations), the description is partially complete. It covers purpose and basic usage but lacks details on behavioral traits (e.g., response format, pagination) and doesn't fully compensate for the absence of annotations. It's adequate but has clear gaps.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents both parameters ('agent_id' and 'capabilities'). The description doesn't add any parameter-specific semantics beyond what's in the schema (e.g., it doesn't explain how 'capabilities' matching works). Baseline 3 is appropriate as the schema handles the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Find jobs that match your agent's specific capabilities.' It specifies the verb ('find') and resource ('jobs'), and distinguishes it from 'browsing all jobs' by emphasizing relevance. However, it doesn't explicitly differentiate from sibling tools like 'list_open_jobs' or 'analyze_opportunity', which prevents a perfect score.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides some usage context: 'Better than browsing all jobs — gets you directly relevant work opportunities.' This implies when to use it (for targeted job matching) versus a generic listing, but it doesn't explicitly name alternatives like 'list_open_jobs' or specify when not to use it. The guidance is helpful but incomplete.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_agent_profileAInspect

Get a complete profile for any agent — their reputation, wallet, capabilities, completed jobs, and transaction history. No api_key required.

ParametersJSON Schema
NameRequiredDescriptionDefault
agent_idYesUUID of the agent to look up
Behavior3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It adds useful context: it's a read operation (implied by 'Get'), specifies the scope of data returned, and notes 'No api_key required' for authentication. However, it lacks details on rate limits, error conditions, or response format, leaving gaps for a tool with no annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, well-structured sentence that front-loads the core purpose ('Get a complete profile for any agent'), lists the specific data included, and ends with a practical note ('No api_key required'). Every word adds value, with zero redundancy or wasted space.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no annotations and no output schema, the description provides a clear purpose and data scope, but lacks details on response format, error handling, or behavioral constraints. For a tool with 1 parameter and 100% schema coverage, it's adequate but incomplete, as the agent must infer output structure from the listed data points without formal documentation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% coverage, fully describing the single parameter 'agent_id' as a UUID. The description does not add any parameter-specific semantics beyond what the schema provides, but with only one parameter and high schema coverage, the baseline is 3. It earns a 4 because the description implicitly reinforces the parameter's purpose by stating 'for any agent', aligning with the agent_id requirement.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'Get' and the resource 'complete profile for any agent', specifying the exact data returned: reputation, wallet, capabilities, completed jobs, and transaction history. It distinguishes from siblings like get_balance or get_transaction_history by offering a comprehensive profile instead of isolated data points.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool: to retrieve a full agent profile. It explicitly states 'No api_key required', which is a useful usage condition. However, it does not specify when to use alternatives like get_balance for just wallet info or get_transaction_history for only that component, nor does it mention any exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_balanceAInspect

Get an agent's on-chain token balances (USDC, WMATIC, WETH, MATIC). No api_key required.

ParametersJSON Schema
NameRequiredDescriptionDefault
agent_idYesUUID of the agent
Behavior3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. Adds valuable context by enumerating specific token types returned (USDC, WMATIC, WETH, MATIC) and 'on-chain' nature. However, lacks disclosure on mutability (implied read-only by 'Get' but not stated), caching behavior, or error handling for invalid agent_ids.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence, zero waste. Front-loaded with action verb, immediately identifies resource, and parenthetically specifies exact token types without redundancy. Every word earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequate for a simple single-parameter tool. Compensates for missing output schema by enumerating expected token balances. Would benefit from explicit read-only indication given lack of annotations, but sufficient for correct invocation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with 'UUID of the agent' documented. Description reinforces that agent_id represents the target agent for balance lookup, but adds no syntax details, format examples, or constraints beyond the schema definition. Baseline 3 appropriate for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Specific verb ('Get') + resource ('on-chain token balances') with explicit scope (USDC, WMATIC, WETH, MATIC). Clearly distinguishes from siblings like 'transfer', 'borrow', and 'get_transaction_history' by focusing on current balance state rather than actions or history.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Implicit usage is clear (checking holdings), but lacks explicit guidance on when to use versus alternatives like 'get_transaction_history' or prerequisites such as verifying the agent exists before calling.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_cc_balanceAInspect

Check your Capability Credit (CC) balance. CC is the platform's internal currency earned by completing jobs and barters. 1 CC ≈ 1 USDC. No api_key required.

ParametersJSON Schema
NameRequiredDescriptionDefault
agent_idYesUUID of the agent
Behavior3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden. It discloses that no authentication ('No api_key required') is needed, which is valuable behavioral context. However, it doesn't mention rate limits, error conditions, response format, or whether this is a read-only operation (though 'Check' implies reading).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is perfectly concise with three sentences that each earn their place: defines the tool's purpose, explains CC context/value, and provides important usage constraint. No wasted words and front-loaded with the core functionality.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple read operation with 1 parameter and 100% schema coverage, the description is reasonably complete. It explains the resource being accessed and important behavioral constraints. The main gap is lack of output information (no output schema exists), but the description compensates somewhat by explaining what CC is.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% with the single parameter 'agent_id' well-documented in the schema. The description doesn't add any parameter-specific information beyond what's in the schema, so it meets the baseline of 3 when schema coverage is high.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Check') and resource ('Capability Credit balance'), and distinguishes it from siblings by specifying it's about CC balance rather than general balance or CC history/market. It explains what CC is (platform's internal currency) and its approximate value, providing clear differentiation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool (to check CC balance earned from jobs/barters) and explicitly states 'No api_key required' as a usage condition. However, it doesn't explicitly mention when NOT to use it or name specific alternatives like 'get_balance' or 'get_cc_history' from the sibling list.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_cc_creditCInspect

Check your CC credit limit, outstanding CC debt, and how to increase your borrowing power. 1 CC = 1 Cognitive Work Unit = F(1,000,003) mod (10⁹+7) = 986,892,585. No api_key required.

ParametersJSON Schema
NameRequiredDescriptionDefault
agent_idYesUUID of the agent
Behavior2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden. It mentions 'No api_key required', which is useful authentication context, but doesn't disclose other behavioral traits like whether this is a read-only operation, potential rate limits, error conditions, or what the response format looks like. The Cognitive Work Unit formula adds confusion rather than clarity.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is poorly structured with unnecessary technical details (Cognitive Work Unit formula) that don't serve the tool's purpose. While brief, it wastes space on irrelevant information rather than being efficiently informative. The core purpose is front-loaded but immediately diluted by confusing elements.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with no annotations and no output schema, the description is incomplete. It mentions what information the tool provides but doesn't describe the return format, error handling, or operational constraints. The confusing formula further reduces completeness rather than enhancing it.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% with only one parameter (agent_id) fully documented in the schema. The description adds no parameter-specific information beyond what's in the schema, so it meets the baseline score of 3 for high schema coverage without compensating value.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description states the tool checks credit limit, outstanding debt, and borrowing power increase methods, which is a clear purpose. However, it doesn't differentiate from sibling tools like 'assess_credit' or 'get_cc_balance', and includes confusing technical details (Cognitive Work Unit formula) that don't clarify the core function.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives like 'assess_credit' or 'get_cc_balance'. It mentions 'No api_key required', which is a minor usage note, but lacks explicit when/when-not instructions or comparison to sibling tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_cc_historyBInspect

View your full CC transaction history — earned from jobs, spent on services, transferred to other agents. Your complete CC ledger. No api_key required.

ParametersJSON Schema
NameRequiredDescriptionDefault
limitNoMax results (default 50)
agent_idYesUUID of the agent
Behavior3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It adds useful context about authentication ('No api_key required') and scope ('full CC transaction history', 'complete CC ledger'), but doesn't describe response format, pagination, rate limits, or error conditions. For a read operation with no annotations, this is minimally adequate but lacks detail on operational behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately concise with three short sentences that each add value: stating the tool's purpose, elaborating on scope, and noting authentication. It's front-loaded with the core function. There's minimal waste, though the second sentence ('Your complete CC ledger.') is somewhat redundant with the first.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no annotations, no output schema, and 2 parameters with full schema coverage, the description provides basic purpose and authentication context but lacks details about return values, error handling, or operational constraints. For a read operation with sibling tools offering similar functionality, this is minimally complete but leaves gaps an agent would need to infer.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents both parameters (agent_id as UUID, limit with default). The description doesn't add any parameter-specific information beyond what's in the schema. According to scoring rules, when schema coverage is high (>80%), the baseline is 3 even with no param info in description.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('view', 'earned', 'spent', 'transferred') and resource ('CC transaction history', 'CC ledger'). It distinguishes from sibling tools like 'get_balance' or 'get_transaction_history' by focusing specifically on CC (currency/credit) transactions, though it doesn't explicitly name alternatives.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context by stating 'No api_key required', which suggests authentication simplicity. However, it doesn't provide explicit guidance on when to use this tool versus alternatives like 'get_transaction_history' or 'get_cc_balance', nor does it mention prerequisites or exclusions beyond the implied authentication note.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_cc_marketAInspect

Get CC market stats — total supply, circulating supply, burn rate, and implied USDC exchange rate. Tracks the health of the CC economy. No api_key required.

ParametersJSON Schema
NameRequiredDescriptionDefault

No parameters

Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes what the tool does (retrieves market stats) and adds useful context: it tracks economic health and explicitly states 'No api_key required,' which clarifies authentication needs. However, it doesn't mention potential rate limits, data freshness, or error conditions, leaving some behavioral aspects uncovered.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is highly concise and well-structured in two sentences: the first states the core functionality and specific metrics, the second adds context about economic tracking and authentication. Every phrase adds value without redundancy, making it easy for an agent to parse and understand quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (simple read operation with no parameters) and the absence of annotations and output schema, the description is reasonably complete. It clearly explains what the tool does, the metrics it returns, and authentication details. However, without an output schema, it doesn't specify the format or structure of the returned data (e.g., units, timestamps), which could be helpful for the agent to interpret results accurately.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The tool has 0 parameters with 100% schema description coverage, so the schema already fully documents the lack of inputs. The description doesn't need to add parameter information, but it does provide context about what data is retrieved (supply stats, burn rate, exchange rate), which helps the agent understand the output semantics. This exceeds the baseline of 3 for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: retrieving CC market statistics including total supply, circulating supply, burn rate, and implied USDC exchange rate. It specifies the exact metrics returned and indicates it tracks the health of the CC economy. However, it doesn't explicitly differentiate from sibling tools like 'get_market_rates' or 'get_platform_stats' which might provide overlapping or related data.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context by stating it 'tracks the health of the CC economy' and mentions 'No api_key required,' which suggests when authentication might be needed for other tools. However, it doesn't provide explicit guidance on when to use this tool versus alternatives like 'get_market_rates' or 'get_platform_stats,' leaving the agent to infer based on the metrics listed.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_market_ratesBInspect

Get current token prices and lending pool rates. No api_key required.

ParametersJSON Schema
NameRequiredDescriptionDefault

No parameters

Behavior2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries full burden. It adds some behavioral context with 'No api_key required,' indicating authentication needs, but lacks details on rate limits, data freshness, or response format. For a tool with zero annotation coverage, this is insufficient to fully inform an agent about behavioral traits.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is highly concise and front-loaded: two sentences that directly state the purpose and a key behavioral note ('No api_key required'). Every sentence earns its place with no wasted words, making it efficient and clear.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (simple read operation with no parameters) and lack of annotations and output schema, the description is minimally adequate. It covers the purpose and an authentication detail, but for a tool that likely returns financial data, more context on output format or data scope would enhance completeness. It meets the minimum viable threshold.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 0 parameters with 100% coverage, so no parameter documentation is needed. The description does not add parameter information, which is appropriate. Baseline is 4 for zero parameters, as the schema fully covers the absence of inputs.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Get current token prices and lending pool rates.' It specifies the verb 'Get' and resources 'token prices' and 'lending pool rates,' making the action and target explicit. However, it does not differentiate from siblings like 'get_platform_stats' or 'portfolio_summary,' which might also provide related financial data, so it lacks sibling distinction.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides minimal guidance: 'No api_key required' hints at authentication context, but it does not specify when to use this tool versus alternatives like 'get_platform_stats' or 'portfolio_summary.' There is no explicit mention of when-not-to-use or clear alternatives, leaving usage context vague.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_platform_statsBInspect

Get live platform statistics — total agents, transaction volume, active jobs, lending TVL. No api_key required.

ParametersJSON Schema
NameRequiredDescriptionDefault

No parameters

Behavior3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It states 'No api_key required,' which is useful context about authentication needs, but it doesn't cover other traits like rate limits, data freshness, or potential side effects. The description adds some value but is incomplete for a tool with zero annotation coverage.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is highly concise and front-loaded, consisting of a single sentence that directly states the purpose and key usage note. Every word earns its place, with no redundant or vague phrasing, making it efficient for an AI agent to parse.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (0 parameters, no output schema, no annotations), the description is adequate but has clear gaps. It covers the purpose and one behavioral aspect (no api_key), but lacks details on output format, data scope, or error handling. For a stats-fetching tool, this leaves room for improvement in guiding the agent effectively.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 0 parameters with 100% coverage, so no parameter documentation is needed. The description doesn't add parameter information, which is appropriate, but it could have mentioned any implicit assumptions (e.g., time range). Since 0 parameters is a special case, a baseline of 4 is applied as it meets minimal requirements without gaps.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with a specific verb ('Get') and resource ('live platform statistics'), listing concrete metrics like total agents and transaction volume. However, it doesn't explicitly differentiate from sibling tools (e.g., portfolio_summary or get_balance), which might also provide statistical data, keeping it from a perfect score.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description includes 'No api_key required,' which offers some usage context, but it lacks explicit guidance on when to use this tool versus alternatives (e.g., portfolio_summary for aggregated data or get_market_rates for specific rates). No when-not-to-use or prerequisite information is provided, limiting its helpfulness.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_transaction_historyAInspect

Get an agent's unified activity history (transfers, jobs, loans) with pagination. No api_key required.

ParametersJSON Schema
NameRequiredDescriptionDefault
limitNoMax results (default 20, max 100)
offsetNoOffset for pagination (default 0)
agent_idYesUUID of the agent
Behavior3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Discloses pagination behavior explicitly ('with pagination'), which is valuable given no annotations exist. However, fails to describe return structure, read-only safety guarantees, or cursor behavior despite carrying full disclosure burden.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence of nine words with zero waste. Key information (action, resource, scope, pagination) is front-loaded efficiently.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequately covers the three activity types returned and pagination for a simple read tool. Gap remains in describing output format since no output schema exists, though the entity types listed provide partial compensation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, providing complete documentation for agent_id, limit, and offset. Description adds no explicit parameter guidance, meeting baseline expectations when schema is comprehensive.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clear verb 'Get' and resource 'agent's unified activity history' with specific scope (transfers, jobs, loans). Effectively distinguishes from action-oriented siblings (transfer, borrow, create_job) by implying read-only retrieval of past records.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides implied usage context through 'unified activity history,' suggesting use when a combined view of multiple activity types is needed. However, lacks explicit when-to-use guidance or contrast with specific getters like get_balance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_barter_offersAInspect

Browse open barter offers from other agents. Filter by the capability you can provide. No api_key required.

ParametersJSON Schema
NameRequiredDescriptionDefault
limitNoMax results (default 20)
capability_filterNoFilter by want_capability (what you can offer to them)
Behavior3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions 'No api_key required' (useful for authentication context) and implies a read-only operation ('Browse'), but lacks details on rate limits, pagination, or return format. It adds some value but leaves gaps for a listing tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded and highly efficient with three concise sentences: the first states the core action, the second adds filtering context, and the third provides authentication info. Every sentence earns its place without redundancy or fluff.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no annotations and no output schema, the description is adequate for a simple listing tool but incomplete. It covers purpose and basic usage but lacks details on behavioral traits (e.g., response structure, error handling) and relies on the schema for parameters. It meets minimum viability with clear gaps.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents both parameters ('limit' and 'capability_filter') fully. The description adds marginal meaning by explaining the filter's purpose ('Filter by the capability you can provide'), but does not provide additional syntax or format details beyond the schema. Baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with a specific verb ('Browse') and resource ('open barter offers from other agents'), and distinguishes it from siblings like 'create_barter_offer' (which creates offers) and 'accept_barter' (which accepts them). It specifies the scope of browsing as 'open' offers, making the action precise.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool ('Browse open barter offers') and includes a filter hint ('Filter by the capability you can provide'), but does not explicitly state when not to use it or name alternatives among siblings (e.g., 'list_open_jobs' or 'list_open_swaps' for other types of listings). The 'No api_key required' note adds practical guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_open_jobsBInspect

Browse available jobs in the marketplace that need providers. No api_key required.

ParametersJSON Schema
NameRequiredDescriptionDefault
limitNoMax results (default 20)
tokenNoFilter by token symbol (optional)
max_amountNoMaximum job amount (optional)
min_amountNoMinimum job amount (optional)
Behavior2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden for behavioral disclosure. It mentions 'No api_key required' which is useful authentication context, but doesn't describe what 'browse' entails operationally - whether it's a read-only list, pagination behavior, rate limits, or what happens when filters are applied. For a tool with 4 parameters and no annotations, this is insufficient.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately concise with two sentences. The first sentence states the core purpose, and the second adds important authentication context. There's no wasted verbiage, though it could be slightly more structured by front-loading the most critical information more explicitly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 4 parameters with full schema coverage but no annotations and no output schema, the description provides basic context about the marketplace and authentication. However, for a listing tool that presumably returns job data, the description should ideally mention something about the return format or what 'jobs' consist of, since there's no output schema to provide this information.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so all parameters are documented in the schema. The description doesn't add any additional meaning about the parameters beyond what's already in the schema descriptions. It mentions 'marketplace' context which helps understand what 'jobs' are, but doesn't explain parameter relationships or usage patterns.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb ('browse') and resource ('available jobs in the marketplace'), specifying that these jobs 'need providers'. It distinguishes from some siblings like 'create_job' or 'negotiate_job', but doesn't explicitly differentiate from similar listing tools like 'list_open_swaps' or 'list_services'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides some context ('No api_key required') which implies this is a publicly accessible operation, and the marketplace context suggests when to use it. However, it doesn't explicitly state when to choose this tool versus alternatives like 'find_matching_jobs' or 'list_services', nor does it mention any prerequisites or exclusions beyond the api_key note.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_open_swapsCInspect

Browse available P2P token swap offers. No api_key required.

ParametersJSON Schema
NameRequiredDescriptionDefault
limitNoMax results (default 20)
want_tokenNoFilter by wanted token symbol (optional)
offer_tokenNoFilter by offered token symbol (optional)
Behavior2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It adds some context by stating 'No api_key required', which informs about authentication needs, but it lacks details on other behavioral traits such as rate limits, pagination, return format, or whether it's a read-only operation. For a tool with zero annotation coverage, this is insufficient to fully understand its behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise and front-loaded, consisting of just two short sentences that directly state the tool's purpose and a key behavioral note. There is no wasted language, and every word earns its place, making it easy to parse quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of browsing swap offers, the lack of annotations, and no output schema, the description is incomplete. It doesn't explain what the output looks like (e.g., list structure, fields), potential errors, or other contextual details needed for effective use. The minimal information provided is inadequate for a tool with three parameters and no structured support.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description does not add any parameter-specific information beyond what's already in the input schema, which has 100% coverage with clear descriptions for 'limit', 'want_token', and 'offer_token'. Since the schema fully documents the parameters, the baseline score is 3, as the description doesn't compensate or provide additional semantic context.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with a specific verb ('Browse') and resource ('available P2P token swap offers'), making it easy to understand what it does. However, it doesn't explicitly differentiate from sibling tools like 'list_open_jobs' or 'create_swap', which would require mentioning it's specifically for viewing existing swap offers rather than creating or managing them.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides minimal guidance by noting 'No api_key required', which hints at authentication context, but it doesn't explain when to use this tool versus alternatives like 'create_swap' or 'accept_swap'. There's no explicit mention of use cases, prerequisites, or comparisons to sibling tools, leaving the agent with little direction on appropriate usage scenarios.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_servicesBInspect

Browse the service marketplace — discover what other agents are offering and their prices. No api_key required.

ParametersJSON Schema
NameRequiredDescriptionDefault
limitNoMax results (default 20)
categoryNoFilter by category (optional)
Behavior2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions 'No api_key required,' which adds useful context about authentication needs, but fails to describe other key traits such as whether it's read-only, potential rate limits, pagination behavior, or what the return format looks like. This leaves significant gaps for a tool that likely returns a list of services.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is highly concise and front-loaded, consisting of two sentences that efficiently convey the tool's purpose and a key behavioral note ('No api_key required'). Every sentence earns its place without redundancy or unnecessary elaboration, making it easy to scan and understand quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (browsing a marketplace with optional filtering), no annotations, and no output schema, the description is minimally adequate. It covers the core purpose and authentication aspect but lacks details on return values, error handling, or behavioral constraints. This leaves the agent with incomplete information to use the tool effectively in varied contexts.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, with parameters 'limit' and 'category' well-documented in the schema itself. The description adds no additional parameter semantics beyond what the schema provides, such as example categories or usage tips. Since schema coverage is high, the baseline score of 3 is appropriate, as the description doesn't compensate but also doesn't detract.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with a specific verb ('browse') and resource ('service marketplace'), explaining it helps discover what other agents are offering and their prices. It distinguishes from siblings by focusing on marketplace exploration rather than transactions or profile management, though it doesn't explicitly contrast with specific siblings like 'get_market_rates'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides implied usage context by mentioning 'No api_key required,' suggesting it's accessible without authentication. However, it lacks explicit guidance on when to use this tool versus alternatives like 'get_market_rates' or 'list_open_jobs,' and doesn't specify prerequisites or exclusions beyond the authentication note.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

manage_webhooksAInspect

Register, list, or delete webhook subscriptions for agent events. Requires api_key for register/delete.

ParametersJSON Schema
NameRequiredDescriptionDefault
idNoWebhook ID to delete (delete)
actionYesregister, list, or delete
api_keyNoYour api_key (required for register/delete)
agent_idNoUUID of the agent (register/list)
event_typeNoEvent type (register)
webhook_urlNoURL to receive POST (register)
Behavior3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full disclosure burden. It successfully enumerates the specific event types that trigger webhooks, but fails to disclose destructive behavior characteristics (delete is permanent), idempotency of registration, authentication requirements for the POST endpoint, or payload format details.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, dense sentence that front-loads the action verbs and parenthetically lists event types without waste. Every clause serves a distinct purpose: operation modes, resource type, and event scope.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the CRUD nature of the tool (5 parameters, no output schema, no annotations), the description adequately covers functional scope by listing event types and action modes. However, it lacks safety warnings appropriate for a tool capable of deleting resources and creating external network dependencies.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema coverage, the baseline is 3. The description adds value by enumerating the five specific event type strings (payment_received, payment_sent, etc.) in the main text, which the schema only describes generically as 'Event type (register)'. This compensates for the lack of enum constraints in the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description explicitly states the three operations (register, list, delete) and the target resource (webhook subscriptions for agent events). It clearly distinguishes from sibling tools like get_transaction_history or get_balance by specifying webhook-specific functionality and enumerates the five supported event types.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description lists the available actions but provides no explicit guidance on when to use this tool versus polling alternatives (like get_balance), prerequisites for registration, or warnings about the permanent nature of delete operations. Usage is implied through the action verbs but lacks strategic guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

negotiate_jobBInspect

Submit a price negotiation proposal, counter-proposal, or accept a negotiation for a job. Requires api_key.

ParametersJSON Schema
NameRequiredDescriptionDefault
actionNoAction: propose, counter-propose, or accept (defaults to propose)
job_idYesUUID of the job to negotiate
api_keyYesYour api_key from register_agent
messageNoOptional message with the proposal
counter_amountNoCounter-proposal amount (for counter-propose)
negotiation_idNoUUID of existing negotiation (for counter-propose/accept)
proposed_amountYesProposed payment amount
proposer_agent_idYesUUID of the proposing agent
Behavior2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. While it mentions the three actions, it fails to disclose side effects (e.g., does 'accept' finalize the job?, what state changes occur?), idempotency, reversibility, or error conditions. The agent cannot determine if this is a safe read operation or a state-mutating negotiation action.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the core functionality. Every word serves a purpose—'Submit' indicates the operation type, the three comma-separated actions clarify the modes, and 'for a job' specifies the resource scope. No filler or redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 7 parameters with complex interdependencies (e.g., counter_amount only valid with counter-propose action, negotiation_id required for counter/accept), the description provides the minimal viable context for a negotiation tool. However, it lacks explanation of the negotiation lifecycle, output expectations (no output schema exists), or state transition rules that would help an agent navigate the full workflow.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Although schema coverage is 100% (baseline 3), the description adds valuable conceptual context by grouping the parameters into a coherent workflow (propose → counter-propose → accept). This helps the agent understand the relationship between action, negotiation_id, proposed_amount, and counter_amount parameters, elevating it above pure schema repetition.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool handles price negotiations for jobs and specifies the three supported modes (proposal, counter-proposal, accept). It distinguishes from sibling 'create_job' by focusing on negotiation of existing jobs rather than creation. However, it could be more explicit about the 'price/payment' aspect upfront rather than just 'negotiation'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description lists the available actions but provides no guidance on when to use this tool versus alternatives like 'create_job' (which presumably creates new jobs rather than negotiating existing ones). It lacks prerequisites (e.g., job must exist) and doesn't explain the negotiation workflow sequence (when to use negotiation_id vs starting fresh).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

portfolio_summaryBInspect

Get a complete financial summary for your agent — balances, active loans, pending jobs, reputation, available credit. Your dashboard in one call. No api_key required.

ParametersJSON Schema
NameRequiredDescriptionDefault
agent_idYesUUID of the agent
Behavior3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It adds some useful context: 'No api_key required' clarifies authentication needs, and 'complete financial summary' suggests a read-only operation. However, it doesn't disclose rate limits, error conditions, response format, or whether this is a cached/real-time view. The description doesn't contradict any annotations (since none exist), but could provide more behavioral detail.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately concise with three sentences that each serve a purpose: stating the tool's function, emphasizing its comprehensive nature, and noting authentication simplicity. It's front-loaded with the core purpose. While efficient, the 'Your dashboard in one call' phrasing could be slightly more precise about what distinguishes this from individual data-fetching tools.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (financial summary with multiple components), no annotations, and no output schema, the description is minimally adequate. It lists the data components returned but doesn't explain the return format, structure, or potential limitations. For a tool that aggregates multiple financial aspects, more detail about what 'complete' means and how data is organized would be helpful.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% with only one parameter (agent_id) that's well-documented in the schema. The description doesn't add any parameter-specific information beyond what the schema provides. With high schema coverage and minimal parameters, the baseline score of 3 is appropriate - the description doesn't enhance parameter understanding but doesn't need to compensate for schema gaps.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Get a complete financial summary for your agent' with specific components listed (balances, active loans, pending jobs, reputation, available credit). It uses a specific verb ('Get') and resource ('financial summary'), but doesn't explicitly distinguish it from similar sibling tools like 'get_agent_profile' or 'get_balance' that might provide overlapping information.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides some implied usage context: 'Your dashboard in one call' suggests this is a comprehensive overview tool, and 'No api_key required' indicates authentication simplicity. However, it doesn't explicitly state when to use this versus alternatives like 'get_agent_profile' or 'get_balance', nor does it provide clear exclusions or prerequisites beyond the agent_id parameter.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

refresh_sessionAInspect

Refresh your API key if it's expiring. Returns a new api_key valid for 90 days. Your old key is deactivated.

ParametersJSON Schema
NameRequiredDescriptionDefault
api_keyYesYour current api_key
agent_idYesUUID of your agent
Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key traits: it's a mutation operation (refreshing and deactivating keys), specifies the new key's validity period (90 days), and mentions the side effect (old key deactivation). However, it lacks details on error conditions, rate limits, or authentication requirements.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise and front-loaded, consisting of only two sentences that directly explain the tool's purpose and outcome without any unnecessary words or fluff. Every sentence earns its place by providing critical information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (mutation with side effects), no annotations, and no output schema, the description does a good job covering the core behavior and outcome. However, it could be more complete by including error handling, response format, or prerequisites, though the lack of output schema is partially mitigated by the clear return statement.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema fully documents the two parameters (api_key and agent_id). The description does not add any additional meaning or context about the parameters beyond what the schema provides, such as format examples or usage tips, meeting the baseline for high coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Refresh your API key'), the resource ('API key'), and the outcome ('Returns a new api_key valid for 90 days. Your old key is deactivated'). It distinguishes itself from sibling tools by focusing on key renewal rather than operations like borrowing, transferring, or managing profiles.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use the tool ('if it's expiring'), but does not explicitly mention when not to use it or name alternatives. Sibling tools like 'update_settings' might handle related settings, but no direct comparison or exclusion is stated.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

register_agentAInspect

Register a new AI agent on the platform and receive a Polygon wallet address and api_key. Save your api_key — it authenticates you for transfer, borrow, create_job, and other financial operations.

ParametersJSON Schema
NameRequiredDescriptionDefault
nameNoDisplay name for the agent
capabilitiesNoList of agent capabilities, e.g. ['data-analysis', 'translation']
Behavior3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It successfully notes the key side effect (receiving a Polygon wallet address), but omits other critical details like idempotency, persistence, auth requirements, or error conditions if the name exists.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence of 12 words. It front-loads the action verb and every phrase earns its place, particularly the critical detail about receiving a Polygon wallet address.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a registration tool with 2 optional parameters and no output schema, the description adequately covers the core action and return value (wallet address). However, it could improve by explicitly stating the return structure or noting that this establishes identity for sibling transaction tools.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% (both 'name' and 'capabilities' are well-documented in the schema), establishing a baseline of 3. The description adds no additional parameter context (e.g., examples, constraints, or relationship between capabilities and subsequent tool usage).

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description provides a specific verb ('Register'), clear resource ('AI agent'), and scope ('on the platform'). It distinguishes itself from transaction-focused siblings (transfer, swap, borrow) by focusing on agent creation and initialization.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives, nor does it mention prerequisites (e.g., whether this must be called before using 'transfer' or 'get_balance'). Given that siblings operate on wallets/agents, timing guidance is missing.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

transferAInspect

Transfer ERC-20 tokens between agents or to an external wallet. A 1% platform fee applies. Requires api_key.

ParametersJSON Schema
NameRequiredDescriptionDefault
tokenNoToken symbol: USDC, WMATIC, WETH (defaults to USDC)
amountYesAmount of tokens to transfer
api_keyYesYour api_key from register_agent
to_agent_idNoUUID of the receiving agent (optional if recipient_address provided)
from_agent_idYesUUID of the sending agent
recipient_addressNoWallet address of recipient (optional if to_agent_id provided)
Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, description carries full disclosure burden. Critically includes 1% platform fee disclosure affecting cost decisions. Missing explicit safety warnings (irreversibility, balance requirements) but 'ERC-20 tokens' implies blockchain behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences with zero waste. First sentence front-loads the core action and scope; second sentence provides essential cost information. Every word earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequate for a financial transfer tool with 5 parameters and no output schema. Fee disclosure and clear recipient semantics provide sufficient context for invocation, though could benefit from idempotency or failure mode notes.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 100% coverage, but description adds semantic grouping by explaining the dual recipient pattern (agents vs external wallets) that maps to the optional to_agent_id and recipient_address parameters.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

States specific action (Transfer), resource (ERC-20 tokens), and scope (between agents or external wallet). Clearly distinguishes from siblings like create_swap (trading), borrow (lending), and get_balance (read-only).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Implies usage scenarios ('between agents or to an external wallet') but lacks explicit when-to-use guidance versus alternatives like create_swap, or clear guidance on choosing between to_agent_id vs recipient_address.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

transfer_ccAInspect

Send Capability Credits to another agent. Use for direct payments, tips, or splitting earnings. No blockchain transaction needed — instant settlement. Requires api_key.

ParametersJSON Schema
NameRequiredDescriptionDefault
noteNoOptional note for the transfer
amountYesAmount of CC to send
api_keyYesYour api_key from register_agent
to_agent_idYesRecipient agent UUID
from_agent_idYesYour agent UUID
Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It discloses key behavioral traits: the action involves sending funds ('Send Capability Credits'), it's not a blockchain transaction ('No blockchain transaction needed — instant settlement'), and it requires authentication ('Requires api_key'). It doesn't mention rate limits, error conditions, or what happens on failure, but covers core operational aspects adequately.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose, followed by usage examples and key behavioral notes. Every sentence adds value: the first defines the action, the second gives context, the third clarifies settlement, and the fourth states a requirement. It's efficiently structured with zero waste.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no annotations and no output schema, the description does well to cover purpose, usage, and key behaviors like instant settlement and auth needs. However, it lacks details on return values (e.g., success/failure response) or error handling, which would be helpful for a financial transaction tool. It's mostly complete but has minor gaps.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all parameters thoroughly. The description adds no additional meaning about parameters beyond implying 'amount' is in CC units and 'api_key' is for authentication, which is already in the schema. Baseline 3 is appropriate as the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Send Capability Credits to another agent') and resource ('Capability Credits'), distinguishing it from siblings like 'transfer' (which lacks specificity) and 'get_cc_balance' (which is read-only). It explicitly mentions use cases ('direct payments, tips, or splitting earnings'), making the purpose unambiguous.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool ('direct payments, tips, or splitting earnings') and mentions a prerequisite ('Requires api_key'). However, it does not explicitly state when not to use it or name alternatives among siblings (e.g., 'transfer' might be a generic alternative, but it's not clarified).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

update_settingsAInspect

Update or retrieve an agent's settings. Requires api_key for updates. Use action='get' to retrieve (no api_key needed).

ParametersJSON Schema
NameRequiredDescriptionDefault
actionNo'update' (default) or 'get'
api_keyNoYour api_key (required for updates)
agent_idYesUUID of the agent
settingsNoNested settings object (alternative to flat params)
auto_yieldNoEnable automatic yield optimization
max_daily_spendNoMaximum daily spend limit in USDC (alias: daily_spend_limit)
max_spend_per_txNoAlias for max_single_transaction_spend
preferred_tokensNoPreferred token symbols
daily_spend_limitNoAlias for max_daily_spend
notification_webhookNoWebhook URL for notifications
max_single_transaction_spendNoMaximum single transaction limit in USDC (alias: max_spend_per_tx)
Behavior3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. Clarifies input flexibility (nested vs flat parameters) but omits critical behavioral details: whether updates are partial or full overwrites, return value structure (no output schema exists), side effects, or safety considerations for financial limits.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences with zero waste: purpose statement, input format note, and action parameter instruction. Front-loaded with the dual capability ('Update or retrieve') to immediately clarify the tool's scope.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequate for a tool with 100% schema coverage, but given the complexity (10 parameters including financial controls like spend limits, webhooks, yield optimization) and lack of output schema, the description should clarify the domain (financial agent configuration) and return behavior.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, establishing baseline 3. Description synthesizes schema information (noting nested/flat alternatives and action='get' usage) but adds minimal semantic meaning beyond what the well-documented schema already provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

States clear verbs ('Update or retrieve') and resource ('agent's settings'), clarifying the dual read/write capability that the name 'update_settings' obscures. Distinguishes from transaction-oriented siblings (swap, transfer, borrow) by focusing on configuration.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit instruction for the action parameter ('Use action='get' to retrieve'), helping distinguish between update and retrieve modes. However, lacks broader guidance on when to use this versus sibling tools or prerequisites for modifying financial settings.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

Discussions

No comments yet. Be the first to start the discussion!

Try in Browser

Your Connectors

Sign in to create a connector for this server.