InsureLink
Server Details
AI agent-to-agent SLA agreements on Base with insurance, reputation, and x402 payments.
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.9/5 across 15 of 15 tools scored. Lowest: 2.9/5.
Most tools have distinct purposes, but simulate_action and simulate_paid_action are very similar and could cause confusion. The descriptions help differentiate, but the overlap is notable.
Tools follow a mostly consistent verb_noun snake_case pattern, but deviations like 'mitigation_receipt' (noun_noun) and 'claim_breach_credit' (verb_noun_noun) introduce minor inconsistency.
15 tools is well-scoped for a blockchain SLA management platform. Each tool covers a distinct aspect of the lifecycle without being excessive.
The tool surface covers key SLA operations (mint, renew, exit, reset, wrap, query, simulate). Minor gaps exist, such as a dedicated tool for fetching individual SLA details by token ID, but core workflows are supported.
Available Tools
21 tools

claim_breach_credit (Grade A)
Verifies the on-chain SLA agreement state for tokenId and, if active and the caller is a party, calls microResetInsurance to mitigate the breach. Use after a drift or expiry alert. Requires x402 payment ($0.005). Returns payment instructions.
| Name | Required | Description | Default |
|---|---|---|---|
| reason | No | Optional human-readable breach reason for audit | |
| tokenId | Yes | SLA token ID to verify and mitigate | |
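The x402 flow here is two-phase: the first response is a payment quote, not the mitigation result. A minimal sketch of the calling side, assuming a hypothetical MCP client exposed as a `callTool` function (the wrapper name and the reason string are illustrative, not part of the server's API):

```typescript
// Hypothetical MCP client signature; substitute whatever client library
// you use to reach the InsureLink server.
type CallTool = (name: string, args: Record<string, unknown>) => Promise<unknown>;

async function mitigateBreach(callTool: CallTool, tokenId: number) {
  // Per the description, the first response is x402 payment instructions
  // ($0.005), not the mitigation result; settle the quote, then retry
  // following the x402 protocol.
  const instructions = await callTool("claim_breach_credit", {
    tokenId,
    reason: "latency drift alert", // optional, kept for the audit trail
  });
  return instructions;
}
```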
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries full burden. It discloses the conditional flow (verification then mitigation), payment requirement ($0.005 in x402), and that it returns payment instructions. It lacks details on error handling or idempotency but covers key behavioral traits.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Four short sentences, no fluff. The first clearly defines the action; the rest provide usage context, the payment requirement, and the return value. Every word earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description states 'Returns payment instructions,' which is helpful but not detailed. It covers purpose, usage, preconditions, payment, and output, but omits error scenarios or detailed output format. Still adequate for a tool with simple parameters and clear flow.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the description adds minimal extra meaning. For tokenId, it repeats 'SLA token ID to verify and mitigate,' echoing the schema. The reason parameter is documented only in the schema ('Optional human-readable breach reason for audit'); the description itself never mentions it. Baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool verifies on-chain SLA state and conditionally calls microResetInsurance to mitigate a breach. This is a specific verb-resource-action description that distinguishes it from sibling tools like micro_reset and mitigation_receipt.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly states 'Use after a drift or expiry alert,' providing clear when-to-use guidance. It also mentions preconditions ('if active and the caller is a party'), but does not specify alternatives or when not to use, which would elevate it to a 5.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
discover_capabilities (Grade A)
Returns the full InsureLink capability manifest including supported actions, tokens, pricing, protection schedule, and framework compatibility.
No parameters.
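Since the manifest advertises supported actions and pricing, a natural pattern is to fetch it once before making any paid call. A sketch with the same assumed `callTool` wrapper; the manifest field names are guesses, since no output schema is published:

```typescript
type CallTool = (name: string, args: Record<string, unknown>) => Promise<unknown>;

async function loadManifest(callTool: CallTool) {
  const manifest = await callTool("discover_capabilities", {});
  // Field names below are illustrative only; the description promises
  // actions, tokens, pricing, protection schedule, and framework
  // compatibility, but does not pin down the JSON shape.
  return manifest as { actions?: string[]; pricing?: Record<string, string> };
}
```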
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It describes the output content (capability manifest with specific details) but lacks behavioral traits such as performance characteristics, error handling, or data freshness. The description does not contradict any annotations, but it misses opportunities to disclose operational aspects beyond the return data.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, well-structured sentence that efficiently lists all key components of the returned manifest. It is front-loaded with the main action and resource, with no redundant or verbose language, making it highly concise and easy to parse.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (metadata retrieval with no inputs) and lack of annotations and output schema, the description is moderately complete. It specifies what information is included in the manifest but does not detail the format, structure, or potential limitations of the returned data, leaving gaps for an agent to infer usage.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The tool has 0 parameters with 100% schema description coverage, so the schema fully documents the absence of inputs. The description adds no parameter-specific information, which is appropriate here. A baseline of 4 is applied as it compensates adequately for the lack of parameters by focusing on output semantics.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb ('Returns') and the specific resource ('full InsureLink capability manifest'), listing concrete components like actions, tokens, pricing, protection schedule, and framework compatibility. It distinguishes itself from siblings by focusing on system metadata rather than operational functions like 'mint_sla' or 'get_activity'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context by specifying what information is returned (e.g., supported actions, pricing), suggesting it should be used to understand system capabilities before invoking other tools. However, it does not explicitly state when to use it versus alternatives or provide exclusion criteria, leaving some ambiguity about its priority in workflows.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
early_exit (Grade A)
Exits an SLA early with protection adjustment. Requires x402 payment ($0.005). Returns payment instructions.
| Name | Required | Description | Default |
|---|---|---|---|
| tokenId | Yes | SLA token ID | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden and discloses key behavioral traits: it's a mutation (exiting), requires payment, and returns payment instructions. It adds value beyond the schema by explaining costs and output behavior, though it could detail more about the protection adjustment or error conditions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized with three concise sentences that are front-loaded: it states the action, cost, and return value efficiently, with no wasted words or redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (mutation with payment), no annotations, and no output schema, the description is fairly complete by covering purpose, cost, and output. However, it lacks details on the protection adjustment mechanism or potential side effects, leaving some gaps for a mutation tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema description coverage is 100%, so the schema already documents the tokenId parameter. The description does not add meaning beyond the schema, such as explaining what tokenId represents or its format, meeting the baseline for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Exits an SLA early') and specifies the mechanism ('with protection adjustment'), distinguishing it from siblings like renew_sla or mint_sla. It uses precise verbs and identifies the resource (SLA) effectively.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use this tool ('Exits an SLA early') and implies a financial prerequisite ('Requires x402 payment'), but does not explicitly state when not to use it or name alternatives among siblings like renew_sla or micro_reset.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_activity (Grade C)
Returns recent platform transactions.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Max transactions (max 200) | 50 |
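Because the only documented constraints are a default of 50 and a max of 200, a defensive caller can clamp the value client-side. A small sketch with the assumed `callTool` wrapper:

```typescript
type CallTool = (name: string, args: Record<string, unknown>) => Promise<unknown>;

// Clamp to the documented bounds (default 50, max 200) so an out-of-range
// caller value never reaches the server.
async function recentActivity(callTool: CallTool, limit = 50) {
  const clamped = Math.min(Math.max(1, Math.floor(limit)), 200);
  return callTool("get_activity", { limit: clamped });
}
```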
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It states the tool returns data but doesn't specify what 'recent' means (e.g., time range), whether the data is paginated, if authentication is required, or any rate limits. This leaves significant behavioral gaps for a tool that likely accesses transactional data.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence with no wasted words. It's appropriately sized for a simple tool and front-loads the core purpose immediately, making it highly concise and well-structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the lack of annotations and output schema, the description is incomplete. It doesn't explain what 'platform transactions' entail, the format of the returned data, or any behavioral constraints. For a tool that returns data without structured output documentation, this leaves too many contextual gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema description coverage is 100%, with the 'limit' parameter fully documented in the schema. The description adds no additional parameter information beyond what the schema provides, so it meets the baseline score of 3 for adequate but not additive parameter semantics.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description 'Returns recent platform transactions' clearly states the verb ('returns') and resource ('recent platform transactions'), making the tool's purpose understandable. However, it doesn't differentiate this tool from potential sibling tools that might also return transaction data, so it doesn't reach the highest score.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It doesn't mention any context, prerequisites, or exclusions, nor does it reference any sibling tools for comparison. This leaves the agent with minimal usage direction.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_attestation (Grade A)
Returns a signed, portable reputation credential (EIP-191 personal_sign) for an agent wallet. Includes score, tier, SLA summary, breach rate, signer address, and 30-day expiry. Issued only to wallets with at least one SLA on InsureLink (Bronze+). Free.
| Name | Required | Description | Default |
|---|---|---|---|
| wallet | Yes | Wallet address (0x...) | |
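Since the credential is an EIP-191 personal_sign payload, any standard library can check that the signature recovers to the stated signer. A sketch using viem's `verifyMessage`; the credential field names (`payload`, `signature`, `signer`) are assumptions, as only the contents (score, tier, SLA summary, breach rate, signer address, expiry) are documented:

```typescript
import { verifyMessage } from "viem";

// Credential field names are assumed; only the contents are documented.
interface Attestation {
  payload: string;        // the exact string that was personal_sign-ed
  signature: `0x${string}`;
  signer: `0x${string}`;  // documented: the signer address is included
}

async function checkAttestation(cred: Attestation): Promise<boolean> {
  // EIP-191 verification: prefix the message, hash, recover, compare.
  return verifyMessage({
    address: cred.signer,
    message: cred.payload,
    signature: cred.signature,
  });
}
```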
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Without annotations, the description provides key behavioral traits (free, 30-day expiry, the Bronze+ SLA requirement). But it omits potential error cases (e.g., a wallet without any SLA) and doesn't detail authentication needs.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Four short sentences efficiently pack the key info: purpose, contents, conditions, and cost. No redundant words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simple input (one parameter) and no output schema, the description covers return contents and issuance conditions adequately. It is slightly weakened by the missing error-scenario details.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% for the single parameter 'wallet' described as 'Wallet address (0x...)'. Description adds context about what the parameter is used for but no extra format guidance.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description explicitly states it 'Returns a signed, portable reputation credential (EIP-191 personal_sign) for an agent wallet' and lists components (score, tier, SLA summary, etc.). It distinguishes from sibling 'verify_attestation' by focusing on retrieval.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Description specifies when to use (wallet with at least one SLA on InsureLink) and mentions it's free. However, it doesn't explicitly state when not to use or name alternatives, though 'verify_attestation' is a sibling.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_leaderboard (Grade A)
Returns the top 25 most reliable agents ranked by reputation score.
No parameters.
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full burden. It states the tool returns a ranked list but lacks details on format (e.g., structured data, pagination), freshness of data, rate limits, or authentication needs. For a read operation with zero annotation coverage, this leaves significant behavioral gaps.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the core functionality ('Returns the top 25 most reliable agents') with essential qualifiers ('ranked by reputation score'). Zero wasted words, perfectly sized for a no-parameter tool.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no annotations, no output schema, and 0 parameters, the description adequately covers the basic purpose but lacks details on return format, data freshness, or error handling. For a simple read tool, it's minimally viable but incomplete for robust agent use without additional context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The tool has 0 parameters with 100% schema description coverage, so the schema fully documents the absence of inputs. The description appropriately adds no parameter information, maintaining focus on the tool's purpose without redundancy. Baseline for 0 parameters is 4.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Returns') and resource ('top 25 most reliable agents ranked by reputation score'), distinguishing it from siblings like get_reputation (likely individual scores) or get_activity (different metric). It precisely defines scope and ranking criteria.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for retrieving top-ranked agents by reputation, but provides no explicit guidance on when to use this versus alternatives like get_reputation (for individual scores) or get_activity (for activity metrics). Usage context is inferred rather than stated.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_reputation (Grade B)
Returns reputation score, tier, stats, and flags for a wallet address.
| Name | Required | Description | Default |
|---|---|---|---|
| wallet | Yes | Wallet address (0x...) | |
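One obvious workflow is to gate an expensive operation like mint_sla on the counterparty's reputation. A sketch under the same assumed `callTool` wrapper; the response field names and the threshold are illustrative, since no output schema is given:

```typescript
type CallTool = (name: string, args: Record<string, unknown>) => Promise<unknown>;

async function counterpartyLooksSafe(callTool: CallTool, wallet: `0x${string}`) {
  const rep = (await callTool("get_reputation", { wallet })) as {
    score?: number;   // assumed field names; no output schema is published
    flags?: string[];
  };
  // The threshold of 70 is an arbitrary example policy, not a platform value.
  return (rep.score ?? 0) >= 70 && (rep.flags ?? []).length === 0;
}
```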
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It states the tool returns data, implying it is a read-only operation, but does not specify any behavioral traits such as rate limits, authentication needs, error handling, or what 'flags' might entail. For a tool with no annotation coverage, this is a significant gap in transparency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that directly states the tool's function without any unnecessary words. It is front-loaded with the core purpose and avoids redundancy, making it highly concise and well-structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (a single-parameter read operation) and the lack of annotations and output schema, the description is minimally complete. It covers what the tool returns but does not address behavioral aspects or usage context. For a simple tool, this is adequate but leaves gaps, warranting a middle score.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, with the 'wallet' parameter clearly documented as 'Wallet address (0x...)'. The description adds minimal value beyond this by implying the parameter is used to fetch reputation data, but it does not provide additional semantics like format constraints or examples. Given the high schema coverage, a baseline score of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Returns reputation score, tier, stats, and flags for a wallet address.' It specifies the verb ('Returns') and the resource/scope ('reputation score, tier, stats, and flags'), making it easy to understand what the tool does. However, it does not explicitly differentiate from sibling tools like 'get_activity' or 'get_leaderboard', which might also retrieve wallet-related data, so it falls short of a perfect score.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It does not mention any context, prerequisites, or exclusions, such as when to choose 'get_reputation' over 'get_activity' or other sibling tools. This lack of usage instructions leaves the agent without clear direction, warranting a minimal score.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_sla_history (Grade C)
Returns the full SLA history for a wallet address.
| Name | Required | Description | Default |
|---|---|---|---|
| wallet | Yes | Wallet address (0x...) | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It states the tool returns history but doesn't cover critical aspects like whether this is a read-only operation, potential rate limits, authentication needs, error conditions, or the format of the returned history (e.g., list of events, timestamps). This leaves significant gaps for a tool that retrieves historical data.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that directly states the tool's purpose without any wasted words. It's appropriately sized and front-loaded, making it easy to parse quickly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity of retrieving historical data, no annotations, and no output schema, the description is incomplete. It doesn't explain what 'SLA history' entails (e.g., time range, event types), the return format, or behavioral constraints, leaving the agent with insufficient context for reliable use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, with the 'wallet' parameter clearly documented as 'Wallet address (0x...)'. The description adds no additional parameter semantics beyond this, so it meets the baseline score of 3 where the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Returns' and the resource 'full SLA history for a wallet address', making the purpose specific and understandable. However, it doesn't differentiate this tool from potential sibling tools like 'get_activity' or 'get_reputation' that might also retrieve wallet-related data, preventing a perfect score.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites (e.g., whether the wallet must have an SLA), exclusions, or comparisons to siblings like 'get_activity' or 'get_reputation', leaving usage context unclear.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
insurelink_charge (Grade A)
Lovable-friendly entry point: same x402 quote → pay → call loop as pay_then_call, but the EIP-191 signed receipt envelope is tagged source: "lovable" so downstream verifiers and partner-revenue accounting can attribute the settlement to a Lovable-shipped agent. Use this from any Lovable agent that needs to charge or pay another agent on Base via USDC/USDT/DAI/EURC. Returns { paid, upstream_status, response, signed_context: { envelope (with source), envelope_json, envelope_sha256, signature, signer, scheme: 'EIP-191' } }. Pass idempotency_key to safely retry without double-settling.
| Name | Required | Description | Default |
|---|---|---|---|
| url | Yes | Target paid endpoint. | |
| body | No | Raw request body. JSON-encode before passing. | |
| mode | No | `quote_only` returns the 402 quote without settling. | `pay_and_call` |
| method | No | HTTP method. | GET |
| headers | No | Extra request headers (Content-Type, Authorization, etc.). | |
| purpose | No | Free-form audit string included in the signed receipt. | |
| agent_wallet | No | Calling agent wallet (logged in the signed receipt for downstream attribution). | |
| idempotency_key | No | Client-supplied idempotency key. If provided, repeated calls with the same key return the cached response and never settle a second payment. A subsequent call with the same key but a different request payload is rejected with `idempotency_key_conflict`. Keys are retained for 24h. | |
| max_amount_usdc | No | Refuse to settle if the 402 quote exceeds this USDC amount. | 10 |
| x_payment_header | No | Base64-encoded X-PAYMENT header (caller pre-builds settlement payload). Required unless mode='quote_only'. | |
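The documented idempotency_key semantics (cached response, conflict rejection, 24h retention) make a simple retry loop safe. A sketch, again assuming a hypothetical `callTool` wrapper; the pre-built X-PAYMENT header is passed in because pay mode requires it:

```typescript
import { randomUUID } from "node:crypto";

type CallTool = (name: string, args: Record<string, unknown>) => Promise<unknown>;

async function chargeWithRetry(
  callTool: CallTool,
  url: string,
  paymentHeader: string, // base64 X-PAYMENT payload, required outside quote_only
  attempts = 3,
) {
  const idempotency_key = randomUUID(); // one key reused across all attempts
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      // Retries with the same key return the cached signed receipt and never
      // settle a second payment; a changed payload under the same key would
      // be rejected with idempotency_key_conflict.
      return await callTool("insurelink_charge", {
        url,
        method: "POST",
        x_payment_header: paymentHeader,
        idempotency_key,
        max_amount_usdc: 1, // example budget cap, not a platform default
      });
    } catch (err) {
      lastError = err;
    }
  }
  throw lastError;
}
```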
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so the description carries full burden. It details that the receipt is EIP-191 signed with a 'lovable' source tag, explains idempotency key behavior (caching, rejection on conflict, 24h retention), mentions max_amount refusal, and outlines the return structure including signed_context fields. This provides comprehensive behavioral insight.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is detailed but well-structured: it starts with the primary purpose, then explains the loop and attribution, then usage context, then return format, then the idempotency hint. It front-loads key information, though it could be slightly more concise to improve scannability.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity (10 parameters, no output schema), the description covers all essential aspects: workflow, signed receipt structure, idempotency behavior, amount limits, and usage constraints. The return structure is thoroughly explained, compensating for the lack of output schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with each parameter having a description. The description adds value by explaining how parameters relate to the workflow (e.g., mode='quote_only' avoids settlement, x_payment_header required unless quote_only) and elaborates on idempotency_key and max_amount_usdc behavior beyond schema descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it is a 'Lovable-friendly entry point' for the same quote-pay-call loop as 'pay_then_call', with a special tag for attribution. It specifies the input/output workflow and distinguishes it from the sibling by explicitly mentioning the alternative 'pay_then_call' and the source tag.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states 'Use this from any Lovable agent that needs to charge or pay another agent on Base via USDC/USDT/DAI/EURC.' This gives a clear context of use. However, it does not explicitly state when not to use or provide a full exclusion list, though the sibling 'pay_then_call' is implied as an alternative for non-Lovable agents.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
micro_reset (Grade B)
Resets the insurance window for an SLA. Requires x402 payment ($0.001). Returns payment instructions.
| Name | Required | Description | Default |
|---|---|---|---|
| tokenId | Yes | SLA token ID | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It adds useful context beyond the schema by specifying a payment requirement ($0.001) and that it returns payment instructions, which are behavioral traits. However, it doesn't cover other aspects like potential side effects, error conditions, or authentication needs, leaving gaps in transparency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized with three sentences that are front-loaded with the main action and include essential details like payment and return values. There's minimal waste, and overall it's efficient.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has no annotations, no output schema, and a simple input schema, the description is moderately complete. It covers the action, payment requirement, and return type, but lacks details on output format, error handling, or deeper behavioral context, which would be beneficial for full understanding.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema description coverage is 100%, so the schema already documents the 'tokenId' parameter as 'SLA token ID'. The description doesn't add any further meaning or details about this parameter beyond what the schema provides, meeting the baseline for high schema coverage without extra value.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Resets the insurance window') and resource ('for an SLA'), making the purpose specific and understandable. However, it doesn't explicitly distinguish this tool from sibling tools like 'renew_sla' or 'wrap_usdc', which might have overlapping domains, so it doesn't achieve full differentiation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage when resetting an SLA's insurance window is needed, and mentions a payment requirement that could serve as a prerequisite. However, it doesn't provide explicit guidance on when to use this versus alternatives like 'renew_sla' or other siblings, leaving the context somewhat implied rather than clearly defined.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
mint_sla (Grade A)
Creates a new ERC-721 SLA agreement NFT. Requires x402 payment ($0.01). Returns payment instructions.
| Name | Required | Description | Default |
|---|---|---|---|
| duration | Yes | Duration in years (5, 7, or 10) | |
| bondAmount | Yes | Bond amount in iUSDC base units | |
| counterparty | Yes | Counterparty wallet address | |
| coverageLevel | No | Insurance coverage level (0-3) | |
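Because bondAmount is expressed in iUSDC base units, the caller has to convert from dollar amounts. A sketch assuming iUSDC uses 6 decimals like USDC (worth confirming against discover_capabilities before relying on this conversion):

```typescript
type CallTool = (name: string, args: Record<string, unknown>) => Promise<unknown>;

// Assumes 6 decimals (1 iUSDC = 1_000_000 base units), like USDC.
const toBaseUnits = (usd: number, decimals = 6): string =>
  BigInt(Math.round(usd * 10 ** decimals)).toString();

async function mintExampleSla(callTool: CallTool, counterparty: `0x${string}`) {
  // Per the description, the first response is x402 payment instructions ($0.01).
  return callTool("mint_sla", {
    duration: 5,                  // must be 5, 7, or 10 (years)
    bondAmount: toBaseUnits(250), // $250 bond -> "250000000" base units
    counterparty,
    coverageLevel: 2,             // optional, 0-3
  });
}
```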
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden. It discloses key behavioral traits: creation action (implies mutation), payment requirement, and return type (payment instructions). However, it lacks details about permissions, rate limits, error conditions, or what happens after payment. The description adds value but doesn't fully compensate for the absence of annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise, with three sentences that each serve a distinct purpose: the first states the core action, the second the payment requirement, the third the return. No wasted words, front-loaded with the main purpose. Perfectly sized for this tool's complexity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a mutation tool with no annotations and no output schema, the description provides basic completeness: purpose, payment requirement, and return type. However, it lacks details about the SLA creation process, what the NFT represents, how payment instructions are used, or error handling. Given the complexity of blockchain/SLA operations, more context would be helpful.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents all parameters thoroughly. The description doesn't add any parameter-specific information beyond what's in the schema. It mentions payment requirement which relates to the tool's behavior but doesn't explain parameter meanings, interactions, or constraints. Baseline 3 is appropriate when schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Creates a new ERC-721 SLA agreement NFT'), identifies the resource (SLA agreement NFT), and distinguishes it from sibling tools like 'renew_sla' or 'get_sla_history' by specifying it's a creation operation rather than renewal or querying. The mention of ERC-721 standard and payment requirement adds technical specificity.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context by stating 'Requires x402 payment ($0.01)', which suggests this tool should be used when ready to pay for creating an SLA. However, it doesn't explicitly state when to use this versus alternatives like 'renew_sla' or provide clear exclusions. The guidance is present but not comprehensive.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
mitigation_receipt (Grade A)
Verifies a claim_breach_credit transaction by tx hash and returns a canonical mitigation receipt suitable for insurer/registry attestations. Reads from existing on-chain settlement records — no new state. Requires x402 payment ($0.001). Returns payment instructions.
| Name | Required | Description | Default |
|---|---|---|---|
| txHash | Yes | Transaction hash of the breach-mitigation call | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, but the description discloses that the tool reads from existing on-chain records (no new state), requires payment, and returns instructions. This gives good transparency for a read-only verification tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is four concise sentences with no fluff. The main action is front-loaded, and every sentence adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with one parameter and no output schema, the description fully covers what the tool does, its constraints (payment), and return (instructions). No gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The single parameter txHash is described in the schema with pattern and description. The description adds meaning by stating the tool verifies that hash and returns a receipt, going beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool verifies a claim_breach_credit transaction by tx hash and returns a canonical mitigation receipt. It distinguishes from siblings like claim_breach_credit by specifying verification vs. creation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides usage context: requires x402 payment ($0.001) and returns payment instructions. It implies use after a claim_breach_credit transaction, though it does not explicitly mention when not to use or alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
network_lookup (Grade A)
Discovers bonded agent-network providers (clean-IP proxy egress, private mempool relay) listed on InsureLink. Returns provider wallets, x402 payment URLs, prices, and Smart Wallet / EOA compatibility hints. Free preview via this tool; full directory requires x402 payment ($0.001) at the paid endpoint.
| Name | Required | Description | Default |
|---|---|---|---|
| template | No | Optional template slug filter (e.g. proxy-egress-provider, private-mempool-relay) | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden. It discloses that the tool returns specific data and is free but limited (preview). It does not mention authentication or rate limits, but for a read-only preview tool, these are minor omissions. No contradiction with annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences that efficiently convey purpose, return values, and the free/paid distinction. No unnecessary words, and key information is front-loaded. Every sentence earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (one optional parameter, no output schema) and lack of annotations, the description adequately covers the purpose, returned data, and limitations. It could mention whether authentication is required, but overall it is comprehensive enough for agent invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%—the single parameter 'template' is already described with examples in the schema. The tool description reiterates the template filter and examples, adding no new semantic information beyond what the schema provides. Baseline of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool discovers bonded agent-network providers (clean-IP proxy egress, private mempool relay) on InsureLink, and specifies the data returned (wallets, payment URLs, prices, compatibility hints). It is specific and distinguishes from siblings, none of which perform a similar lookup.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description indicates this tool offers a free preview, with full directory requiring payment at a paid endpoint. It also mentions the optional template filter for narrowing results. However, it does not explicitly state when not to use it or provide direct comparisons to sibling tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
pay_then_call (Grade A)
Wrap any paid HTTP endpoint in the x402 pay-then-call loop and return the downstream response plus an EIP-191 signed receipt proving InsureLink mediated the call. The signed context envelope contains: { request: {url, method, body_sha256}, payment: {asset, amount, network, settlement_id}, response: {status, sha256, content_type, length}, observed_at, mediator: 'insurelink' } and a signature recoverable to InsureLink's deployer key. Two modes: (a) caller_supplies_payment — pass x_payment_header (base64) so the upstream call is settled by your wallet; (b) mode='quote_only' — return only the 402 quote without paying. Pass idempotency_key to safely retry: identical requests return the cached signed receipt without settling a second payment, and conflicting payloads under the same key are rejected. Use this to give an LLM verifiable provenance for any paid agent call (banking, market data, gov endpoints).
| Name | Required | Description | Default |
|---|---|---|---|
| url | Yes | Target paid endpoint. | |
| body | No | Raw request body. JSON-encode before passing. | |
| mode | No | `quote_only` returns the 402 quote without settling. | `pay_and_call` |
| method | No | HTTP method. | GET |
| headers | No | Extra request headers (Content-Type, Authorization, etc.). | |
| purpose | No | Free-form audit string included in the signed receipt. | |
| agent_wallet | No | Calling agent wallet (logged in the signed receipt for downstream attribution). | |
| idempotency_key | No | Client-supplied idempotency key. If provided, repeated calls with the same key return the cached response and never settle a second payment. A subsequent call with the same key but a different request payload is rejected with `idempotency_key_conflict`. Keys are retained for 24h. | |
| max_amount_usdc | No | Refuse to settle if the 402 quote exceeds this USDC amount. | 10 |
| x_payment_header | No | Base64-encoded X-PAYMENT header (caller pre-builds settlement payload). Required unless mode='quote_only'. | |
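The description gives enough of the receipt shape to verify it locally: re-hash envelope_json and compare it with envelope_sha256 before trusting the receipt. A sketch under the usual assumed `callTool` wrapper; the two-step quote-then-pay sequence mirrors the documented modes, and whether envelope_sha256 is hex-prefixed is not documented, so the comparison below is an assumption:

```typescript
import { createHash } from "node:crypto";

type CallTool = (name: string, args: Record<string, unknown>) => Promise<unknown>;

async function quotedPaidCall(callTool: CallTool, url: string, paymentHeader: string) {
  // Step 1: fetch the 402 quote without settling anything.
  const quote = await callTool("pay_then_call", { url, mode: "quote_only" });

  // Step 2 (if the quote is acceptable): settle and call, with a spend ceiling.
  const result = (await callTool("pay_then_call", {
    url,
    x_payment_header: paymentHeader,
    max_amount_usdc: 5, // example ceiling; the tool default is 10
  })) as { signed_context: { envelope_json: string; envelope_sha256: string } };

  // Integrity check: the receipt hash must match the envelope it signs over.
  const digest = createHash("sha256")
    .update(result.signed_context.envelope_json)
    .digest("hex");
  if (digest !== result.signed_context.envelope_sha256) {
    throw new Error("receipt envelope hash mismatch");
  }
  return { quote, result };
}
```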
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so the description carries full burden. It covers the two modes, the signed receipt contents, and the max_amount_usdc safeguard. However, it does not disclose failure behaviors, error responses, or authentication details beyond x_payment_header.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured, split into modes and signed envelope details. It is concise yet informative, with no redundant sentences.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a complex tool with 10 parameters and no output schema, the description explains the purpose, modes, and signed receipt structure. It could be more explicit about the return format and error handling, but it covers the essential workflow.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents each parameter. The description adds value by explaining the overall flow and the two modes but does not significantly enhance parameter-level meaning beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool wraps paid HTTP endpoints in a pay-then-call loop and returns response plus a signed receipt. It differentiates from sibling tools like simulate_paid_action by emphasizing actual payment and verifiable provenance.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explains when to use (for verifiable paid agent calls) and describes two modes (pay_and_call and quote_only). It lacks explicit 'when not to use' or direct sibling comparison, but the context is clear.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
renew_sla (Grade A)
Renews an existing SLA agreement. Requires x402 payment ($0.005). Returns payment instructions.
| Name | Required | Description | Default |
|---|---|---|---|
| tokenId | Yes | SLA token ID | |
| duration | No | Renewal duration (5, 7, or 10 years) | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden and does well by disclosing key behavioral traits: it's a mutation operation (implied by 'Renews'), requires payment ($0.005), and returns payment instructions. It doesn't cover rate limits, error conditions, or authentication needs, but provides essential context beyond basic purpose.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is perfectly concise and front-loaded: three sentences that each earn their place by stating the action, cost, and return value. There's zero wasted text, and information is presented in logical order (purpose → requirement → outcome).
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no annotations and no output schema, the description provides basic completeness for a mutation tool with payment: it covers purpose, cost, and return type. However, it lacks details on error handling, what 'renew' actually does to the SLA, or format of payment instructions, leaving gaps for an agent to operate safely.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents both parameters thoroughly. The description adds no additional parameter semantics beyond what's in the schema (e.g., it doesn't explain 'tokenId' or 'duration' further). This meets the baseline of 3 for high schema coverage without extra value.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Renews') and resource ('an existing SLA agreement'), making the purpose immediately understandable. It distinguishes from siblings like 'mint_sla' (creation) and 'get_sla_history' (read-only). However, it doesn't specify what 'renew' entails operationally beyond payment, leaving some ambiguity about the outcome.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage when an SLA needs renewal and payment is available, but provides no explicit guidance on when to use this vs. alternatives like 'mint_sla' for new agreements. It mentions a prerequisite ('Requires x402 payment') which gives some context, but lacks clear when/when-not scenarios or comparison to sibling tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
simulate_action (Grade A)
Free dry-run for any paid action (mint_sla, renew_sla, wrap_usdc, micro_reset, early_exit). Validates params, x402 header shape, and simulates the contract call against Base. Never broadcasts. Returns { simulated: true, would_succeed, revertReason? } or a precise error. Use this before calling any paid tool.
| Name | Required | Description | Default |
|---|---|---|---|
| action | Yes | Paid action name (kebab-case) | |
| params | Yes | Same params you would send to the paid tool |
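A pre-flight guard built on this tool might look like the sketch below, assuming a generic `callTool` helper as a stand-in for your MCP client's invocation method (a placeholder, not part of this server) and result field types inferred from the description.

```typescript
// Result shape as quoted in the description; the field types are assumptions.
interface SimulateActionResult {
  simulated: true;
  would_succeed: boolean;
  revertReason?: string; // present only when the simulated call would revert
}

// Hypothetical tool-invocation helper; substitute your MCP client's method.
declare function callTool(name: string, args: unknown): Promise<unknown>;

// Dry-run a paid action and abort before spending an x402 settlement
// on a call that would revert.
async function guardPaidCall(
  action: string,
  params: Record<string, unknown>,
): Promise<void> {
  const result = (await callTool("simulate_action", {
    action,
    params,
  })) as SimulateActionResult;
  if (!result.would_succeed) {
    throw new Error(`Dry-run failed: ${result.revertReason ?? "unknown reason"}`);
  }
}
```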
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description fills the gap completely: it states that the tool never broadcasts, validates params and the x402 header shape, simulates the contract call, and returns specific JSON fields.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Five short sentences with no fluff, front-loaded with the key concept 'dry-run'; every sentence adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with two params and no output schema, the description explains the return format, error behavior, and usage context; an agent can use the tool confidently without additional information.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% (both parameters are described in the schema). The description lists the allowed actions and clarifies that 'params' mirror the paid tool's, adding slight context beyond the schema, though it lists the action names in snake_case while the parameter calls for kebab-case, a mismatch an agent must reconcile.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'dry-run' for paid actions, lists specific action names, and explains that it simulates calls without broadcasting, distinguishing it from the sibling paid tools as their test counterpart.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly says 'Use this before calling any paid tool', providing clear when-to-use guidance. Implies alternatives are the actual paid tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
simulate_paid_action
FREE dry-run for any paid action. Returns a deterministic shape: { would_succeed, exact_cost_usd (LIVE from /agent-pricing-oracle), gas_estimate, contract_call, reputation_delta, expected_receipt_schema, validation_errors[], pricing: { source, expected_value_usd, roi_ratio, recommended_priority, coupon? } }. Never broadcasts, never charges. Use BEFORE any paid tool to avoid wasted x402 settlements. Supported actions: wrap, mint-sla, renew, micro-reset, early-exit.
| Name | Required | Description | Default |
|---|---|---|---|
| action | Yes | Paid action name (kebab-case) | |
| params | Yes | Same params you would send to the paid tool. Required keys vary by action — see expected_receipt_schema. Tip: include `wallet`, `bond_amount`, `duration_years`, `amount_usdc` so /agent-pricing-oracle can return wallet-specific discounts and ROI. |
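Because no output schema is published, the TypeScript sketch below spells out the documented return shape; the field types are inferred from the names in the description and are assumptions, not a published contract.

```typescript
// Field names come from the description; the types are inferred assumptions.
interface SimulatePaidActionResult {
  would_succeed: boolean;
  exact_cost_usd: number;          // quoted LIVE from /agent-pricing-oracle
  gas_estimate: number;
  contract_call: string;
  reputation_delta: number;
  expected_receipt_schema: unknown;
  validation_errors: string[];
  pricing: {
    source: string;                // e.g. "/agent-pricing-oracle"
    expected_value_usd: number;
    roi_ratio: number;
    recommended_priority: string;
    coupon?: string;               // optional, per the trailing "?" in the description
  };
}
```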
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, but the description fully discloses behavior: it's a dry-run with no broadcast or charge, returns a specific structure, and uses a live pricing oracle. The description is transparent about limitations and supported actions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is dense but well-structured: starts with the core purpose, then output shape, then usage advice. It is not overly long given the complexity, but a slight trim could improve conciseness.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description provides a detailed return structure and explains the supported actions. It covers the key aspects an agent needs to use the tool correctly, though it omits error behavior and how unsupported actions are handled.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, and the description adds value by explaining that params should mirror paid tool parameters and giving tips (e.g., include 'wallet', 'bond_amount'). This goes beyond the schema, justifying a score above baseline.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it's a 'FREE dry-run for any paid action' and lists the returned deterministic shape. It distinguishes from siblings by specifying 'Use BEFORE any paid tool' and enumerating supported actions (wrap, mint-sla, etc.).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicit guidance: 'Use BEFORE any paid tool to avoid wasted x402 settlements.' Also clarifies 'Never broadcasts, never charges,' so the agent knows when to invoke this tool.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
simulate_paid_action_batch
FREE batch dry-run. Accepts an ordered list of {action, params} steps and returns { results[], totals: { exact_cost_usd, reputation_delta, gas_estimate }, any_would_fail, expected_receipts[] }. Never broadcasts, never charges. Use to plan and price multi-step flows (e.g. wrap → mint-sla → renew) before executing.
| Name | Required | Description | Default |
|---|---|---|---|
| steps | Yes | Ordered list of paid actions to simulate (max 20). | |
| stop_on_first_failure | No | If true, stop simulating after the first step whose would_succeed=false; if false, simulate all steps. | false |
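Pricing the multi-step flow the description mentions (wrap → mint-sla → renew) might look like the sketch below; the `callTool` helper and the step params are illustrative placeholders, and the result types are inferred from the description.

```typescript
// Result shape as quoted in the description; the types are assumptions.
interface BatchSimulationResult {
  results: unknown[];
  totals: { exact_cost_usd: number; reputation_delta: number; gas_estimate: number };
  any_would_fail: boolean;
  expected_receipts: unknown[];
}

// Hypothetical tool-invocation helper; substitute your MCP client's method.
declare function callTool(name: string, args: unknown): Promise<unknown>;

// Price a wrap -> mint-sla -> renew flow before broadcasting anything.
async function priceFlow(): Promise<BatchSimulationResult> {
  return (await callTool("simulate_paid_action_batch", {
    steps: [
      { action: "wrap", params: { amount: "1000000" } }, // 1 USDC in base units (placeholder)
      { action: "mint-sla", params: {} },                // illustrative; real params vary by action
      { action: "renew", params: {} },                   // illustrative; real params vary by action
    ],
    stop_on_first_failure: true, // stop at the first step whose would_succeed=false
  })) as BatchSimulationResult;
}
```

Setting `stop_on_first_failure: true` trades complete totals for an early exit; the default (false) simulates every step so `totals` reflects the whole flow.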
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries the full burden and discloses the key behavior: it is a free dry-run with no side effects, it documents the return structure, and it explains failure handling. Details like rate limits or auth are absent but are not needed for a simulation tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Four short sentences, front-loaded with the key purpose; every word adds value, with no fluff.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description covers the inputs (steps, stop_on_first_failure), the outputs (results, totals, any_would_fail), and the use case. Despite the lack of an output schema, the return structure is well described, so the tool's complexity is addressed completely.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so the baseline is 3. The description adds context such as 'ordered list' and example steps but doesn't significantly enhance understanding beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states that it is a FREE batch dry-run for simulating multi-step paid actions, differentiating it from the sibling simulate_action/simulate_paid_action tools by emphasizing the batch and planning use case.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly says 'Never broadcasts, never charges. Use to plan and price multi-step flows before executing.' This provides clear when-to-use guidance and contrasts with actual execution tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
verify_attestation
Verifies an InsureLink reputation attestation for a wallet. Re-derives the signed payload server-side, recovers the EIP-191 signer, and returns { valid, signer, recovered, expires_at, checks }. Free.
| Name | Required | Description | Default |
|---|---|---|---|
| wallet | Yes | Wallet address (0x...) |
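For context on what the server reports as `recovered`, the sketch below shows the client-side equivalent of an EIP-191 recovery check using ethers v6; the payload format the server re-derives is not published here, so `payload` is a placeholder assumption.

```typescript
import { verifyMessage } from "ethers"; // ethers v6

// EIP-191 personal_sign recovery: derive the signer address from a message
// and signature, then compare it against the expected attester address.
// `payload` is a placeholder; the server's re-derived format is not published.
function recoversExpectedSigner(
  payload: string,
  signature: string,
  expectedSigner: string,
): boolean {
  const recovered = verifyMessage(payload, signature);
  return recovered.toLowerCase() === expectedSigner.toLowerCase();
}
```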
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Given no annotations, the description details the process (re-derives the payload, recovers the EIP-191 signer), lists the return fields, and notes that the call is free, but it does not mention prerequisites or error behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three short sentences efficiently convey purpose, process, return value, and cost, with no extraneous information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple tool with one parameter and no output schema, the description adequately covers behavior and return structure, though error conditions are omitted.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%; the description adds context by linking the wallet parameter to the verification process, but does not provide additional detail beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'verifies' and the resource 'InsureLink reputation attestation for a wallet', differentiating it from sibling tools that perform other actions like claiming or getting.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage when verification is needed, but does not explicitly state when to use or not use this tool versus siblings like get_attestation.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
voice_intent
Parses a natural-language command (e.g. 'renew my SLA #42 for 5 years', 'check reputation for 0x…', 'pay 1.50 USDC to 0x…') into a structured InsureLink action plan with endpoint, params, payment requirements, and confidence. Free preview returns the supported grammar; full classification requires x402 payment ($0.001) at the paid endpoint.
| Name | Required | Description | Default |
|---|---|---|---|
| utterance | No | Optional sample utterance to echo in the preview response. |
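The description names the fields of the returned plan but not their shape; the hypothetical TypeScript type below assembles them for illustration only and is not a published schema.

```typescript
// Hypothetical plan shape built from the fields the description names
// (endpoint, params, payment requirements, confidence); not a published schema.
interface VoiceIntentPlan {
  endpoint: string;                 // e.g. the InsureLink action to invoke
  params: Record<string, unknown>;  // e.g. { tokenId: 42, duration: 5 } for a renewal
  payment?: { amount_usd: number }; // present when the parsed action is paid
  confidence: number;               // assumed to be a 0..1 classifier score
}
```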
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided; the description mentions the payment requirement and the free preview but lacks details on safety, rate limits, or side effects.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two efficient sentences with no waste; front-loaded with examples.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simple input schema and no output schema, the description covers the essential aspects, such as the preview-versus-paid split and the supported grammar.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%; the description adds context by framing 'utterance' as a sample command for the free preview, going slightly beyond the schema's note that the parameter is optional.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states that the tool parses natural-language commands into a structured plan, gives examples, and thereby differentiates itself from sibling tools, which mostly perform actions rather than parse them.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly distinguishes the free preview from paid classification, guiding when to use each. No alternatives are named, but the context is clear.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
wrap_usdc
Wraps USDC into iUSDC. Requires x402 payment ($0.001). Returns payment instructions.
| Name | Required | Description | Default |
|---|---|---|---|
| amount | Yes | Amount of USDC to wrap (base units) |
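Since 'base units' means the token's smallest denomination and USDC uses 6 decimals, a human-readable amount must be scaled before calling the tool; a minimal conversion sketch with ethers v6:

```typescript
import { parseUnits } from "ethers"; // ethers v6

// USDC has 6 decimals, so 1.50 USDC = 1,500,000 base units.
const amount = parseUnits("1.50", 6).toString(); // "1500000"
// Pass this string as the tool's `amount` parameter.
```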
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It adds useful context about the payment requirement and the returned instructions, but it lacks details on permissions, rate limits, and potential side effects such as transaction confirmation times. This is adequate but has clear gaps for a mutation tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core action, followed by key constraints and outcomes in just three short sentences. Every sentence earns its place by providing essential information without waste, making the description efficient and well structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity as a mutation with no annotations and no output schema, the description is minimally complete. It covers the action, cost, and return type, but lacks details on error handling, response format, or integration with sibling tools. This meets basic needs but leaves room for improvement.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema description coverage is 100%, so the schema already documents the 'amount' parameter fully. The description doesn't add any additional meaning or examples beyond what the schema provides, such as clarifying the 'base units' format. Baseline 3 is appropriate as the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb ('Wraps') and resource ('USDC into iUSDC'), making the purpose specific and understandable. However, it doesn't explicitly differentiate this tool from sibling tools like 'mint_sla' or 'renew_sla', which might involve similar financial operations, so it doesn't reach the highest score.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage by mentioning 'Requires x402 payment ($0.001)', which suggests a context of cost, but it doesn't provide explicit guidance on when to use this tool versus alternatives or any exclusions. This leaves the agent with only implied context for decision-making.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}
The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.