xportalx
Server Details
AI agent gateway with web fetching, data extraction, crypto pricing, and x402 payments
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.6/5 across 29 of 29 tools scored. Lowest: 2.9/5.
Most tools have distinct purposes, but there is some overlap in web data fetching (fetch_api, fetch_page, crawl, extract) and wallet funding (fund_wallet, fund_wallet_eth, x402_send) that could cause confusion. Descriptions help differentiate, but agents might misselect between similar tools.
Naming mixes plain snake_case (e.g., coinbase_price, create_wallet) with prefixed forms (e.g., erc8004_register, x402_proxy) and lacks a uniform verb_noun pattern. Some tools use descriptive names (e.g., nansen_token_god_mode) while others are generic (e.g., crawl, extract), adding to the inconsistency.
With 29 tools, the set feels overly broad and heavy for a single server, spanning domains like crypto payments, web scraping, wallet management, and analytics. This many tools can overwhelm agents and suggests poor scoping, as it combines multiple distinct functionalities into one interface.
The server covers key areas well, such as wallet lifecycle (create, fund, balance, send, withdraw), crypto data (prices, analytics), and web interactions (fetching, crawling, extraction). Minor gaps exist, like missing wallet deletion or more granular crypto operations, but core workflows are supported without major dead ends.
Available Tools
29 tools

coinbase_facilitator (Grade: C)
x402 payment facilitation - get payment info, verify transactions, check rates. FREE
| Name | Required | Description | Default |
|---|---|---|---|
| token | No | Token address (for verify) | |
| action | Yes | Facilitator action | |
| txHash | No | Transaction hash (for verify) | |
| expectedAmount | No | Expected amount (for verify) | |
| expectedRecipient | No | Expected recipient (for verify) | |
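As a rough illustration of how the parameters above fit together, a `verify` call might carry arguments shaped like this. This is a sketch, not the server's documented request format, and every hex value below is an invented placeholder:

```python
import json

# Hypothetical arguments for a coinbase_facilitator "verify" call.
# The txHash, amount, and recipient values are invented placeholders,
# not real on-chain data.
args = {
    "action": "verify",                     # required facilitator action
    "txHash": "0x" + "ab" * 32,             # transaction hash to check
    "expectedAmount": "0.01",               # amount the payment should carry
    "expectedRecipient": "0x" + "cd" * 20,  # address that should have been paid
}

# Arguments are typically serialized as JSON inside an MCP tools/call request.
payload = json.dumps(args)
```

The optional `token` key is omitted here; per the table it is only needed when verifying a specific token transfer.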
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions 'FREE,' which hints at no cost, but doesn't cover critical aspects like authentication needs, rate limits, error handling, or what 'verify' entails (e.g., confirmation of transaction success). The description lists actions but lacks depth on how they behave or what outputs to expect.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise and front-loaded, listing key actions in a single sentence followed by a cost note. Every phrase ('get payment info, verify transactions, check rates') earns its place by outlining functionality, though 'FREE' could be integrated more smoothly. It avoids redundancy and is appropriately sized for the tool's complexity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no annotations and no output schema, the description is incomplete for a tool with 5 parameters and multiple actions. It doesn't explain return values, error cases, or behavioral nuances like what 'check rates' involves. For a payment facilitation tool with verification capabilities, more context on outcomes and usage is needed to be fully helpful.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents all parameters thoroughly. The description adds minimal value beyond the schema by implying 'action' can be 'info' or 'verify' (though the enum in schema covers this) and hinting at parameter usage (e.g., 'for verify'). However, it doesn't explain semantics like what 'info' returns or how verification works, keeping it at the baseline.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('get payment info, verify transactions, check rates') and identifies the resource ('x402 payment facilitation'). It distinguishes itself from most siblings by focusing on payment facilitation rather than wallet operations, price checking, or other crypto functions, though it doesn't explicitly differentiate from 'x402_proxy' or 'x402_send' which are related x402 tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. While it mentions 'FREE,' this doesn't help an agent choose between this and sibling tools like 'x402_proxy' or 'x402_send' for x402-related tasks, or 'verify_agent' for verification. There's no context about prerequisites, scenarios, or exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
coinbase_price (Grade: C)
Get cryptocurrency prices and exchange rates from Coinbase. FREE - no payment required.
| Name | Required | Description | Default |
|---|---|---|---|
| symbol | Yes | Crypto symbol (e.g., 'BTC', 'ETH') | |
| currency | No | Fiat currency (e.g., 'USD') | USD |
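Because `currency` is optional with a documented default of USD, a client can simply omit it. The helper below is my own illustrative sketch (not part of the server) showing both call shapes:

```python
def price_args(symbol, currency=None):
    """Build coinbase_price arguments.

    Illustrative only: when "currency" is omitted, the server itself
    applies the documented USD default, so we don't need to send it.
    """
    args = {"symbol": symbol}
    if currency is not None:
        args["currency"] = currency
    return args

btc = price_args("BTC")         # currency omitted -> server defaults to USD
eth = price_args("ETH", "EUR")  # explicit fiat currency
```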
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It adds some context with 'FREE - no payment required,' indicating no authentication or cost barriers. However, it lacks details on rate limits, error handling, response format, or whether this is a read-only operation (implied by 'Get' but not explicit). For a tool with zero annotation coverage, this is insufficient.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately concise with two sentences. The first sentence states the core purpose, and the second adds useful context about being free. There's no wasted verbiage, and it's front-loaded with the main functionality. However, it could be slightly more structured by explicitly mentioning parameters or output.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (2 parameters, no nested objects) and high schema coverage, the description is somewhat complete but has gaps. No output schema exists, so the description doesn't explain return values (e.g., price format). With no annotations, it should provide more behavioral context (e.g., read-only nature, potential errors). It's adequate but not fully helpful for an agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The description doesn't mention any parameters, but the input schema has 100% description coverage, fully documenting both 'symbol' and 'currency' with examples. Since schema coverage is high (>80%), the baseline score is 3. The description adds no parameter semantics beyond what the schema provides, but it doesn't need to compensate for gaps.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Get cryptocurrency prices and exchange rates from Coinbase.' It specifies the verb ('Get') and resource ('cryptocurrency prices and exchange rates'), and mentions the data source ('Coinbase'). However, it doesn't explicitly differentiate this tool from potential siblings like 'x402_proxy_price' or 'mcp_pricing', which might offer similar price data from different sources.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides minimal usage guidance. It mentions 'FREE - no payment required,' which hints at accessibility but doesn't specify when to use this tool versus alternatives (e.g., 'x402_proxy_price' or 'dexscreener_token'). No explicit context, exclusions, or prerequisites are provided, leaving the agent with little direction on tool selection.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
crawl (Grade: A)
Crawl a website starting from a URL, following links up to specified depth. Cost: 0.01 USDC
| Name | Required | Description | Default |
|---|---|---|---|
| url | Yes | The starting URL to crawl | |
| maxDepth | No | Maximum link depth (default: 2) | |
| maxPages | No | Maximum pages to crawl (default: 10) | |
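A bounded crawl request, assuming the defaults from the table, might look like the following. The URL is a placeholder, and the flat-fee cost note is an assumption, since per-page pricing is not documented:

```python
# Hypothetical crawl arguments. Omitting maxDepth/maxPages falls back
# to the documented defaults (2 and 10).
args = {
    "url": "https://example.com",  # required starting point
    "maxDepth": 1,                 # follow links one hop from the start URL
    "maxPages": 5,                 # hard cap on pages fetched
}

# Assumes 0.01 USDC is charged per call, not per page (the description
# does not say which).
estimated_cost_usdc = 0.01
```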
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden. It discloses cost ('0.01 USDC') and basic behavioral traits (crawling with depth/page limits), but lacks details on rate limits, authentication needs, output format, error handling, or what 'crawl' entails beyond following links.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise (two sentences) and front-loaded with core functionality, followed by cost information. Every sentence earns its place with zero wasted words or redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with 3 parameters, no annotations, and no output schema, the description is minimally adequate. It covers basic purpose and cost but lacks details on return values, error cases, or advanced behavioral context that would be helpful given the complexity of web crawling.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, providing full parameter documentation. The description adds no additional semantic context beyond implying parameters control crawling scope, matching the baseline score when schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('crawl', 'following links') and resources ('a website starting from a URL'), distinguishing it from siblings like 'fetch_page' or 'extract' by emphasizing traversal behavior rather than single-page retrieval or content extraction.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives like 'fetch_page' for single pages or 'extract' for content parsing. It mentions cost but not contextual factors like performance, depth needs, or sibling tool comparisons.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
create_wallet (Grade: A)
Create a custodial wallet for an AI agent using Privy. Cost: 0.01 USDC. The owner wallet is automatically detected from your x402 payment - no need to provide it explicitly. The wallet can be used for automated transactions including swaps.
| Name | Required | Description | Default |
|---|---|---|---|
| label | No | Optional friendly name for this wallet (e.g., 'Trading Bot', 'Payment Agent') | |
| owner_wallet | Yes | Your signing wallet address (0x...) that will own this custodial wallet. Used as the lookup key for future operations. TIP: many clients can also pass just the address string as arguments. | |
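A minimal sketch of the two call shapes the schema hints at, with an invented placeholder address:

```python
# Hypothetical create_wallet arguments. The 0x address is a made-up
# placeholder for the agent's signing wallet.
owner = "0x" + "11" * 20

args = {
    "owner_wallet": owner,   # required lookup key for future operations
    "label": "Trading Bot",  # optional friendly name
}

# Per the tip in the schema, some clients may accept the bare address
# string in place of the full argument object.
bare_form = owner
```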
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively adds context beyond what the input schema offers: it discloses the cost ('0.01 USDC'), the automatic owner wallet detection from x402 payment, and the wallet's purpose for automated transactions. This covers key operational traits like financial cost and authentication needs, though it doesn't mention rate limits or error handling, preventing a perfect score.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized and front-loaded, with three sentences that each add value: the first states the core action and cost, the second explains the owner wallet detection, and the third describes the wallet's use. There is zero waste or redundancy, making it highly efficient and well-structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (a creation operation with cost and automated use), no annotations, and no output schema, the description does a good job of covering key aspects: purpose, cost, owner detection, and usage context. It compensates well for the lack of structured data. However, it doesn't fully address potential outputs or error cases, which slightly limits completeness for a tool with financial implications.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents both parameters thoroughly. The description adds no specific parameter semantics beyond what's in the schema (e.g., it doesn't clarify 'label' or 'owner_wallet' further). According to the rules, with high schema coverage, the baseline is 3 even without param info in the description, which applies here.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Create a custodial wallet') and resource ('for an AI agent using Privy'), with specific details about the owner wallet detection mechanism. It distinguishes from sibling tools like 'fund_wallet' or 'get_wallet_balance' by focusing on creation rather than funding or querying. However, it doesn't explicitly differentiate from all wallet-related siblings (e.g., 'withdraw_wallet'), keeping it at 4 instead of 5.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context by mentioning 'for an AI agent' and 'automated transactions including swaps,' suggesting when this tool might be appropriate. It also notes that the owner wallet is automatically detected, which provides some operational guidance. However, it lacks explicit when-to-use vs. alternatives (e.g., compared to 'fund_wallet' or other wallet tools) or any exclusions, resulting in a score of 3.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
dexscreener_token (Grade: B)
Get real-time token price and market data from DexScreener. FREE - no payment required.
| Name | Required | Description | Default |
|---|---|---|---|
| chain | No | Blockchain (e.g., 'base', 'ethereum') | base |
| tokenAddress | Yes | The token contract address | |
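Since `chain` defaults to `base`, only the contract address is strictly needed. A sketch with an invented address:

```python
# Hypothetical dexscreener_token arguments; the contract address is invented.
args = {
    "tokenAddress": "0x" + "22" * 20,  # required token contract address
    # "chain" omitted -> defaults to "base" per the schema
}

args_eth = {**args, "chain": "ethereum"}  # explicit chain override
```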
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions 'real-time' data and 'FREE - no payment required,' which adds some context about timeliness and cost. However, it lacks critical behavioral details such as rate limits, authentication requirements, data freshness guarantees, error handling, or what specific market data fields are returned (e.g., price, volume, liquidity).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise—two short sentences with zero waste. The first sentence directly states the purpose, and the second adds valuable context about being free. It's front-loaded with the core functionality, making it easy to scan and understand quickly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity of fetching financial data (which often involves rate limits, authentication, or data formatting) and the absence of both annotations and an output schema, the description is incomplete. It doesn't explain what the return values look like (e.g., JSON structure, units), error conditions, or operational constraints like API call limits. This leaves significant gaps for an AI agent to use the tool effectively.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with both parameters ('chain' and 'tokenAddress') having clear descriptions in the schema. The tool description doesn't add any parameter-specific information beyond what's already documented in the schema, such as format examples or validation rules. With high schema coverage, the baseline score of 3 is appropriate as the description doesn't compensate but also doesn't detract.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Get real-time token price and market data from DexScreener.' It specifies the action (get), resource (token price and market data), and source (DexScreener). However, it doesn't explicitly differentiate from potential sibling tools like 'coinbase_price' or 'x402_proxy_price' that might provide similar financial data from different sources.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description includes 'FREE - no payment required,' which implies this tool is cost-effective compared to paid alternatives. However, it doesn't provide explicit guidance on when to use this tool versus sibling tools like 'coinbase_price' or 'x402_proxy_price,' nor does it mention any prerequisites or exclusions for usage.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
erc8004_register (Grade: C)
Register your AI agent on-chain with ERC-8004 Trustless Agent Identity. Cost: 0.01 USDC. You receive an NFT representing your agent identity.
| Name | Required | Description | Default |
|---|---|---|---|
| agentName | Yes | Name of your agent (e.g., 'MyAwesomeAgent') | |
| paymentProof | No | Transaction hash of payment (required after initial 402 response) | |
| agentEndpoints | No | Optional array of endpoints [{name, endpoint, version}] | |
| recipientWallet | Yes | Your wallet address (0x...) to receive the Agent NFT | |
| agentDescription | Yes | Description of what your agent does | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions cost (0.01 USDC) and the NFT outcome, but lacks details on permissions, rate limits, error handling, or the registration process (e.g., blockchain interactions). This is inadequate for a tool that likely involves on-chain transactions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise and front-loaded, with two sentences that efficiently convey key information (registration action, cost, and outcome). There's no wasted text, though it could be slightly more structured by separating cost and outcome details.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity of an on-chain registration tool with no annotations and no output schema, the description is incomplete. It lacks details on behavioral aspects (e.g., transaction flow, error cases) and doesn't explain the return value or NFT specifics, leaving significant gaps for agent understanding.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema fully documents all parameters. The description adds no additional meaning beyond the schema, such as explaining parameter relationships or constraints. Baseline 3 is appropriate as the schema handles parameter documentation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('register your AI agent on-chain') and the resource ('ERC-8004 Trustless Agent Identity'), making the purpose specific and understandable. However, it doesn't explicitly differentiate from sibling tools like 'verify_agent' or 'create_wallet', which could have related functionality.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It mentions cost and the NFT outcome but doesn't specify prerequisites, timing, or comparisons to sibling tools like 'verify_agent' or 'create_wallet', leaving usage context unclear.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
extract (Grade: B)
Extract structured data from a web page using a JSON schema. AI-powered extraction. Cost: 0.01 USDC
| Name | Required | Description | Default |
|---|---|---|---|
| url | Yes | The web page URL to extract from | |
| intent | No | Natural language description of what to extract | |
| schema | Yes | JSON schema string defining the data structure to extract | |
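One easy-to-miss detail in the table: `schema` is a JSON Schema *string*, not a nested object, so a client must serialize it first. A sketch with an invented URL and fields:

```python
import json

# Hypothetical extract arguments. The URL and the fields in the schema
# are invented for illustration.
product_schema = {
    "type": "object",
    "properties": {
        "title": {"type": "string"},
        "price": {"type": "number"},
    },
}

args = {
    "url": "https://example.com/product",
    "schema": json.dumps(product_schema),     # must be a string, not a dict
    "intent": "the product title and price",  # optional natural-language hint
}
```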
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It adds some context: 'AI-powered extraction' suggests automated processing, and 'Cost: 0.01 USDC' indicates a financial cost per use, which is valuable beyond the input schema. However, it lacks details on rate limits, error handling, output format, or performance characteristics (e.g., speed, reliability). The description doesn't contradict any annotations, but it's incomplete for a tool with potential complexity.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is highly concise and front-loaded: the first sentence states the core functionality, the second adds key behavioral context ('AI-powered extraction'), and the third provides cost information. Every sentence earns its place with no wasted words, making it easy for an agent to parse quickly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (AI-powered extraction with a cost) and the absence of annotations and output schema, the description is moderately complete. It covers the basic purpose and cost, but lacks details on output structure, error cases, or integration notes. For a tool that likely returns structured data, the missing output schema means the description should ideally hint at return values, but it doesn't. It's adequate but has clear gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, meaning the input schema fully documents the three parameters (url, intent, schema). The description adds no specific parameter semantics beyond what's in the schema—it doesn't explain the relationship between 'intent' and 'schema', or provide examples. Given the high coverage, the baseline score of 3 is appropriate, as the description doesn't compensate but also doesn't detract.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Extract structured data from a web page using a JSON schema. AI-powered extraction.' It specifies the verb ('extract'), resource ('structured data from a web page'), and method ('using a JSON schema'), distinguishing it from siblings like 'fetch_page' (which retrieves raw content) or 'crawl' (which likely navigates multiple pages). However, it doesn't explicitly contrast with all potential extraction-related siblings, keeping it at a 4 rather than a 5.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides minimal usage guidance. It mentions 'AI-powered extraction' and 'Cost: 0.01 USDC', which implies a paid service, but it doesn't specify when to use this tool versus alternatives (e.g., 'fetch_page' for raw HTML or other extraction methods). No explicit when/when-not instructions or named alternatives are provided, leaving the agent with little contextual direction.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
failure_reason (Grade: B)
Understand why a job failed. FREE - lookup job failures or get failure code reference.
| Name | Required | Description | Default |
|---|---|---|---|
| jobId | No | xportalx job ID to look up failure details | |
| failure_code | No | Any failure code to explain (e.g., 'RATE_LIMIT_429') | |
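The description implies two distinct modes, which map onto the two optional parameters. A sketch (the job ID format is invented):

```python
# Two hypothetical call shapes for failure_reason: look up a specific
# job's failure, or explain a failure code in the abstract.
by_job = {"jobId": "job_abc123"}               # placeholder ID
by_code = {"failure_code": "RATE_LIMIT_429"}   # example code from the schema

# Both parameters are optional in the schema, but a useful call presumably
# supplies at least one of them (an assumption; the schema does not say).
```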
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It mentions 'FREE' (implying no cost) and hints at lookup/reference functionality, but lacks critical behavioral details: whether this is read-only, what permissions are needed, rate limits, response format, or if it modifies data. For a tool with zero annotation coverage, this is inadequate.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is brief (two clauses) and front-loaded with the core purpose. The 'FREE' tag adds context efficiently. However, the second clause could be more structured (e.g., clarifying the two use cases separately), and there's minor redundancy with 'failure' repeated.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no annotations and no output schema, the description is incomplete. It doesn't explain what the tool returns (e.g., error details, codes, timestamps), behavioral traits like idempotency or side effects, or how to interpret results. For a tool with 2 parameters and zero structured context, this leaves significant gaps for an agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema fully documents both parameters (jobId and failure_code). The description adds marginal value by implying these are alternatives ('lookup job failures or get failure code reference'), but doesn't provide additional semantics beyond what the schema states. Baseline 3 is appropriate when the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Understand why a job failed' (verb+resource). It distinguishes itself from sibling tools by focusing on job failure analysis, which none of the listed siblings address. However, it doesn't specify what type of 'job' (e.g., xportalx job) or fully differentiate from potential generic error lookup tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context with 'lookup job failures or get failure code reference,' suggesting two use cases. However, it doesn't explicitly state when to use this tool versus alternatives (e.g., debugging tools or error logs in other systems) or provide exclusions. The 'FREE' tag hints at cost considerations but lacks detail.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
fetch_api (A)
Fetch structured JSON/XML data from any API endpoint. Best for REST APIs and data feeds. Cost: 0.01 USDC
| Name | Required | Description | Default |
|---|---|---|---|
| url | Yes | The API endpoint URL to fetch | |
| body | No | Optional request body for POST/PUT | |
| method | No | | GET |
| headers | No | Optional HTTP headers | |
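Since the schema leaves the interplay of `method`, `headers`, and `body` undocumented, here is a hypothetical argument dict for a `fetch_api` call. The endpoint URL and bearer token are placeholders, and the assumption that `body` is a JSON string (rather than an object) is mine, not the tool's documentation.

```python
# Hypothetical fetch_api arguments for a POST request. Everything below the
# parameter names themselves (URL, token, body shape) is illustrative.
arguments = {
    "url": "https://api.example.com/v1/items",   # placeholder endpoint
    "method": "POST",                            # overrides the GET default
    "headers": {
        "Content-Type": "application/json",
        "Authorization": "Bearer <token>",       # placeholder credential
    },
    "body": '{"query": "widgets", "limit": 10}',  # assumed to be a JSON string
}
# With method omitted, the table's default (GET) applies and body is unused.
```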
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It adds valuable context beyond the input schema by mentioning cost ('Cost: 0.01 USDC'), which is a critical behavioral trait not captured elsewhere. However, it doesn't cover other important aspects like rate limits, authentication requirements, error handling, or response formats.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise with just two sentences that each add distinct value: the core functionality and the cost information. It's front-loaded with the primary purpose and wastes no words, making it easy for an agent to parse quickly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with 4 parameters, no annotations, and no output schema, the description is somewhat incomplete. While it covers cost and general use case, it lacks information about response formats, error conditions, authentication needs, and practical constraints. The cost disclosure is helpful but doesn't fully compensate for other missing context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 75%, so the schema already documents most parameters well. The description doesn't add any specific parameter semantics beyond what's in the schema. It mentions 'JSON/XML data' which relates to expected responses but doesn't clarify parameter usage or constraints. Baseline 3 is appropriate given the schema does most of the work.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Fetch structured JSON/XML data from any API endpoint.' It specifies the action (fetch), resource (structured data), and target (API endpoints). However, it doesn't explicitly differentiate from sibling tools like 'fetch_page' or 'crawl' which might have overlapping functionality.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides some guidance with 'Best for REST APIs and data feeds,' which implies usage context. However, it doesn't explicitly state when NOT to use this tool or mention alternatives among the many sibling tools, leaving room for ambiguity in tool selection.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
fetch_page (A)
Fetch raw HTML content from any web page. Best for static HTML scraping. Cost: 0.01 USDC
| Name | Required | Description | Default |
|---|---|---|---|
| url | Yes | The web page URL to fetch | |
| method | No | | GET |
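The review repeatedly flags overlap between the fetch tools, so a rough client-side selection heuristic helps illustrate the distinction. This logic is derived only from the tool descriptions above; it is not part of the server, and real content-type routing would need more cases.

```python
# Illustrative routing between the overlapping fetch_* tools, based solely
# on the one-line descriptions in this report.
def pick_fetch_tool(content_type):
    if content_type in ("application/json", "application/xml"):
        return "fetch_api"    # structured API data and feeds
    if content_type == "application/pdf":
        return "fetch_pdf"    # PDF text extraction
    return "fetch_page"       # raw static HTML, per "static HTML scraping"

assert pick_fetch_tool("text/html") == "fetch_page"
```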
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It adds important context about cost ('Cost: 0.01 USDC') which isn't in the schema, and the 'static HTML scraping' guidance provides behavioral insight. However, it doesn't disclose other important traits like rate limits, authentication needs, error handling, or what 'raw HTML' specifically means (e.g., whether it includes rendered JavaScript).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise with just two sentences that each add value. The first sentence states the core purpose, the second provides important behavioral context (optimization for static content) and cost information. There's zero wasted text and it's front-loaded with the essential information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a web fetching tool with 2 parameters, no annotations, and no output schema, the description is minimally adequate. It covers the core purpose and provides some behavioral context (static HTML focus, cost), but leaves significant gaps: no parameter guidance, no output format information, and incomplete behavioral disclosure about important operational aspects.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 50% (only the 'url' parameter has a description). The tool description doesn't mention any parameters at all, providing no additional semantic information beyond what's in the schema. With 2 parameters and partial schema coverage, the description fails to compensate for the coverage gap, earning only the baseline score.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Fetch raw HTML content from any web page' specifies the verb (fetch) and resource (HTML content from web pages). It distinguishes from some siblings like 'fetch_pdf' or 'fetch_api' by specifying HTML content, but doesn't explicitly differentiate from 'crawl' or 'extract' which might have overlapping use cases.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides some usage guidance with 'Best for static HTML scraping,' which implies this tool is optimized for static content rather than dynamic pages. However, it doesn't explicitly state when NOT to use it or name specific alternatives among the many sibling tools, leaving some ambiguity about tool selection.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
fetch_pdf (B)
Fetch and extract text content from PDF documents. Returns structured text with page numbers. Cost: 0.01 USDC
| Name | Required | Description | Default |
|---|---|---|---|
| url | Yes | The URL of the PDF document to fetch and parse | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses cost ('0.01 USDC') and output format ('structured text with page numbers'), but lacks details on error handling, rate limits, authentication needs, or file size constraints, leaving behavioral gaps for a tool that fetches external resources.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with core functionality and includes cost information efficiently. However, the cost detail, while useful, could be integrated more seamlessly, and the sentence structure is slightly abrupt.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with no annotations, no output schema, and a simple input schema, the description covers purpose and cost adequately but lacks details on error cases, performance, or output structure beyond 'structured text with page numbers', leaving some context gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema fully documents the single 'url' parameter. The description adds no additional parameter semantics beyond what the schema provides, meeting the baseline for high coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('fetch and extract text content') and resource ('PDF documents'), and distinguishes it from siblings by focusing on PDF processing rather than web crawling, cryptocurrency operations, or other document types.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives like 'extract' or 'fetch_page', nor are any prerequisites or exclusions mentioned. The description only states what it does, not when it should be selected.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
fund_wallet (A)
Fund your custodial wallet with USDC. DYNAMIC PRICING: You pay (amount + 0.01 USDC fee). The owner wallet is auto-detected from your x402 payment. Example: To deposit 5 USDC, you pay 5.01 USDC and 5 USDC is transferred to your custodial wallet.
| Name | Required | Description | Default |
|---|---|---|---|
| amount | Yes | Amount of USDC to deposit into your custodial wallet (e.g., '5' for 5 USDC). You will pay this amount plus a 0.01 USDC platform fee. | |
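The fee model in the description reduces to simple arithmetic, restated here as a sketch. The flat 0.01 USDC fee comes from the tool description; `Decimal` is used to avoid float rounding on currency amounts.

```python
# fund_wallet pricing per the description: caller pays deposit + flat fee.
from decimal import Decimal

PLATFORM_FEE = Decimal("0.01")  # USDC, per the tool description

def total_payment(deposit_usdc):
    """USDC the caller pays to land `deposit_usdc` in the custodial wallet."""
    return Decimal(deposit_usdc) + PLATFORM_FEE

# Matches the worked example above: depositing 5 USDC costs 5.01 USDC.
assert total_payment("5") == Decimal("5.01")
```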
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key traits: the tool involves a financial transaction (funding), includes a dynamic pricing model (amount + 0.01 USDC fee), auto-detects the owner wallet, and transfers USDC. It does not cover aspects like rate limits, error handling, or authentication needs, but provides substantial operational context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose, followed by pricing details and a practical example. Every sentence adds essential information without redundancy, making it efficient and easy to parse for an AI agent.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (financial transaction with fees) and lack of annotations or output schema, the description is reasonably complete. It covers the action, pricing, and process, but could benefit from details on return values or error conditions. However, it provides enough context for basic usage.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, clearly explaining the 'amount' parameter. The description adds minimal value beyond the schema by reiterating the fee in the example, but does not provide additional semantics like format constraints or edge cases. This meets the baseline for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Fund your custodial wallet with USDC'), identifies the resource (custodial wallet), and distinguishes it from sibling tools like 'fund_wallet_eth' by specifying USDC. It provides a concrete example that reinforces the purpose.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context by mentioning 'custodial wallet' and 'x402 payment', suggesting when this tool is applicable. However, it does not explicitly state when to use this versus alternatives like 'fund_wallet_eth' or 'send_tokens', nor does it provide exclusion criteria, leaving some ambiguity.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
fund_wallet_eth (A)
Fund your custodial wallet with ETH for gas. DYNAMIC PRICING: You pay the USDC equivalent of (ETH amount + 0.01 USDC fee). The owner wallet is auto-detected from your x402 payment. Example: To deposit 0.001 ETH, you pay ~$3.50 USDC (ETH value) + 0.01 USDC fee.
| Name | Required | Description | Default |
|---|---|---|---|
| amount | Yes | Amount of ETH to deposit into your custodial wallet (e.g., '0.001' for 0.001 ETH). You will pay the USDC equivalent plus a 0.01 USDC platform fee. | |
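The ETH variant adds a conversion step on top of the flat fee. The sketch below uses an assumed ETH/USDC rate of 3500 purely to reproduce the "~$3.50" figure from the example; the real rate is determined server-side at call time.

```python
# fund_wallet_eth pricing: USDC equivalent of the ETH amount, plus flat fee.
from decimal import Decimal

PLATFORM_FEE = Decimal("0.01")  # USDC, per the tool description

def total_payment_usdc(eth_amount, eth_price_usdc):
    """USDC paid to deposit `eth_amount` ETH at a given ETH/USDC rate."""
    return Decimal(eth_amount) * Decimal(eth_price_usdc) + PLATFORM_FEE

# At an assumed 3500 USDC/ETH, 0.001 ETH costs 3.50 + 0.01 USDC, in line
# with the "~$3.50 USDC (ETH value) + 0.01 USDC fee" example above.
assert total_payment_usdc("0.001", "3500") == Decimal("3.51")
```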
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key behavioral traits: the tool involves a payment transaction (funding), uses dynamic pricing (USDC equivalent conversion), includes a fee structure (0.01 USDC platform fee), auto-detects the owner wallet from x402 payment, and provides a concrete example. However, it does not mention rate limits, error conditions, or confirmation details.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured with two sentences: the first states the purpose and key behavioral details, and the second provides a concrete example. Every sentence adds value without redundancy, and it is appropriately front-loaded with essential information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (financial transaction with dynamic pricing), no annotations, and no output schema, the description does a good job covering the core behavior, pricing model, and example. However, it lacks details on return values (e.g., transaction confirmation), error handling, or prerequisites (e.g., needing a pre-existing wallet), leaving minor gaps for a mutation tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema description coverage is 100%, so the schema already documents the 'amount' parameter thoroughly. The description adds marginal value by reinforcing the parameter's purpose (ETH deposit) and providing an example ('0.001'), but does not introduce new semantic details beyond what the schema states. This meets the baseline of 3 for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Fund your custodial wallet with ETH for gas') and distinguishes it from siblings like 'fund_wallet' (which likely funds with other assets) and 'get_wallet_balance' (which reads rather than funds). It specifies the resource (custodial wallet) and purpose (gas funding).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use this tool: to fund a custodial wallet with ETH specifically for gas purposes. It implicitly distinguishes from alternatives like 'fund_wallet' (which may handle other assets) but does not explicitly name when-not-to-use cases or list all sibling alternatives. The example reinforces the usage context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_wallet_balance (B)
Check the balance of your custodial wallet. FREE - no payment required. Returns ETH and USDC balances on Base network.
| Name | Required | Description | Default |
|---|---|---|---|
| owner_wallet | Yes | Your signing wallet address (0x...) to look up the linked custodial wallet balance. | |
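The schema's `(0x...)` hint implies a standard 20-byte hex address, which a client can sanity-check before spending a call. The address below is a placeholder, and the loose regex check (no EIP-55 checksum validation) is an illustrative client-side guard, not server behavior.

```python
# Hypothetical get_wallet_balance arguments plus a client-side format check
# on the 0x address the schema implies.
import re

arguments = {"owner_wallet": "0x" + "ab" * 20}  # placeholder 20-byte address

def looks_like_eth_address(addr):
    """Loose 0x-prefixed 40-hex-char check; does not verify EIP-55 checksum."""
    return re.fullmatch(r"0x[0-9a-fA-F]{40}", addr) is not None

assert looks_like_eth_address(arguments["owner_wallet"])
```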
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions the tool is 'FREE - no payment required,' which adds useful context about cost, and specifies the network and token types returned. However, it lacks details on error handling, rate limits, authentication needs, or what happens if the wallet doesn't exist. For a tool with no annotations, this leaves significant gaps in understanding its behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose, followed by cost and return details in two efficient sentences. Every phrase adds value: 'Check the balance' defines the action, 'custodial wallet' specifies the resource, 'FREE - no payment required' addresses cost concerns, and 'Returns ETH and USDC balances on Base network' clarifies output. There is no wasted text.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no annotations and no output schema, the description provides basic context like network and token types, but it's incomplete for a tool that interacts with financial data. It doesn't explain return format, error cases, or security implications. The schema covers the single parameter well, but overall, the description should do more to compensate for the lack of structured data, especially for a balance-checking operation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with the single parameter 'owner_wallet' well-described in the schema as 'Your signing wallet address (0x...) to look up the linked custodial wallet balance.' The description adds no additional parameter information beyond what the schema provides, so it meets the baseline of 3 for high schema coverage without compensating value.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Check the balance of your custodial wallet' with specific details about what it returns ('ETH and USDC balances on Base network'). It distinguishes from some siblings like 'fund_wallet' or 'send_tokens' by focusing on balance checking rather than transactions. However, it doesn't explicitly differentiate from all potential balance-related tools that might exist.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context by specifying it's for 'custodial wallet' balance on 'Base network' and mentions 'FREE - no payment required,' which suggests when cost might be a consideration. However, it doesn't provide explicit guidance on when to use this versus alternatives like 'fund_wallet' or 'withdraw_wallet,' nor does it mention prerequisites or exclusions beyond the network scope.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
markdown_convert (A)
Convert a web page to clean Markdown format. Great for LLM consumption. Cost: 0.01 USDC
| Name | Required | Description | Default |
|---|---|---|---|
| url | Yes | The web page URL to convert | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It adds useful context about the output format ('clean Markdown format') and cost ('Cost: 0.01 USDC'), but does not cover other behavioral aspects like rate limits, error handling, or authentication requirements.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise with only two sentences, each earning its place by stating the core functionality and cost, with no wasted words or redundant information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (single parameter, no output schema, no annotations), the description is adequate but has gaps. It explains the purpose and cost but lacks details on output structure, error cases, or integration with sibling tools, leaving room for improvement in completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already fully documents the single parameter 'url'. The description does not add any parameter-specific details beyond what the schema provides, meeting the baseline for high coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Convert a web page to clean Markdown format') and resource ('web page'), distinguishing it from sibling tools like 'fetch_page' or 'extract' by specifying the output format and purpose ('Great for LLM consumption').
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use this tool ('Convert a web page to clean Markdown format') and implies an alternative use case ('Great for LLM consumption'), but does not explicitly state when not to use it or name specific alternatives among siblings.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
mcp_pricing (A)
Get pricing information for all MCP tools. FREE - returns cost breakdown for each tool.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It discloses that the tool is read-only ('Get') and returns cost breakdowns, but lacks details on behavioral traits like rate limits, authentication needs, or data freshness. This is adequate but has gaps, warranting a baseline score for a read-only tool with no annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise and front-loaded, consisting of two short sentences that directly state the tool's function and key feature ('FREE - returns cost breakdown'). There is no wasted text, making it highly efficient and easy to understand.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (0 parameters, no output schema, no annotations), the description is complete enough for its purpose. It explains what the tool does and what it returns, though it could benefit from more behavioral context (e.g., data sources or update frequency) to achieve a perfect score.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 0 parameters with 100% coverage, so no parameter documentation is needed. The description adds value by clarifying the tool's scope (MCP tools) and output type (cost breakdown), which goes beyond the schema, earning a score above the baseline of 3.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with a specific verb ('Get') and resource ('pricing information for all MCP tools'), and distinguishes it from siblings by specifying it returns cost breakdowns for MCP tools specifically, unlike other tools focused on crypto operations, wallets, or data fetching.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use this tool—to obtain pricing information for MCP tools—and implies it's for cost analysis. However, it does not explicitly state when not to use it or name alternatives, such as using other tools for non-pricing queries, which prevents a perfect score.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
nansen_smart_money_holdings (A)
Screen tokens with Smart Money activity using Nansen Token Screener. Supports natural language like 'what are whales buying on base?' or 'trending tokens on solana'. Cost: 0.01 USDC
| Name | Required | Description | Default |
|---|---|---|---|
| chain | No | Single chain (alternative to chains array) | base |
| input | No | Natural language query (e.g., 'what are whales buying on base?', 'trending tokens this week') | |
| limit | No | Number of results (default: 20, max: 100) | |
| query | No | Natural language query (alternative) | |
| chains | No | Chains to query (e.g., ['ethereum', 'base', 'solana', 'arbitrum']) | |
| sortBy | No | Sort field | smart_money_netflow_24h |
| sortDir | No | Sort direction | desc |
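The parameters above can be combined in a single MCP `tools/call` request. The sketch below assumes the standard JSON-RPC 2.0 envelope used by MCP; the argument values are illustrative, not required:

```python
import json

# Hypothetical MCP tools/call payload for nansen_smart_money_holdings.
# Only "name" and "arguments" are meaningful to the server; the values
# here are examples taken from the schema's own documentation.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "nansen_smart_money_holdings",
        "arguments": {
            "input": "what are whales buying on base?",
            "chain": "base",   # single-chain alternative to the chains array
            "limit": 20,       # schema default; max 100
        },
    },
}

print(json.dumps(request, indent=2))
```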
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It adds useful context: it mentions a cost ('Cost: 0.01 USDC'), which is a key behavioral trait not in the schema. However, it doesn't cover other important aspects like rate limits, error handling, or what the output looks like (e.g., format, pagination). The description doesn't contradict any annotations, as none are given.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise and well-structured in two sentences: the first states the purpose and method, the second provides usage examples and cost. Every sentence adds value without redundancy, making it efficient and easy to parse.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (7 parameters, no annotations, no output schema), the description is somewhat complete but has gaps. It covers purpose, usage examples, and cost, but lacks details on output format, error conditions, or how to interpret results. Without an output schema, the description should ideally explain what the tool returns, but it doesn't, leaving room for improvement.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents all parameters thoroughly. The description adds minimal value beyond the schema: it mentions natural language queries, which aligns with the 'input' and 'query' parameters, but doesn't provide additional syntax or format details. With high schema coverage, the baseline score of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Screen tokens with Smart Money activity using Nansen Token Screener.' It specifies the verb ('screen'), resource ('tokens'), and method ('using Nansen Token Screener'), making it distinct from most siblings. However, it doesn't explicitly differentiate from 'nansen_smart_money_netflows' or 'nansen_token_god_mode', which appear related.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides some usage context by mentioning natural language queries and giving examples ('what are whales buying on base?' or 'trending tokens on solana'), which implies when to use it. However, it lacks explicit guidance on when to choose this tool over alternatives like 'nansen_smart_money_netflows' or 'nansen_token_god_mode', and doesn't mention prerequisites or exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
nansen_smart_money_netflows (A)
Track net capital flows from Smart Money wallets. Supports natural language like 'are whales accumulating ETH?' or 'smart money flows for PEPE last 7 days'. Cost: 0.01 USDC
| Name | Required | Description | Default |
|---|---|---|---|
| chain | No | Single chain (alternative to chains array) | ethereum |
| input | No | Natural language query (e.g., 'are smart money accumulating ETH?', 'whale flows last week') | |
| limit | No | Number of results (default: 20) | |
| query | No | Natural language query (alternative) | |
| chains | No | Chains to query (e.g., ['ethereum', 'base', 'solana']) | |
| timeframe | No | Timeframe for netflow data | 24h |
| tokenAddress | No | Filter by specific token address OR symbol (e.g., 'PEPE', 'VIRTUAL', '0x...') | |
| token_address | No | Filter by token address (alternative) | |
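Note that the schema exposes alias pairs (`input`/`query`, `tokenAddress`/`token_address`, `chain`/`chains`). A minimal sketch, assuming only one member of each pair should be sent per call (the server's behavior when both are set is not documented):

```python
# Illustrative arguments for nansen_smart_money_netflows.
args = {
    "chains": ["ethereum", "base"],  # array form; "chain" is the single-chain alias
    "tokenAddress": "PEPE",          # symbol or 0x... address, per the schema
    "timeframe": "24h",
    "limit": 20,
}

# Guard against sending both members of an alias pair at once.
for a, b in [("chain", "chains"), ("input", "query"),
             ("tokenAddress", "token_address")]:
    assert not (a in args and b in args), f"send {a} or {b}, not both"
```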
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It adds useful context about supporting natural language queries and the cost (0.01 USDC), which are not covered by the schema. However, it lacks details on permissions, rate limits, error handling, or what the output looks like (e.g., format, data structure), leaving gaps for a tool with 8 parameters.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured in two sentences: the first states the core functionality with examples, and the second adds cost information. Every sentence earns its place by providing essential context without redundancy, making it front-loaded and easy to parse.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (8 parameters, no annotations, no output schema), the description is incomplete. It covers purpose and cost well but lacks details on behavioral aspects like permissions, rate limits, and output format. While it compensates somewhat with natural language examples, more context is needed for a tool of this scope to be fully actionable.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents all parameters thoroughly. The description adds marginal value by implying natural language usage through examples, but does not provide additional syntax, format details, or clarification beyond what the schema specifies (e.g., it doesn't explain how 'input' and 'query' differ). Baseline 3 is appropriate when the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('track net capital flows') and resources ('Smart Money wallets'), and distinguishes it from sibling tools like nansen_smart_money_holdings by focusing on flows rather than holdings. It provides concrete examples of natural language queries that help clarify the scope.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use this tool (e.g., for natural language queries about smart money flows), and the examples help illustrate typical use cases. However, it does not explicitly state when not to use it or name alternatives among siblings, such as nansen_smart_money_holdings for holdings data instead of flows.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
nansen_token_god_mode (B)
Full token analytics with Nansen TGM. Supports natural language like 'analyze VIRTUAL token' or 'who holds the most PEPE?'. Auto-resolves token symbols to addresses. Cost: 0.01 USDC
| Name | Required | Description | Default |
|---|---|---|---|
| chain | No | Blockchain (e.g., 'ethereum', 'base', 'solana', 'arbitrum') | ethereum |
| input | No | Natural language query (e.g., 'analyze VIRTUAL token', 'show me PEPE holders') | |
| query | No | Natural language query (alternative) | |
| timeframe | No | Timeframe for flow data | 24h |
| tokenAddress | No | Token contract address OR symbol (e.g., 'VIRTUAL', 'PEPE', '0x...') | |
| token_address | No | Token contract address (alternative) | |
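Since the schema does not say which alias wins when both `tokenAddress` and `token_address` are supplied, a cautious client can collapse the pair before calling. Preferring `tokenAddress` here is an assumption for illustration, not documented server behavior:

```python
def resolve_token_arg(tokenAddress=None, token_address=None):
    """Collapse the tokenAddress/token_address alias pair to one value.

    The server's real precedence is undocumented; preferring tokenAddress
    is an assumption made for this sketch.
    """
    if tokenAddress and token_address and tokenAddress != token_address:
        raise ValueError("conflicting token aliases")
    return tokenAddress or token_address
```

For example, `resolve_token_arg(tokenAddress="VIRTUAL")` returns `"VIRTUAL"`, while passing two different values raises an error rather than silently picking one.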
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions key behaviors: supports natural language queries, auto-resolves token symbols to addresses, and has a cost (0.01 USDC). However, it doesn't describe what 'full token analytics' includes, response format, rate limits, authentication requirements, or error conditions. The description adds some value but leaves significant behavioral aspects unspecified.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured in three sentences: purpose statement, usage examples with capabilities, and cost information. Each sentence adds value. However, the third sentence about cost feels somewhat disconnected from the analytics focus and could be better integrated.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a 6-parameter analytics tool with no annotations and no output schema, the description is moderately complete. It covers the core purpose and usage pattern but lacks details about what analytics are returned, how results are structured, error handling, or performance characteristics. Given the complexity of token analytics and absence of structured output documentation, the description should provide more guidance about expected results.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents all 6 parameters thoroughly. The description mentions natural language queries and auto-resolution of symbols, which aligns with the 'input'/'query' and 'tokenAddress'/'token_address' parameters. However, it doesn't add meaningful semantic context beyond what the schema provides - no explanation of how parameters interact or which combinations are valid.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool provides 'Full token analytics with Nansen TGM' and gives specific examples of natural language queries. It distinguishes itself from sibling tools by focusing on comprehensive token analytics rather than price data, wallet operations, or other blockchain functions. However, it doesn't explicitly differentiate from nansen_smart_money_holdings or nansen_wallet_profiler which might also provide analytics.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides implied usage through examples ('analyze VIRTUAL token' or 'who holds the most PEPE?') and mentions auto-resolution of symbols. However, it doesn't explicitly state when to use this tool versus alternatives like dexscreener_token or other Nansen tools. The cost mention (0.01 USDC) provides some context about when cost considerations might apply.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
nansen_wallet_profiler (A)
Comprehensive wallet analysis. Supports natural language like 'profile vitalik.eth' or 'what labels does 0x... have?'. Returns balances, Nansen labels, and transactions. Cost: 0.01 USDC
| Name | Required | Description | Default |
|---|---|---|---|
| chain | No | Single chain (alternative) | ethereum |
| input | No | Natural language query (e.g., 'profile this wallet 0x...', 'analyze vitalik.eth') | |
| query | No | Natural language query (alternative) | |
| chains | No | Chains to query for balances/transactions | |
| wallet | No | Wallet address (alternative) | |
| address | No | Wallet address (0x...) or ENS name to analyze | |
| includeTransactions | No | Include recent transactions (default: true) | |
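The `address` parameter accepts either a hex address or an ENS name. A small heuristic sketch for classifying the input client-side (the function name and rules are illustrative, not part of the tool):

```python
def classify_wallet_ref(ref: str) -> str:
    # Heuristic split between a hex EVM address and an ENS name; the
    # profiler's `address` parameter accepts either, per the schema.
    if ref.startswith("0x") and len(ref) == 42:
        return "address"
    if ref.endswith(".eth"):
        return "ens"
    return "unknown"
```

So `classify_wallet_ref("vitalik.eth")` yields `"ens"`, and a 42-character `0x...` string yields `"address"`.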
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes the tool's function, supports natural language queries, and explicitly states the cost ('0.01 USDC'), which is crucial behavioral information. However, it doesn't cover aspects like rate limits, error handling, or authentication needs, leaving some gaps.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately sized with three sentences: purpose, usage examples, and cost. It's front-loaded with the core function and uses zero wasted words. A minor deduction for not structuring alternative parameter guidance more explicitly, but overall efficient.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no annotations and no output schema, the description provides good coverage of the tool's purpose, usage, and cost. However, for a tool with 7 parameters and complex functionality (wallet analysis across chains), it lacks details on output format, error cases, and how results are structured, which limits completeness for an AI agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents all 7 parameters thoroughly. The description adds value by illustrating natural language usage examples ('profile vitalik.eth'), which helps clarify the 'input' and 'query' parameters, but it doesn't provide additional semantic details beyond what the schema offers. Baseline 3 is appropriate given high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool performs 'comprehensive wallet analysis' and specifies it returns 'balances, Nansen labels, and transactions.' It uses a specific verb ('analysis') and identifies the resource ('wallet'). However, it doesn't explicitly differentiate from sibling tools like 'get_wallet_balance' or 'nansen_smart_money_holdings,' which prevents a score of 5.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides implied usage through examples ('profile vitalik.eth' or 'what labels does 0x... have?') and mentions cost, but it lacks explicit guidance on when to use this tool versus alternatives like 'get_wallet_balance' or other Nansen tools. No when-not-to-use or prerequisite information is included.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
send_tokens (A)
Send USDC from your custodial wallet to ANY wallet address. Cost: 0.01 USDC. The sender (your custodial wallet) is auto-detected from your x402 payment. Use this to pay other agents, transfer to exchanges, or send to any address.
| Name | Required | Description | Default |
|---|---|---|---|
| amount | Yes | Amount of USDC to send (e.g., '5' for 5 USDC). Must have sufficient balance in your custodial wallet. | |
| to_address | Yes | Destination wallet address (0x...) to send USDC to. Can be any valid EVM address. | |
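Because this is an irreversible financial transfer, it is worth validating both required parameters before calling. A conservative client-side sketch, assuming the formats stated in the schema ('5'-style decimal strings, 0x-prefixed EVM addresses); the server's own validation rules are not documented:

```python
import re
from decimal import Decimal, InvalidOperation

def validate_send_args(amount: str, to_address: str) -> dict:
    # Client-side sanity checks before calling send_tokens.
    if not re.fullmatch(r"0x[0-9a-fA-F]{40}", to_address):
        raise ValueError("to_address must be a 0x-prefixed 40-hex-char EVM address")
    try:
        value = Decimal(amount)
    except InvalidOperation:
        raise ValueError("amount must be a decimal string like '5'")
    if value <= 0:
        raise ValueError("amount must be positive")
    return {"amount": amount, "to_address": to_address}
```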
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It does well by stating the cost (0.01 USDC) and that the sender is auto-detected from x402 payment, which are important behavioral traits. However, it doesn't mention potential failure modes (e.g., insufficient balance beyond what's implied in parameter description), rate limits, or what happens on success/failure, leaving some gaps for a financial transaction tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured in three sentences: purpose statement, cost/auto-detection details, and usage examples. Every sentence adds value with no redundant information, making it easy to parse and understand quickly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a financial transaction tool with no annotations and no output schema, the description provides good basic context (purpose, cost, auto-detection, examples) but lacks important details about what happens after invocation (success response format, error conditions, transaction confirmation). Given the complexity of sending tokens, more behavioral transparency would be needed for full completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already fully documents both parameters. The description doesn't add any additional parameter semantics beyond what's in the schema (e.g., it doesn't clarify format examples beyond '0x...' or '5'). This meets the baseline expectation when schema coverage is complete.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Send USDC from your custodial wallet to ANY wallet address') and distinguishes it from siblings like fund_wallet (which adds funds) or get_wallet_balance (which checks balance). It specifies the resource (USDC), source (custodial wallet), and destination scope (ANY wallet address).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use this tool ('Use this to pay other agents, transfer to exchanges, or send to any address'), giving practical examples. However, it doesn't explicitly state when NOT to use it or mention alternatives like x402_send (a sibling tool that might have different characteristics), leaving some room for improvement.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
solve_captcha (A)
Solve captchas on protected pages and extract content. Uses 2Captcha to bypass reCAPTCHA/hCaptcha, then Claude AI selects the best tool to retrieve content. Cost: 0.01 USDC
| Name | Required | Description | Default |
|---|---|---|---|
| url | Yes | The captcha-protected URL to access | |
| intent | No | What you want to extract from the page after solving captcha | |
| siteKey | No | Site key for reCAPTCHA/hCaptcha (optional - auto-detected if not provided) | |
| captchaType | No | Type of captcha to solve | recaptcha_v2 |
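Only `url` is required; the schema says `siteKey` is auto-detected and `captchaType` defaults to `recaptcha_v2` when omitted. A small builder sketch that leaves optional fields out entirely (rather than sending nulls) so those server-side defaults apply; the function name is illustrative:

```python
def build_captcha_args(url, intent=None, site_key=None, captcha_type=None):
    # Only `url` is required by the schema. Optional fields are omitted
    # when not given so siteKey auto-detection and the recaptcha_v2
    # default can take effect server-side.
    args = {"url": url}
    if intent:
        args["intent"] = intent
    if site_key:
        args["siteKey"] = site_key
    if captcha_type:
        args["captchaType"] = captcha_type
    return args
```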
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden and discloses key behavioral traits: it uses 2Captcha for bypassing specific captcha types, involves Claude AI for tool selection post-solve, and includes cost information (0.01 USDC). It does not detail rate limits, error handling, or authentication needs, but provides substantial operational context beyond basic functionality.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with core functionality ('solve captchas on protected pages and extract content'), followed by implementation details and cost, with no wasted sentences. It efficiently conveys essential information in a compact, structured manner, making it easy for an agent to grasp the tool's purpose and constraints quickly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (captcha-solving with external services), no annotations, and no output schema, the description provides good context on the process and cost. However, it lacks details on output format, error cases, or integration specifics with Claude AI, leaving some gaps for full agent understanding without additional documentation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema fully documents all parameters. The description adds minimal semantic value beyond the schema, as it does not explain parameter interactions, defaults, or usage nuances. The baseline score of 3 reflects adequate but not enhanced parameter understanding from the description alone.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('solve captchas'), target resource ('protected pages'), and outcome ('extract content'), distinguishing it from sibling tools like 'fetch_page' or 'crawl' by emphasizing captcha bypass and content extraction. It explicitly mentions using 2Captcha for reCAPTCHA/hCaptcha and Claude AI for tool selection, providing a detailed operational scope.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context by specifying 'captcha-protected pages' and mentions cost, but does not explicitly state when to use this tool versus alternatives like 'fetch_page' (for non-protected pages) or provide exclusions. It offers clear intent for captcha-solving scenarios but lacks direct sibling comparisons or when-not-to-use guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
swap (A)
Execute a token swap on Base via 0x DEX Aggregator. AI-ENHANCED: Accepts natural language like 'swap 10 USDC for VIRTUAL'. THREE MODES: (1) PRIVY MODE (RECOMMENDED): Provide owner_wallet - uses your secure Privy custodial wallet (create one first with create_wallet tool). (2) PRIVATE KEY MODE: Provide privateKey - server signs with your key (less secure). (3) BRIDGE MODE: Provide takerAddress only - returns a URL where you sign with browser wallet. Cost: 0.10 USDC
| Name | Required | Description | Default |
|---|---|---|---|
| request | No | Natural language swap request (e.g., 'swap 10 USDC for VIRTUAL', 'buy 100 DEGEN with USDC'). AI will extract the parameters automatically. | |
| buyToken | No | Token symbol (USDC, WETH, VIRTUAL, DEGEN, cbBTC, DAI, AERO, BRETT) OR token address (0x...) | |
| sellToken | No | Token symbol (USDC, WETH, VIRTUAL, DEGEN, cbBTC, DAI, AERO, BRETT) OR token address (0x...) | |
| privateKey | No | LEGACY: Private key (0x... hex) to sign and broadcast transactions server-side. Consider using owner_wallet with Privy instead for better security. | |
| sellAmount | No | Amount to sell. Use human-readable (e.g., '1.5' for 1.5 tokens) or atomic units (e.g., '1000000' for 1 USDC). Human-readable is auto-detected. | |
| slippageBps | No | Slippage in basis points (e.g., '50' = 0.50%). Takes precedence over slippagePercentage. | |
| owner_wallet | No | RECOMMENDED: Your wallet address (0x...) that owns a Privy custodial wallet. The swap will be executed securely through your Privy wallet. Create one first using the create_wallet tool. | |
| takerAddress | No | User's wallet address (0x...). When provided WITHOUT owner_wallet or privateKey, returns a secure web bridge URL where user can sign transactions with their browser wallet. | |
| slippagePercentage | No | Slippage tolerance in percent (e.g., '0.5' for 0.5%). Default: 0.5 | 0.5 |
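The three modes are selected by which credential parameter is present. A sketch of that dispatch logic as the description states it; the precedence when more than one credential is supplied is an assumption (the description only defines bridge mode as `takerAddress` without the other two):

```python
def swap_mode(owner_wallet=None, privateKey=None, takerAddress=None):
    # Mirrors the three documented modes:
    #   Privy mode        -> owner_wallet provided (recommended)
    #   private-key mode  -> privateKey provided (less secure)
    #   bridge mode       -> takerAddress only; returns a signing URL
    # Checking owner_wallet first is an assumed precedence.
    if owner_wallet:
        return "privy"
    if privateKey:
        return "private_key"
    if takerAddress:
        return "bridge"
    raise ValueError("provide owner_wallet, privateKey, or takerAddress")
```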
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key behavioral traits: the AI-enhanced natural language parsing capability, the three distinct execution modes with their security implications, the cost ('0.10 USDC'), and the bridge mode's URL return behavior. It doesn't mention rate limits or error handling, but covers most critical operational aspects.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured with clear sections (AI-ENHANCED, THREE MODES, Cost) and uses bullet-like formatting. While somewhat information-dense, every sentence serves a purpose - explaining capabilities, guiding usage, or disclosing costs. It could be slightly more streamlined but remains highly functional.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a complex financial tool with 9 parameters and no annotations or output schema, the description does an excellent job covering execution modes, security considerations, costs, and natural language capabilities. The main gap is the lack of information about return values or error conditions, but given the detailed parameter guidance and behavioral context, it's mostly complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents all 9 parameters thoroughly. The description adds some context about parameter relationships (e.g., 'When provided WITHOUT owner_wallet or privateKey' for takerAddress) and the natural language parsing for the 'request' parameter, but doesn't provide significant additional semantic value beyond what's in the schema descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Execute a token swap'), target platform ('on Base via 0x DEX Aggregator'), and distinguishes this from sibling tools like 'send_tokens' or 'fund_wallet' by focusing on DEX-based token exchange. It goes beyond just restating the name 'swap' by specifying the execution mechanism.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use each of the three modes (Privy Mode, Private Key Mode, Bridge Mode), including prerequisites ('create one first with create_wallet tool') and security considerations ('less secure', 'RECOMMENDED'). It clearly distinguishes between the modes based on the parameters provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
verify_agent (C)
Verify a wallet signature (EIP-191) to authenticate an agent. FREE - no payment required.
| Name | Required | Description | Default |
|---|---|---|---|
| message | Yes | The message that was signed | |
| signature | Yes | The signature to verify | |
| wallet_address | Yes | The wallet address that signed | |
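For context on what the tool is checking: EIP-191 "personal sign" signatures are computed over a prefixed envelope, not the raw message. A stdlib-only sketch of that envelope; actual signer recovery additionally needs keccak-256 and secp256k1, which are outside the standard library and omitted here:

```python
def eip191_prefixed(message: str) -> bytes:
    # EIP-191 personal_sign envelope: the bytes that get hashed and
    # signed are "\x19Ethereum Signed Message:\n" followed by the
    # message's byte length in ASCII decimal, then the message itself.
    body = message.encode()
    return b"\x19Ethereum Signed Message:\n" + str(len(body)).encode() + body
```

For example, `eip191_prefixed("hi")` produces `b"\x19Ethereum Signed Message:\n2hi"`; the verifier recovers the signer from the signature over the keccak-256 hash of those bytes and compares it to `wallet_address`.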
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It states the tool verifies signatures for authentication and is free, but lacks details on error handling, rate limits, security implications, or what happens upon success/failure. For a security-related tool, this is a significant gap in transparency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise and front-loaded, with two sentences that directly convey core functionality and cost. Every word earns its place, with no redundant or vague phrasing. It efficiently communicates the essential information without unnecessary elaboration.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (authentication/verification), lack of annotations, and no output schema, the description is incomplete. It doesn't explain what the tool returns upon verification, error conditions, or integration context. For a security-critical tool with no structured behavioral hints, more descriptive context is needed to guide proper agent usage.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, clearly documenting all three required parameters. The description adds no parameter-specific semantics beyond implying EIP-191 standard usage. Since the schema does the heavy lifting, the baseline score of 3 is appropriate, though the description could have elaborated on parameter relationships or validation rules.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Verify a wallet signature (EIP-191) to authenticate an agent.' It specifies the action (verify), resource (wallet signature), and authentication context. However, it doesn't explicitly differentiate from sibling tools like 'create_wallet' or 'get_wallet_balance' beyond the verification focus.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides minimal usage guidance. It mentions 'FREE - no payment required,' which hints at cost considerations, but offers no explicit when-to-use rules, prerequisites, or alternatives. For example, it doesn't clarify if this should be used before other wallet operations or how it relates to sibling tools like 'erc8004_register' or 'fund_wallet'.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
withdraw_wallet (Grade: A)
Withdraw USDC from your custodial wallet to your owner wallet. Cost: 0.01 USDC. The owner wallet is auto-detected from your x402 payment.
| Name | Required | Description | Default |
|---|---|---|---|
| amount | Yes | Amount of USDC to withdraw from custodial wallet (e.g., '5' for 5 USDC). Must have sufficient balance. | |
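Because `amount` is a string and the schema only hints at the balance requirement, a client could pre-validate before calling. A sketch of such a check (hypothetical helper; the server's actual validation behavior is undocumented):

```python
from decimal import Decimal, InvalidOperation

def validate_withdraw_amount(amount: str, balance: Decimal) -> Decimal:
    """Parse a string amount (e.g. '5' for 5 USDC) and check it against the
    known custodial balance before issuing a withdraw_wallet call."""
    try:
        value = Decimal(amount)
    except InvalidOperation:
        raise ValueError(f"amount is not a number: {amount!r}")
    if value <= 0:
        raise ValueError("amount must be positive")
    if value > balance:
        raise ValueError("insufficient custodial balance")
    return value

print(validate_withdraw_amount("5", Decimal("12.5")))  # prints 5
```

Decimal is used rather than float to avoid rounding drift on token amounts.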
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden and does well by disclosing cost ('Cost: 0.01 USDC') and auto-detection behavior. However, it doesn't mention potential failure modes, rate limits, or what happens with insufficient balance beyond the schema hint.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three concise sentences with zero waste: first states purpose, second provides cost, third explains auto-detection. Each sentence adds distinct value and the description is appropriately front-loaded with the core functionality.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a single-parameter mutation tool with no annotations or output schema, the description provides good context about cost and auto-detection. However, it doesn't explain return values or error conditions, leaving some gaps in completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already fully documents the single 'amount' parameter. The description adds no additional parameter semantics beyond what's in the schema, meeting the baseline expectation for high coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('withdraw'), resource ('USDC from your custodial wallet'), and destination ('to your owner wallet'), distinguishing it from sibling tools like fund_wallet or send_tokens. It provides a complete purpose statement with no ambiguity.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage when transferring USDC from custodial to owner wallet, but doesn't explicitly state when to use this versus alternatives like send_tokens or x402_send. It mentions auto-detection of owner wallet from x402 payment, which provides some context but not clear exclusion criteria.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
x402_proxy (Grade: A)
Access x402-protected APIs with automatic payment. Dynamic pricing: 0.01 USDC fee + target cost. Supports USDC on Base.
| Name | Required | Description | Default |
|---|---|---|---|
| targetUrl | Yes | The x402-protected URL to access | |
| targetBody | No | Request body for POST/PUT requests | |
| paymentProof | No | Transaction hash of USDC payment (required after initial 402 response) | |
| targetMethod | No | HTTP method | GET |
| targetHeaders | No | Additional headers to send to target | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It adds useful context about dynamic pricing ('0.01 USDC fee + target cost') and supported currency ('Supports USDC on Base'), which are not covered in the input schema. However, it lacks details on error handling, rate limits, authentication needs, or what the tool returns, leaving gaps for a mutation tool (implied by 'automatic payment').
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is highly concise and front-loaded, with three short sentences that efficiently convey the core functionality, pricing details, and currency support. Every sentence adds value without redundancy, making it easy to parse quickly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (handling API calls with payments), no annotations, and no output schema, the description is incomplete. It covers purpose and pricing but misses critical behavioral aspects like response format, error cases, or interaction flow (e.g., how 'paymentProof' is used after initial response). This leaves significant gaps for an agent to use the tool effectively.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the input schema already documents all parameters thoroughly. The description doesn't add any parameter-specific semantics beyond what's in the schema (e.g., it doesn't clarify 'targetUrl' format or 'paymentProof' usage). Baseline 3 is appropriate as the schema handles parameter documentation adequately.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Access x402-protected APIs with automatic payment.' It specifies the action (access) and resource (x402-protected APIs) with the key feature of automatic payment handling. However, it doesn't explicitly differentiate from sibling tools like 'x402_proxy_price' or 'x402_send', which appear related to the same x402 system.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context by mentioning 'x402-protected APIs' and 'automatic payment,' suggesting this tool is for interacting with APIs that require payment. However, it doesn't provide explicit guidance on when to use this versus alternatives like 'x402_proxy_price' or 'x402_send,' nor does it specify prerequisites or exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
x402_proxy_price (Grade: B)
Get dynamic pricing for x402 proxy requests. FREE - returns total cost (proxy fee + target cost).
| Name | Required | Description | Default |
|---|---|---|---|
| targetUrl | Yes | The x402-protected target URL to probe for pricing | |
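A typical use for this free probe is gating a paid x402_proxy call behind a budget check. A sketch of that gate, assuming the quoted total comes back as a decimal string (the tool's output format is undocumented):

```python
from decimal import Decimal

def within_budget(quoted_total: str, budget: str) -> bool:
    """Decide whether to proceed with a paid x402_proxy call given the total
    returned by x402_proxy_price (proxy fee + target cost). The string
    format of the quote is an assumption, not a documented contract."""
    return Decimal(quoted_total) <= Decimal(budget)

print(within_budget("0.03", "0.05"))  # prints True
```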
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions the tool is 'FREE' and returns 'total cost (proxy fee + target cost)', which adds some context about cost structure and non-destructive nature. However, it doesn't cover other critical aspects like rate limits, error handling, authentication needs, or response format, leaving significant gaps for a pricing tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise and front-loaded, with two short sentences that efficiently convey the core functionality and key details (FREE, cost breakdown). Every word earns its place, and there's no redundancy or unnecessary elaboration.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (pricing calculation for proxy requests), lack of annotations, and no output schema, the description is incomplete. It doesn't explain the return format (e.g., numeric value, structured data), error conditions, or dependencies, which are essential for an agent to use this tool effectively in a workflow.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, with the parameter 'targetUrl' well-documented in the schema. The description adds minimal value beyond the schema by implying the URL is 'x402-protected' and used for 'pricing', but doesn't provide additional syntax, format details, or examples. This meets the baseline for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Get dynamic pricing for x402 proxy requests.' It specifies the verb ('Get') and resource ('dynamic pricing'), and distinguishes it from siblings like 'x402_proxy' (likely for execution) and 'x402_send' (likely for sending). However, it doesn't explicitly differentiate from 'mcp_pricing' (a sibling tool), which might cause ambiguity.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context by mentioning 'x402 proxy requests' and 'FREE', suggesting it's for cost estimation before actual proxy use. However, it lacks explicit guidance on when to use this versus alternatives like 'x402_proxy' or 'mcp_pricing', and doesn't state prerequisites or exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
x402_send (Grade: A)
Send USDC directly to ANY wallet address via x402 payment. Just provide to_address and amount - the flow auto-completes: 1) Pay 0.01 USDC platform fee 2) Pay the amount directly to destination. Tokens go from you to recipient. Perfect for paying agents, tipping, or sending to any address.
| Name | Required | Description | Default |
|---|---|---|---|
| amount | Yes | Amount of USDC to send (e.g., '5' for 5 USDC). | |
| to_address | Yes | Destination wallet address (0x...) to receive the USDC. | |
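Per the description's two-step flow, a send of N USDC debits N + 0.01 in total: the flat platform fee plus the amount forwarded to the destination. A small cost estimator (hypothetical helper, derived only from the fee stated in the description):

```python
from decimal import Decimal

PLATFORM_FEE = Decimal("0.01")  # flat x402_send fee, per the tool description

def total_cost(amount: str) -> Decimal:
    """Total USDC debited for an x402_send: the platform fee plus the
    amount delivered to to_address."""
    return PLATFORM_FEE + Decimal(amount)

print(total_cost("5"))  # prints 5.01
```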
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes the two-step auto-complete flow (platform fee payment followed by direct transfer), specifies the 0.01 USDC platform fee, and clarifies token movement ('Tokens go from you to recipient'). However, it lacks details on error handling, rate limits, or authentication requirements.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is highly concise and front-loaded, with every sentence earning its place. It starts with the core action, explains the auto-complete flow efficiently in two steps, and ends with usage examples—all without redundant information or unnecessary elaboration.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (financial transaction with a fee structure), no annotations, and no output schema, the description does well by explaining the flow and fee. However, it lacks details on return values, error cases, or confirmation mechanisms, leaving some gaps for a payment tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents both parameters thoroughly. The description adds minimal value beyond the schema by implying the parameters are straightforward ('Just provide to_address and amount'), but doesn't provide additional semantic context like format examples or constraints beyond what's in the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Send USDC directly to ANY wallet address via x402 payment') and distinguishes it from siblings like 'send_tokens' by specifying the x402 payment method and auto-complete flow. It explicitly mentions the resource (USDC) and destination (wallet address), making the purpose unambiguous.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit usage contexts: 'Perfect for paying agents, tipping, or sending to any address.' It distinguishes this tool from alternatives by specifying the x402 payment method and auto-complete flow, which likely differs from other sending tools like 'send_tokens' or 'fund_wallet' in the sibling list.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
zora_wallet (Grade: A)
Get comprehensive Zora NFT wallet data: NFTs minted, NFTs owned, creator profile, total mints/collections, and estimated earnings. FREE - no payment required.
| Name | Required | Description | Default |
|---|---|---|---|
| chain | No | Chain to query (e.g., 'zora', 'base', 'ethereum') | zora |
| address | Yes | Wallet address (EOA or contract) to look up | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It mentions 'FREE - no payment required,' which is useful context about cost, but doesn't disclose other behavioral traits like rate limits, authentication needs, error handling, or what happens if the address is invalid. For a tool with no annotations, this leaves significant gaps in understanding its operation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose in the first clause, followed by specific data points and a cost note. Every sentence earns its place by providing essential information without redundancy, making it efficient and well-structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no annotations and no output schema, the description adequately covers the purpose and cost but lacks details on behavior, return values, or error cases. For a tool with 2 parameters and 100% schema coverage, it's minimally viable but could benefit from more context about what the output looks like or potential limitations.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema fully documents both parameters (chain and address). The description doesn't add any parameter-specific details beyond what's in the schema, such as format examples for the address or chain options. This meets the baseline of 3 since the schema handles the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Get comprehensive Zora NFT wallet data') and enumerates the exact data points retrieved (NFTs minted, NFTs owned, creator profile, total mints/collections, estimated earnings). It distinguishes itself from sibling tools by focusing on Zora NFT wallet analytics rather than general wallet operations, token swaps, or other blockchain functions.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context by specifying 'Zora NFT wallet data' and noting it's 'FREE - no payment required,' which suggests when to use it (for free Zora wallet analysis). However, it doesn't explicitly state when not to use it or name alternatives among sibling tools (e.g., get_wallet_balance for general balances, nansen_wallet_profiler for broader analytics).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
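Before publishing, it is worth round-tripping the claim file to confirm it is valid JSON with the expected fields. A sketch of that check (the email is a placeholder to be replaced with your Glama account email):

```python
import json

# The claim document to be served at https://<your-domain>/.well-known/glama.json
claim = {
    "$schema": "https://glama.ai/mcp/schemas/connector.json",
    "maintainers": [{"email": "your-email@example.com"}],
}

payload = json.dumps(claim, indent=2)
# Round-trip to confirm the file parses cleanly before it goes live.
parsed = json.loads(payload)
assert parsed["maintainers"][0]["email"].count("@") == 1
print(payload)
```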
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.