Satoshidata Wallet Intel
Server Details
Bitcoin wallet intelligence for AI agents: trust, labels, transaction verification, fees, and timestamps.
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
- Repository: wrbtc/satoshidata-examples
- GitHub Stars: 0
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
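For orientation, here is a minimal connection sketch over Streamable HTTP, assuming the official MCP Python SDK (the `mcp` package); the gateway URL is a placeholder, since the listing above does not show the real endpoint.

```python
# Minimal connection sketch. Assumptions: the official MCP Python SDK is
# installed as "mcp", and GATEWAY_URL is a placeholder for the real
# Streamable HTTP endpoint, which this listing does not show.
import asyncio

from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

GATEWAY_URL = "https://gateway.example.com/mcp"  # placeholder, not a real endpoint


async def main() -> None:
    # streamablehttp_client yields read/write streams plus a session-id getter.
    async with streamablehttp_client(GATEWAY_URL) as (read, write, _get_session_id):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            print([tool.name for tool in tools.tools])


asyncio.run(main())
```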
Tool Definition Quality
Average 3.2/5 across 29 of 29 tools scored. Lowest: 2.5/5.
Most tools have clear, distinct purposes (e.g., wallet_summary vs. wallet_detail vs. wallet_flow_graph). However, some pulse-related tools (pulse_consolidations, pulse_dormant, pulse_whales) share similar patterns and could cause minor ambiguity, slightly lowering the score.
Naming follows consistent patterns within subgroups (e.g., entity_*, wallet_*, pulse_*, tx_*). However, mixed conventions exist (e.g., verify_timestamp vs. timestamp_hash, and some verbs placed differently), making the overall pattern slightly inconsistent but still readable.
With 29 tools, the server covers a broad range of Bitcoin intelligence features (price, blocks, entities, wallets, mempool, mining pool, pulse, timestamping). While slightly heavy, each tool contributes to a comprehensive offering, earning its place.
The tool set is remarkably thorough for Bitcoin wallet intelligence and on-chain data. It covers CRUD-like operations for entities, wallets, transactions, mempool, mining pools, events, and timestamping; apparent gaps, such as address balance, are likely covered by wallet_summary. No critical operations are missing.
Available Tools
36 tools

address_lookup (Quality: A)
Look up White Rabbit's current entity label, confidence, risk flags, caveats, and upgrade links for a single Bitcoin address. This is the MCP-friendly alias for wallet_trust_safety.
| Name | Required | Description | Default |
|---|---|---|---|
| address | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| ok | Yes | |
| data | No | |
| error | No | |
| endpoint | Yes | |
| status_code | Yes | |
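Because every tool in this listing shares the same five-field output schema, a client can unwrap responses uniformly. The sketch below is one possible reading of that schema; the field types are assumptions, since the schema rows above carry no descriptions.

```python
# Hypothetical reading of the shared response envelope
# (ok / data / error / endpoint / status_code). The listed schema gives
# no field descriptions, so every type below is an assumption.
from typing import Any, TypedDict


class Envelope(TypedDict, total=False):
    ok: bool          # assumed: True on success
    data: Any         # assumed: tool payload, present when ok is True
    error: str        # assumed: error message, present when ok is False
    endpoint: str     # assumed: upstream REST endpoint that was called
    status_code: int  # assumed: upstream HTTP status


def unwrap(envelope: Envelope) -> Any:
    """Return the payload on success; raise with context on failure."""
    if not envelope.get("ok"):
        raise RuntimeError(
            f"{envelope.get('endpoint')} returned "
            f"{envelope.get('status_code')}: {envelope.get('error')}"
        )
    return envelope.get("data")
```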
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must convey behavioral traits. It states the tool looks up data, implying read-only behavior, but does not mention rate limits, data freshness, or whether it requires authentication. The description is adequate but lacks depth beyond the listed return fields.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Description is two sentences, front-loading the main purpose and providing alias context. Every sentence is concise and informative, with no superfluous content.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple lookup tool with one parameter and an output schema, the description covers the essential return fields and alias information. It is complete enough for the AI agent to understand the tool's function, though minor additional context (e.g., output schema reference) could be added.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, and the description only mentions 'for a single Bitcoin address' which aligns with the required address parameter. No additional meaning or format guidance is provided, failing to compensate for the lack of schema descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states the tool looks up White Rabbit entity details for a single Bitcoin address, specifying the exact data returned (label, confidence, risk flags, caveats, upgrade links). It also distinguishes itself from siblings by noting it is an alias for wallet_trust_safety, aiding differentiation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Description implies usage for entity lookup on a Bitcoin address but does not explicitly state when to use this tool over alternatives like address_risk or batch_address_lookup. No exclusions or context signals are provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
address_risk (Quality: A)
Return factual address risk signals: entity label/category, source count, bounded behavioral flags, coarse risk_indicator, and an informational-only disclaimer. This is not AML/KYT/compliance advice.
| Name | Required | Description | Default |
|---|---|---|---|
| address | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| ok | Yes | |
| data | No | |
| error | No | |
| endpoint | Yes | |
| status_code | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must convey behavioral traits. It states the outputs are 'factual address risk signals' and 'informational-only,' implying read-only and non-authoritative behavior. However, it does not disclose potential side effects, rate limits, or authorization requirements.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence plus a disclaimer, both concise and front-loaded with the purpose and key outputs. No redundant or irrelevant information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has one parameter and an output schema (not shown), the description lists expected outputs adequately. It provides sufficient context for a simple risk lookup tool, though it lacks explicit comparison to sibling tools for optimal selection.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, meaning no details in the schema. The description does not elaborate on the 'address' parameter's format, constraints, or acceptable values, failing to compensate for the missing schema documentation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool returns factual address risk signals and lists specific outputs (entity label/category, source count, bounded behavioral flags, coarse risk_indicator, disclaimer). It distinguishes from siblings like address_lookup and entity_lookup by focusing on risk indicators.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description includes a disclaimer that this is not AML/KYT/compliance advice, providing some guidance on limitations. However, it does not explicitly state when to use this tool versus alternatives like address_lookup or entity_lookup, nor does it mention any prerequisites or context for use.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
batch_address_lookup (Quality: A)
Look up labels and trust-safety signals for multiple Bitcoin addresses in one call. Returns the REST batch trust-safety payload and forwards X-WR-API-Key/Bearer auth when present.
| Name | Required | Description | Default |
|---|---|---|---|
| addresses | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| ok | Yes | |
| data | No | |
| error | No | |
| endpoint | Yes | |
| status_code | Yes | |
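As a usage illustration, a batch call through an initialized session (see the connection sketch above) might look like the following; address formats and batch-size limits are assumptions, since neither is documented here.

```python
# Hypothetical batch_address_lookup call. Assumes an initialized
# ClientSession from the connection sketch above; address formats and
# batch-size limits are undocumented in this listing.
from typing import Any

from mcp import ClientSession


async def lookup_batch(session: ClientSession) -> Any:
    # The first address is the well-known genesis address; the second is
    # an illustrative placeholder, not a real address.
    return await session.call_tool(
        "batch_address_lookup",
        {
            "addresses": [
                "1A1zP1eP5QGefi2DMPTfTL5SLmv7DivfNa",
                "bc1q-example-placeholder",
            ]
        },
    )
```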
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It mentions auth forwarding and says it returns the batch payload, but lacks details on idempotency, rate limits, error handling, or batch size limits. It adds some behavioral context but is not comprehensive.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with no fluff. First sentence captures purpose, second adds key behavior (auth forwarding). Front-loaded and efficient.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
An output schema exists, so return-value details are covered. Operational details like idempotency, error handling, and batch size limits are missing. For a one-parameter tool, it's acceptable but not thorough.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 0%, and description only mentions 'multiple Bitcoin addresses' which adds little beyond the schema type. No further constraints, format, or validation hints are provided.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states it 'looks up labels and trust-safety signals' for 'multiple Bitcoin addresses in one call', distinguishing it from the sibling 'address_lookup' which is likely single-address. The batch nature is explicit.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Implicitly suggests use for batch operations by mentioning 'multiple... in one call'. Does not explicitly contrast with single lookup or provide when-not scenarios, but the tool name and description give sufficient context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
bitcoin_price (Quality: B)
Return the current Bitcoin price snapshot and 24-hour change from satoshidata.ai.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| ok | Yes | |
| data | No | |
| error | No | |
| endpoint | Yes | |
| status_code | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions the data source ('satoshidata.ai') and what's returned ('current Bitcoin price snapshot and 24-hour change'), but lacks details on rate limits, authentication needs, error handling, or whether it's a read-only operation. For a tool with zero annotation coverage, this leaves significant gaps in understanding its behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the core functionality ('Return the current Bitcoin price snapshot and 24 hour change') and includes the data source. There is no wasted wording, and it's appropriately sized for a simple tool with no parameters.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (0 parameters, no annotations, but has an output schema), the description is minimally adequate. It explains what the tool does and the data source, but with no annotations, it should ideally mention more behavioral aspects (e.g., read-only nature, potential rate limits). The presence of an output schema means return values don't need explanation, but other contextual gaps remain.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The tool has 0 parameters, and schema description coverage is 100%, so no parameter documentation is needed. The description doesn't add parameter semantics (as there are none), but this is appropriate. A baseline of 4 is applied since the schema fully covers the absence of parameters, and the description doesn't need to compensate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Return the current Bitcoin price snapshot and 24 hour change from satoshidata.ai.' It specifies the verb ('Return'), resource ('Bitcoin price snapshot'), and data source ('satoshidata.ai'). However, it doesn't explicitly differentiate from sibling tools like 'onchain_stats' or 'mempool_stats' that might also provide price-related data, preventing a perfect score.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It doesn't mention sibling tools or contexts where other tools might be more appropriate (e.g., using 'onchain_stats' for broader metrics or 'wallet_summary' for wallet-specific data). The only implied usage is for current price data, but no explicit alternatives or exclusions are stated.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
block_detail (Quality: A)
Return Bitcoin block metadata, coinbase attribution, and a transaction sample for a height or block hash.
| Name | Required | Description | Default |
|---|---|---|---|
| height_or_hash | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| ok | Yes | |
| data | No | |
| error | No | |
| endpoint | Yes | |
| status_code | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are present, so the description carries full burden. It accurately describes the output (metadata, attribution, transaction sample) without mentioning side effects or restrictions. For a read-only tool, this is adequate but minimal.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, 15-word sentence that communicates the tool's purpose efficiently. No extraneous information, and the key components (verb, resource, input) are front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the existence of an output schema, the description does not need to detail return values. It covers input and output nature sufficiently. For a simple tool, it is complete, though could mention accepted formats for the parameter.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The sole parameter 'height_or_hash' has 0% schema description coverage. The description compensates fully by explaining it accepts a block height or hash, adding valuable context beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool returns Bitcoin block metadata, coinbase attribution, and a transaction sample, specifying the input as a height or block hash. It effectively distinguishes from sibling tools that focus on transactions or other resources.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit when-to-use or alternatives are provided. However, the tool has no direct sibling for block data, so the lack of guidance is less critical. The description implicitly suggests use for block information retrieval.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
dormancy_flushes (Quality: C)
Return recent CMCS dormancy awakenings classified as exchange-bound sell pressure, housekeeping consolidation, HODLer rotation, or unknown.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | | |
| since | No | | |
| classification | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| ok | Yes | |
| data | No | |
| error | No | |
| endpoint | Yes | |
| status_code | Yes | |
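Since none of the three optional parameters carries a schema description, the following sketch shows one plausible filtered call; every value format in it is an assumption.

```python
# Hypothetical dormancy_flushes call. limit/since/classification carry no
# schema descriptions, so the value formats below (integer count,
# ISO-8601 timestamp, a label taken from the tool description) are
# assumptions, not documented behavior.
from typing import Any

from mcp import ClientSession


async def recent_awakenings(session: ClientSession) -> Any:
    return await session.call_tool(
        "dormancy_flushes",
        {
            "limit": 25,                         # assumed: max rows returned
            "since": "2024-01-01T00:00:00Z",     # assumed: ISO-8601 cutoff
            "classification": "exchange-bound",  # assumed spelling of a listed class
        },
    )
```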
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must cover behavioral traits. It does not disclose whether the operation is read-only, any side effects, rate limits, or data freshness. This is a minimal description that assumes safe retrieval.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence, which is concise but omits necessary detail. It front-loads the tool's core purpose but sacrifices completeness for brevity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The tool has 3 optional parameters and an output schema, but the description does not explain pagination, the meaning of 'recent', or the output format. It gives a high-level overview but lacks detail for a fully autonomous AI agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, and only the 'classification' parameter is hinted at via the listed categories. 'limit' and 'since' are not explained at all, leaving the agent to infer their purpose from names alone.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool returns recent CMCS dormancy awakenings with specific classifications (exchange-bound, housekeeping, HODLer rotation, unknown). This distinguishes it from sibling tools like pulse_dormant which may cover broader dormant activity.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives such as pulse_dormant or other pulse tools. The description does not mention prerequisites, filters, or exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
entity_categories (Quality: B)
Return available Bitcoin entity rollup categories, counts, and coverage_status distributions.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| ok | Yes | |
| data | No | |
| error | No | |
| endpoint | Yes | |
| status_code | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It only states 'Return' implying a read operation, but does not disclose safety, permissions, or whether it is destructive. Minimal behavioral context is given.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
A single sentence conveys the purpose with no extraneous words. Every word is necessary and clear.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple parameterless tool, the description is adequate but not thorough. It does not explain what 'rollup categories' are or how counts are aggregated. An output schema exists, but its contents are undocumented, so an agent might still need more context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The tool has zero parameters and schema coverage is 100%. The baseline is 4, and the description does not need to add parameter information. No value is lost.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool returns categories and counts for Bitcoin entity rollups. It distinguishes from sibling tools like entity_list and entity_lookup which likely return different data, though no explicit differentiation is provided.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives such as entity_list or entity_recent_activity. An agent cannot determine if this is the right tool for their goal.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
entity_list (Quality: C)
Return named Bitcoin entity rollups, optionally filtered by category. coverage_status indicates whether each rollup has full, substantial, partial, or seed-only coverage; last_activity_at is label-DB activity, not last on-chain transaction time.
| Name | Required | Description | Default |
|---|---|---|---|
| sort | No | | balance |
| limit | No | | |
| category | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| ok | Yes | |
| data | No | |
| error | No | |
| endpoint | Yes | |
| status_code | Yes | |
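A plausible call, assuming the documented sort default and a hypothetical category value (valid categories are presumably discoverable via entity_categories):

```python
# Hypothetical entity_list call. "balance" is the documented default for
# sort; "exchange" is an assumed category value, since the valid set is
# only discoverable via entity_categories.
from typing import Any

from mcp import ClientSession


async def top_entities(session: ClientSession) -> Any:
    return await session.call_tool(
        "entity_list",
        {"sort": "balance", "limit": 10, "category": "exchange"},
    )
```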
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations exist, so the description must fully disclose behavior. It only mentions returning rollups with optional filter, but omits details on sorting, limit handling, pagination, or any side effects. Minimal disclosure for a read operation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence, which is concise but underspecified. While not verbose, it sacrifices necessary detail for brevity, so a mid-range score is appropriate.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 3 parameters, no annotations, and existing output schema, the description is too sparse. It fails to explain sort options, limit constraints, or the nature of 'entity rollups'. Completeness is low.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage, the description should clarify each parameter. It only elaborates on 'category' (optional filter), ignoring 'sort' and 'limit' semantics. The schema provides defaults but no explanation, leaving the agent uninformed.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool returns 'named Bitcoin entity rollups' and mentions optional category filtering. It distinguishes from siblings like entity_lookup and entity_categories, though explicit differentiation is lacking.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use entity_list versus its siblings (e.g., entity_lookup for details, entity_categories for category listing). The agent has no context for appropriate selection.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
entity_lookup (Quality: C)
Return one named Bitcoin entity rollup and a bounded member-address sample. coverage_status indicates whether the rollup has full, substantial, partial, or seed-only coverage; last_activity_at is label-DB activity, not last on-chain transaction time.
| Name | Required | Description | Default |
|---|---|---|---|
| entity_name | Yes | | |
| member_limit | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| ok | Yes | |
| data | No | |
| error | No | |
| endpoint | Yes | |
| status_code | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description should disclose behavioral traits, but it only states the return type. It does not mention whether the tool is read-only, any authorization needs, rate limits, or side effects. The term 'rollup' is ambiguous, and the sampling behavior is not explained.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that communicates the core purpose and hints at parameters. However, the use of 'rollup' adds unnecessary jargon, and the sentence could be restructured for clarity without losing conciseness.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has an output schema, the description does not need to detail return values. However, it lacks context about what a 'rollup' entails and the sampling methodology. For a simple lookup tool, the description is adequate but not comprehensive.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 0%, so the description must compensate. It hints at the 'entity_name' parameter through 'one named', and 'bounded member-address sample' alludes to member_limit. However, it does not explain the default value or the precise meaning of member_limit, leaving ambiguity.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses the specific verb 'Return' and names the resources ('Bitcoin entity rollup' and 'bounded member-address sample'), making the tool's purpose clear. However, the term 'rollup' is vague and never defined, slightly reducing clarity. The single-entity focus implicitly separates it from siblings like entity_list, which returns a list, but the description never draws that distinction explicitly.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives such as entity_list or entity_categories. The description does not state when a user should choose entity_lookup over its siblings, leaving the agent to guess based on the name alone.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
entity_recent_activity (Quality: C)
Return a bounded recent on-chain activity sample for a small named Bitcoin entity.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | | |
| since | No | | |
| entity_name | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| ok | Yes | |
| data | No | |
| error | No | |
| endpoint | Yes | |
| status_code | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries the burden of behavioral disclosure. It indicates a read operation returning a sample, but does not specify that it is non-destructive, any authentication needs, or the meaning of 'recent' and 'bounded'. Adequate but not thorough.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence of 12 words, efficiently conveying the core purpose. No waste, but could be slightly expanded without losing conciseness.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description omits key context: the requirement for an entity name, optional parameters, the definition of 'recent' and 'bounded', and what constitutes a 'small' entity. With an output schema present, the return values are documented elsewhere, but the description still lacks essential usage context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, yet the description does not mention any of the three parameters (entity_name, limit, since). It fails to add meaning beyond the schema, which is insufficient for a tool with multiple parameters.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action (return a sample) and resource (recent on-chain activity for a Bitcoin entity). It differentiates from sibling tools like entity_lookup or entity_categories by focusing on recent activity, but the qualifiers 'bounded' and 'small' are somewhat ambiguous.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool over alternatives, nor any prerequisites or exclusions. It simply states what the tool does without contextual recommendations.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
fees_recommended (Quality: B)
Return current recommended Bitcoin fee estimates from satoshidata.ai.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| ok | Yes | |
| data | No | |
| error | No | |
| endpoint | Yes | |
| status_code | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It states it 'returns' data, implying a read-only operation, but doesn't mention any behavioral traits such as rate limits, data freshness, error handling, or authentication needs. This leaves gaps for an agent to understand operational constraints.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the core purpose with zero waste. It's appropriately sized for a no-parameter tool and doesn't include redundant information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has 0 parameters, 100% schema coverage, and an existing output schema, the description is minimally adequate. However, it lacks context about the reliability and update frequency of the data source (satoshidata.ai) and how the tool compares to its siblings, which could help an agent use it more effectively.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The tool has 0 parameters, and schema description coverage is 100%, so there are no parameters to document. The description appropriately doesn't add unnecessary param details, earning a high baseline score for this dimension.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Return') and resource ('current recommended Bitcoin fee estimates from satoshidata.ai'), making the tool's purpose immediately understandable. It doesn't explicitly differentiate from siblings like 'mempool_stats' or 'onchain_stats', which might also provide fee-related data, so it doesn't reach the highest score.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives like 'mempool_stats' or 'onchain_stats', which might offer overlapping or complementary fee information. The description lacks context about use cases, prerequisites, or exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
mempool_stats (Quality: A)
Return the current Bitcoin mempool size, fee floor, and congestion summary.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| ok | Yes | |
| data | No | |
| error | No | |
| endpoint | Yes | |
| status_code | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It describes the return data (mempool size, fee floor, congestion summary) but does not disclose behavioral traits like rate limits, authentication needs, or whether it's a read-only operation. It adds some context but lacks detailed behavioral disclosure.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the purpose without any wasted words. Every part of the sentence contributes directly to explaining what the tool does.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has 0 parameters, an output schema exists, and no annotations, the description is reasonably complete for a simple data-fetching tool. It specifies the data returned, though it could benefit from more behavioral context given the lack of annotations.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 0 parameters with 100% coverage, so no parameter documentation is needed. The description does not add parameter semantics, but this is acceptable given the lack of parameters, aligning with the baseline for 0 parameters.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Return') and the exact resources ('current Bitcoin mempool size, fee floor, and congestion summary'), distinguishing it from siblings like bitcoin_price or onchain_stats by focusing on mempool metrics rather than price or blockchain data.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage when mempool data is needed, but does not explicitly state when to use this tool versus alternatives like fees_recommended or onchain_stats. It provides basic context but lacks explicit exclusions or comparisons.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
mempool_tx (Quality: B)
Inspect a single unconfirmed Bitcoin transaction currently in the mempool.
| Name | Required | Description | Default |
|---|---|---|---|
| txid | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| ok | Yes | |
| data | No | |
| error | No | |
| endpoint | Yes | |
| status_code | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description must disclose behavioral traits. It only says 'inspect', implying read-only, but does not state behavior on missing txid, rate limits, or any side effects. This is insufficient for a tool with no annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, concise sentence with no wasted words. It is front-loaded and to the point. However, it could include more useful information without becoming overly long.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has an output schema, details about return values are not needed. The description covers the basic scope but lacks context on error handling (e.g., missing txid) and how it differs from siblings. For a one-parameter tool, it is minimally adequate.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema has one required parameter 'txid' with 0% description coverage. The description adds only that it is for a 'single unconfirmed Bitcoin transaction', which is already implied by the tool name and purpose. It does not specify format (e.g., hex string length) or provide additional meaning beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool inspects a single unconfirmed Bitcoin transaction in the mempool. It uses a specific verb ('Inspect') and resource, and the context 'unconfirmed' and 'currently in the mempool' distinguishes it from sibling tools that might handle confirmed transactions or other data.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives like tx_status. There is no mention of prerequisites, exclusions, or context for when this tool is appropriate.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
mining_pool_info (Quality: B)
Return mining-pool attribution by block height/hash, known pool name, or candidate payout address. Block identifiers use /v1/blocks, pool names use /v1/pools/{pool_name}, and addresses use wallet trust-safety.
| Name | Required | Description | Default |
|---|---|---|---|
| address | No | | |
| pool_name | No | | |
| block_hash | No | | |
| block_height | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| ok | Yes | |
| data | No | |
| error | No | |
| endpoint | Yes | |
| status_code | Yes | |
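Because the description maps each identifier type to a different endpoint without documenting multi-parameter behavior, a cautious client might supply exactly one identifier per call, as in this sketch:

```python
# Hypothetical mining_pool_info calls. The description maps each
# identifier type to a different endpoint but does not say what happens
# when several are supplied, so passing exactly one per call is a
# defensive assumption. Height 840000 is the 2024 halving block; the
# pool name is illustrative and may not match the API's naming.
from typing import Any

from mcp import ClientSession


async def pool_attribution(session: ClientSession) -> tuple[Any, Any]:
    by_height = await session.call_tool(
        "mining_pool_info", {"block_height": 840000}
    )
    by_name = await session.call_tool(
        "mining_pool_info", {"pool_name": "Foundry USA"}
    )
    return by_height, by_name
```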
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full burden. It does not disclose read-only status, rate limits, or behavior when multiple parameters are supplied. It only states the return type but lacks important behavioral context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences, no fluff. It front-loads the core purpose and then adds endpoint mapping. Every sentence adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 4 optional parameters, no annotations, and an output schema that mitigates the need to describe returns, the description covers input mapping but lacks details on parameter interaction (e.g., mutual exclusivity) and error handling. It is adequate but not fully comprehensive.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema has 0% description coverage, so the description must add meaning. It does so by linking parameters to endpoints (e.g., pool_name to /v1/pools/{pool_name}, block_hash/height to /v1/blocks). However, it does not fully specify which parameter corresponds to which endpoint or explain address semantics clearly.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it returns 'mining-pool attribution' using block identifiers, pool name, or address. It specifies the resources involved (blocks, pools, wallet trust-safety). However, it does not explicitly differentiate from sibling tools like pool_detail or pool_list, which also relate to pools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides implicit guidance on when to use each parameter (block identifiers, pool name, address) by mapping them to specific endpoints. It does not mention when not to use this tool or alternatives like pool_detail for detailed pool info.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
network_intelligence (Quality: A)
Return the combined Bitcoin network summary for agents: price, fees, mempool, blocks, and White Rabbit chain-intelligence context. Set include_charts=true only when chart arrays are needed.
| Name | Required | Description | Default |
|---|---|---|---|
| include_charts | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| ok | Yes | |
| data | No | |
| error | No | |
| endpoint | Yes | |
| status_code | Yes | |
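A sketch of both documented modes, summary-only and with charts, following the description's guidance to request chart arrays only when they are actually needed:

```python
# Hypothetical network_intelligence calls: omit include_charts for a
# lightweight summary; set it to true only when chart arrays are needed,
# per the tool description.
from typing import Any

from mcp import ClientSession


async def network_summary(session: ClientSession, charts: bool = False) -> Any:
    return await session.call_tool(
        "network_intelligence",
        {"include_charts": True} if charts else {},
    )
```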
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. It describes output components but lacks details on read-only nature, rate limits, or response structure beyond chart arrays.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences effectively convey purpose and parameter usage with no wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Mentions key data categories but lacks detail on timeliness, aggregation, or response format. Given an output schema exists (but not shown), more context would be helpful.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The description adds meaning to the single parameter by explaining when to use it ('only when chart arrays are needed'), going beyond the minimal schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states it returns a combined Bitcoin network summary covering price, fees, mempool, blocks, and White Rabbit context. This distinguishes it from sibling tools that provide individual aspects.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Gives specific guidance on when to set include_charts=true. Implicitly suggests using this tool for a broad summary vs siblings for specific data.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
onchain_stats (Quality: B)
Return the current satoshidata.ai on-chain market and network snapshot.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| ok | Yes | |
| data | No | |
| error | No | |
| endpoint | Yes | |
| status_code | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions returning a 'snapshot' but does not specify if this is a read-only operation, requires authentication, has rate limits, or details the freshness of data. For a tool with zero annotation coverage, this is a significant gap in transparency, though it doesn't contradict any annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, clear sentence: 'Return the current satoshidata.ai on-chain market and network snapshot.' It is front-loaded with the core action and resource, with no wasted words or redundant information. This makes it highly efficient and easy for an agent to parse.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has 0 parameters, 100% schema coverage, and an output schema exists, the description is adequate for a simple data-fetching tool. However, it lacks details on behavioral aspects like data freshness or usage context, which could be important for an agent. The presence of an output schema means return values are documented elsewhere, but the description could still benefit from more context about when and why to use this tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The tool has 0 parameters, and the schema description coverage is 100%, so there are no parameters to document. The description does not need to add parameter semantics, but it correctly implies no inputs are required. A baseline of 4 is appropriate as the schema fully covers the lack of parameters, and the description aligns without adding unnecessary details.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Return the current satoshidata.ai on-chain market and network snapshot.' It specifies the verb ('Return') and resource ('on-chain market and network snapshot'), making it easy to understand what the tool does. However, it does not explicitly differentiate from sibling tools like 'mempool_stats' or 'bitcoin_price', which might offer overlapping or related data, preventing a perfect score.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. With sibling tools such as 'mempool_stats', 'bitcoin_price', and 'wallet_summary', there is no indication of when this specific snapshot is preferred or what unique insights it offers. This lack of context leaves the agent to guess based on tool names alone.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
op_return_decode (Quality: B)
Decode OP_RETURN protocol markers and ordinals inscription envelopes for a Bitcoin transaction.
| Name | Required | Description | Default |
|---|---|---|---|
| txid | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| ok | Yes | |
| data | No | |
| error | No | |
| endpoint | Yes | |
| status_code | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are present, so the description carries full burden. It only says 'decode' implying a read operation, but fails to disclose behavior for missing txs, edge cases, or output structure.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single, front-loaded sentence with no redundant words. Efficiently conveys the core action.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
An output schema exists, but the description is still incomplete: it does not specify which kinds of transactions are supported or note any constraints. It is adequate but minimal.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema coverage, the description must compensate. It mentions 'for a Bitcoin transaction' but doesn't clarify that txid is a transaction hash or format expectations. The parameter name alone is insufficient.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb (decode), resource (OP_RETURN protocol markers and ordinals inscription envelopes), and context (for a Bitcoin transaction). It distinctly separates this tool from siblings like bitcoin_price or tx_status.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance provided on when to use this tool versus alternatives, nor any prerequisites or limitations. The description simply states what it does without usage context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
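As the parameter-coverage note above observes, nothing tells the agent that txid is a 64-character hex transaction id, so a defensive caller should check that locally. Below is a minimal sketch assuming the official MCP Python SDK's streamable-HTTP client; the server URL is a placeholder and the txid format check is our assumption, not something the tool documents.

```python
import asyncio
import re

from mcp import ClientSession  # official MCP Python SDK (assumed installed)
from mcp.client.streamable_http import streamablehttp_client

SERVER_URL = "https://example.invalid/mcp"  # placeholder, not the real endpoint
TXID_HEX = re.compile(r"[0-9a-fA-F]{64}")   # assumed: standard Bitcoin txid format


async def decode_op_return(txid: str):
    if not TXID_HEX.fullmatch(txid):
        raise ValueError("txid should be a 64-character hex string")
    async with streamablehttp_client(SERVER_URL) as (read, write, _):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # The tool returns the ok/data/error envelope documented above.
            return await session.call_tool("op_return_decode", {"txid": txid})


if __name__ == "__main__":
    print(asyncio.run(decode_op_return("aa" * 32)))
```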
pool_detail (grade B)
Return free White Rabbit mining-pool detail for a named pool.
| Name | Required | Description | Default |
|---|---|---|---|
| pool_name | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| ok | Yes | |
| data | No | |
| error | No | |
| endpoint | Yes | |
| status_code | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must fully convey behavioral traits. It indicates a read operation ('Return') but does not disclose potential errors, data freshness, or any side effects. Minimal transparency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence of 12 words with no redundancy. It is front-loaded with the verb and immediately conveys the core function.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simplicity of the tool (one parameter, output schema present), the description is adequate but lacks specifics such as which fields are returned, how to obtain a valid pool name, and what 'free White Rabbit' means. It leaves gaps in understanding.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The single parameter 'pool_name' has no schema description (0% coverage). The tool description does not add any meaning, such as expected format, case sensitivity, or source of the name, leaving the agent to guess.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Return'), the resource ('mining-pool detail'), and the condition ('for a named pool'). It distinguishes from sibling tool 'pool_list' by implying detail retrieval for a single pool versus listing all pools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context by specifying 'for a named pool', but does not explicitly state when to use this tool versus alternatives like 'pool_list' or any exclusions. Guidance is implicit.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
pool_list (grade A)
Return the free White Rabbit mining-pool roster with recent block-share windows.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| ok | Yes | |
| data | No | |
| error | No | |
| endpoint | Yes | |
| status_code | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Lacks annotations, so description must bear full burden. It explains the output includes 'recent block-share windows' but omits details like refresh rate, data recency definition, or any authorization requirements. Minimal but sufficient for a simple list tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
A single, self-contained sentence with no fluff. Every word adds meaning, and the purpose is perfectly front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given zero parameters and existence of an output schema, the description fully covers what the tool does. No missing details for its simplicity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With zero parameters, baseline is 4. The description adds value by specifying the nature of the listed items ('free White Rabbit mining-pool') and additional data ('recent block-share windows'), which is beyond the empty schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb ('Return') and the specific resource ('free White Rabbit mining-pool roster'), which distinguishes it from sibling tools like pool_detail (for individual pool details) or pulse tools (for transaction pulses).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this tool versus alternatives (e.g., pool_detail for specific pool info). The description only states what it does without contextual usage recommendations.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
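The reviews above flag that pool_detail never says where a valid pool_name comes from; the safest pattern is to harvest names from pool_list rather than guessing. A sketch of that chaining follows, under the assumption that pool_list's data payload exposes a list of pools with a name field; both field names are hypothetical.

```python
def pool_names_from_roster(roster_envelope: dict) -> list[str]:
    """Extract candidate pool names from a pool_list envelope.

    The `pools` and `name` field names are hypothetical; inspect a
    real response before relying on them.
    """
    data = roster_envelope.get("data") or {}
    return [p["name"] for p in data.get("pools", []) if "name" in p]


# Chaining idea: call pool_list first, then pool_detail once per name.
roster = {"ok": True, "data": {"pools": [{"name": "Foundry USA"}]},
          "endpoint": "/v1/pools", "status_code": 200}  # hand-built example
for name in pool_names_from_roster(roster):
    print("would call pool_detail with", {"pool_name": name})
```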
pulse_consolidations (grade C)
Return recent consolidation candidates from White Rabbit's live on-chain Pulse feed.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | | |
| since | No | | |
| min_btc | No | | |
| min_inputs | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| ok | Yes | |
| data | No | |
| error | No | |
| endpoint | Yes | |
| status_code | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description must disclose behavioral traits. It does not mention read-only nature, permissions, rate limits, or any side effects. The description only states what it returns without behavioral context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The single sentence is concise to a fault, omitting essential parameter information. While front-loaded with purpose, it sacrifices completeness for brevity, making it less helpful.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite having an output schema, the description fails to provide context for the 4 parameters and does not define what 'consolidation candidates' means. An agent cannot confidently determine when or how to use this tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, and the description does not explain any of the 4 parameters (limit, since, min_btc, min_inputs). Agents must guess their meaning from names alone, which is insufficient for correct invocation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states it returns 'recent consolidation candidates' from a specific source ('White Rabbit's live on-chain Pulse feed'). Verb 'Return' and resource are distinct, differentiating it from siblings like pulse_dormant and pulse_whales.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives. Description does not mention context, prerequisites, or exclusions. The agent must infer usage solely from the name and sibling set.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
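Because none of the four filters carry descriptions, any invocation encodes guesses. The sketch below spells out one plausible reading: limit as a row cap, since as an ISO-8601 cutoff, min_btc as a BTC-denominated floor, and min_inputs as a fan-in floor; all four readings are assumptions.

```python
from datetime import datetime, timedelta, timezone

# Assumed semantics for the pulse_consolidations filters; none of this
# is documented by the tool itself.
one_day_ago = datetime.now(timezone.utc) - timedelta(days=1)
args = {
    "limit": 25,                       # assumed: max events returned
    "since": one_day_ago.isoformat(),  # assumed: ISO-8601 time cutoff
    "min_btc": 50.0,                   # assumed: minimum total BTC moved
    "min_inputs": 20,                  # assumed: minimum consolidation fan-in
}
print(args)
```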
pulse_dormant (grade C)
Return recent dormant-coin reactivations from White Rabbit's live on-chain Pulse feed.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | | |
| since | No | | |
| min_btc | No | | |
| min_age_years | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| ok | Yes | |
| data | No | |
| error | No | |
| endpoint | Yes | |
| status_code | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description only implies a read operation but omits critical behavioral traits like authentication, rate limits, or side effects.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The single sentence is concise but sacrifices necessary information; given the tool's complexity, brevity alone does not earn a high score.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description is too minimal for a tool with 4 parameters and no annotations; it fails to explain what constitutes a 'dormant-coin reactivation' or how parameters affect results.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The description provides no explanation of any parameters (limit, since, min_btc, min_age_years) despite 0% schema coverage, leaving agents to infer meaning from defaults alone.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool returns 'recent dormant-coin reactivations' from a specific source, distinguishing it from siblings like pulse_consolidations or pulse_whales.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives (e.g., other pulse_* tools) or any context about its application.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
pulse_summary (grade A)
Return White Rabbit on-chain Pulse scanner health and 24-hour event totals.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| ok | Yes | |
| data | No | |
| error | No | |
| endpoint | Yes | |
| status_code | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description bears full responsibility for behavioral disclosure. It mentions health and 24-hour totals but does not explain what 'health' entails, whether the data is real-time or cached, or if the operation has side effects or rate limits.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, front-loaded sentence with no unnecessary words. It efficiently conveys the tool's purpose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has no parameters and an output schema exists, the description is largely sufficient. It specifies the 24-hour time range and the two output categories (health and event totals). However, it could improve by clarifying 'health' or mentioning that it aggregates all event types.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has zero parameters, so the baseline is 4. The description adds no parameter details, but none are needed since the tool requires no input.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool returns 'White Rabbit on-chain Pulse scanner health and 24-hour event totals,' specifying the resource (Pulse scanner) and action (return summary). It distinguishes itself from sibling tools like pulse_consolidations, pulse_dormant, and pulse_whales, which focus on specific subsets.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus its siblings (e.g., pulse_whales for whale transactions). The description implies it is for a general overview, but does not explicitly state context or alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
pulse_whales (grade C)
Return recent large Bitcoin movements from White Rabbit's live on-chain Pulse feed.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | | |
| since | No | | |
| min_btc | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| ok | Yes | |
| data | No | |
| error | No | |
| endpoint | Yes | |
| status_code | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries full burden. It implies a read operation ('Return') but does not disclose any behaviors such as auth needs, rate limits, or data freshness. The brief statement lacks necessary behavioral context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single concise sentence with no superfluous words. However, it is so brief that it misses important details, making it just adequate rather than well-structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given that the tool has three parameters and zero schema coverage, the description is incomplete. It does not explain parameter usage or provide context beyond the core purpose. The output schema exists but the description does not reference it.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate. It mentions 'large' but does not explain the parameters (limit, since, min_btc). The titles in the schema provide minimal clues, but the description adds no extra semantics or usage context.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb ('Return') and identifies the resource ('recent large Bitcoin movements' from White Rabbit's live on-chain Pulse feed). It provides a clear purpose and is distinct from other tools, though it could explicitly differentiate from sibling pulse tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is given on when to use this tool versus alternatives like pulse_consolidations or pulse_dormant. There is no mention of prerequisites, context, or exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
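Feed-style tools like the pulse_* group invite cursor polling: remember the newest event you have seen and pass it back as since. The sketch below assumes since accepts the same value the feed returns, and the events/time field names inside data are hypothetical.

```python
import time


def poll_whales(call_tool, min_btc: float = 100.0, interval_s: int = 60):
    """Cursor-poll pulse_whales for new large movements.

    `call_tool` is a hypothetical stand-in for a real MCP client call;
    the `events` and `time` field names inside `data` are guesses.
    """
    since = None
    while True:
        args = {"min_btc": min_btc}
        if since is not None:
            args["since"] = since  # assumed: only events after this cursor
        envelope = call_tool("pulse_whales", args)
        events = (envelope.get("data") or {}).get("events", [])
        for event in events:
            print("whale movement:", event)
        if events:
            since = max(e.get("time", "") for e in events)
        time.sleep(interval_s)
```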
submit_feedback (grade C)
Submit machine-readable label corrections, missing-label suggestions, data-quality reports, or general feedback.
| Name | Required | Description | Default |
|---|---|---|---|
| asks | No | | |
| reason | No | | |
| address | No | | |
| message | No | | |
| summary | No | | |
| category | No | | |
| endpoint | No | | |
| severity | No | | |
| confidence | No | | |
| source_url | No | | |
| current_label | No | | |
| feedback_type | Yes | | |
| suggested_label | No | | |
| suggested_category | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| ok | Yes | |
| data | No | |
| error | No | |
| endpoint | Yes | |
| status_code | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It states what can be submitted but doesn't describe what happens after submission (e.g., confirmation, processing time, side effects), authentication requirements, rate limits, or error conditions. This is a significant gap for a submission tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that lists all feedback types without unnecessary words. It's appropriately sized and front-loaded with the core purpose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity (14 parameters, 1 required), no annotations, and 0% schema coverage, the description is incomplete. It doesn't explain parameter usage, behavioral outcomes, or integration with the output schema. For a submission tool with rich parameters, this leaves significant gaps for an AI agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the schema provides no parameter documentation. The description mentions general feedback types but doesn't explain any of the 14 parameters (like 'feedback_type', 'address', 'confidence', etc.) or their relationships. It adds minimal value beyond the parameter names in the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('submit') and the types of feedback being submitted (label corrections, missing-label suggestions, data-quality reports, or general feedback). It provides a specific verb and resource scope, though it doesn't explicitly differentiate from sibling tools (which appear unrelated to feedback submission).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives, prerequisites, or context for its application. It lists what can be submitted but offers no usage context, timing, or relationship to other tools in the server.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
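With 14 undocumented parameters, this is the riskiest tool in the set to call blind. A hedged example payload for a label correction follows; feedback_type is the only required field, and both the value 'label_correction' and the 0-1 confidence scale are guesses rather than documented vocabulary.

```python
# Hedged example arguments for submit_feedback. Only feedback_type is
# required by the schema; every enum-like value below is a guess.
feedback_args = {
    "feedback_type": "label_correction",   # accepted values are undocumented
    "address": "bc1qexampleaddressxxxxxxxxxxxxxxxxxxxxxxx",  # illustrative
    "current_label": "Unknown Exchange",
    "suggested_label": "Example Exchange cold wallet",
    "reason": "Address appears in Example Exchange's published "
              "cold-wallet list.",
    "source_url": "https://example.invalid/proof-of-reserves",
    "confidence": 0.8,                     # assumed: 0-1 scale
}
print(feedback_args)
```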
timestamp_hash (grade B)
Submit a SHA-256 digest to White Rabbit's Bitcoin timestamping batch.
| Name | Required | Description | Default |
|---|---|---|---|
| hash_hex | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| ok | Yes | |
| data | No | |
| error | No | |
| endpoint | Yes | |
| status_code | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It states the action ('Submit') but doesn't explain what happens after submission (e.g., confirmation, cost, rate limits, or irreversible effects). This leaves significant gaps in understanding the tool's behavior beyond the basic operation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that directly states the tool's purpose without any wasted words. It is appropriately sized and front-loaded, making it easy to grasp quickly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has an output schema (which handles return values) and no annotations, the description covers the basic purpose adequately. However, as a submission tool with potentially irreversible effects and no behavioral details, it does not explain outcomes or constraints, making it minimally viable but with clear gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 1 parameter with 0% description coverage, so the description must compensate. By naming the expected input a 'SHA-256 digest', it implicitly defines hash_hex and adds some meaning beyond the bare schema. However, it doesn't specify format details (e.g., hex string length or validation), leaving room for improvement in clarifying parameter semantics.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Submit') and the resource ('SHA-256 digest to White Rabbit's Bitcoin timestamping batch'), making the purpose understandable. However, it doesn't explicitly differentiate from sibling tools like 'timestamp_quote' or 'verify_timestamp', which prevents a perfect score.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives like 'timestamp_quote' or 'verify_timestamp'. It mentions the context ('Bitcoin timestamping batch') but lacks explicit when-to-use or when-not-to-use instructions, leaving usage unclear relative to siblings.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
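The one input detail the description does pin down is that hash_hex is a SHA-256 digest. A minimal sketch for producing it from a local file follows; the lowercase-hex expectation is our assumption, since the schema states no casing or length rule.

```python
import hashlib
from pathlib import Path


def sha256_hex(path: str) -> str:
    """SHA-256 digest of a file as lowercase hex.

    Lowercase hex is an assumption; the timestamp_hash schema states
    no required casing or length rule for hash_hex.
    """
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()


args = {"hash_hex": sha256_hex(__file__)}  # hashing this script as a demo
print(args)
```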
timestamp_quote (grade A)
Return the current Bitcoin timestamping preflight quote, including the fixed service fee and estimated anchor-fee share.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| ok | Yes | |
| data | No | |
| error | No | |
| endpoint | Yes | |
| status_code | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses that this is a read operation ('Return') and specifies the output content (fee details), which helps the agent understand it's a query tool. However, it doesn't mention potential rate limits, authentication needs, or error conditions, leaving some behavioral gaps.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, well-structured sentence that efficiently conveys the tool's purpose and output details without unnecessary words. It is front-loaded with the main action and resource, making it easy to parse.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (0 parameters) and the presence of an output schema (which handles return values), the description is mostly complete. It clearly states what the tool does and what information it returns. However, it could be slightly more complete by mentioning any prerequisites or typical use cases, though this is minor.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The tool has 0 parameters with 100% schema description coverage, so the schema fully documents the lack of inputs. The description adds value by clarifying the tool's purpose and doesn't need to compensate for any gaps. The baseline for 0 parameters is 4, and the description appropriately addresses the parameter-less nature.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Return') and resource ('current Bitcoin timestamping preflight quote'), including what information it provides ('fixed service fee and estimated anchor-fee share'). It distinguishes from siblings like 'timestamp_hash' (which presumably hashes data) and 'verify_timestamp' (which verifies timestamps).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage when a quote for timestamping is needed, but does not explicitly state when to use this tool versus alternatives like 'timestamp_hash' or 'verify_timestamp'. It provides some context (preflight quote) but lacks explicit guidance on exclusions or prerequisites.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
tx_broadcast (grade B)
Broadcast a fully signed raw Bitcoin transaction hex through satoshidata.ai.
| Name | Required | Description | Default |
|---|---|---|---|
| hex | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| ok | Yes | |
| data | No | |
| error | No | |
| endpoint | Yes | |
| status_code | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description must fully cover behavioral traits. It only states the action without disclosing side effects (e.g., broadcasting consumes fees, irreversible, rate limits, or authentication needs). The description is minimal.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, front-loaded sentence with no unnecessary words. It efficiently conveys the tool's purpose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The tool is simple with one parameter and an output schema, which mitigates the need to explain return values. However, the description lacks usage context and behavioral details, making it adequate but not fully complete for an agent to invoke correctly without additional knowledge.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate. It adds context by specifying 'raw Bitcoin transaction hex', indicating the expected format. However, it does not elaborate on hex requirements (e.g., length, encoding) beyond what the parameter name suggests.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action 'Broadcast' and the resource 'fully signed raw Bitcoin transaction hex', and specifies the platform 'through satoshidata.ai'. This distinguishes it from sibling tools like tx_status or tx_verify, which deal with transactions differently.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives like tx_verify or tx_status. It does not mention prerequisites (e.g., transaction must be fully signed, but that is implied) or warn against misuse, such as broadcasting invalid transactions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
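Because a broadcast is irreversible and the description discloses none of that, a client should gate the call behind local sanity checks. The sketch below validates only that the payload is plausible raw-transaction hex; it deliberately does not prove the transaction is valid or signed, and the minimum-size heuristic is a rough assumption.

```python
def looks_like_raw_tx(hex_str: str) -> bool:
    """Cheap pre-broadcast sanity check.

    This does NOT prove the transaction is valid or fully signed;
    broadcasting is irreversible, so validate properly first. The
    ~60-byte floor is a rough heuristic for the smallest plausible
    transaction, not a consensus rule.
    """
    try:
        raw = bytes.fromhex(hex_str)
    except ValueError:
        return False
    return len(raw) >= 60


candidate = "0200000001" + "00" * 80  # illustrative bytes, not a real transaction
if looks_like_raw_tx(candidate):
    print("would call tx_broadcast with", {"hex": candidate})
else:
    print("refusing to broadcast malformed hex")
```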
tx_status (grade B)
Return a narrow Bitcoin transaction state check: unknown, mempool, conflicted, or confirmed.
| Name | Required | Description | Default |
|---|---|---|---|
| txid | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| ok | Yes | |
| data | No | |
| error | No | |
| endpoint | Yes | |
| status_code | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full burden. It mentions the tool returns a 'narrow' state check but doesn't disclose behavioral traits like error handling, rate limits, authentication needs, or what 'narrow' specifically entails (e.g., limited data vs. quick check). This leaves significant gaps for a tool with no annotation coverage.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the core purpose ('Return a narrow Bitcoin transaction state check') and specifies the possible states. There is no wasted verbiage, making it highly concise and well-structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (one parameter) and the presence of an output schema (which likely covers return values), the description is reasonably complete for its purpose. However, it lacks details on behavioral aspects and parameter semantics, which slightly reduces completeness for a tool with no annotations.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema description coverage is 0%, but the description adds no parameter information beyond what the schema provides (a single 'txid' parameter). It doesn't explain the format, constraints, or meaning of 'txid'. With low coverage, the description fails to compensate, resulting in a baseline score.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Return') and resource ('Bitcoin transaction state check') with explicit enumeration of possible return states ('unknown, mempool, conflicted, or confirmed'). It distinguishes from siblings like 'tx_verify' or 'verify_timestamp' by focusing on transaction status rather than verification or timestamping.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives like 'tx_verify' or 'verify_timestamp', nor does it mention prerequisites or exclusions. It simply states what the tool does without contextual usage information.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
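The four enumerated states make this tool a natural fit for a wait-until-terminal loop. A sketch follows; call_tool is a hypothetical stand-in for a real MCP client call, and the data['state'] path is assumed because the output schema leaves data opaque.

```python
import time

TERMINAL_STATES = {"confirmed", "conflicted"}


def wait_for_state(call_tool, txid: str, poll_s: int = 30, max_polls: int = 40):
    """Poll tx_status until the transaction confirms or conflicts.

    `call_tool` is a hypothetical stand-in for a real MCP client call,
    and the data['state'] path is assumed; the output schema leaves
    `data` opaque. Returns the terminal state, or None on timeout.
    """
    for _ in range(max_polls):
        envelope = call_tool("tx_status", {"txid": txid})
        state = (envelope.get("data") or {}).get("state")
        if state in TERMINAL_STATES:
            return state
        time.sleep(poll_s)
    return None
```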
tx_verify (grade B)
Verify that a Bitcoin transaction paid enough sats to the expected address with enough confirmations.
| Name | Required | Description | Default |
|---|---|---|---|
| txid | Yes | | |
| min_amount_sats | Yes | | |
| expected_address | Yes | | |
| min_confirmations | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| ok | Yes | |
| data | No | |
| error | No | |
| endpoint | Yes | |
| status_code | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions the verification action but doesn't cover critical aspects like error handling, rate limits, authentication needs, or what happens if verification fails (e.g., returns false, throws error). This leaves significant gaps in understanding the tool's behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the core purpose without any wasted words. It directly communicates what the tool does, making it highly concise and well-structured for quick understanding.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (verification with 4 parameters) and the presence of an output schema (which likely handles return values), the description is minimally adequate. However, with no annotations and low schema coverage, it should provide more behavioral and usage context to be fully complete for safe agent operation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The description adds meaningful context beyond the input schema, which has 0% schema description coverage. It explains that parameters relate to verifying payment amount and confirmations, clarifying the purpose of 'min_amount_sats' and 'min_confirmations'. However, it doesn't detail parameter formats (e.g., 'txid' as hex string) or constraints, so it doesn't fully compensate for the schema gap.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Verify that a Bitcoin transaction paid enough sats to the expected address with enough confirmations.' It specifies the verb (verify) and the resource (Bitcoin transaction), and while it doesn't explicitly differentiate from siblings like 'tx_status' or 'verify_timestamp', the focus on payment verification is distinct enough for clarity.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It doesn't mention sibling tools like 'tx_status' (which might check transaction status without verification) or 'verify_timestamp' (which could involve timestamp verification), leaving the agent without context for tool selection.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
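This is the clearest parameter set in the group, since the description maps each argument onto a verification criterion, but the units still deserve a note: min_amount_sats is presumably denominated in satoshis (1 BTC = 100,000,000 sats). The txid and address below are illustrative placeholders.

```python
# Payment-verification arguments for tx_verify; all four are required.
# The txid and address values are illustrative placeholders.
verify_args = {
    "txid": "ab" * 32,   # 64-char hex transaction id
    "expected_address": "bc1qexampleaddressxxxxxxxxxxxxxxxxxxxxxxx",
    "min_amount_sats": 150_000,   # 0.0015 BTC, expressed in satoshis
    "min_confirmations": 3,       # a common low-value threshold
}
print(verify_args)
```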
verify_timestamp (grade B)
Verify a detached OpenTimestamps proof against the Bitcoin blockchain.
| Name | Required | Description | Default |
|---|---|---|---|
| proof_base64 | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| ok | Yes | |
| data | No | |
| error | No | |
| endpoint | Yes | |
| status_code | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It states the action ('verify') but doesn't describe what verification entails (e.g., checks for blockchain inclusion, returns verification status, or handles errors). It lacks details on permissions, rate limits, or response behavior, which are critical for a verification tool with no annotation coverage.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that is front-loaded with the core purpose. There is no wasted wording, and it directly communicates the tool's function without unnecessary elaboration.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (verification against blockchain), no annotations, and an output schema (which likely covers return values), the description is minimally adequate. It states what the tool does but lacks details on behavioral aspects and parameter semantics, leaving gaps in understanding how to use it effectively.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The description doesn't add meaning beyond the input schema, which has 0% coverage (no parameter descriptions). It mentions 'detached OpenTimestamps proof' but doesn't explain the proof_base64 parameter's format or requirements. With one parameter and low schema coverage, the baseline is 3, as the description doesn't compensate for the lack of schema details.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with a specific verb ('verify') and resource ('detached OpenTimestamps proof against the Bitcoin blockchain'). It distinguishes this from siblings like timestamp_hash or timestamp_quote, which likely create timestamps rather than verify them. However, it doesn't explicitly contrast with tx_verify, which might verify Bitcoin transactions rather than timestamp proofs.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context (verifying timestamp proofs), but provides no explicit guidance on when to use this tool versus alternatives like tx_verify or other timestamp-related tools. It doesn't mention prerequisites, such as needing a pre-existing proof, or exclusions for invalid proof formats.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
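The parameter name implies the detached .ots proof must be base64-encoded before submission. A minimal encoding sketch follows, assuming the standard (non-URL-safe) base64 alphabet; the schema does not say which variant the server expects.

```python
import base64
from pathlib import Path


def encode_proof(ots_path: str) -> str:
    """Base64-encode a detached OpenTimestamps proof file.

    Standard base64 is an assumption; the schema does not specify
    whether a URL-safe alphabet or different padding is expected.
    """
    return base64.b64encode(Path(ots_path).read_bytes()).decode("ascii")


# args = {"proof_base64": encode_proof("document.txt.ots")}  # path illustrative
```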
wallet_contributors (grade B)
Return White Rabbit contributor depth and category distribution for a single Bitcoin address.
| Name | Required | Description | Default |
|---|---|---|---|
| address | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| ok | Yes | |
| data | No | |
| error | No | |
| endpoint | Yes | |
| status_code | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It states the tool returns data, implying a read-only operation, but does not specify aspects like rate limits, authentication needs, error handling, or data freshness. This is a significant gap for a tool with no annotation coverage.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, clear sentence that directly states the tool's purpose without unnecessary words. It is front-loaded and efficiently conveys the core functionality, making it easy to parse.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has an output schema, the description does not need to explain return values. However, with no annotations and low schema coverage, it lacks details on behavioral traits and parameter semantics. The description is minimal but adequate for a simple query tool, though it could be more informative.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The description mentions 'a single Bitcoin address,' which aligns with the 'address' parameter in the schema. However, with 0% schema description coverage, the description does not add details like address format or validation rules. It provides basic context but does not fully compensate for the lack of schema documentation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Return') and the resource ('White Rabbit contributor depth and category distribution for a single Bitcoin address'), making the purpose evident. However, it does not explicitly differentiate this tool from sibling tools like 'wallet_detail' or 'wallet_summary', which might also provide wallet-related information, preventing a perfect score.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It does not mention any prerequisites, exclusions, or comparisons to sibling tools such as 'wallet_detail' or 'wallet_summary', leaving the agent without context for tool selection.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
wallet_detail (grade B)
Return grouped White Rabbit label evidence and detail for a single Bitcoin address.
| Name | Required | Description | Default |
|---|---|---|---|
| address | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| ok | Yes | |
| data | No | |
| error | No | |
| endpoint | Yes | |
| status_code | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It states the tool returns data, implying a read-only operation, but lacks details on permissions, rate limits, error handling, or what 'grouped' and 'evidence' entail. For a tool with no annotations, this is insufficient to inform safe and effective use.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, well-structured sentence that efficiently conveys the core purpose without redundancy. It's front-loaded with the key action and resource, making it easy to parse, and every word contributes to understanding the tool's function.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (address analysis with evidence grouping), no annotations, and an output schema present, the description is minimally adequate. It covers the basic purpose but lacks behavioral details and usage context. The output schema may handle return values, but the description doesn't fully address operational nuances for a tool with no annotation support.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The description adds meaningful context for the single parameter by specifying it's 'for a single Bitcoin address', clarifying that 'address' refers to a Bitcoin address. With 0% schema description coverage and only one parameter, this adequately compensates, though it doesn't detail format (e.g., address type) or constraints.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Return') and the resource ('grouped White Rabbit label evidence and detail for a single Bitcoin address'), making the purpose understandable. It distinguishes from siblings like 'wallet_summary' by specifying 'detail' and 'evidence', but doesn't explicitly contrast with 'wallet_contributors' or 'wallet_trust_safety', which may offer related information.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides minimal guidance by specifying 'for a single Bitcoin address', implying it's for individual address analysis. However, it offers no explicit when-to-use rules, alternatives (e.g., vs. 'wallet_summary'), prerequisites, or exclusions, leaving the agent to infer usage from context alone.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
wallet_flow_graph (grade C)
Return a render-ready transaction-flow graph for a Bitcoin wallet address.
| Name | Required | Description | Default |
|---|---|---|---|
| hops | No | | |
| since | No | | |
| address | Yes | | |
| min_btc | No | | |
| direction | No | | both |
| node_limit | No | | |
| include_labels | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| ok | Yes | |
| data | No | |
| error | No | |
| endpoint | Yes | |
| status_code | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations exist, so description carries full burden. It fails to disclose what 'render-ready' means, potential costs, rate limits, or authentication needs. The behavioral context is minimal.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is very concise (one sentence), which is good for quick scanning, but it omits essential details, making it insufficiently informative. It front-loads the purpose but lacks substance.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with 7 parameters, no schema descriptions, and no annotations, the description is severely lacking. Despite an output schema existing, the description provides no guidance on return format or behavior, leaving users uninformed.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%; the description only mentions 'address' but not the other 6 parameters (hops, since, min_btc, etc.). Users cannot understand what each parameter does from the description alone.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool returns a 'render-ready transaction-flow graph' for a Bitcoin wallet address, which is a specific verb+resource. This distinguishes it from siblings like wallet_detail or wallet_summary.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance provided on when to use this tool versus alternatives (e.g., wallet_detail for detailed info, wallet_trust_safety for safety score). No prerequisites or exclusions mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
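With six optional knobs and only direction carrying a documented default ('both'), callers must infer the rest. A hedged example request follows; the traversal-depth, node-cap, and BTC-floor readings of hops, node_limit, and min_btc are assumptions.

```python
# Hedged arguments for wallet_flow_graph. Only `address` is required and
# only `direction` has a documented default ("both"); every other
# interpretation below is an assumption.
graph_args = {
    "address": "bc1qexampleaddressxxxxxxxxxxxxxxxxxxxxxxx",  # illustrative
    "hops": 2,                # assumed: traversal depth from the seed address
    "direction": "both",      # documented default; other accepted values unknown
    "node_limit": 100,        # assumed: cap on nodes in the returned graph
    "min_btc": 0.1,           # assumed: minimum edge value, in BTC
    "include_labels": True,   # assumed: attach White Rabbit labels to nodes
}
print(graph_args)
```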
wallet_summary (grade B)
Return the premium White Rabbit chain intelligence summary for a single Bitcoin address.
| Name | Required | Description | Default |
|---|---|---|---|
| address | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| ok | Yes | |
| data | No | |
| error | No | |
| endpoint | Yes | |
| status_code | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions 'premium' and 'chain intelligence summary', hinting at specialized or enhanced data, but fails to clarify key traits like rate limits, authentication needs, data freshness, or whether this is a read-only operation. This leaves significant gaps in understanding the tool's behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the core functionality without any wasted words. It's appropriately sized for the tool's apparent simplicity, making it easy to parse quickly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has an output schema, the description doesn't need to explain return values, which helps. However, with no annotations and low schema coverage, it lacks details on behavioral traits and parameter nuances. For a tool with siblings like 'wallet_detail', more context on differentiation would improve completeness, but it's minimally adequate.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 0% description coverage, so the description must compensate. It specifies that the tool returns a summary 'for a single Bitcoin address', implying the 'address' parameter is a Bitcoin address, but doesn't add details like format requirements or validation rules. This provides minimal semantic value beyond the schema's structure.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with a specific verb ('Return') and resource ('premium White Rabbit chain intelligence summary for a single Bitcoin address'), making it easy to understand what it does. However, it doesn't explicitly differentiate from sibling tools like 'wallet_detail' or 'wallet_contributors', which might offer overlapping or related functionality.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives, such as 'wallet_detail' or 'wallet_contributors', nor does it mention any prerequisites or exclusions. This lack of context could lead to confusion in tool selection among similar siblings.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
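One thing the output schema does pin down is a response envelope (`ok`, `data`, `error`, `endpoint`, `status_code`), so a caller can at least branch on success without knowing the payload's shape. A minimal sketch, assuming `ok` is a success flag and `data`/`error` are the corresponding payloads; the schema marks both optional and describes neither, so that pairing is an inference:

```python
from typing import Any

def unwrap_envelope(envelope: dict[str, Any]) -> Any:
    """Unpack the documented wallet_summary response envelope.

    Assumes data/error are mutually exclusive and keyed to ok; this is
    inferred from the required/optional flags, not stated anywhere.
    """
    if envelope["ok"]:               # ok, endpoint, status_code are required fields
        return envelope.get("data")  # shape of the summary itself is undocumented
    raise RuntimeError(
        f"{envelope['endpoint']} returned {envelope['status_code']}: "
        f"{envelope.get('error')}"
    )
```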
wallet_trust_safety (B)
Return White Rabbit's free Bitcoin wallet trust and safety teaser for a single address, including the examined marker when no clear category matched.
| Name | Required | Description | Default |
|---|---|---|---|
| address | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| ok | Yes | |
| data | No | |
| error | No | |
| endpoint | Yes | |
| status_code | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions 'return' (implying a read operation) and specifies 'teaser' (suggesting limited or promotional information), but doesn't cover critical aspects like rate limits, authentication needs, error conditions, or what 'teaser' entails (e.g., partial vs. full data). For a tool with zero annotation coverage, this leaves significant gaps in understanding its behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence, front-loaded with the core purpose and covering all essential elements (action, resource, scope) without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (one parameter, no nested objects) and the presence of an output schema (which handles return values), the description is minimally adequate. However, with no annotations and incomplete behavioral transparency, it lacks context on operational aspects like rate limits or error handling. It meets basic needs but has clear gaps for safe and effective use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The description adds meaningful context for the single parameter by specifying 'a single address,' clarifying that the 'address' input should be a Bitcoin wallet address. With 0% schema description coverage and only one parameter, this compensates reasonably well; it stops short of format requirements (e.g., address validation), but given the low parameter count the semantic addition is adequate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Return'), the resource ('White Rabbit's free Bitcoin wallet trust and safety teaser'), and the scope ('for a single address'). It distinguishes this from sibling tools like wallet_detail or wallet_summary by focusing specifically on trust/safety information rather than general details or summaries. However, it doesn't explicitly contrast with all siblings, so it's not a perfect 5.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives like wallet_detail or wallet_summary. It doesn't mention prerequisites, limitations, or specific contexts where this tool is preferred. The agent must infer usage from the purpose alone, which is insufficient for optimal tool selection.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
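Since the description never states which address formats the tool accepts, a cautious caller might pre-screen input before spending a call. The regex below is a coarse heuristic for mainnet base58 and bech32 prefixes; it checks only shape, not checksums, and is a guess at the accepted formats rather than anything the tool documents:

```python
import re

# Coarse mainnet shape check: legacy/P2SH base58 (1... / 3...) or bech32 (bc1...).
# No checksum validation; purely a cheap pre-filter before calling the tool.
_BTC_ADDRESS_RE = re.compile(
    r"^(?:[13][a-km-zA-HJ-NP-Z1-9]{25,34}|bc1[ac-hj-np-z02-9]{11,87})$"
)

def looks_like_btc_address(candidate: str) -> bool:
    return _BTC_ADDRESS_RE.fullmatch(candidate) is not None
```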
whale_alerts (C)
Return recent large Bitcoin transfers with label-DB flow_type and flow_direction classification. range is one of 1d, 7d, or 30d; 24h is accepted by REST as a legacy alias for 1d. flow_direction is one of to_exchange, from_exchange, cross_exchange, or unknown.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | | |
| range | No | | 1d |
| min_btc | No | | |
| flow_type | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| ok | Yes | |
| data | No | |
| error | No | |
| endpoint | Yes | |
| status_code | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are present, so the description carries the full burden. It states only what is returned (large transfers with classification) and does not disclose behavioral traits like rate limits, authentication, output format, or pagination.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, concise sentence with no redundant information. However, it could be expanded with essential details without becoming verbose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Although an output schema exists, the description never explains what the flow_type and flow_direction classifications mean in practice or how the returned data is structured. Of the four input parameters, only 'range' is explained, leaving an agent with an incomplete picture of the tool's behavior.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, and the description compensates only partially: it spells out 'range' (its accepted values and the legacy '24h' alias) but says nothing about 'limit', 'min_btc', or 'flow_type'. It also enumerates 'flow_direction' values even though that field appears in the output, not the input, which muddies the parameter picture rather than clarifying it.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The verb 'return' and the resource 'recent large Bitcoin transfers' clearly state what the tool does, and the flow_type and flow_direction classification differentiates it from siblings like pulse_whales, though that distinction is never made explicit.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives (e.g., pulse_whales). No explicit context, prerequisites, or exclusions are provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
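whale_alerts is the one tool here whose description documents concrete enumerations, so a caller can enforce them client-side. A small sketch based on the quoted values; the types of `limit` and `min_btc`, and the accepted `flow_type` values, are assumptions, since the schema describes none of them:

```python
from typing import Optional

VALID_RANGES = {"1d", "7d", "30d"}  # documented; "24h" is a legacy REST alias for "1d"

def whale_alerts_args(range_: str = "1d",
                      limit: Optional[int] = None,
                      min_btc: Optional[float] = None,
                      flow_type: Optional[str] = None) -> dict:
    """Build an arguments dict for whale_alerts, normalizing the legacy alias.

    limit/min_btc types are guesses; flow_type's accepted values are unknown,
    since the description only enumerates flow_direction, an output field.
    """
    if range_ == "24h":
        range_ = "1d"  # documented legacy alias
    if range_ not in VALID_RANGES:
        raise ValueError(f"range must be one of {sorted(VALID_RANGES)}")
    args: dict = {"range": range_}
    if limit is not None:
        args["limit"] = limit
    if min_btc is not None:
        args["min_btc"] = min_btc
    if flow_type is not None:
        args["flow_type"] = flow_type
    return args
```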
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
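Before waiting on Glama's crawler, you can sanity-check the file yourself. A minimal sketch using `requests`; the domain is a placeholder, and the checks only cover the structure shown above:

```python
import requests

# Placeholder domain; substitute the domain that serves your MCP server.
url = "https://your-server.example/.well-known/glama.json"

resp = requests.get(url, timeout=10)
resp.raise_for_status()
doc = resp.json()

# Verify the two documented fields are present and shaped as expected.
assert doc.get("$schema") == "https://glama.ai/mcp/schemas/connector.json"
assert any("email" in m for m in doc.get("maintainers", [])), "no maintainer email"
print("glama.json is reachable and structurally valid")
```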
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
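If you are diagnosing this yourself, a useful first step is confirming the endpoint is reachable at all, which separates the first two causes from the third. This sketch checks plain HTTP reachability only, not a full MCP handshake; the URL is a placeholder:

```python
import requests

url = "https://your-server.example/mcp"  # placeholder; use the server's listed URL

try:
    # Any HTTP response, even 4xx/5xx, proves the host resolves and answers;
    # an auth error here points at missing or invalid credentials instead.
    resp = requests.post(url, json={}, timeout=10)
    print(f"Reachable: HTTP {resp.status_code}")
except requests.ConnectionError:
    print("Unreachable: likely an outage or a wrong URL")
except requests.Timeout:
    print("Timed out: the server may be down or overloaded")
```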
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.