Server Details

On-chain stablecoin market cap and Bitcoin institutional holdings data.

Status: Healthy
Transport: Streamable HTTP
Repository: xOrbuc/orbuc-mcp-server
GitHub Stars: 0

Server Listing: orbuc-mcp-server

Tool Descriptions: A

Average 3.9/5 across all 12 tools.

Server Coherence: A
Disambiguation: 5/5

Each tool has a clearly distinct purpose with no ambiguity. The BTC tools focus on holdings data with different temporal granularities (current, daily, weekly) and segment breakdowns, while the stablecoin tools cover market cap, supply data, chain distribution, and health checks. There is no functional overlap between tools, making selection straightforward.

Naming Consistency: 5/5

All tool names follow a consistent snake_case pattern with a clear prefix structure: 'orbuc_btc_' for Bitcoin-related tools and 'orbuc_stablecoin_' for stablecoin-related tools. The naming convention is uniform throughout, using descriptive nouns and adjectives to differentiate functions (e.g., 'health', 'current', 'daily', 'segments').

Tool Count: 5/5

With 12 tools, the server is well-scoped for its dual-domain purpose covering BTC holdings and stablecoin data. The count is appropriate, providing comprehensive coverage without bloat, and each tool serves a distinct, necessary function in the data retrieval and monitoring workflow.

Completeness: 5/5

The tool set offers complete coverage for both BTC holdings and stablecoin data domains. For BTC, it includes health checks, current holdings, daily/weekly time series, and segment breakdowns. For stablecoins, it covers health, aggregate data, chain distribution, historical and latest coin-specific data, and market cap tracking, leaving no obvious gaps for typical agent workflows.

Available Tools

12 tools
orbuc_btc_health: A
Read-only, Idempotent

Check health of the BTC holdings tracker.

Returns status, database date range, segment count, and record totals.

Parameters (JSON Schema): no parameters
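
As a concrete sketch, a Streamable HTTP client would POST a JSON-RPC 2.0 `tools/call` request to the server's URL. The helper below only builds the request body (no network call is made); the helper name is illustrative, but the `tools/call` method and `params` shape follow the MCP specification:

```python
import json

def tools_call(name, arguments=None, req_id=1):
    """Build an MCP JSON-RPC 2.0 tools/call request body."""
    return {
        "jsonrpc": "2.0",
        "id": req_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments or {}},
    }

# orbuc_btc_health declares no parameters, so arguments stays empty.
payload = tools_call("orbuc_btc_health")
body = json.dumps(payload)  # serialized request body to POST
```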

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, destructiveHint=false, openWorldHint=true, and idempotentHint=true, covering safety and idempotency. The description adds valuable context by specifying what information is returned (status, database date range, segment count, record totals), which helps the agent understand the output format and utility beyond the annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise with two sentences that directly state the purpose and return values. Every word earns its place, and it's front-loaded with the core function, making it efficient and easy to parse.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (0 parameters, no output schema) and rich annotations, the description is mostly complete. It explains what the tool does and what it returns, but lacks output schema details (e.g., format of returned data). However, for a simple health check tool, this is sufficient, though not exhaustive.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0 parameters and 100% schema description coverage, the baseline is 4 as per the rules. The description doesn't need to explain parameters, and it appropriately focuses on the tool's purpose and return values, adding no redundant information.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'Check' and the resource 'health of the BTC holdings tracker', specifying exactly what the tool does. It distinguishes itself from sibling tools like orbuc_btc_holdings_current or orbuc_stablecoin_health by focusing on BTC tracker health rather than holdings data or stablecoin health.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. While it implicitly suggests use for monitoring health, it doesn't specify scenarios like troubleshooting, system checks, or prerequisites. Without explicit when/when-not instructions or named alternatives, it offers minimal usage guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

orbuc_btc_holdings_current: A
Read-only, Idempotent

Get the latest total BTC held by institutions, broken down by segment.

Segments: public_companies, etf_funds, governments, private_companies,
defi_other, exchanges_custodians.
Parameters (JSON Schema): no parameters

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already provide readOnlyHint=true, destructiveHint=false, idempotentHint=true, and openWorldHint=true. The description adds valuable context about what data is returned (total BTC held by institutions, broken down by specific segments) and that it's the 'latest' data, which helps the agent understand the scope and freshness of information beyond the safety profile indicated by annotations.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences with zero waste: the first states the purpose and scope, the second lists the segments. Every word earns its place, and the information is front-loaded with the core functionality stated immediately.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a parameterless read-only tool with comprehensive annotations (readOnlyHint, idempotentHint, etc.), the description provides complete context about what data is returned (institutional BTC holdings by segment) and that it's the 'latest' data. The main gap is no output schema, but the description compensates well by specifying the breakdown structure. Could be slightly more complete by mentioning data source or update frequency.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The tool has 0 parameters with 100% schema description coverage, so the baseline is 4. The description appropriately doesn't discuss parameters since none exist, and instead focuses on what data is returned (total BTC holdings by segment), which is helpful context for a parameterless tool.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'Get' and the resource 'latest total BTC held by institutions', with specific breakdown by segment. It distinguishes from siblings like orbuc_btc_holdings_daily or orbuc_btc_holdings_weekly by specifying 'latest' (implying current snapshot) and listing the exact segments covered.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for current institutional BTC holdings with segment breakdown, but doesn't explicitly state when to use this tool vs. alternatives like orbuc_btc_holdings_daily (time-series) or orbuc_btc_holdings_segments (segment-specific). The 'latest' qualifier provides some context but no explicit guidance on exclusions or prerequisites.

orbuc_btc_holdings_daily: A
Read-only, Idempotent

Get daily BTC holdings time series, optionally filtered by segment.

Args:
    segment: public_companies, etf_funds, governments, private_companies,
             defi_other, exchanges_custodians. Omit for aggregate totals.
    start: Start date YYYY-MM-DD
    end: End date YYYY-MM-DD
Parameters (JSON Schema):
    end (optional)
    start (optional)
    segment (optional)
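
The documented argument rules above can be sketched as a small validator. The helper name and the validation behavior are illustrative (the listing itself does not say how the server reacts to bad input); the segment values and the YYYY-MM-DD date format come from the tool description:

```python
from datetime import date

# Segment names taken verbatim from the tool description.
SEGMENTS = {"public_companies", "etf_funds", "governments",
            "private_companies", "defi_other", "exchanges_custodians"}

def daily_holdings_args(segment=None, start=None, end=None):
    """Assemble the arguments dict for orbuc_btc_holdings_daily.

    Omitting segment yields aggregate totals; dates must be YYYY-MM-DD.
    """
    args = {}
    if segment is not None:
        if segment not in SEGMENTS:
            raise ValueError(f"unknown segment: {segment}")
        args["segment"] = segment
    for key, value in (("start", start), ("end", end)):
        if value is not None:
            date.fromisoformat(value)  # raises ValueError on bad format
            args[key] = value
    return args

args = daily_holdings_args(segment="etf_funds",
                           start="2024-01-01", end="2024-03-31")
```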
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already provide readOnlyHint=true, destructiveHint=false, idempotentHint=true, and openWorldHint=true, covering safety and idempotency. The description adds context about optional segment filtering and date ranges, which is useful but does not disclose additional behavioral traits like rate limits, authentication needs, or response format. No contradiction with annotations exists.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose, followed by a structured 'Args:' section listing parameters with clear explanations. Every sentence earns its place, with no wasted words, making it efficient and easy to parse.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema and 0% schema description coverage, the description compensates well by detailing parameters and usage. However, it lacks information on return values (e.g., data format, time series structure) and does not mention potential errors or constraints like date range limits, leaving minor gaps for a read-only tool with good annotations.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description carries full burden. It clearly explains all three parameters: 'segment' with specific enum-like values (e.g., public_companies, etf_funds), 'start' and 'end' with date format (YYYY-MM-DD), and notes that omitting segment returns aggregate totals. This adds significant meaning beyond the bare schema.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states 'Get daily BTC holdings time series' which specifies the verb ('Get'), resource ('BTC holdings time series'), and temporal scope ('daily'). It distinguishes from siblings like 'orbuc_btc_holdings_current' (current vs. time series) and 'orbuc_btc_holdings_weekly' (daily vs. weekly), though not all siblings are explicitly differentiated.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage by mentioning optional filtering by segment and date ranges, but does not explicitly state when to use this tool versus alternatives like 'orbuc_btc_holdings_current' for current data or 'orbuc_btc_holdings_weekly' for weekly aggregates. It provides basic context but lacks explicit comparisons or exclusions.

orbuc_btc_holdings_segments: B
Read-only, Idempotent

Get current BTC holdings per segment.

Each segment includes btc_amount and source.
Parameters (JSON Schema): no parameters

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already provide readOnlyHint=true, destructiveHint=false, openWorldHint=true, and idempotentHint=true, covering safety and idempotency. The description adds minimal behavioral context about what data is returned ('Each segment includes btc_amount and source'), but doesn't explain format, pagination, or other operational details beyond annotations.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise with two sentences that directly state the tool's function and the structure of returned data. Every word earns its place with zero wasted information.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a read-only tool with no parameters and good annotations, the description provides basic purpose and return structure. However, without an output schema, it doesn't fully explain the return format (e.g., array of segments, data types) or potential limitations, leaving some gaps for agent understanding.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0 parameters and 100% schema description coverage, the baseline is 4. The description appropriately doesn't discuss parameters since none exist, focusing instead on what the tool returns.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Get current BTC holdings per segment' with specific verb ('Get') and resource ('BTC holdings per segment'). It distinguishes from some siblings like 'orbuc_btc_holdings_current' by specifying 'per segment' breakdown, though not all sibling distinctions are explicit.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives. The description mentions 'per segment' but doesn't explain when this breakdown is needed compared to other holdings tools like 'orbuc_btc_holdings_current', 'orbuc_btc_holdings_daily', or 'orbuc_btc_holdings_weekly'.

orbuc_btc_holdings_weekly: A
Read-only, Idempotent

Get weekly aggregated BTC holdings, optionally filtered by segment.

Args:
    segment: Segment name (optional)
Parameters (JSON Schema):
    segment (optional)
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations provide strong behavioral hints (readOnlyHint: true, openWorldHint: true, idempotentHint: true, destructiveHint: false), covering safety and idempotency. The description adds minimal context beyond this, only noting the optional segment filtering. It doesn't contradict annotations, but it lacks details on rate limits, authentication needs, or response format, which would enhance transparency given the absence of an output schema.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the main purpose in the first sentence, followed by a brief parameter explanation. It uses two sentences efficiently with no wasted words, making it easy to scan and understand quickly. The structure is clear and appropriately sized for the tool's complexity.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (1 optional parameter), rich annotations, but no output schema, the description is adequate but has gaps. It covers the basic purpose and parameter semantics, but lacks details on return values (e.g., data format, aggregation method) and doesn't leverage annotations to explain behavioral implications. It meets minimum viability but could be more complete for optimal agent use.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 1 parameter with 0% description coverage, so the description carries the full burden. It adds meaning by explaining that 'segment' is an optional filter for segment name, which clarifies its purpose beyond the schema's basic type definition. However, it doesn't provide examples or constraints (e.g., valid segment values), leaving some ambiguity. With 0% schema coverage, this is a good but not perfect compensation.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Get weekly aggregated BTC holdings, optionally filtered by segment.' This specifies the verb ('Get'), resource ('weekly aggregated BTC holdings'), and scope ('optionally filtered by segment'), making it easy to understand what the tool does. However, it doesn't explicitly differentiate from siblings like 'orbuc_btc_holdings_daily' or 'orbuc_btc_holdings_current', which would be needed for a score of 5.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context by mentioning optional filtering by segment, suggesting this tool is for weekly aggregated data. However, it doesn't provide explicit guidance on when to use this tool versus alternatives like the daily or current holdings tools, nor does it mention any prerequisites or exclusions. The usage is implied but not clearly articulated.

orbuc_stablecoin_aggregate: A
Read-only, Idempotent

Get daily aggregate stablecoin data at different levels.

level='total': daily total market cap.
level='symbol': daily breakdown per stablecoin.
level='chain': daily breakdown per chain per symbol.
Parameters (JSON Schema):
    level (optional, default: total)
    end_date (optional)
    start_date (optional)
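
The three documented levels can be captured in a small argument builder. The helper name is hypothetical; the accepted level values and the default of 'total' come from the listing above, while the date format is an assumption carried over from the server's other tools:

```python
def aggregate_args(level="total", start_date=None, end_date=None):
    """Arguments for orbuc_stablecoin_aggregate; level defaults to 'total'."""
    if level not in ("total", "symbol", "chain"):
        raise ValueError("level must be 'total', 'symbol', or 'chain'")
    args = {"level": level}
    if start_date is not None:
        args["start_date"] = start_date
    if end_date is not None:
        args["end_date"] = end_date
    return args

# Per-chain, per-symbol daily breakdown for Q1 2024:
args = aggregate_args(level="chain",
                      start_date="2024-01-01", end_date="2024-03-31")
```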
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already provide key behavioral hints (readOnlyHint=true, destructiveHint=false, idempotentHint=true, openWorldHint=true), indicating a safe, read-only operation. The description adds valuable context by specifying that data is 'daily aggregate' and available at different levels, which clarifies scope beyond annotations. It doesn't contradict annotations, as 'Get' aligns with read-only behavior.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is highly concise and well-structured: a clear purpose statement followed by bullet-like explanations of the 'level' options. Every sentence adds value without redundancy, and it's front-loaded with the main intent. No wasted words or unnecessary details.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (3 parameters, no output schema), the description is partially complete. It covers the core functionality and 'level' parameter but omits details on date parameters and return values. Annotations provide safety context, but without an output schema, the description could better address what data is returned. It's adequate but has clear gaps.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the schema provides no parameter details. The description compensates by explaining the 'level' parameter's three options (total, symbol, chain), adding meaning beyond the schema. However, it doesn't cover 'start_date' or 'end_date' parameters, leaving them undocumented. With partial compensation, this meets the baseline for adequate but incomplete coverage.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Get daily aggregate stablecoin data at different levels.' It specifies the verb ('Get') and resource ('daily aggregate stablecoin data'), and distinguishes the three aggregation levels. However, it doesn't explicitly differentiate from sibling tools like 'orbuc_stablecoin_mcap' or 'orbuc_stablecoin_latest', which prevents a perfect score.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage by detailing the three 'level' options (total, symbol, chain), suggesting when to use each based on desired data granularity. However, it lacks explicit guidance on when to use this tool versus alternatives like 'orbuc_stablecoin_mcap' or 'orbuc_stablecoin_latest', and doesn't mention prerequisites or exclusions.

orbuc_stablecoin_chains: A
Read-only, Idempotent

Get stablecoin supply distribution across chains (Ethereum, Tron, Solana, etc.).

Args:
    days: Number of days of history (optional)
Parameters (JSON Schema):
    days (optional)
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate read-only, open-world, idempotent, and non-destructive behavior. The description adds value by specifying the scope ('across chains') and the optional historical parameter ('days'), which are not covered by annotations. It does not contradict annotations, as 'Get' aligns with read-only hints.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose in the first sentence, followed by a concise parameter explanation. Every sentence earns its place, with no wasted words, making it efficient and easy to parse.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (1 optional parameter) and rich annotations, the description is adequate but incomplete. It lacks details on output format, error handling, or rate limits, and with no output schema, these gaps are more significant. However, annotations provide good behavioral context.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage, the schema provides no parameter details. The description compensates by explaining the 'days' parameter as 'Number of days of history (optional)', adding essential meaning beyond the schema. However, it does not specify default values or constraints, leaving some gaps.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'Get' and the resource 'stablecoin supply distribution across chains', with specific examples (Ethereum, Tron, Solana, etc.). It distinguishes this tool from its sibling tools, which focus on BTC holdings, stablecoin aggregates, coins, health, or market cap, rather than chain distribution.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for retrieving stablecoin supply data across multiple chains, but does not explicitly state when to use this tool versus alternatives like 'orbuc_stablecoin_aggregate' or 'orbuc_stablecoin_mcap'. It provides some context but lacks explicit guidance on exclusions or comparisons with siblings.

orbuc_stablecoin_coin: A
Read-only, Idempotent

Get historical supply data for a specific stablecoin.

Args:
    symbol: Stablecoin ticker, e.g. USDT, USDC, DAI (required)
    days: Number of days of history
    start_date: Start date YYYY-MM-DD
    end_date: End date YYYY-MM-DD
Parameters (JSON Schema):
    days (optional)
    symbol (optional)
    end_date (optional)
    start_date (optional)
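
The argument combinations above can be sketched as a builder. Note two assumptions not stated in the listing: the description does not say whether days may be combined with an explicit date range, so this sketch treats them as mutually exclusive, and ticker normalization to uppercase is the author's guess. The helper name is hypothetical:

```python
def coin_history_args(symbol, days=None, start_date=None, end_date=None):
    """Arguments for orbuc_stablecoin_coin (e.g. symbol='USDT')."""
    if days is not None and (start_date is not None or end_date is not None):
        # Assumption: days and an explicit date range are exclusive.
        raise ValueError("pass either days or a start_date/end_date range")
    args = {"symbol": symbol.upper()}  # assumed tickers are uppercase
    if days is not None:
        args["days"] = days
    if start_date is not None:
        args["start_date"] = start_date
    if end_date is not None:
        args["end_date"] = end_date
    return args

args = coin_history_args("usdt", days=30)
```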
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already provide readOnlyHint=true, destructiveHint=false, idempotentHint=true, and openWorldHint=true, covering safety and idempotency. The description adds context about what data is retrieved (historical supply) and parameter usage (e.g., symbol examples like USDT, USDC, DAI), which is valuable beyond annotations. No contradictions with annotations are present.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded: the first sentence states the core purpose clearly, followed by a structured parameter list. Every sentence adds value without redundancy, making it efficient for an AI agent to parse.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (4 parameters, no output schema), the description is largely complete: it covers purpose, parameters, and distinguishes from siblings. However, it lacks details on output format (e.g., time series data structure) and doesn't fully address parameter dependencies (e.g., how 'days' interacts with date ranges), leaving minor gaps.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description carries full burden. It adds meaningful semantics: it explains 'symbol' as a stablecoin ticker with examples, 'days' as number of days of history, and date parameters with format (YYYY-MM-DD). This compensates well for the lack of schema descriptions, though it doesn't detail parameter interactions or defaults.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with a specific verb ('Get') and resource ('historical supply data for a specific stablecoin'). It distinguishes this from sibling tools like 'orbuc_stablecoin_coin_latest' (which likely provides current data) and 'orbuc_stablecoin_aggregate' (which likely provides aggregated data across multiple stablecoins).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for retrieving historical data rather than current data (suggesting 'orbuc_stablecoin_coin_latest' as an alternative for latest data). However, it doesn't explicitly state when to use this tool versus related tools like 'orbuc_stablecoin_health', and it provides no clear exclusions or prerequisites for parameter combinations.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

orbuc_stablecoin_coin_latest (A)
Read-only · Idempotent

Get the latest supply data for a specific stablecoin with chain breakdown.

Args:
    symbol: Stablecoin ticker, e.g. USDT, USDC (required)
Parameters (JSON Schema):
- symbol (required: No, no description, no default)
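As a concrete illustration of invoking this tool, an MCP client would send a JSON-RPC tools/call request like the one below. The method and params shape follow the standard MCP tools/call convention; the request id and the USDT argument value are arbitrary examples.

```python
import json

# Illustrative JSON-RPC payload an MCP client might send to invoke
# orbuc_stablecoin_coin_latest over the server's Streamable HTTP transport.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "orbuc_stablecoin_coin_latest",
        "arguments": {"symbol": "USDT"},  # the tool's only parameter
    },
}

payload = json.dumps(request)
assert json.loads(payload)["params"]["arguments"]["symbol"] == "USDT"
```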
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already provide readOnlyHint=true, destructiveHint=false, idempotentHint=true, and openWorldHint=true, covering safety and idempotency. The description adds valuable context about 'latest supply data' and 'chain breakdown', which are not covered by annotations, enhancing understanding of what data is returned. No contradiction with annotations exists.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose in the first sentence, followed by a brief, structured parameter explanation. Every sentence adds value without redundancy, making it efficient and easy to parse for an AI agent.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (1 parameter, no output schema), the description is reasonably complete. It covers the purpose, parameter meaning, and hints at data structure ('chain breakdown'), though it could benefit from more detail on output format or error handling. Annotations provide good behavioral coverage, reducing the burden on the description.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage, the schema only defines 'symbol' as a string with a default. The description compensates by explaining that 'symbol' is a 'Stablecoin ticker, e.g. USDT, USDC (required)', adding meaning about its purpose and providing examples, which significantly clarifies beyond the bare schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Get the latest supply data') and resource ('for a specific stablecoin with chain breakdown'), distinguishing it from sibling tools like 'orbuc_stablecoin_aggregate' or 'orbuc_stablecoin_latest' by focusing on per-coin data with chain-level details. It uses precise terminology that directly conveys the tool's function.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for retrieving latest stablecoin supply data with chain breakdown, but does not explicitly state when to use this tool versus alternatives like 'orbuc_stablecoin_coin' (which might provide historical data) or 'orbuc_stablecoin_latest' (which might be aggregate). It provides basic context but lacks clear exclusions or named alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

orbuc_stablecoin_health (A)
Read-only · Idempotent

Check health of the stablecoin market cap tracker.

Returns status, latest date, total market cap, and tracked coins/chains.

Parameters (JSON Schema): no parameters
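A typical use of a health check like this is to gate subsequent data queries on tracker status and freshness. The sketch below shows that pattern; the tool's description names status, latest date, total market cap, and tracked coins/chains, but not the exact response keys, so the key names here are assumptions for illustration.

```python
# Hypothetical orbuc_stablecoin_health response; key names are assumed,
# based on the fields the tool description says it returns.
health = {
    "status": "ok",
    "latest_date": "2025-01-01",
    "total_mcap_usd": 200_000_000_000,
    "tracked_coins": 15,
    "tracked_chains": 12,
}

def is_usable(resp: dict) -> bool:
    """A caller might only query data when the tracker is healthy and populated."""
    return resp.get("status") == "ok" and resp.get("tracked_coins", 0) > 0

assert is_usable(health)
assert not is_usable({"status": "down"})
```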

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate read-only, non-destructive, idempotent, and open-world behavior. The description adds value by specifying the return content (status, latest date, total market cap, tracked coins/chains), which helps the agent understand the output structure. No contradictions with annotations exist.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is highly concise and well-structured: two sentences that efficiently state the purpose and return values. Every word adds value, with no redundancy or fluff, making it easy for an agent to parse quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple, parameterless tool with rich annotations (read-only, idempotent, etc.), the description is mostly complete. It explains the return values, compensating for the lack of an output schema. However, it could improve by clarifying the tool's role relative to siblings or specifying typical use cases.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0 parameters and 100% schema coverage, the baseline is 4. The description appropriately doesn't discuss parameters, as none exist, and instead focuses on the tool's output, which is useful given the lack of an output schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Check health of the stablecoin market cap tracker' - a specific verb ('Check') and resource ('stablecoin market cap tracker'). It distinguishes from siblings like 'orbuc_stablecoin_aggregate' or 'orbuc_stablecoin_mcap' by focusing on health/status rather than data retrieval, though the distinction could be more explicit.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives is provided. The description mentions what it returns but doesn't specify use cases like monitoring system status, verifying data freshness, or troubleshooting. With siblings like 'orbuc_stablecoin_latest' for recent data, context for choosing this health check is missing.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

orbuc_stablecoin_latest (A)
Read-only · Idempotent

Get the latest full snapshot of all tracked stablecoins with per-issuer breakdown.

Returns each stablecoin's total_supply_usd, issuer, and per-chain deployment data.
Parameters (JSON Schema): no parameters
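Because the snapshot carries per-coin and per-chain figures, a common follow-up is aggregating it client-side. The sketch below sums the fields the description names (total_supply_usd, issuer, per-chain deployment); the exact response structure and the sample figures are assumptions for illustration.

```python
# Hypothetical orbuc_stablecoin_latest snapshot; structure and numbers
# are illustrative, only the field names come from the tool description.
snapshot = [
    {"symbol": "USDT", "issuer": "Tether", "total_supply_usd": 120.0e9,
     "chains": {"ethereum": 60.0e9, "tron": 60.0e9}},
    {"symbol": "USDC", "issuer": "Circle", "total_supply_usd": 40.0e9,
     "chains": {"ethereum": 30.0e9, "solana": 10.0e9}},
]

# Total supply across all tracked stablecoins.
total = sum(coin["total_supply_usd"] for coin in snapshot)
assert total == 160.0e9

# Per-chain totals across all coins in the snapshot.
by_chain: dict = {}
for coin in snapshot:
    for chain, usd in coin["chains"].items():
        by_chain[chain] = by_chain.get(chain, 0.0) + usd
assert by_chain["ethereum"] == 90.0e9
```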

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations provide readOnlyHint=true, destructiveHint=false, openWorldHint=true, and idempotentHint=true, covering safety and idempotency. The description adds context by specifying it returns 'the latest full snapshot' and details like total_supply_usd and per-chain deployment data, which clarifies the scope and output format beyond annotations. No contradictions are present.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose in the first sentence and adds essential details in the second, with no wasted words. Both sentences earn their place by clarifying scope and return values efficiently.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (0 parameters, rich annotations), the description is largely complete. It explains the output format (total_supply_usd, issuer, per-chain data) which compensates for the lack of output schema. However, it could slightly improve by mentioning data freshness or update frequency for full context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0 parameters and 100% schema description coverage, the baseline is 4 as the schema fully documents the absence of inputs. The description adds value by explaining what data is returned (e.g., total_supply_usd, issuer, per-chain deployment), compensating for the lack of output schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('Get the latest full snapshot') and resources ('all tracked stablecoins with per-issuer breakdown'), distinguishing it from siblings like orbuc_stablecoin_aggregate or orbuc_stablecoin_coin_latest by emphasizing comprehensive, issuer-level data rather than aggregated or single-coin views.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for obtaining the most recent stablecoin data with issuer details, but it does not explicitly state when to use this tool versus alternatives like orbuc_stablecoin_aggregate (which might provide aggregated metrics) or orbuc_stablecoin_coin (which focuses on specific coins). No exclusions or prerequisites are mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

orbuc_stablecoin_mcap (A)
Read-only · Idempotent

Get historical daily total stablecoin market cap across all tracked issuers.

Returns daily total_mcap_usd, daily_change_usd, and daily_change_pct.
Source: Orbuc.
Parameters (JSON Schema):
- days (required: No, no description, no default)
- end_date (required: No, no description, no default)
- start_date (required: No, no description, no default)
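The two derived fields this tool returns relate to the raw series in the obvious way: daily_change_usd is the difference between consecutive total_mcap_usd values, and daily_change_pct expresses that difference relative to the prior day. The sketch below shows the arithmetic; the sample dates and figures are made up for illustration.

```python
# Recomputing the mcap tool's derived fields from a two-day series.
# Sample values are illustrative, not real market data.
series = [
    {"date": "2025-01-01", "total_mcap_usd": 200.0e9},
    {"date": "2025-01-02", "total_mcap_usd": 202.0e9},
]

prev = series[0]["total_mcap_usd"]
curr = series[1]["total_mcap_usd"]

daily_change_usd = curr - prev                      # absolute day-over-day move
daily_change_pct = 100.0 * daily_change_usd / prev  # relative to the prior day

assert daily_change_usd == 2.0e9
assert round(daily_change_pct, 2) == 1.0
```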
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already provide readOnlyHint=true, destructiveHint=false, openWorldHint=true, and idempotentHint=true, covering safety and idempotency. The description adds value by specifying the return values (daily total_mcap_usd, daily_change_usd, daily_change_pct) and the data source (Orbuc), which are not covered by annotations. No contradiction with annotations is present.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose in the first sentence, followed by return details and source in subsequent sentences. It is appropriately sized with no wasted words, making it easy for an agent to parse quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (historical data retrieval with 3 optional parameters), annotations cover safety and idempotency well, and the description specifies return values and source. However, the lack of an output schema and no parameter guidance in the description leaves some gaps, though not critical for basic usage. It is mostly complete but could benefit from parameter context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the schema provides no parameter descriptions, and the tool description itself never mentions the parameters (days, start_date, end_date), leaving their purpose and usage undocumented. Because no parameters are required, the tool still works when called with defaults, so a score of 3 reflects minimal adequacy rather than outright failure.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verb ('Get') and resource ('historical daily total stablecoin market cap across all tracked issuers'), distinguishing it from sibling tools like orbuc_stablecoin_aggregate or orbuc_stablecoin_latest by specifying historical data across all issuers. It explicitly mentions the source (Orbuc) for additional context.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for retrieving historical stablecoin market cap data, but does not explicitly state when to use this tool versus alternatives like orbuc_stablecoin_latest (which likely provides current data) or orbuc_stablecoin_aggregate (which might aggregate differently). No clear exclusions or prerequisites are provided, leaving some ambiguity for the agent.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
