Glama
Ownership verified

Server Details

On-chain stablecoin market cap and Bitcoin institutional holdings data.

Status: Healthy
Transport: Streamable HTTP

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.

Tool Definition Quality

Score is being calculated. Check back soon.

Available Tools

12 tools
orbuc_btc_health (Grade: A)
Read-only · Idempotent

Check health of the BTC holdings tracker.

Returns status, database date range, segment count, and record totals.

Parameters (JSON Schema)

No parameters
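Because the tool takes no parameters, calling it over MCP is just a `tools/call` request with an empty `arguments` object. A minimal sketch of the JSON-RPC envelope a client would send (the `id` value is arbitrary):

```python
import json

# JSON-RPC 2.0 envelope for an MCP "tools/call" request.
# The tool takes no parameters, so "arguments" is an empty object.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "orbuc_btc_health",
        "arguments": {},
    },
}

payload = json.dumps(request)
```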

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Since no output schema exists, description adds valuable specificity about returned health metrics (status, date range, segment count, record totals) beyond safety annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, front-loaded with purpose; the second compensates for the missing output schema without verbosity. Every word earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Appropriately complete for a simple diagnostic tool; covers purpose and return values given lack of output schema, though could mention typical use timing (e.g., 'verify before queries').

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Zero parameters present; baseline score applies as schema is self-explanatory and description correctly implies no configuration needed.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Specific verb ('Check') + resource ('health of BTC holdings tracker') clearly distinguishes from sibling data-fetching tools (holdings_current, holdings_daily, etc.) though lacks explicit contrast.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when to invoke (e.g., before querying holdings) or when not to use; merely states function.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

orbuc_btc_holdings_current (Grade: A)
Read-only · Idempotent

Get the latest total BTC held by institutions, broken down by segment.

Segments: public_companies, etf_funds, governments, private_companies, defi_other, exchanges_custodians.
Parameters (JSON Schema)

No parameters

Behavior: 4/5

Lists specific segment enums (public_companies, etf_funds, etc.) that annotations don't provide, giving useful context about the structure of the returned data.

Conciseness: 5/5

Two tight sentences with no filler; purpose front-loaded and segment details follow logically.

Completeness: 3/5

Adequate given low complexity, but lacks a return-value specification (object vs. array) despite no output schema existing to document it.

Parameters: 4/5

Zero parameters triggers the baseline score of 4; no compensation needed from the description.

Purpose: 4/5

Clear verb ('Get') and resource ('latest total BTC held by institutions'); 'latest' distinguishes it from the daily/weekly siblings, but it overlaps semantically with the 'holdings_segments' sibling without clarifying the difference.

Usage Guidelines: 2/5

No guidance on when to use this zero-parameter snapshot tool versus the time-series siblings (daily/weekly) or the 'segments' sibling.

orbuc_btc_holdings_daily (Grade: A)
Read-only · Idempotent

Get daily BTC holdings time series, optionally filtered by segment.

Args:
    segment: public_companies, etf_funds, governments, private_companies,
             defi_other, exchanges_custodians. Omit for aggregate totals.
    start: Start date YYYY-MM-DD
    end: End date YYYY-MM-DD
Parameters (JSON Schema): end (optional), start (optional), segment (optional); no descriptions or defaults in the schema.
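The Args block above gives enough to construct a call. A quick sketch of illustrative arguments with light client-side validation; only the segment names and the YYYY-MM-DD date format come from the description, while the specific segment and dates are invented for the example:

```python
from datetime import datetime

# Valid segment values, per the tool description.
VALID_SEGMENTS = {
    "public_companies", "etf_funds", "governments",
    "private_companies", "defi_other", "exchanges_custodians",
}

# Illustrative arguments; omit "segment" entirely for aggregate totals.
arguments = {
    "segment": "etf_funds",
    "start": "2024-01-01",  # YYYY-MM-DD
    "end": "2024-03-31",    # YYYY-MM-DD
}

# Cheap validation before sending the tool call.
assert arguments["segment"] in VALID_SEGMENTS
for key in ("start", "end"):
    datetime.strptime(arguments[key], "%Y-%m-%d")  # raises on bad format
```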
Behavior: 2/5

Annotations provide read-only/idempotent hints; the description adds no behavioral context beyond them (no rate limits, side effects, or auth details).

Conciseness: 5/5

Front-loaded purpose statement followed by a structured Args section; no wasted words.

Completeness: 4/5

Adequately covers the simple three-parameter input space; lacks an output description, but no output schema exists to supplement.

Parameters: 5/5

With 0% schema coverage, the description compensates excellently by enumerating valid segment values and specifying the date format (YYYY-MM-DD).

Purpose: 5/5

Clearly states 'Get daily BTC holdings time series' with a specific verb and resource, distinguishing it from siblings (current/weekly) via 'daily' and 'time series'.

Usage Guidelines: 3/5

Provides filtering guidance ('Omit for aggregate totals') but lacks explicit when-to-use guidance comparing it to siblings like the weekly or current holdings tools.

orbuc_btc_holdings_segments (Grade: B)
Read-only · Idempotent

Get current BTC holdings per segment.

Each segment includes btc_amount and source.
Parameters (JSON Schema)

No parameters

Behavior: 3/5

Adds useful output context ('btc_amount and source') beyond annotations, but lacks details on rate limits, authentication requirements, or segment definitions.

Conciseness: 5/5

Two concise sentences with purpose front-loaded and output details following; no extraneous text.

Completeness: 3/5

Adequate for invocation given low complexity, but gaps remain regarding segment definitions and usage context versus sibling tools.

Parameters: 4/5

The input schema has zero parameters, meeting the baseline requirement; no parameter description needed.

Purpose: 4/5

States a specific action ('Get') and resource ('BTC holdings per segment'), with 'per segment' distinguishing it from sibling aggregate endpoints like orbuc_btc_holdings_current.

Usage Guidelines: 2/5

Provides no guidance on when to use this segmented view versus the aggregate current/daily/weekly endpoints, or on what constitutes a 'segment'.

orbuc_btc_holdings_weekly (Grade: C)
Read-only · Idempotent

Get weekly aggregated BTC holdings, optionally filtered by segment.

Args:
    segment: Segment name (optional)
Parameters (JSON Schema): segment (optional); no description or default in the schema.
Behavior: 2/5

Adds no behavioral context beyond what annotations already provide (read-only, idempotent); misses the opportunity to describe the return format or aggregation method.

Conciseness: 4/5

Appropriately brief and front-loaded with the core purpose; the Args section is somewhat redundant with the schema but doesn't bloat the description.

Completeness: 3/5

Adequate for a single-parameter tool, though it is missing a cross-reference to the sibling segment-listing tool and, given the lack of an output schema, a return-value description.

Parameters: 3/5

Compensates for 0% schema coverage by labeling 'segment' as a segment name and noting it is optional, but lacks an enumeration of valid values or their semantics.

Purpose: 4/5

Clearly states that it retrieves weekly aggregated BTC holdings, distinguishing it from the 'daily' and 'current' siblings via the time-period specifier.

Usage Guidelines: 2/5

Provides no guidance on when to use weekly vs. daily aggregations, or on how to discover valid segment values (likely via orbuc_btc_holdings_segments).

orbuc_stablecoin_aggregate (Grade: A)
Read-only · Idempotent

Get daily aggregate stablecoin data at different levels.

level='total': daily total market cap.
level='symbol': daily breakdown per stablecoin.
level='chain': daily breakdown per chain per symbol.
Parameters (JSON Schema): level (optional, default: total), end_date (optional), start_date (optional); no descriptions in the schema.
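The three documented levels map directly to argument objects. A sketch of one illustrative call per level, with format validation for the date parameters (the date values are invented for the example, and the YYYY-MM-DD assumption is carried over from the sibling tools' descriptions since this tool does not state a format):

```python
from datetime import datetime

# One illustrative argument set per documented aggregation level.
calls = [
    {"level": "total"},                 # daily total market cap
    {"level": "symbol"},                # daily breakdown per stablecoin
    {"level": "chain",                  # daily breakdown per chain per symbol
     "start_date": "2024-01-01",
     "end_date": "2024-06-30"},
]

VALID_LEVELS = {"total", "symbol", "chain"}

for args in calls:
    # "total" is the schema default when level is omitted.
    assert args.get("level", "total") in VALID_LEVELS
    for key in ("start_date", "end_date"):
        if key in args:
            datetime.strptime(args[key], "%Y-%m-%d")  # raises on bad format
```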
Behavior: 4/5

Adds valuable context about what data is returned at each aggregation level (daily total market cap, per-stablecoin breakdowns) beyond the read-only/idempotent annotations.

Conciseness: 5/5

Well structured, with the core purpose first followed by level-specific mappings; no redundant words or filler.

Completeness: 4/5

Adequately explains the return data structure (implicitly, via the level descriptions), compensating for the lack of an output schema, though date-parameter semantics remain uncovered.

Parameters: 3/5

Compensates for 0% schema description coverage by thoroughly explaining the 'level' parameter values, but omits semantics for start_date and end_date even though the schema lacks descriptions.

Purpose: 4/5

Clearly states that the tool retrieves daily aggregate stablecoin data and distinguishes itself from siblings by detailing the unique aggregation levels (total, symbol, chain).

Usage Guidelines: 3/5

Explains what each level returns, but provides no explicit guidance on when to use this tool versus sibling alternatives like orbuc_stablecoin_latest or orbuc_stablecoin_mcap.

orbuc_stablecoin_chains (Grade: A)
Read-only · Idempotent

Get stablecoin supply distribution across chains (Ethereum, Tron, Solana, etc.).

Args:
    days: Number of days of history (optional)
Parameters (JSON Schema): days (optional); no description or default in the schema.
Behavior: 3/5

Annotations declare read-only, idempotent, non-destructive behavior. The description adds value by listing the specific chains covered and clarifying the temporal nature of the data ('history'), but omits rate limits, pagination behavior, and what the default (null) value returns.

Conciseness: 5/5

Efficient two-part structure: core purpose front-loaded in the first sentence, parameter documentation in standard Args format. No redundant or wasted text.

Completeness: 4/5

Appropriate for a simple read-only data tool with one optional parameter. Annotations cover behavioral safety; the description covers the parameter gap. Minor deduction for not specifying the default behavior (current snapshot vs. full history) when days is omitted.

Parameters: 4/5

The schema has 0% description coverage. The description compensates effectively by documenting the single parameter: 'Number of days of history (optional)' explains the semantics clearly, though it could go further by describing the default behavior when null.

Purpose: 5/5

Specific verb ('Get') plus resource ('stablecoin supply distribution') plus scope ('across chains'). The examples (Ethereum, Tron, Solana) clarify the data domain and distinguish it from siblings like orbuc_stablecoin_coin (single coin) and orbuc_stablecoin_aggregate (totals).

Usage Guidelines: 2/5

No explicit guidance on when to use this versus siblings like orbuc_stablecoin_latest (current data) or orbuc_stablecoin_coin (per-coin metrics). While 'history' implies temporal analysis, the description doesn't state selection criteria or prerequisites.

orbuc_stablecoin_coin (Grade: C)
Read-only · Idempotent

Get historical supply data for a specific stablecoin.

Args:
    symbol: Stablecoin ticker, e.g. USDT, USDC, DAI (required)
    days: Number of days of history
    start_date: Start date YYYY-MM-DD
    end_date: End date YYYY-MM-DD
Parameters (JSON Schema): days (optional), symbol (optional), end_date (optional), start_date (optional); no descriptions in the schema.
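The description does not say how `days` interacts with `start_date`/`end_date`. A hedged sketch of a hypothetical helper (`build_coin_args` is not part of the server; it only illustrates one plausible reading, in which the two forms are mutually exclusive):

```python
# Hypothetical helper for assembling orbuc_stablecoin_coin arguments.
# Treating `days` and the date range as mutually exclusive is an
# ASSUMPTION, not documented behavior.
def build_coin_args(symbol, days=None, start_date=None, end_date=None):
    if days is not None and (start_date or end_date):
        raise ValueError("pass either days or a date range, not both")
    args = {"symbol": symbol}  # e.g. USDT, USDC, DAI
    if days is not None:
        args["days"] = days
    if start_date is not None:
        args["start_date"] = start_date  # YYYY-MM-DD
    if end_date is not None:
        args["end_date"] = end_date      # YYYY-MM-DD
    return args
```

Note that the Args text calls `symbol` required while the schema marks it optional with a default; the helper follows the description and always sends it.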
Behavior: 2/5

Adds no behavioral context beyond what the annotations already specify (read-only, idempotent).

Conciseness: 4/5

Brief and front-loaded with purpose, though the Args section duplicates the schema structure without adding significant value.

Completeness: 3/5

Covers basic usage but lacks an explanation of parameter interactions (days vs. date range) and of the return structure.

Parameters: 2/5

Provides examples and formats, but incorrectly claims 'symbol' is required (the schema shows it as optional with a default) and omits the relationship between 'days' and the date parameters.

Purpose: 4/5

Clearly states that it retrieves historical supply data for a specific stablecoin, distinguishing it from the 'latest' variants among its siblings.

Usage Guidelines: 2/5

Provides no guidance on when to use this versus siblings like orbuc_stablecoin_aggregate, or on date-range selection logic.

orbuc_stablecoin_coin_latest (Grade: B)
Read-only · Idempotent

Get the latest supply data for a specific stablecoin with chain breakdown.

Args:
    symbol: Stablecoin ticker, e.g. USDT, USDC (required)
Parameters (JSON Schema): symbol (optional); no description or default in the schema.
Behavior: 3/5

Adds 'chain breakdown' context not in the annotations, but does not expand on the read-only/idempotent behaviors already specified in the structured hints.

Conciseness: 4/5

Brief and front-loaded; the Args section is slightly informal, but every line provides necessary information without redundancy.

Completeness: 3/5

Adequate for a single-parameter tool, though without an output schema it could further clarify what 'chain breakdown' entails in the response.

Parameters: 4/5

Compensates effectively for 0% schema description coverage by providing the symbol's semantic meaning, examples (USDT, USDC), and required status.

Purpose: 4/5

Clearly states that it retrieves the latest supply data for a specific stablecoin with a chain breakdown, distinguishing it from the aggregate and historical siblings.

Usage Guidelines: 2/5

Provides no guidance on when to use this versus orbuc_stablecoin_coin (likely historical) or orbuc_stablecoin_latest (likely all coins).

orbuc_stablecoin_health (Grade: A)
Read-only · Idempotent

Check health of the stablecoin market cap tracker.

Returns status, latest date, total market cap, and tracked coins/chains.

Parameters (JSON Schema)

No parameters

Behavior: 4/5

Adds valuable return-value details (status, latest date, tracked coins/chains) beyond what the annotations provide, without contradicting them.

Conciseness: 5/5

Two sentences, front-loaded with purpose, no filler; every clause delivers essential information.

Completeness: 4/5

Adequately compensates for the missing output schema by enumerating return fields; sufficient for a simple health-check endpoint.

Parameters: 4/5

Baseline score for a zero-parameter tool; the description correctly omits parameter discussion since none exist.

Purpose: 4/5

Clearly states that it checks the health of the stablecoin tracker, distinguishing it from data-retrieval siblings (orbuc_stablecoin_latest, etc.) by focusing on operational status.

Usage Guidelines: 2/5

No guidance on when to use this versus other stablecoin tools (e.g., when to check health vs. fetch actual market cap data).

orbuc_stablecoin_latest (Grade: A)
Read-only · Idempotent

Get the latest full snapshot of all tracked stablecoins with per-issuer breakdown.

Returns each stablecoin's total_supply_usd, issuer, and per-chain deployment data.
Parameters (JSON Schema)

No parameters

Behavior: 4/5

Describes specific return fields (total_supply_usd, issuer, per-chain data), compensating for the missing output schema; consistent with the read-only/idempotent annotations.

Conciseness: 5/5

Two sentences: the first states the purpose, the second details the return values. No fluff, well front-loaded.

Completeness: 4/5

Adequately covers the return structure given the lack of an output schema; appropriate for a simple read-only endpoint.

Parameters: 4/5

A zero-parameter tool meets the baseline; the description appropriately omits parameter details since none exist.

Purpose: 4/5

Clear action ('Get latest full snapshot') and scope ('all tracked stablecoins with per-issuer breakdown'), implicitly distinguishing it from single-coin siblings but lacking explicit differentiation.

Usage Guidelines: 2/5

No guidance on when to use this versus sibling tools like orbuc_stablecoin_coin_latest or orbuc_stablecoin_aggregate.

orbuc_stablecoin_mcap (Grade: B)
Read-only · Idempotent

Get historical daily total stablecoin market cap across all tracked issuers.

Returns daily total_mcap_usd, daily_change_usd, and daily_change_pct.
Source: Orbuc.
Parameters (JSON Schema): days (optional), end_date (optional), start_date (optional); no descriptions in the schema.
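The three documented return fields are internally related, which a consumer can use as a sanity check. A sketch over a hypothetical response row (the field names come from the description; the numbers are invented, and expressing `daily_change_pct` in percent is an assumption):

```python
# Hypothetical response row for orbuc_stablecoin_mcap; numbers invented.
row = {
    "total_mcap_usd": 160_800_000_000.0,
    "daily_change_usd": 800_000_000.0,
    "daily_change_pct": 0.5,
}

# daily_change_pct should match daily_change_usd relative to the
# previous day's total (ASSUMING pct is expressed in percent).
previous_total = row["total_mcap_usd"] - row["daily_change_usd"]
implied_pct = 100.0 * row["daily_change_usd"] / previous_total
```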
Behavior: 4/5

Adds specific return field names (total_mcap_usd, daily_change_usd, daily_change_pct), which is valuable since no output schema exists, and does not contradict the annotations.

Conciseness: 4/5

Well structured and front-loaded, though 'Source: Orbuc' is redundant with the tool-name prefix.

Completeness: 3/5

Adequate for basic invocation since it describes the return values (compensating for the missing output schema), but incomplete due to the lack of parameter semantics and usage guidance.

Parameters: 2/5

The schema has 0% description coverage, and the description fails to compensate by explaining the date parameters (start_date/end_date format, days logic) despite there being three optional parameters.

Purpose: 4/5

Clear verb and resource ('Get historical daily total stablecoin market cap'); distinguishes itself from siblings by specifying 'historical' (vs. orbuc_stablecoin_latest) and 'across all tracked issuers' (vs. the coin/chain variants).

Usage Guidelines: 2/5

No guidance on when to use this versus siblings like orbuc_stablecoin_latest (current data) or orbuc_stablecoin_aggregate, nor any exclusion criteria.
