orbuc-mcp-server
Server Quality Checklist
- Disambiguation: 4/5
Tools are cleanly separated into BTC holdings and stablecoin domains with clear time-granularity distinctions (current/daily/weekly). Minor overlap exists between btc_holdings_current and btc_holdings_segments (both return current segment data), and between stablecoin_aggregate with level='total' and stablecoin_mcap, but descriptions help differentiate them.
- Naming Consistency: 4/5
Consistent 'orbuc_domain_resource' pattern throughout with snake_case. All tools use domain prefixes (btc_ or stablecoin_) appropriately. Minor deviations include the 'mcap' abbreviation instead of 'market_cap' and the mix of temporal suffixes (current, daily) with structural nouns (segments, chains), but naming is predictable overall.
- Tool Count: 5/5
Twelve tools is well-scoped for a financial data server covering two distinct asset classes. The count provides appropriate granularity for BTC holdings (health, current, daily, weekly, segment views) and stablecoins (health, aggregate, chain, specific coin, latest snapshot) without bloat.
- Completeness: 4/5
Strong coverage for stablecoin lifecycle data (historical, current, aggregate, per-chain, per-coin) and BTC holdings time-series. Minor gap: BTC tools only provide aggregate segment data (public_companies, governments, etc.) without entity-specific lookups (e.g., individual company holdings), though agents can filter from segment data.
Average score: 3.7/5 across all 12 tools.
See the tool scores section below for per-tool breakdowns.
This repository includes a README.md file.
This repository includes a LICENSE file.
Latest release: v0.1.0
No tool usage detected in the last 30 days. Usage tracking helps demonstrate server value.
Tip: use the "Try in Browser" feature on the server page to seed initial usage.
Add a glama.json file to provide metadata about your server.
- This server provides 12 tools.
No known security issues or vulnerabilities reported.
This server has been verified by its author.
Add related servers to improve discoverability.
Tool Scores
- Behavior: 3/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Adds 'aggregated' context beyond the readOnlyHint annotation, implying computed/summarized data. Does not disclose data freshness, historical range, or rate limits, but the safety profile is covered by annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness: 4/5
Is the description appropriately sized, front-loaded, and free of redundancy?
Front-loaded purpose statement with a standard Args docstring. Efficient structure with minimal redundancy, though the Args section restates optionality the schema already implies.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness: 3/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a simple single-parameter tool with good safety annotations. Missing a pointer to the segment-discovery sibling (orbuc_btc_holdings_segments) and, with no output schema, lacking aggregation-methodology details.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters: 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage, the description provides basic semantics ('Segment name') and optionality. However, it lacks enumeration guidance or a reference to the segment-discovery sibling tool.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose: 4/5
Does the description clearly state what the tool does and how it differs from similar tools?
Clear verb 'Get' with the specific resource 'weekly aggregated BTC holdings'. Temporally distinguishes itself from siblings (daily, current) via 'weekly', though it does not explicitly contrast with the segment-listing sibling.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines: 2/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
Mentions the filter is optional ('optionally filtered by segment') but provides no guidance on when to prefer this over the daily/current variants or how to discover valid segment values.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
- Behavior: 4/5
Since no output schema exists, the description adds valuable behavioral context by listing the specific returned fields: 'total_mcap_usd, daily_change_usd, and daily_change_pct.' This complements the annotations (which cover safety/idempotency) by disclosing what data structures to expect. It does not contradict the annotations.
- Conciseness: 5/5
The description consists of three efficient sentences: purpose declaration, return-value specification, and source attribution. Every sentence earns its place with zero redundancy or filler.
- Completeness: 3/5
Given the lack of an output schema, the description adequately compensates by documenting return fields. However, with three completely undocumented parameters and no hints about date formats or pagination, it leaves significant gaps for a data retrieval tool.
- Parameters: 1/5
With 0% schema description coverage for the three parameters (days, start_date, end_date), the description fails to compensate by explaining parameter semantics. It does not clarify what 'days' represents (a lookback period?), date string formats, or how the parameters interact.
- Purpose: 4/5
The description clearly states the tool retrieves 'historical daily total stablecoin market cap across all tracked issuers,' using a specific verb and resource. It implicitly distinguishes itself from sibling tools like 'latest' (current data) and 'coin' (specific issuer) by emphasizing 'historical daily' and 'total,' though it doesn't explicitly name alternatives.
- Usage Guidelines: 2/5
The description provides no guidance on when to use this tool versus siblings like 'orbuc_stablecoin_latest' or 'orbuc_stablecoin_aggregate.' It lacks explicit when-to-use conditions, prerequisites (e.g., date format requirements), and exclusions.
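The 1/5 Parameters score stems from a bare input schema. As a hedged sketch, per-parameter descriptions could close that gap; the parameter names come from the review, but the description wording, including the precedence behavior, is an illustrative assumption rather than the server's documented semantics.

```python
# Illustrative input schema for the market cap tool, adding the
# per-parameter descriptions the review found missing (wording assumed).
MCAP_INPUT_SCHEMA = {
    "type": "object",
    "properties": {
        "days": {
            "type": "integer",
            "description": "Lookback window in days, counted back from today.",
        },
        "start_date": {
            "type": "string",
            "description": "Range start in YYYY-MM-DD; takes precedence over 'days'.",
        },
        "end_date": {
            "type": "string",
            "description": "Range end in YYYY-MM-DD; defaults to the latest date.",
        },
    },
}
```

With descriptions present in the schema, the tool description itself no longer has to carry the full semantic burden.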
- Behavior: 3/5
Annotations already establish this is read-only, idempotent, and non-destructive. The description adds domain-specific context that this retrieves supply data, but does not disclose rate limits, data granularity, or behavior when an invalid symbol is provided.
- Conciseness: 4/5
The docstring-style formatting with 'Args:' is efficient and front-loaded. The first sentence establishes purpose immediately. Parameter documentation is compact, though the '(required)' notation for symbol contradicts the schema's zero required fields.
- Completeness: 3/5
For a 4-parameter tool with no output schema, the description adequately covers inputs but lacks any description of return values (data structure, time granularity) or error conditions. Given the openWorldHint annotation, some mention of external data limitations would improve completeness.
- Parameters: 4/5
With 0% schema description coverage, the description carries the full burden and provides critical semantic value: examples for 'symbol' (USDT, USDC, DAI), format hints for dates (YYYY-MM-DD), and semantics for 'days'. It loses a point for not clarifying parameter interactions (whether days and the date range are mutually exclusive).
- Purpose: 4/5
The description clearly states the verb (Get) and resource (historical supply data) with scope (specific stablecoin), which distinguishes it from the sibling 'latest' and 'aggregate' tools. However, it does not explicitly differentiate itself from siblings like 'orbuc_stablecoin_coin_latest' or mention when to prefer a date range over the days parameter.
- Usage Guidelines: 2/5
There is no guidance on when to use this tool versus alternatives (e.g., 'orbuc_stablecoin_coin_latest' for current data or 'orbuc_stablecoin_aggregate' for market-wide history). It also fails to explain the relationship between the 'days' parameter and the 'start_date'/'end_date' parameters (mutually exclusive vs. complementary).
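Whether 'days' and the explicit date range are mutually exclusive is the exact ambiguity noted twice above. A small sketch of the validation a description could then state outright; exclusivity is an assumption here, and the real server may resolve the conflict differently.

```python
def validate_history_window(days=None, start_date=None, end_date=None):
    """Reject ambiguous combinations of lookback and explicit date range."""
    if days is not None and (start_date or end_date):
        raise ValueError("Pass either 'days' or start_date/end_date, not both.")
    if end_date and not start_date:
        raise ValueError("'end_date' requires 'start_date'.")
    return {"days": days, "start_date": start_date, "end_date": end_date}
```

A docstring line as short as "'days' and 'start_date'/'end_date' are mutually exclusive" would spare the agent a failed first call.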
- Behavior: 3/5
Annotations already declare readOnlyHint=true, idempotentHint=true, and destructiveHint=false, covering the safety profile. The description adds context about what data is retrieved (supply distribution across specific chains) but omits details about rate limits, authentication requirements, and what the tool returns when 'days' is null versus populated.
- Conciseness: 5/5
The description is appropriately compact with two distinct sections: a clear purpose statement followed by an Args block. Every sentence provides necessary information without redundancy, making it easy to parse.
- Completeness: 3/5
Given the simple single-parameter schema and comprehensive annotations, the description covers the essentials but lacks information about the return format (no output schema exists) and doesn't clarify the default behavior when 'days' is omitted (current snapshot vs. all historical data).
- Parameters: 4/5
With 0% schema description coverage, the description fully compensates by documenting the single 'days' parameter in the Args section, explaining that it controls the 'Number of days of history' and noting it is optional. This provides the semantic meaning missing from the structured schema.
- Purpose: 4/5
The description clearly states the tool 'Get[s] stablecoin supply distribution across chains' and provides concrete examples (Ethereum, Tron, Solana), effectively distinguishing it from sibling tools like 'orbuc_stablecoin_coin' or 'orbuc_stablecoin_aggregate'. However, it doesn't explicitly contrast usage scenarios with these siblings.
- Usage Guidelines: 2/5
The description provides no guidance on when to use this tool versus alternatives like 'orbuc_stablecoin_aggregate' or 'orbuc_stablecoin_latest', nor does it explain when to provide the 'days' parameter versus omitting it for current data.
- Behavior: 3/5
Annotations declare readOnlyHint=true and idempotentHint=true, covering safety. The description adds valuable behavioral context not in the annotations: the return content ('supply data') and structure ('chain breakdown'). No contradictions with the annotations.
- Conciseness: 4/5
Appropriately structured with the purpose front-loaded in the first sentence, followed by parameter documentation. The 'Args:' block is slightly formal but efficient given the schema's lack of descriptions. No wasted words.
- Completeness: 3/5
Adequate for a single-parameter read tool: describes what it fetches (supply data with chain breakdown) and the required input. It would benefit from a brief return-value description since no output schema exists, but is acceptable given the simple data retrieval pattern.
- Parameters: 4/5
Effectively compensates for 0% schema description coverage by providing the parameter's semantic meaning ('Stablecoin ticker'), concrete examples ('USDT, USDC'), and requirement status ('required'), critical context absent from the schema's bare 'title: Symbol'.
- Purpose: 4/5
Clear verb ('Get') and resource ('latest supply data'), with specific scope ('for a specific stablecoin with chain breakdown'). The 'specific' qualifier distinguishes it from sibling aggregate tools like orbuc_stablecoin_latest, though it doesn't explicitly name the alternative.
- Usage Guidelines: 3/5
Implies a usage context by specifying 'specific stablecoin,' suggesting use when per-coin data is needed rather than aggregate data. However, it lacks explicit when-to-use guidance versus similar siblings like orbuc_stablecoin_coin (without _latest) or orbuc_stablecoin_chains.
- Behavior: 3/5
Annotations already establish read-only, idempotent, non-destructive, and open-world characteristics. The description adds value by enumerating the specific segments returned (public_companies, etf_funds, etc.), providing necessary context about data granularity not found in the schema or annotations.
- Conciseness: 5/5
Two efficient sentences with zero waste. The first front-loads the core action and resource; the second enumerates the segment breakdowns. Every word earns its place.
- Completeness: 3/5
While appropriate for a zero-parameter tool with rich annotations, the description omits clarification of its relationship to the sibling 'orbuc_btc_holdings_segments' and provides no hint about return value structure (totals vs. detailed records), which is relevant given that no output schema exists.
- Parameters: 4/5
The tool accepts zero parameters. Per evaluation guidelines, zero parameters establishes a baseline score of 4. The description appropriately does not invent parameter documentation where none exists.
- Purpose: 4/5
The description clearly states the action ('Get'), resource ('total BTC held by institutions'), and scope ('latest', 'broken down by segment'). However, it does not explicitly differentiate itself from the sibling tool 'orbuc_btc_holdings_segments', which could create selection ambiguity.
- Usage Guidelines: 3/5
The term 'latest' implicitly distinguishes this from the time-series siblings ('daily', 'weekly'), suggesting current point-in-time usage. However, it lacks explicit guidance on when to use this versus 'orbuc_btc_holdings_segments', and does not state prerequisites or filter options.
- Behavior: 4/5
With readOnlyHint and idempotentHint already provided in the annotations, the description appropriately focuses on return-value structure, noting that segments include 'btc_amount and source.' This compensates for the absence of an output schema by previewing the payload shape and key fields, without contradicting the safety annotations.
- Conciseness: 5/5
The description consists of exactly two sentences with zero redundancy. The first states the operation and resource; the second efficiently documents the output structure. Every word earns its place without filler or repetition of structured metadata.
- Completeness: 4/5
Given the tool's simplicity (no inputs) and good annotation coverage declaring it safe and idempotent, the description is functionally complete. It sensibly documents expected output fields (btc_amount, source) to compensate for the missing output schema, though it could briefly clarify what defines a 'segment' to achieve full completeness.
- Parameters: 4/5
The input schema contains zero parameters, establishing a baseline score of 4 per the evaluation rules. The tool requires no arguments, so no additional parameter semantics are needed in the description.
- Purpose: 4/5
The description uses a specific verb ('Get') and identifies the resource ('BTC holdings') and granularity ('per segment'). Mentioning 'segment' effectively distinguishes this from the temporal siblings (_daily, _weekly) and the aggregate view (_current). However, it does not define what constitutes a 'segment' (e.g., by exchange, wallet type, or custodian), which limits the agent's ability to predict applicability.
- Usage Guidelines: 2/5
The description provides no guidance on when to use this tool versus the temporal alternatives (holdings_daily, holdings_weekly) or the total aggregate view (holdings_current). It lacks conditions for selection and prerequisites for use, leaving the agent to infer from naming conventions alone.
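Since the field preview ('btc_amount and source') is the agent's only output contract, it is worth seeing how little text that contract takes. A sketch assuming a Python docstring convention; the record structure is inferred from the review, not confirmed by the server.

```python
def orbuc_btc_holdings_segments() -> list[dict]:
    """Get current BTC holdings per segment.

    Returns:
        A list of segment records, each containing:
        - segment: segment identifier, e.g. 'public_companies'
        - btc_amount: BTC held by that segment
        - source: attribution for the reported figure
    """
    ...  # data retrieval elided in this sketch
```

A Returns block like this is the standard way to compensate when a tool publishes no output schema.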
- Behavior: 4/5
Annotations indicate read-only, idempotent, safe operations. The description adds valuable behavioral context beyond the annotations by detailing the three aggregation granularities (total vs. symbol vs. chain) and what data each returns (market cap, per-stablecoin breakdown). No contradictions with the annotations exist.
- Conciseness: 5/5
The description is efficiently structured with the main purpose in the first sentence, followed by clear, bullet-style explanations for each level option. Every sentence earns its place; there is no redundancy or unnecessary verbosity.
- Completeness: 3/5
Given the lack of an output schema, the description adequately explains the aggregation dimensions but omits important context: expected date string formats, return data structure (time-series format), and pagination behavior. For a three-parameter data aggregation tool, it meets minimum viability but leaves operational gaps.
- Parameters: 3/5
With 0% schema description coverage, the description partially compensates by documenting the 'level' parameter's valid values ('total', 'symbol', 'chain') and their specific semantics. However, it fails to document the 'start_date' and 'end_date' parameters (two of the three), leaving critical format and syntax information unspecified.
- Purpose: 4/5
The description states a specific verb ('Get') and resource ('daily aggregate stablecoin data') and clarifies scope through the three level options. It implicitly distinguishes itself from the sibling 'latest' and 'health' tools by emphasizing 'daily aggregate' and multi-level breakdowns, though it could explicitly state that it is for historical time-series analysis rather than current state.
- Usage Guidelines: 3/5
The description provides implicit usage guidance by explaining what each level value returns ('total' for market cap, 'symbol' for per-coin, 'chain' for per-chain). However, it lacks explicit when-to-use guidance comparing this to siblings like orbuc_stablecoin_latest or orbuc_stablecoin_mcap, and does not mention prerequisites like date format requirements.
- Behavior: 4/5
Annotations already declare readOnlyHint=true and destructiveHint=false. The description adds valuable return-value disclosure (status, database date range, segment count, record totals), which is crucial given the absence of an output schema. No contradictions with the annotations.
- Conciseness: 5/5
Two sentences with zero waste: the first establishes purpose ('Check health...'), the second details return values ('Returns status...'). Front-loaded with an action verb and appropriately scoped for a simple health endpoint.
- Completeness: 4/5
For a zero-parameter health check tool, the description is adequately complete. It compensates for the missing output schema by enumerating the four specific data points returned (status, date range, segment count, records).
- Parameters: 4/5
The input schema contains zero parameters, which establishes a baseline of 4. The description correctly implies that no configuration is needed for a health check endpoint, matching the empty schema without redundancy.
- Purpose: 5/5
The description uses the specific verb 'Check' with the resource 'BTC holdings tracker' and clearly distinguishes this health diagnostic from the sibling data retrieval tools (orbuc_btc_holdings_current, daily, weekly, etc.) by focusing on system status rather than financial data.
- Usage Guidelines: 3/5
The term 'health check' implies diagnostic usage rather than data retrieval, but there is no explicit guidance on when to use this versus the holdings data tools, no prerequisites mentioned, and no 'when-not-to-use' exclusions provided.
- Behavior4/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare readOnlyHint=true, idempotentHint=true, and destructiveHint=false, covering the safety profile. The description adds valuable behavioral context by disclosing the specific return fields (status, latest date, total market cap, tracked coins/chains), which compensates for the absence of an output schema. No contradictions with annotations are present.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Conciseness5/5Is the description appropriately sized, front-loaded, and free of redundancy?
The description consists of two efficient sentences: the first establishes the operation's purpose, and the second details the return payload. There is no redundant or wasted text; every word serves to clarify the tool's function or output.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Completeness4/5Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (zero parameters) and rich annotations covering operational safety, the description is appropriately complete. It successfully compensates for the missing output schema by enumerating the returned fields. A minor gap exists in not describing error states or explicit monitoring workflows, but this is sufficient for a health-check endpoint.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Parameters4/5Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema contains zero parameters, establishing a baseline score of 4. With no parameters requiring semantic clarification, the description appropriately focuses on the operation's purpose and return values rather than parameter details.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose5/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description provides a specific verb ('Check') + resource ('health of the stablecoin market cap tracker') that clearly identifies this as a health monitoring endpoint. It effectively distinguishes itself from sibling data-retrieval tools like orbuc_stablecoin_mcap and orbuc_stablecoin_latest by explicitly targeting 'health' status rather than current market data.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines3/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
While the description implies usage context (monitoring tracker health), it lacks explicit when-to-use guidance such as 'use this to verify data freshness before calling other stablecoin tools' or contrast with alternatives. The distinction from siblings is implicit in the 'health' terminology but not explicitly articulated.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
- Behavior4/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already establish read-only, idempotent, non-destructive safety profile. The description adds valuable behavioral context by enumerating valid segment values (public_companies, etf_funds, etc.) and specifying date formats (YYYY-MM-DD), which is crucial given the openWorldHint suggests external data constraints.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness4/5
Is the description appropriately sized, front-loaded, and free of redundancy?
Uses an efficient 'Args:' structure that front-loads the purpose in one sentence, then documents parameters. Given the zero schema coverage necessitating manual parameter documentation, the length is appropriate with no wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness3/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequately covers input parameters given schema gaps, but lacks description of return value structure (fields, units, granularity) which would be helpful since no output schema exists. Does not mention pagination, rate limits, or data freshness for the time series.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters5/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage, the description fully compensates by documenting all three parameters: it lists valid segment enum values, explains segment omission behavior, and specifies date string formats for start/end parameters. This is exemplary compensation for schema deficiencies.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose5/5
Does the description clearly state what the tool does and how it differs from similar tools?
Description explicitly states 'Get daily BTC holdings time series' with specific verb and resource. The 'daily' and 'time series' qualifiers clearly distinguish this from sibling tools 'orbuc_btc_holdings_current' (point-in-time) and 'orbuc_btc_holdings_weekly' (different granularity), while the segment filtering mention distinguishes it from 'orbuc_btc_holdings_segments'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines4/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit guidance on segment parameter ('Omit for aggregate totals') clarifying when to use filtering vs. totals. However, it lacks explicit comparison guidelines for when to choose daily vs. weekly granularity or current vs. historical data, though the tool naming makes this somewhat inferable.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
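As a concrete illustration of the parameter contract described above, here is a hypothetical argument-building helper for the daily holdings tool (the function itself is illustrative, not part of the server; the segment values and the YYYY-MM-DD date format are taken from the tool description, and the tool name is inferred from its siblings):

```python
from datetime import datetime

def build_btc_holdings_daily_args(segment=None, start=None, end=None):
    """Assemble arguments for a call to the daily BTC holdings tool.

    segment: one of the documented enum values (e.g. 'public_companies',
             'etf_funds'), or None to request aggregate totals.
    start/end: 'YYYY-MM-DD' date strings, validated before sending.
    """
    args = {}
    for key, value in (("start", start), ("end", end)):
        if value is not None:
            # Raises ValueError if the date is not in YYYY-MM-DD form.
            datetime.strptime(value, "%Y-%m-%d")
            args[key] = value
    if segment is not None:
        args["segment"] = segment
    return args

# Omitting segment yields aggregate totals, per the description.
print(build_btc_holdings_daily_args(start="2024-01-01", end="2024-03-31"))
```

Validating the date strings client-side mirrors the description's compensation for the missing schema: the format constraint lives in prose, so an agent must enforce it before calling.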
- Behavior4/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
While annotations cover the safety profile (readOnly, idempotent, non-destructive), the description adds valuable behavioral context by detailing the specific data fields returned (total_supply_usd, issuer, per-chain deployment data), effectively compensating for the missing output schema. No contradictions with annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with zero waste. The first sentence front-loads the core purpose and scope ('latest full snapshot'), while the second efficiently describes the return payload structure. Every word earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness4/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (zero parameters, read-only operation) and the presence of safety annotations, the description is complete. It appropriately compensates for the absent output schema by describing the returned data structure, though explicit sibling differentiation would make it perfect.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters4/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema contains zero parameters, which establishes a baseline score of 4. The description correctly requires no additional parameter clarification since the tool operates as a parameter-free endpoint.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose5/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses specific verbs ('Get') and clearly identifies the resource ('latest full snapshot of all tracked stablecoins') and scope ('per-issuer breakdown'). It effectively distinguishes itself from sibling tools like 'orbuc_stablecoin_coin_latest' by emphasizing 'all' and 'full snapshot' vs. single-coin queries.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines4/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context through phrases like 'full snapshot' and 'all tracked stablecoins', implicitly guiding usage toward comprehensive data retrieval rather than specific coin queries. However, it lacks explicit 'when to use' guidance or named alternatives (e.g., 'use X for single-coin data').
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
GitHub Badge
Glama performs regular codebase and documentation scans to:
- Confirm that the MCP server is working as expected.
- Confirm that there are no obvious security issues.
- Evaluate tool definition quality.
Our badge communicates server capabilities, safety, and installation instructions.
Card Badge
Copy to your README.md:
Score Badge
Copy to your README.md:
How to claim the server?
If you are the author of the server, you simply need to authenticate using GitHub.
However, if the MCP server belongs to an organization, you need to first add glama.json to the root of your repository.
{
"$schema": "https://glama.ai/mcp/schemas/server.json",
"maintainers": [
"your-github-username"
]
}
Then, authenticate using GitHub.
Browse examples.
How to make a release?
A "release" on Glama is not the same as a GitHub release. To create a Glama release:
- Claim the server if you haven't already.
- Go to the Dockerfile admin page, configure the build spec, and click Deploy.
- Once the build test succeeds, click Make Release, enter a version, and publish.
This process allows Glama to run security checks on your server and enables users to deploy it.
How to add a LICENSE?
Please follow the instructions in the GitHub documentation.
Once GitHub recognizes the license, the system will automatically detect it within a few hours.
If the license does not appear on the server after some time, you can manually trigger a new scan using the MCP server admin interface.
How to sync the server with GitHub?
Servers are automatically synced at least once per day, but you can also sync manually at any time to instantly update the server profile.
To manually sync the server, click the "Sync Server" button in the MCP server admin interface.
How is the quality score calculated?
The overall quality score combines two components: Tool Definition Quality (70%) and Server Coherence (30%).
Tool Definition Quality measures how well each tool describes itself to AI agents. Every tool is scored 1–5 across six dimensions: Purpose Clarity (25%), Usage Guidelines (20%), Behavioral Transparency (20%), Parameter Semantics (15%), Conciseness & Structure (10%), and Contextual Completeness (10%). The server-level definition quality score is calculated as 60% mean TDQS + 40% minimum TDQS, so a single poorly described tool pulls the score down.
Server Coherence evaluates how well the tools work together as a set, scoring four dimensions equally: Disambiguation (can agents tell tools apart?), Naming Consistency, Tool Count Appropriateness, and Completeness (are there gaps in the tool surface?).
Tiers are derived from the overall score: A (≥3.5), B (≥3.0), C (≥2.0), D (≥1.0), F (<1.0). B and above is considered passing.
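The weighting described above can be sketched as a small calculation (the function and dictionary names are illustrative; the weights, the 60/40 and 70/30 splits, and the tier thresholds are those stated in the text):

```python
# Per-tool dimension weights, as listed above (sum to 1.0).
DIMENSION_WEIGHTS = {
    "purpose": 0.25, "usage": 0.20, "behavior": 0.20,
    "parameters": 0.15, "conciseness": 0.10, "completeness": 0.10,
}

def tool_tdqs(scores):
    """Weighted 1-5 score for one tool across the six dimensions."""
    return sum(scores[dim] * w for dim, w in DIMENSION_WEIGHTS.items())

def definition_quality(tdqs_list):
    """Server-level definition quality: 60% mean TDQS + 40% minimum TDQS,
    so a single poorly described tool pulls the score down."""
    mean = sum(tdqs_list) / len(tdqs_list)
    return 0.6 * mean + 0.4 * min(tdqs_list)

def overall_score(definition_quality_score, coherence_score):
    """Overall: 70% tool definition quality + 30% server coherence."""
    return 0.7 * definition_quality_score + 0.3 * coherence_score

def tier(score):
    """Map an overall score to a letter tier; B and above is passing."""
    for threshold, label in ((3.5, "A"), (3.0, "B"), (2.0, "C"), (1.0, "D")):
        if score >= threshold:
            return label
    return "F"
```

Note how the 40% minimum term makes the server score sensitive to its worst tool: two tools scoring 5.0 and 2.0 yield 0.6 * 3.5 + 0.4 * 2.0 = 2.9, well below their plain mean.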
Latest Blog Posts
MCP directory API
We provide all the information about MCP servers via our MCP API.
curl -X GET 'https://glama.ai/api/mcp/v1/servers/xOrbuc/orbuc-mcp-server'
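A minimal Python equivalent of the curl call above (the response shape is not documented here, so this sketch simply returns the parsed JSON rather than assuming particular fields):

```python
import json
import urllib.request

# Endpoint taken from the curl example above.
URL = "https://glama.ai/api/mcp/v1/servers/xOrbuc/orbuc-mcp-server"

def fetch_server_entry(url=URL):
    """Fetch this server's directory entry and return it as a dict."""
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)
```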
If you have feedback or need assistance with the MCP directory API, please join our Discord server.