Glama

Server Details

HederaOracle - 12 Hedera tools: HCS, HTS tokens, mirror node, consensus, governance.

Status: Healthy
Transport: Streamable HTTP
Repository: ToolOracle/hederaoracle
GitHub Stars: 0
Server Listing
HederaOracle

Tool Descriptions (Grade C)

Average 3.1/5 across 9 of 9 tools scored.

Server Coherence (Grade A)

Disambiguation: 5/5

Each tool addresses a distinct aspect of the Hedera ecosystem (account info, yields, network stats, etc.), with no overlapping purposes. Descriptions clearly differentiate them.

Naming Consistency: 5/5

All tools follow a consistent 'hbar_<noun>' pattern, making it easy to infer their domain and purpose at a glance.

Tool Count: 5/5

With 9 tools, the set is well-scoped for an oracle/info service covering key Hedera areas without being overwhelming or sparse.

Completeness: 4/5

Covers major areas (accounts, network, protocols, tokens, transactions) but lacks deeper NFT or historical data querying, which would be minor enhancements.

Available Tools

9 tools
hbar_account_info (Grade B)

Hedera account balance, key type, token holdings

Parameters (JSON Schema)
account (required): Account ID (e.g., 0.0.12345)

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, and the description does not disclose any behavioral traits such as read-only nature, rate limits, or authentication requirements. The description implies a read operation but does not explicitly state it.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
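The critique above can be made concrete with a sketch. The MCP tool-definition format supports optional behavioral annotations (readOnlyHint, destructiveHint, idempotentHint, openWorldHint) that this server omits. The definition below is illustrative only, not the server's actual code, and the expanded description text is a suggested rewrite, not the published one:

```python
# Hypothetical sketch: how hbar_account_info could disclose its read-only
# behavior via MCP tool annotations. Field names follow the MCP
# ToolAnnotations shape; the description text is an illustrative suggestion.
tool_definition = {
    "name": "hbar_account_info",
    "description": (
        "Read-only lookup of a Hedera account: balance, key type, and "
        "token holdings. No authentication required; makes no on-chain "
        "changes."
    ),
    "inputSchema": {
        "type": "object",
        "properties": {
            "account": {
                "type": "string",
                "description": "Account ID (e.g., 0.0.12345)",
            }
        },
        "required": ["account"],
    },
    "annotations": {
        "readOnlyHint": True,    # no side effects
        "idempotentHint": True,  # repeated calls are safe
        "openWorldHint": True,   # queries an external network
    },
}
```

Declaring readOnlyHint in structured annotations, and restating it in the description, covers agents that read either field.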

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise—a single line that immediately conveys the tool's purpose with no extraneous words. It is front-loaded and efficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given one parameter and no output schema, the description lists three pieces of returned info (balance, key type, token holdings). This is adequate but does not specify the return format or any additional details that might be needed for complete understanding.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema already includes a description for the 'account' parameter, with a format example. The tool description adds no meaning beyond what the schema provides, so it meets only the baseline for full schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states what data the tool provides (balance, key type, token holdings) for a Hedera account, distinguishing it from sibling tools like 'hbar_network_stats' or 'hbar_overview' which cover broader or different data.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives. It does not mention that it is for a single specific account, nor does it contrast with other account-related tools like 'hbar_token_info' or 'hbar_transactions'.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
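For context, invoking this tool goes through the standard MCP tools/call method. The sketch below builds the JSON-RPC 2.0 payload a client would POST to the server's Streamable HTTP endpoint; the account value reuses the format example from the parameter table, and the request id is arbitrary:

```python
import json

# Minimal sketch of an MCP tools/call request for hbar_account_info.
# The envelope follows JSON-RPC 2.0 as used by MCP; "0.0.12345" is the
# format example from the tool's own parameter table.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "hbar_account_info",
        "arguments": {"account": "0.0.12345"},
    },
}

body = json.dumps(request)  # serialized request body for the HTTP POST
```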

hbar_defi_yields (Grade C)

Compare DeFi yields across Hedera protocols

Parameters (JSON Schema)
min_tvl_usd (optional): no description provided

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description must disclose behavioral traits. It only states 'compare yields' without indicating whether the tool is read-only, what it returns, or how it behaves. This is insufficient for transparency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, concise sentence with no fluff. However, it could be slightly longer to include parameter context while remaining efficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The description lacks details about output format, parameter usage, and behavior. Given the tool's simplicity (1 optional param, no output schema), it fails to provide a complete picture.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 1/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has one parameter (min_tvl_usd) with no description coverage. The tool description does not mention or explain this parameter, leaving its purpose and usage entirely undocumented.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'compare' and the specific resource 'DeFi yields across Hedera protocols'. This distinguishes it from sibling tools like hbar_account_info or hbar_network_stats, which focus on different aspects.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives. There is no mention of prerequisites, use cases, or exclusions, leaving the agent without context for selection.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
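Because min_tvl_usd is optional and undocumented, an agent has two ways to build the arguments: omit the filter entirely, or pass a numeric USD threshold. The values below are illustrative assumptions, not documented defaults:

```python
# Two hypothetical argument payloads for hbar_defi_yields. The parameter
# name comes from the schema above; its semantics (a minimum-TVL filter
# in USD) are inferred from the name, which is exactly the gap the
# Parameters score flags.
all_yields_args = {}                        # no filter: presumably all protocols
filtered_args = {"min_tvl_usd": 1_000_000}  # assumed: only pools with TVL >= $1M

def call_params(arguments: dict) -> dict:
    """Wrap tool arguments in the MCP tools/call params shape."""
    return {"name": "hbar_defi_yields", "arguments": arguments}
```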

hbar_network_stats (Grade B)

Network stats: supply, nodes, consensus info

Parameters (JSON Schema)
No parameters

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description must disclose behavioral traits. It only lists output categories, but omits whether it is read-only, requires authentication, or any side effects.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, front-loaded sentence with no wasted words. Slightly more context could be added, but it remains efficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given zero parameters and no output schema, the description is minimally complete. It conveys the tool's purpose and output but lacks detail on scope or limitations.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

No parameters exist, so schema coverage is 100%. The description adds value by listing the output fields (supply, nodes, consensus info), exceeding the baseline of 3.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it provides network stats (supply, nodes, consensus info). It implicitly differentiates from siblings like hbar_account_info but lacks explicit distinction.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives. Siblings exist but no context is provided for selection.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

hbar_overview (Grade A)

Hedera ecosystem overview: HBAR price, TVL, supply, Governing Council, protocols

Parameters (JSON Schema)
No parameters

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries the full burden of disclosing behavioral traits. It does not indicate data freshness, update frequency, or whether the overview is real-time or cached, leaving significant gaps.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, well-structured sentence that efficiently conveys the tool's purpose and content without unnecessary words. It is front-loaded with the main topic.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The description lists the categories of data covered (price, TVL, supply, etc.) but does not specify the output format, structure, or depth of the overview. For a parameterless tool, it provides adequate but not complete context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

There are no parameters, so the description is the sole source of meaning. It effectively communicates what data the tool returns without needing to explain parameter behavior.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool provides an overview of the Hedera ecosystem, listing specific metrics like HBAR price, TVL, supply, Governing Council, and protocols. It effectively distinguishes itself from sibling tools that focus on more granular details.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description does not explicitly state when to use this tool versus alternatives, but the content implies it is for a general high-level snapshot. No exclusions or alternative guidance is provided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

hbar_protocol_health (Grade B)

Protocol health: TVL, audits, risk grade (SaucerSwap, Stader, HeliSwap...)

Parameters (JSON Schema)
protocol (required): no description provided

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries full burden of behavioral disclosure. It does not mention read-only nature, rate limits, authentication requirements, data freshness, or any side effects. For a tool returning health data, basic transparency about data source or caching is missing.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is very concise—one short sentence with key terms front-loaded ('Protocol health'). It efficiently conveys the purpose without extraneous text. Could be slightly more structured (e.g., separating purpose from examples), but overall it earns its length.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the simple schema (one string param) and no output schema, the description provides the essential elements (TVL, audits, risk grade) and example values. However, it lacks details on return format or whether the tool returns data for one protocol or multiple, leaving some ambiguity for the agent.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The sole parameter 'protocol' is a string with no schema description. The description compensates by listing example protocol names (SaucerSwap, Stader, HeliSwap), adding practical semantics beyond the schema. However, it does not specify expected format or whether it accepts full names or identifiers.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it provides protocol health metrics including TVL, audits, and risk grade, with specific examples (SaucerSwap, Stader, HeliSwap). It distinguishes from siblings like hbar_protocol_list (listing) and hbar_defi_yields (yields). However, it lacks an explicit verb (e.g., 'retrieve' or 'get'), but the noun phrase is sufficiently clear.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives such as hbar_protocol_list or hbar_stablecoin_risk. No context on prerequisites or use cases. The description only states what it provides, leaving the agent to infer usage without explicit direction.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

hbar_protocol_list (Grade B)

All Hedera DeFi protocols ranked by TVL

Parameters (JSON Schema)
No parameters

Behavior: 1/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, and the description fails to disclose any behavioral traits (e.g., read-only, rate limits). The tool's side effects or safety profile are entirely unaddressed.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, front-loaded sentence without any extraneous content. It is maximally concise while conveying the essential purpose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple list tool with no parameters or output schema, the description is functional but minimal. It does not specify what information is returned per protocol (e.g., name, TVL value), leaving the agent to infer.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The tool has zero parameters and schema description coverage is 100%. The description adds no parameter info, but none is needed; a baseline of 4 is appropriate for a param-less tool.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description explicitly states the tool lists all Hedera DeFi protocols ranked by TVL, which is a specific verb+resource. It clearly distinguishes from siblings like hbar_account_info or hbar_defi_yields.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives such as hbar_protocol_health or hbar_overview. The description provides no usage context or exclusion conditions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

hbar_stablecoin_risk (Grade C)

Stablecoin supply and risk on Hedera

Parameters (JSON Schema)
No parameters

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description must disclose behavior, yet it only names the topic. There is no mention of data freshness, update frequency, error conditions, or what 'risk' means, leaving the agent blind to the output's characteristics.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single short sentence, which is efficient but too terse for the required information. It is front-loaded but could include more context without losing conciseness.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, the description should specify return type (e.g., metrics or indicators). It does not, leaving the agent uncertain about what the tool provides.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

No parameters exist, so the description adds no param semantics but also needs none. Baseline 4 applies as the schema is empty and covered.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a noun phrase ('Stablecoin supply and risk') rather than a clear verb+resource action (e.g., 'Get stablecoin supply'). It vaguely distinguishes from siblings by topic but does not state what the tool does (list, calculate, etc.).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool over siblings like hbar_defi_yields or hbar_protocol_health. The agent receives no exclusions or context for selection.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

hbar_token_info (Grade B)

HTS token risk analysis: admin/freeze/wipe/supply keys, risk scoring

Parameters (JSON Schema)
token_id (required): Token ID (e.g., 0.0.456858)

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It does not state whether the tool is read-only, if it makes network calls, or any side effects. The mention of risk analysis and scoring is helpful but insufficient without clarity on data access or mutations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

A single clear sentence that efficiently communicates the tool's purpose. No wasted words, though a slightly more structured format with context could improve readability. Still, it is appropriately sized.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Despite having one parameter and no output schema, the description lacks details on what the risk analysis entails, how the risk score is presented, or any return format. The tool is likely complex (key analysis), but the description is too brief to be fully complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema covers 100% of parameters with a description for token_id. The tool description adds no additional meaning beyond the schema; the example token ID format is already in the schema. Baseline score is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool performs risk analysis on HTS tokens, specifically referencing admin/freeze/wipe/supply keys and risk scoring. This differentiates it from sibling tools like hbar_account_info (account info) and hbar_stablecoin_risk (stablecoin-specific risk).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when to use this tool versus alternatives like hbar_stablecoin_risk or when not to use it. The description implies usage for token risk analysis but lacks exclusions or context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

hbar_transactions (Grade C)

Recent transactions for a Hedera account

Parameters (JSON Schema)
account (required): no description provided

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description must fully disclose behavioral traits. It only states 'Recent transactions' without specifying ordering, pagination, time range, or read-only nature, leaving significant ambiguity.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is very short (a single phrase). While concise, it could be more informative without becoming verbose, and it is not even a full sentence.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, no annotations, and the complexity of blockchain transactions, the description is too sparse. It does not explain what the output contains, limits, or any relevant details needed for an agent to use the tool effectively.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 0% description coverage for the single 'account' parameter. The tool description adds context by stating it is a Hedera account, but does not specify format (e.g., 0.0.x) or other constraints, so it provides only minimal added value.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool retrieves recent transactions for a Hedera account. It uses a specific verb and resource, but lacks differentiation from sibling tools like hbar_account_info.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives such as hbar_account_info or hbar_token_info. The description does not mention any prerequisites or context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
