Server Details

USDC treasury vaults, streaming payments, and DeFi yield for AI agents

Status: Unhealthy
Transport: Streamable HTTP
Repository: srotzin/hivebank
GitHub Stars: 0
Server Listing: hiveclear

Tool Descriptions: B

Average 3.3/5 across 5 of 5 tools scored.

Server Coherence: A

Disambiguation: 5/5

Each tool has a clearly distinct purpose with no overlap: create_stream handles payment streams, create_vault sets up vaults, deposit adds funds, get_balance retrieves vault-specific data, and get_stats provides treasury-wide metrics. The descriptions reinforce these distinct roles, making misselection unlikely.

Naming Consistency: 5/5

All tools follow a consistent 'hivebank_verb_noun' pattern using snake_case, with clear verbs like create, deposit, and get. This predictability makes the set easy to navigate and understand at a glance.

Tool Count: 5/5

With 5 tools, the set is well-scoped for managing a treasury system, covering key operations like vault creation, funding, and monitoring without being overwhelming. Each tool serves a clear, necessary function in the domain.

Completeness: 4/5

The tools cover core CRUD-like operations for vaults and streams, including creation, deposit, and querying. A minor gap exists in missing update or delete operations for streams or vaults, but agents can likely work around this for basic workflows.

Available Tools

5 tools
hivebank_create_stream: B

Create a programmable per-second payment stream between two agents. Funds flow continuously from sender to receiver over the specified duration.

Parameters (JSON Schema)

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| memo | No | Optional memo describing the payment purpose | |
| to_did | Yes | Receiver DID | |
| from_did | Yes | Sender DID | |
| total_usdc | Yes | Total USDC to stream over the duration | |
| duration_seconds | Yes | Stream duration in seconds | |
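The schema can be exercised with a hypothetical argument payload. A minimal sketch, assuming the "funds flow continuously" wording means total_usdc is spread evenly over duration_seconds; the DIDs and memo are illustrative placeholders, only the field names come from the published schema:

```python
# Hypothetical arguments for hivebank_create_stream. The DIDs are
# placeholders; only the field names come from the schema above.
args = {
    "from_did": "did:example:sender-agent",
    "to_did": "did:example:receiver-agent",
    "total_usdc": 86.4,
    "duration_seconds": 86_400,  # stream over one day
    "memo": "daily compute budget",  # optional
}

# An even per-second flow rate implied by the description.
rate_per_second = args["total_usdc"] / args["duration_seconds"]
print(f"{rate_per_second:.6f} USDC/s")  # 0.001000 USDC/s
```

Whether the server rounds or truncates sub-cent per-second amounts is not documented, which is exactly the kind of behavioral gap the scores below flag.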
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It describes the core action of creating a payment stream but omits critical details such as required permissions, whether the stream is reversible or cancellable, rate limits, error conditions, or what happens if funds are insufficient. For a financial transaction tool with zero annotation coverage, this is a significant gap.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose in the first sentence and adds operational details in the second. Both sentences earn their place by clarifying the tool's function and mechanism without redundancy or fluff, making it highly efficient and well-structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of a financial streaming tool with no annotations and no output schema, the description is incomplete. It lacks information on return values, error handling, security requirements, and behavioral nuances, which are essential for safe and effective use. The description does not compensate for these gaps adequately.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all parameters thoroughly. The description adds minimal value beyond the schema by implying the relationship between parameters (e.g., total_usdc flows over duration_seconds), but it doesn't provide additional syntax, format details, or constraints. Baseline 3 is appropriate when the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Create a programmable per-second payment stream'), identifies the resource ('between two agents'), and specifies the mechanism ('Funds flow continuously from sender to receiver over the specified duration'). It distinguishes this tool from siblings like deposit or get_balance by focusing on streaming payments rather than one-time transfers or queries.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives like hivebank_deposit for lump-sum transfers or hivebank_create_vault for other purposes. It mentions the basic context of creating payment streams but lacks explicit when/when-not instructions or named alternatives, leaving usage decisions ambiguous.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

hivebank_create_vault: B

Create a yield-bearing USDC vault for an autonomous agent. Returns vault ID, DID, and initial balance.

Parameters (JSON Schema)

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| owner_did | Yes | Decentralized identifier (DID) of the vault owner | |
| vault_name | Yes | Human-readable name for the vault | |
| vault_type | No | Vault type (e.g. standard, high-yield) | standard |
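A minimal sketch of an argument payload, assuming the server applies the schema default when vault_type is omitted; the DID and vault name are illustrative placeholders:

```python
# Hypothetical hivebank_create_vault arguments; only the field names and
# the "standard" default come from the schema above.
args = {
    "owner_did": "did:example:treasury-agent",
    "vault_name": "Ops reserve",
    # vault_type omitted -> assumed to fall back to the schema default
}
effective_type = args.get("vault_type", "standard")
print(effective_type)  # standard
```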
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions the tool creates something and returns specific data (vault ID, DID, initial balance), but doesn't disclose critical behavioral traits: whether this requires authentication, incurs fees, has rate limits, is irreversible, or what happens on failure. For a creation tool with financial implications, this is a significant gap in transparency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the core purpose ('Create a yield-bearing USDC vault for an autonomous agent') and follows with essential return details. Every word earns its place, with no redundancy or fluff.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a creation tool with no annotations and no output schema, the description is incomplete. It doesn't cover authentication needs, error conditions, cost implications, or the meaning of return values beyond listing them. Given the complexity of financial vault creation, more behavioral and operational context is needed for an agent to use this tool safely and effectively.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema fully documents all three parameters (owner_did, vault_name, vault_type). The description adds no parameter-specific information beyond what the schema provides—it doesn't explain format constraints, provide examples, or clarify the purpose of vault_type. Baseline 3 is appropriate when schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Create'), the resource ('yield-bearing USDC vault'), and the beneficiary ('for an autonomous agent'). It distinguishes this from sibling tools like hivebank_deposit (which adds funds) or hivebank_get_balance (which queries). The specificity of 'yield-bearing USDC vault' differentiates it from hivebank_create_stream, which likely creates a different financial instrument.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage when an autonomous agent needs a yield-bearing vault, but provides no explicit guidance on when to use this versus alternatives like hivebank_create_stream. It doesn't mention prerequisites (e.g., needing USDC funds first) or exclusions (e.g., not for personal use). The context is clear but lacks comparative or conditional advice.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

hivebank_deposit: C

Deposit USDC into an agent vault. Returns updated balance and transaction ID.

Parameters (JSON Schema)

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| vault_id | Yes | Target vault ID | |
| amount_usdc | Yes | Amount of USDC to deposit | |
| depositor_did | Yes | DID of the depositor | |
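Since all three fields are marked required, a caller can validate an argument payload before sending it. A sketch with placeholder values (vault_id and the DID are illustrative):

```python
# Hypothetical hivebank_deposit arguments; the values are placeholders,
# only the field names and required flags come from the schema above.
args = {
    "vault_id": "vault-123",
    "amount_usdc": 250.0,
    "depositor_did": "did:example:funding-agent",
}

# Required-field check mirroring the schema's "Yes" column.
missing = {"vault_id", "amount_usdc", "depositor_did"} - args.keys()
print(sorted(missing))  # [] -> the call is schema-complete
```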
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions the return values (balance and transaction ID) but lacks critical details: whether this is a write operation (implied by 'Deposit'), if it requires authentication, potential fees, transaction finality, or error conditions. For a financial tool with no annotations, this is insufficient.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the core action and outcome. Every word earns its place with no redundancy or fluff, making it easy to parse quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a financial deposit tool with no annotations and no output schema, the description is incomplete. It omits behavioral aspects like idempotency, side effects, security requirements, and error handling. The return values are mentioned but not detailed (e.g., balance format). Given the complexity, more context is needed.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema fully documents all three parameters. The description adds no additional parameter semantics beyond what's in the schema (e.g., no examples, constraints, or context for 'depositor_did'). Baseline 3 is appropriate when the schema handles parameter documentation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Deposit USDC') and target ('into an agent vault'), distinguishing it from sibling tools like hivebank_get_balance (read) or hivebank_create_vault (create). However, it doesn't specify whether this is for a specific blockchain or protocol, which could help further differentiate it from similar financial tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives. For example, it doesn't mention prerequisites like needing an existing vault (vs. hivebank_create_vault) or clarify if this is for initial funding versus top-ups. The description only states what it does, not when to choose it.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

hivebank_get_balance: C

Get vault balance, yield earned, and deposit history for an agent vault.

Parameters (JSON Schema)

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| vault_id | Yes | Vault ID to look up | |
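Because the server speaks Streamable HTTP, a call to this tool travels as an MCP tools/call request (JSON-RPC 2.0). A sketch of the envelope; the request id and vault_id are illustrative, and the response shape is undocumented since the tool publishes no output schema:

```python
import json

# Generic MCP tools/call envelope wrapping the single required argument.
# The request id and vault_id are illustrative placeholders.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "hivebank_get_balance",
        "arguments": {"vault_id": "vault-123"},
    },
}
print(json.dumps(request, indent=2))
```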
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It states the tool retrieves data (balance, yield, history), implying a read-only operation, but does not address key aspects like authentication requirements, rate limits, error conditions, or data freshness. For a financial tool, this omission is significant, as it leaves the agent unaware of potential constraints or risks.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that directly states the tool's function without unnecessary words. It is front-loaded with the core purpose, making it easy to parse. It could be slightly improved by highlighting the key data points more distinctly, but overall it is concise and well-structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (a financial data retrieval tool with no output schema and no annotations), the description is minimally adequate. It covers what data is retrieved but lacks details on return format, error handling, or dependencies. Without an output schema, the agent must infer the response structure, which could lead to misinterpretation. The description meets basic needs but leaves gaps in full contextual understanding.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, with the 'vault_id' parameter clearly documented as 'Vault ID to look up'. The description does not add any extra meaning beyond this, such as format examples or validation rules. Since the schema already provides adequate parameter information, the baseline score of 3 is appropriate, as the description doesn't enhance or detract from the schema's clarity.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Get vault balance, yield earned, and deposit history for an agent vault.' It specifies the verb 'Get' and the resource 'agent vault' with three specific data points returned. However, it does not explicitly differentiate from sibling tools like 'hivebank_get_stats', which might also retrieve vault-related data, so it doesn't reach a score of 5.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It does not mention sibling tools such as 'hivebank_get_stats' or explain scenarios where this tool is preferred, such as for detailed financial tracking versus summary statistics. This lack of comparative context limits its utility for an AI agent.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

hivebank_get_stats: A

Get treasury-wide statistics: total vaults, deposits, active streams, yield generated, and streamed volume.

Parameters (JSON Schema)

No parameters.
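With no parameters, the MCP tools/call envelope simply carries an empty arguments object. A sketch (the request id is illustrative):

```python
# hivebank_get_stats takes no inputs, so "arguments" is an empty object
# in the tools/call request; the id is an illustrative placeholder.
request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {"name": "hivebank_get_stats", "arguments": {}},
}
print(request["params"]["arguments"])  # {}
```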

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It clearly indicates this is a read operation ('Get') and specifies what data is returned, but doesn't disclose behavioral traits like rate limits, authentication requirements, or whether the data is real-time vs cached. The description adds basic context but lacks deeper operational details.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the core purpose ('Get treasury-wide statistics') and then enumerates the specific metrics. Every word earns its place with zero waste, making it highly concise and well-structured for quick understanding.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (0 parameters, no output schema, no annotations), the description is complete enough for basic use by specifying what statistics are retrieved. However, it lacks details on output format, data freshness, or error handling, which could be helpful for an agent despite the low complexity. It meets minimum viability but has clear gaps in operational context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The tool has 0 parameters with 100% schema description coverage, so the schema fully documents the absence of inputs. The description doesn't need to compensate for any parameter gaps, and it appropriately doesn't mention parameters, maintaining focus on the tool's purpose. Baseline for 0 parameters is 4.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Get') and resource ('treasury-wide statistics'), and explicitly lists the specific metrics returned (total vaults, deposits, active streams, yield generated, streamed volume). This distinguishes it from sibling tools like hivebank_get_balance which focuses on individual balances rather than aggregate statistics.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context by specifying it provides 'treasury-wide statistics,' suggesting it should be used for aggregate overview rather than individual operations. However, it doesn't explicitly state when NOT to use it or name alternatives like hivebank_get_balance for individual data, missing full explicit guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
