Hive Wallet
Server Details
HiveWallet MCP Server
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
- Repository: srotzin/hive-mcp-wallet
- GitHub Stars: 0
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average score: 4.1/5 across all 5 tools.
Each tool has a distinct, non-overlapping purpose: provision creates wallets, info returns metadata, chain walks receipt chains, transfer moves USDC, and verify validates receipts. No ambiguity.
All tools follow the consistent 'wallet_verb' pattern (wallet_chain, wallet_info, wallet_provision, wallet_transfer, wallet_verify), making it predictable.
With 5 tools covering provisioning, info, transfer, verification, and chain auditing, the count is well-scoped for a wallet service without being excessive or sparse.
The tool set covers core operations like wallet creation, info retrieval, USDC transfer, receipt verification, and chain inspection. Missing a balance query is a minor gap but not critical for the defined scope.
Available Tools
5 tools
wallet_chain
Walk the full receipt chain for a DID and return a signed integrity statement. Public, no auth. Returns chain length, chain root (deterministic over the chain hash), latest entry hash, intactness flag, and a signed root statement. Useful as a contract test for any external verifier integrating with HiveWallet.
| Name | Required | Description | Default |
|---|---|---|---|
| did | Yes | Agent DID whose chain should be walked (e.g. did:hive:agent:abc123) | |
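As a sketch of how an MCP client might invoke this tool, the JSON-RPC `tools/call` payload can be built as below. The envelope follows the standard MCP `tools/call` convention; the example DID comes from the parameter table above, and the server URL and transport are configured separately by the client.

```python
import json

def wallet_chain_request(did: str, request_id: int = 1) -> dict:
    """Build a JSON-RPC 2.0 `tools/call` payload for wallet_chain.

    Only the envelope is standard MCP; `did` is the single required
    argument documented in the parameter table."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {
            "name": "wallet_chain",
            "arguments": {"did": did},
        },
    }

payload = wallet_chain_request("did:hive:agent:abc123")
print(json.dumps(payload, indent=2))
```

Because the tool is public and read-only, this call is safe to issue speculatively, e.g. as a connectivity check before attempting a transfer.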
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It discloses that the tool is public, requires no authentication, and returns a specific set of fields including a signed integrity statement. While it does not mention rate limits or side effects, the description is sufficient for a read-only operation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences with no wasted words. The action is stated first, followed by usage context and return values. Every sentence adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description lists the return fields (chain length, chain root, etc.) and provides usage context. It does not explain terms like 'chain root' or 'intactness flag', but for a tool aimed at external verifiers, this may be adequate. The single parameter is well-documented.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with only one parameter (did). The description adds a concrete example of the DID format ('did:hive:agent:abc123'), which provides clarity beyond the schema's generic description.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb ('walk') and resource ('full receipt chain for a DID') and clearly states the outcome ('return a signed integrity statement'). It lists the return values, making the tool's purpose unmistakable. It is distinct from sibling tools like wallet_info or wallet_transfer, which have different actions.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly states 'Public, no auth' and frames the tool as 'useful as a contract test for any external verifier integrating with HiveWallet'. This gives clear context for when to use it, but does not explicitly state when not to use it or compare it to other tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
wallet_info
Return HiveWallet metadata: facade version, backend version, HiveDNA verifier public key (b64u), supported proofs (SHOD, SPECTRAL_ZK, CTEF), verify endpoint, and chain endpoint. No payment required — free discovery call.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. States it is free, but does not disclose other behavioral traits like rate limits, authentication, or caching. Adequate for a simple read-only call.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence, front-loaded with main purpose. Lists many details but remains readable. Could be slightly more concise but overall effective.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
No output schema but description enumerates all expected return fields. Covers enough context for agent to understand what information is provided. Sufficient for a lightweight info endpoint.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
No parameters in schema, so no additional parameter meaning needed. Description correctly omits param details, and baseline for zero-param tools is 4.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states the tool returns HiveWallet metadata with a specific list of fields (facade version, backend version, etc.). Differentiates from sibling tools which handle chain info, provisioning, transfers, and verification.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly notes 'No payment required — free discovery call,' implying it is safe and cost-free to use as a first step. Does not contrast with siblings but context is sufficient.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
wallet_provision
Create a fresh agent-DID wallet. The DID is the account holder — there is no human owner. Returns the new DID, an initial chain root, and signing-key cert metadata. Proxies to the live HiveWallet facade.
| Name | Required | Description | Default |
|---|---|---|---|
| hint | No | Optional human-readable label for the wallet (does not appear on-chain) | |
| owner | No | Optional DID of a parent / operator agent — recorded in the cert chain only | |
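Since both parameters are optional, a client should only include them in the arguments object when they are actually supplied. A minimal sketch of that payload construction, assuming the standard MCP `tools/call` envelope:

```python
def wallet_provision_request(hint=None, owner=None, request_id=1):
    """Build a tools/call payload for wallet_provision.

    Both parameters are optional per the table above; each is only
    added to the arguments object when a value is given, so the
    server sees no spurious null fields."""
    arguments = {}
    if hint is not None:
        arguments["hint"] = hint
    if owner is not None:
        arguments["owner"] = owner
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": "wallet_provision", "arguments": arguments},
    }
```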
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must carry the burden. It discloses that the tool creates a wallet, returns specific data, and proxies to a facade. However, it does not mention side effects, permission requirements, or behavior on re-provisioning (e.g., whether it errors or overwrites).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences, with no redundant words. The first sentence states the core action and key constraint, and the second lists returns and implementation detail. Every phrase earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description covers the purpose, usage, return values, and parameter semantics. Given the tool's simplicity (no nested objects, no output schema, two optional params), it is fully complete for an agent to select and invoke correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% for the two optional parameters. The description adds meaning beyond the schema by clarifying that 'hint' does not appear on-chain and 'owner' is recorded only in the cert chain, providing valuable context for the agent.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool creates a fresh agent-DID wallet, identifies the DID as the account holder with no human owner, and lists return values. It distinguishes itself from siblings like wallet_info or wallet_transfer, which do not create wallets.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explains what the tool does but provides no explicit guidance on when to use it versus alternatives or when not to use it. While it is the only creation tool among siblings, prerequisites (e.g., whether an agent may already have a wallet) are not addressed.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
wallet_transfer
Move USDC between two agent DIDs. Mints a HiveDNA 3-proof receipt (SHOD layers + spectral-ZK ticket + CTEF chain entry, Ed25519-signed canonical body, score 0-1000) and returns the receipt id, score, proofs, signature, verifier_pk_b64u, and verify_url. Real settlement on HiveBank.
| Name | Required | Description | Default |
|---|---|---|---|
| memo | No | Optional memo string (recorded in receipt claims, not on-chain) | |
| to_did | Yes | Destination agent DID | |
| from_did | Yes | Source agent DID (must control signing key — provisioned via wallet_provision) | |
| amount_usdc | Yes | Amount in USDC (USDC on Base, asset address 0x833589fCD6eDb6E08f4c7C32D4f71b54bdA02913) | |
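Because this tool performs real settlement, a client may want basic sanity checks before sending the call. The sketch below builds the `tools/call` payload with the parameters from the table above; the positive-amount and `did:` prefix checks are illustrative client-side assumptions, not documented server-side validation.

```python
def wallet_transfer_request(from_did, to_did, amount_usdc, memo=None, request_id=1):
    """Build a tools/call payload for wallet_transfer.

    Assumption: rejecting non-positive amounts and non-DID strings
    locally is a guess at sensible pre-flight checks; the server's
    actual validation rules are not documented in the listing."""
    if amount_usdc <= 0:
        raise ValueError("amount_usdc must be positive")
    for did in (from_did, to_did):
        if not did.startswith("did:"):
            raise ValueError(f"not a DID: {did!r}")
    arguments = {
        "from_did": from_did,
        "to_did": to_did,
        "amount_usdc": amount_usdc,
    }
    if memo is not None:
        arguments["memo"] = memo
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": "wallet_transfer", "arguments": arguments},
    }
```

Failing fast on an obviously bad amount or DID keeps a mistyped call from ever reaching settlement.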
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses that minting a receipt with proofs occurs and indicates the return values, but does not address potential side effects, authorization requirements (beyond controlling signing key mentioned in schema), or error conditions. The description is adequate but not exhaustive.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise, consisting of three sentences that efficiently convey purpose, process, and output. It is front-loaded with the main action and avoids unnecessary details.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity (4 parameters, no output schema), the description covers the main actions and return values but lacks details on error handling, response interpretation, or confirmation of settlement. It is minimally adequate for an agent to invoke but could be more complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so baseline is 3. The description adds little beyond the schema; it mentions that from_did requires control of the signing key and references wallet_provision, but does not elaborate on memo or amount_usdc format beyond schema. No additional semantic value is provided.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Move USDC between two agent DIDs', specifying the verb and resource. It distinguishes itself from sibling tools (wallet_chain, wallet_info, wallet_provision, wallet_verify) by its unique function of transferring funds and generating a receipt.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for transferring USDC between DIDs and mentions 'Real settlement on HiveBank', but it does not explicitly state when to use this tool over alternatives or provide conditions for use or avoidance. No exclusions are mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
wallet_verify
Verify a HiveDNA receipt by id. Public, no auth. Re-runs Ed25519 signature verification, body-hash recompute, and CTEF chain-entry recompute against the canonical body. Returns found, verified, score, proofs, and the signing public key. This is the regulator-grade primitive — anyone with the receipt and the verifier public key can validate offline.
| Name | Required | Description | Default |
|---|---|---|---|
| receipt_id | Yes | HiveDNA receipt identifier (rcpt_<base32>) | |
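A client can pre-check the receipt-id shape before calling. The `rcpt_<base32>` format comes from the table above; the exact alphabet and casing are assumptions here (RFC 4648 base32 alphabet, with either case accepted):

```python
import re

# Assumption: <base32> uses the RFC 4648 alphabet (A-Z, 2-7). The listing
# does not specify casing, so both cases are accepted in this sketch.
RECEIPT_ID_RE = re.compile(r"^rcpt_[A-Za-z2-7]+$")

def looks_like_receipt_id(value: str) -> bool:
    """Return True if the string has the documented rcpt_<base32> shape."""
    return bool(RECEIPT_ID_RE.fullmatch(value))
```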
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Discloses internal operations: re-runs Ed25519 signature verification, body-hash recompute, CTEF chain-entry recompute. Also lists return fields and offline validation capability. With no annotations, the description fully informs about behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences, each serving a purpose: purpose/auth, internal process, returns/context. No wasted words; front-loaded with essential information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given a single parameter and no output schema, the description covers all necessary aspects: what it does, how it works, what it returns, and usage context (public, offline). Complete for its complexity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with a clear description for 'receipt_id'. The description does not add extra meaning beyond what the schema provides, so baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states 'Verify a HiveDNA receipt by id', with specific verb and resource. Distinguishes from sibling tools (wallet_chain, wallet_info, etc.) which handle other operations.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides clear context: 'Public, no auth' and 'regulator-grade primitive' indicating when to use. Does not explicitly state when not to use or list alternatives, but the context is sufficient for selection.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
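The body-hash recompute step that wallet_verify re-runs can be sketched offline. The canonicalization (JSON with sorted keys and compact separators) and the SHA-256 digest are both assumptions for illustration; the actual HiveDNA canonical body format is not specified in this listing.

```python
import hashlib
import json

def recompute_body_hash(body: dict) -> str:
    """Recompute a digest over a canonical serialization of a receipt body.

    Assumption: canonical form is JSON with sorted keys and compact
    separators, hashed with SHA-256. The real canonical body used by
    HiveDNA receipts may differ."""
    canonical = json.dumps(body, sort_keys=True, separators=(",", ":")).encode()
    return hashlib.sha256(canonical).hexdigest()
```

The key property of any such scheme is that two semantically identical bodies hash identically regardless of key order, which is what makes offline re-verification deterministic.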
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:

```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
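A quick local sanity check of the file before publishing might look like the sketch below. It only checks the fields shown above; Glama's actual validation rules beyond those fields are not documented here, and the email plausibility regex is an illustrative assumption.

```python
import json
import re

def check_glama_json(text: str) -> list:
    """Return a list of problems found in a candidate glama.json document.

    Checks only what the documented example shows: the $schema URL and
    at least one maintainer with a plausible email address."""
    try:
        doc = json.loads(text)
    except json.JSONDecodeError as exc:
        return [f"invalid JSON: {exc}"]
    problems = []
    if doc.get("$schema") != "https://glama.ai/mcp/schemas/connector.json":
        problems.append("unexpected or missing $schema")
    maintainers = doc.get("maintainers")
    if not isinstance(maintainers, list) or not maintainers:
        problems.append("maintainers must be a non-empty list")
    else:
        for m in maintainers:
            email = m.get("email", "") if isinstance(m, dict) else ""
            if not re.match(r"^[^@\s]+@[^@\s]+\.[^@\s]+$", email):
                problems.append(f"implausible email: {email!r}")
    return problems
```

Running this against the file before deployment catches the common failure mode of a typoed schema URL or a maintainer email that does not match the Glama account.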
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.