Server Details

A2A ZK wallet recovery — guardian swarm, no seed phrase, HiveLaw enforcement

Status: Healthy
Transport: Streamable HTTP
Repository: srotzin/hive-mcp-vault
GitHub Stars: 0

Tool Descriptions (Grade: A)

Average 4.2/5 across 8 of 8 tools scored.

Server Coherence (Grade: A)
Disambiguation: 5/5

All 8 tools have clearly distinct purposes: 3 for earning/leaderboard (leaderboard, personal info, registration) and 5 for vault management (create, status, recovery, list guardians, vote). No functional overlap.

Naming Consistency: 3/5

Tool names use two different conventions: hive_earn_* (snake_case with prefix) and vault.* (dot-separated). Within each group it is consistent, but the mixed styles across the same server create confusion and break a unified pattern.

Tool Count: 5/5

8 tools is well-scoped for a server covering two related but distinct domains (earn program and vault management). Each tool serves a necessary function without redundancy.

Completeness: 4/5

The vault domain covers the core lifecycle: creation, status, recovery initiation, guardian listing, and voting. The earn domain has leaderboard, personal lookup, and registration. Minor omissions such as earn withdrawal or vault deletion are not critical.

Available Tools

8 tools
hive_earn_leaderboard (Grade: A)

Top earning agents on the Hive Civilization, by attribution payout in USDC. Real Base USDC settlement. Calls GET https://hivemorph.onrender.com/v1/earn/leaderboard?window=. Returns "rails not yet live" gracefully if upstream is not yet deployed.

Parameters
  window (optional): Time window. One of: "7d", "30d", "lifetime". Default "7d".
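Given the documented endpoint and window enum, building the request URL client-side can be sketched as follows. The helper name and the pre-call validation are illustrative, not part of the server:

```python
from urllib.parse import urlencode

BASE = "https://hivemorph.onrender.com/v1/earn"
VALID_WINDOWS = {"7d", "30d", "lifetime"}

def leaderboard_url(window: str = "7d") -> str:
    # Validate against the documented enum before building the query string.
    if window not in VALID_WINDOWS:
        raise ValueError(f"window must be one of {sorted(VALID_WINDOWS)}")
    return f"{BASE}/leaderboard?{urlencode({'window': window})}"

print(leaderboard_url("30d"))
# https://hivemorph.onrender.com/v1/earn/leaderboard?window=30d
```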
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Discloses the API endpoint, settlement currency, and graceful failure for upstream not being deployed, but lacks details on rate limits, pagination, or data freshness. No annotations provided to shift burden.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences with no wasted words, front-loaded with purpose and key details.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple tool with one parameter and no output schema, the description is fairly complete, covering purpose, endpoint, and edge case behavior. Could briefly mention response format.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema already covers the 'window' parameter with enum and default (100% coverage). Description adds no additional meaning beyond what is in the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Explicitly states it retrieves 'Top earning agents on the Hive Civilization, by attribution payout in USDC', distinguishing it from sibling tools like hive_earn_me (personal earnings) and hive_earn_register (registration).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Implies usage for viewing top earners but does not provide when-to-use versus alternatives (e.g., hive_earn_me for individual data) or any context about appropriate windows.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

hive_earn_me (Grade: A)

Look up the caller agent's registered earn profile, lifetime + pending USDC balance, last payout tx hash, and next-payout ETA. Real Base USDC, no mock data. Calls GET https://hivemorph.onrender.com/v1/earn/me?agent_did=. Returns "rails not yet live" gracefully if upstream is not yet deployed.

Parameters
  agent_did (required): Agent DID to look up.
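A minimal sketch of building the documented query URL. Note that the colons in a DID must be percent-encoded in the query string; the helper name is illustrative:

```python
from urllib.parse import urlencode

def earn_me_url(agent_did: str) -> str:
    # agent_did is required; urlencode percent-encodes the DID's colons.
    return ("https://hivemorph.onrender.com/v1/earn/me?"
            + urlencode({"agent_did": agent_did}))
```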
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so the description carries full burden. It discloses the HTTP method (GET), the endpoint URL, the parameter usage, and a graceful error case ('rails not yet live'). It also asserts data authenticity ('Real Base USDC, no mock data'). However, it does not explicitly state whether the operation is read-only or has side effects.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two sentences: the first lists the returned data, the second adds endpoint, error handling, and authenticity. Every sentence is informative and front-loaded, with no unnecessary words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Despite no output schema, the description lists all returned fields. It covers the endpoint, error handling, and data source. The only minor gap is the lack of explicit prerequisites (e.g., must have registered) but that's implied by the name and sibling tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the parameter is well-documented. The description adds value by showing how the parameter is used in the URL (agent_did=<did>), providing context beyond the schema. No additional constraints or enums are needed.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description specifies the verb 'look up' and the resource 'caller agent's registered earn profile', listing specific data fields (lifetime + pending USDC balance, last payout tx hash, next-payout ETA). It clearly distinguishes itself from siblings like hive_earn_leaderboard and hive_earn_register.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for the caller's own profile, but does not explicitly state when to use alternatives or provide exclusions. The context of 'caller agent' is clear, and it mentions the endpoint and error handling.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

hive_earn_register (Grade: A)

Register an agent for the Hive Civilization attribution payout program. Settlement on real Base USDC. 5% kickback on attributed traffic, weekly payout. Calls POST https://hivemorph.onrender.com/v1/earn/register on behalf of the caller. Resilient to upstream cold-start: returns a structured "rails not yet live" body if the earn backend is still spinning up.

Parameters
  agent_did (required): Caller agent DID (e.g. did:hive:0x… or did:web:…).
  payout_address (required): Base L2 EVM address (0x…) to receive USDC kickback payouts.
  attribution_url (required): Public URL of the agent / page driving attributed traffic to Hive. Used for ranking and audit.
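The three required fields can be sanity-checked client-side before the POST. This sketch only mirrors the documented formats (DID prefix, 40-hex-char EVM address, public URL); the server remains the source of truth for validation:

```python
import re

def build_register_payload(agent_did: str, payout_address: str,
                           attribution_url: str) -> dict:
    # Light client-side checks matching the documented parameter formats.
    if not agent_did.startswith("did:"):
        raise ValueError("agent_did must be a DID (did:hive:... or did:web:...)")
    if not re.fullmatch(r"0x[0-9a-fA-F]{40}", payout_address):
        raise ValueError("payout_address must be a 0x-prefixed 40-hex-char address")
    if not attribution_url.startswith(("http://", "https://")):
        raise ValueError("attribution_url must be a public URL")
    return {
        "agent_did": agent_did,
        "payout_address": payout_address,
        "attribution_url": attribution_url,
    }
```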
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the burden. It discloses the HTTP method (POST), the endpoint URL, and notably mentions resilience to upstream cold-start with a specific return body ('rails not yet live'). This adds behavioral insight beyond the basic action.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences, no wasted words. Key information is front-loaded. Every sentence adds value.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema and no annotations, the description covers essential aspects: purpose, endpoint, cold-start behavior, and financial details. It is sufficiently complete for a registration tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% and each parameter has a description. The description adds value by noting that payouts are in real Base USDC and that attribution_url is used for ranking/audit, but these are minor additions. Baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool registers an agent for the Hive Civilization payout program, with specific details about settlement, kickback, and weekly payout. It distinguishes from siblings like hive_earn_leaderboard and hive_earn_me by focusing on registration vs querying.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description indicates when to use (to register for payout) and provides context about settlement and kickback. It doesn't explicitly exclude scenarios, but the purpose is sufficiently clear for selecting this tool over siblings.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

vault.create_vault (Grade: A)

Set up a new ZK wallet vault with guardian selection. Registers a sovereign DID, splits key via TSS-MPC (2,2), creates a ZK biometric commitment, and assigns 3-of-5 staked guardian agents. Returns vault DID and real USDC balance on Base L2.

Parameters
  did (optional): Agent DID if pre-configured via smithery config.
  label (optional): Human-readable label for this vault (e.g. "Primary Agent Wallet").
  api_key (optional): Agent API key for authentication. Optional at creation.
  device_key (required): Device-specific entropy string generated on the agent device. Never leaves the device in production.
  owner_address (required): Base L2 wallet address (0x...) that will own this vault.
  biometric_hash (required): SHA-256 hash of biometric data (fingerprint or face ID). Never sent raw — hash only.
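Per the parameter notes, biometric data is hashed locally and device entropy never leaves the device. A sketch of preparing those two inputs on the agent device; the helper name and entropy length are assumptions:

```python
import hashlib
import secrets

def prepare_vault_inputs(raw_biometric: bytes) -> dict:
    # Hash the biometric locally; only the hex digest travels as
    # biometric_hash. device_key is fresh local entropy (size assumed).
    return {
        "biometric_hash": hashlib.sha256(raw_biometric).hexdigest(),
        "device_key": secrets.token_hex(32),
    }
```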
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds significant behavioral context beyond annotations, detailing cryptographic steps and return values. However, it does not mention side effects, failure modes, or prerequisites, slightly reducing transparency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three concise sentences front-load the primary action, list steps, and state the output. No wasted or redundant content.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a creation tool with 6 parameters and no output schema, the description covers the process and expected result. Missing error handling and conditions, but overall adequate.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the baseline is 3. The description adds high-level context but does not provide new parameter-specific details beyond what the schema already offers.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: setting up a new ZK wallet vault with guardian selection. It lists specific steps and the output, distinguishing it from sibling tools like vault.get_status or vault.initiate_recovery.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for creating a new vault but does not explicitly state when to use versus alternatives or provide when-not guidance. The context of siblings makes it clear, but explicit guidelines are missing.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

vault.get_status (Grade: A, read-only)

Check vault status and real USDC balance on Base L2. Returns guardian quorum state, ZK commitment prefix, recovery history, and live USDC balance fetched via eth_call to the Base L2 USDC ERC-20 contract.

Parameters
  did (required): Vault DID or agent DID to check status for (e.g. did:hive:xxxx).
  api_key (optional): Agent API key for authenticated status check.
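The live balance comes from an eth_call against the USDC ERC-20 contract on Base, which returns a single 32-byte word. Assuming USDC's standard 6 decimals, decoding that word can be sketched as:

```python
def decode_usdc_balance(eth_call_result: str) -> float:
    # eth_call on ERC-20 balanceOf returns one hex-encoded 32-byte word;
    # USDC uses 6 decimals, so scale the raw integer down accordingly.
    raw = int(eth_call_result, 16)
    return raw / 10**6

# 1,500,000 raw units encoded as a 32-byte word decodes to 1.5 USDC.
print(decode_usdc_balance("0x" + (1_500_000).to_bytes(32, "big").hex()))
# 1.5
```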
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, and the description adds valuable detail: it fetches live USDC balance via eth_call to Base L2, and returns specific state data. No contradictions with annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence that efficiently conveys the tool's purpose and outputs. Every clause adds value, no filler. Front-loaded with the main action.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the lack of output schema, the description compensates by listing all expected return values. Parameter schema covers inputs fully. The tool's role among siblings is clear.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema has 100% coverage with descriptions for both parameters (did and api_key). The description does not add extra meaning beyond the schema, so baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states the tool checks vault status and real USDC balance, listing specific outputs (guardian quorum state, ZK commitment prefix, recovery history, live USDC balance). This distinguishes it from sibling tools like vault.create_vault or vault.list_guardians which serve different purposes.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implicitly indicates when to use (to check status and balance), but lacks explicit guidance on when not to use or alternatives. However, the context is clear enough from the verb and resource.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

vault.initiate_recovery (Grade: A)

Start an A2A wallet recovery with guardian voting. Broadcasts a RecoveryIntent to all 5 guardian agents. Each independently verifies the ZK proof and votes. 3-of-5 approval required. 72-hour veto window for security. Settles via HiveBank on Base L2.

Parameters
  did (optional): Agent DID. Optional if vault_did is provided.
  reason (optional): Reason for recovery (e.g. "device lost", "compromised key"). Included in the guardian vote request.
  api_key (optional): Agent API key for authentication.
  vault_did (required): Vault DID returned at vault creation (e.g. did:hive:vault:xxxx).
  device_key (required): Device key from vault creation, used for ZK proof verification.
  new_address (required): New Base L2 wallet address (0x...) to recover funds to.
  biometric_hash (required): Same SHA-256 biometric hash used at vault creation.
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations only provide readOnlyHint=false and destructiveHint=false, which offer minimal behavioral info. The description significantly enriches understanding by detailing the multi-step guardian voting process, approval threshold, veto period, and settlement mechanism, far beyond what annotations convey.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three concise and highly informative sentences with no extraneous words. The most critical information is front-loaded, and each sentence adds distinct value: purpose, process, and settlement details.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The tool has no output schema, so the description bears the burden of explaining behavior. It thoroughly covers the recovery initiation process and final settlement. However, it does not mention what the function returns (e.g., a recovery ID or status), which would be helpful for an agent to use the result.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the parameters are already well-documented. The description mentions biometric_hash and device_key for ZK proof verification, consistent with schema, but does not add new semantic meaning beyond what the parameter descriptions provide. Baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description starts with 'Start an A2A wallet recovery with guardian voting', clearly specifying the verb (initiate) and resource (wallet recovery). It distinguishes from sibling tools like vault.create_vault (creation) and vault.submit_guardian_vote (voting) by focusing on the initiation step.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explains the process: broadcasting to 5 guardians, requiring 3-of-5 approval, 72-hour veto window, and settlement. It implies this tool is used when recovery is needed, but does not explicitly state when to avoid it or compare with alternatives like vault.create_vault.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

vault.list_guardians (Grade: A, read-only)

Browse available staked guardian agents in the Hive guardian pool. Shows each guardian's DID, trust score, USDC stake, uptime, and eligibility status (trust >= 0.78, $5,000 USDC staked minimum). No authentication required.

Parameters
  limit (optional): Maximum number of guardians to return. Default 20, max 100.
  min_trust (optional): Minimum HiveTrust score filter (0.0-1.0). Default 0.78.
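The eligibility rule (trust >= 0.78, $5,000 USDC minimum stake) and the limit cap can be mirrored client-side. A sketch with assumed record field names (`trust`, `stake_usdc`):

```python
def filter_guardians(guardians: list[dict], min_trust: float = 0.78,
                     limit: int = 20) -> list[dict]:
    # Defaults mirror the documented schema: min_trust 0.78, limit 20, cap 100.
    limit = min(limit, 100)
    eligible = [
        g for g in guardians
        if g["trust"] >= min_trust and g["stake_usdc"] >= 5_000
    ]
    return eligible[:limit]
```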
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, and the description adds that no auth is needed, plus details on eligibility (trust score, USDC stake). This supplements annotations well.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, front-loaded with action, no redundant words. Efficient and well-structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

No output schema, but description lists returned fields (DID, trust score, USDC stake, uptime, eligibility). Lacks sorting/pagination info, but for a simple list tool, it's nearly complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with clear descriptions for both parameters. The description adds minimal extra value (e.g., default min_trust), so baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool browses available staked guardian agents and lists specific fields. It distinguishes from siblings (e.g., vault.create_vault, vault.get_status) as a read-only listing tool.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly notes 'No authentication required,' providing clear usage context. While it doesn't specify when not to use it, the sibling set offers no alternative list tool, making this guidance sufficient.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

vault.submit_guardian_vote (Grade: A, idempotent)

Submit a guardian yes/no vote on a pending recovery request. Called by authorized guardian agents. Includes ZK proof verification. Once 3-of-5 guardians approve, recovery proceeds automatically.

Parameters
  vote (required): Guardian vote decision. Must be "yes" or "no".
  reason (optional): Optional reason for the vote decision (e.g. "ZK proof valid", "suspicious pattern detected").
  api_key (required): Guardian API key issued by HiveGate.
  recovery_id (required): Recovery request ID from the vault.initiate_recovery response.
  guardian_did (required): DID of the guardian agent submitting the vote.
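The 3-of-5 approval threshold is simple to model. A tally sketch assuming votes keyed by guardian DID (the data shape is an assumption, not the server's internal representation):

```python
def quorum_reached(votes: dict[str, str], threshold: int = 3) -> bool:
    # votes maps guardian_did -> "yes" or "no"; recovery proceeds
    # automatically once the yes-count reaches the 3-of-5 threshold.
    return sum(1 for v in votes.values() if v == "yes") >= threshold
```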
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations show readOnlyHint=false, destructiveHint=false, idempotentHint=true. The description adds behavioral context: includes ZK proof verification and triggers automatic recovery after threshold approval. No contradiction with annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is three sentences long, front-loaded with the primary action, and each sentence adds value without being verbose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The description explains the voting process and threshold, but lacks details about the response format or prerequisites (e.g., must have a recovery initiated). Adequate but could be more complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with all parameters described. The description adds no extra meaning beyond what the schema already provides, so baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action 'Submit a guardian yes/no vote on a pending recovery request.' It specifies the verb (submit) and resource (vote on recovery request), and distinguishes from siblings like vault.initiate_recovery and vault.list_guardians.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description indicates it is 'Called by authorized guardian agents' and explains the context: 'Includes ZK proof verification. Once 3-of-5 guardians approve, recovery proceeds automatically.' It implies when to use but does not explicitly state when not to use or mention alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
