Hive Zk Attestation
Server Details
Zero-knowledge attestation for agent capability and trust score claims
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
- Repository: srotzin/hive-mcp-zk-attestation
- GitHub Stars: 0
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4.3/5 across 8 of 8 tools scored. Lowest: 3.5/5.
Tools are cleanly split into two distinct domains (earn and zk attestation), each with well-defined individual purposes. Within each domain, no two tools overlap: leaderboard, me, and register are clearly different; anchor, attest, list, query, and verify are all distinct operations.
The naming uses consistent snake_case with clear prefixes: 'hive_earn_' for earn tools and 'zk_' for attestation tools. Each group follows a verb_noun pattern (e.g., hive_earn_register, zk_anchor_to_base). The two prefixes coexist, but each is semantically justified and applied consistently within its own domain.
With 8 tools, the server is well-scoped. It covers both the earn and attestation functionalities without being bloated. Each tool has a clear purpose and earns its place.
The tool surface covers the core lifecycle for both domains: register, view leaderboard, check own earnings; produce, anchor, query, verify attestations. Minor gaps exist (e.g., no update/withdraw for earn, no revoke for attestations) but these are not essential for the primary use cases.
Available Tools
8 tools

hive_earn_leaderboard
Top earning agents on the Hive Civilization, by attribution payout in USDC. Real Base USDC settlement. Calls GET https://hivemorph.onrender.com/v1/earn/leaderboard?window=. Returns "rails not yet live" gracefully if upstream is not yet deployed.
| Name | Required | Description | Default |
|---|---|---|---|
| window | No | Time window. One of: "7d", "30d", "lifetime". | "7d" |
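For orientation, here is a minimal sketch of the upstream GET this tool wraps, assuming Node 18+ with native fetch; the LeaderboardEntry shape and the error envelope are assumptions, since only the endpoint and the "rails not yet live" fallback are documented above.

```typescript
// Hypothetical response shape; the upstream schema is not published.
type LeaderboardEntry = { agent_did: string; payout_usdc: number };

async function fetchLeaderboard(
  window: "7d" | "30d" | "lifetime" = "7d",
): Promise<LeaderboardEntry[]> {
  const res = await fetch(
    `https://hivemorph.onrender.com/v1/earn/leaderboard?window=${window}`,
  );
  const body = await res.json();
  // Graceful fallback per the description; the exact error envelope is assumed.
  if (body?.error === "rails not yet live") return [];
  return body as LeaderboardEntry[];
}
```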
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description provides behavioral context: it makes a GET call and returns a graceful error if upstream is unavailable. This adds value beyond the schema, though it doesn't mention the read-only nature or authentication.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is four concise sentences, front-loading purpose, then adding settlement context, technical details, and error handling. No wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given one parameter, no output schema, and no annotations, the description covers purpose, endpoint, parameter, and error handling. However, it omits the return format, which would aid an agent in interpreting the response.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema covers the window parameter with enum values and default. The description mentions the query parameter in the URL but doesn't add meaning beyond the schema. Baseline score of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it retrieves 'Top earning agents on the Hive Civilization, by attribution payout in USDC.' It conveys the action (ranking top earners) and the resource (leaderboard), but does not explicitly differentiate from siblings like hive_earn_me or hive_earn_register.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives. The description only explains what the tool does, not when it should be preferred over other hive_earn tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
hive_earn_me
Look up the caller agent's registered earn profile, lifetime + pending USDC balance, last payout tx hash, and next-payout ETA. Real Base USDC, no mock data. Calls GET https://hivemorph.onrender.com/v1/earn/me?agent_did=. Returns "rails not yet live" gracefully if upstream is not yet deployed.
| Name | Required | Description | Default |
|---|---|---|---|
| agent_did | Yes | Agent DID to look up. | |
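A hedged sketch of the same pattern for the profile lookup; the EarnProfile field names are inferred from the description, not a published schema. Because DIDs contain ':' characters, the query parameter should be URL-encoded.

```typescript
// Field names are assumptions inferred from the description above.
interface EarnProfile {
  lifetime_usdc?: number;
  pending_usdc?: number;
  last_payout_tx?: string;
  next_payout_eta?: string;
}

async function fetchEarnProfile(agentDid: string): Promise<EarnProfile> {
  const url = new URL("https://hivemorph.onrender.com/v1/earn/me");
  // DIDs contain ":" (e.g. did:hive:0x...), so let URLSearchParams encode them.
  url.searchParams.set("agent_did", agentDid);
  const res = await fetch(url);
  return (await res.json()) as EarnProfile;
}
```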
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, but the description fully discloses behavior: it performs a GET call, uses real Base USDC, and returns a specific error message gracefully. This covers the main behavioral traits without contradiction.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Four short sentences with no redundancy. The first lists outputs clearly; the rest add settlement context, API details, and error behavior. Every sentence is informative and necessary.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple lookup tool with one parameter and no output schema, the description covers all essential info: inputs, outputs, API endpoint, and error handling. It is sufficiently complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% (one parameter). The description adds that the DID is for the caller agent, providing slight nuance beyond the schema's 'Agent DID to look up.' This is minimal added value, so baseline of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states that the tool looks up the caller's earn profile, listing specific return values (lifetime+pending USDC balance, last payout tx hash, next-payout ETA). It distinguishes from siblings like 'hive_earn_register' and 'hive_earn_leaderboard' by focusing on the caller's own profile.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
It implies usage for checking one's own earn profile and mentions graceful handling if upstream is not deployed, but does not explicitly state when to avoid or compare to alternatives. The sibling names provide context, but the description lacks direct guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
hive_earn_register
Register an agent for the Hive Civilization attribution payout program. Settlement on real Base USDC. 5% kickback on attributed traffic, weekly payout. Calls POST https://hivemorph.onrender.com/v1/earn/register on behalf of the caller. Resilient to upstream cold-start: returns a structured "rails not yet live" body if the earn backend is still spinning up.
| Name | Required | Description | Default |
|---|---|---|---|
| agent_did | Yes | Caller agent DID (e.g. did:hive:0x… or did:web:…). | |
| payout_address | Yes | Base L2 EVM address (0x…) to receive USDC kickback payouts. | |
| attribution_url | Yes | Public URL of the agent / page driving attributed traffic to Hive. Used for ranking + audit. | |
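A sketch of the documented POST, assuming a JSON request body whose keys mirror the parameter table; the body encoding and response shape are not specified above, so both are assumptions.

```typescript
// Assumes a JSON body whose keys mirror the parameter table; unconfirmed.
async function registerForEarn(input: {
  agent_did: string;       // e.g. did:hive:0x… or did:web:…
  payout_address: string;  // Base L2 EVM address (0x…)
  attribution_url: string; // public page driving attributed traffic
}) {
  const res = await fetch("https://hivemorph.onrender.com/v1/earn/register", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(input),
  });
  // May be a structured "rails not yet live" body while the backend spins up.
  return res.json();
}
```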
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description fully covers behavioral traits: settlement on Base USDC, the 5% kickback, weekly payouts, the POST endpoint, and cold-start resilience with a structured error response.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Five sentences with no redundancy. Purpose is front-loaded, and each sentence adds unique value (settlement details, payout terms, endpoint, cold-start behavior). Highly efficient.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite the lack of an output schema, the description explains return behavior (the cold-start response). It covers purpose, usage, behavioral traits, and parameters adequately for a registration tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with clear parameter descriptions. The tool description adds overall context but does not enhance parameter meaning beyond what schema already provides, so baseline 3 applies.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states verb 'Register an agent' and resource 'Hive Civilization attribution payout program'. It distinguishes from siblings like hive_earn_leaderboard and hive_earn_me, which serve different purposes.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides clear context for when to use the tool (registering for the payout program) and mentions resilience to cold-start, but lacks explicit when-not-to-use guidance or named alternatives beyond the implied differentiation from siblings.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
zk_anchor_to_base
Write an attestation commitment (32-byte hash) to Base via the Hive gateway. Anchors the attestation only; does not bridge value or move state to Aleo. Aleo snarkVM consumes the attestation independently via Leo programs (future hive-leo-circuits repo). Cost: $0.02 USDC + L1 gas. Backend RFC-stage; returns backend_pending until rails land.
| Name | Required | Description | Default |
|---|---|---|---|
| agent_did | Yes | DID of the attesting agent | |
| proof_commitment | Yes | Hex-encoded 32-byte commitment to the proof | |
| verification_key_id | No | Identifier of the verification key referenced by the proof | |
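No HTTP endpoint is documented for the zk tools, so the sketch below shows the JSON-RPC tools/call payload an MCP client would send over Streamable HTTP. All argument values are placeholders, and whether the commitment expects a 0x prefix is an assumption.

```typescript
// JSON-RPC payload for an MCP tools/call; every value below is a placeholder.
const anchorCall = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: {
    name: "zk_anchor_to_base",
    arguments: {
      agent_did: "did:hive:0x1234abcd",         // placeholder DID
      proof_commitment: "0x" + "ab".repeat(32), // 32 bytes of hex; 0x prefix assumed
      verification_key_id: "varuna-bls12377-agent-state-v1", // optional; id assumed
    },
  },
};
// Expect a backend_pending result while the backend remains RFC-stage.
```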
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description provides full behavioral context: cost ($0.02 USDC + L1 gas), experimental backend status, return behavior ('backend_pending'), and what it does not do. No contradictions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Five short sentences, each serving a purpose: action and target, scope limitation, downstream consumption, cost, and backend status. No redundant or vague language. Well-structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description covers expected return ('backend_pending') and experimental note. Parameters are well-documented in schema. Sibling tools are distinct enough that no confusion arises. Slightly lower score because it doesn't explain what the commitment is used for after anchoring, but that's acceptable.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so baseline 3. The description adds minimal extra beyond schema: it restates that proof_commitment is hex-encoded but does not elaborate on agent_did or verification_key_id. Schema already explains parameters adequately.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it writes an attestation commitment (32-byte hash) to Base via Hive gateway. It distinguishes from siblings by explicitly noting it does not bridge value or move state to Aleo, which is a key differentiator.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explains when to use it (for anchoring only) and what it does not do (no bridging or state movement). It also flags the experimental status ('Backend RFC-stage'), giving context for appropriate use, though it lacks explicit 'when not to use' guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
zk_attest_agent_state
Produce a zero-knowledge attestation of an agent state hash + DID. Primary verification target is Aleo snarkVM (Varuna over BLS12-377); native Hive verification is next. Attestation-only — emits a proof, not a token; no value crosses chains. Cost: $0.05 USDC on Base. Backend RFC-stage; returns backend_pending until rails land.
| Name | Required | Description | Default |
|---|---|---|---|
| circuit | No | Circuit identifier (snarkVM-compatible) | varuna-bls12377-agent-state-v1 |
| agent_did | Yes | DID of the agent whose state is being attested | |
| state_hash | Yes | Hex-encoded 32-byte hash of the agent state (poseidon or sha256) | |
| public_inputs | No | Optional public inputs as hex strings | |
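As a sketch of argument construction, the example below derives a sha256 state hash in Node and omits circuit to take the documented default; the agent state object is purely illustrative.

```typescript
import { createHash } from "node:crypto";

// Purely illustrative agent state; any canonical serialization would do.
const stateHash = createHash("sha256")
  .update(JSON.stringify({ model: "demo-agent", version: 1 }))
  .digest("hex"); // 64 hex chars = 32 bytes, as the schema requires

const attestArgs = {
  agent_did: "did:hive:0x1234abcd", // placeholder
  state_hash: stateHash,
  // circuit omitted: defaults to varuna-bls12377-agent-state-v1
  // public_inputs omitted: optional hex strings
};
```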
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden and does an excellent job disclosing behavioral traits: it emits a proof not a token, no value crosses chains, cost is $0.05 USDC on Base, backend is RFC-stage returning backend_pending. Verification targets (Aleo snarkVM then Hive) are also specified. This is comprehensive and honest.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is compact (five sentences) and front-loaded with the core action. Each sentence adds essential information without redundancy: purpose, verification targets, attestation nature, cost, and backend state. No wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (zero-knowledge proof generation) and lack of output schema, the description provides key context: cost, backend status, and return value (backend_pending). It could elaborate on the structure of the returned attestation or proof, but it sufficiently covers the most critical aspects for an in-development tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% description coverage for all 4 parameters, so baseline is 3. The description does not add additional parameter-level context beyond what is in the schema; it focuses on tool behavior. Thus, no extra value is provided for parameter semantics, keeping the score at baseline.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Produce' and the resource 'zero-knowledge attestation of an agent state hash + DID'. It distinguishes from sibling tools like zk_verify_proof and zk_list_circuits by emphasizing it is attestation-only and emits a proof, not a token. The purpose is specific and immediately understandable.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context on when to use this tool (to attest an agent state) and differentiates it from token emission and cross-chain value transfers. It also notes the backend status and cost. However, it lacks explicit 'when not to use' or direct comparisons to alternatives like zk_query_attestation, so a small gap remains.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
zk_list_circuits
Enumerate supported circuits and verification key fingerprints. Primary: Varuna over BLS12-377 (Aleo snarkVM-compatible). Research-stage: Groth16, Plonk. Future: Risc0, Plonky2. Free. Read-only.
No parameters
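Even with no parameters, an MCP client still sends an empty arguments object in the tools/call request:

```typescript
const listCircuitsCall = {
  jsonrpc: "2.0",
  id: 2,
  method: "tools/call",
  params: { name: "zk_list_circuits", arguments: {} },
};
```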
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries the burden of behavioral disclosure. It explicitly marks the tool as read-only and free, which covers key safety and cost aspects for a simple list operation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Extremely concise: six short sentences with no fluff. The main action is front-loaded, and each adds distinct information (purpose, circuit specifics, cost, safety).
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (no parameters, no output schema), the description fully covers purpose, items listed, and behavioral traits. No additional documentation is necessary for an agent to use it correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
There are no parameters, and schema coverage is 100% (trivially). The description adds value by detailing what the tool returns (circuits and fingerprints), meeting the baseline for zero-parameter tools.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it enumerates supported circuits and verification key fingerprints, listing specific circuit types (Varuna, Groth16, Plonk, etc.), which distinguishes it from sibling tools like zk_anchor_to_base or zk_verify_proof.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description notes it is 'Free' and 'Read-only,' implying safe usage, and the context of sibling tools indicates this is the primary tool for listing circuits, though no explicit when-to-use or when-not-to-use guidance is given.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
zk_query_attestation
Fetch a previously-anchored attestation by Base transaction hash. Returns the proof commitment, verification key id, agent DID, and block number. Free. Read-only.
| Name | Required | Description | Default |
|---|---|---|---|
| tx_hash | Yes | Base L2 transaction hash of the anchored attestation | |
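The return fields named in the description, sketched as a TypeScript type; the exact property names are assumptions, since no output schema is published.

```typescript
// Property names are assumptions derived from the description above.
interface AttestationRecord {
  proof_commitment: string;
  verification_key_id: string;
  agent_did: string;
  block_number: number;
}

const queryCall = {
  jsonrpc: "2.0",
  id: 3,
  method: "tools/call",
  params: {
    name: "zk_query_attestation",
    arguments: { tx_hash: "0x…" }, // placeholder; use the hash returned by zk_anchor_to_base
  },
};
```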
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Despite having no annotations, the description covers read-only behavior and zero cost, and lists the return fields. It does not address error scenarios or authentication, but for a simple query this is adequate.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Four terse sentences convey purpose, return values, cost, and read-only nature. No filler; every word adds value. Front-loaded with the essential action.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a single-parameter query tool without output schema, the description covers key aspects: what it does, what it returns, and cost. Minor omission of response format or example, but overall complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% and the description restates the parameter purpose without adding new semantics. Baseline 3 is appropriate as schema already documents the parameter.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states the action (fetch attestation by tx_hash) and the resource type (previously-anchored attestation). Distinguishes from siblings like zk_anchor_to_base (anchoring) and zk_list_circuits (listing circuits).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Implies usage context (after anchoring) and declares read-only, free nature. Lacks explicit exclusion or alternative tool references, but clarity is high enough for an AI to infer appropriate use.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
zk_verify_proof
Verify a submitted attestation against a known verification key. Aleo snarkVM (Varuna/BLS12-377) is the primary verification target via the snark.verify opcode. Returns boolean validity plus the verification key fingerprint. Free. Read-only — no settlement, no on-chain write.
| Name | Required | Description | Default |
|---|---|---|---|
| proof | Yes | Hex-encoded proof bytes (Varuna; Groth16/Plonk research-stage only) | |
| public_inputs | No | Public inputs the proof was generated against | |
| verification_key_id | Yes | Identifier of the verification key to check against | |
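A sketch of a verification call; the proof bytes are illustrative hex, and the result shape is assumed from 'boolean validity plus the verification key fingerprint'.

```typescript
const verifyCall = {
  jsonrpc: "2.0",
  id: 4,
  method: "tools/call",
  params: {
    name: "zk_verify_proof",
    arguments: {
      proof: "0x" + "cd".repeat(64),                         // illustrative Varuna proof bytes
      verification_key_id: "varuna-bls12377-agent-state-v1", // assumed key id
      public_inputs: [],                                     // optional
    },
  },
};
// Assumed result shape: { valid: boolean, verification_key_fingerprint: string }
```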
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations exist, so the description bears full responsibility. It discloses read-only nature, cost, and return type. Lacks details on potential errors or authentication, but core behavior is transparent.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Five concise sentences with front-loaded purpose and no unnecessary words. Every sentence adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Covers purpose, technical details, return format, and behavioral traits. Without an output schema, the return description is helpful. Minor omission of error scenarios, but overall complete for a verification tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% and the description does not add new information beyond what the schema already provides for each parameter. Baseline score of 3 applies.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Verify a submitted attestation against a known verification key' with specific technical context (Aleo snarkVM, Varuna/BLS12-377) and distinguishes from siblings like zk_anchor_to_base which have different purposes.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides clear context (free, read-only, no settlement) implying when to use, but does not explicitly name alternatives or exclusions. The mention of primary verification target guides supported proof types.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}
The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.
Discussions
No comments yet. Be the first to start the discussion!