Glama

Frontier-Compute/zcash-mcp

Server Quality Checklist

Profile completion: 83%. A complete profile improves this server's visibility in search results.
  • Disambiguation 4/5

    Tools are mostly distinct, but get_balance (returns attestation/leaf count rather than currency balance), get_agent_status, and get_anchor_status all retrieve overlapping attestation metadata, requiring careful reading to distinguish the specific scope of each (wallet vs agent vs anchor).

    Naming Consistency 5/5

    Excellent consistency throughout: all 12 tools use snake_case with clear verb_noun patterns (attest_event, decode_memo, get_*, lookup_transaction, send_shielded, verify_proof). The verb prefixes (attest, decode, get, lookup, send, verify) are semantically appropriate and uniformly applied.

    Tool Count 5/5

    12 tools is well-scoped for a Zcash ZAP1 attestation server. The set covers attestation lifecycle, Merkle tree/anchor management, agent monitoring, and essential Zcash chain integration (blocks, transactions, memos, payment URIs) without bloat or obvious redundancy.

    Completeness 4/5

    Strong coverage of the ZAP1 attestation domain: creating attestations, verifying proofs, querying anchors/history, and inspecting chain data. Minor gap: no explicit agent registration tool (though likely handled via attest_event), and transaction creation is limited to URI generation rather than full transaction building/broadcasting.

  • Average 3.7/5 across 12 of 12 tools scored. Lowest: 2.9/5.

    See the tool scores section below for per-tool breakdowns.

  • This repository includes a README.md file.

  • This repository includes a LICENSE file.

  • Latest release: v0.2.2

  • No tool usage detected in the last 30 days. Usage tracking helps demonstrate server value.

    Tip: use the "Try in Browser" feature on the server page to seed initial usage.

  • This repository includes a glama.json configuration file.

  • This server provides 12 tools.
  • No known security issues or vulnerabilities reported.


  • This server has been verified by its author.

  • Add related servers to improve discoverability.

Tool Scores

  • Behavior 2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    No annotations are provided, so the description carries the full burden. It mentions 'raw' data, which hints at the return format, but it fails to disclose error conditions (e.g., txid not found), rate limits, caching behavior, or whether the tool requires specific node permissions.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Single sentence that is front-loaded with the verb. No redundant words or wasted space; every token contributes to understanding the core function.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 2/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Lacks an output schema and provides no description of return values (e.g., raw hex string vs JSON object). Given no annotations and unknown error behaviors, the description is insufficient for robust agent operation without external documentation.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema coverage is 100%, establishing a baseline of 3. The description mentions 'by txid', which aligns with the required parameter, and 'raw', which loosely connects to the `verbose` parameter's behavior (raw hex vs decoded), but it adds no syntax details beyond the schema.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    States a specific action (Get), resource (transaction data), and scope (by txid from Zebra). However, it does not differentiate itself from sibling tools such as `decode_memo` or `get_events`, which may also access transaction-related data.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Provides no guidance on when to use this tool versus alternatives like `send_shielded` or `decode_memo`. Does not mention prerequisites such as needing a valid txid format or availability in the Zebra node.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
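Several of the gaps flagged above (undisclosed error behavior, unknown return format, no cross-tool guidance) are fixable purely in description text. A sketch of what a fuller definition for this tool could look like, written as a Python dict; every field value below is illustrative wording and an assumption, not the server's actual schema:

```python
# Hypothetical rewrite of a lookup_transaction-style tool definition.
# All wording is illustrative; the real server's fields may differ.
improved_tool = {
    "name": "lookup_transaction",
    "description": (
        "Get raw transaction data by txid from a Zebra node. "
        "Returns the raw hex string, or a decoded JSON object when "
        "verbose is true. Read-only; errors if the txid is unknown "
        "to the node. Use decode_memo to interpret shielded memo "
        "fields from the result."
    ),
    "inputSchema": {
        "type": "object",
        "properties": {
            "txid": {
                "type": "string",
                "description": "64-character hex transaction id",
            },
            "verbose": {
                "type": "boolean",
                "description": "Decode to JSON instead of raw hex (default false)",
            },
        },
        "required": ["txid"],
    },
}
```

A description shaped like this would directly address the Behavior (read-only, error condition), Completeness (return format), and Usage Guidelines (when to hand off to `decode_memo`) dimensions scored above.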

  • Behavior 2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full burden of behavioral disclosure but only identifies the data source ('from Zebra'). It fails to disclose whether the height is cached, the expected latency, error conditions (e.g., if the node is still syncing), or the return value format.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description consists of a single, efficient eight-word sentence that front-loads the action verb. There is no redundant or wasted language; every word contributes to understanding the tool's function.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 3/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the tool's low complexity (zero parameters) and the absence of an output schema, the description provides adequate domain context by mentioning 'Zcash' and 'Zebra.' However, it lacks information about the return value structure (integer, string, or object), which would be necessary for an agent to use the result correctly.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    The input schema contains zero parameters, establishing a baseline score of 4 per the evaluation rubric. The description appropriately implies no filtering or input is needed to retrieve the current global chain state.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the tool retrieves the 'current Zcash chain height' using the verb 'Get' and specifies the data source 'from Zebra.' It effectively identifies the specific resource being accessed, though it does not explicitly differentiate from the sibling get_stats tool which may also return blockchain metadata.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description provides no guidance on when to use this tool versus alternatives like get_stats or get_anchor_status. It omits any mention of prerequisites, such as requiring a synced Zebra node, or when this query might fail versus succeed.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
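Because the return shape is undocumented (integer, string, or object), a calling agent has to normalize defensively. A minimal Python sketch of that normalization; the 'height' key used for the object case is a hypothetical assumption, not a documented field:

```python
def normalize_height(result):
    """Coerce an undocumented chain-height result to an int.

    The tool's return shape is unspecified (integer, string, or
    object), so accept all three. The "height" key is an assumption.
    """
    if isinstance(result, bool):  # bool is an int subclass; reject it
        raise TypeError("unexpected boolean height")
    if isinstance(result, int):
        return result
    if isinstance(result, str):
        return int(result)
    if isinstance(result, dict):
        return int(result["height"])
    raise TypeError(f"unexpected height payload: {type(result).__name__}")
```

A single sentence in the description (e.g., "returns the height as an integer") would make this guesswork unnecessary.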

  • Behavior 3/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    No annotations are provided, so the description carries the full burden. It partially compensates by disclosing return values ('lifecycle events, leaf count, and verification links'), which hints at the data structure. However, it lacks information about authentication requirements, rate limiting, caching behavior, or whether this is a real-time vs cached query.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Two well-structured sentences with zero waste: first establishes purpose and API source, second previews return values. Information is front-loaded and appropriately sized for a single-parameter lookup tool.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 3/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Adequate for a simple single-parameter tool, and the preview of return values (lifecycle events, leaf count) partially compensates for the missing output schema. However, given the mismatch between the tool name ('get_balance') and the attestation functionality described, the description should explicitly clarify this discrepancy or provide usage context to prevent agent confusion.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    With 100% schema description coverage ('Wallet hash or agent ID to look up'), the schema fully documents the single parameter. The description mentions 'wallet hash' in the first sentence but adds no additional semantic context (format constraints, validation rules, or examples) beyond what the schema already provides.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the tool retrieves 'attestation and anchor status for a wallet hash' and specifies the ZAP1 API source. It distinguishes itself from sibling tools like get_anchor_status (which presumably retrieves only anchor status) and get_agent_status by explicitly mentioning attestation data and lifecycle events.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    No guidance is provided on when to use this tool versus alternatives like get_anchor_status, get_anchor_history, or get_agent_status. Because the name ('get_balance') suggests financial/account balance retrieval while the description indicates attestation/verification data, explicit usage guidance is particularly needed here, yet it is absent.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior 3/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full burden. It successfully clarifies that the tool generates a payment URI rather than actually transferring funds (critical given the 'send_shielded' name), but fails to disclose return format, error handling behavior, or whether network connectivity is required.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description consists of two efficient sentences with zero redundancy. It is front-loaded with the core action (Generate URI) followed by implementation details (encodes specific fields), making it easy to scan.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 3/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given 100% schema coverage and simple parameter types, the description adequately covers inputs. However, with no output schema provided, the description should ideally specify what the tool returns (the URI string). It leaves gaps regarding error states and validation behavior.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema description coverage is 100%, establishing a baseline of 3. The description adds minimal semantic value by noting parameters are 'encoded into a scannable URI', but it does not elaborate on parameter interactions, on format constraints beyond the schema, or on the purpose of the 'label' parameter, which is omitted from the description text.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description uses specific verbs ('Generate', 'Encodes') and clearly identifies the resource as a 'zcash: payment URI for shielded ZEC'. It distinguishes from siblings (which query status, decode memos, or verify proofs) by specifying URI generation functionality.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description provides no guidance on when to use this tool versus alternatives, nor does it mention prerequisites or exclusions. While the functionality is unique among siblings (no other tool generates payment URIs), explicit contextual guidance is absent.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
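For context on what this tool likely emits: ZIP 321 defines the zcash: payment URI format, with the memo carried as unpadded base64url. A rough Python sketch of assembling such a URI; which parameters this server actually encodes, and how it handles 'label', are assumptions:

```python
import base64
from urllib.parse import quote

def build_payment_uri(address, amount, memo=None, label=None):
    """Assemble a zcash: payment URI (ZIP 321-style sketch).

    Assumed behavior, not the server's actual implementation:
    amount passed through verbatim, memo base64url-encoded without
    '=' padding (per ZIP 321), label percent-encoded.
    """
    params = [f"amount={amount}"]
    if memo is not None:
        encoded = base64.urlsafe_b64encode(memo.encode()).decode().rstrip("=")
        params.append(f"memo={encoded}")
    if label is not None:
        params.append(f"label={quote(label)}")
    return f"zcash:{address}?" + "&".join(params)
```

A description that named ZIP 321 and the memo encoding would cover much of the missing Parameters and Completeness context.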

  • Behavior 3/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    No annotations provided, so description carries full burden. It lists returned fields (event type, wallet hash, etc.) but omits critical behavioral details: definition of 'recent' (time window?), ordering (chronological?), and pagination behavior beyond the limit parameter.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Two well-structured sentences with zero waste. Front-loaded with action ('Get recent...'), followed by return value specification. Appropriate length for single-parameter tool.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    For a simple retrieval tool with one optional parameter, description adequately compensates for missing output schema by enumerating returned fields. Minor gap: 'recent' lacks temporal definition.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema has 100% description coverage ('Number of events (default 20)'), establishing baseline of 3. Description doesn't add syntax constraints or usage guidance for the limit parameter beyond what's in the schema.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    Specific verb 'Get' + resource 'ZAP1 attestation events' clearly defines scope. Distinguishes from sibling 'attest_event' (likely write operation) by emphasizing retrieval.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    No explicit when-to-use guidance or alternative recommendations. While implied by the name, it doesn't clarify when to use this vs. lookup_transaction or get_anchor_history for historical data.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior 2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions what data is returned (txids and block heights) but omits critical behavioral traits: pagination behavior, response format/structure, rate limits, or whether this is an expensive operation. It also doesn't explain what 'ZAP1' refers to.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description consists of a single, efficient sentence that front-loads the action and resource. There is no redundant or wasted text; every word contributes to understanding the tool's function.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 3/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the absence of an output schema, the description partially compensates by listing the key data elements returned (txids and block heights). However, without describing the response structure (array vs object), pagination behavior, or data format, the description leaves significant gaps for a data retrieval tool.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    The input schema contains zero parameters. Per evaluation guidelines, 0 parameters establishes a baseline score of 4, as there are no parameter semantics to clarify beyond the schema.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description uses a specific verb ('Get') and resource ('ZAP1 Merkle root anchors'), clarifies scope ('all'), and specifies returned fields ('Zcash txids and block heights'). The word 'all' effectively distinguishes this from sibling 'get_anchor_status', implying historical vs. current state retrieval.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 3/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    While the description implies this is for historical data retrieval (contrasted with get_anchor_status), it provides no explicit guidance on when to use this versus alternatives, nor any prerequisites or exclusions for invocation.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior 3/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations, description carries full burden. It discloses the Merkle tree destination and return value (leaf hash), but omits mutation safety details like irreversibility, confirmation requirements, or idempotency that would help an agent understand blockchain write semantics.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Three sentences with zero waste: action/purpose front-loaded, mechanism explained, return value specified. No redundant phrases or boilerplate.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Compensates for missing output schema by specifying return value (leaf hash). Mentions the ZAP1 system context. Could improve by noting api_key authentication requirement or required field constraints beyond schema, but adequate for tool complexity.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema coverage is 100%, establishing baseline 3. Description adds semantic context by mapping 'lifecycle, governance, or agent' to the event_type parameter values, but does not explain relationships between conditional fields (e.g., when proposal_id vs action_type is required).

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    Description uses specific verb 'Create' with resource 'ZAP1 attestation event' and platform 'Zcash'. The second sentence clarifies it writes 'lifecycle, governance, or agent event' to the Merkle tree, clearly distinguishing it from read-only siblings like get_events or get_anchor_status.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 3/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Implies usage through event categories (lifecycle, governance, agent) but lacks explicit when-to-use guidance or comparison to siblings. Does not clarify when to use verify_proof vs this tool, or prerequisites for attestation.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
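On the leaf hash this tool returns: ZAP1's actual leaf derivation is not documented in the description. A common pattern, sketched here purely as an assumption, is SHA-256 over canonically serialized event JSON, which makes the hash independent of field ordering:

```python
import hashlib
import json

def leaf_hash(event: dict) -> str:
    """Hypothetical leaf derivation: SHA-256 of canonical JSON.

    Canonical form (sorted keys, compact separators) keeps the hash
    stable across dict insertion orders. ZAP1's real scheme may differ.
    """
    canonical = json.dumps(event, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()
```

Stating the actual derivation in the description would let agents pre-compute or cross-check the leaf hash before and after calling the tool.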

  • Behavior 3/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full disclosure burden. It successfully enumerates the four data categories returned (registration, policies, actions, event history), providing transparency on output content. However, it lacks information on auth requirements, rate limits, or safety characteristics (though 'Get' implies read-only).

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is a single, efficiently structured sentence with the action verb front-loaded. The colon-separated list of data categories provides high information density with zero waste words.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the tool's simplicity (single required parameter, no output schema), the description adequately compensates by detailing the return structure through the four listed data categories. For a lookup tool of this scope, listing the attestation components provides sufficient context without needing explicit output schema documentation.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    The input schema has 100% coverage with 'agent_id' described as 'Agent identifier'. The description mentions 'for a ZAP1 agent' which implicitly maps to the parameter but adds no format constraints, validation rules, or semantic details beyond the schema. Baseline 3 is appropriate given complete schema coverage.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the specific action ('Get'), resource ('attestation summary for a ZAP1 agent'), and scope (registration, policies, actions, event history). It effectively distinguishes from siblings like 'get_events' (general vs. agent-specific) and 'get_anchor_status' (different entity type).

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 3/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    While the specificity of 'ZAP1 agent attestation summary' provides implied context for when to use this (agent-specific lookups vs. general event queries), there are no explicit guidelines distinguishing it from similar read operations like 'get_events' or 'get_anchor_status', nor any prerequisites mentioned.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior 3/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full burden. It adds valuable behavioral context by stating it 'returns the proof path', indicating the output. However, it omits critical behavioral details such as error handling (what happens if verification fails), whether the operation is read-only, or if there are side effects.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description consists of two efficient sentences with zero waste. It is front-loaded with the core action ('Verify a ZAP1 Merkle proof') followed by specific implementation details, making it easy to scan and comprehend.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the tool's simplicity (single parameter, no output schema), the description is nearly complete. It compensates for the missing output schema by describing the return value ('returns the proof path'). It could be improved by mentioning validation failure behavior or authentication requirements.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    The input schema has 100% description coverage (defining leaf_hash as hex-encoded, 64 chars), establishing a baseline of 3. The description mentions 'leaf hash' in the context of existence checking but does not add semantic meaning beyond the schema's technical specification (e.g., what the hash represents or example values).

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description uses specific verbs ('Verify', 'Checks') and resources ('ZAP1 Merkle proof', 'attestation tree') to clearly define the tool's function. It effectively distinguishes itself from siblings like 'attest_event' (which likely creates proofs) by positioning this as a verification operation.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 3/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    While the description implies usage by clearly stating it verifies proofs rather than creating them, it lacks explicit guidance on when to use this tool versus alternatives like 'attest_event'. No prerequisites, error conditions, or specific scenarios are mentioned.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
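For reference, a generic binary Merkle path check looks like the following Python sketch. The hash function (SHA-256), byte concatenation, and sibling-ordering flags are assumptions, since the ZAP1 scheme is not specified in the tool's description:

```python
import hashlib

def verify_merkle_path(leaf_hash_hex, proof, root_hex):
    """Generic Merkle proof check (assumed: SHA-256, byte concat).

    proof is a list of (sibling_hex, is_right) pairs walking from
    leaf to root; is_right says the sibling sits to the right of
    the running hash.
    """
    current = bytes.fromhex(leaf_hash_hex)
    for sibling_hex, is_right in proof:
        sibling = bytes.fromhex(sibling_hex)
        pair = current + sibling if is_right else sibling + current
        current = hashlib.sha256(pair).digest()
    return current.hex() == root_hex
```

If the returned 'proof path' follows a structure like this, saying so in the description would let agents verify inclusion locally against an anchored root.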

  • Behavior 4/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    No annotations provided, so description carries full burden. It effectively discloses supported decoding standards (ZAP1, ZIP 302, plain text, raw binary), which is crucial behavioral context for a cryptographic decoder. However, it omits error handling behavior, authentication requirements, or return value structure.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Two sentences, zero waste. First sentence states core purpose immediately; second sentence enumerates specific format capabilities. Every word earns its place with no redundant filler.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Appropriate for a single-parameter utility with no output schema. Describes the decoding operation and supported formats comprehensively. Minor gap: could briefly characterize the return value (e.g., 'returns decoded string or structured data') to compensate for missing output schema.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema coverage is 100% with the parameter already well-documented as 'hex string or base64'. The description mentions 'Zcash shielded memo field' which aligns with the parameter, but primarily relies on the schema for parameter semantics. Baseline score appropriate for high schema coverage.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    States specific action ('Decode') and resource ('Zcash shielded memo field'). Clearly distinguishes from sibling tools like send_shielded (which sends) or lookup_transaction (which retrieves). The mention of specific protocols (ZAP1, ZIP 302) further clarifies scope.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 3/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Implies usage through supported format enumeration (ZAP1, ZIP 302, etc.), but lacks explicit guidance on when to use versus alternatives or workflow prerequisites (e.g., 'use after retrieving a transaction memo'). No 'when-not-to-use' guidance provided.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

    Behavior 4/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description compensates by disclosing the return structure (root hash, unanchored leaves, anchor recommendation) since no output schema exists. However, it omits auth requirements or rate limiting context.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Single sentence with zero waste: front-loaded action ('Get current ZAP1 Merkle tree state') followed by colon-delimited return value specification. Every word earns its place.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given no output schema exists, the description adequately documents return values. For a zero-parameter read operation, this is sufficiently complete, though explicit read-only assurance would strengthen it given lack of annotations.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    The input schema has zero parameters, meeting the baseline score of 4. No parameter documentation is required or present.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description uses specific verb 'Get' and resource 'ZAP1 Merkle tree state', and explicitly scopes it to 'current' state, clearly distinguishing it from sibling get_anchor_history.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 3/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    While the description implies usage through the 'current' qualifier (distinguishing from historical data), it lacks explicit when-to-use guidance or comparison to alternatives like get_anchor_history.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

    Behavior 4/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full burden and successfully discloses the return payload structure (leaves, anchors, type distribution, tree height). It does not mention auth requirements or caching behavior, but 'Get' clearly signals a read-only operation.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Single sentence with optimal structure: verb and resource front-loaded ('Get ZAP1 protocol stats'), followed by a colon-delimited list of specific return values. No redundant or wasted words.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 5/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    For a zero-parameter tool without an output schema, the description achieves completeness by enumerating the exact statistics returned (total leaves, anchors, type distribution, tree height), fully compensating for the missing structured return type documentation.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    The input schema contains zero parameters, establishing a baseline of 4. The description appropriately requires no parameter clarification since the tool takes no inputs.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description uses a specific verb ('Get') and resource ('ZAP1 protocol stats'), and distinguishes from siblings by listing unique aggregate metrics (total leaves, tree height) that contrast with specific-entity tools like get_anchor_status or get_balance.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 3/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description provides implied usage context through the listed metrics (suggesting protocol-wide overview), but lacks explicit guidance on when to prefer this over similar query tools like get_anchor_status or get_events.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

GitHub Badge

Glama performs regular codebase and documentation scans to:

  • Confirm that the MCP server is working as expected.
  • Confirm that there are no obvious security issues.
  • Evaluate tool definition quality.

Our badge communicates server capabilities, safety, and installation instructions.

Card Badge

Copy the card badge snippet from the server page into your README.md.

Score Badge

Copy the score badge snippet from the server page into your README.md.

How to claim the server?

If you are the author of the server, you simply need to authenticate using GitHub.

However, if the MCP server belongs to an organization, you must first add a glama.json file to the root of your repository.

{
  "$schema": "https://glama.ai/mcp/schemas/server.json",
  "maintainers": [
    "your-github-username"
  ]
}

Then, authenticate using GitHub.

Browse examples.

How to make a release?

A "release" on Glama is not the same as a GitHub release. To create a Glama release:

  1. Claim the server if you haven't already.
  2. Go to the Dockerfile admin page, configure the build spec, and click Deploy.
  3. Once the build test succeeds, click Make Release, enter a version, and publish.

This process allows Glama to run security checks on your server and enables users to deploy it.

How to add a LICENSE?

Please follow the instructions in the GitHub documentation.

Once GitHub recognizes the license, the system will automatically detect it within a few hours.

If the license does not appear on the server after some time, you can manually trigger a new scan using the MCP server admin interface.

How to sync the server with GitHub?

Servers are automatically synced at least once per day, but you can also sync manually at any time to instantly update the server profile.

To manually sync the server, click the "Sync Server" button in the MCP server admin interface.

How is the quality score calculated?

The overall quality score combines two components: Tool Definition Quality (70%) and Server Coherence (30%).

Tool Definition Quality measures how well each tool describes itself to AI agents. Every tool is scored 1–5 across six dimensions: Purpose Clarity (25%), Usage Guidelines (20%), Behavioral Transparency (20%), Parameter Semantics (15%), Conciseness & Structure (10%), and Contextual Completeness (10%); the weighted average is the tool's TDQS (Tool Definition Quality Score). The server-level definition quality score is calculated as 60% mean TDQS + 40% minimum TDQS, so a single poorly described tool pulls the score down.

Server Coherence evaluates how well the tools work together as a set, scoring four dimensions equally: Disambiguation (can agents tell tools apart?), Naming Consistency, Tool Count Appropriateness, and Completeness (are there gaps in the tool surface?).

Tiers are derived from the overall score: A (≥3.5), B (≥3.0), C (≥2.0), D (≥1.0), F (<1.0). B and above is considered passing.
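The weighting described above can be sketched in a few lines of Python. This is an illustration of the published formula, not Glama's actual implementation: the function names are made up, and the sample dimension scores (Purpose 5, Usage 3, Behavior 4, Parameters 3, Conciseness 5, Completeness 4) mirror one of the tool cards shown earlier on this page.

```python
def tdqs(purpose, usage, behavior, parameters, conciseness, completeness):
    """Per-tool TDQS: weighted average of the six 1-5 dimension scores."""
    return (0.25 * purpose + 0.20 * usage + 0.20 * behavior
            + 0.15 * parameters + 0.10 * conciseness + 0.10 * completeness)

def definition_quality(tool_scores):
    """Server-level definition quality: 60% mean + 40% minimum,
    so one poorly described tool drags the whole score down."""
    return 0.6 * (sum(tool_scores) / len(tool_scores)) + 0.4 * min(tool_scores)

def overall(tool_scores, coherence):
    """Overall quality: 70% tool definition quality + 30% server coherence."""
    return 0.7 * definition_quality(tool_scores) + 0.3 * coherence

def tier(score):
    """Map an overall 1-5 score to a letter tier; B and above is passing."""
    for letter, cutoff in (("A", 3.5), ("B", 3.0), ("C", 2.0), ("D", 1.0)):
        if score >= cutoff:
            return letter
    return "F"

# Dimension scores matching one of the tool cards above.
print(round(tdqs(5, 3, 4, 3, 5, 4), 2))  # → 4.0
```

Note how the 40% minimum-score term works: a server whose tools all score 4.0 except one at 2.0 gets a definition quality well below the plain mean.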

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/Frontier-Compute/zcash-mcp'
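The same endpoint can be called from a script. A minimal Python sketch: the URL is taken from the curl command above, but the helper names and the JSON handling are illustrative, and the response fields are not documented here.

```python
import json
from urllib.parse import quote
from urllib.request import urlopen

API_BASE = "https://glama.ai/api/mcp/v1"

def server_api_url(owner: str, repo: str) -> str:
    """Build the MCP directory API URL for a server identified by owner/repo."""
    return f"{API_BASE}/servers/{quote(owner)}/{quote(repo)}"

def fetch_server(owner: str, repo: str) -> dict:
    """GET the server record and parse the JSON body (field names undocumented here)."""
    with urlopen(server_api_url(owner, repo)) as resp:
        return json.load(resp)

print(server_api_url("Frontier-Compute", "zcash-mcp"))
# → https://glama.ai/api/mcp/v1/servers/Frontier-Compute/zcash-mcp
```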

If you have feedback or need assistance with the MCP directory API, please join our Discord server.