
zkproofport-ai

Ownership verified

Server Details

Zero-knowledge proof generation MCP server. AI agents can prove identity claims (Coinbase KYC, Country, Google OIDC, Google Workspace, Microsoft 365) without revealing personal data.

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.


Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: A

Average 4.4/5 across 3 of 3 tools scored.

Server Coherence: A
Disambiguation: 5/5

The three tools have completely distinct purposes with no overlap: get_supported_circuits is for discovery, get_guide is for preparation, and prove is for execution. Each serves a different stage in the proof generation workflow, making them easily distinguishable.

Naming Consistency: 5/5

All tool names follow a consistent verb_noun pattern (get_supported_circuits, get_guide, prove). While 'prove' is a single verb without a noun, it's semantically clear and maintains a clean, predictable naming convention throughout the set.

Tool Count: 5/5

Three tools is perfectly appropriate for this server's scope of ZK proof generation. The tools cover the essential workflow stages: discovery, preparation, and execution, with no unnecessary duplication or missing core functionality.

Completeness: 5/5

The tool set provides complete coverage for the ZK proof generation domain: discovery (get_supported_circuits), preparation guidance (get_guide), and execution (prove). There are no obvious gaps in the workflow, and the tools logically guide users through the entire process from start to finish.

Available Tools

3 tools
get_guide: A

Get a comprehensive step-by-step guide for preparing all inputs required for a specific circuit. Read this BEFORE attempting proof generation — the guide covers how to compute signal_hash, nullifier, scope_bytes, merkle_root, how to query EAS GraphQL for the attestation, how to RLP-encode the transaction, how to recover secp256k1 public keys, and how to build the Merkle proof.
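The RLP-encoding step the guide covers can be illustrated with a minimal encoder. This is a sketch of the generic RLP rules, not the server's implementation; in practice use an audited library such as the `rlp` package. It also shows where `tx_length` comes from: the encoded byte length before any zero-padding.

```python
# Minimal RLP (Recursive Length Prefix) encoder, the format used for
# Ethereum transactions. Illustrative only; use an audited library
# (e.g. the `rlp` package) for real attestation transactions.

def rlp_encode(item) -> bytes:
    if isinstance(item, bytes):
        # A single byte below 0x80 encodes as itself.
        if len(item) == 1 and item[0] < 0x80:
            return item
        return _length_prefix(len(item), 0x80) + item
    if isinstance(item, list):
        payload = b"".join(rlp_encode(x) for x in item)
        return _length_prefix(len(payload), 0xC0) + payload
    raise TypeError("RLP items must be bytes or lists")

def _length_prefix(length: int, offset: int) -> bytes:
    if length < 56:
        return bytes([offset + length])
    length_bytes = length.to_bytes((length.bit_length() + 7) // 8, "big")
    return bytes([offset + 55 + len(length_bytes)]) + length_bytes
```

For example, `rlp_encode([b"cat", b"dog"])` yields `b"\xc8\x83cat\x83dog"`, and `len(rlp_encode(tx_fields))` is the `tx_length` value before the server's padding to 300 bytes.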

Parameters (JSON Schema):

  • circuit (required): Circuit alias to get the guide for.
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively communicates this is a read-only informational tool ('Get a comprehensive step-by-step guide') that doesn't perform mutations. However, it doesn't mention potential rate limits, authentication requirements, or response format details that would be helpful for a guide-fetching operation.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded with the core purpose. The second sentence provides valuable context about the guide's contents, though it could be slightly more concise by grouping related concepts (e.g., 'how to compute signal_hash, nullifier, scope_bytes, merkle_root' could be 'how to compute required cryptographic inputs').

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a single-parameter tool with no annotations and no output schema, the description provides good context about what the guide contains and when to use it. However, it doesn't describe the return format (e.g., whether it's markdown, JSON, or structured data), which would be important for a guide-fetching tool with no output schema defined.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema has 100% description coverage, so the baseline is 3. The description doesn't add specific parameter semantics beyond what the schema provides (circuit enum values), though it does mention the guide covers specific technical topics like 'signal_hash' and 'merkle_root' that relate to the circuit parameter's purpose.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('Get a comprehensive step-by-step guide') and resources ('preparing all inputs required for a specific circuit'). It distinguishes from sibling tools by focusing on preparation guidance rather than listing circuits (get_supported_circuits) or executing proofs (prove).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit usage guidance with 'Read this BEFORE attempting proof generation,' creating a clear temporal dependency with the 'prove' sibling tool. It also implicitly suggests this tool should be used for preparation while 'prove' is for execution, establishing a workflow relationship.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_supported_circuits: A

List all ZK circuits supported by ZKProofport. Call this first to discover available circuits before starting proof generation.

AVAILABLE MCP TOOLS (use EXACT names — no other tool names exist):

  1. get_supported_circuits — this tool (discovery)

  2. prove — submit proof inputs (redirects to REST endpoint for long-running proof generation)

IMPORTANT: Do NOT call "generate_proof", "proof_request", or any other tool name. The correct flow is: get_supported_circuits → prove (x402 single-step: POST → 402 → pay → retry)

CIRCUITS:

  1. coinbase_attestation ("coinbase_kyc")

    • Proves the user has passed Coinbase KYC identity verification

    • EAS Schema ID: 0xf8b05c79f090979bf4a80270aba232dff11a10d9ca55c4f88de95317970f0de9

    • Verifier (Base Sepolia, chain 84532): 0x0036b61dbfab8f3cfeef77dd5d45f7efbfe2035c

    • Required inputs: address, signature, scope

    • Use circuit = "coinbase_kyc" in the prove tool

  2. coinbase_country_attestation ("coinbase_country")

    • Proves the user's country of residence from Coinbase attestation is in (or not in) a given country list

    • EAS Schema ID: 0x1801901fabd0e6189356b4fb52bb0ab855276d84f7ec140839fbd1f6801ca065

    • Verifier (Base Sepolia, chain 84532): 0xdee363585926c3c28327efd1edd01cf4559738cf

    • Required inputs: address, signature, scope, countryList, isIncluded

    • Use circuit = "coinbase_country" in the prove tool

CHAIN INFORMATION:

  • Current deployments are on Base Sepolia (chain ID 84532)

  • EAS (Ethereum Attestation Service) on Base: https://base.easscan.org/graphql

  • EAS on Base Sepolia: https://base-sepolia.easscan.org/graphql
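Querying the EAS GraphQL endpoint for a user's attestation might look like the sketch below. The query and filter field names (`attestations`, `schemaId`, `recipient`, `revoked`) follow the public EAS GraphQL schema but are assumptions here; verify them against the endpoint's own schema before relying on this.

```python
import json
import urllib.request

# Base Sepolia endpoint; domain spelling assumed from the EAS explorer.
EAS_GRAPHQL = "https://base-sepolia.easscan.org/graphql"
KYC_SCHEMA = "0xf8b05c79f090979bf4a80270aba232dff11a10d9ca55c4f88de95317970f0de9"

def build_attestation_query(recipient: str, schema_id: str) -> bytes:
    """Build the JSON request body for an EAS attestation lookup."""
    query = """
    query Attestations($where: AttestationWhereInput) {
      attestations(where: $where, take: 1) {
        id
        txid
        attester
        data
      }
    }"""
    variables = {"where": {
        "schemaId": {"equals": schema_id},
        "recipient": {"equals": recipient},
        "revoked": {"equals": False},
    }}
    return json.dumps({"query": query, "variables": variables}).encode()

# To execute (network call, so not run here):
# req = urllib.request.Request(
#     EAS_GRAPHQL, data=build_attestation_query(user_address, KYC_SCHEMA),
#     headers={"Content-Type": "application/json"})
# resp = json.loads(urllib.request.urlopen(req).read())
```

The returned `txid` is the attestation transaction to RLP-encode for the `raw_transaction` input.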

AUTHORIZED COINBASE ATTESTERS (used for Merkle proof construction):

  • 0x952f32128AF084422539C4Ff96df5C525322E564 (index 0)

  • 0x8844591D47F17bcA6F5dF8f6B64F4a739F1C0080 (index 1)

  • 0x88fe64ea2e121f49bb77abea6c0a45e93638c3c5 (index 2)

  • 0x44ace9abb148e8412ac4492e9a1ae6bd88226803 (index 3)
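The merkle_proof, leaf_index, and depth inputs relate to this attester list as sketched below, using sha256 as a stand-in hash. The actual circuit's hash function and leaf derivation (most likely keccak256 over attester public keys, not sha256 over addresses) are assumptions; this only illustrates the tree mechanics.

```python
import hashlib

def h(data: bytes) -> bytes:
    # Stand-in hash; the real circuit likely uses keccak256. Illustrative only.
    return hashlib.sha256(data).digest()

ATTESTERS = [  # authorized Coinbase attesters, in index order
    "0x952f32128AF084422539C4Ff96df5C525322E564",
    "0x8844591D47F17bcA6F5dF8f6B64F4a739F1C0080",
    "0x88fe64ea2e121f49bb77abea6c0a45e93638c3c5",
    "0x44ace9abb148e8412ac4492e9a1ae6bd88226803",
]

def merkle_proof(leaves, leaf_index):
    """Return (root, siblings): one sibling hash per tree level."""
    level, idx, siblings = list(leaves), leaf_index, []
    while len(level) > 1:
        if len(level) % 2:                # duplicate the last node on odd levels
            level.append(level[-1])
        siblings.append(level[idx ^ 1])   # sibling shares the same parent
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        idx //= 2
    return level[0], siblings

def verify(root, leaf, siblings, leaf_index):
    node, idx = leaf, leaf_index
    for sib in siblings:
        node = h(sib + node) if idx % 2 else h(node + sib)
        idx //= 2
    return node == root

leaves = [h(bytes.fromhex(a[2:])) for a in ATTESTERS]
root, proof = merkle_proof(leaves, 2)     # prove attester index 2
assert verify(root, leaves[2], proof, 2)
```

Here depth equals `len(siblings)` (2 for four leaves, within the circuit's max of 8) and leaf_index is the attester's position in the list above.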

USDC ADDRESSES (for payment):

  • Base Sepolia (testnet): 0x036CbD53842c5426634e7929541eC2318f3dCF7e

  • Base mainnet: 0x833589fCD6eDb6E08f4c7C32D4f71b54bdA02913

Response fields:

  • circuits (array): List of supported circuits with id, displayName, description, requiredInputs, easSchemaId, verifierAddress

  • chainId (string): Chain ID for verifier addresses

Parameters (JSON Schema): none.

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes the tool's behavior as a discovery/listing operation that returns circuit metadata, including required inputs, chain information, and authorized attestation details. However, it doesn't mention potential limitations like rate limits, error conditions, or whether the list is static or dynamically updated.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

While the description is comprehensive, it includes substantial implementation details (EAS Schema IDs, verifier addresses, USDC addresses, authorized attestation indices) that might be better placed in documentation rather than the core tool description. The first two sentences effectively communicate the purpose and usage, but the subsequent sections add significant bulk that reduces conciseness.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (discovery tool with rich metadata) and no output schema, the description provides excellent contextual completeness by detailing the response structure, chain information, and circuit-specific metadata. However, without annotations and with no output schema, it could benefit from explicitly stating the tool's read-only nature and any authentication requirements.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 0 parameters with 100% coverage, so the baseline would be 4. The description appropriately doesn't discuss input parameters since none exist, but it does provide extensive output semantics by detailing the 'circuits' array structure, 'chainId' field, and all the metadata that will be returned for each circuit.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'List all ZK circuits supported by ZKProofport' with the specific verb 'List' and resource 'ZK circuits'. It explicitly distinguishes from its sibling 'prove' by stating 'Call this first to discover available circuits before starting proof generation', establishing a clear sequential relationship and differentiation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit usage guidelines: 'Call this first to discover available circuits before starting proof generation' establishes when to use it. It also specifies alternatives by naming the 'prove' tool and explicitly warns against incorrect tool names like 'generate_proof' or 'proof_request', providing clear when-not-to-use guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

prove: A

Submit proof inputs to generate a ZK proof via the x402 single-step flow. Atomically verifies USDC payment on-chain and runs the Noir circuit in a TEE to produce a Groth16 SNARK proof.

IMPORTANT: MCP tool calls have timeout limitations that make this tool UNSUITABLE for the 30-90 second proof generation process. This tool returns a redirect message. Use the REST endpoint directly:

  • POST https://stg-ai.zkproofport.app/api/v1/prove (staging)

  • POST https://ai.zkproofport.app/api/v1/prove (production)

x402 SINGLE-STEP FLOW:

  1. POST /api/v1/prove with { circuit, inputs } — no payment yet

  2. Server returns 402 with nonce in body

  3. Pay USDC using nonce, get tx hash

  4. Retry POST /api/v1/prove with same body + X-Payment-TX and X-Payment-Nonce headers
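The four steps above can be sketched as a single helper. The shape of the 402 response body (a top-level "nonce" key) and the exact header names are taken from this description; `pay_usdc` stands in for a real wallet-side USDC transfer and is hypothetical.

```python
import json
import urllib.request
import urllib.error

PROVE_URL = "https://stg-ai.zkproofport.app/api/v1/prove"  # staging endpoint

def prove_with_x402(body: dict, pay_usdc, post=None):
    """Run the x402 single-step flow: POST -> 402 -> pay -> retry.

    pay_usdc(nonce) -> tx_hash is a caller-supplied payment function.
    post is injectable for testing; the default POSTs JSON via urllib
    and returns (status, parsed_body).
    """
    if post is None:
        def post(url, payload, headers):
            req = urllib.request.Request(
                url, data=json.dumps(payload).encode(),
                headers={"Content-Type": "application/json", **headers})
            try:
                with urllib.request.urlopen(req) as resp:
                    return resp.status, json.loads(resp.read())
            except urllib.error.HTTPError as e:
                return e.code, json.loads(e.read())

    status, resp = post(PROVE_URL, body, {})   # step 1: no payment yet
    if status != 402:
        return resp                            # already paid, or an error body
    nonce = resp["nonce"]                      # step 2: nonce from the 402 body
    tx_hash = pay_usdc(nonce)                  # step 3: pay USDC, get tx hash
    status, resp = post(PROVE_URL, body, {     # step 4: retry with payment headers
        "X-Payment-TX": tx_hash,
        "X-Payment-Nonce": nonce,
    })
    return resp
```

The same body must be sent on both POSTs; only the two payment headers are added on the retry.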

REQUEST BODY SCHEMA:

```
{
  "circuit": "coinbase_kyc" | "coinbase_country",
  "inputs": {
    "signal_hash": "",                // 0x, 32 bytes: keccak256(abi.encodePacked(address, scope, circuitId))
    "nullifier": "",                  // 0x, 32 bytes: privacy-preserving unique identifier
    "scope_bytes": "",                // 0x, 32 bytes: keccak256 of the scope string
    "merkle_root": "",                // 0x, 32 bytes: Merkle root of authorized attesters
    "user_address": "",               // 0x, 20 bytes: the KYC wallet address
    "signature": "",                  // 65-byte hex: eth_sign(signal_hash) by KYC wallet
    "user_pubkey_x": "",              // 32-byte hex: secp256k1 public key X coordinate
    "user_pubkey_y": "",              // 32-byte hex: secp256k1 public key Y coordinate
    "raw_transaction": "",            // 0x-prefixed RLP-encoded EAS attestation TX (padded to 300 bytes by server)
    "tx_length": ,                    // actual byte length of raw_transaction BEFORE zero-padding
    "coinbase_attester_pubkey_x": "", // 32-byte hex: Coinbase attester secp256k1 X coordinate
    "coinbase_attester_pubkey_y": "", // 32-byte hex: Coinbase attester secp256k1 Y coordinate
    "merkle_proof": ["", ...],        // array of 32-byte hex sibling hashes (one per tree level)
    "leaf_index": ,                   // 0-based index of attester leaf in the Merkle tree
    "depth": ,                        // number of levels in the Merkle tree (max 8)
    "country_list": ["", ...],        // optional: only for coinbase_country circuit
    "is_included":                    // optional: only for coinbase_country circuit
  }
}
```

VERIFIER ADDRESSES (Base Sepolia, chain ID 84532):

  • coinbase_kyc (coinbase_attestation): 0x0036b61dbfab8f3cfeef77dd5d45f7efbfe2035c

  • coinbase_country (coinbase_country_attestation): 0xdee363585926c3c28327efd1edd01cf4559738cf

Parameters (JSON Schema):

  • inputs (required): All circuit inputs required to generate the ZK proof.

  • circuit (required): Which circuit to use.
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It thoroughly describes the tool's behavior: it's unsuitable for direct proof generation due to timeouts, returns a redirect message, requires a multi-step payment flow (x402), involves on-chain USDC verification, runs in a TEE, and produces a Groth16 SNARK proof. It also includes verifier addresses and network details, adding critical context beyond basic functionality.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with clear sections (purpose, important note, REST endpoints, flow steps, request body, verifier addresses) and is appropriately sized for a complex tool. However, it includes some redundancy (e.g., repeating schema details) and could be more front-loaded; the critical 'unsuitable for MCP' warning is early but buried in a dense paragraph. Every sentence adds value, but minor trimming could improve efficiency.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's high complexity (involves cryptography, payments, and multi-step flows), no annotations, and no output schema, the description provides exceptional completeness. It covers purpose, usage constraints, behavioral details, parameter context, verifier addresses, and network specifics. This fully compensates for the lack of structured metadata, ensuring an agent has all necessary context to understand and invoke the tool correctly, despite its intricacies.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all parameters comprehensively. The description includes a detailed request body schema section that largely repeats the input schema information, adding minimal extra semantic value (e.g., clarifying padding for raw_transaction). This meets the baseline of 3, as the schema does the heavy lifting, but the description doesn't significantly enhance parameter understanding beyond what's already structured.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Submit proof inputs to generate a ZK proof via the x402 single-step flow. Atomically verifies USDC payment on-chain and runs the Noir circuit in a TEE to produce a Groth16 SNARK proof.' This is specific (verb: submit, generate; resource: proof) and distinguishes it from sibling tools like get_guide or get_supported_circuits, which are informational rather than proof-generation tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when NOT to use this tool: 'MCP tool calls have timeout limitations that make this tool UNSUITABLE for the 30-90 second proof generation process. This tool returns a redirect message. Use the REST endpoint directly.' It also specifies alternatives (direct REST endpoints for staging/production) and outlines the multi-step flow required for successful proof generation, making it clear when and how to use it versus other methods.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
