
Server Details

MCP server: NEED + YIELD + CLEAN-MONEY gates with EIP-3009 attestations · Hive Civilization

Status: Healthy
Transport: Streamable HTTP
Repository: srotzin/hive-mcp-evaluator
GitHub Stars: 0

Tool Descriptions: A

Average 4.1/5 across 7 of 7 tools scored. Lowest: 3.5/5.

Server Coherence: A
Disambiguation: 5/5

All tools have clear, distinct purposes. The 'evaluator_' and 'hive_earn_' groups address separate domains with no overlap, and within each group, each tool covers a unique action (e.g., submit vs. get vs. attest).

Naming Consistency: 5/5

All tool names follow a consistent 'prefix_verb_noun' pattern using snake_case. The prefixes 'evaluator_' and 'hive_earn_' clearly separate the two functional groups, and verbs like 'submit', 'get', 'attest', 'register' are standard.

Tool Count: 4/5

7 tools is a reasonable number for a server covering two distinct subdomains. The count is not excessive, but the dual-purpose scope (evaluator + earn) makes the server slightly broader than the name suggests.

Completeness: 4/5

The evaluator group covers the key lifecycle: submit, retrieve status, check fees, and finalize attestation. The earn group covers registration, personal lookup, and leaderboard. Minor gaps like job cancellation or payout history exist but are not critical for basic workflows.

Available Tools

7 tools
evaluator_attest_job: A

Trigger settlement and emit the on-chain attestation for a completed job. Settles to the Hive Safe Treasury on the chain selected at submission. Requires EIP-3009 signature for Base/Ethereum.

Parameters:
- job_id (required): Job ID returned from evaluator_submit_job
- signature (optional): EIP-3009 signature (required for EVM chains)
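The conditional requirement above (signature optional in the schema, mandatory for Base/Ethereum) is the kind of thing an agent can guard client-side before calling the tool. A minimal sketch, assuming hypothetical chain identifiers "base" and "ethereum" — the actual chain names accepted at submission are not documented here:

```python
# Hypothetical client-side guard for evaluator_attest_job. The schema marks
# 'signature' optional, but the description says an EIP-3009 signature is
# required for Base/Ethereum; the chain identifiers below are assumptions.
from typing import Optional

EVM_CHAINS = {"base", "ethereum"}  # assumed names, not taken from the schema

def build_attest_args(job_id: str, chain: str, signature: Optional[str] = None) -> dict:
    """Assemble the argument dict for an evaluator_attest_job call."""
    if chain.lower() in EVM_CHAINS and not signature:
        raise ValueError(f"EIP-3009 signature required for EVM chain {chain!r}")
    args = {"job_id": job_id}
    if signature is not None:
        args["signature"] = signature
    return args
```

Failing fast like this avoids triggering a settlement attempt that the upstream would reject for a missing signature.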
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description discloses the mutation (trigger settlement) and on-chain effect, but omits details on reversibility, permissions, costs, or error states. Adequate but not thorough.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, front-loaded with action, no filler. Efficient and clear.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Covers main purpose, destination, and a key requirement. Missing return value info and error conditions, but given no output schema, it is reasonably complete for a simple trigger.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, and the description adds meaning: job_id comes from evaluator_submit_job, and signature is conditionally required for EVM chains, which clarifies the schema's optional flag.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool triggers settlement and emits on-chain attestation for a completed job, specifying destination and chain. It distinguishes from siblings like evaluator_submit_job (submission) and read-only tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The context of use (after job completion) is implied, and the EIP-3009 signature requirement provides chain-specific guidance. However, it lacks explicit when-not-to-use or alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

evaluator_get_fees: A

Get the live evaluator fee schedule (3 tiers, settlement currencies, recipient addresses, ERC-8183 / Virtuals ACP v2.0 spec). No auth required.

Parameters: none

Behavior: 4/5

No annotations are provided, so the description carries the burden. It reveals the tool is read-only ('live'), idempotent, and free (no auth). It also specifies the spec version (ERC-8183 / Virtuals ACP v2.0), adding valuable behavioral context.

Conciseness: 5/5

The description is a single sentence that packs essential information: purpose, content, spec version, and auth requirement. Every word adds value, no waste.

Completeness: 5/5

Given the absence of an output schema, the description sufficiently explains what the tool returns (3 tiers, currencies, addresses, spec). For a read-only fee schedule retrieval with no parameters, it is fully adequate.

Parameters: 4/5

The input schema has zero parameters, so the description adds meaning beyond the schema by detailing what the output contains (fee tiers, currencies, addresses). The baseline for zero parameters is 4.

Purpose: 5/5

The description clearly states the tool retrieves the live evaluator fee schedule, listing specific contents (3 tiers, settlement currencies, recipient addresses, spec). It distinguishes from sibling tools like evaluator_attest_job and hive_earn_*, which serve different purposes.

Usage Guidelines: 4/5

The description notes 'No auth required,' simplifying usage context. While it lacks explicit when-to-use vs alternatives, the purpose is self-contained and isolated from siblings.

evaluator_get_job: A

Retrieve evaluation status, verdict, and attestation for a previously-submitted job.

Parameters:
- job_id (required): Job ID returned from evaluator_submit_job
Behavior: 3/5

No annotations provided; the description indicates a read operation ('retrieve') but lacks details on side effects, rate limits, or idempotency.

Conciseness: 5/5

One sentence, concise and front-loaded, no unnecessary words.

Completeness: 4/5

Adequate for a simple retrieval tool with one parameter and no output schema; could mention error handling but not required.

Parameters: 3/5

Single parameter 'job_id' with 100% schema coverage. The description adds no significant meaning beyond the schema description, so the baseline of 3 applies.

Purpose: 5/5

Clearly states the verb 'retrieve' and the resource 'evaluation status, verdict, and attestation' for a previously-submitted job, distinguishing it from sibling tools like evaluator_submit_job.

Usage Guidelines: 3/5

Implies usage after job submission but provides no explicit when-to-use or when-not-to-use guidance, nor alternatives.

evaluator_submit_job: A

Submit a job for evaluation. Choose tier (simple, evaluation, arbitration). Job value is quoted in USDC; fee = max($0.05, value * tier_bps / 10000). Returns job_id and quoted fee.

Parameters:
- tier (required): 'simple' (0.5%), 'evaluation' (1.0%), or 'arbitration' (2.0%)
- context (optional): Free-form context for the evaluator (max 4 KB)
- subject_did (required): DID of the agent or output being evaluated
- submitter_did (required): DID of the submitting agent
- job_value_usdc (required): Notional job value in USDC
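The quoted-fee formula from the description can be sketched directly. The basis-point values below are derived from the tier percentages listed above (0.5% = 50 bps, 1.0% = 100 bps, 2.0% = 200 bps); this is an illustration of the documented arithmetic, not server code:

```python
# Sketch of the documented fee formula: fee = max($0.05, value * tier_bps / 10000).
# Basis points are derived from the tier percentages in the parameter table.
TIER_BPS = {"simple": 50, "evaluation": 100, "arbitration": 200}

def quote_fee(tier: str, job_value_usdc: float) -> float:
    """Return the quoted evaluator fee in USDC for a given tier and job value."""
    bps = TIER_BPS[tier]
    return max(0.05, job_value_usdc * bps / 10_000)
```

For example, a $100 job on the 'evaluation' tier quotes $1.00, while a $5 'simple' job (raw fee $0.025) hits the $0.05 floor.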
Behavior: 3/5

No annotations exist, so the description carries the full burden. It discloses creation of the job, the fee calculation, and the return of job_id/quoted fee. It lacks details on side effects, auth requirements, or constraints (e.g., min/max job value).

Conciseness: 5/5

Three sentences, front-loaded with the verb 'Submit'. No redundant information.

Completeness: 4/5

Covers main behavior and returns despite no output schema. Minor gap: no constraints on job_value_usdc (e.g., minimum) or rate limits.

Parameters: 3/5

Schema coverage is 100%, so the description adds marginal value. It reiterates the tier percentages and fee formula already in the schema descriptions, but doesn't provide deeper semantics beyond what the schema offers.

Purpose: 5/5

Clearly states submission of a job for evaluation with tier selection, fee computation, and return values. Distinguishes from siblings which are about attestation, fee lookup, or retrieval.

Usage Guidelines: 4/5

Provides clear usage context: selecting tier and quoting job value with fee formula. Does not explicitly state when not to use, but siblings imply alternatives for post-submission actions.

hive_earn_leaderboard: A

Top earning agents on the Hive Civilization, by attribution payout in USDC. Real Base USDC settlement. Calls GET https://hivemorph.onrender.com/v1/earn/leaderboard?window=. Returns "rails not yet live" gracefully if upstream is not yet deployed.

Parameters:
- window (optional, default "7d"): Time window. One of: "7d", "30d", "lifetime".
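The upstream call the tool wraps can be sketched as a URL builder that validates the window enum. This is only an illustration of the endpoint and values shown in the description, not an official client:

```python
# Sketch of the documented upstream request for hive_earn_leaderboard.
# Validates the window enum before building the query URL.
from urllib.parse import urlencode

VALID_WINDOWS = ("7d", "30d", "lifetime")

def leaderboard_url(window: str = "7d") -> str:
    """Build the leaderboard query URL for a validated time window."""
    if window not in VALID_WINDOWS:
        raise ValueError(f"window must be one of {VALID_WINDOWS}, got {window!r}")
    return "https://hivemorph.onrender.com/v1/earn/leaderboard?" + urlencode({"window": window})
```

Validating the enum client-side gives a clearer error than whatever the upstream returns for an unknown window.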
Behavior: 3/5

No annotations provided, so the description must carry behavioral info. It discloses the external API call and graceful error handling for upstream unavailability. However, it does not mention idempotency, caching, or data freshness. Adequate but not comprehensive.

Conciseness: 5/5

Description is two sentences: first defines purpose, second provides technical detail and error handling. Front-loaded and efficient with no unnecessary words.

Completeness: 3/5

No output schema exists, so the description should explain the return format. It says 'top earning agents...', implying a list, but lacks details on structure (e.g., fields, pagination). Adequate for a simple tool but could be more explicit.

Parameters: 3/5

Schema coverage is 100% for the single parameter, with a clear enum and description. The description adds the URL template but no new semantics beyond what the schema already provides. The baseline score of 3 is appropriate.

Purpose: 4/5

The description clearly states the tool returns top earning agents by attribution payout in USDC, naming the resource and metric. It is distinct from siblings like evaluator_* and hive_earn_me, but does not explicitly differentiate itself.

Usage Guidelines: 3/5

The description implies usage for viewing leaderboard data but does not explicitly state when to use this tool over alternatives or provide exclusions. Sibling tools cover different functionality, so the lack of guidance is less critical.

hive_earn_me: A

Look up the caller agent's registered earn profile, lifetime + pending USDC balance, last payout tx hash, and next-payout ETA. Real Base USDC, no mock data. Calls GET https://hivemorph.onrender.com/v1/earn/me?agent_did=. Returns "rails not yet live" gracefully if upstream is not yet deployed.

Parameters:
- agent_did (required): Agent DID to look up.
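Because DIDs contain colons, the agent_did value needs percent-encoding when placed in the query string shown in the description. A minimal sketch, using a hypothetical example DID:

```python
# Sketch of the documented upstream request for hive_earn_me.
# urlencode percent-encodes the ':' characters in the DID.
from urllib.parse import urlencode

def earn_me_url(agent_did: str) -> str:
    """Build the /v1/earn/me query URL for a caller's DID."""
    return "https://hivemorph.onrender.com/v1/earn/me?" + urlencode({"agent_did": agent_did})
```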
Behavior: 4/5

Without annotations, the description discloses key behaviors: it performs a GET request, uses real Base USDC, and returns an error message if upstream is not deployed. However, it does not mention authentication requirements, rate limits, or what happens if the agent is not registered.

Conciseness: 5/5

The description is two sentences long, immediately states the purpose, and includes key technical details without any extraneous text.

Completeness: 4/5

Given no output schema, the description outlines return values (balance, payout hash, ETA) and error handling. It is mostly complete but misses prerequisites like needing to be registered first.

Parameters: 4/5

The sole parameter 'agent_did' is fully described in the schema, and the description adds value by indicating it should be the caller's own DID and showing how it's used in the endpoint URL.

Purpose: 5/5

The description clearly states the tool retrieves the caller agent's earn profile with specific data fields (balance, payout tx, next-payout ETA). It distinguishes itself from siblings like 'hive_earn_register' and 'hive_earn_leaderboard' by focusing on a self-lookup profile.

Usage Guidelines: 3/5

The description implies use for the caller's own profile but does not explicitly state when to use versus alternatives or when not to use. No mention of prerequisites like registration or comparison with sibling tools.

hive_earn_register: A

Register an agent for the Hive Civilization attribution payout program. Settlement on real Base USDC. 5% kickback on attributed traffic, weekly payout. Calls POST https://hivemorph.onrender.com/v1/earn/register on behalf of the caller. Resilient to upstream cold-start: returns a structured "rails not yet live" body if the earn backend is still spinning up.

Parameters:
- agent_did (required): Caller agent DID (e.g. did:hive:0x… or did:web:…)
- payout_address (required): Base L2 EVM address (0x…) to receive USDC kickback payouts
- attribution_url (required): Public URL of the agent / page driving attributed traffic to Hive; used for ranking + audit
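The three required parameters can be validated client-side before the POST. The payload keys below mirror the parameter names in the table; whether the upstream expects exactly this JSON shape is an assumption:

```python
# Hypothetical client-side validation for hive_earn_register.
# Field names mirror the documented parameters; the exact body format
# expected by the upstream endpoint is an assumption.
import re

def build_register_payload(agent_did: str, payout_address: str, attribution_url: str) -> dict:
    """Validate inputs and assemble a registration POST body."""
    if not agent_did.startswith("did:"):
        raise ValueError("agent_did must be a DID (did:...)")
    if not re.fullmatch(r"0x[0-9a-fA-F]{40}", payout_address):
        raise ValueError("payout_address must be a 0x-prefixed 20-byte EVM address")
    return {
        "agent_did": agent_did,
        "payout_address": payout_address,
        "attribution_url": attribution_url,
    }
```

Checking the address format locally matters here: a malformed payout_address would silently misdirect weekly USDC payouts.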
Behavior: 4/5

Discloses the HTTP endpoint, resilience to cold-start with a structured error response, and payout details. Without annotations, the description covers the key behavioral aspects well.

Conciseness: 5/5

Four sentences, front-loaded with purpose. Each sentence adds value without redundancy.

Completeness: 4/5

Covers purpose, payout details, endpoint, and error handling. No output schema, but mentions the response structure briefly. Sufficient for a registration tool.

Parameters: 3/5

The schema has 100% coverage with clear descriptions. The description adds overall context but does not significantly augment parameter meaning beyond what the schema provides.

Purpose: 5/5

Clearly states the tool registers an agent for the Hive Civilization attribution payout program. Differentiates from sibling tools like hive_earn_leaderboard and hive_earn_me.

Usage Guidelines: 4/5

Provides context on settlement, kickback, and payout timing, implying when to use. Lacks explicit when-not-to-use or alternative-tool guidance, but is overall clear.
