
Server Details

MCP server: solver auction across io.net / Akash / Render with signed receipts · Hive Civilization

Status: Healthy
Transport: Streamable HTTP
Repository: srotzin/hive-mcp-compute-grid
GitHub Stars: 0

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
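
Since the server speaks Streamable HTTP, a direct client connection is also straightforward. Below is a minimal sketch using the official MCP Python SDK; the endpoint URL is a placeholder (the page does not show it), and the later snippets on this page assume the `session` object created here.

```python
# Minimal connection sketch, assuming the official MCP Python SDK
# ("mcp" package). SERVER_URL is a placeholder -- the page does not
# display the real endpoint.
import asyncio

from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

SERVER_URL = "https://example.com/mcp"  # placeholder endpoint

async def main() -> None:
    async with streamablehttp_client(SERVER_URL) as (read, write, _get_session_id):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            print([t.name for t in tools.tools])  # expect the 14 tools below
            # Both listing tools take no parameters and need no auth.
            agents = await session.call_tool("computegrid_list_agents")
            providers = await session.call_tool("computegrid_list_providers")
            print(agents.content, providers.content)

asyncio.run(main())
```
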
Tool Descriptions (A)

Average 3.9/5 across 14 of 14 tools scored. Lowest: 3.3/5.

Server Coherence (A)
Disambiguation: 5/5

Each tool targets a distinct action in the compute grid or hive earn domains, with no overlapping purposes. The verbs clearly differentiate operations like booking, solving, verifying, and earning.

Naming Consistency: 5/5

All tool names follow a consistent pattern: 'computegrid_' or 'hive_earn_' prefix followed by a descriptive verb (e.g., list, get, solve, verify). No mixing of naming conventions.

Tool Count: 5/5

14 tools cover the compute grid and earn system well without excess. The number is appropriate for the scope, supporting core operations without unnecessary bloat.

Completeness: 4/5

The tool set covers primary workflows (listing, booking, auctions, verification, earning). Minor gaps exist, such as missing update/delete for bookings or a direct cancel tool, but the core functionality is solid.

Available Tools

14 tools
computegrid_audit (A)

Read recent compute_grid_auction entries from the canonical receipt ledger — cost_usdc telemetry per cleared auction. Useful for treasury reconciliation and provenance audits.

Parameters (JSON Schema)
limit (optional): Number of recent entries (default 20)
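
A usage sketch for this tool, assuming a connected `session` as in the connection example above. The JSON shape of the returned entries is an assumption; no output schema is published.

```python
import json

# Assumes `session` is an initialized ClientSession (see the connection
# sketch earlier). Pulls the 50 most recent cleared-auction receipts.
async def recent_auction_costs(session):
    result = await session.call_tool("computegrid_audit", arguments={"limit": 50})
    for item in result.content:
        if item.type == "text":
            entries = json.loads(item.text)  # assumed: JSON list with cost_usdc fields
            print(entries)
```
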
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description indicates a read-only operation ('Read recent... from the canonical receipt ledger'), which implies no destructive side effects. Since no annotations are provided, this disclosure is sufficient, though it could explicitly state its safety and absence of mutations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences with no redundancy. The first sentence immediately conveys the action and target, and the second adds context. Every word earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple read tool with one optional parameter and no output schema, the description covers purpose and outcome adequately. It could be enhanced by contrasting with sibling tools, but overall it is complete enough for an agent to use correctly.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema covers 100% of the single parameter 'limit' with a clear default and range. The description does not add extra meaning beyond the schema, so it meets the baseline for high coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Read'), specific resource ('compute_grid_auction entries from the canonical receipt ledger'), and data fields ('cost_usdc telemetry per cleared auction'). It also provides use cases, effectively distinguishing it from sibling tools that perform other operations.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description mentions use cases ('treasury reconciliation and provenance audits') but does not explicitly guide when to use this tool vs alternatives like computegrid_book or computegrid_status. It implies a read-only audit context but lacks direct comparisons.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

computegrid_book (A)

Reserve compute with the chosen provider after a /solve. Returns 503 with the missing-key error if the upstream provider key is not configured (real rails only — never fake a booking).

Parameters (JSON Schema)
quote (required): Chosen Quote object from /solve.chosen_quote
provider (required)
auction_id (required)
deadline_unix (required)
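
A sketch of the /solve to /book handoff this description implies. Only chosen_quote is documented on the solve payload; the provider and auction_id field names are assumptions, and the call returns the documented 503 until a real provider key is configured.

```python
import json
import time

async def solve_then_book(session):
    solve_res = await session.call_tool(
        "computegrid_solve",
        arguments={"gpu_type": "a100", "duration_hours": 2},
    )
    solved = json.loads(solve_res.content[0].text)  # assumed JSON text payload

    return await session.call_tool("computegrid_book", arguments={
        "quote": solved["chosen_quote"],                 # documented: /solve.chosen_quote
        "provider": solved["chosen_quote"]["provider"],  # assumed field
        "auction_id": solved["auction_id"],              # assumed field name
        "deadline_unix": int(time.time()) + 3600,        # one hour from now
    })
```
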
Behavior: 3/5

With no annotations, the description notes a specific error (503 for a missing key) and the 'real rails' constraint, but lacks details on success behavior, side effects, or idempotency.

Conciseness: 5/5

Two sentences, front-loaded with the main action and free of redundancy. Highly efficient.

Completeness: 2/5

For a tool with 4 parameters and no output schema, the description omits success behavior and full parameter explanation, leaving gaps for autonomous use.

Parameters: 2/5

Schema coverage is only 25%, and the description does not elaborate on parameters like auction_id or deadline_unix, missing the opportunity to compensate for low schema detail.

Purpose: 5/5

The description clearly states the tool's action ('Reserve compute') and context ('after a /solve'), distinguishing it from siblings like computegrid_quote and computegrid_solve.

Usage Guidelines: 4/5

The description explicitly places usage after a /solve, providing sequential context, but does not mention when not to use the tool or compare it to alternatives like computegrid_release.

computegrid_get_capacity (A)

Read-only capacity view from the Capacity Listener fleet. Per spec section 8: NO bids, NO hedges, NO positions, NO derivatives — pure read-only telemetry. Optional refresh=true triggers one upstream pull.

Parameters (JSON Schema)
n (optional): Top-N rows to return (default 32, max 256)
refresh (optional): Force a read-only refresh from the upstream Compute service
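
A minimal call sketch under the same `session` assumption; the output shape is undocumented.

```python
# Read-only telemetry: top 64 rows, forcing exactly one upstream pull.
async def capacity_snapshot(session):
    result = await session.call_tool("computegrid_get_capacity", arguments={
        "n": 64,          # top-N rows (default 32, max 256)
        "refresh": True,  # one read-only refresh from the upstream Compute service
    })
    return result.content
```
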
Behavior: 4/5

No annotations are provided, so the description carries the full burden. It clearly states the read-only nature, the absence of side effects, and the refresh behavior. It lacks details on permissions, latency, or caching, but is sufficient for a simple read-only tool.

Conciseness: 5/5

Three concise sentences front-load the key information: read-only scope, exclusions, and the refresh parameter. No wasted words.

Completeness: 4/5

With no output schema, the description lacks the return format. For a simple capacity telemetry tool this is acceptable, though a brief note on expected fields would improve it. Otherwise complete given the tool's complexity.

Parameters: 4/5

The schema covers both parameters with descriptions (100% coverage). The description adds context for refresh ('triggers one upstream pull') beyond the schema, and n is adequately explained in the schema.

Purpose: 5/5

Clearly states 'Read-only capacity view' and specifies telemetry from the Capacity Listener fleet. It distinguishes itself from siblings by explicitly stating that it does NOT perform bids, hedges, positions, or derivatives, making the purpose unambiguous.

Usage Guidelines: 4/5

Provides guidance on the optional refresh parameter and explicitly states exclusions (no bids/hedges/positions/derivatives), but does not recommend when to use this tool versus alternatives like computegrid_quote or computegrid_solve.

computegrid_list_agents (A)

List the 15-agent compute grid fleet across all 6 driver types (cross_pool_auction, workload_decomposition, verification_fleet, capacity_listener, qvac_mesh_orchestrator, settlement_reporting). Returns agent type, count, and revenue model. No auth required.

Parameters (JSON Schema)
No parameters

Behavior: 3/5

With no annotations, the description carries the full burden for behavioral transparency. It discloses that no authentication is required, which is useful, but omits traits such as data freshness, caching behavior, or whether the list is live or cached.

Conciseness: 5/5

Three short sentences, all essential: the first states the main action and scope, the others add the return fields and security context ('No auth required'). No waste, well front-loaded.

Completeness: 4/5

For a simple parameterless list tool, the description covers the essentials: what it lists, the scope, and authentication. Without an output schema it only hints at return fields; it could be more precise about the response format, but it is largely sufficient.

Parameters: 5/5

The input schema has zero parameters (100% schema coverage), so the baseline is 4. The description adds significant meaning by enumerating the six driver types and the data returned (agent type, count, revenue model), going well beyond the schema.

Purpose: 5/5

The description clearly states that the tool lists the compute grid fleet across all 6 driver types, specifying the returned fields (agent type, count, revenue model). It effectively distinguishes itself from sibling tools like computegrid_list_providers by focusing on agents.

Usage Guidelines: 3/5

The description provides no explicit guidance on when to use this tool versus alternatives. While it mentions 'No auth required', it does not indicate contexts where this tool is preferred over siblings like computegrid_status or computegrid_get_capacity.

computegrid_list_providers (A)

List enabled provider adapters (io.net, Akash, Render) and their upstream health. Returns required env vars for reservation paths. Real probes, no mocks.

Parameters (JSON Schema)
No parameters

Behavior: 4/5

The description discloses that it performs real probes (no mocking) and returns health and required env vars, which goes beyond the tool name. With no annotations, however, it misses potential side effects or failure modes.

Conciseness: 5/5

Three short sentences with no filler, front-loaded with the main action 'List enabled provider adapters'. Every word adds value.

Completeness: 5/5

Given no parameters, no output schema, and no annotations, the description covers what is listed, the output (health and env vars), and the real-time behavior. It is sufficient for an agent to understand and use the tool.

Parameters: 5/5

With zero parameters and 100% schema coverage, the description adds value by naming the exact provider adapters listed and noting that it returns health and env vars, enriching the meaning beyond the empty schema.

Purpose: 5/5

The description clearly states that the tool lists enabled provider adapters (specifically naming io.net, Akash, Render) and their upstream health, with the specific verb 'List' and resource 'provider adapters'. This distinguishes it from siblings like computegrid_list_agents.

Usage Guidelines: 3/5

There is no explicit guidance on when to use this tool versus alternatives. The phrase 'Real probes, no mocks' implies it is for live data, but no when-not-to-use conditions or alternative tool names are provided.

computegrid_quote (A)

Gather quotes across enabled compute providers WITHOUT running the auction (no proof, no ledger write). Useful for live price discovery on a workload.

Parameters (JSON Schema)
ram_gb (optional): RAM in GB (default 4)
gpu_type (required): cpu | rtx_4090 | rtx_3090 | rtx_a6000 | a100 | h100 | l40s
cpu_cores (optional): CPU cores (default 1)
budget_usd (optional): Budget cap in USD
duration_hours (required): Job duration in hours (>0)
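
A price-discovery sketch, assuming a connected `session`. Whether quotes above budget_usd are filtered or merely flagged is not documented.

```python
# Gather quotes only: no auction, no proof, no ledger write.
async def discover_prices(session):
    result = await session.call_tool("computegrid_quote", arguments={
        "gpu_type": "rtx_4090",  # one of the documented enum values
        "duration_hours": 4,     # must be > 0
        "ram_gb": 16,            # default 4
        "cpu_cores": 8,          # default 1
        "budget_usd": 25.0,      # optional cap; filtering behavior undocumented
    })
    return result.content
```
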
Behavior: 4/5

No annotations are provided, so the description carries the full burden. It clearly states that the tool does not run auctions, generate proofs, or write to the ledger, indicating a read-only, non-destructive operation. It does not, however, disclose behaviors like rate limits or data freshness.

Conciseness: 5/5

Two sentences efficiently convey purpose, scope, and behavioral constraints with no redundant information.

Completeness: 4/5

For a tool with 5 parameters, no output schema, and no annotations, the description adequately covers purpose and behavioral constraints. It could mention the return format, but is otherwise sufficient for agent understanding.

Parameters: 3/5

All 5 parameters have descriptions in the input schema (100% coverage). The description adds context about quote gathering but does not significantly elaborate on individual parameters beyond what the schema provides.

Purpose: 5/5

The description clearly states that the tool gathers quotes without running an auction, specifying the resource (compute providers) and distinguishing it from siblings like computegrid_book. It also mentions the use case of live price discovery.

Usage Guidelines: 4/5

The description indicates when to use the tool (live price discovery) and implies when not to (by noting 'without running the auction'), but does not explicitly name alternative tools or exclusions.

computegrid_release (A)

Release a provider reservation. Same gating as /book — 503 with documented missing-key error when the provider is not fully wired.

Parameters (JSON Schema)
provider (required)
booking_id (required)
Behavior: 3/5

The description discloses a specific error condition (503 missing-key) and ties its gating to /book. Without annotations, however, it lacks details on side effects (e.g., what happens on release), authentication needs, or idempotency.

Conciseness: 5/5

Extremely concise: two sentences, no fluff, every word adds value, and it is well front-loaded with the purpose.

Completeness: 2/5

Despite having only 2 parameters, the lack of parameter explanations and an output schema makes the description insufficient. The tool's behavior and return value are not fully described.

Parameters: 2/5

Schema description coverage is 0%, and the description does not explain the parameters ('provider' enum, 'booking_id'). The agent is left without guidance on their purpose or how to obtain them.

Purpose: 5/5

The description clearly states the action 'Release a provider reservation' with a specific verb and resource, distinguishing it from sibling tools like computegrid_book, which creates reservations.

Usage Guidelines: 3/5

'Same gating as /book' implies similar usage conditions, but the description does not explicitly state when to use this tool versus alternatives, nor does it provide prerequisites or exclusions.

computegrid_solve (A)

Run the cross-pool auction. Returns chosen provider, all collected quotes, Groth16-shaped selection proof, signed verification receipt, settlement path. Persists cost_usdc telemetry to the Hive ledger by default.

Parameters (JSON Schema)
job_id (optional)
ram_gb (optional)
persist (optional): Write auction telemetry to ledger (default true)
gpu_type (required): cpu | rtx_4090 | a100 | h100 | l40s | rtx_3090 | rtx_a6000
cpu_cores (optional)
budget_usd (optional)
deadline_unix (optional): Unix-second deadline; must be in the future
submitter_did (optional): DID of the submitting agent (e.g. 'did:hive:agent:foo')
duration_hours (required)
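
A dry-run sketch, assuming a connected `session`. The submitter DID is hypothetical; persist=False keeps the trial out of the ledger per the parameter table.

```python
import time

# Run the cross-pool auction without persisting telemetry.
async def run_auction(session):
    result = await session.call_tool("computegrid_solve", arguments={
        "gpu_type": "h100",
        "duration_hours": 1,
        "persist": False,                           # default is true (writes to ledger)
        "deadline_unix": int(time.time()) + 900,    # must be in the future
        "submitter_did": "did:hive:agent:example",  # hypothetical DID
    })
    # Expected per the description: chosen provider, all quotes, proof
    # envelope, signed receipt, settlement path.
    return result.content
```
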
Behavior: 4/5

With no annotations, the description carries the burden. It transparently mentions the default persistence of telemetry and lists all return components, though it omits potential side effects like cost deduction or authorization requirements.

Conciseness: 5/5

Three short sentences front-load the core action and outputs with no wasted words, making them easy to parse.

Completeness: 3/5

The description is detailed for a complex auction tool, but given no output schema or annotations it lacks parameter guidance and context on when to use required versus optional fields.

Parameters: 2/5

Schema coverage is only 44%, and the description adds no parameter-level details beyond the schema, failing to compensate for undocumented parameters like job_id or budget_usd.

Purpose: 5/5

The description clearly states that it runs a cross-pool auction and lists specific outputs (provider, quotes, proof, receipt, etc.), distinguishing it from sibling tools like computegrid_quote or computegrid_book.

Usage Guidelines: 3/5

The description implies usage as an auction solver but lacks explicit guidance on when to choose it over siblings, such as prerequisites or context like needing quotes first.

computegrid_status (A)

Poll a provider booking by booking_id. Returns 503 until /book is wired with a real provider key. The 503 body documents which env var is required.

Parameters (JSON Schema)
booking_id (required)
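
A bounded polling sketch that then releases the booking, assuming a connected `session`. The interval and retry count are this example's choices, not documented behavior; both calls return the documented 503 until /book is wired with a real provider key.

```python
import asyncio

async def wait_then_release(session, provider: str, booking_id: str):
    for _ in range(10):  # bounded poll; the 30 s interval is arbitrary
        status = await session.call_tool(
            "computegrid_status", arguments={"booking_id": booking_id}
        )
        if not status.isError:
            break  # booking reported successfully
        await asyncio.sleep(30)
    # Release mirrors /book's gating (503 until the provider key is set).
    return await session.call_tool(
        "computegrid_release",
        arguments={"provider": provider, "booking_id": booking_id},
    )
```
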
Behavior: 4/5

No annotations are provided, so the description carries the full burden. It clearly discloses that the endpoint returns 503 until a real provider key is set and that the body documents the required env var. It does not cover the normal success response, but the 503 disclosure is helpful.

Conciseness: 5/5

Three short sentences: the first states the exact purpose, the other two explain the known failure mode. No filler, front-loaded with essential information.

Completeness: 2/5

No output schema, no annotations, and 0% parameter coverage. The description omits the success response format, polling behavior (e.g., interval), and error handling beyond the 503. Incomplete for a status-checking tool.

Parameters: 2/5

Schema coverage is 0%; the description only says 'by booking_id', giving loose context and no extra meaning about format, constraints, or the source of the ID. It fails to compensate for the missing schema descriptions.

Purpose: 5/5

The description starts with the specific verb 'Poll' and the resource 'provider booking' by 'booking_id', clearly distinguishing it from sibling tools like computegrid_book or computegrid_release. No ambiguity.

Usage Guidelines: 3/5

The implied usage is polling a booking. The description mentions a condition (503 until /book is wired) but gives no explicit when-to-use or when-not-to-use guidance compared to alternatives, such as when to poll versus wait.

computegrid_verify_proof (A)

Submit a Groth16 proof envelope + public inputs to the Verification Fleet (4 agents). Validates structure, signs an EIP-191 receipt with the Evaluator wallet. $0.001/proof in USDC.

Parameters (JSON Schema)
claim (optional): Claim metadata (model_id, job_id, output_hash, ...)
proof (required): Groth16 proof envelope (dict {a,b,c} or 8-element list of BN254 field elements)
proof_system (optional): Proof system identifier (default 'groth16')
public_inputs (required): List of BN254 field elements (hex strings or ints)
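
A submission sketch under the same `session` assumption. The proof and claim values are placeholders, not a valid proof; the claim fields shown come from the parameter table's examples.

```python
# Submit a Groth16 proof envelope for verification ($0.001/proof in USDC).
async def submit_proof(session, proof: dict, public_inputs: list):
    return await session.call_tool("computegrid_verify_proof", arguments={
        "proof": proof,                  # dict {a, b, c} or 8-element BN254 list
        "public_inputs": public_inputs,  # hex strings or ints
        "proof_system": "groth16",       # default
        "claim": {"job_id": "job-123"},  # optional metadata; placeholder value
    })
```
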
Behavior: 3/5

No annotations exist, so the description must carry the burden. It discloses validation, EIP-191 receipt signing, and cost, but omits details like idempotency, rate limits, or failure behavior.

Conciseness: 5/5

Three compact sentences front-load the primary action and fleet, followed by key outcomes and cost. No wasted words.

Completeness: 2/5

Without an output schema, the description should explain return values or success/failure indicators, but it does not. The optional 'claim' parameter is also not contextualized. Important gaps remain.

Parameters: 3/5

The schema covers all parameters with descriptions (100% coverage). The description adds no extra meaning beyond summarizing the proof and inputs, so the baseline of 3 is appropriate.

Purpose: 5/5

The description clearly states the action: submit a Groth16 proof and public inputs for verification by a specific fleet. It distinguishes this from sibling tools that perform other operations like auditing or booking.

Usage Guidelines: 3/5

The description implies when to use the tool (to verify a proof) but does not state when not to use it or mention alternatives. There is no guidance on prerequisites or scenarios.

computegrid_verify_selection (A)

Independently verify a selection proof envelope (e.g. from a prior /solve response). Returns the verifier’s signed receipt. Lets external auditors confirm an auction selection without trusting the broker.

Parameters (JSON Schema)
proof (required): Groth16 envelope from /solve.proof_envelope
public_inputs (required): From /solve.proof_public_inputs
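
An auditing sketch chaining from a prior solve payload, assuming a connected `session` and a solve response already parsed into a dict. The proof_envelope and proof_public_inputs field names come from the parameter table.

```python
# Re-verify an auction selection without trusting the broker.
async def audit_selection(session, solve_payload: dict):
    return await session.call_tool("computegrid_verify_selection", arguments={
        "proof": solve_payload["proof_envelope"],
        "public_inputs": solve_payload["proof_public_inputs"],
    })
```
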
Behavior: 4/5

Without annotations, the description carries the full burden. It states that the tool returns the verifier's signed receipt, implying a read-only verification operation. It is transparent about the output but does not detail error conditions or prerequisites beyond the inputs.

Conciseness: 5/5

Extremely concise at three short sentences: the first immediately states the core action, the second the output, and the third the use case and trustless framing. No wasted words.

Completeness: 4/5

For a tool with two parameters and no output schema, the description covers the core action, inputs, output (signed receipt), and use case. Explaining the receipt's structure or the verification steps would make it more complete, but it is sufficient for an agent to understand the tool's role.

Parameters: 3/5

Schema coverage is 100% with brief descriptions for each parameter. The main description adds context by linking the inputs to a prior /solve response, reinforcing their origin, but provides no additional format or validation details beyond the schema.

Purpose: 5/5

The description clearly states that it independently verifies a selection proof envelope, specifically one from a prior /solve response, and returns a signed receipt. With a specific verb and resource, it distinguishes itself from the sibling tool computegrid_verify_proof by focusing on selection proofs.

Usage Guidelines: 4/5

The description explains when to use the tool: after receiving a /solve response, to verify the selection without trusting the broker. It implies a trustless auditing use case but does not state when not to use it or compare it to alternatives like computegrid_verify_proof.

hive_earn_leaderboard (A)

Top earning agents on the Hive Civilization, by attribution payout in USDC. Real Base USDC settlement. Calls GET https://hivemorph.onrender.com/v1/earn/leaderboard?window=. Returns "rails not yet live" gracefully if upstream is not yet deployed.

Parameters (JSON Schema)
window (optional): Time window. One of: "7d", "30d", "lifetime". Default "7d".
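
Because the endpoint is documented verbatim, it can also be queried directly over HTTP. The response shape is an assumption, and the upstream may answer "rails not yet live" before deployment.

```python
import requests

def fetch_leaderboard(window: str = "7d"):
    # window is one of "7d", "30d", "lifetime" (default "7d")
    resp = requests.get(
        "https://hivemorph.onrender.com/v1/earn/leaderboard",
        params={"window": window},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()  # assumed JSON body; may be a "rails not yet live" notice
```
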
Behavior: 3/5

With no annotations provided, the description carries the full burden. It discloses the HTTP method (GET), the endpoint URL, and that it returns a graceful message if the upstream is not deployed. It does not mention authentication requirements, rate limits, or data freshness, which limits transparency.

Conciseness: 5/5

A few short sentences front-load the core purpose and provide the endpoint and error behavior. Every sentence adds value, with no redundancy or filler.

Completeness: 3/5

For a simple list tool with a single optional parameter and no output schema, the description explains the purpose, endpoint, and error handling. It does not, however, describe the successful response format (e.g., a JSON array of agents), which would help given the missing output schema.

Parameters: 3/5

The input schema has 100% coverage, describing the window parameter with enum values and a default. The description only mentions the parameter in the endpoint URL without adding new semantics (e.g., what each window means), so the baseline of 3 is appropriate.

Purpose: 4/5

The description clearly states that the tool retrieves the top earning agents by attribution payout in USDC, specifies the endpoint, and provides context ('Real Base USDC settlement'). It differentiates itself from sibling tools by focusing on the leaderboard rather than personal or registration actions, but does not explicitly distinguish itself from similar list tools.

Usage Guidelines: 3/5

The description implies usage for viewing the overall earnings leaderboard but gives no explicit guidance on when to use it versus alternatives like hive_earn_me (personal earnings) or hive_earn_register (registration). No when-not-to-use conditions or alternative tool names are given.

hive_earn_me (B)

Look up the caller agent's registered earn profile, lifetime + pending USDC balance, last payout tx hash, and next-payout ETA. Real Base USDC, no mock data. Calls GET https://hivemorph.onrender.com/v1/earn/me?agent_did=. Returns "rails not yet live" gracefully if upstream is not yet deployed.

Parameters (JSON Schema)
agent_did (required): Agent DID to look up
Behavior: 2/5

No annotations are provided, so the description carries the full burden. It mentions graceful handling of the 'rails not yet live' case and that it uses real USDC, but does not disclose authentication needs, rate limits, or failure behavior (e.g., if agent_did does not match the caller). It lacks important behavioral context for a mutation-free lookup.

Conciseness: 4/5

A single short paragraph that front-loads the main purpose. It could be more structured (e.g., bullet points for the data fields), but contains no redundant information.

Completeness: 2/5

No output schema and no annotations. Missing details: authentication requirements, whether agent_did must match the caller, exact error codes, and what happens if the caller is not registered. It only partially covers the context needed for safe invocation.

Parameters: 3/5

Schema coverage is 100% with a clear parameter description. The tool description adds the endpoint URL and context, but does not add meaning beyond the schema's basic description.

Purpose: 5/5

The description clearly states that the tool looks up the caller's earn profile, listing specific data fields (lifetime + pending USDC balance, last payout tx hash, next-payout ETA). It distinguishes itself from siblings like hive_earn_register and hive_earn_leaderboard by focusing on the caller's own profile.

Usage Guidelines: 3/5

There is no explicit guidance on when to use this tool versus siblings (register, leaderboard). The description implies it is for the caller's own profile but does not state prerequisites (e.g., must be registered) or alternatives.

hive_earn_register (A)

Register an agent for the Hive Civilization attribution payout program. Settlement on real Base USDC. 5% kickback on attributed traffic, weekly payout. Calls POST https://hivemorph.onrender.com/v1/earn/register on behalf of the caller. Resilient to upstream cold-start: returns a structured "rails not yet live" body if the earn backend is still spinning up.

Parameters (JSON Schema)
agent_did (required): Caller agent DID (e.g. did:hive:0x… or did:web:…)
payout_address (required): Base L2 EVM address (0x…) to receive USDC kickback payouts
attribution_url (required): Public URL of the agent / page driving attributed traffic to Hive. Used for ranking + audit.
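
An earn-flow sketch, assuming a connected `session`. All argument values are placeholders; hive_earn_me is called afterwards to read back the profile that registration should create.

```python
EXAMPLE_DID = "did:hive:0x0000000000000000000000000000000000000000"  # placeholder

async def register_and_check(session):
    await session.call_tool("hive_earn_register", arguments={
        "agent_did": EXAMPLE_DID,
        "payout_address": "0x0000000000000000000000000000000000000000",  # Base L2 address
        "attribution_url": "https://example.com/my-agent",               # placeholder URL
    })
    # Returns lifetime + pending USDC, last payout tx hash, next-payout ETA.
    return await session.call_tool("hive_earn_me",
                                   arguments={"agent_did": EXAMPLE_DID})
```
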
Behavior: 4/5

Without annotations, the description discloses the POST call, resilience to cold-start, and business details (USDC settlement, kickback, weekly payout). This adds significant behavioral context beyond the input schema, though it omits authentication requirements and idempotency.

Conciseness: 5/5

Each sentence delivers distinct information (purpose, business terms, backend endpoint, error resilience). Information is front-loaded with no superfluous content.

Completeness: 4/5

Given no output schema or annotations, the description covers purpose, behavior, and a key error case, but lacks detail on the success response structure. Still reasonably complete for a registration tool.

Parameters: 3/5

Schema coverage is 100% with adequate descriptions; the tool description does not add meaningful new detail about the parameters beyond restating their roles, so the baseline of 3 is appropriate.

Purpose: 5/5

The description explicitly states the verb 'register' and the resource 'agent for the Hive Civilization attribution payout program', including settlement details and the backend endpoint. This distinguishes it from sibling earn tools like leaderboard or me.

Usage Guidelines: 4/5

The description clearly implies use for registration but does not explicitly state when to use or avoid this tool versus alternatives (e.g., if already registered or for other earn actions). The purpose is specific enough, however, that an agent can infer the correct context.
