Hive Mining
Server Details
Tether MOS telemetry + Boltz BTC↔USDC atomic swap broker MCP
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
- Repository: srotzin/hive-mcp-mining
- GitHub Stars: 0
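Since the listing above does not publish the server URL, the endpoint in the sketch below is a placeholder. A minimal TypeScript client using the official @modelcontextprotocol/sdk over Streamable HTTP, assuming no authentication is required:

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

// Placeholder endpoint: the listing does not publish the server URL.
const SERVER_URL = new URL("https://example.invalid/mcp");

const client = new Client({ name: "hive-mining-demo", version: "0.1.0" });
await client.connect(new StreamableHTTPClientTransport(SERVER_URL));

// Confirm the connection by listing the five advertised tools.
const { tools } = await client.listTools();
console.log(tools.map((t) => t.name));
```

The per-tool sketches further down reuse this `client` instance.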
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.6/5 across 5 of 5 tools scored. Lowest: 2.3/5.
Tools are mostly distinct. bitcoin_fee_advice and next_block_forecast both deal with fees (one covers the general fee landscape, the other forecasts the next block), which could cause confusion; the remaining tools (payouts, sites, swap) are clearly separate.
Naming is inconsistent: bitcoin_fee_advice, next_block_forecast, and payout_btc_to_usdc use snake_case, while mos.query_payouts and mos.query_sites use dot notation. There is no uniform verb or pattern.
Five tools is well-scoped for a mining-focused server: each serves a distinct purpose, and the set is neither too sparse nor too sprawling.
The toolset covers core operations: fee advice, payouts, sites, forecasting, and a swap. A minor gap is the lack of a tool to initiate a mining operation or manage settings, but the surface is adequate for the advertised domain.
Available Tools
5 tools
bitcoin_fee_advice (Grade: C)
Bortlesboat-attributed Bitcoin mempool fee landscape + nextblock candidate pool. Tier 2 — $0.02 USDC. Real Base USDC settlement of the upstream cdp-bitcoinsapi-com calls — every response carries a consume_tx hash for audit.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
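Since the tool takes no parameters, an invocation is just the tool name with an empty arguments object. A minimal sketch, reusing the client from the connection example above; because the tool publishes no output schema, the response is treated as opaque text:

```typescript
// bitcoin_fee_advice takes no arguments, so pass an empty object.
const advice = await client.callTool({ name: "bitcoin_fee_advice", arguments: {} });

// No output schema is published; print whatever text blocks come back.
// Per the description, the response should carry a consume_tx hash for audit.
for (const block of advice.content ?? []) {
  if (block.type === "text") console.log(block.text);
}
```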
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description mentions a cost of $0.02 USDC and a 'consume_tx hash for audit', but it fails to disclose whether the operation is read-only, destructive, or other safety aspects. With no annotations provided, this is insufficient for an agent to infer behavioral traits.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is relatively short but includes unnecessary jargon and extraneous details about payment settlement. It front-loads the core function but could be more streamlined and focused.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema and no annotations, the description should explain return values comprehensively. It mentions 'fee landscape' and 'candidate pool' but does not specify structure or format, leaving the tool's output ambiguous.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has no parameters, and schema coverage is 100%. The description adds some meaning by describing the output (fee landscape and candidate pool), but it does not explain the significance of the lack of parameters or how the tool uses context. Baseline 3 is appropriate as the description adds minimal value beyond the empty schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description mentions 'Bitcoin mempool fee landscape + nextblock candidate pool', but it is muddled with cryptic terms like 'Bortlesboat-attributed' and details about USDC settlement. The core purpose is unclear, especially as it does not distinguish from similar sibling tools like 'next_block_forecast'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives such as 'next_block_forecast' or other mining-related tools. There is no mention of prerequisites, context, or when not to use it.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
mos.query_payouts (Grade: A)
Get an operator's pending USDC payout balance from the Hive earn rails ledger. Read-only. Tier 1 — $0.001 USDC. Returns pending_usdc, settle_chain=base, settle_asset=USDC, payout_threshold_usdc. Partner: Tether MOS. Backend: GET /v1/mining/orchestrate/payouts.
| Name | Required | Description | Default |
|---|---|---|---|
| operator_did | Yes | Operator DID registered with Hive earn rails (e.g. did:hive:miner-us-east-1 or did:key:z6MkhaXgBZDvotDkL5257faiztiGiC2QtKLGpbnnEGta2doK). Must match the DID used when registering with hive_onboard or orchestrate/operator/register. | |
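A minimal sketch of a mos.query_payouts call, using the example DID from the parameter table and the same client as above. The JSON-parsing step is an assumption: the listing confirms the returned field names (pending_usdc, payout_threshold_usdc) but not that they arrive as a JSON text block.

```typescript
const payouts = await client.callTool({
  name: "mos.query_payouts",
  // Example DID from the parameter table above.
  arguments: { operator_did: "did:hive:miner-us-east-1" },
});

// Assumption: the result arrives as a single JSON-encoded text block.
const first = payouts.content?.[0];
if (first?.type === "text") {
  const { pending_usdc, payout_threshold_usdc } = JSON.parse(first.text);
  console.log(`pending ${pending_usdc} USDC (payout threshold ${payout_threshold_usdc})`);
}
```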
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It states the tool returns specific fields (pending_usdc, settle_chain, settle_asset, payout_threshold_usdc) and mentions the backend endpoint. However, it does not disclose authentication needs, rate limits, or error handling behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a few short sentences providing key information (purpose, tier, return fields, backend) without extraneous text. It is front-loaded with the main action and context.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple query tool with one parameter and no output schema, the description adequately covers purpose, return fields, and implementation detail. It could be improved by noting common error scenarios or usage within a workflow, but overall it is sufficient.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% for the only parameter 'operator_did', and the description repeats the schema description without adding additional meaning or usage nuance. Baseline 3 is appropriate since the schema already fully defines the parameter.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Get' and the resource 'an operator's pending USDC payout balance' from the Hive earn rails ledger. It distinguishes itself from siblings like 'payout_btc_to_usdc' by specifying the query purpose and return fields.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for checking pending USDC balance with a tier and amount ($0.001 USDC), but does not explicitly state when to use this tool versus alternatives, nor when not to use it. The usage context is implied through the sibling list.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
mos.query_sites (Grade: A)
Query an operator's registered Tether MOS sites and latest site telemetry via the Hive orchestration layer. Read-only. Tier 1 — $0.001 USDC. Returns site list, worker count, and telemetry snapshots. Partner: Tether MOS. Backend: GET /v1/mining/orchestrate/sites.
| Name | Required | Description | Default |
|---|---|---|---|
| operator_did | Yes | Operator DID registered with Hive MOS (e.g. did:hive:miner-us-east-1 or did:key:z6MkhaXgBZDvotDkL5257faiztiGiC2QtKLGpbnnEGta2doK). Obtain via your MOS fleet dashboard. | |
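The sites query follows the same shape as the payouts query. The sketch below assumes the same operator DID and simply prints the telemetry payload, since its format is unspecified:

```typescript
const sites = await client.callTool({
  name: "mos.query_sites",
  arguments: { operator_did: "did:hive:miner-us-east-1" }, // DID registered with Hive MOS
});

// Returns a site list, worker count, and telemetry snapshots in an unspecified format.
const body = sites.content?.[0];
if (body?.type === "text") console.log(body.text);
```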
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Explicitly states 'Read-only' indicating no side effects, and mentions cost tier ($0.001 USDC). No annotations provided, so description covers basic behavioral safety and cost. Missing details on error handling or auth requirements beyond operator_did parameter.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
A few short sentences carrying essential information: purpose, read-only flag, cost, return summary, and backend. No redundancy; front-loaded with key details.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple 1-param tool with no output schema, description covers purpose, behavior, cost, and return summary. Missing details on edge cases like invalid operator_did, but overall adequate.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100% with a detailed parameter description including example and source. Description adds return context but no extra parameter semantics. Baseline score of 3 applies as schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states the tool queries registered Tether MOS sites and telemetry via Hive orchestration. It specifies verb 'Query', resource 'operator's registered Tether MOS sites and latest site telemetry', and context. Differentiates from sibling 'mos.query_payouts' which handles payouts.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use vs alternatives. Purpose is implied by name and return summary (sites vs payouts), but no direct exclusions or when-not-to-use advice.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
next_block_forecast (Grade: A)
Combines Hive telemetry with Bortlesboat /ai/fees/advice to forecast the next-block fee window. Tier 2 — $0.03 USDC. Useful for fee-sensitive BTC tx submitters timing their submissions.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
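As a sketch of the advertised fee-timing use case, the snippet below calls the forecast and compares it to a fee budget, reusing the same client. The field name fee_rate_sat_vb is hypothetical, since the tool publishes no output schema:

```typescript
// Zero-parameter call, like bitcoin_fee_advice.
const forecast = await client.callTool({ name: "next_block_forecast", arguments: {} });

const payload = forecast.content?.[0];
if (payload?.type === "text") {
  // Hypothetical field name; the tool publishes no output schema.
  const { fee_rate_sat_vb } = JSON.parse(payload.text);
  const maxFeeSatVb = 20; // this submitter's fee budget in sat/vB
  console.log(fee_rate_sat_vb <= maxFeeSatVb ? "submit now" : "wait for a cheaper block");
}
```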
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, and description only mentions cost ($0.03 USDC) and combination of sources. Does not disclose rate limits, auth needs, or return format.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three concise sentences, front-loaded with action and context. Every sentence adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
With no output schema and no annotations, description covers purpose and cost but lacks details on output format or example forecast.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
No parameters to describe; schema coverage is 100%. Description adds no parameter info, but baseline is 4 due to zero parameters.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool forecasts the next-block fee window by combining Hive telemetry with external fee advice, and gives a specific use case. It distinguishes itself from sibling tools like bitcoin_fee_advice.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly mentions usefulness for fee-sensitive submitters timing transactions, providing clear context. Does not mention when not to use or alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
payout_btc_to_usdc (Grade: A)
Boltz reverse swap: deposit BTC at the returned HTLC address, recipient gets USDC on Base. Atomic — Hive never custodies BTC. Partner: Boltz. 40bps Hive spread fee taken from the swap output. Returns 503 with rail_inactive when HIVE_BOLTZ_ENABLED=0.
| Name | Required | Description | Default |
|---|---|---|---|
| btc_amount_sats | Yes | | |
| recipient_usdc_address | Yes | 0x… EVM address that receives the swap output | |
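A sketch of the swap call with placeholder values, again reusing the client from above. The error handling assumes the documented 503 rail_inactive response surfaces as an isError tool result, which the listing does not confirm:

```typescript
const swap = await client.callTool({
  name: "payout_btc_to_usdc",
  arguments: {
    btc_amount_sats: 250_000, // placeholder amount; must be positive per the schema
    recipient_usdc_address: "0x0000000000000000000000000000000000000000", // placeholder 0x… address
  },
});

// Assumption: 503 rail_inactive (HIVE_BOLTZ_ENABLED=0) surfaces as an error result.
if (swap.isError) {
  console.error("swap rail unavailable:", swap.content?.[0]);
} else {
  // Per the description, the response includes an HTLC address to deposit BTC at.
  const body = swap.content?.[0];
  if (body?.type === "text") console.log("HTLC deposit details:", body.text);
}
```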
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description discloses the swap flow, fee (40bps), and error return (503). It implies a write action (deposit) but does not detail response format or idempotency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Several succinct sentences, front-loaded with the core purpose, followed by the fee and an error condition. No superfluous text.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given limited parameters and no output schema, the description covers the swap mechanism, fee, and error. Could specify that the tool returns an HTLC address, but overall adequate.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 50%: recipient_usdc_address has a schema description, but btc_amount_sats does not. The tool description adds nothing for btc_amount_sats beyond the schema's type and exclusiveMinimum, and does not clarify units or format beyond what the schema implies.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it is a 'Boltz reverse swap' and explains that depositing BTC yields USDC on Base. It distinguishes itself from the sibling tools, which cover mining queries and Bitcoin fee advice.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explains the atomic swap and provides conditions (fee, error when disabled). It does not explicitly mention when to use vs alternatives, but no sibling tool is similar.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Claiming this connector lets you:
- Control your server's listing on Glama, including description and metadata
- Access analytics and receive server usage reports
- Get monitoring and health status updates for your server
- Feature your server to boost visibility and reach more users
For users:
- Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management – store and rotate API keys and OAuth tokens in one place
- Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
- Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.