Glama

Server Details

MCP server: cross-venue spread + yield drivers, ROI radar telemetry · Hive Civilization

Status: Healthy
Transport: Streamable HTTP
Repository: srotzin/hive-mcp-trade
GitHub Stars: 0

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.


Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: A

Average 4/5 across 8 of 8 tools scored.

Server Coherence: A
Disambiguation: 3/5

Tools are split into two distinct domains (earn and trade). Within each domain, purposes are clear, but the duality could confuse an agent about the server's primary function.

Naming Consistency: 3/5

Two different prefixes are used (hive_earn_ vs trade_), indicating a lack of unified naming. Within each group, naming is consistent and descriptive.

Tool Count: 5/5

With 8 tools, the count is well-scoped for the combined functionalities. Each tool serves a clear purpose without redundancy.

Completeness: 3/5

The earn domain covers leaderboard, profile, and registration but lacks account updates or unregistration. The trade domain supports invoice creation, retrieval, disputes, and fees but misses invoice updates or cancellations.

Available Tools

8 tools
hive_earn_leaderboard: A

Top earning agents on the Hive Civilization, by attribution payout in USDC. Real Base USDC settlement. Calls GET https://hivemorph.onrender.com/v1/earn/leaderboard?window=. Returns "rails not yet live" gracefully if upstream is not yet deployed.

Parameters (JSON Schema)
- window (optional): Time window. One of: "7d", "30d", "lifetime". Default "7d".
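The description pins down the exact call. As a minimal sketch (stdlib only; the endpoint URL and allowed window values come from the description above, while the client-side validation is an assumption, not documented server behavior), the request URL could be built like this:

```python
from urllib.parse import urlencode

# Allowed values per the tool's parameter description.
VALID_WINDOWS = {"7d", "30d", "lifetime"}

def leaderboard_url(window: str = "7d") -> str:
    """Build the GET URL for the earn leaderboard endpoint named in the description."""
    if window not in VALID_WINDOWS:
        raise ValueError(f"window must be one of {sorted(VALID_WINDOWS)}, got {window!r}")
    return "https://hivemorph.onrender.com/v1/earn/leaderboard?" + urlencode({"window": window})

print(leaderboard_url())  # window defaults to "7d", matching the schema default
```

Rejecting an unknown window locally spares the agent a round trip to an upstream that may still be cold-starting.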
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so the description carries full burden. It discloses the HTTP method (GET), endpoint URL, and graceful error handling ('rails not yet live'). Missing details on authentication or rate limits, but overall transparent about core behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences: first states purpose, second gives technical details (endpoint and error behavior). No wasted words, front-loaded with key information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple endpoint with one optional parameter, the description covers purpose, technical call, and error handling. It lacks notes on authentication or pagination, but these are not critical for basic usage. Output is not described (no output schema), but the return type ('top earning agents ... in USDC') is implied.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% (the enum parameter has a description). The description adds context by showing the URL pattern and mentioning the default. It adds minimal extra meaning beyond the schema, but the combination is effective.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool returns 'Top earning agents on the Hive Civilization, by attribution payout in USDC', which is a specific verb+resource. It distinguishes from siblings like hive_earn_me (personal earnings) and hive_earn_register (registration).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies the tool is for viewing the leaderboard, not personal earnings or registration. However, it does not explicitly state when to use this tool versus alternatives, though the context and the sibling tool names provide differentiation.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

hive_earn_me: A

Look up the caller agent's registered earn profile, lifetime + pending USDC balance, last payout tx hash, and next-payout ETA. Real Base USDC, no mock data. Calls GET https://hivemorph.onrender.com/v1/earn/me?agent_did=. Returns "rails not yet live" gracefully if upstream is not yet deployed.

Parameters (JSON Schema)
- agent_did (required): Agent DID to look up.
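Because DIDs contain colons, the required agent_did must be percent-encoded when placed in the query string. A brief sketch (stdlib only; endpoint from the description, validation assumed):

```python
from urllib.parse import urlencode

def earn_me_url(agent_did: str) -> str:
    """Build the GET URL for the caller's earn profile; agent_did is required."""
    if not agent_did:
        raise ValueError("agent_did is required")
    # urlencode percent-encodes the colons in the DID (":" -> "%3A").
    return "https://hivemorph.onrender.com/v1/earn/me?" + urlencode({"agent_did": agent_did})

print(earn_me_url("did:hive:0xabc"))
# -> https://hivemorph.onrender.com/v1/earn/me?agent_did=did%3Ahive%3A0xabc
```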
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden. It discloses that data is real Base USDC (no mock), uses a specific GET endpoint, and handles missing upstream gracefully. It does not explicitly state read-only or authentication needs, but adds significant context beyond the schema.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences with no redundancy: the first presents the primary function, the second emphasizes authenticity, the third details the API call and error handling. Every sentence earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given a single parameter and no output schema, the description fully covers input purpose, output fields, and error behavior. It is complete for a simple lookup tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with a single required parameter already described. The description reiterates 'agent_did' as the caller's DID but does not add new semantic value beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('look up'), the resource ('earn profile'), and the exact data returned (lifetime + pending USDC balance, last payout tx hash, next-payout ETA). It distinguishes itself from sibling tools like hive_earn_register and hive_earn_leaderboard by focusing on profile lookup.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies use for checking the caller's own earn profile. It does not explicitly list when not to use or name alternatives, but the sibling tools cover registration and leaderboard, making context clear.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

hive_earn_register: A

Register an agent for the Hive Civilization attribution payout program. Settlement on real Base USDC. 5% kickback on attributed traffic, weekly payout. Calls POST https://hivemorph.onrender.com/v1/earn/register on behalf of the caller. Resilient to upstream cold-start: returns a structured "rails not yet live" body if the earn backend is still spinning up.

Parameters (JSON Schema)
- agent_did (required): Caller agent DID (e.g. did:hive:0x… or did:web:…).
- payout_address (required): Base L2 EVM address (0x…) to receive USDC kickback payouts.
- attribution_url (required): Public URL of the agent / page driving attributed traffic to Hive. Used for ranking + audit.
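All three fields are required and have recognizable shapes (a DID, an EVM address, a URL). A sketch of building the POST body (field names from the schema; the validation rules are assumptions about well-formedness, not documented server checks):

```python
import json
import re

# 20-byte hex address, as expected for a Base L2 EVM payout address (assumption).
EVM_ADDRESS = re.compile(r"^0x[0-9a-fA-F]{40}$")

def register_payload(agent_did: str, payout_address: str, attribution_url: str) -> str:
    """Validate the three required fields and serialize the POST body as JSON."""
    if not agent_did.startswith("did:"):
        raise ValueError("agent_did must be a DID, e.g. did:hive:0x... or did:web:...")
    if not EVM_ADDRESS.match(payout_address):
        raise ValueError("payout_address must be a 0x-prefixed 40-hex-char EVM address")
    if not attribution_url.startswith(("http://", "https://")):
        raise ValueError("attribution_url must be a public http(s) URL")
    return json.dumps({
        "agent_did": agent_did,
        "payout_address": payout_address,
        "attribution_url": attribution_url,
    })
```

Catching a malformed payout address before the POST matters here: a registration is a side-effecting call, so a bad address is cheaper to reject locally than to unwind after settlement.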
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries full behavioral transparency. It discloses the exact endpoint called (POST https://hivemorph.onrender.com/v1/earn/register), that it acts on the caller's behalf, and how it handles upstream cold-start by returning a structured error body. This exceeds expectations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is compact with no wasted words. It front-loads the primary purpose and includes essential details (URL, kickback percentage, error handling) in a structured manner, earning its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (3 parameters with full schema descriptions, no output schema), the description provides sufficient context: registration purpose, program details, endpoint, and cold-start behavior. Missing success response format, but the described failure mode adds value.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, and the description does not add any extra semantics beyond the schema's parameter descriptions. The baseline of 3 is appropriate as the description adds no additional value for parameter understanding.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description states 'Register an agent for the Hive Civilization attribution payout program' with a specific verb and resource. It clearly distinguishes from sibling tools like hive_earn_leaderboard and trade_* operations, which are unrelated.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides context on the program (settlement on USDC, kickback, weekly payout) and cold-start resilience. It lacks explicit when-to-use or when-not-to-use guidance relative to alternatives, but the purpose is clear enough for an agent to infer appropriate usage.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

trade_create_invoice: A

Create a cross-border invoice. Buyer (SMB) and supplier exchange DIDs; invoice amount + currency + chain selected at creation. Fee auto-computed from tier.

Parameters (JSON Schema)
- memo (optional): Free-form invoice memo (PO number, goods description).
- buyer_did (required): DID of the SMB buyer.
- amount_usd (required): Invoice amount in USD (max $250,000 for MVP).
- settle_chain (required): base, ethereum, or solana.
- supplier_did (required): DID of the overseas supplier.
- settle_currency (required): USDT or USDC.
- supplier_payout_address (required): Supplier's receiving address on the chosen chain.
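With seven parameters, three of them constrained (the amount cap and the two enums come from the schema above), client-side validation pays off before a side-effecting create. A sketch (field names from the schema; the checks themselves and the omit-optional-field convention are assumptions):

```python
import json

SETTLE_CHAINS = {"base", "ethereum", "solana"}
SETTLE_CURRENCIES = {"USDT", "USDC"}
MAX_AMOUNT_USD = 250_000  # MVP cap from the parameter description

def create_invoice_payload(buyer_did, supplier_did, amount_usd, settle_chain,
                           settle_currency, supplier_payout_address, memo=None):
    """Validate inputs client-side and serialize the invoice-creation body."""
    if not 0 < amount_usd <= MAX_AMOUNT_USD:
        raise ValueError(f"amount_usd must be in (0, {MAX_AMOUNT_USD}]")
    if settle_chain not in SETTLE_CHAINS:
        raise ValueError(f"settle_chain must be one of {sorted(SETTLE_CHAINS)}")
    if settle_currency not in SETTLE_CURRENCIES:
        raise ValueError(f"settle_currency must be one of {sorted(SETTLE_CURRENCIES)}")
    body = {
        "buyer_did": buyer_did,
        "supplier_did": supplier_did,
        "amount_usd": amount_usd,
        "settle_chain": settle_chain,
        "settle_currency": settle_currency,
        "supplier_payout_address": supplier_payout_address,
    }
    if memo is not None:  # optional field is omitted entirely, not sent as null
        body["memo"] = memo
    return json.dumps(body)
```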
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Without annotations, the description partially covers behavior: mentions fee auto-computation from tier. However, it omits side effects, required preconditions, success/failure responses, and potential limitations beyond amount cap.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two concise sentences covering the essential purpose and mechanics. No verbose or redundant phrases.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity (7 params, no output schema, no annotations), the description is too brief. It lacks info on post-creation outcome, prerequisites (e.g., registration, DID verification), and any error states.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so baseline is 3. The description adds context about the process (DID exchange) but largely restates schema information. Value added is marginal.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states 'Create a cross-border invoice' and specifies the key elements: buyer/supplier DIDs, amount, currency, chain, and fee computation. It distinguishes from sibling tools like trade_get_invoice and trade_dispute_invoice.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for cross-border invoicing but does not explicitly state when to use this tool versus alternatives, nor does it provide preconditions or when-not scenarios.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

trade_dispute_invoice: A

Open a dispute on an invoice. Routes to HiveLaw arbitration if buyer and supplier cannot resolve. Settlement is held in escrow until resolution.

Parameters (JSON Schema)
- reason (required): Dispute reason.
- invoice_id (required): Invoice ID.
- claimant_did (required): DID of the disputing party (buyer or supplier).
- evidence_url (optional): Optional URL to supporting documents.
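A brief sketch of the dispute body (field names from the schema; the non-empty-reason check and the omission of the optional field are assumptions):

```python
import json

def dispute_payload(invoice_id, claimant_did, reason, evidence_url=None):
    """Serialize the dispute-opening body; evidence_url is omitted when not given."""
    if not reason.strip():
        raise ValueError("reason must be non-empty")
    body = {"invoice_id": invoice_id, "claimant_did": claimant_did, "reason": reason}
    if evidence_url is not None:
        body["evidence_url"] = evidence_url
    return json.dumps(body)
```

Since opening a dispute moves the settlement into escrow, an agent should confirm the invoice exists (via trade_get_invoice) before calling this.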
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are present, so the description carries the full burden. It discloses escrow holding and arbitration routing, which are key behaviors. However, it omits side effects like invoice status changes, reversibility, or authorization needs, leaving notable gaps for a dispute tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two sentences, front-loaded with the core action, and every sentence provides essential context. No redundancy or filler.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema and no annotations, the description explains the dispute workflow but does not specify return values (e.g., dispute ID), prerequisites (e.g., invoice must exist), or post-conditions. It is adequate but not fully complete for agent decision-making.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so the baseline is 3. The description adds no extra meaning beyond what the schema provides; it does not clarify parameter formats, constraints, or usage examples.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's action: 'Open a dispute on an invoice.' It specifies the verb (open) and resource (invoice dispute), and the context of arbitration and escrow helps distinguish it from sibling tools like trade_create_invoice or trade_get_invoice, though it could explicitly contrast with them.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description mentions routing to arbitration if unresolved, but gives no explicit when-to-use or when-not-to-use guidance. No alternatives or prerequisites are listed, leaving the agent to infer usage context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

trade_get_fees: A

Get the cross-border invoice fee schedule and SWIFT wire comparison (Hive vs. typical SMB wire fees, FX spreads, settlement times).

Parameters (JSON Schema)

No parameters

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description implies a read-only query by listing outputs, but does not explicitly state it has no side effects. Without annotations, a direct 'Read-only' or 'No modifications' statement would be more transparent.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence, front-loaded with the primary action, followed by specifics. No redundant or verbose phrasing.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The description enumerates all returned information types (fee schedule, wire comparison, FX spreads, settlement times) despite lacking an output schema. For a simple read with no inputs, this is sufficiently complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With zero parameters and 100% schema coverage, the description adds value by listing the specific content (fee schedule, wire comparison) without needing parameter details. Baseline for 0 params is 4.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it retrieves fee schedules and wire comparisons, using specific nouns like 'cross-border invoice fee schedule' and 'SWIFT wire comparison'. This distinguishes it from sibling tools that handle invoice creation/disputes or earnings.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when to use this tool versus alternatives, but the unique subject matter (fees vs. invoices/earnings) makes its context clear. A brief note about use cases would improve this dimension.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

trade_get_invoice: A

Retrieve invoice status, settlement transaction hash, and dispute history.

Parameters (JSON Schema)
- invoice_id (required): Invoice ID returned from trade_create_invoice.
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description discloses that the tool retrieves three specific pieces of data, indicating a read-only operation. However, with no annotations, it does not mention error handling, authentication needs, or what happens if the invoice_id is invalid.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence, front-loaded with the core action. No unnecessary words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the simplicity of the tool and no output schema, the description adequately lists the three return values. It could mention possible error responses, but overall is complete for a basic retrieval.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, and the schema already describes the invoice_id parameter as 'Invoice ID returned from trade_create_invoice'. The description adds no additional meaning beyond this.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action (retrieve) and the specific resources (invoice status, settlement transaction hash, dispute history). It distinguishes from siblings like trade_create_invoice and trade_dispute_invoice.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when to use this tool versus alternatives. It is implied that it should be used after creating an invoice, but no prerequisites or exclusions are provided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

trade_get_listing: A

Get the public listing metadata (target user, fee schedule, settlement currencies/chains, cumulative volume, council origin).

Parameters (JSON Schema)

No parameters

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. It indicates the tool returns public data, implying no side effects. However, it doesn't disclose potential rate limits, size of response, or whether it requires any permissions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single, front-loaded sentence that effectively communicates the tool's purpose. No unnecessary words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, the description lists what is returned, which is helpful. However, it does not clarify if the listing is for a specific user or global, or how to interpret 'cumulative volume' or 'council origin'. Slightly more detail would improve completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema has zero parameters with 100% description coverage. The description adds value by enumerating the contents of the listing metadata, which is not present in the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states the action (Get) and the resource (public listing metadata), listing specific fields returned (target user, fee schedule, etc.). Distinct from siblings like trade_create_invoice or hive_earn_me.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when to use this tool vs. alternatives. However, as a simple getter for public listing data, its use case is self-evident. Could mention it's read-only or that it doesn't require authentication.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
