Hive Swap
Server Details
Cross-chain token swaps for autonomous agents on Base L2 and partner rails
- Status
- Healthy
- Last Tested
- Transport
- Streamable HTTP
- URL
- Repository
- srotzin/hive-mcp-swap
- GitHub Stars
- 0
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4.1/5 across 8 of 8 tools scored.
The tools are clearly divided into two distinct groups: hive_earn_* for the earn program and swap_route.* for swap operations. Within each group, each tool has a unique purpose with no overlap in functionality.
Tools follow a prefix convention (hive_earn_ and swap_route.) that groups them logically. However, the hive_earn group mixes nouns (leaderboard, me) with a verb (register), while the swap tools consistently use verbs (quote, execute). A minor inconsistency, but still clear.
With 8 tools covering both earn registration and swap operations, the server is well-scoped. Each tool serves a distinct function without unnecessary bloat or missing critical features.
The earn group covers registration, balance lookup, and leaderboard, though unregister is missing. The swap group covers listing, quoting, executing, adding liquidity, and pool stats, but lacks remove liquidity or swap history. Core workflows are present.
Available Tools
5 tools

hive_earn_leaderboard
Top earning agents on the Hive Civilization, by attribution payout in USDC. Real Base USDC settlement. Calls GET https://hivemorph.onrender.com/v1/earn/leaderboard?window=. Returns "rails not yet live" gracefully if upstream is not yet deployed.
| Name | Required | Description | Default |
|---|---|---|---|
| window | No | Time window. One of: "7d", "30d", "lifetime". | "7d" |
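The leaderboard call reduces to a single GET with one optional query parameter. A minimal sketch in Python that builds the request URL, assuming only the endpoint and enum values stated above (no particular HTTP client is implied):

```python
from urllib.parse import urlencode

BASE = "https://hivemorph.onrender.com/v1/earn/leaderboard"
VALID_WINDOWS = {"7d", "30d", "lifetime"}

def leaderboard_url(window: str = "7d") -> str:
    """Build the leaderboard request URL; window defaults to "7d"."""
    if window not in VALID_WINDOWS:
        raise ValueError(f"window must be one of {sorted(VALID_WINDOWS)}")
    return f"{BASE}?{urlencode({'window': window})}"

print(leaderboard_url())  # https://hivemorph.onrender.com/v1/earn/leaderboard?window=7d
print(leaderboard_url("30d"))
```

Validating the enum client-side avoids a round trip when an agent passes an unsupported window.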
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description bears the full burden. It discloses graceful error handling ('rails not yet live') but does not mention authentication, rate limits, data freshness, or whether the tool requires specific permissions. The API endpoint and error behavior are noted.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is brief (two sentences) and front-loads the main purpose. Every sentence adds value: the first defines the tool, the second provides the API call, the third notes graceful failure. No unnecessary details.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with one optional parameter and no output schema, the description is nearly complete. It explains the resource (top earning agents), metric (attribution payout in USDC), and error behavior. Minor missing context: no mention of result format (e.g., list of objects) or pagination, but not critical given simplicity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so the baseline is 3. The description adds no additional meaning beyond the schema's description of the 'window' parameter, which already includes enum values and default. The description simply restates the endpoint with the parameter but no new semantic context.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool returns the top earning agents on the Hive Civilization by attribution payout in USDC, distinguishing it from siblings like hive_earn_me (likely user-specific) and hive_earn_register (registration). The verb 'earning' and resource 'leaderboard' are specific.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implicitly suggests using this tool to retrieve the leaderboard of top earners, but it lacks explicit guidance on when to use it versus alternatives or when not to use it. No exclusions or alternative tool mentions are provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
hive_earn_me
Look up the caller agent's registered earn profile, lifetime + pending USDC balance, last payout tx hash, and next-payout ETA. Real Base USDC, no mock data. Calls GET https://hivemorph.onrender.com/v1/earn/me?agent_did=. Returns "rails not yet live" gracefully if upstream is not yet deployed.
| Name | Required | Description | Default |
|---|---|---|---|
| agent_did | Yes | Agent DID to look up. | |
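One detail worth noting: DIDs contain colons, so the agent_did value must be percent-encoded in the query string. A sketch using the endpoint stated above (the DID value is hypothetical):

```python
from urllib.parse import urlencode

BASE = "https://hivemorph.onrender.com/v1/earn/me"

def earn_me_url(agent_did: str) -> str:
    """Build the profile-lookup URL; ':' in the DID is percent-encoded."""
    if not agent_did:
        raise ValueError("agent_did is required")
    return f"{BASE}?{urlencode({'agent_did': agent_did})}"

# hypothetical DID for illustration
print(earn_me_url("did:hive:0xabc123"))
```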
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. It discloses the HTTP method (GET), endpoint URL, that it uses real Base USDC, and graceful error handling for deferred upstream. However, it does not explicitly state read-only or idempotent nature, nor auth requirements.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two concise sentences that front-load the tool's purpose and key returns, followed by context on data authenticity and error handling. No redundant information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple read tool with one parameter and no output schema, the description covers key return fields, behavior, and error case. It lacks explicit details on output format but naming fields compensates. Missing subtle context like authentication or session requirements.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with a clear description of agent_did as required. The description adds that the parameter is used in the URL query string and is for the caller agent, but does not provide additional semantic depth beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool looks up the caller agent's earn profile and lists specific data returned (lifetime + pending USDC balance, last payout tx hash, next-payout ETA). It also mentions the endpoint and distinguishes from sibling tools (leaderboard, register) by focusing on self-profile retrieval.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies this is for the caller's own profile but does not explicitly state when to use versus alternatives. No exclusion criteria or prerequisites (e.g., must be registered) are mentioned, leaving some ambiguity.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
hive_earn_register
Register an agent for the Hive Civilization attribution payout program. Settlement on real Base USDC. 5% kickback on attributed traffic, weekly payout. Calls POST https://hivemorph.onrender.com/v1/earn/register on behalf of the caller. Resilient to upstream cold-start: returns a structured "rails not yet live" body if the earn backend is still spinning up.
| Name | Required | Description | Default |
|---|---|---|---|
| agent_did | Yes | Caller agent DID (e.g. did:hive:0x… or did:web:…). Required. | |
| payout_address | Yes | Base L2 EVM address (0x…) to receive USDC kickback payouts. | |
| attribution_url | Yes | Public URL of the agent / page driving attributed traffic to Hive. Used for ranking + audit. | |
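Registration is a POST with all three parameters in the body. A minimal sketch of assembling that body, assuming the JSON field names mirror the parameter names above (the server may expect a different shape; the values shown are placeholders):

```python
import json

REGISTER_ENDPOINT = "https://hivemorph.onrender.com/v1/earn/register"

def register_body(agent_did: str, payout_address: str, attribution_url: str) -> str:
    """Assemble the JSON body for POST /v1/earn/register; all three fields are required."""
    if not payout_address.startswith("0x"):
        raise ValueError("payout_address must be a Base L2 EVM address (0x…)")
    return json.dumps({
        "agent_did": agent_did,
        "payout_address": payout_address,
        "attribution_url": attribution_url,
    })

# hypothetical values for illustration
body = register_body("did:hive:0xabc123", "0x" + "0" * 40, "https://example.com/agent")
print(body)
```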
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, but description covers key behavioral traits: POST request, cold-start resilience, and return behavior ('rails not yet live'). Lacks explicit mention of write nature or idempotency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Concise and well-structured: purpose first, then mechanics, then endpoint and resilience. No redundant information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Covers registration program details, parameters, and response behavior despite no output schema. Provides sufficient context for an agent to invoke correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, and description adds meaningful context: agent DID format, Base EVM address for payouts, URL purpose. Enhances understanding beyond schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states it registers an agent for the Hive Civilization attribution payout program, with details on settlement, kickback, and endpoint. Distinguishable from siblings like hive_earn_leaderboard.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Describes the tool's purpose and context (registration for a payout program), but does not explicitly state when not to use it or compare to alternatives. Still clear overall.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
swap_route.execute
Construct a swap transaction the agent signs themselves. Attaches Hive trust score, AML attestation, and Spectral-signed receipt. Charges 5 bps trust layer. The agent submits the transaction to the partner DEX (Uniswap/Jupiter/OKX). Wallet keeps custody. Hive provides trust plumbing only.
| Name | Required | Description | Default |
|---|---|---|---|
| did | Yes | Agent DID for trust score lookup. | |
| tokenIn | Yes | Input token symbol. | |
| amountIn | Yes | Amount of input token. | |
| tokenOut | Yes | Output token symbol. | |
| selectedDex | No | Partner DEX to route to. Defaults to best quote. | |
| minAmountOut | No | Minimum acceptable output (slippage guard). | |
| walletAddress | Yes | Agent wallet address that will sign the tx. | |
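Since the description omits the return shape, the request side can still be sketched: deriving minAmountOut from a prior quote (the standard basis-points slippage guard) and wrapping the arguments in a generic MCP tools/call JSON-RPC request. The payload shape and all values here are assumptions for illustration, not the server's confirmed wire format:

```python
import json

def min_amount_out(quoted_out: float, slippage_bps: int = 50) -> float:
    """Slippage guard: accept at most `slippage_bps` basis points below the quote."""
    return quoted_out * (1 - slippage_bps / 10_000)

def execute_call(did, token_in, amount_in, token_out, wallet, quoted_out):
    """Generic MCP tools/call request for swap_route.execute (assumed JSON-RPC shape)."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": 1,
        "method": "tools/call",
        "params": {
            "name": "swap_route.execute",
            "arguments": {
                "did": did,
                "tokenIn": token_in,
                "amountIn": amount_in,
                "tokenOut": token_out,
                "walletAddress": wallet,
                "minAmountOut": min_amount_out(quoted_out),
            },
        },
    })

# hypothetical values: a 1 ETH -> USDC swap quoted at 3000 USDC, 0.5% slippage
print(execute_call("did:hive:0xabc", "ETH", 1.0, "USDC", "0x" + "0" * 40, 3000.0))
```

Setting minAmountOut from the quote rather than hardcoding it keeps the guard meaningful as prices move between quoting and execution.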
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations are all false, so the description adds value by disclosing behavioral traits: agent signs themselves, 5 bps fee, trust attachments, and wallet custody. No contradiction with annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is four sentences, each adding distinct information with no redundancy. It efficiently conveys purpose, trust layer, fee, and custody details.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Without an output schema, the description omits what the tool returns (e.g., transaction object for signing). It covers process well but lacks return value info, which is important for invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so baseline is 3. The description does not elaborate on individual parameters beyond the schema, but provides useful context about the overall process.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool constructs a swap transaction that the agent signs themselves, specifying dependencies like trust score, AML attestation, and a fee. It distinguishes from siblings like swap_route.quote by emphasizing execution and custody.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for executing swaps after a quote, but does not explicitly state when to use versus swap_route.quote or when not to use. No alternative tools or conditions are mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
swap_route.quote (read-only)
Get the best swap route across Uniswap v3 (Base), Jupiter (Solana), and OKX DEX. Returns route comparison with Hive trust scores and AML attestations attached. Hive charges 5 bps trust + receipt fee. Actual liquidity is from partner DEXes. No authentication required.
| Name | Required | Description | Default |
|---|---|---|---|
| chain | No | Preferred chain: base (Uniswap/OKX) or solana (Jupiter). Omit to quote all. | |
| tokenIn | Yes | Input token symbol. One of: ETH, WETH, USDC, USDT, SOL. | |
| amountIn | Yes | Amount of input token. Must be greater than 0. | |
| tokenOut | Yes | Output token symbol. One of: ETH, WETH, USDC, USDT, SOL. | |
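The parameter constraints in the table are easy to enforce before calling the tool. A sketch that validates the argument set client-side, using only the enums and bounds stated above:

```python
SUPPORTED_TOKENS = {"ETH", "WETH", "USDC", "USDT", "SOL"}
CHAINS = {"base", "solana"}

def quote_args(token_in, amount_in, token_out, chain=None):
    """Validate swap_route.quote arguments against the documented constraints."""
    if token_in not in SUPPORTED_TOKENS or token_out not in SUPPORTED_TOKENS:
        raise ValueError(f"tokens must be one of {sorted(SUPPORTED_TOKENS)}")
    if amount_in <= 0:
        raise ValueError("amountIn must be greater than 0")
    args = {"tokenIn": token_in, "amountIn": amount_in, "tokenOut": token_out}
    if chain is not None:
        if chain not in CHAINS:
            raise ValueError("chain must be 'base' or 'solana'")
        args["chain"] = chain  # omit chain entirely to quote all chains
    return args

print(quote_args("ETH", 0.5, "USDC", chain="base"))
```

Omitting chain quotes every venue, which matches the "Omit to quote all" behavior in the table.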
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true and openWorldHint=false. The description adds valuable behavioral context: 'No authentication required,' the fee structure (5 bps), the source of liquidity (partner DEXes), and what is returned (route comparison with trust scores and AML attestations). This goes beyond the annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is three sentences long, front-loading the purpose. Every sentence adds value: purpose, return value, fees and security. No fluff.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has 4 parameters, no output schema, and good annotations, the description sufficiently covers the return format (route comparison with trust scores and AML attestations), fees, and security. It is complete for the tool's complexity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so baseline is 3. The main description adds minimal extra meaning beyond existing param descriptions. It mentions chain preference and token symbols, but those are already covered in the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Get the best swap route across Uniswap v3 (Base), Jupiter (Solana), and OKX DEX.' This is a specific verb-get and resource-swap route, and it distinguishes itself from the sibling tool swap_route.execute by focusing on quoting rather than execution.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description gives clear context: 'No authentication required' and 'Hive charges 5 bps trust + receipt fee.' It implicitly contrasts with swap_route.execute by being a quote-only tool. However, it lacks explicit when-not-to-use or alternative scenarios, though the sibling tool provides differentiation.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}
The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
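Before publishing, it is worth sanity-checking that the file parses and carries the required fields. A small sketch that writes the file locally and round-trips it (the email value is a placeholder to be replaced with your own):

```python
import json

connector = {
    "$schema": "https://glama.ai/mcp/schemas/connector.json",
    "maintainers": [{"email": "your-email@example.com"}],
}

with open("glama.json", "w") as f:
    json.dump(connector, f, indent=2)

# round-trip to confirm the file is valid JSON with the required fields
with open("glama.json") as f:
    loaded = json.load(f)
assert loaded["maintainers"][0]["email"], "maintainer email must be set"
print("glama.json OK; serve it at /.well-known/glama.json")
```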
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.