Hive Compute
Server Details
HiveCompute MCP Server — decentralized inference router for AI agents
| Field | Value |
|---|---|
| Status | Healthy |
| Last Tested | |
| Transport | Streamable HTTP |
| URL | |
| Repository | srotzin/hive-mcp-compute |
| GitHub Stars | 0 |
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4.1/5 across all 5 tools scored.
Each tool has a distinct purpose: chat for inference, embed for embeddings, estimate_cost for cost estimation, get_usage for usage history, list_models for model discovery. No overlap.
All tools use a consistent 'compute.' prefix followed by a descriptive verb or noun (chat, embed, estimate_cost, get_usage, list_models), forming a clear pattern.
5 tools cover the core operations of a compute service (inference, embeddings, cost estimation, usage tracking, model listing) without being excessive or insufficient.
The tool set covers all essential aspects of the compute domain: running tasks (chat/embed), pre-checking costs (estimate_cost), reviewing activity (get_usage), and discovering options (list_models). No obvious gaps.
Available Tools
5 tools

compute.chat (A)
Run inference via Hive's OpenAI-compatible router. Submit a prompt or message array to any available model. Billed per input+output token in USDC on Base L2. Hive routes to the cheapest available model meeting your latency and quality spec.
| Name | Required | Description | Default |
|---|---|---|---|
| did | Yes | Agent DID (e.g. did:hive:xxxx). USDC billed to this agent's Hive wallet. | |
| model | No | Specific model to use (e.g. gpt-4o, claude-3-5-sonnet, llama-3-70b). Omit to let Hive auto-route to the cheapest qualifying model. | |
| api_key | Yes | Agent API key issued by HiveGate. Required for authenticated inference. | |
| messages | Yes | OpenAI-compatible messages array. Each item must have role (system|user|assistant) and content (string). | |
| max_tokens | No | Maximum tokens to generate in the response. Default 512. | |
| temperature | No | Sampling temperature between 0.0 and 2.0. Default 0.7. | |
| max_cost_usdc | No | Hard cap on USDC spend for this inference call. Request rejected if estimated cost exceeds this. Default 0.05. |
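To make the schema concrete, here is a minimal illustrative arguments payload for compute.chat, assembled from the parameter table above. The did and api_key values are placeholders, and the message content is only an example; omitting model lets Hive auto-route to the cheapest qualifying model.

```json
{
  "did": "did:hive:xxxx",
  "api_key": "<HiveGate API key>",
  "messages": [
    { "role": "system", "content": "You are a concise assistant." },
    { "role": "user", "content": "Summarize USDC billing on Base L2 in one sentence." }
  ],
  "max_tokens": 256,
  "temperature": 0.7,
  "max_cost_usdc": 0.01
}
```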
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate readOnlyHint=false (modifying state), and the description adds context about per-token billing, routing to cheapest models, and cost limits. No contradictions, but could detail side effects like wallet deduction.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences long, front-loaded with the primary action, and every sentence adds substantive information (purpose, billing, routing). No redundant or unnecessary text.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the rich schema and absence of output schema, the description covers the tool's purpose, billing, and routing behavior. It could mention that responses follow OpenAI format, but overall it is sufficient for correct usage.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with descriptions for all 7 parameters. The description adds high-level context about OpenAI compatibility and routing, but does not provide additional meaning beyond what the schema already conveys for individual parameters.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool runs inference via an OpenAI-compatible router, specifying the action and resource. It contrasts with siblings like compute.embed and compute.list_models by focusing on chat completions and billing.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for chat inference and mentions billing, but does not explicitly state when to use this tool over alternatives like compute.embed or compute.estimate_cost. The context is clear enough given sibling names.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
compute.embed (A, Idempotent)
Generate vector embeddings via Hive's embedding router. Billed per 1K input tokens in USDC on Base L2. Returns a float array suitable for semantic search, clustering, or RAG pipelines.
| Name | Required | Description | Default |
|---|---|---|---|
| did | Yes | Agent DID. Billing is per 1K tokens. | |
| input | Yes | Text to embed. Pass a string for a single embedding or use the batch endpoint for multiple inputs. | |
| model | No | Embedding model to use. Defaults to text-embedding-3-small. Options: text-embedding-3-small, text-embedding-3-large, embed-multilingual-v3. | |
| api_key | Yes | Agent API key issued by HiveGate. | |
| dimensions | No | Desired embedding dimensions. Must be supported by the selected model. |
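A sketch of a single-input embed call, again with placeholder credentials. The model value is the stated default, and the dimensions value is an assumption for illustration; per the table, it must be supported by the selected model.

```json
{
  "did": "did:hive:xxxx",
  "api_key": "<HiveGate API key>",
  "input": "Decentralized inference routing for autonomous agents",
  "model": "text-embedding-3-small",
  "dimensions": 512
}
```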
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds context beyond annotations: billing per 1K tokens in USDC on Base L2, return type (float array), and usage via Hive's embedding router. Annotations already indicate idempotency and non-destructive nature, so the description complements without repeating. No contradictions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is very concise: two sentences that front-load the main action ('Generate vector embeddings'), followed by essential billing and usability notes. Every sentence adds value without redundancy or filler.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the 5 parameters, no output schema, and annotations covering idempotency/non-destructiveness, the description covers purpose, billing, and return type. It omits mention of idempotency and batch vs single input (handled in schema), which is acceptable. For a tool with rich schema, it is sufficiently complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema coverage, baseline is 3. The description does not add parameter-specific details beyond what is in the schema, but it frames the overall purpose which helps contextualize parameters. No substantial additional meaning is provided for individual parameters.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool generates vector embeddings via Hive's embedding router, specifies billing details, and mentions the return type (float array) and use cases (semantic search, clustering, RAG). This is specific and distinguishes it from sibling tools like compute.chat (chat completions) and compute.list_models (model listing).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description does not provide explicit guidance on when to use this tool versus alternatives. It mentions suitability for semantic search etc., but fails to contrast with siblings like compute.chat or compute.estimate_cost. No 'when not to use' or prerequisites are given.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
compute.estimate_cost (A, Read-only)
Estimate the USDC cost for a prompt before running inference. Returns cost breakdown by input tokens, output tokens, and routing fee. Helps agents budget before committing a payment.
| Name | Required | Description | Default |
|---|---|---|---|
| model | Yes | Model to estimate cost for. Use compute.list_models to browse available models and their per-token prices. | |
| prompt | Yes | The prompt text to estimate cost for. Tokenized to determine input token count. | |
| max_output_tokens | No | Assumed maximum output tokens for cost estimation. Default 512. |
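For example, an agent budgeting a call might send something like the following. The model name and prompt are illustrative; use compute.list_models to browse the real options and prices.

```json
{
  "model": "gpt-4o",
  "prompt": "Summarize the following support ticket in one paragraph.",
  "max_output_tokens": 256
}
```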
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate readOnlyHint=true. The description adds behavioral context: it estimates cost without executing inference, returns a breakdown, and is for pre-commitment budgeting. No contradictions with annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences, front-loads the purpose, and every word adds value. No redundancy or unnecessary details.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
With no output schema, the description provides a reasonable summary of the return value (cost breakdown by input tokens, output tokens, routing fee). Parameters are well-documented. Could potentially mention that it does not execute the prompt, but overall sufficient.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema documents all parameters. The description does not add parameter-specific meaning beyond what is in the schema, but it clarifies the overall purpose.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'estimate' and resource 'USDC cost for a prompt', distinguishing it from sibling tools like compute.chat (inference) and compute.list_models (browse models). It also specifies the output breakdown and use case for budgeting.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies when to use (before inference, for budgeting) and hints at alternatives (compute.chat for actual inference). However, it does not explicitly state when not to use or mention alternatives beyond context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
compute.get_usage (A, Read-only)
Get an agent's compute usage history — total tokens consumed, total USDC spent, breakdown by model, and inference call log with timestamps.
| Name | Required | Description | Default |
|---|---|---|---|
| did | Yes | Agent DID to fetch usage history for. | |
| limit | No | Number of recent inference calls to return. Default 20, max 200. | |
| since | No | ISO 8601 timestamp to filter usage from (e.g. 2025-01-01T00:00:00Z). Optional. | |
| api_key | Yes | Agent API key for authentication. |
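An illustrative query for the 50 most recent calls since the start of 2025, with placeholder credentials; limit stays within the documented maximum of 200.

```json
{
  "did": "did:hive:xxxx",
  "api_key": "<HiveGate API key>",
  "limit": 50,
  "since": "2025-01-01T00:00:00Z"
}
```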
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, indicating no side effects. The description adds meaningful context about what data is returned (totals, breakdown, logs), which is beyond the annotation. No contradictions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, well-structured sentence that front-loads the purpose and lists key outputs. No redundant information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the lack of output schema, the description adequately explains the return value (totals, breakdown, call log). It covers the essential aspects for a usage history tool with 4 parameters. Minor gap: no mention of ordering or pagination.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so the schema already documents all parameters. The description does not add extra semantics beyond what the schema provides. Baseline score of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Get an agent's compute usage history' and lists the specific data included (total tokens, USDC spent, breakdown by model, inference call log). This distinguishes it from sibling tools like compute.chat or compute.estimate_cost.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for retrieving historical compute usage but does not explicitly state when to use this tool versus alternatives like compute.estimate_cost or compute.list_models. No guidance on when not to use it.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
compute.list_models (A, Read-only)
Browse all models available through the Hive inference router — including per-token pricing in USDC, context window size, latency tier, and provider. No authentication required.
| Name | Required | Description | Default |
|---|---|---|---|
| family | No | Filter by model family. One of: gpt-4, claude-3, llama-3, mistral, gemini, embed. | |
| max_price_per_1m | No | Filter to models priced below this amount per 1M tokens in USDC. |
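Both filters are optional. An illustrative call narrowing the list to lower-priced llama-3 models might look like this; the price threshold is an arbitrary example value.

```json
{
  "family": "llama-3",
  "max_price_per_1m": 5.0
}
```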
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds value beyond the readOnlyHint annotation by stating 'No authentication required' and specifying the data returned (pricing, context window, etc.). It is consistent with the annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, no wasted words. Front-loaded with verb and resource.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a list tool with no output schema, the description covers the key aspects. It could mention sorting or pagination, but is overall sufficient for agent selection.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so the description does not need to add much. It does not mention the parameters, but the baseline of 3 holds due to the high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Browse all models' and lists the information included (pricing, context window, latency, provider). It distinguishes the tool from siblings like compute.chat and compute.embed.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for discovery ('Browse all models') but does not explicitly mention when not to use it or point to alternatives. However, it provides clear context for its use case.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.
Discussions
No comments yet.