
Server Details

HiveCompute MCP Server — decentralized inference router for AI agents

Status
Healthy
Transport
Streamable HTTP
Repository
srotzin/hive-mcp-compute
GitHub Stars
0

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: A

Average 4.1/5 across 5 of 5 tools scored.

Server Coherence: A
Disambiguation: 5/5

Each tool has a distinct purpose: chat for inference, embed for embeddings, estimate_cost for cost estimation, get_usage for usage history, list_models for model discovery. No overlap.

Naming Consistency: 5/5

All tools use a consistent 'compute.' prefix followed by a descriptive verb or noun (chat, embed, estimate_cost, get_usage, list_models), forming a clear pattern.

Tool Count: 5/5

5 tools cover the core operations of a compute service (inference, embeddings, cost estimation, usage tracking, model listing) without being excessive or insufficient.

Completeness: 5/5

The tool set covers all essential aspects of the compute domain: running tasks (chat/embed), pre-checking costs (estimate_cost), reviewing activity (get_usage), and discovering options (list_models). No obvious gaps.

Available Tools

5 tools
compute.chat: A

Run inference via Hive's OpenAI-compatible router. Submit a prompt or message array to any available model. Billed per input+output token in USDC on Base L2. Hive routes to the cheapest available model meeting your latency and quality spec.

Parameters (JSON Schema)
did (required): Agent DID (e.g. did:hive:xxxx). USDC billed to this agent's Hive wallet.
model (optional): Specific model to use (e.g. gpt-4o, claude-3-5-sonnet, llama-3-70b). Omit to let Hive auto-route to the cheapest qualifying model.
api_key (required): Agent API key issued by HiveGate. Required for authenticated inference.
messages (required): OpenAI-compatible messages array. Each item must have role (system|user|assistant) and content (string).
max_tokens (optional): Maximum tokens to generate in the response. Default 512.
temperature (optional): Sampling temperature between 0.0 and 2.0. Default 0.7.
max_cost_usdc (optional): Hard cap on USDC spend for this inference call. The request is rejected if the estimated cost exceeds this. Default 0.05.
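To make the parameter list above concrete, here is a minimal sketch of assembling a compute.chat argument payload. The helper function, DID, and API key values are hypothetical illustrations, not part of the server's API; field names and defaults follow the table above.

```python
# Hypothetical helper that assembles a compute.chat argument payload.
# The DID and API key below are placeholders, not real credentials.
def build_chat_args(did, api_key, user_prompt, model=None,
                    max_tokens=512, temperature=0.7, max_cost_usdc=0.05):
    """Build an OpenAI-compatible messages array plus Hive routing fields."""
    args = {
        "did": did,                      # agent DID, e.g. did:hive:xxxx
        "api_key": api_key,              # issued by HiveGate
        "messages": [{"role": "user", "content": user_prompt}],
        "max_tokens": max_tokens,
        "temperature": temperature,
        "max_cost_usdc": max_cost_usdc,  # hard spend cap for this call
    }
    if model is not None:                # omit to let Hive auto-route
        args["model"] = model
    return args

args = build_chat_args("did:hive:example", "hive_sk_example",
                       "Summarize MCP in one line.")
```

Leaving `model` unset opts into Hive's cheapest-qualifying-model routing described above.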
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations indicate readOnlyHint=false (modifying state), and the description adds context about per-token billing, routing to the cheapest models, and cost limits. There are no contradictions, but it could further detail side effects such as the wallet deduction.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two sentences long, front-loaded with the primary action, and every sentence adds substantive information (purpose, billing, routing). No redundant or unnecessary text.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the rich schema and absence of output schema, the description covers the tool's purpose, billing, and routing behavior. It could mention that responses follow OpenAI format, but overall it is sufficient for correct usage.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with descriptions for all 7 parameters. The description adds high-level context about OpenAI compatibility and routing, but does not provide additional meaning beyond what the schema already conveys for individual parameters.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool runs inference via an OpenAI-compatible router, specifying the action and resource. It contrasts with siblings like compute.embed and compute.list_models by focusing on chat completions and billing.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for chat inference and mentions billing, but does not explicitly state when to use this tool over alternatives like compute.embed or compute.estimate_cost. The context is clear enough given sibling names.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

compute.embed: A
Idempotent

Generate vector embeddings via Hive's embedding router. Billed per 1K input tokens in USDC on Base L2. Returns a float array suitable for semantic search, clustering, or RAG pipelines.

Parameters (JSON Schema)
did (required): Agent DID. Billing is per 1K tokens.
input (required): Text to embed. Pass a string for a single embedding or use the batch endpoint for multiple inputs.
model (optional): Embedding model to use. Defaults to text-embedding-3-small. Options: text-embedding-3-small, text-embedding-3-large, embed-multilingual-v3.
api_key (required): Agent API key issued by HiveGate.
dimensions (optional): Desired embedding dimensions. Must be supported by the selected model.
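A compute.embed payload can be sketched the same way. The helper and its argument values are illustrative assumptions; the field names and the text-embedding-3-small default come from the table above.

```python
# Illustrative compute.embed argument payload; DID and key are placeholders.
def build_embed_args(did, api_key, text,
                     model="text-embedding-3-small", dimensions=None):
    """Build arguments for a single-string embedding request."""
    args = {
        "did": did,          # billed per 1K input tokens
        "api_key": api_key,  # issued by HiveGate
        "input": text,       # a single string; batching uses a separate endpoint
        "model": model,
    }
    if dimensions is not None:
        args["dimensions"] = dimensions  # must be supported by the chosen model
    return args

args = build_embed_args("did:hive:example", "hive_sk_example",
                        "vector me, please")
```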
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds context beyond annotations: billing per 1K tokens in USDC on Base L2, return type (float array), and usage via Hive's embedding router. Annotations already indicate idempotency and non-destructive nature, so the description complements without repeating. No contradictions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is very concise: two sentences that front-load the main action ('Generate vector embeddings'), followed by essential billing and usability notes. Every sentence adds value without redundancy or filler.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the 5 parameters, no output schema, and annotations covering idempotency/non-destructiveness, the description covers purpose, billing, and return type. It omits mention of idempotency and batch vs single input (handled in schema), which is acceptable. For a tool with rich schema, it is sufficiently complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema coverage, baseline is 3. The description does not add parameter-specific details beyond what is in the schema, but it frames the overall purpose which helps contextualize parameters. No substantial additional meaning is provided for individual parameters.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool generates vector embeddings via Hive's embedding router, specifies billing details, and mentions the return type (float array) and use cases (semantic search, clustering, RAG). This is specific and distinguishes it from sibling tools like compute.chat (chat completions) and compute.list_models (model listing).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description does not provide explicit guidance on when to use this tool versus alternatives. It mentions suitability for semantic search etc., but fails to contrast with siblings like compute.chat or compute.estimate_cost. No 'when not to use' or prerequisites are given.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

compute.estimate_cost: A
Read-only

Estimate the USDC cost for a prompt before running inference. Returns cost breakdown by input tokens, output tokens, and routing fee. Helps agents budget before committing a payment.

Parameters (JSON Schema)
model (required): Model to estimate cost for. Use compute.list_models to browse available models and their per-token prices.
prompt (required): The prompt text to estimate cost for. Tokenized to determine input token count.
max_output_tokens (optional): Assumed maximum output tokens for cost estimation. Default 512.
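The breakdown the tool returns (input tokens, output tokens, routing fee) can be mirrored locally. This is a sketch under assumed per-1M-token prices; actual prices come from compute.list_models, and the routing-fee structure here is an assumption for illustration.

```python
# Illustrative local mirror of the estimate_cost breakdown.
# Prices are per 1M tokens in USDC (assumed values; see compute.list_models).
def estimate_cost_usdc(input_tokens, max_output_tokens,
                       price_in_per_1m, price_out_per_1m, routing_fee_usdc):
    """Return a cost breakdown: input + assumed output + routing fee."""
    input_cost = input_tokens * price_in_per_1m / 1_000_000
    output_cost = max_output_tokens * price_out_per_1m / 1_000_000
    return {
        "input_usdc": input_cost,
        "output_usdc": output_cost,
        "routing_fee_usdc": routing_fee_usdc,
        "total_usdc": input_cost + output_cost + routing_fee_usdc,
    }

# e.g. a 1,000-token prompt at 2.5 USDC/1M in, 512 output tokens at 10 USDC/1M out
breakdown = estimate_cost_usdc(1_000, 512, 2.5, 10.0, 0.001)
```

Comparing `total_usdc` against compute.chat's max_cost_usdc cap is the budgeting step the description refers to.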
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate readOnlyHint=true. The description adds behavioral context: it estimates cost without executing inference, returns a breakdown, and is for pre-commitment budgeting. No contradictions with annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two sentences, front-loads the purpose, and every word adds value. No redundancy or unnecessary details.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

With no output schema, the description provides a reasonable summary of the return value (cost breakdown by input tokens, output tokens, routing fee). Parameters are well-documented. Could potentially mention that it does not execute the prompt, but overall sufficient.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema documents all parameters. The description does not add parameter-specific meaning beyond what is in the schema, but it clarifies the overall purpose.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'estimate' and resource 'USDC cost for a prompt', distinguishing it from sibling tools like compute.chat (inference) and compute.list_models (browse models). It also specifies the output breakdown and use case for budgeting.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies when to use (before inference, for budgeting) and hints at alternatives (compute.chat for actual inference). However, it does not explicitly state when not to use or mention alternatives beyond context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

compute.get_usage: A
Read-only

Get an agent's compute usage history — total tokens consumed, total USDC spent, breakdown by model, and inference call log with timestamps.

Parameters (JSON Schema)
did (required): Agent DID to fetch usage history for.
limit (optional): Number of recent inference calls to return. Default 20, max 200.
since (optional): ISO 8601 timestamp to filter usage from (e.g. 2025-01-01T00:00:00Z).
api_key (required): Agent API key for authentication.
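A compute.get_usage payload can enforce the schema's limit bound and timestamp format client-side before calling the server. The helper is a hypothetical sketch; the default of 20, the cap of 200, and the ISO 8601 `since` format come from the table above.

```python
from datetime import datetime

# Hypothetical client-side builder for compute.get_usage arguments.
def build_usage_args(did, api_key, limit=20, since=None):
    """Validate limit and the ISO 8601 'since' filter, then build the payload."""
    if not 1 <= limit <= 200:  # schema: default 20, max 200
        raise ValueError("limit must be between 1 and 200")
    args = {"did": did, "api_key": api_key, "limit": limit}
    if since is not None:
        # Reject malformed timestamps early; the server expects
        # e.g. 2025-01-01T00:00:00Z (fromisoformat needs +00:00 for the Z).
        datetime.fromisoformat(since.replace("Z", "+00:00"))
        args["since"] = since
    return args

args = build_usage_args("did:hive:example", "hive_sk_example",
                        since="2025-01-01T00:00:00Z")
```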
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, indicating no side effects. The description adds meaningful context about what data is returned (totals, breakdown, logs), which is beyond the annotation. No contradictions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, well-structured sentence that front-loads the purpose and lists key outputs. No redundant information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the lack of output schema, the description adequately explains the return value (totals, breakdown, call log). It covers the essential aspects for a usage history tool with 4 parameters. Minor gap: no mention of ordering or pagination.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so the schema already documents all parameters. The description does not add extra semantics beyond what the schema provides. Baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states 'Get an agent's compute usage history' and lists the specific data included (total tokens, USDC spent, breakdown by model, inference call log). This distinguishes it from sibling tools like compute.chat or compute.estimate_cost.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for retrieving historical compute usage but does not explicitly state when to use this tool versus alternatives like compute.estimate_cost or compute.list_models. No guidance on when not to use it.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

compute.list_models: A
Read-only

Browse all models available through the Hive inference router — including per-token pricing in USDC, context window size, latency tier, and provider. No authentication required.

Parameters (JSON Schema)
family (optional): Filter by model family. One of: gpt-4, claude-3, llama-3, mistral, gemini, embed.
max_price_per_1m (optional): Filter to models priced below this amount per 1M tokens in USDC.
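The two filters behave like a simple predicate chain over the catalog, which can be sketched client-side. The sample catalog entries and prices below are made up for illustration; only the filter semantics (family match, strict price-below cut) follow the table above.

```python
# Made-up sample entries to illustrate the two list_models filters.
SAMPLE_CATALOG = [
    {"id": "llama-3-70b", "family": "llama-3", "price_per_1m": 0.9},
    {"id": "gpt-4o", "family": "gpt-4", "price_per_1m": 5.0},
]

def filter_models(models, family=None, max_price_per_1m=None):
    """Apply the optional family and price filters, mirroring the tool's semantics."""
    out = models
    if family is not None:
        out = [m for m in out if m["family"] == family]
    if max_price_per_1m is not None:
        # "priced below this amount" -> strict less-than
        out = [m for m in out if m["price_per_1m"] < max_price_per_1m]
    return out

cheap = filter_models(SAMPLE_CATALOG, max_price_per_1m=1.0)
```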
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds value beyond the readOnlyHint annotation by stating 'No authentication required' and specifying the data returned (pricing, context window, and more). It is consistent with the annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two sentences with no wasted words, front-loaded with the verb and resource.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a list tool with no output schema, the description covers the key aspects. It could mention sorting or pagination, but it is sufficient for agent selection overall.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so the description does not need to add much. It does not mention parameters, but the baseline of 3 holds due to high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states 'Browse all models' and lists the information included (pricing, context window, latency, provider), distinguishing it from sibling tools like compute.chat and compute.embed.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for discovery ('Browse all models') but does not explicitly mention when not to use it or point to alternatives. It does, however, provide clear context for its use case.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

