Hive Log
Server Details
Tamper-evident audit log service for agent-to-agent transactions
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
- Repository: srotzin/hive-mcp-log
- GitHub Stars: 0
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.9/5 across 6 of 6 tools scored.
Each tool has a clearly distinct purpose: ingest, retention query/set, search, tail, and daily aggregate. No overlap.
All tools follow a consistent 'log_verb' pattern in snake_case, making them predictable and easy to distinguish.
Six tools is well-scoped for a logging server, covering essential operations without unnecessary bloat.
Core CRUD-like operations for log management are covered (ingest, retention, query, tail, summary). Minor gap: no explicit tool for deletion of individual log entries, but retention policies cover automatic cleanup.
Available Tools
6 tools

log_ingest
Ingest a structured log batch (NDJSON, max 4 MB, max 10k lines per call). Returns ingestion receipt with byte count, line count, and x402 charge. $0.0001/line + retention surcharge per tier.
| Name | Required | Description | Default |
|---|---|---|---|
| did | Yes | Calling DID, e.g. did:hive:agent-foo. | |
| lines | Yes | Array of log objects. Common fields: ts, severity, tag, msg. | |
| tx_hash | No | Optional Base L2 USDC tx hash for synchronous verification. | |
| retention_class | No | Retention tier for this batch. | 1d (free) |
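For illustration, a minimal MCP `tools/call` request for this tool might look like the sketch below. The DID, timestamps, and field values are placeholders; the `lines` objects follow the common fields noted above (ts, severity, tag, msg):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "log_ingest",
    "arguments": {
      "did": "did:hive:agent-foo",
      "lines": [
        { "ts": 1718000000000, "severity": "info", "tag": "boot", "msg": "agent started" },
        { "ts": 1718000001000, "severity": "error", "tag": "rpc", "msg": "upstream timeout" }
      ],
      "retention_class": "7d"
    }
  }
}
```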
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description fully discloses behavioral traits: max batch size (4 MB, 10k lines), return format (a receipt with counts and charge), and pricing. Since no annotations exist, there is nothing for the description to contradict.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences efficiently convey purpose, constraints, and return details. No redundant information. Front-loaded with the main action.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description adequately explains the return (receipt with counts and charge). Pricing and constraints are covered. Could mention error handling or rate limits, but the core is complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, already documenting all parameters. The description adds constraints (max lines, size) and explains the return receipt, which provides meaning beyond the schema. Could add more detail on how the array should be structured.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states 'Ingest a structured log batch' with format (NDJSON) and constraints, distinguishing it from sibling tools like log_search or log_retention_get, which handle retrieval and retention.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
While it does not explicitly state when to use this tool versus alternatives, the description's focus on ingestion and the sibling tool names (which cover search, retention, tail, today) make it obvious that this is the only tool for adding logs. More explicit guidance would still improve it.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
log_retention_get
Return current retention tier and bytes stored for the calling DID. Tier 0, free.
| Name | Required | Description | Default |
|---|---|---|---|
| did | Yes | | |
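For illustration, checking the caller's current tier might look like this (the DID is a placeholder):

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "log_retention_get",
    "arguments": { "did": "did:hive:agent-foo" }
  }
}
```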
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must carry the full burden. It describes a read operation returning data, which is consistent with the tool's name, but could explicitly state it is safe and read-only.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single concise sentence with no filler, front-loading the verb and resource.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple get tool with one parameter and no output schema, the description is adequate but lacks details on the return format and the relationship between 'calling DID' and the required 'did' parameter.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0% and the description does not explain the 'did' parameter beyond a vague 'for the calling DID'. It fails to compensate for the missing schema descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it returns the current retention tier and bytes stored, specifying 'for the calling DID' and noting 'Tier 0, free'. This distinguishes it from sibling tools like log_retention_set or log_search.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The purpose is implied (check current retention), but no explicit guidance on when to use or not use this tool versus alternatives like log_retention_set or log_search.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
log_retention_set
Set retention tier for the calling DID (1d/7d/30d). Takes effect immediately for new ingests.
| Name | Required | Description | Default |
|---|---|---|---|
| did | Yes | | |
| tier | Yes | | |
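A sketch of a call moving the caller to the 30-day tier (the DID is a placeholder; valid tiers per the description are 1d, 7d, and 30d):

```json
{
  "jsonrpc": "2.0",
  "id": 3,
  "method": "tools/call",
  "params": {
    "name": "log_retention_set",
    "arguments": { "did": "did:hive:agent-foo", "tier": "30d" }
  }
}
```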
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must carry the full burden. It reveals that the change affects only new ingests, but it does not disclose whether the operation is destructive (e.g., overwriting existing settings), whether it is reversible, or any authentication requirements beyond 'calling DID'.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence containing all essential information without redundancy. It is front-loaded with the action and values.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description covers the basic action and immediate effect, but lacks information on idempotency, return value, or whether existing retention settings are overwritten. For a mutation tool with no output schema and no annotations, additional detail would help completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 0% description coverage, so the description should compensate. It adds context that 'did' refers to the calling DID's own identifier, and 'tier' is explained by the listed enum values. This is adequate but does not provide format or constraints beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Set retention tier'), the resource ('for the calling DID'), and the valid values (1d/7d/30d). It distinguishes this tool from siblings like log_retention_get and log_ingest by specifying the setting operation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description mentions that the effect is immediate 'for new ingests', providing a usage condition. However, it does not explicitly state when to use this tool versus alternatives, nor does it mention any prerequisites or exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
log_search
Search logs for the calling DID by time range, severity, tag, and free-text query. Tier 0 own-DID. Returns rows + cursor for pagination.
| Name | Required | Description | Default |
|---|---|---|---|
| q | No | Free-text substring against msg or payload. | |
| did | Yes | | |
| tag | No | | |
| limit | No | | |
| cursor | No | | |
| severity | No | | |
| since_ms | No | | |
| until_ms | No | | |
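For illustration, a filtered, paginated search might be invoked as below. The severity value and epoch-millisecond time bounds are assumptions inferred from the parameter names, not confirmed by the schema, and the DID is a placeholder:

```json
{
  "jsonrpc": "2.0",
  "id": 4,
  "method": "tools/call",
  "params": {
    "name": "log_search",
    "arguments": {
      "did": "did:hive:agent-foo",
      "severity": "error",
      "tag": "rpc",
      "q": "timeout",
      "since_ms": 1718000000000,
      "until_ms": 1718086400000,
      "limit": 100
    }
  }
}
```

If the response includes a cursor, passing it back unchanged in a follow-up call with the same filters would presumably fetch the next page.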
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations exist, so the description carries the full burden. It discloses that the tool returns rows and a cursor for pagination, and implies read-only behavior via 'search'. However, it does not mention rate limits, auth requirements, or potential side effects beyond the read operation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with no fluff, front-loaded with verb and key filters, and includes critical pagination info. Every word contributes value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Provides a high-level overview of filters and pagination but lacks details on the required 'did' parameter, the time units (epoch ms), and the enum values for severity. Given 8 parameters and no output schema, more context would help ensure correct invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is low (13%). Description compensates by listing filter types (time range, severity, tag, free-text) and pagination (cursor). But it omits explaining the required 'did' parameter and does not detail limit, since_ms/until_ms format, or severity enum values, leaving gaps.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states verb 'search' and resource 'logs', and identifies key filters (time range, severity, tag, free-text). However, it says 'for the calling DID' while the schema requires a 'did' parameter, creating slight ambiguity about whether it uses the caller's identity or an explicit DID.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Mentions 'Tier 0 own-DID', hinting at scope, but does not explicitly state when to use this tool versus siblings like log_tail or log_today, nor does it provide exclusions or prerequisites.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
log_tail
Return last N log lines for the calling DID. Tier 0, free, own-DID only.
| Name | Required | Description | Default |
|---|---|---|---|
| n | No | How many lines to return (max 500). | 50 |
| did | Yes | | |
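A sketch of a call fetching the last 100 lines (the DID is a placeholder):

```json
{
  "jsonrpc": "2.0",
  "id": 5,
  "method": "tools/call",
  "params": {
    "name": "log_tail",
    "arguments": { "did": "did:hive:agent-foo", "n": 100 }
  }
}
```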
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must carry the full burden. It discloses that the tool is free and own-DID only, implying a read operation. However, it does not mention rate limits or the response format, and it never states explicitly that the operation is non-destructive (likely, but not guaranteed by the text).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two short sentences totaling 14 words, front-loaded with the action. Every word adds value; no redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple tail tool with 2 parameters and no output schema, the description covers core purpose, scope, and cost. It could mention that lines are returned in reverse chronological order or the format, but not essential given sibling tools cover other aspects.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 50% (did has no description). The description adds meaning by stating 'calling DID', clarifying that the DID parameter must match the caller. This provides context beyond the schema. Parameter 'n' is already well-described in schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states it returns last N log lines for the calling DID. The verb 'Return' and resource 'log lines' are specific, and the scope 'for the calling DID' distinguishes it from siblings like log_search or log_today.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Indicates 'Tier 0, free, own-DID only', which sets clear usage context and constraints. Could be improved by explicitly contrasting with siblings (e.g., 'Use log_search for filtered queries over all DIDs').
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
log_today
Today aggregate — ingests, lines, bytes, charge_usd. Tier 0, free, read-only.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
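Since the tool takes no parameters, a sketch of a call is just the tool name with empty arguments:

```json
{
  "jsonrpc": "2.0",
  "id": 6,
  "method": "tools/call",
  "params": {
    "name": "log_today",
    "arguments": {}
  }
}
```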
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It discloses that the tool is read-only, free, and Tier 0, which are key behavioral traits. However, it does not describe other aspects like data freshness or access requirements.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise: two sentences that convey purpose, output fields, and cost/access tier. Every word earns its place with no redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no parameters, no output schema, and a simple operation, the description provides sufficient context (output fields, read-only, free). It does not specify time range precision or real-time behavior, but it is adequate for the tool's simplicity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has no parameters, so the description naturally adds no parameter details. The baseline score of 4 is appropriate since schema coverage is 100% and no parameters exist.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it returns today's aggregate data (ingests, lines, bytes, charge_usd) and is read-only. This distinguishes it from sibling tools like log_ingest (writing) and log_search (searching).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool vs alternatives. It does not mention when not to use it or provide context for choosing between siblings.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.