
Server Details

Verified registry of third-party-system knowledge — the external-dependency layer for agent memory.

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:
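
Since the listed transport is Streamable HTTP, here is a minimal connection sketch, assuming the official MCP Python SDK; the endpoint URL below is a placeholder, as the real URL field is not reproduced on this page.

```python
import asyncio

from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

SERVER_URL = "https://example.com/mcp"  # placeholder, not the real endpoint

async def main() -> None:
    # streamablehttp_client yields read/write streams plus a session-id getter
    async with streamablehttp_client(SERVER_URL) as (read, write, _):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            print([tool.name for tool in tools.tools])  # expect the runlog_* tools

asyncio.run(main())
```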

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions (B)

Average 3.3/5 across 3 of 3 tools scored.

Server Coherence (A)
Disambiguation: 5/5

Each tool serves a distinct function: reporting outcomes, searching knowledge entries, and submitting new entries. No functional overlap is apparent.

Naming Consistency: 5/5

All tools follow a consistent 'runlog_<verb>' pattern (report, search, submit), providing a clear and predictable naming convention.

Tool Count: 3/5

With only 3 tools, the server feels minimal but not unreasonably so. The scope appears focused on knowledge entry management, though a few more tools for retrieval and management would be expected.

Completeness: 2/5

The tool set lacks basic operations like retrieving a single entry, updating, or deleting entries. The report tool implies a retrieval mechanism that is not exposed, leaving a significant gap.

Available Tools

3 tools
runlog_report (B)

Report whether a retrieved entry succeeded or failed in the caller's context.

Outcome telemetry drives the stub confidence update (v0.1) and will feed the full decay/correlation engine in Phase 3.

Parameters (JSON Schema)

Name              Required  Description  Default
outcome           Yes
entry_id          Yes
error_context     No
session_manifest  No
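
Purely as an illustration, a call to runlog_report through an active ClientSession (as in the connection sketch above) might look like the following; because the schema publishes no parameter descriptions, every argument value here is a guess, not a documented format.

```python
result = await session.call_tool(
    "runlog_report",
    arguments={
        "outcome": "success",     # assumed value; the valid enum is undocumented
        "entry_id": "entry-123",  # hypothetical identifier format
        # error_context and session_manifest are optional and omitted here
    },
)
```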
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The second sentence discloses important side effects: outcome telemetry updates stub confidence and feeds a future engine. Because the tool declares no structured annotations, the description itself carries significant behavioral context beyond a simple report, though it omits details such as idempotency and error handling.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two sentences with no wasted words. The purpose is front-loaded, and the telemetry explanation is efficient. Every sentence adds value.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 4 parameters with no schema descriptions and no annotations, the description lacks guidance on how to use the tool effectively. It does not explain parameter input formats or constraints, and despite having an output schema, it does not indicate what the output contains.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 1/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 4 parameters with 0% description coverage, and the tool description explains none of them (e.g., the valid values for outcome, or the meaning of error_context and session_manifest). Agents are forced to rely on parameter names alone.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool reports an outcome (succeeded/failed) for a retrieved entry. This differentiates it from siblings runlog_search (search) and runlog_submit (submit), though not explicitly. The verb 'report' and resource 'entry outcome' are specific.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Usage is implied by 'retrieved entry', which suggests the tool follows a retrieval operation, but no explicit when-to-use guidance or alternatives are given. The description neither contrasts with its siblings nor provides exclusion criteria.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

runlog_submit (B)

Contribute a new knowledge entry about an external-dependency behaviour.

The entry is validated against schema/entry.schema.yaml, checked for scope (public-only domain tags) and contamination (credentials, PII), then embedded and stored. When verification_signature is supplied the bundle is cryptographically verified against the calling key's registered Ed25519 pubkey ([F24] prereq #2); unsigned submits still land at status='unverified' as before.
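
The Ed25519 flow above is described but not specified; as a purely illustrative sketch, signing a canonical-JSON serialization of the entry with the cryptography library could look like this (the server's actual bundle format and signature encoding are assumptions):

```python
import json

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

private_key = Ed25519PrivateKey.generate()  # in practice, your registered key
entry = {"title": "..."}  # hypothetical entry shape; see the schema note above

# Assumption: the signed bundle is the entry as canonical (sorted, compact) JSON
bundle = json.dumps(entry, sort_keys=True, separators=(",", ":")).encode()
verification_signature = private_key.sign(bundle).hex()  # hex encoding assumed
```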

Parameters (JSON Schema)

Name                    Required  Description  Default
entry                   Yes
verification_signature  No
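
And a hedged sketch of an unsigned submission, again via an active ClientSession; the entry fields are illustrative guesses, since schema/entry.schema.yaml is not reproduced on this page.

```python
result = await session.call_tool(
    "runlog_submit",
    arguments={
        "entry": {
            # Hypothetical fields; the real shape is defined by
            # schema/entry.schema.yaml, which this page does not include.
            "title": "Payment API returns 429 above ~100 req/s",
            "domain_tags": ["payments.example.com"],
            "body": "Observed sustained rate limiting during burst traffic.",
        },
        # verification_signature omitted: per the description, the entry
        # lands at status='unverified'.
    },
)
```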
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Despite the tool declaring no annotations, the description discloses validation, scope/contamination checks, embedding, and storage. It also explains the optional verification_signature behavior. However, it omits details such as required permissions and rate limits.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is one paragraph of moderate length, front-loading the purpose. It is reasonably concise but could benefit from bullet points or clearer separation of steps.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The presence of an output schema reduces the need to explain return values. However, given the complexity of validation and optional cryptographic verification, the description omits prerequisites and detailed parameter structure, leaving gaps.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 0%, so the description must compensate. It only briefly mentions 'entry' as a knowledge entry and 'verification_signature' for cryptographic verification. The structure of 'entry' is not detailed, leaving the agent with limited parameter understanding.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states 'Contribute a new knowledge entry about an external-dependency behaviour,' which is a specific verb and resource. It also mentions validation and storage steps, distinguishing it from siblings like runlog_report and runlog_search.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when to use this tool versus alternatives (runlog_report, runlog_search) or when not to use it. The description focuses on process but lacks usage context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
