Runlog
Server Details
Verified registry of third-party-system knowledge — the external-dependency layer for agent memory.
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
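Streamable HTTP means the server accepts MCP JSON-RPC messages over HTTP POST. As a point of reference, an initialize handshake body would look roughly like the sketch below; the listing does not show the endpoint URL, and the client name and version here are placeholders, not anything this server prescribes.

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "initialize",
  "params": {
    "protocolVersion": "2025-03-26",
    "capabilities": {},
    "clientInfo": { "name": "example-client", "version": "0.1.0" }
  }
}
```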
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.3/5 across 3 of 3 tools scored.
Each tool serves a distinct function: reporting outcomes, searching knowledge entries, and submitting new entries. No functional overlap is apparent.
All tools follow a consistent 'runlog_<verb>' pattern (report, search, submit), providing a clear and predictable naming convention.
With only 3 tools, the server feels minimal but not unreasonably so. The scope appears focused on knowledge entry management, though a few more tools for retrieval and management would be expected.
The tool set lacks basic operations like retrieving a single entry, updating, or deleting entries. The report tool implies a retrieval mechanism that is not exposed, leaving a significant gap.
Available Tools
3 tools

runlog_report
Report whether a retrieved entry succeeded or failed in the caller's context.
Outcome telemetry drives the stub confidence update (v0.1) and will feed the full decay/correlation engine in Phase 3.
| Name | Required | Description | Default |
|---|---|---|---|
| outcome | Yes | | |
| entry_id | Yes | | |
| error_context | No | | |
| session_manifest | No | | |
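Because the schema publishes no parameter descriptions, the call below is only a sketch of what an invocation might look like via MCP's tools/call method. The outcome value "success" and the shape of session_manifest are assumptions (valid values are not documented), and the entry_id placeholder stands in for an id returned by a prior runlog_search.

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "runlog_report",
    "arguments": {
      "entry_id": "<entry_id from a prior runlog_search result>",
      "outcome": "success",
      "session_manifest": { "agent": "<agent name, hypothetical field>" }
    }
  }
}
```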
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The second sentence discloses important side effects: outcome telemetry updates stub confidence and feeds a future engine. With no annotations, this adds significant behavioral context beyond a simple report, though it omits details like idempotency or error handling.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences with no wasted words. The purpose is front-loaded, and the telemetry explanation is efficient. Every sentence adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 4 parameters with no schema descriptions and no annotations, the description lacks guidance on how to use the tool effectively. It does not explain parameter input formats or constraints, and despite having an output schema, it does not indicate what the output contains.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 4 parameters with 0% schema description coverage, and the description provides no explanations for any parameter (e.g., valid values for outcome, meaning of error_context, session_manifest). This forces reliance on parameter names only.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool reports an outcome (succeeded/failed) for a retrieved entry. This differentiates it from siblings runlog_search (search) and runlog_submit (submit), though not explicitly. The verb 'report' and resource 'entry outcome' are specific.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Usage is implied via 'retrieved entry' suggesting it follows a retrieval operation, but no explicit when-to-use or alternatives are given. The description does not contrast with siblings or provide exclusion criteria.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
runlog_search
Find knowledge entries relevant to an external-dependency problem.
All parameters except query are optional. Authentication is required (`Authorization: Bearer <token>`).
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Max results to return, 1–50. | 10 |
| query | Yes | Natural-language description of the problem. | |
| domain | No | Domain tags to narrow the search (e.g. ["stripe", "python"]). | |
| version_constraints | No | Version filters (echoed back; not applied in v0.1). | |
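This is the best-documented tool of the three, so a call can be sketched with little guesswork. The arguments below follow the schema descriptions directly (the query text is an invented example); per the tool description, the bearer token travels in the HTTP Authorization header rather than in the JSON body.

```json
{
  "jsonrpc": "2.0",
  "id": 3,
  "method": "tools/call",
  "params": {
    "name": "runlog_search",
    "arguments": {
      "query": "Stripe webhook signature verification fails after SDK upgrade",
      "domain": ["stripe", "python"],
      "limit": 5
    }
  }
}
```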
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Without annotations, the description carries the full burden. It mentions authentication but does not disclose whether the operation is read-only, potential rate limits, or error behavior. The read-only nature is implied but not explicit.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences, front-loads the purpose, and has no redundant information. Every sentence adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
With an output schema present, return values need not be described. However, the description omits details like result ordering, pagination, and potential empty results, which are important for a search tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so baseline is 3. The description adds value by explaining query as a natural-language description, domain as tags to narrow search, and noting that version_constraints are not applied in v0.1.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool finds knowledge entries relevant to external-dependency problems, which distinguishes it from sibling tools like runlog_report and runlog_submit. However, the term 'knowledge entries' is somewhat vague.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description notes that all parameters except query are optional and mentions authentication, but it does not specify when to use this tool over siblings or provide exclusion criteria.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
runlog_submit
Contribute a new knowledge entry about an external-dependency behaviour.
The entry is validated against schema/entry.schema.yaml, checked for scope (public-only domain tags) and contamination (credentials, PII), then embedded and stored. When verification_signature is supplied the bundle is cryptographically verified against the calling key's registered Ed25519 pubkey ([F24] prereq #2); unsigned submits still land at status='unverified' as before.
| Name | Required | Description | Default |
|---|---|---|---|
| entry | Yes | | |
| verification_signature | No | | |
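The real shape of entry is defined by schema/entry.schema.yaml, which this listing does not expose, so the summary and domain_tags fields below are invented for illustration only. The verification_signature value is likewise a placeholder; per the description it is optional, and unsigned submissions simply land at status='unverified'.

```json
{
  "jsonrpc": "2.0",
  "id": 4,
  "method": "tools/call",
  "params": {
    "name": "runlog_submit",
    "arguments": {
      "entry": {
        "summary": "<hypothetical field — actual shape per schema/entry.schema.yaml>",
        "domain_tags": ["stripe", "python"]
      },
      "verification_signature": "<optional base64 Ed25519 signature>"
    }
  }
}
```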
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description discloses validation, scope/contamination checks, embedding, and storage. It also explains the optional verification_signature behavior. However, it omits details such as permission requirements and rate limits.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is one paragraph of moderate length, front-loading the purpose. It is reasonably concise but could benefit from bullet points or clearer separation of steps.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The presence of an output schema reduces the need to explain return values. However, given the complexity of validation and optional cryptographic verification, the description omits prerequisites and detailed parameter structure, leaving gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 0%, so the description must compensate. It only briefly mentions 'entry' as a knowledge entry and 'verification_signature' for cryptographic verification. The structure of 'entry' is not detailed, leaving the agent with limited parameter understanding.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Contribute a new knowledge entry about an external-dependency behaviour,' which is a specific verb and resource. It also mentions validation and storage steps, distinguishing it from siblings like runlog_report and runlog_search.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this tool versus alternatives (runlog_report, runlog_search) or when not to use it. The description focuses on process but lacks usage context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.