Agent Memory Bridge
Server Quality Checklist
- Disambiguation 5/5
The tools are perfectly distinct: 'store' handles write operations while 'recall' handles read/poll operations, with no ambiguity in their boundaries.
- Naming Consistency 5/5
Both tools follow a consistent pattern of simple imperative verbs describing memory operations, with uniform casing and style.
- Tool Count 3/5
A two-tool surface is borderline thin for a memory management system; while it covers basic read/write, it lacks the supporting operations typically needed for robust memory handling.
- Completeness 2/5
The surface only supports creating (store) and reading (recall) entries, with significant gaps including no update, delete, or list operations, limiting agents' ability to manage memory lifecycle.
Average 2.5/5 across 2 of 2 tools scored.
See the tool scores section below for per-tool breakdowns.
This repository includes a README.md file.
This repository includes a LICENSE file.
Latest release: v0.2.1
No tool usage detected in the last 30 days. Usage tracking helps demonstrate server value.
Tip: use the "Try in Browser" feature on the server page to seed initial usage.
This repository includes a glama.json configuration file.
- This server provides 2 tools.
No known security issues or vulnerabilities reported.
This server has been verified by its author.
Add related servers to improve discoverability.
Tool Scores

recall

- Behavior 2/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full disclosure burden. It mentions 'poll for new signals' which hints at temporal/polling behavior (supported by the 'since' parameter), but fails to declare safety properties (read-only vs. destructive), idempotency, rate limits, or error behaviors that would be essential for a bridge/entry retrieval system.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness 3/5
Is the description appropriately sized, front-loaded, and free of redundancy?
The single sentence is front-loaded and contains no wasted words; however, given the tool's complexity (9 parameters, dual operation modes, output schema), the extreme brevity constitutes underspecification rather than efficient communication. The sentence earns its place, but one sentence is not enough.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness 2/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with 9 parameters, complex filtering capabilities (tags_any, kind, correlation_id), and an output schema, the description is materially incomplete. It fails to explain the 'bridge' concept, parameter interactions, or the structure of returned data, despite having an output schema that reduces the need for return value explanation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters 1/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, leaving 9 parameters undocumented. The description provides no parameter names, expected formats, or relationships between parameters (e.g., how 'since' enables polling, or what 'namespace' constrains). The vague phrase 'matching entries' loosely implies the 'query' parameter exists but offers no syntax or usage guidance.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose 3/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses specific verbs ('recall', 'poll') and identifies the resource ('entries', 'signals'), but relies on jargon ('the bridge') without explanation. It implicitly distinguishes from sibling 'store' by focusing on retrieval, though the dual-purpose phrasing ('or') creates slight ambiguity about the primary function.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines 2/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description hints at two usage modes—querying existing entries versus polling for new signals—but provides no explicit guidance on when to prefer this over sibling 'store' or other alternatives. No prerequisites, error conditions, or 'when-not-to-use' guidance is provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
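For contrast, here is a hypothetical rewrite of the 'recall' definition that addresses the gaps above. Only the six parameters the report actually names (query, since, namespace, tags_any, kind, correlation_id) are sketched; the other three are omitted rather than invented, and every description and format below is illustrative, not the server's real schema.

{
  "name": "recall",
  "description": "Read-only: query entries in the shared memory bridge, or poll for new signals by passing 'since'. No side effects; safe to retry. Use the sibling 'store' tool to write entries.",
  "inputSchema": {
    "type": "object",
    "properties": {
      "query": { "type": "string", "description": "Free-text match against entry content." },
      "since": { "type": "string", "description": "Timestamp; return only entries created after it (enables polling)." },
      "namespace": { "type": "string", "description": "Restrict results to a single namespace." },
      "tags_any": { "type": "array", "items": { "type": "string" }, "description": "Match entries carrying any of these tags." },
      "kind": { "type": "string", "enum": ["memory", "signal"], "description": "Filter by entry type." },
      "correlation_id": { "type": "string", "description": "Return only entries linked to this id." }
    }
  }
}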
store

- Behavior 2/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full disclosure burden. It mentions 'one' entry (ruling out bulk writes) and hints at two entry types (memory/signal), but fails to disclose critical mutation behaviors: overwrite logic, durability guarantees, idempotency, or error conditions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness 4/5
Is the description appropriately sized, front-loaded, and free of redundancy?
Extremely concise single sentence with no redundancy. However, given the high complexity (9 parameters, 0% schema coverage), this brevity becomes under-specification rather than efficient communication.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness 2/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Inadequate for a 9-parameter mutation tool. Although the output schema excuses omitting return-value documentation, the description lacks the domain context ('bridge' undefined), parameter guidance, and behavioral constraints necessary for safe invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters 2/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, requiring the description to compensate significantly. It fails to do so: no explanation of the 7 optional parameters (tags, session_id, actor, etc.), valid values for 'kind' (only implied as memory/signal), or namespace scoping rules.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose 4/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description provides a clear verb ('Store') and resource ('shared memory or signal entry'), and specifies scope ('one', 'in the bridge'). It effectively distinguishes from sibling 'recall' through the opposing verb.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines 2/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance provided on when to use this tool versus 'recall' or other alternatives. No mention of prerequisites, required permissions, or use-case scenarios.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
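A matching hypothetical rewrite for 'store' would front-load the mutation semantics the report flags as missing. The behavioral claims below (append-only, never overwrites) are placeholders showing the shape of the disclosure, not documented behavior:

{
  "name": "store",
  "description": "Write one shared memory or signal entry to the bridge (append-only; never overwrites). Set 'kind' to 'memory' for durable facts or 'signal' for events other agents poll via 'recall'. Optional 'tags', 'session_id', and 'actor' attach filterable metadata; 'namespace' scopes visibility."
}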
GitHub Badge
Glama performs regular codebase and documentation scans to:
- Confirm that the MCP server is working as expected.
- Confirm that there are no obvious security issues.
- Evaluate tool definition quality.
Our badge communicates server capabilities, safety, and installation instructions.
Card Badge
Copy to your README.md:
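(The exact snippet is generated on the server page; it typically looks like the HTML embed below. The badge URL here is an assumed pattern, so prefer the generated one.)

<a href="https://glama.ai/mcp/servers/zzhang82/Agent-Memory-Bridge">
  <img src="https://glama.ai/mcp/servers/zzhang82/Agent-Memory-Bridge/badge" alt="Agent Memory Bridge MCP server" />
</a>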
Score Badge
Copy to your README.md:
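(Again illustrative only; the score-badge URL below is an assumption, so copy the snippet from the server page.)

[![Quality score](https://glama.ai/mcp/servers/zzhang82/Agent-Memory-Bridge/score-badge)](https://glama.ai/mcp/servers/zzhang82/Agent-Memory-Bridge)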
How to claim the server?
If you are the author of the server, you simply need to authenticate using GitHub.
However, if the MCP server belongs to an organization, you need to first add glama.json to the root of your repository.
{
"$schema": "https://glama.ai/mcp/schemas/server.json",
"maintainers": [
"your-github-username"
]
}

Then, authenticate using GitHub.
Browse examples.
How to make a release?
A "release" on Glama is not the same as a GitHub release. To create a Glama release:
- Claim the server if you haven't already.
- Go to the Dockerfile admin page, configure the build spec, and click Deploy.
- Once the build test succeeds, click Make Release, enter a version, and publish.
This process allows Glama to run security checks on your server and enables users to deploy it.
How to add a LICENSE?
Please follow the instructions in the GitHub documentation.
Once GitHub recognizes the license, the system will automatically detect it within a few hours.
If the license does not appear on the server after some time, you can manually trigger a new scan using the MCP server admin interface.
How to sync the server with GitHub?
Servers are automatically synced at least once per day, but you can also sync manually at any time to instantly update the server profile.
To manually sync the server, click the "Sync Server" button in the MCP server admin interface.
How is the quality score calculated?
The overall quality score combines two components: Tool Definition Quality (70%) and Server Coherence (30%).
Tool Definition Quality measures how well each tool describes itself to AI agents. Every tool is scored 1–5 across six dimensions: Purpose Clarity (25%), Usage Guidelines (20%), Behavioral Transparency (20%), Parameter Semantics (15%), Conciseness & Structure (10%), and Contextual Completeness (10%), yielding a per-tool Tool Definition Quality Score (TDQS). The server-level definition quality score is calculated as 60% mean TDQS + 40% minimum TDQS, so a single poorly described tool pulls the whole score down.
Server Coherence evaluates how well the tools work together as a set, scoring four dimensions equally: Disambiguation (can agents tell tools apart?), Naming Consistency, Tool Count Appropriateness, and Completeness (are there gaps in the tool surface?).
Tiers are derived from the overall score: A (≥3.5), B (≥3.0), C (≥2.0), D (≥1.0), F (<1.0). B and above is considered passing.
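As a worked example, here is a minimal Python sketch of that formula applied to the scores reported on this page. The weights and tier cutoffs come from the description above; Glama's rounding and internal details are unknown, so the published score may differ.

# Weights for the six tool-definition dimensions, as listed above.
TDQ_WEIGHTS = {
    "purpose": 0.25, "usage": 0.20, "behavior": 0.20,
    "parameters": 0.15, "conciseness": 0.10, "completeness": 0.10,
}

def tdqs(scores):
    """Per-tool Tool Definition Quality Score (weighted, 1-5 scale)."""
    return sum(TDQ_WEIGHTS[dim] * val for dim, val in scores.items())

def overall(tool_scores, coherence):
    per_tool = [tdqs(s) for s in tool_scores]
    # 60% mean + 40% minimum, so one weak tool drags the server down.
    definition_quality = 0.6 * sum(per_tool) / len(per_tool) + 0.4 * min(per_tool)
    server_coherence = sum(coherence) / len(coherence)  # four equal dimensions
    return 0.7 * definition_quality + 0.3 * server_coherence

# Per-tool scores from the Tool Scores section above.
recall = {"purpose": 3, "usage": 2, "behavior": 2,
          "parameters": 1, "conciseness": 3, "completeness": 2}
store = {"purpose": 4, "usage": 2, "behavior": 2,
         "parameters": 2, "conciseness": 4, "completeness": 2}

# Coherence: Disambiguation 5, Naming 5, Tool Count 3, Completeness 2.
print(overall([recall, store], coherence=[5, 5, 3, 2]))  # ~2.77 -> tier C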
MCP directory API
We provide all the information about MCP servers via our MCP API.
curl -X GET 'https://glama.ai/api/mcp/v1/servers/zzhang82/Agent-Memory-Bridge'
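Or, a minimal Python equivalent of that request (the response shape is not documented here, so this just pretty-prints whatever JSON comes back):

import json
import urllib.request

URL = "https://glama.ai/api/mcp/v1/servers/zzhang82/Agent-Memory-Bridge"

with urllib.request.urlopen(URL) as resp:  # plain GET, no auth shown in the docs
    data = json.load(resp)

print(json.dumps(data, indent=2))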
If you have feedback or need assistance with the MCP directory API, please join our Discord server.