Rememb
Server Quality Checklist
- Disambiguation: 5/5
Each tool has a clearly distinct purpose: clear deletes all entries, delete removes by ID, edit modifies by ID, init initializes storage, read loads entries, search finds by content/tags, and write saves new entries. There is no overlap in functionality, making tool selection straightforward for an agent.
- Naming Consistency: 5/5
All tools follow a consistent 'rememb_verb' pattern (e.g., rememb_clear, rememb_delete, rememb_edit). This uniform naming convention enhances predictability and readability across the tool set.
- Tool Count: 5/5
With 7 tools, the server is well-scoped for a memory management system. Each tool serves a specific role in the CRUD lifecycle (create, read, update, delete) and includes essential utilities like initialization, search, and bulk operations, making the count appropriate.
- Completeness: 5/5
The tool set provides complete coverage for memory management: write (create), read (retrieve), edit (update), delete (remove), clear (bulk delete), init (setup), and search (query). There are no obvious gaps, enabling agents to handle all core workflows without dead ends.
Average 3.2/5 across 7 of 7 tools scored.
See the tool scores section below for per-tool breakdowns.
- This repository includes a README.md file.
- This repository includes a LICENSE file.
- Latest release: v0.1.0
- Tools from this server were used 18 times in the last 30 days.
- This repository includes a glama.json configuration file.
- This server provides 7 tools.
- No known security issues or vulnerabilities reported.
- This server has been verified by its author.
Tool Scores
rememb_delete
- Behavior: 2/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. 'Remove' implies a destructive mutation, but the description doesn't specify whether the deletion is permanent or reversible, what permissions are required, whether there are side effects (e.g., cascading deletions), or how invalid IDs are handled. This leaves significant gaps for a tool that performs deletions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness: 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, clear sentence with zero wasted words. It's front-loaded with the core action and target, making it easy to parse quickly. Every part of the sentence earns its place by conveying essential information without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness: 2/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity of a deletion tool with no annotations and no output schema, the description is incomplete. It lacks details on behavioral traits (e.g., permanence, permissions), error responses, or what happens post-deletion. While the schema covers the single parameter well, the overall context for safe and effective use is insufficient.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters: 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with the parameter 'entry_id' fully documented in the schema as 'Entry ID to delete'. The description adds minimal value beyond this, only restating that removal is 'by ID' without providing additional context like ID format or examples. Baseline 3 is appropriate since the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose: 4/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Remove') and target ('a memory entry by ID'), which is specific and unambiguous. It distinguishes from siblings like 'rememb_clear' (which likely removes all entries) and 'rememb_edit' (which modifies rather than deletes). However, it doesn't explicitly mention the resource type (e.g., 'memory' vs. 'database entry'), leaving slight room for interpretation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines: 2/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites (e.g., needing an existing entry ID), exclusions (e.g., not for bulk deletion), or comparisons to siblings like 'rememb_clear' (for removing all entries) or 'rememb_edit' (for updates). Usage is implied but not explicitly stated.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
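For contrast, a hypothetical revision of the rememb_delete definition (not the server's actual code) shows how MCP tool annotations and a sibling-aware description could close the Behavior and Usage Guidelines gaps flagged above; the permanence claim here is assumed for illustration:

{
  "name": "rememb_delete",
  "description": "Remove a single memory entry by ID. Deletion is permanent. Use rememb_clear to delete all entries at once, or rememb_edit to modify an entry instead of deleting it.",
  "annotations": {
    "title": "Delete memory entry",
    "destructiveHint": true
  },
  "inputSchema": {
    "type": "object",
    "properties": {
      "entry_id": {
        "type": "string",
        "description": "Entry ID to delete"
      }
    },
    "required": ["entry_id"]
  }
}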
rememb_edit
- Behavior: 2/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. 'Modify' implies a mutation operation, but the description doesn't specify whether this requires specific permissions, if changes are reversible, what happens to unspecified fields, or any rate limits. It lacks critical behavioral context for a mutation tool with zero annotation coverage.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness: 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that directly states the tool's purpose without unnecessary words. It's appropriately sized and front-loaded, making it easy to parse quickly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness: 2/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a mutation tool with 4 parameters, no annotations, and no output schema, the description is incomplete. It doesn't cover behavioral aspects like permissions, side effects, or response format, and while the schema documents parameters, the overall context for safe and effective use is lacking.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters: 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, meaning all parameters are documented in the schema. The description adds no additional parameter semantics beyond what's in the schema (e.g., it doesn't explain the meaning of 'section' or 'tags' further). Baseline score of 3 is appropriate when the schema handles parameter documentation adequately.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose: 4/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Modify') and resource ('an existing memory entry'), making the purpose understandable. However, it doesn't differentiate this tool from its siblings like 'rememb_write' or 'rememb_delete', which would require more specific language about what modification entails versus creation or deletion.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines: 2/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives like 'rememb_write' (for creation) or 'rememb_delete' (for removal). It mentions 'by ID' which implies a prerequisite of having an existing entry ID, but offers no explicit when-to-use or when-not-to-use instructions compared to sibling tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
rememb_init
- Behavior: 2/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It mentions 'Initialize' which suggests a setup or creation action, but doesn't disclose behavioral traits such as whether this is idempotent, what permissions are needed, if it modifies existing files, or what happens on failure. This leaves significant gaps in understanding the tool's behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness: 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, clear sentence that directly states the tool's purpose without unnecessary words. It is front-loaded and efficiently conveys the core action, making it highly concise and well-structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness: 2/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity of an initialization tool with no annotations and no output schema, the description is insufficient. It doesn't explain what 'Initialize' entails (e.g., creates files, sets up configurations), potential side effects, or expected outcomes. For a tool that likely modifies the directory state, more context is needed to ensure safe and correct usage.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters: 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, with one optional parameter 'project_name' documented as 'Optional project name'. The description doesn't add any meaning beyond this, such as explaining the impact of providing or omitting the project name. Baseline score of 3 is appropriate since the schema adequately covers the parameter semantics.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose: 4/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Initialize') and the target resource ('rememb memory store in current directory'), making the purpose understandable. However, it doesn't explicitly differentiate from sibling tools like 'rememb_clear' or 'rememb_write', which might also involve initialization or setup operations, leaving some ambiguity about uniqueness.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines: 2/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides minimal guidance, implying usage for initial setup in a directory context, but lacks explicit when-to-use instructions, prerequisites, or alternatives. It doesn't clarify if this should be used before other sibling tools or in what scenarios it's necessary versus optional.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
rememb_write
- Behavior: 2/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions 'Save a new memory entry,' which implies a write operation, but doesn't cover critical aspects like permissions needed, whether the operation is idempotent, error handling, or what happens on success/failure. This leaves significant gaps for an agent to understand the tool's behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness: 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is highly concise with two short sentences that are front-loaded and waste no words. Every part contributes to understanding the tool's purpose and usage, making it efficient and easy to parse.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness: 2/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity of a write operation with no annotations and no output schema, the description is insufficient. It doesn't explain what the tool returns, error conditions, or behavioral traits like side effects. For a tool that modifies state, more context is needed to ensure safe and correct usage by an agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters: 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents all parameters thoroughly. The description doesn't add any semantic details beyond what's in the schema (e.g., it doesn't explain the purpose of 'section' or 'tags' in context). Baseline 3 is appropriate as the schema handles the parameter documentation adequately.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose: 4/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with a specific verb ('Save') and resource ('a new memory entry'), making it easy to understand what the tool does. However, it doesn't explicitly differentiate this from its siblings like rememb_edit or rememb_init, which likely also involve memory operations.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines: 3/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides some usage guidance with 'Use this when you learn something worth remembering,' which implies a context for application. However, it doesn't specify when to use this tool versus alternatives like rememb_edit (for updates) or rememb_init (for initialization), nor does it mention any exclusions or prerequisites.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
rememb_search
- Behavior: 2/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full burden. It mentions 'semantic similarity' as the search method, which adds some behavioral context beyond basic search. However, it lacks details on permissions, rate limits, output format, or error handling. For a search tool with zero annotation coverage, this is a significant gap in transparency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness: 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence with zero waste—it directly states the tool's purpose and method. It's appropriately sized and front-loaded, making it easy to parse quickly without unnecessary elaboration.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness: 3/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 2 parameters with full schema coverage and no output schema, the description is adequate but incomplete. It covers the basic purpose and method but lacks details on behavioral aspects like permissions or output structure. For a search tool with no annotations, it should provide more context to be fully helpful, but it meets minimum viability.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters: 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with clear descriptions for both parameters (query as 'natural language or keywords', top_k as 'maximum number of results'). The description adds minimal value beyond the schema, mentioning 'content or tags' which relates to the query parameter but doesn't specify syntax or format. Baseline 3 is appropriate as the schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose: 4/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Search') and target resource ('memory entries'), specifying search criteria ('by content or tags using semantic similarity'). It distinguishes from siblings like rememb_read (likely direct retrieval) and rememb_write (creation), but doesn't explicitly contrast them. Purpose is specific but sibling differentiation is implied rather than explicit.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines: 3/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context—searching when you have content or tags to match—but doesn't explicitly state when to use this versus alternatives like rememb_read (which might retrieve by ID) or rememb_edit (modification). No guidance on prerequisites, exclusions, or named alternatives is provided, leaving usage context partially inferred.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
rememb_read
- Behavior: 2/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It states the tool reads memory entries and can filter by section, but lacks details on permissions, rate limits, pagination, or what 'memory entries' entail (e.g., format, size). For a read operation with zero annotation coverage, this leaves significant gaps in understanding how the tool behaves beyond basic functionality.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness: 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise and well-structured in two sentences: the first states the purpose, and the second provides usage guidelines. Every sentence earns its place by adding clear value, with no wasted words or redundancy, making it easy for an agent to parse and understand quickly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness: 3/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (1 optional parameter, no output schema, no annotations), the description is somewhat complete but has gaps. It covers purpose and usage well, but without annotations or output schema, it lacks details on behavioral traits (e.g., read safety, response format) and what 'memory entries' contain. This makes it adequate but not fully comprehensive for an agent to use confidently.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters: 3/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage, with the 'section' parameter fully documented via enum and description. The description adds minimal value beyond the schema by mentioning 'filter by section,' which is already covered. Since schema coverage is high, the baseline score of 3 is appropriate, as the description doesn't provide additional semantic context (e.g., what each section means or how filtering works).
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose: 4/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Read all memory entries or filter by section.' It specifies the verb ('Read') and resource ('memory entries'), and distinguishes it from siblings by focusing on reading rather than writing, clearing, deleting, editing, initializing, or searching. However, it doesn't explicitly differentiate from 'rememb_search' in terms of filtering vs. searching, which slightly reduces clarity.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines: 5/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit usage guidance: 'Use this at the start of every session to load context.' This tells the agent when to use the tool (at session start) and implies its role in context loading, which is helpful for distinguishing it from other tools like 'rememb_search' or 'rememb_write' that might be used later or for different purposes.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
rememb_clear
- Behavior: 3/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden for behavioral disclosure. It clearly indicates this is a destructive operation ('Delete ALL memory entries') and includes a safety warning ('Use with caution'), which are important behavioral traits. However, it doesn't specify whether this operation is reversible, what permissions are required, or what happens after deletion completes.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness: 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is perfectly concise - two short sentences that each earn their place. The first sentence states the core functionality, the second provides crucial safety guidance. No wasted words, front-loaded with the most important information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness: 3/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a destructive tool with no annotations and no output schema, the description provides the minimum viable information. It identifies the destructive nature and includes a caution, but doesn't explain what 'memory entries' are in this context, what confirmation actually does, or what the expected outcome/response looks like. Given the high-stakes nature of deleting ALL data, more context would be beneficial.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters: 4/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema has 100% description coverage with a single 'confirm' parameter that's well-documented in the schema. The description doesn't add any parameter-specific information beyond what's in the schema, but with only one parameter and complete schema coverage, the baseline is high. The description's cautionary tone reinforces the significance of the confirm parameter.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose: 4/5
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Delete') and target resource ('ALL memory entries'), making the purpose immediately understandable. However, it doesn't differentiate this from sibling tools like 'rememb_delete' - we can infer this deletes everything while 'rememb_delete' likely deletes specific entries, but this distinction isn't explicitly stated in the description.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines: 4/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear contextual guidance with 'Use with caution' which signals this is a high-impact operation. It doesn't explicitly state when to use this versus 'rememb_delete' or other alternatives, but the 'ALL' qualifier and caution warning provide strong implicit guidance about appropriate usage scenarios.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
GitHub Badge
Glama performs regular codebase and documentation scans to:
- Confirm that the MCP server is working as expected.
- Confirm that there are no obvious security issues.
- Evaluate tool definition quality.
Our badge communicates server capabilities, safety, and installation instructions.
Card Badge
Copy to your README.md:
Score Badge
Copy to your README.md:
How to claim the server?
If you are the author of the server, you simply need to authenticate using GitHub.
However, if the MCP server belongs to an organization, you must first add a glama.json file to the root of your repository:
{
  "$schema": "https://glama.ai/mcp/schemas/server.json",
  "maintainers": [
    "your-github-username"
  ]
}

Then, authenticate using GitHub.
Browse examples.
How to make a release?
A "release" on Glama is not the same as a GitHub release. To create a Glama release:
- Claim the server if you haven't already.
- Go to the Dockerfile admin page, configure the build spec, and click Deploy.
- Once the build test succeeds, click Make Release, enter a version, and publish.
This process allows Glama to run security checks on your server and enables users to deploy it.
How to add a LICENSE?
Please follow the instructions in the GitHub documentation.
Once GitHub recognizes the license, the system will automatically detect it within a few hours.
If the license does not appear on the server after some time, you can manually trigger a new scan using the MCP server admin interface.
How to sync the server with GitHub?
Servers are automatically synced at least once per day, but you can also sync manually at any time to instantly update the server profile.
To manually sync the server, click the "Sync Server" button in the MCP server admin interface.
How is the quality score calculated?
The overall quality score combines two components: Tool Definition Quality (70%) and Server Coherence (30%).
Tool Definition Quality measures how well each tool describes itself to AI agents. Every tool is scored 1–5 across six dimensions: Purpose Clarity (25%), Usage Guidelines (20%), Behavioral Transparency (20%), Parameter Semantics (15%), Conciseness & Structure (10%), and Contextual Completeness (10%). The server-level definition quality score is calculated as 60% mean TDQS + 40% minimum TDQS, so a single poorly described tool pulls the score down.
Server Coherence evaluates how well the tools work together as a set, scoring four dimensions equally: Disambiguation (can agents tell tools apart?), Naming Consistency, Tool Count Appropriateness, and Completeness (are there gaps in the tool surface?).
Tiers are derived from the overall score: A (≥3.5), B (≥3.0), C (≥2.0), D (≥1.0), F (<1.0). B and above is considered passing.
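As a minimal sketch of this weighting (the percentages and tier cutoffs come from this page; function and variable names are illustrative), the calculation looks like this in Python:

# Sketch of the quality score formula described above.
# Weights and tier cutoffs are taken from this page; everything else is illustrative.

DIMENSION_WEIGHTS = {
    "purpose": 0.25,           # Purpose Clarity
    "usage_guidelines": 0.20,  # Usage Guidelines
    "behavior": 0.20,          # Behavioral Transparency
    "parameters": 0.15,        # Parameter Semantics
    "conciseness": 0.10,       # Conciseness & Structure
    "completeness": 0.10,      # Contextual Completeness
}

def tool_tdqs(scores):
    """Weighted 1-5 score for one tool across the six dimensions."""
    return sum(scores[dim] * weight for dim, weight in DIMENSION_WEIGHTS.items())

def server_quality(per_tool_scores, coherence):
    """Overall score: 70% tool definition quality, 30% server coherence."""
    tdqs = [tool_tdqs(s) for s in per_tool_scores]
    definition_quality = 0.6 * (sum(tdqs) / len(tdqs)) + 0.4 * min(tdqs)
    return 0.7 * definition_quality + 0.3 * coherence

def tier(score):
    for grade, cutoff in (("A", 3.5), ("B", 3.0), ("C", 2.0), ("D", 1.0)):
        if score >= cutoff:
            return grade
    return "F"

Note the 40% weight on the minimum per-tool score: one poorly described tool drags the server-level score down even when the average is healthy.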
MCP directory API
We provide all the information about MCP servers via our MCP API.
curl -X GET 'https://glama.ai/api/mcp/v1/servers/LuizEduPP/Rememb'
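The same request from Python, as a minimal sketch; the response is printed verbatim since its exact shape isn't documented here:

import json
import urllib.request

# Fetch this server's directory entry from the Glama MCP API.
url = "https://glama.ai/api/mcp/v1/servers/LuizEduPP/Rememb"
with urllib.request.urlopen(url) as response:
    server = json.load(response)

# Pretty-print whatever the API returns.
print(json.dumps(server, indent=2))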
If you have feedback or need assistance with the MCP directory API, please join our Discord server.