# add_to_memory_bank
Store permanent user facts for persistent memory in AI coding tools, enabling long-term context retention.
## Instructions
Store permanent facts about the user
## Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| content | Yes | Memory content |
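With only one required parameter, a call to this tool is a small payload. A minimal sketch of the request, assuming the standard MCP JSON-RPC `tools/call` shape (the fact text is illustrative, not from the tool's docs):

```python
import json

# Hypothetical MCP tools/call request for add_to_memory_bank.
# "content" is the only argument the schema requires.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "add_to_memory_bank",
        "arguments": {
            # Example fact; the schema gives no format constraints.
            "content": "User prefers TypeScript with strict mode enabled.",
        },
    },
}

payload = json.dumps(request)
```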
**Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?**
With no annotations provided, the description carries the full burden of behavioral disclosure. It states that the tool stores "permanent facts", implying persistence and mutation, but omits critical details: required permissions, whether storage is irreversible, rate limits, and the response format. For a mutation tool, this is a significant gap.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
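One way to close this gap is to declare behavior through MCP tool annotations (`readOnlyHint`, `destructiveHint`, `idempotentHint`) alongside a fuller description. A sketch under that assumption; the description text and sibling tool name are illustrative, not the tool's actual documentation:

```python
# Hypothetical tool definition that discloses mutation behavior.
tool = {
    "name": "add_to_memory_bank",
    "description": (
        "Store a permanent fact about the user. Writes persist across "
        "sessions and cannot be undone through this tool; use "
        "delete_memory to remove a fact. Returns the stored memory's ID."
    ),
    "annotations": {
        "readOnlyHint": False,     # the tool mutates state
        "destructiveHint": False,  # it appends, never overwrites
        "idempotentHint": False,   # repeated calls create duplicates
    },
}
```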
**Is the description appropriately sized, front-loaded, and free of redundancy?**
The description is a single, efficient sentence with zero waste, front-loaded with the tool's purpose. For a simple tool with one parameter it is appropriately sized, earning a perfect score for conciseness.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
**Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?**
For a mutation tool with no annotations and no output schema, the description is incomplete. It lacks behavioral details (e.g., the implications of permanence, error handling), usage context relative to sibling tools, and output expectations, making it inadequate for safe, effective agent use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
**Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?**
Schema description coverage is 100%: the single parameter `content` is documented as "Memory content". The description adds no meaning beyond this, such as format examples or constraints. Given the high schema coverage, the baseline score of 3 is appropriate, as the schema handles parameter documentation adequately.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
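The schema itself could carry the missing intent. A hypothetical enriched version of the same single-parameter schema, using standard JSON Schema keywords; the length limits and example text are assumptions, not documented constraints of this tool:

```python
# Sketch of an input schema that documents constraints and intent,
# not just "Memory content".
input_schema = {
    "type": "object",
    "properties": {
        "content": {
            "type": "string",
            "description": (
                "A single self-contained fact about the user, "
                "stated so it makes sense without conversation context."
            ),
            "minLength": 1,       # assumed: reject empty memories
            "maxLength": 1000,    # assumed: cap memory size
            "examples": ["The user's preferred editor is Neovim."],
        },
    },
    "required": ["content"],
}
```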
**Does the description clearly state what the tool does and how it differs from similar tools?**
The description "Store permanent facts about the user" states the tool's function with a specific verb ("store") and resource ("permanent facts about the user"), distinguishing it from siblings like `delete_memory` or `search_memory`. However, it does not explicitly differentiate its purpose from `update_memory` or `record_response`, which keeps it from a perfect score.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
**Does the description explain when to use this tool, when not to, or what alternatives exist?**
The description provides no guidance on when to use this tool versus alternatives like `update_memory` or `record_response`, and mentions no prerequisites or exclusions. Without explicit usage context, an agent must infer the right tool from names alone.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
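A description carrying that guidance might read as follows. The sibling tool names come from the review above; the wording is an illustrative sketch, not the tool's actual text:

```python
# Hypothetical description with explicit "use X instead of Y" guidance.
description = (
    "Store a NEW permanent fact about the user, persisted across sessions. "
    "Use update_memory to change an existing fact, and search_memory first "
    "to check whether the fact is already stored. Do not use this for "
    "transient, per-conversation details."
)
```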
We provide all the information about MCP servers via our MCP API.

```shell
curl -X GET 'https://glama.ai/api/mcp/v1/servers/roampal-ai/roampal-core'
```

If you have feedback or need assistance with the MCP directory API, please join our Discord server.