
remember

Store user preferences, decisions, and project context in long-term memory for recall across future sessions. Preserve technical specifications, coding conventions, and personal choices to maintain consistency.

Instructions

Persist a piece of information to long-term memory so it can be recalled in future sessions. Use this whenever the user states a preference, makes a decision, or shares context that should survive beyond the current conversation.

When to call: after learning the user's tech stack, coding conventions, project constraints, architectural decisions, or personal preferences.

Returns a confirmation message with the stored content preview.

Examples of good memories:

  • 'User prefers TypeScript strict mode with no implicit any'

  • 'Database: PostgreSQL 16 with pgvector on Railway, connection via asyncpg'

  • 'Never use the any type in this codebase — team policy'

  • 'Deployed on 2024-03-15: migrated auth from JWT to Supabase sessions'
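Assuming the tool is exposed over MCP's standard tools/call method, an invocation might look like the following sketch. The request shape follows the JSON-RPC 2.0 framing used by MCP; the id value and the surrounding transport are illustrative assumptions, not details from this page.

```python
# Hypothetical JSON-RPC 2.0 "tools/call" request for the remember tool.
# The method name and params layout follow the MCP specification; the id
# and any server wiring are illustrative assumptions.
import json

request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "remember",
        "arguments": {
            "content": "User prefers TypeScript strict mode with no implicit any",
            "memory_type": "semantic",  # optional; defaults to "semantic"
        },
    },
}

print(json.dumps(request, indent=2))
```

Note that memory_type can be omitted entirely; per the schema below, the server falls back to 'semantic'.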

Input Schema

content (required)
  The information to store. Write in a self-contained, specific way so it remains useful without conversation context. Good: 'API rate limit is 100 req/min per key'. Bad: 'the limit we discussed'.

memory_type (optional, default: semantic)
  Category of the memory:

  • episodic: a specific past event or decision (e.g. 'Deployed v2 on 2024-03-10')

  • semantic: a general fact, preference, or project truth (e.g. 'User prefers tabs over spaces')

  • procedural: a how-to, pattern, or repeatable process (e.g. 'To deploy: run npm run build then railway up')

  Defaults to 'semantic' when unsure.
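A caller could normalize arguments against this schema before sending them. The sketch below encodes only the constraints stated above (required content, the three memory_type values, and the 'semantic' default); any stricter server-side validation is unknown and the helper name is hypothetical.

```python
# Minimal client-side validation sketch for the remember tool's input,
# based solely on the schema table above.
VALID_MEMORY_TYPES = {"episodic", "semantic", "procedural"}

def validate_remember_args(args: dict) -> dict:
    """Return normalized arguments, or raise ValueError on bad input."""
    if "content" not in args or not isinstance(args["content"], str):
        raise ValueError("'content' is required and must be a string")
    memory_type = args.get("memory_type", "semantic")  # default per schema
    if memory_type not in VALID_MEMORY_TYPES:
        raise ValueError(
            f"memory_type must be one of {sorted(VALID_MEMORY_TYPES)}"
        )
    return {"content": args["content"], "memory_type": memory_type}
```

For example, passing only content yields memory_type 'semantic' in the normalized result.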
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden for behavioral disclosure. It effectively describes key behaviors: the tool persists information across sessions, returns a confirmation message with preview, and provides concrete examples of appropriate content. However, it doesn't mention potential limitations like storage capacity, retention policies, or error conditions that might be relevant for a memory tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured and front-loaded with the core purpose in the first sentence. Each subsequent section (when to call, return value, examples) adds specific value without redundancy. The examples are concrete and illustrative, earning their place by clarifying appropriate usage. No sentence is wasted or repetitive.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with 2 parameters, 100% schema coverage, and no output schema, the description provides strong contextual completeness. It covers purpose, usage guidelines, behavioral expectations, and practical examples. The main gap is the lack of output schema, but the description compensates by explicitly stating what the tool returns ('confirmation message with stored content preview'). A perfect score would require more detail about potential edge cases or limitations.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the baseline is 3. The description adds meaningful context beyond the schema by providing concrete examples of good memory content ('User prefers TypeScript strict mode...') and explaining the practical application of memory types through usage examples. This helps the agent understand how to format content appropriately, though it doesn't provide additional technical details about parameters.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('persist', 'recalled') and resource ('piece of information to long-term memory'). It distinguishes from sibling tools by focusing on storage rather than retrieval (recall), removal (forget), or injection (inject_context). The first sentence provides a complete, unambiguous statement of function.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool: 'whenever the user states a preference, makes a decision, or shares context that should survive beyond the current conversation.' It offers specific examples of appropriate contexts (tech stack, coding conventions, project constraints, etc.) and distinguishes from alternatives by focusing on persistence for future sessions rather than immediate recall or context injection.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
