memory_save

Save learned patterns, such as test commands and failure diagnostics, to persistent memory for future forge runs; duplicate entries are rejected on write to keep memory compact.

Instructions

Persist a learned pattern to forge's project or global memory for future recall. Patterns are stored as JSONL entries with category, pattern text, confidence, and timestamp. Duplicate patterns (same category + same text, case-insensitive) are rejected on write to prevent memory bloat from repeatedly saving the same lesson across runs.

Behaviour:

  • MUTATION. Appends a new JSON line to .forge/memory/<scope>.jsonl. Dedup check reads the existing file first; if a matching (category, pattern) already exists, the save is skipped and a "duplicate skipped" message is returned.

  • Idempotent on (category, pattern): calling twice with the same values produces one entry, not two.

  • No authentication, no network, no rate limits.

  • Appends are atomic on POSIX filesystems, so parallel workers can save concurrently without corrupting the file.
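The dedup-on-append behaviour described above can be sketched as follows. This is an illustrative reconstruction, not the forge source: the `memorySave` function, the `root` argument, and the exact entry shape are assumptions; only the file path, the case-insensitive (category, pattern) dedup key, and the return strings come from the description.

```typescript
import * as fs from "node:fs";
import * as path from "node:path";

interface MemoryEntry {
  category: string;
  pattern: string;
  confidence: number;
  timestamp: string;
}

// Hypothetical sketch: read existing entries, skip on a case-insensitive
// (category, pattern) match, otherwise append one JSON line.
function memorySave(
  root: string,
  scope: "project" | "global",
  entry: Omit<MemoryEntry, "timestamp">,
): string {
  const file = path.join(root, ".forge", "memory", `${scope}.jsonl`);
  fs.mkdirSync(path.dirname(file), { recursive: true });

  const existing: MemoryEntry[] = fs.existsSync(file)
    ? fs
        .readFileSync(file, "utf8")
        .split("\n")
        .filter(Boolean)
        .map((line) => JSON.parse(line) as MemoryEntry)
    : [];

  const isDuplicate = existing.some(
    (e) =>
      e.category.toLowerCase() === entry.category.toLowerCase() &&
      e.pattern.toLowerCase() === entry.pattern.toLowerCase(),
  );
  if (isDuplicate) return "Duplicate pattern already in memory, skipped.";

  const record: MemoryEntry = { ...entry, timestamp: new Date().toISOString() };
  // A single small append of one newline-terminated JSON object is atomic
  // on POSIX filesystems, which is what makes concurrent saves safe.
  fs.appendFileSync(file, JSON.stringify(record) + "\n");
  return `Saved to ${scope} memory [${entry.category}]: ${entry.pattern}`;
}
```

Note that reading the whole file before each append is fine here because entries are meant to be compact lessons, not large dumps.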

Use when:

  • Phase 5 (Learn) at the end of a forge run — the orchestrator records test commands that worked, conventions discovered by the planner, and failure patterns surfaced by the debugger.

  • A debugger agent has diagnosed a non-obvious root cause and wants to make sure the next run doesn't re-learn it from scratch.

  • A reviewer agent has identified a convention (naming, file layout, test framework) the project consistently follows and wants future workers to match it automatically.

Do NOT use for:

  • Ephemeral session state — use session_state instead. Memory is for knowledge that should outlive the run.

  • Module retry history — that is tracked automatically by iteration_state and validate.

  • Run-specific commentary or event logs — those belong in forge_logs, which is written automatically on every tool call.

  • Huge blobs of text (>1 KB) — memory entries are meant to be compact lessons, not dumps.

Returns: Confirmation string — either "Saved to &lt;scope&gt; memory [&lt;category&gt;]: &lt;pattern&gt;" on new insert, or "Duplicate pattern already in memory, skipped." on dedup hit.

Example: memory_save({ pattern: "pnpm vitest --run for CI; watch mode hangs", category: "test_command", scope: "project", confidence: 0.9 }) → "Saved to project memory [test_command]: pnpm vitest --run..."
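Given the fields named above (category, pattern, confidence, timestamp), the line appended to .forge/memory/project.jsonl for this example call might look like the following; the timestamp value is illustrative:

```json
{"category":"test_command","pattern":"pnpm vitest --run for CI; watch mode hangs","confidence":0.9,"timestamp":"2025-01-15T10:32:00Z"}
```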

Input Schema

  • pattern (required) — The pattern or learning to save.

  • category (required) — Category of the learning. `success_pattern` is used by orchestrator Phase 5 to record run-shape calibration data (module count, tier depth, total time) for future planning.

  • confidence (optional, default 0.7) — Confidence level, 0-1.

  • scope (optional, default "project") — Save to project memory (this project only) or global (all projects).
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden and delivers rich behavioral details: it discloses this is a MUTATION tool (file append), describes deduplication logic, idempotency, atomic writes for concurrency, and operational characteristics (no auth/network/rate limits). It also explains the return format and duplicate handling behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with clear sections (Behaviour, Use when, Do NOT use, Returns, Example) and front-loads the core purpose. While comprehensive, some sentences could be more concise (e.g., the duplicate explanation is slightly verbose). Overall, most content earns its place by adding value.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a mutation tool with no annotations and no output schema, the description provides complete context: it explains the tool's purpose, behavioral characteristics, usage guidelines, parameter context, return values, and includes a concrete example. This gives the agent everything needed to correctly select and invoke this tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the baseline is 3. The description adds meaningful context beyond the schema: it explains that patterns are stored as JSONL entries with timestamp (not in schema), clarifies duplicate detection is case-insensitive, and provides concrete examples of pattern usage (test commands, conventions, failure patterns) that help understand parameter semantics in practice.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('persist', 'store') and resources ('learned pattern', 'JSONL entries'), distinguishing it from siblings like session_state (ephemeral) and forge_logs (event logs). It explicitly defines what constitutes a pattern and how it's stored.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit 'Use when' scenarios (Phase 5 Learn, debugger root cause, reviewer conventions) and 'Do NOT use for' exclusions (ephemeral state, retry history, logs, large blobs), with clear alternatives named (session_state, iteration_state, forge_logs). This gives comprehensive guidance on when to choose this tool over siblings.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

