
forget

Delete memories by ID or trigger auto-forgetting based on importance, heat, and age. Preserves caveat, goal, and pinned memories to avoid accidental loss.

Instructions

Explicitly delete a memory by id, OR run auto-forgetting across all memories based on forgettingRisk (importance + heat + age). Caveat-layer, goal-layer, and pinned (importance>=0.9) memories are always preserved. Prefer update_memory for corrections — forget is destructive.

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| memory_id | No | Id of the memory to delete; omit to run the auto-sweep. | |
| dry_run | No | Report what would be deleted without actually deleting. | false |
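The schema implies two invocation modes. As a minimal sketch (the id below is invented for illustration):

```typescript
// Illustrative argument payloads for the two modes of 'forget'.
// The memory id 42 is made up for this example.
const explicitDelete = { memory_id: 42 }; // delete a single memory by id
const previewSweep = { dry_run: true };   // auto-sweep, report only
const runSweep = {};                      // auto-sweep, actually delete
```

Since the sweep is destructive, a dry_run pass first is the safer habit: it reports what would be dropped without touching the database.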

Implementation Reference

  • handleForget function: the main handler for the 'forget' tool. Handles explicit memory deletion by ID (with protection checks for protected/pinned memories) and auto-sweep across all non-protected memories with importance < 0.9, using decideForgetting() to determine which memories to drop. Runs SQL DELETE in a transaction for auto-sweep.
    function handleForget(args: any): string {
      if (args.memory_id) {
        // Explicit delete by id — respect protected AND pinned (importance >= 0.9)
        const target = db.prepare('SELECT id, layer, importance, protected FROM memories WHERE id = ?').get(args.memory_id) as any;
        if (!target) {
          return JSON.stringify({ ok: false, error: `memory_id ${args.memory_id} not found` });
        }
        if (target.protected === 1 || target.importance >= 0.9) {
          const isLayerProtected = target.protected === 1;
          return JSON.stringify({
            ok: false,
            preserved: true,
            reason: isLayerProtected ? `${target.layer}-layer is auto-protected` : 'pinned (importance>=0.9)',
            hint: isLayerProtected
              ? `${target.layer} memories are permanently protected (the whole point — pain lessons must not be lost). If you truly need to delete, copy its content to another layer via remember() first, then drop the DB row manually via a SQLite client.`
              : 'Use update_memory to lower importance below 0.9 first, then forget.',
          });
        }
        const res = db.prepare('DELETE FROM memories WHERE id = ?').run(args.memory_id);
        return JSON.stringify({ ok: true, deleted: res.changes, memory_id: args.memory_id });
      }
    
      // Auto-sweep — also respect pin (importance >= 0.9) as protection
      const rows = db
        .prepare(`SELECT id, layer, importance, access_count, last_accessed_at, protected
                  FROM memories
                  WHERE protected = 0 AND importance < 0.9`)
        .all() as any[];
    
      const now = Math.floor(Date.now() / 1000);
      const actions: { id: number; action: string }[] = [];
    
      for (const r of rows) {
        const daysSince = (now - r.last_accessed_at) / 86400;
        const heat = computeHeat({
          accessesLast30d: daysSince < 30 ? r.access_count : 0,
          accessesLast90d: daysSince < 90 ? r.access_count : 0,
          daysSinceLastAccess: daysSince,
          totalAccesses: r.access_count,
          baseImportance: r.importance,
        });
        const action = decideForgetting({
          daysSinceLastAccess: daysSince,
          importance: r.importance,
          heatScore: heat.score,
          protected: r.protected === 1,
          layer: r.layer,
        });
        if (action !== 'keep') actions.push({ id: r.id, action });
      }
    
      const toDropIds = actions.filter((a) => a.action === 'drop').map((a) => a.id);
    
      if (!args.dry_run) {
        const del = db.prepare('DELETE FROM memories WHERE id = ?');
        const tx = db.transaction((ids: number[]) => { for (const id of ids) del.run(id); });
        tx(toDropIds);
      }
    
      return JSON.stringify({
        ok: true,
        dry_run: !!args.dry_run,
        scanned: rows.length,
        to_drop: toDropIds.length,
        to_compress: actions.filter((a) => a.action === 'compress').length,
        sample_ids_to_drop: toDropIds.slice(0, 10),
      });
    }
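For orientation, the auto-sweep branch of the handler returns a JSON summary shaped like this (the field names mirror the return statement above; all values are invented for illustration):

```typescript
// Shape of the auto-sweep result, inferred from handleForget's return value.
// Values here are invented examples, not captured output.
interface SweepResult {
  ok: boolean;
  dry_run: boolean;
  scanned: number;              // rows considered (non-protected, importance < 0.9)
  to_drop: number;              // risk >= DROP_THRESHOLD
  to_compress: number;          // COMPRESS_THRESHOLD <= risk < DROP_THRESHOLD
  sample_ids_to_drop: number[]; // first 10 ids only
}

const example: SweepResult = {
  ok: true,
  dry_run: true,
  scanned: 120,
  to_drop: 7,
  to_compress: 3,
  sample_ids_to_drop: [14, 27, 31],
};
```

Note that the handler above only executes 'drop' actions; 'compress' candidates are counted in the summary but not acted on in this function.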
  • Tool registration entry for 'forget' in the TOOLS array. Describes the tool for explicit delete by memory_id or auto-forgetting based on forgettingRisk. Input schema accepts optional memory_id (number) and dry_run (boolean).
      name: 'forget',
      description:
        'Explicitly delete a memory by id, OR run auto-forgetting across all memories based on forgettingRisk (importance + heat + age). Caveat-layer, goal-layer, and pinned (importance>=0.9) memories are always preserved. Prefer update_memory for corrections — forget is destructive.',
      inputSchema: {
        type: 'object',
        properties: {
          memory_id: { type: 'number' },
          dry_run: { type: 'boolean', default: false, description: 'Report what would be deleted without actually deleting.' },
        },
      },
    },
  • Forgetting curve helper library: defines ForgettingInput interface, forgettingRisk() function (Ebbinghaus-based calculation with heat/importance/time factors), COMPRESS_THRESHOLD (50) and DROP_THRESHOLD (200) constants, and decideForgetting() which returns 'keep', 'compress', or 'drop'.
    // Ported from linksee-app/setup-learning-box.cjs:62-99 (Ebbinghaus forgetting curve).
    // Implements Michie's memory principle 6: active forgetting.
    // Protected memories (caveat layer) and goal layer bypass decay.
    
    export interface ForgettingInput {
      daysSinceLastAccess: number;
      importance: number; // 0.0-1.0
      heatScore: number;  // 0-100
      protected: boolean;
      layer: string;
    }
    
    // Returns true if this memory should be forgotten (compressed to summary or deleted).
    // Higher forgettingRisk → more likely to forget.
    export function forgettingRisk(input: ForgettingInput): number {
      if (input.protected) return 0;
      if (input.layer === 'goal') return 0; // Goals are WHY-anchors, never auto-forget while active
    
      // Original formula from setup-learning-box.cjs:
      //   daysSinceContact * (heatScore/100) * (1 + daysSinceContact/30)
      // We INVERT: high heat = low risk (hot memories should be kept).
      const heatFactor = 1 - (input.heatScore / 100); // 0.0 = keep, 1.0 = drop
      const importanceFactor = 1 - input.importance;
      const timeFactor = input.daysSinceLastAccess * (1 + input.daysSinceLastAccess / 30);
    
      return heatFactor * importanceFactor * timeFactor;
    }
    
    // Risk threshold above which memory is compressed (→ learning layer summary) and the original deleted.
    export const COMPRESS_THRESHOLD = 50;
    
    // Risk threshold above which memory is entirely dropped.
    export const DROP_THRESHOLD = 200;
    
    export type ForgettingAction = 'keep' | 'compress' | 'drop';
    
    export function decideForgetting(input: ForgettingInput): ForgettingAction {
      const risk = forgettingRisk(input);
      if (risk >= DROP_THRESHOLD) return 'drop';
      if (risk >= COMPRESS_THRESHOLD) return 'compress';
      return 'keep';
    }
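To make the thresholds concrete, here is a worked example. It restates forgettingRisk inline (mirroring the formula above) rather than importing the module, and the input values are invented:

```typescript
// Self-contained restatement of forgettingRisk for illustration.
interface ForgettingInput {
  daysSinceLastAccess: number;
  importance: number; // 0.0-1.0
  heatScore: number;  // 0-100
  protected: boolean;
  layer: string;
}

function forgettingRisk(input: ForgettingInput): number {
  if (input.protected) return 0;
  if (input.layer === 'goal') return 0;
  const heatFactor = 1 - input.heatScore / 100;
  const importanceFactor = 1 - input.importance;
  const timeFactor = input.daysSinceLastAccess * (1 + input.daysSinceLastAccess / 30);
  return heatFactor * importanceFactor * timeFactor;
}

// A lukewarm, mid-importance memory untouched for 60 days:
//   heatFactor = 1 - 20/100 = 0.8
//   importanceFactor = 1 - 0.3 = 0.7
//   timeFactor = 60 * (1 + 60/30) = 180
//   risk = 0.8 * 0.7 * 180 = 100.8 → between 50 and 200 → 'compress'
const risk = forgettingRisk({
  daysSinceLastAccess: 60,
  importance: 0.3,
  heatScore: 20,
  protected: false,
  layer: 'fact',
});
```

The quadratic timeFactor is what dominates eventually: even a fairly hot memory crosses DROP_THRESHOLD if it goes untouched long enough, unless it is protected, a goal, or pinned.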
  • MCP CallToolRequestSchema switch case wiring: routes 'forget' tool calls to handleForget() function.
    case 'forget': text = handleForget(args); break;
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Discloses that the tool is destructive, specifies the preserved memory types (caveat-layer, goal-layer, pinned), and surfaces the dry_run capability via the schema. However, it does not detail the exact forgettingRisk formula or state that deletion is irreversible.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is three sentences long, with the main action stated first. It is concise and free of unnecessary detail, efficiently communicating key information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (two modes, preservation logic, destructive nature), the description covers the essential aspects: when to use each mode, what is preserved, and the destructive intent. No output schema is provided, but the description need not discuss return values, since the tool's side effect is the point.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description adds semantic context for memory_id (explicit deletion) but does not mention dry_run at all. It does, however, explain the two operational modes and the preservation rules, going beyond the schema's own parameter descriptions. Only half the parameters are covered in the description itself, but the broader context compensates.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states two distinct modes: explicit deletion by ID and auto-forgetting based on forgettingRisk. It also specifies preservation of caveat-layer, goal-layer, and pinned memories. This distinguishes it from siblings like update_memory and remember.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly advises to prefer update_memory for corrections, indicating when not to use forget and providing an alternative. Implicitly guides when to use: for destructive deletion or auto-forgetting.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
