Import memories from JSON

memory_import

Import memories, decisions, and pitfalls from a JSON file produced by memory_export. Resolve conflicts with skip (preserve) or overwrite (replace) strategy. Embeddings are processed asynchronously.

Instructions

Import memories, decisions, and pitfalls from a JSON file produced by memory_export (server-local path, not a URL). Conflict handling via strategy: skip (default, safe to re-run) keeps existing rows on id collision; overwrite replaces them — use for authoritative restores. Side effects: inserts/updates rows in memories, decisions, pitfalls. Embeddings are queued asynchronously.
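
For example, a client would invoke the tool with arguments like the following (illustrative values; the surrounding JSON-RPC envelope depends on the MCP client):

```json
{
  "name": "memory_import",
  "arguments": {
    "path": "/tmp/memento-backup.json",
    "strategy": "skip"
  }
}
```

Omitting `strategy` is equivalent to passing `skip`.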

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| `path` | Yes | Absolute path on the server's filesystem to a JSON file produced by `memory_export` (e.g. `/tmp/memento-backup.json`). | |
| `strategy` | No | `skip` (default) preserves existing rows on id collision; `overwrite` replaces them. | `skip` |

Output Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| `message` | Yes | Summary line with counts of rows inserted / updated / skipped per table (memories, decisions, pitfalls). | |

Implementation Reference

  • The main handler function `handleMemoryImport` that reads a JSON file from the filesystem, validates schema version, and imports memories into the database with a configurable skip/overwrite strategy.
    export async function handleMemoryImport(
      db: Database.Database,
      params: { path: string; strategy?: "skip" | "overwrite" },
    ): Promise<string> {
      const strategy = params.strategy ?? "skip";
    
      let raw: string;
      try {
        raw = readFileSync(params.path, "utf-8");
      } catch (e) {
        return `Failed to read import file: ${e instanceof Error ? e.message : String(e)}`;
      }
    
      let payload: ExportPayload;
      try {
        payload = JSON.parse(raw);
      } catch (e) {
        return `Invalid JSON: ${e instanceof Error ? e.message : String(e)}`;
      }
    
      if (!SUPPORTED_SCHEMA_VERSIONS.has(payload.schema_version)) {
        return `Unsupported schema_version ${payload.schema_version}. This build accepts: ${[...SUPPORTED_SCHEMA_VERSIONS].join(", ")}.`;
      }
    
      let imported = 0;
      let skipped = 0;
      let overwritten = 0;
    
      const tx = db.transaction(() => {
        const selectMem = db.prepare("SELECT id FROM memories WHERE id = ?");
        const insertMem = db.prepare(`
          INSERT INTO memories (id, project_id, memory_type, scope, title, body, tags,
                                importance_score, confidence_score, access_count, last_accessed_at,
                                is_pinned, supersedes_memory_id, source, adaptive_score,
                                created_at, updated_at, deleted_at)
          VALUES (@id, @project_id, @memory_type, @scope, @title, @body, @tags,
                  @importance_score, @confidence_score, @access_count, @last_accessed_at,
                  @is_pinned, @supersedes_memory_id, @source, @adaptive_score,
                  @created_at, @updated_at, @deleted_at)
        `);
        const updateMem = db.prepare(`
          UPDATE memories SET
            project_id = @project_id, memory_type = @memory_type, scope = @scope,
            title = @title, body = @body, tags = @tags,
            importance_score = @importance_score, confidence_score = @confidence_score,
            access_count = @access_count, last_accessed_at = @last_accessed_at,
            is_pinned = @is_pinned, supersedes_memory_id = @supersedes_memory_id,
            source = @source, adaptive_score = @adaptive_score,
            updated_at = @updated_at, deleted_at = @deleted_at
          WHERE id = @id
        `);
    
        for (const mem of payload.memories ?? []) {
          const row = {
            id: mem.id,
            project_id: mem.project_id ?? null,
            memory_type: mem.memory_type ?? "fact",
            scope: mem.scope ?? "project",
            title: mem.title,
            body: mem.body ?? "",
            tags: mem.tags ?? null,
            importance_score: mem.importance_score ?? 0.5,
            confidence_score: mem.confidence_score ?? 1.0,
            access_count: mem.access_count ?? 0,
            last_accessed_at: mem.last_accessed_at ?? null,
            is_pinned: mem.is_pinned ?? 0,
            supersedes_memory_id: mem.supersedes_memory_id ?? null,
            source: mem.source ?? "user",
            adaptive_score: mem.adaptive_score ?? 0.5,
            created_at: mem.created_at ?? new Date().toISOString(),
            updated_at: mem.updated_at ?? new Date().toISOString(),
            deleted_at: mem.deleted_at ?? null,
          };
    
          const existing = selectMem.get(row.id);
          if (existing) {
            if (strategy === "overwrite") {
              updateMem.run(row);
              overwritten++;
            } else {
              skipped++;
            }
          } else {
            insertMem.run(row);
            imported++;
          }
        }
      });
    
      tx();
    
      return (
        `Import complete: ${imported} imported, ${skipped} skipped, ${overwritten} overwritten. ` +
        `Strategy=${strategy}.`
      );
    }
  • The `ExportPayload` interface defines the shape of the JSON file that `handleMemoryImport` expects (schema_version, exported_at, projects, memories, decisions, pitfalls).
    interface ExportPayload {
      schema_version: number;
      exported_at: string;
      projects: any[];
      memories: any[];
      decisions: any[];
      pitfalls: any[];
    }
  • src/index.ts:647-672 (registration)
    Registers the `memory_import` tool with the MCP server, including title, description, inputSchema (path and strategy), annotations, outputSchema, and the handler callback that calls handleMemoryImport.
    server.registerTool(
      "memory_import",
      {
        title: "Import memories from JSON",
        description: [
          "Import memories, decisions, and pitfalls from a JSON file produced by `memory_export` (server-local path, not a URL).",
          "Conflict handling via `strategy`: `skip` (default, safe to re-run) keeps existing rows on id collision; `overwrite` replaces them — use for authoritative restores.",
          "Side effects: inserts/updates rows in `memories`, `decisions`, `pitfalls`. Embeddings are queued asynchronously.",
        ].join(" "),
        inputSchema: {
          path: z.string().min(1).describe("Absolute path on the server's filesystem to a JSON file produced by `memory_export` (e.g. `/tmp/memento-backup.json`)."),
          strategy: z.enum(["skip", "overwrite"]).default("skip").describe("`skip` (default) preserves existing rows on id collision; `overwrite` replaces them."),
        },
        annotations: {
          title: "Import memories from JSON",
          readOnlyHint: false,
          destructiveHint: false,
          idempotentHint: false,
          openWorldHint: false,
        },
        outputSchema: {
          message: z.string().describe("Summary line with counts of rows inserted / updated / skipped per table (memories, decisions, pitfalls)."),
        },
      },
      async (params) => textResult(await handleMemoryImport(db, params))
    );
  • Imports used by `handleMemoryImport`: Database type from better-sqlite3 and readFileSync from fs for reading the JSON file.
    import type Database from "better-sqlite3";
    import { readFileSync } from "node:fs";
    
    const SUPPORTED_SCHEMA_VERSIONS = new Set([2]);
    const CURRENT_SCHEMA_VERSION = 2;
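
Taken together, the pieces above can be exercised without a database. The sketch below is hypothetical and stdlib-only: a `Map` stands in for the `memories` table, and `MemRow`/`Payload` are pared-down stand-ins for the real row and `ExportPayload` shapes, but the version gate and the skip/overwrite merge loop mirror `handleMemoryImport`:

```typescript
// Minimal stand-ins for the real types; a Map plays the role of the
// `memories` table so the logic runs without better-sqlite3.
type MemRow = { id: string; title: string };
type Payload = { schema_version: number; memories: MemRow[] };

const SUPPORTED_SCHEMA_VERSIONS = new Set([2]);

function importPayload(
  table: Map<string, MemRow>,
  payload: Payload,
  strategy: "skip" | "overwrite" = "skip",
): string {
  // Version gate, as in the real handler.
  if (!SUPPORTED_SCHEMA_VERSIONS.has(payload.schema_version)) {
    return `Unsupported schema_version ${payload.schema_version}.`;
  }

  let imported = 0, skipped = 0, overwritten = 0;
  for (const mem of payload.memories ?? []) {
    if (table.has(mem.id)) {
      if (strategy === "overwrite") {
        table.set(mem.id, mem); // replace the existing row
        overwritten++;
      } else {
        skipped++; // "skip" preserves the existing row
      }
    } else {
      table.set(mem.id, mem); // new id: always inserted
      imported++;
    }
  }
  return `Import complete: ${imported} imported, ${skipped} skipped, ${overwritten} overwritten. Strategy=${strategy}.`;
}
```

Re-running the same payload under `skip` only grows the skipped count and leaves existing rows untouched, which is why the description calls that strategy safe to re-run; `overwrite` replaces every colliding row on each run.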

Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description goes well beyond annotations (which are minimal) by detailing side effects: 'inserts/updates rows in `memories`, `decisions`, `pitfalls`' and 'Embeddings are queued asynchronously.' It also clarifies re-run safety for 'skip' strategy. No contradiction with annotations; descriptive transparency is high.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is three sentences, front-loaded with the primary action, and every sentence adds critical information. No wasted words, achieving high density of useful content.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (multi-table import, conflict handling, async embedding), the description covers all essential aspects: source, path constraint, strategy options, side effects, and asynchronous behavior. An output schema exists, so return values need not be described.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, and the description still adds value beyond the schema: it clarifies 'server-local path, not a URL' (which the schema only implies) and explains the strategy options with use-case guidance ('safe to re-run' for skip, 'use for authoritative restores' for overwrite).

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description explicitly states: 'Import memories, decisions, and pitfalls from a JSON file produced by `memory_export`.' This provides a specific verb ('Import'), resource ('memories, decisions, and pitfalls'), and input format, clearly distinguishing from sibling tools like 'memory_export' (export) and 'memory_store' (individual store).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description gives clear context: the file must be from `memory_export`, and conflict handling strategies are explained ('skip' for safe re-runs, 'overwrite' for authoritative restores). However, it does not explicitly contrast with sibling tools like 'memory_store' or provide when-not-to-use scenarios, making it slightly less than perfect.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
