
memory_import_advanced

Import memory data from files with validation and conflict resolution to ensure data integrity during integration.

Instructions

Advanced memory import with validation and conflict resolution

Input Schema

Name        Required   Description        Default
inputPath   Yes        Input file path    —
options     No         —                  —

Implementation Reference

  • Tool input schema definition for memory_import_advanced, including all advanced options for format, mode, validation, conflict resolution, backup, transformations, etc.
    {
      name: "memory_import_advanced",
      description:
        "Advanced memory import with validation and conflict resolution",
      inputSchema: {
        type: "object",
        properties: {
          inputPath: { type: "string", description: "Input file path" },
          options: {
            type: "object",
            properties: {
              format: {
                type: "string",
                enum: [
                  "json",
                  "jsonl",
                  "csv",
                  "xml",
                  "yaml",
                  "sqlite",
                  "archive",
                ],
                default: "json",
              },
              mode: {
                type: "string",
                enum: ["merge", "replace", "append", "update"],
                default: "merge",
              },
              validation: {
                type: "string",
                enum: ["strict", "loose", "none"],
                default: "strict",
              },
              conflictResolution: {
                type: "string",
                enum: ["skip", "overwrite", "merge", "rename"],
                default: "skip",
              },
              backup: { type: "boolean", default: true },
              dryRun: { type: "boolean", default: false },
              mapping: {
                type: "object",
                description: "Field mapping for different schemas",
              },
              transformation: {
                type: "object",
                properties: {
                  enabled: { type: "boolean", default: false },
                  rules: {
                    type: "array",
                    items: {
                      type: "object",
                      properties: {
                        field: { type: "string" },
                        operation: {
                          type: "string",
                          enum: ["convert", "transform", "validate"],
                        },
                        params: { type: "object" },
                      },
                    },
                  },
                },
              },
            },
          },
        },
        required: ["inputPath"],
      },
    },
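A call to this tool might pass arguments shaped like the following. This is an illustrative sketch: the argument object and the `hasRequiredFields` check are hypothetical client-side code, and only the field names and defaults come from the schema above.

```typescript
// Hypothetical arguments for memory_import_advanced. Only inputPath is
// required by the schema; omitted options fall back to their defaults
// (format: "json", mode: "merge", validation: "strict", etc.).
const args = {
  inputPath: "./backups/memories.jsonl",
  options: {
    format: "jsonl",
    conflictResolution: "rename",
    dryRun: true, // preview the import without writing anything
  },
};

// Minimal client-side check mirroring the schema's required: ["inputPath"].
function hasRequiredFields(a: { inputPath?: string }): boolean {
  return typeof a.inputPath === "string" && a.inputPath.length > 0;
}
```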
  • Core handler function implementing the advanced memory import logic: format detection, data loading/parsing, validation, backup creation, conflict resolution (skip/overwrite/merge/rename), dry-run support, transformations, and detailed result reporting with ImportResult type.
    async importMemories(
      inputPath: string,
      options: Partial<ImportOptions> = {},
    ): Promise<ImportResult> {
      const defaultOptions: ImportOptions = {
        format: "json",
        mode: "merge",
        validation: "strict",
        conflictResolution: "skip",
        backup: true,
        dryRun: false,
      };
    
      const activeOptions = { ...defaultOptions, ...options };
      const startTime = Date.now();
    
      this.emit("import_started", { inputPath, options: activeOptions });
    
      try {
        // Create backup if requested
        if (activeOptions.backup && !activeOptions.dryRun) {
          await this.createBackup();
        }
    
        // Detect and verify format
        const detectedFormat = await this.detectFormat(inputPath);
        if (detectedFormat !== activeOptions.format) {
          this.emit("format_mismatch", {
            detected: detectedFormat,
            specified: activeOptions.format,
          });
        }
    
        // Load and parse import data
        const importData = await this.loadImportData(inputPath, activeOptions);
    
        // Validate import data
        const validationResult = await this.validateImportData(
          importData,
          activeOptions,
        );
    
        if (
          validationResult.invalid > 0 &&
          activeOptions.validation === "strict"
        ) {
          throw new Error(
            `Validation failed: ${validationResult.invalid} invalid entries`,
          );
        }
    
        // Process import data
        const result = await this.processImportData(importData, activeOptions);
    
        this.emit("import_completed", {
          result,
          duration: Date.now() - startTime,
        });
    
        return result;
      } catch (error) {
        const errorMessage =
          error instanceof Error ? error.message : String(error);
        this.emit("import_error", { error: errorMessage });
    
        return {
          success: false,
          processed: 0,
          imported: 0,
          skipped: 0,
          errors: 1,
          errorDetails: [errorMessage],
          conflicts: 0,
          validation: {
            valid: 0,
            invalid: 0,
            warnings: [],
          },
          summary: {
            newEntries: 0,
            updatedEntries: 0,
            duplicateEntries: 0,
            failedEntries: 0,
          },
          metadata: {
            importedAt: new Date(),
            source: inputPath,
            format: activeOptions.format,
            mode: activeOptions.mode,
          },
        };
      }
    }
  • TypeScript interface defining the input options structure for advanced memory import, matching the tool's inputSchema exactly.
    export interface ImportOptions {
      format: "json" | "jsonl" | "csv" | "xml" | "yaml" | "sqlite" | "archive";
      mode: "merge" | "replace" | "append" | "update";
      validation: "strict" | "loose" | "none";
      conflictResolution: "skip" | "overwrite" | "merge" | "rename";
      backup: boolean;
      dryRun: boolean;
      mapping?: Record<string, string>; // Field mapping for different schemas
      transformation?: {
        enabled: boolean;
        rules: Array<{
          field: string;
          operation: "convert" | "transform" | "validate";
          params: any;
        }>;
      };
    }
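The handler resolves options by spreading caller overrides over the defaults, so any field omitted from the `Partial<ImportOptions>` keeps its default value. A standalone sketch of that merge (the trimmed `Opts` interface below omits the optional `mapping`/`transformation` fields for brevity):

```typescript
interface Opts {
  format: "json" | "jsonl" | "csv";
  mode: "merge" | "replace" | "append" | "update";
  validation: "strict" | "loose" | "none";
  backup: boolean;
  dryRun: boolean;
}

const defaults: Opts = {
  format: "json",
  mode: "merge",
  validation: "strict",
  backup: true,
  dryRun: false,
};

// Later spread keys win, so caller-supplied fields override the defaults
// while unspecified fields keep their default values.
function resolveOptions(overrides: Partial<Opts>): Opts {
  return { ...defaults, ...overrides };
}

const active = resolveOptions({ format: "jsonl", dryRun: true });
```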
  • Supporting helper methods for the import/export pipeline: supported-format discovery, export/import compatibility validation, entry filtering, anonymization, per-format export (JSON/JSONL/CSV/XML/YAML/SQLite/archive), format detection, data loading/parsing, and entry validation. The core processImportData method, which applies transformations, conflict resolution, and storage operations, is truncated at the end of this excerpt.
    /**
     * Get supported formats
     */
    getSupportedFormats(): {
      export: string[];
      import: string[];
      compression: string[];
      encryption: string[];
    } {
      return {
        export: ["json", "jsonl", "csv", "xml", "yaml", "sqlite", "archive"],
        import: ["json", "jsonl", "csv", "xml", "yaml", "sqlite", "archive"],
        compression: ["gzip", "zip", "none"],
        encryption: ["aes-256-gcm", "aes-192-gcm", "aes-128-gcm"],
      };
    }
    
    /**
     * Validate export/import compatibility
     */
    async validateCompatibility(
      sourcePath: string,
      _targetSystem: string = "DocuMCP",
    ): Promise<{
      compatible: boolean;
      issues: string[];
      recommendations: string[];
      migrationRequired: boolean;
    }> {
      try {
        const format = await this.detectFormat(sourcePath);
        const sampleData = await this.loadSampleData(sourcePath, format);
    
        const issues: string[] = [];
        const recommendations: string[] = [];
        let compatible = true;
        let migrationRequired = false;
    
        // Check format compatibility
        if (!this.getSupportedFormats().import.includes(format)) {
          issues.push(`Unsupported format: ${format}`);
          compatible = false;
        }
    
        // Check schema compatibility
        const schemaIssues = this.validateSchema(sampleData);
        if (schemaIssues.length > 0) {
          issues.push(...schemaIssues);
          migrationRequired = true;
        }
    
        // Check data integrity
        const integrityIssues = this.validateDataIntegrity(sampleData);
        if (integrityIssues.length > 0) {
          issues.push(...integrityIssues);
          recommendations.push("Consider data cleaning before import");
        }
    
        // Generate recommendations
        if (migrationRequired) {
          recommendations.push("Create migration plan for schema transformation");
        }
    
        if (format === "csv") {
          recommendations.push(
            "Consider using JSON or JSONL for better data preservation",
          );
        }
    
        return {
          compatible,
          issues,
          recommendations,
          migrationRequired,
        };
      } catch (error) {
        return {
          compatible: false,
          issues: [error instanceof Error ? error.message : String(error)],
          recommendations: ["Verify file format and accessibility"],
          migrationRequired: false,
        };
      }
    }
    
    /**
     * Private helper methods
     */
    private async getFilteredEntries(
      filters?: ExportOptions["filters"],
    ): Promise<MemoryEntry[]> {
      let entries = await this.storage.getAll();
    
      if (!filters) return entries;
    
      // Apply type filter
      if (filters.types && filters.types.length > 0) {
        entries = entries.filter((entry) => filters.types!.includes(entry.type));
      }
    
      // Apply date range filter
      if (filters.dateRange) {
        entries = entries.filter((entry) => {
          const entryDate = new Date(entry.timestamp);
          return (
            entryDate >= filters.dateRange!.start &&
            entryDate <= filters.dateRange!.end
          );
        });
      }
    
      // Apply project filter
      if (filters.projects && filters.projects.length > 0) {
        entries = entries.filter((entry) =>
          filters.projects!.some(
            (project) =>
              entry.data.projectPath?.includes(project) ||
              entry.data.projectId === project,
          ),
        );
      }
    
      // Apply tags filter
      if (filters.tags && filters.tags.length > 0) {
        entries = entries.filter(
          (entry) => entry.tags?.some((tag) => filters.tags!.includes(tag)),
        );
      }
    
      // Apply outcomes filter
      if (filters.outcomes && filters.outcomes.length > 0) {
        entries = entries.filter(
          (entry) =>
            filters.outcomes!.includes(entry.data.outcome) ||
            (entry.data.success === true &&
              filters.outcomes!.includes("success")) ||
            (entry.data.success === false &&
              filters.outcomes!.includes("failure")),
        );
      }
    
      return entries;
    }
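The method above narrows the same array through successive filters. The type and date-range stages can be exercised in isolation with a simplified `Entry` shape and illustrative data:

```typescript
interface Entry {
  type: string;
  timestamp: string;
}

// Apply type and date-range filters the same way getFilteredEntries does:
// each stage re-filters the result of the previous one.
function filterEntries(
  entries: Entry[],
  filters: { types?: string[]; dateRange?: { start: Date; end: Date } },
): Entry[] {
  let out = entries;
  if (filters.types && filters.types.length > 0) {
    out = out.filter((e) => filters.types!.includes(e.type));
  }
  if (filters.dateRange) {
    out = out.filter((e) => {
      const d = new Date(e.timestamp);
      return d >= filters.dateRange!.start && d <= filters.dateRange!.end;
    });
  }
  return out;
}

const sample: Entry[] = [
  { type: "analysis", timestamp: "2024-01-10T00:00:00Z" },
  { type: "deployment", timestamp: "2024-03-05T00:00:00Z" },
  { type: "analysis", timestamp: "2024-06-01T00:00:00Z" },
];

const filtered = filterEntries(sample, {
  types: ["analysis"],
  dateRange: { start: new Date("2024-01-01"), end: new Date("2024-04-01") },
});
```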
    
    private async prepareExportData(
      entries: MemoryEntry[],
      options: ExportOptions,
    ): Promise<any> {
      const exportData: any = {
        metadata: {
          version: this.version,
          exportedAt: new Date().toISOString(),
          source: "DocuMCP Memory System",
          entries: entries.length,
          options: {
            includeMetadata: options.includeMetadata,
            includeLearning: options.includeLearning,
            includeKnowledgeGraph: options.includeKnowledgeGraph,
          },
        },
        memories: entries,
      };
    
      // Include learning data if requested
      if (options.includeLearning) {
        const patterns = await this.learningSystem.getPatterns();
        exportData.learning = {
          patterns,
          statistics: await this.learningSystem.getStatistics(),
        };
      }
    
      // Include knowledge graph if requested
      if (options.includeKnowledgeGraph) {
        const nodes = await this.knowledgeGraph.getAllNodes();
        const edges = await this.knowledgeGraph.getAllEdges();
        exportData.knowledgeGraph = {
          nodes,
          edges,
          statistics: await this.knowledgeGraph.getStatistics(),
        };
      }
    
      return exportData;
    }
    
    private applyAnonymization(
      data: any,
      anonymizeOptions: { fields: string[]; method: string },
    ): void {
      const anonymizeValue = (value: any, method: string): any => {
        if (typeof value !== "string") return value;
    
        switch (method) {
          case "hash":
            return this.hashValue(value);
          case "remove":
            return null;
          case "pseudonymize":
            return this.pseudonymizeValue(value);
          default:
            return value;
        }
      };
    
      const anonymizeObject = (obj: any): void => {
        for (const [key, value] of Object.entries(obj)) {
          if (anonymizeOptions.fields.includes(key)) {
            obj[key] = anonymizeValue(value, anonymizeOptions.method);
          } else if (typeof value === "object" && value !== null) {
            anonymizeObject(value);
          }
        }
      };
    
      anonymizeObject(data);
    }
    
    private hashValue(value: string): string {
      // Simple hash - in production, use a proper cryptographic hash
      let hash = 0;
      for (let i = 0; i < value.length; i++) {
        const char = value.charCodeAt(i);
        hash = (hash << 5) - hash + char;
        hash = hash & hash;
      }
      return `hash_${Math.abs(hash).toString(36)}`;
    }
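The toy hash above (a shift-and-add scheme equivalent to `hash * 31 + char`, forced into 32-bit range) is deterministic, which is the property anonymization relies on: equal inputs always map to the same token, so references stay consistent across the export. Reproduced standalone:

```typescript
// Same algorithm as the private hashValue helper above; not cryptographic.
function hashValue(value: string): string {
  let hash = 0;
  for (let i = 0; i < value.length; i++) {
    const char = value.charCodeAt(i);
    hash = (hash << 5) - hash + char; // hash * 31 + char
    hash = hash & hash; // coerce to a 32-bit integer
  }
  return `hash_${Math.abs(hash).toString(36)}`;
}
```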
    
    private pseudonymizeValue(_value: string): string {
      // Simple pseudonymization - in production, use proper techniques
      const prefixes = ["user", "project", "system", "item"];
      const suffix = Math.random().toString(36).substr(2, 8);
      const prefix = prefixes[Math.floor(Math.random() * prefixes.length)];
      return `${prefix}_${suffix}`;
    }
    
    private async exportToJSON(
      outputPath: string,
      data: any,
      _options: ExportOptions,
    ): Promise<string> {
      const jsonData = JSON.stringify(data, null, 2);
      // Handle compression-aware file paths (e.g., file.json.gz)
      let filePath = outputPath;
      if (!outputPath.includes(".json")) {
        filePath = `${outputPath}.json`;
      }
      await fs.writeFile(filePath, jsonData, "utf8");
      return filePath;
    }
    
    private async exportToJSONL(
      outputPath: string,
      data: any,
      _options: ExportOptions,
    ): Promise<string> {
      const filePath = outputPath.endsWith(".jsonl")
        ? outputPath
        : `${outputPath}.jsonl`;
    
      return new Promise((resolve, reject) => {
        const writeStream = createWriteStream(filePath);
    
        writeStream.on("error", (error) => {
          reject(error);
        });
    
        writeStream.on("finish", () => {
          resolve(filePath);
        });
    
        // Write metadata as first line
        writeStream.write(JSON.stringify(data.metadata) + "\n");
    
        // Write each memory entry as a separate line
        for (const entry of data.memories) {
          writeStream.write(JSON.stringify(entry) + "\n");
        }
    
        // Write learning data if included
        if (data.learning) {
          writeStream.write(
            JSON.stringify({ type: "learning", data: data.learning }) + "\n",
          );
        }
    
        // Write knowledge graph if included
        if (data.knowledgeGraph) {
          writeStream.write(
            JSON.stringify({
              type: "knowledgeGraph",
              data: data.knowledgeGraph,
            }) + "\n",
          );
        }
    
        writeStream.end();
      });
    }
    
    private async exportToCSV(
      outputPath: string,
      data: any,
      _options: ExportOptions,
    ): Promise<string> {
      const filePath = outputPath.endsWith(".csv")
        ? outputPath
        : `${outputPath}.csv`;
    
      // Flatten memory entries for CSV format
      const flattenedEntries = data.memories.map((entry: MemoryEntry) => ({
        id: entry.id,
        timestamp: entry.timestamp,
        type: entry.type,
        projectPath: entry.data.projectPath || "",
        projectId: entry.data.projectId || "",
        language: entry.data.language || "",
        framework: entry.data.framework || "",
        outcome: entry.data.outcome || "",
        success: entry.data.success || false,
        tags: entry.tags?.join(";") || "",
        metadata: JSON.stringify(entry.metadata || {}),
      }));
    
      // Generate CSV headers
      const headers = Object.keys(flattenedEntries[0] || {});
      const csvLines = [headers.join(",")];
    
      // Generate CSV rows
      for (const entry of flattenedEntries) {
        const row = headers.map((header) => {
          const value = entry[header as keyof typeof entry];
          const stringValue =
            typeof value === "string" ? value : JSON.stringify(value);
          return `"${stringValue.replace(/"/g, '""')}"`;
        });
        csvLines.push(row.join(","));
      }
    
      await fs.writeFile(filePath, csvLines.join("\n"), "utf8");
      return filePath;
    }
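The export path escapes each cell by doubling embedded quotes and wrapping the cell in quotes — the standard RFC 4180 convention, and the inverse of what `parseCSVLine` undoes on import. The cell-quoting step in isolation:

```typescript
// Quote a single CSV cell: double any embedded quotes, then wrap the
// whole value so embedded commas stay inside one cell.
function csvCell(value: string): string {
  return `"${value.replace(/"/g, '""')}"`;
}

const row = ["doc-1", 'He said "hi"', "a,b"].map(csvCell).join(",");
```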
    
    private async exportToXML(
      outputPath: string,
      data: any,
      _options: ExportOptions,
    ): Promise<string> {
      const filePath = outputPath.endsWith(".xml")
        ? outputPath
        : `${outputPath}.xml`;
    
      const xmlData = this.convertToXML(data);
      await fs.writeFile(filePath, xmlData, "utf8");
      return filePath;
    }
    
    private async exportToYAML(
      outputPath: string,
      data: any,
      _options: ExportOptions,
    ): Promise<string> {
      const filePath = outputPath.endsWith(".yaml")
        ? outputPath
        : `${outputPath}.yaml`;
    
      // Simple YAML conversion - in production, use a proper YAML library
      const yamlData = this.convertToYAML(data);
      await fs.writeFile(filePath, yamlData, "utf8");
      return filePath;
    }
    
    private async exportToSQLite(
      _outputPath: string,
      _data: any,
      _options: ExportOptions,
    ): Promise<string> {
      // This would require a SQLite library like better-sqlite3
      // For now, throw an error indicating additional dependencies needed
      throw new Error(
        "SQLite export requires additional dependencies (better-sqlite3)",
      );
    }
    
    private async exportToArchive(
      outputPath: string,
      data: any,
      options: ExportOptions,
    ): Promise<string> {
      const archivePath = outputPath.endsWith(".tar")
        ? outputPath
        : `${outputPath}.tar`;
    
      // Create archive metadata
      const metadata: ArchiveMetadata = {
        version: this.version,
        created: new Date(),
        source: "DocuMCP Memory System",
        description: "Complete memory system export archive",
        manifest: {
          files: [],
          total: { files: 0, size: 0, entries: data.memories.length },
        },
        options,
      };
    
      // This would require archiving capabilities
      // For now, create multiple files and reference them in metadata
      const baseDir = archivePath.replace(".tar", "");
      await fs.mkdir(baseDir, { recursive: true });
    
      // Export memories as JSON
      const memoriesPath = `${baseDir}/memories.json`;
      await this.exportToJSON(memoriesPath, { memories: data.memories }, options);
      metadata.manifest.files.push({
        name: "memories.json",
        type: "memories",
        size: (await fs.stat(memoriesPath)).size,
        checksum: "sha256-placeholder",
        entries: data.memories.length,
      });
    
      // Export learning data if included
      if (data.learning) {
        const learningPath = `${baseDir}/learning.json`;
        await this.exportToJSON(learningPath, data.learning, options);
        metadata.manifest.files.push({
          name: "learning.json",
          type: "learning",
          size: (await fs.stat(learningPath)).size,
          checksum: "sha256-placeholder",
        });
      }
    
      // Export knowledge graph if included
      if (data.knowledgeGraph) {
        const kgPath = `${baseDir}/knowledge-graph.json`;
        await this.exportToJSON(kgPath, data.knowledgeGraph, options);
        metadata.manifest.files.push({
          name: "knowledge-graph.json",
          type: "knowledge-graph",
          size: (await fs.stat(kgPath)).size,
          checksum: "sha256-placeholder",
        });
      }
    
      // Write metadata
      const metadataPath = `${baseDir}/metadata.json`;
      await this.exportToJSON(metadataPath, metadata, options);
    
      return baseDir;
    }
    
    private async applyCompression(
      filePath: string,
      compression: string,
      targetPath?: string,
    ): Promise<string> {
      if (compression === "gzip") {
        const compressedPath = targetPath || `${filePath}.gz`;
        const content = await fs.readFile(filePath, "utf8");
        // Simple mock compression - just add a header and write the content
        await fs.writeFile(compressedPath, `GZIP_HEADER\n${content}`, "utf8");
    
        // Clean up temp file if we used one
        if (targetPath && targetPath !== filePath) {
          await fs.unlink(filePath);
        }
    
        return compressedPath;
      }
    
      // For other compression types or 'none', return original path
      this.emit("compression_skipped", {
        reason: "Not implemented",
        compression,
      });
      return filePath;
    }
    
    private async applyEncryption(
      filePath: string,
      encryption: any,
    ): Promise<string> {
      // This would require encryption capabilities
      // For now, return the original path
      this.emit("encryption_skipped", { reason: "Not implemented", encryption });
      return filePath;
    }
    
    private getIncludedComponents(options: ExportOptions): string[] {
      const components = ["memories"];
      if (options.includeMetadata) components.push("metadata");
      if (options.includeLearning) components.push("learning");
      if (options.includeKnowledgeGraph) components.push("knowledge-graph");
      return components;
    }
    
    private async detectFormat(filePath: string): Promise<string> {
      const extension = filePath.split(".").pop()?.toLowerCase();
    
      switch (extension) {
        case "json":
          return "json";
        case "jsonl":
          return "jsonl";
        case "csv":
          return "csv";
        case "xml":
          return "xml";
        case "yaml":
        case "yml":
          return "yaml";
        case "db":
        case "sqlite":
          return "sqlite";
        case "tar":
        case "zip":
          return "archive";
        default: {
          // Try to detect by content
          const content = await fs.readFile(filePath, "utf8");
          if (content.trim().startsWith("{") || content.trim().startsWith("[")) {
            return "json";
          }
          if (content.includes("<?xml")) {
            return "xml";
          }
          return "unknown";
        }
      }
    }
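The extension mapping in `detectFormat` can be exercised on its own; this pure-function sketch keeps the same switch but omits the content-sniffing fallback, which needs file access:

```typescript
// Extension-based half of detectFormat: lower-cases the final extension
// and maps aliases (yml → yaml, tar/zip → archive, db/sqlite → sqlite).
function detectFormatByExtension(filePath: string): string {
  const ext = filePath.split(".").pop()?.toLowerCase();
  switch (ext) {
    case "json": return "json";
    case "jsonl": return "jsonl";
    case "csv": return "csv";
    case "xml": return "xml";
    case "yaml":
    case "yml": return "yaml";
    case "db":
    case "sqlite": return "sqlite";
    case "tar":
    case "zip": return "archive";
    default: return "unknown";
  }
}
```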
    
    private async loadImportData(
      filePath: string,
      options: ImportOptions,
    ): Promise<any> {
      switch (options.format) {
        case "json":
          return JSON.parse(await fs.readFile(filePath, "utf8"));
        case "jsonl":
          return this.loadJSONLData(filePath);
        case "csv":
          return this.loadCSVData(filePath);
        case "xml":
          return this.loadXMLData(filePath);
        case "yaml":
          return this.loadYAMLData(filePath);
        default:
          throw new Error(`Unsupported import format: ${options.format}`);
      }
    }
    
    private async loadJSONLData(filePath: string): Promise<any> {
      const content = await fs.readFile(filePath, "utf8");
      const lines = content.trim().split("\n");
    
      const data: any = { memories: [], learning: null, knowledgeGraph: null };
    
      for (const line of lines) {
        const parsed = JSON.parse(line);
    
        if (parsed.type === "learning") {
          data.learning = parsed.data;
        } else if (parsed.type === "knowledgeGraph") {
          data.knowledgeGraph = parsed.data;
        } else if (parsed.version) {
          data.metadata = parsed;
        } else {
          data.memories.push(parsed);
        }
      }
    
      return data;
    }
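The JSONL loader dispatches each line on its shape: a `type` discriminator routes learning and knowledge-graph records, a `version` field marks the metadata header, and everything else is treated as a memory entry. The routing, testable against an in-memory string:

```typescript
// Same line-dispatch logic as loadJSONLData, but taking a string instead
// of reading a file.
function parseJSONL(content: string) {
  const data: any = { memories: [], learning: null, knowledgeGraph: null, metadata: null };
  for (const line of content.trim().split("\n")) {
    const parsed = JSON.parse(line);
    if (parsed.type === "learning") data.learning = parsed.data;
    else if (parsed.type === "knowledgeGraph") data.knowledgeGraph = parsed.data;
    else if (parsed.version) data.metadata = parsed;
    else data.memories.push(parsed);
  }
  return data;
}

const sampleLines = [
  JSON.stringify({ version: "1.0", entries: 2 }),
  JSON.stringify({ id: "m1", data: {} }),
  JSON.stringify({ id: "m2", data: {} }),
  JSON.stringify({ type: "learning", data: { patterns: [] } }),
].join("\n");

const parsed = parseJSONL(sampleLines);
```

One caveat of this scheme: a memory entry whose own `type` field happened to be the string `"learning"` or `"knowledgeGraph"` would be misrouted, so those values are effectively reserved discriminators.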
    
    private async loadCSVData(filePath: string): Promise<any> {
      const content = await fs.readFile(filePath, "utf8");
      const lines = content.trim().split("\n");
      const headers = lines[0].split(",").map((h) => h.replace(/"/g, ""));
    
      const memories = [];
      for (let i = 1; i < lines.length; i++) {
        const values = this.parseCSVLine(lines[i]);
        const entry: any = {};
    
        for (let j = 0; j < headers.length; j++) {
          const header = headers[j];
          const value = values[j];
    
          // Parse special fields
          if (header === "tags") {
            entry.tags = value ? value.split(";") : [];
          } else if (header === "metadata") {
            try {
              entry.metadata = JSON.parse(value);
            } catch {
              entry.metadata = {};
            }
          } else if (header === "success") {
            entry.data = entry.data || {};
            entry.data.success = value === "true";
          } else if (
            [
              "projectPath",
              "projectId",
              "language",
              "framework",
              "outcome",
            ].includes(header)
          ) {
            entry.data = entry.data || {};
            entry.data[header] = value;
          } else {
            entry[header] = value;
          }
        }
    
        memories.push(entry);
      }
    
      return { memories };
    }
    
    private parseCSVLine(line: string): string[] {
      const values: string[] = [];
      let current = "";
      let inQuotes = false;
    
      for (let i = 0; i < line.length; i++) {
        const char = line[i];
    
        if (char === '"') {
          if (inQuotes && line[i + 1] === '"') {
            current += '"';
            i++;
          } else {
            inQuotes = !inQuotes;
          }
        } else if (char === "," && !inQuotes) {
          values.push(current);
          current = "";
        } else {
          current += char;
        }
      }
    
      values.push(current);
      return values;
    }
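The quote-aware splitter above handles embedded commas and doubled quotes, which a naive `line.split(",")` would mangle. Reproduced standalone with a check:

```typescript
// Same state machine as the private parseCSVLine helper: commas split
// only outside quotes, and a doubled quote inside quotes is a literal ".
function parseCSVLine(line: string): string[] {
  const values: string[] = [];
  let current = "";
  let inQuotes = false;

  for (let i = 0; i < line.length; i++) {
    const char = line[i];
    if (char === '"') {
      if (inQuotes && line[i + 1] === '"') {
        current += '"'; // escaped quote
        i++;
      } else {
        inQuotes = !inQuotes;
      }
    } else if (char === "," && !inQuotes) {
      values.push(current);
      current = "";
    } else {
      current += char;
    }
  }
  values.push(current);
  return values;
}

const cells = parseCSVLine('"a,b","c""d",plain');
```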
    
    private async loadXMLData(_filePath: string): Promise<any> {
      // This would require an XML parser
      throw new Error("XML import requires additional dependencies (xml2js)");
    }
    
    private async loadYAMLData(_filePath: string): Promise<any> {
      // This would require a YAML parser
      throw new Error("YAML import requires additional dependencies (js-yaml)");
    }
    
    private async validateImportData(
      data: any,
      options: ImportOptions,
    ): Promise<{ valid: number; invalid: number; warnings: string[] }> {
      const result = { valid: 0, invalid: 0, warnings: [] as string[] };
    
      if (!data.memories || !Array.isArray(data.memories)) {
        result.warnings.push("No memories array found in import data");
        return result;
      }
    
      for (const entry of data.memories) {
        if (this.validateMemoryEntry(entry, options.validation)) {
          result.valid++;
        } else {
          result.invalid++;
        }
      }
    
      return result;
    }
    
    private validateMemoryEntry(entry: any, validation: string): boolean {
      // Check for completely missing or null required fields
      if (
        !entry.id ||
        !entry.timestamp ||
        entry.type === null ||
        entry.type === undefined ||
        entry.data === null
      ) {
        return false; // These are invalid regardless of validation level
      }
    
      if (!entry.type) {
        return validation !== "strict";
      }
    
      if (validation === "strict") {
        return Boolean(entry.data && typeof entry.data === "object");
      }
    
      // For loose validation, still require data to be defined (not null)
      if (validation === "loose" && entry.data === null) {
        return false;
      }
    
      return true;
    }
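The three validation levels differ mainly in how much structure they demand of `data`: all levels reject missing identifiers, but only `"strict"` additionally requires `data` to be an object. The method reproduced standalone, with its behavior checked at both levels:

```typescript
// Same logic as the private validateMemoryEntry method above.
function validateMemoryEntry(entry: any, validation: string): boolean {
  // Missing/null required fields are invalid at every validation level.
  if (
    !entry.id ||
    !entry.timestamp ||
    entry.type === null ||
    entry.type === undefined ||
    entry.data === null
  ) {
    return false;
  }

  if (!entry.type) {
    return validation !== "strict";
  }

  if (validation === "strict") {
    return Boolean(entry.data && typeof entry.data === "object");
  }

  // Loose validation still requires data to be defined (not null).
  if (validation === "loose" && entry.data === null) {
    return false;
  }

  return true;
}
```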
    
    private async processImportData(
      // ... (remainder of this method is truncated in the source excerpt)