Export Graph

graph_export

Export your entire knowledge graph to a timestamped JSONL backup file, automatically pruning old backups to save storage.

Instructions

Export all graph nodes and edges to a timestamped JSONL backup file in the backups/ directory. Run this before any risky operation, or on a weekly schedule. Old backups are pruned automatically.

Input Schema

Name  | Required | Description                                                                                    | Default
keep  | No       | Number of backup files to keep (~2 weeks of daily backups).                                   | 14
label | No       | Optional label appended to the filename, e.g. 'pre-prune' → backup-2026-05-05-pre-prune.jsonl | (none)
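
For illustration, a minimal sketch of invoking this tool from an MCP client. The callTool shape comes from the MCP TypeScript SDK; the connected `client` instance and its setup are assumed, and the arguments mirror the schema above.

    // Hypothetical call from an already-connected MCP SDK Client (setup not shown).
    const result = await client.callTool({
      name: "graph_export",
      arguments: {
        keep: 14,           // retain the 14 most recent backups (schema default)
        label: "pre-prune", // appended to the backup filename
      },
    });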

Implementation Reference

  • Tool registration for 'graph_export' via server.registerTool, with input schema (keep, label) and a handler that calls client.exportGraph, writes a JSONL backup to the backups/ directory, and prunes old backups.
    // ─── Tool: graph_export ───

    // Imports used by this excerpt; `server`, `client`, `GRAPH_MEMORY_HOME`,
    // `currentTenant`, `toolResult`, and `toolError` are defined elsewhere in the module.
    import { mkdirSync, readdirSync, statSync, unlinkSync, writeFileSync } from "node:fs";
    import { join } from "node:path";
    import { z } from "zod";

    server.registerTool("graph_export", {
      title: "Export Graph",
      description:
        "Export all graph nodes and edges to a timestamped JSONL backup file in the backups/ directory. " +
        "Run this before any risky operation, or on a weekly schedule. Old backups are pruned automatically.",
      inputSchema: {
        keep: z
          .number()
          .int()
          .min(1)
          .max(30)
          .optional()
          .default(14)
          .describe("Number of backup files to keep (default 14, ~2 weeks of daily backups)."),
        label: z
          .string()
          .optional()
          .describe("Optional label appended to the filename, e.g. 'pre-prune' → backup-2026-05-05-pre-prune.jsonl"),
      },
    }, async ({ keep = 14, label }) => {
      const backupsDir = join(GRAPH_MEMORY_HOME, "backups");
      mkdirSync(backupsDir, { recursive: true });
    
      const now = new Date();
      const datePart = now.toISOString().replace(/[:.]/g, "-").slice(0, 19);
      const suffix = label ? `-${label.replace(/[^a-z0-9-]/gi, "-")}` : "";
      const filename = `backup-${datePart}${suffix}.jsonl`;
      const filePath = join(backupsDir, filename);
    
      try {
        const tenantId = currentTenant();
        const { nodes, edges } = await client.exportGraph(tenantId);
    
        const lines: string[] = [];
        lines.push(JSON.stringify({ record: "meta", exported_at: now.toISOString(), tenant_id: tenantId, node_count: nodes.length, edge_count: edges.length }));
        for (const node of nodes) lines.push(JSON.stringify({ record: "node", ...node }));
        for (const edge of edges) lines.push(JSON.stringify({ record: "edge", ...edge }));
    
        writeFileSync(filePath, lines.join("\n") + "\n");
        const sizeBytes = statSync(filePath).size;
    
        // Prune old backups — keep the N most recent
        const allBackups = readdirSync(backupsDir)
          .filter((f) => f.startsWith("backup-") && f.endsWith(".jsonl"))
          .map((f) => ({ name: f, mtime: statSync(join(backupsDir, f)).mtimeMs }))
          .sort((a, b) => b.mtime - a.mtime);
    
        const toDelete = allBackups.slice(keep);
        for (const f of toDelete) {
          try { unlinkSync(join(backupsDir, f.name)); } catch { /* ignore */ }
        }
    
        return toolResult({
          backup_file: filePath,
          node_count: nodes.length,
          edge_count: edges.length,
          size_bytes: sizeBytes,
          pruned: toDelete.length,
          retained: Math.min(allBackups.length, keep),
        });
      } catch (err) {
        const e = err instanceof Error ? err : new Error(String(err));
        return toolError(`graph_export failed: ${e.message}`);
      }
    });
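  • Not part of the server's source: a minimal sketch of reading one of these backups back in, assuming only the meta/node/edge record shapes written by the handler above. Restoring into Neo4j would still go through the server's own import path.
    import { readFileSync } from "node:fs";

    // Split a graph_export backup into its meta, node, and edge records.
    function parseBackup(path: string) {
      const nodes: Record<string, unknown>[] = [];
      const edges: Record<string, unknown>[] = [];
      let meta: Record<string, unknown> | undefined;

      for (const line of readFileSync(path, "utf8").split("\n")) {
        if (!line.trim()) continue; // skip the trailing newline
        const rec = JSON.parse(line) as Record<string, unknown>;
        if (rec.record === "meta") meta = rec;
        else if (rec.record === "node") nodes.push(rec);
        else if (rec.record === "edge") edges.push(rec);
      }
      return { meta, nodes, edges };
    }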
  • The exportGraph method on Neo4jClient. Runs two Cypher queries to fetch all nodes and edges for the given tenant, returning them as plain record arrays for serialization by the tool handler.
    // ─── Export ───
    
    async exportGraph(tenantId: string): Promise<{
      nodes: Record<string, unknown>[];
      edges: Record<string, unknown>[];
    }> {
      const nodeRows = await this.run(`
        MATCH (n:Entity {tenant_id: $tenantId})
        RETURN n.id AS id,
               [l IN labels(n) WHERE l <> 'Entity'][0] AS type,
               n.name AS name,
               n.subtype AS subtype,
               n.confidence AS confidence,
               n.times_mentioned AS times_mentioned,
               n.first_seen AS first_seen,
               n.last_seen AS last_seen,
               n.source_file AS source_file,
               n.tenant_id AS tenant_id,
               properties(n) AS props
        ORDER BY n.first_seen
      `, { tenantId });
    
      const edgeRows = await this.run(`
        MATCH (a:Entity {tenant_id: $tenantId})-[r]->(b:Entity {tenant_id: $tenantId})
        RETURN a.id AS from_id,
               b.id AS to_id,
               type(r) AS relation,
               r.weight AS weight,
               r.last_confirmed AS last_confirmed,
               r.valid_at AS valid_at,
               r.invalid_at AS invalid_at,
               r.ingested_at AS ingested_at,
               r.tenant_id AS tenant_id,
               r.source_session AS source_session,
               r.source_transcript AS source_transcript,
               r.source_type AS source_type,
               r.evidence AS evidence,
               properties(r) AS props
        ORDER BY a.id, type(r)
      `, { tenantId });
    
      return {
        nodes: nodeRows,
        edges: edgeRows,
      };
    }
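  • Also not from the source: the JSONL record shapes implied by the two queries above, written out as TypeScript types. Field names are taken verbatim from the RETURN clauses; the value types are assumptions.
    // Assumed shapes of the node and edge records in a backup file.
    type ExportedNode = {
      record: "node";
      id: string;
      type: string | null; // first label other than 'Entity'
      name: string;
      subtype?: string;
      confidence?: number;
      times_mentioned?: number;
      first_seen?: string;
      last_seen?: string;
      source_file?: string;
      tenant_id: string;
      props: Record<string, unknown>; // full property map from properties(n)
    };

    type ExportedEdge = {
      record: "edge";
      from_id: string;
      to_id: string;
      relation: string; // Neo4j relationship type
      weight?: number;
      last_confirmed?: string;
      valid_at?: string;
      invalid_at?: string;
      ingested_at?: string;
      tenant_id: string;
      source_session?: string;
      source_transcript?: string;
      source_type?: string;
      evidence?: string;
      props: Record<string, unknown>; // full property map from properties(r)
    };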
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full transparency burden. It discloses the automatic pruning of old backups, the timestamped filenames, and that it exports all data. It implies a non-destructive read operation; stating explicitly that it does not alter graph data would be stronger, but the export intent is clear.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences with zero wasted words. Critical info (action, format, timing, side-effect) is front-loaded. Very efficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no annotations, no output schema, and simple parameters, the description covers core usage, side effects, and scheduling. It could optionally mention that the backup is a complete snapshot, but it is sufficient for correct tool selection.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with parameter descriptions. The description adds value by explaining the context: timestamped filenames, backup directory, and automatic pruning. It complements rather than repeats schema details.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Export'), the resource ('all graph nodes and edges'), the format ('JSONL'), the destination ('backups/ directory'), and the naming convention ('timestamped'). It fully distinguishes the tool from sibling operations such as graph_prune.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly advises when to run: 'before any risky operation, or on a weekly schedule', providing clear usage context and frequency guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
