Export Graph
graph_export: Export your entire knowledge graph to a timestamped JSONL backup file, automatically pruning old backups to save storage.
Instructions
Export all graph nodes and edges to a timestamped JSONL backup file in the backups/ directory. Run this before any risky operation, or on a weekly schedule. Old backups are pruned automatically.
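Each backup is plain JSONL: one meta header line followed by one line per node and per edge. As a rough sketch of what the exporter's output looks like (the node and edge fields below are illustrative placeholders, not the full schema):

```typescript
// Assemble JSONL lines the way the export tool does: a meta record first,
// then one record per node and per edge. Record shapes here are simplified.
const meta = {
  record: "meta",
  exported_at: "2026-05-05T09:00:00.000Z",
  tenant_id: "default",
  node_count: 1,
  edge_count: 1,
};
const nodes = [{ record: "node", id: "n1", type: "Person", name: "Ada" }];
const edges = [{ record: "edge", from_id: "n1", to_id: "n2", relation: "KNOWS" }];

const fileText = [meta, ...nodes, ...edges]
  .map((r) => JSON.stringify(r))
  .join("\n") + "\n";
console.log(fileText);
```

Restoring is the reverse: split on newlines, JSON.parse each line, and dispatch on the record field.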
Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| keep | No | Number of backup files to keep (~2 weeks of daily backups). | 14 |
| label | No | Optional label appended to the filename, e.g. 'pre-prune' → backup-2026-05-05-pre-prune.jsonl | |
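To see how the two inputs shape the output filename, here is the naming logic the handler applies, pulled out into a hypothetical standalone helper (backupFilename is not a name from the source; the date is fixed for illustration):

```typescript
// Sketch of the backup filename logic: ISO timestamp with ':' and '.'
// replaced by '-', truncated to seconds, plus an optional sanitized
// label suffix. Any character outside [a-z0-9-] in the label becomes '-'.
function backupFilename(now: Date, label?: string): string {
  const datePart = now.toISOString().replace(/[:.]/g, "-").slice(0, 19);
  const suffix = label ? `-${label.replace(/[^a-z0-9-]/gi, "-")}` : "";
  return `backup-${datePart}${suffix}.jsonl`;
}

const when = new Date("2026-05-05T09:30:00.000Z");
console.log(backupFilename(when));              // backup-2026-05-05T09-30-00.jsonl
console.log(backupFilename(when, "pre-prune")); // backup-2026-05-05T09-30-00-pre-prune.jsonl
```

Sanitizing the label keeps the filename safe regardless of what the caller passes.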
Implementation Reference
- src/mcp-server/index.ts:1470-1536 (registration): Tool registration for 'graph_export' via server.registerTool, with input schema (keep, label) and a handler that calls client.exportGraph, writes a JSONL backup to the backups/ directory, and prunes old backups.

```ts
// ─── Tool: graph_export ───
server.registerTool(
  "graph_export",
  {
    title: "Export Graph",
    description:
      "Export all graph nodes and edges to a timestamped JSONL backup file in the backups/ directory. " +
      "Run this before any risky operation, or on a weekly schedule. Old backups are pruned automatically.",
    inputSchema: {
      keep: z
        .number()
        .int()
        .min(1)
        .max(30)
        .optional()
        .default(14)
        .describe("Number of backup files to keep (default 14, ~2 weeks of daily backups)."),
      label: z
        .string()
        .optional()
        .describe("Optional label appended to the filename, e.g. 'pre-prune' → backup-2026-05-05-pre-prune.jsonl"),
    },
  },
  async ({ keep = 14, label }) => {
    const backupsDir = join(GRAPH_MEMORY_HOME, "backups");
    mkdirSync(backupsDir, { recursive: true });
    const now = new Date();
    const datePart = now.toISOString().replace(/[:.]/g, "-").slice(0, 19);
    const suffix = label ? `-${label.replace(/[^a-z0-9-]/gi, "-")}` : "";
    const filename = `backup-${datePart}${suffix}.jsonl`;
    const filePath = join(backupsDir, filename);
    try {
      const tenantId = currentTenant();
      const { nodes, edges } = await client.exportGraph(tenantId);
      const lines: string[] = [];
      lines.push(JSON.stringify({
        record: "meta",
        exported_at: now.toISOString(),
        tenant_id: tenantId,
        node_count: nodes.length,
        edge_count: edges.length,
      }));
      for (const node of nodes) lines.push(JSON.stringify({ record: "node", ...node }));
      for (const edge of edges) lines.push(JSON.stringify({ record: "edge", ...edge }));
      writeFileSync(filePath, lines.join("\n") + "\n");
      const sizeBytes = statSync(filePath).size;

      // Prune old backups — keep the N most recent
      const allBackups = readdirSync(backupsDir)
        .filter((f) => f.startsWith("backup-") && f.endsWith(".jsonl"))
        .map((f) => ({ name: f, mtime: statSync(join(backupsDir, f)).mtimeMs }))
        .sort((a, b) => b.mtime - a.mtime);
      const toDelete = allBackups.slice(keep);
      for (const f of toDelete) {
        try { unlinkSync(join(backupsDir, f.name)); } catch { /* ignore */ }
      }

      return toolResult({
        backup_file: filePath,
        node_count: nodes.length,
        edge_count: edges.length,
        size_bytes: sizeBytes,
        pruned: toDelete.length,
        retained: Math.min(allBackups.length, keep),
      });
    } catch (err) {
      const e = err instanceof Error ? err : new Error(String(err));
      return toolError(`graph_export failed: ${e.message}`);
    }
  }
);
```

- src/mcp-server/index.ts:1491-1536 (handler): The async handler function for graph_export. Writes all graph nodes/edges to a timestamped JSONL file in the backups/ directory, then prunes old backups based on the keep count. Returns backup_file, node_count, edge_count, size_bytes, pruned, and retained counts.

```ts
}, async ({ keep = 14, label }) => {
  const backupsDir = join(GRAPH_MEMORY_HOME, "backups");
  mkdirSync(backupsDir, { recursive: true });
  const now = new Date();
  const datePart = now.toISOString().replace(/[:.]/g, "-").slice(0, 19);
  const suffix = label ? `-${label.replace(/[^a-z0-9-]/gi, "-")}` : "";
  const filename = `backup-${datePart}${suffix}.jsonl`;
  const filePath = join(backupsDir, filename);
  try {
    const tenantId = currentTenant();
    const { nodes, edges } = await client.exportGraph(tenantId);
    const lines: string[] = [];
    lines.push(JSON.stringify({
      record: "meta",
      exported_at: now.toISOString(),
      tenant_id: tenantId,
      node_count: nodes.length,
      edge_count: edges.length,
    }));
    for (const node of nodes) lines.push(JSON.stringify({ record: "node", ...node }));
    for (const edge of edges) lines.push(JSON.stringify({ record: "edge", ...edge }));
    writeFileSync(filePath, lines.join("\n") + "\n");
    const sizeBytes = statSync(filePath).size;

    // Prune old backups — keep the N most recent
    const allBackups = readdirSync(backupsDir)
      .filter((f) => f.startsWith("backup-") && f.endsWith(".jsonl"))
      .map((f) => ({ name: f, mtime: statSync(join(backupsDir, f)).mtimeMs }))
      .sort((a, b) => b.mtime - a.mtime);
    const toDelete = allBackups.slice(keep);
    for (const f of toDelete) {
      try { unlinkSync(join(backupsDir, f.name)); } catch { /* ignore */ }
    }

    return toolResult({
      backup_file: filePath,
      node_count: nodes.length,
      edge_count: edges.length,
      size_bytes: sizeBytes,
      pruned: toDelete.length,
      retained: Math.min(allBackups.length, keep),
    });
  } catch (err) {
    const e = err instanceof Error ? err : new Error(String(err));
    return toolError(`graph_export failed: ${e.message}`);
  }
});
```

- src/shared/neo4j-client.ts:1582-1627 (helper): The exportGraph method on Neo4jClient. Runs two Cypher queries to fetch all nodes and edges for the given tenant, returning them as plain record arrays for serialization by the tool handler.

```ts
// ─── Export ───
async exportGraph(tenantId: string): Promise<{
  nodes: Record<string, unknown>[];
  edges: Record<string, unknown>[];
}> {
  const nodeRows = await this.run(
    `
    MATCH (n:Entity {tenant_id: $tenantId})
    RETURN n.id AS id,
           [l IN labels(n) WHERE l <> 'Entity'][0] AS type,
           n.name AS name,
           n.subtype AS subtype,
           n.confidence AS confidence,
           n.times_mentioned AS times_mentioned,
           n.first_seen AS first_seen,
           n.last_seen AS last_seen,
           n.source_file AS source_file,
           n.tenant_id AS tenant_id,
           properties(n) AS props
    ORDER BY n.first_seen
    `,
    { tenantId }
  );
  const edgeRows = await this.run(
    `
    MATCH (a:Entity {tenant_id: $tenantId})-[r]->(b:Entity {tenant_id: $tenantId})
    RETURN a.id AS from_id,
           b.id AS to_id,
           type(r) AS relation,
           r.weight AS weight,
           r.last_confirmed AS last_confirmed,
           r.valid_at AS valid_at,
           r.invalid_at AS invalid_at,
           r.ingested_at AS ingested_at,
           r.tenant_id AS tenant_id,
           r.source_session AS source_session,
           r.source_transcript AS source_transcript,
           r.source_type AS source_type,
           r.evidence AS evidence,
           properties(r) AS props
    ORDER BY a.id, type(r)
    `,
    { tenantId }
  );
  return { nodes: nodeRows, edges: edgeRows };
}
```

- src/mcp-server/index.ts:1477-1490 (schema): Input schema for graph_export: keep (optional number, default 14, min 1, max 30) and label (optional string appended to the filename).

```ts
inputSchema: {
  keep: z
    .number()
    .int()
    .min(1)
    .max(30)
    .optional()
    .default(14)
    .describe("Number of backup files to keep (default 14, ~2 weeks of daily backups)."),
  label: z
    .string()
    .optional()
    .describe("Optional label appended to the filename, e.g. 'pre-prune' → backup-2026-05-05-pre-prune.jsonl"),
},
```
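The pruning step reduces to a pure function: sort candidates newest-first by mtime and drop everything past the keep count. A standalone sketch of that selection (selectForPruning is a hypothetical name, and the file names and mtimes are made up):

```typescript
interface BackupFile {
  name: string;
  mtime: number; // modification time, larger = newer
}

// Return the files to delete, keeping only the `keep` most recent.
function selectForPruning(files: BackupFile[], keep: number): BackupFile[] {
  return [...files].sort((a, b) => b.mtime - a.mtime).slice(keep);
}

const files: BackupFile[] = [
  { name: "backup-2026-05-03.jsonl", mtime: 3 },
  { name: "backup-2026-05-01.jsonl", mtime: 1 },
  { name: "backup-2026-05-02.jsonl", mtime: 2 },
];
const toDelete = selectForPruning(files, 2);
console.log(toDelete.map((f) => f.name)); // only the oldest file is selected
```

Because the sort is descending by mtime, slice(keep) always returns the oldest surplus files, matching the handler's "keep the N most recent" behavior.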