# create_files
Create up to 200 markdown files in a single batch call. Supports knowledge, project, or ctxnest destinations with automatic folder creation and partial success handling.
## Instructions
Batch variant of create_file — create up to 200 markdown files in one call. Each item follows the same rules (project_id required if destination is project or ctxnest). SIDE EFFECTS: writes new files to disk and inserts FTS5 rows; missing folders are mkdir'd. Per-item failures are isolated to errors[] and the rest of the batch still commits — partial success is the norm, always inspect error_count. No external auth or rate limits. Returns {created_count, error_count, created, errors}. Use for bulk ingestion; for >200 items, page yourself.
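Because partial success is the normal outcome, callers should branch on `error_count` rather than assume all-or-nothing. A minimal sketch of the retry pattern (the `BatchResult` shape mirrors the documented return value; `failedItems` is a hypothetical helper, not part of this codebase):

```typescript
// Mirrors the documented return shape {created_count, error_count, created, errors}.
// The `created` element shape here is illustrative.
interface BatchResult {
  created_count: number;
  error_count: number;
  created: { id: number; title: string }[];
  errors: { index: number; title: string; error: string }[];
}

// Hypothetical helper: map each errors[].index back to the original input
// so only the failed items are fixed up and resubmitted in the next batch.
function failedItems<T>(items: T[], result: BatchResult): T[] {
  return result.errors.map((e) => items[e.index]);
}

const items = [{ title: "a" }, { title: "b" }, { title: "c" }];
const result: BatchResult = {
  created_count: 2,
  error_count: 1,
  created: [{ id: 1, title: "a" }, { id: 2, title: "c" }],
  errors: [{ index: 1, title: "b", error: "projectId is required for project destination" }],
};

const retry = failedItems(items, result); // [{ title: "b" }]
```

On the server side, this index-based isolation is exactly what the handler's per-item try/catch produces, so `errors[].index` always refers to the caller's original array position.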
## Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| files | Yes | Files to create (max 200 per call) | — |
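For reference, a payload that exercises both a required-only item and a fully specified item might look like this (titles, paths, and the project id are illustrative):

```typescript
// Illustrative input for create_files; every value here is made up.
// The element type spells out the fields from the input schema.
type FileInput = {
  title: string;
  content: string;
  destination: "knowledge" | "project" | "ctxnest";
  project_id?: number; // required when destination is "project" or "ctxnest"
  folder?: string;     // missing folders are created automatically
  tags?: string[];
};

const payload: { files: FileInput[] } = {
  files: [
    { title: "API overview", content: "# API overview\n...", destination: "knowledge" },
    {
      title: "Deploy notes",
      content: "# Deploy notes\n...",
      destination: "project",
      project_id: 7,
      folder: "docs/ops",
      tags: ["deploy", "ops"],
    },
  ],
};
```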
## Implementation Reference
- `apps/mcp/src/index.ts:1257-1309` (handler): MCP server tool registration for `create_files`, the batch variant of `create_file`. Accepts an array of up to 200 file definitions (each with `title`, `content`, `destination`, and optional `project_id`, `folder`, `tags`), iterates over them calling the core `createFile()` per item, collects created results and per-item errors, and returns `{created_count, error_count, created, errors}`.

  ```typescript
  server.tool(
    "create_files",
    "Batch variant of `create_file` — create up to 200 markdown files in one call. Each item follows the same rules (project_id required if destination is `project` or `ctxnest`). SIDE EFFECTS: writes new files to disk and inserts FTS5 rows; missing folders are mkdir'd. Per-item failures are isolated to `errors[]` and the rest of the batch still commits — partial success is the norm, always inspect `error_count`. No external auth or rate limits. Returns `{created_count, error_count, created, errors}`. Use for bulk ingestion; for >200 items, page yourself.",
    {
      files: z
        .array(
          z.object({
            title: z.string(),
            content: z.string(),
            destination: z.enum(["knowledge", "project", "ctxnest"]),
            project_id: z.number().optional(),
            folder: z.string().optional(),
            tags: z.array(z.string()).optional(),
          })
        )
        .min(1)
        .max(200)
        .describe("Files to create (max 200 per call)"),
    },
    async ({ files }) => {
      const created: any[] = [];
      const errors: { index: number; title: string; error: string }[] = [];
      for (let i = 0; i < files.length; i++) {
        const f = files[i];
        try {
          const result = await createFile({
            title: f.title,
            content: f.content,
            destination: f.destination,
            projectId: f.project_id,
            folder: f.folder,
            tags: f.tags,
            dataDir,
          });
          created.push(annotateTokens(result));
        } catch (e: any) {
          errors.push({ index: i, title: f.title, error: e?.message ?? String(e) });
        }
      }
      return {
        content: [
          {
            type: "text",
            text: JSON.stringify(
              { created_count: created.length, error_count: errors.length, created, errors },
              null,
              2
            ),
          },
        ],
      };
    }
  );
  ```

- `apps/mcp/src/index.ts:1260-1274` (schema): Zod schema for the `create_files` tool input. Defines an array of objects with `title` (string), `content` (string), `destination` (enum: `knowledge` | `project` | `ctxnest`), optional `project_id` (number), optional `folder` (string), and optional `tags` (string[]). The array is limited to 1-200 items.
  ```typescript
  {
    files: z
      .array(
        z.object({
          title: z.string(),
          content: z.string(),
          destination: z.enum(["knowledge", "project", "ctxnest"]),
          project_id: z.number().optional(),
          folder: z.string().optional(),
          tags: z.array(z.string()).optional(),
        })
      )
      .min(1)
      .max(200)
      .describe("Files to create (max 200 per call)"),
  }
  ```

- `apps/mcp/src/index.ts:1257-1309` (registration): Registration of the `create_files` tool on the MCP server via `server.tool()`. The description presents it as a batch variant of `create_file` supporting up to 200 files, with per-item failure isolation. The code span is identical to the handler snippet above.

- Core `createFile()` function that creates a single file. It handles destination routing (`knowledge`, `project`, `ctxnest`), file path resolution, directory creation, the disk write, the SQLite insert (with FTS indexing and tags), git auto-commit, and rollback on DB failure. This is the underlying helper called for each item in a `create_files` batch.
  ```typescript
  export async function createFile(opts: CreateFileOptions): Promise<FileRecordWithContent> {
    const db = getDatabase();
    const { title, content, destination, projectId, folder, tags = [], dataDir, sourcePath } = opts;

    let filePath: string;
    let storageType: StorageType;

    const slug = slugify(title);
    if (!slug) {
      throw new Error("Invalid title: produces empty slug");
    }
    const filename = `${slug}.md`;

    if (destination === "knowledge") {
      const knowledgeDir = join(dataDir, "knowledge");
      mkdirSync(knowledgeDir, { recursive: true });
      filePath = folder
        ? assertPathInside(knowledgeDir, join(folder, filename))
        : assertPathInside(knowledgeDir, filename);
      storageType = "local";
    } else if (destination === "ctxnest") {
      if (!projectId) {
        throw new Error("projectId is required for ctxnest destination");
      }
      const project = db.prepare("SELECT slug FROM projects WHERE id = ?").get(projectId) as
        | { slug: string }
        | undefined;
      if (!project) {
        throw new Error(`Project not found: ${projectId}`);
      }
      const projectDir = join(dataDir, "projects", project.slug);
      mkdirSync(projectDir, { recursive: true });
      filePath = folder
        ? assertPathInside(projectDir, join(folder, filename))
        : assertPathInside(projectDir, filename);
      storageType = "local";
    } else if (destination === "project") {
      if (!projectId) {
        throw new Error("projectId is required for project destination");
      }
      const project = db.prepare("SELECT path FROM projects WHERE id = ?").get(projectId) as
        | { path: string | null }
        | undefined;
      if (!project || !project.path) {
        throw new Error(`Project path not found for project: ${projectId}`);
      }
      filePath = folder
        ? assertPathInside(project.path, join(folder, filename))
        : assertPathInside(project.path, filename);
      storageType = "reference";
    } else {
      throw new Error(`Unknown destination: ${destination}`);
    }

    mkdirSync(dirname(filePath), { recursive: true });

    // Stash pre-existing content (if any) so we can fully restore on a
    // DB-txn failure — otherwise writeFileSync below would leave the
    // caller's content sitting under a row that points elsewhere.
    let preExistingContent: Buffer | null = null;
    if (existsSync(filePath)) {
      try {
        preExistingContent = readFileSync(filePath);
      } catch (e) {
        console.warn("createFile: failed to stash pre-existing content for rollback:", e);
      }
    }

    writeFileSync(filePath, content, "utf8");
    const contentHash = computeHash(content);

    // Files + FTS + tag links in one txn so a partial failure can't leave
    // a row that's unsearchable forever.
    const insertStmt = db.prepare(`
      INSERT INTO files (path, title, project_id, storage_type, source_path, content_hash)
      VALUES (?, ?, ?, ?, ?, ?)
    `);
    const insertTagStmt = db.prepare("INSERT OR IGNORE INTO tags (name) VALUES (?)");
    const getTagStmt = db.prepare("SELECT id FROM tags WHERE name = ?");
    const linkTagStmt = db.prepare("INSERT INTO file_tags (file_id, tag_id) VALUES (?, ?)");
    const ftsStmt = db.prepare("INSERT INTO fts_index (rowid, title, content) VALUES (?, ?, ?)");

    let fileId: number;
    try {
      fileId = db.transaction(() => {
        const result = insertStmt.run(filePath, title, projectId || null, storageType, sourcePath || null, contentHash);
        const id = Number(result.lastInsertRowid);
        if (tags.length > 0) {
          for (const tagName of tags) {
            insertTagStmt.run(tagName);
            const tag = getTagStmt.get(tagName) as { id: number };
            linkTagStmt.run(id, tag.id);
          }
        }
        ftsStmt.run(id, title, content);
        return id;
      })();
    } catch (dbError) {
      // Roll back the file write so the watcher doesn't adopt the orphan
      // as an untitled record (losing the user-supplied title and tags).
      if (preExistingContent !== null) {
        try {
          writeFileSync(filePath, preExistingContent);
        } catch (e) {
          console.error("createFile: failed to restore pre-existing content after DB error:", e);
        }
      } else {
        try {
          unlinkSync(filePath);
        } catch {}
      }
      throw dbError;
    }

    const fileRecord = db.prepare("SELECT * FROM files WHERE id = ?").get(fileId) as FileRecord;

    // Reference files version against their project's own git, not the data dir.
    let gitWarning: string | undefined;
    try {
      let repoDir = dataDir;
      if (storageType === "reference" && projectId) {
        const project = db.prepare("SELECT path FROM projects WHERE id = ?").get(projectId) as
          | { path: string | null }
          | undefined;
        if (project?.path) repoDir = project.path;
      }
      await commitFile(repoDir, filePath, `Create context file: ${title}`);
    } catch (error) {
      gitWarning = error instanceof Error ? error.message : String(error);
      console.warn("Git auto-commit failed during creation:", error);
    }

    return {
      ...fileRecord,
      content,
      ...(gitWarning ? { git_warning: gitWarning } : {}),
    };
  }
  ```

- `CreateFileOptions` interface: the typed input structure for the core `createFile()` helper. Contains required `title`, `content`, `destination`, and `dataDir`, plus optional `projectId`, `folder`, `tags`, and `sourcePath`.
  ```typescript
  export interface CreateFileOptions {
    title: string;
    content: string;
    destination: Destination;
    projectId?: number;
    folder?: string;
    tags?: string[];
    dataDir: string;
    sourcePath?: string;
  }
  ```
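The excerpt does not include `slugify`, which determines the on-disk filename (`${slug}.md`). A plausible sketch, assuming the common lowercase-hyphen convention (the real implementation may differ):

```typescript
// Hypothetical slugify: lowercase, strip diacritics, collapse
// non-alphanumeric runs to hyphens, trim edge hyphens.
// The actual implementation in the codebase may differ.
function slugify(title: string): string {
  return title
    .toLowerCase()
    .normalize("NFKD")
    .replace(/[\u0300-\u036f]/g, "") // strip combining diacritics
    .replace(/[^a-z0-9]+/g, "-")     // non-alphanumeric runs -> hyphen
    .replace(/^-+|-+$/g, "");        // trim leading/trailing hyphens
}

slugify("API overview"); // "api-overview"
slugify("!!!");          // "" (createFile rejects such titles: empty slug)
```

Titles that normalize to an empty slug fail per item and land in `errors[]` rather than aborting the batch.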