# fast_read_file
Read files with auto-chunking support: specify byte offsets or line ranges, or use a continuation token from a previous call to page through large files.
## Instructions
Reads a file (with auto-chunking support)
## Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| path | Yes | File path to read | |
| start_offset | No | Starting byte offset | |
| max_size | No | Maximum size to read | |
| line_start | No | Starting line number | |
| line_count | No | Number of lines to read | |
| encoding | No | Text encoding | utf-8 |
| continuation_token | No | Continuation token from a previous call | |
| auto_chunk | No | Enable auto-chunking | |
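The parameters select one of two read modes: byte mode (`start_offset`/`max_size`) or line mode (`line_start`/`line_count`). A sketch of argument objects for each; the file path is a placeholder:

```typescript
// Byte mode: read up to 64 KiB starting at byte 0. A follow-up call would
// advance start_offset by the bytes_read reported in the response.
const byteModeArgs = {
  path: '/var/log/app.log', // placeholder path
  start_offset: 0,
  max_size: 64 * 1024,
  encoding: 'utf-8',
};

// Line mode: read 200 lines starting at line 0. When line_start is present,
// the handler ignores the byte-mode parameters.
const lineModeArgs = {
  path: '/var/log/app.log', // placeholder path
  line_start: 0,
  line_count: 200,
};

console.log(JSON.stringify(byteModeArgs), JSON.stringify(lineModeArgs));
```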
## Implementation Reference
- api/server.ts:429-480 (handler) — the main execution logic for the `fast_read_file` tool. Supports reading by byte offset or by line range, with size limits, truncation, path validation, and metadata in the response.

  ```typescript
  async function handleReadFile(args: any) {
    const {
      path: filePath,
      start_offset = 0,
      max_size,
      line_start,
      line_count,
      encoding = 'utf-8'
    } = args;

    const safePath_resolved = safePath(filePath);
    const stats = await fs.stat(safePath_resolved);
    if (!stats.isFile()) {
      throw new Error('Path is not a file');
    }

    const maxReadSize = max_size ? Math.min(max_size, CLAUDE_MAX_CHUNK_SIZE) : CLAUDE_MAX_CHUNK_SIZE;

    // Line mode: read the whole file, split it, and return the requested slice.
    if (line_start !== undefined) {
      const linesToRead = line_count ? Math.min(line_count, CLAUDE_MAX_LINES) : CLAUDE_MAX_LINES;
      const fileContent = await fs.readFile(safePath_resolved, encoding as BufferEncoding);
      const lines = fileContent.split('\n');
      const selectedLines = lines.slice(line_start, line_start + linesToRead);
      return {
        content: selectedLines.join('\n'),
        mode: 'lines',
        start_line: line_start,
        lines_read: selectedLines.length,
        total_lines: lines.length,
        file_size: stats.size,
        file_size_readable: formatSize(stats.size),
        encoding: encoding,
        path: safePath_resolved
      };
    }

    // Byte mode: read up to maxReadSize bytes starting at start_offset.
    const fileHandle = await fs.open(safePath_resolved, 'r');
    const buffer = Buffer.alloc(maxReadSize);
    const { bytesRead } = await fileHandle.read(buffer, 0, maxReadSize, start_offset);
    await fileHandle.close();

    const content = buffer.subarray(0, bytesRead).toString(encoding as BufferEncoding);
    const result = truncateContent(content);

    return {
      content: result.content,
      mode: 'bytes',
      start_offset: start_offset,
      bytes_read: bytesRead,
      file_size: stats.size,
      file_size_readable: formatSize(stats.size),
      encoding: encoding,
      truncated: result.truncated,
      has_more: start_offset + bytesRead < stats.size,
      path: safePath_resolved
    };
  }
  ```
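  Because the byte-mode response reports `bytes_read` and `has_more`, a client can page through a large file by advancing `start_offset`. A minimal sketch, assuming a hypothetical `callTool` transport; the in-memory fake below exists only so the loop can run:

  ```typescript
  type ReadChunk = { content: string; bytes_read: number; has_more: boolean };

  // Reassemble a file by repeatedly calling the tool until has_more is false.
  async function readWholeFile(
    callTool: (args: { path: string; start_offset: number }) => Promise<ReadChunk>,
    path: string
  ): Promise<string> {
    let offset = 0;
    let out = '';
    for (;;) {
      const chunk = await callTool({ path, start_offset: offset });
      out += chunk.content;
      offset += chunk.bytes_read;
      if (!chunk.has_more) return out;
    }
  }

  // Fake transport serving a 10-byte ASCII "file" in 4-byte chunks.
  const data = 'abcdefghij';
  const fake = async ({ start_offset }: { path: string; start_offset: number }) => {
    const content = data.slice(start_offset, start_offset + 4);
    return {
      content,
      bytes_read: content.length,
      has_more: start_offset + content.length < data.length,
    };
  };

  readWholeFile(fake, '/tmp/example').then((s) => console.log(s)); // prints "abcdefghij"
  ```

  Note that raw byte offsets can split a multibyte UTF-8 character across chunks, and the handler decodes each chunk independently, so a real client should treat chunk boundaries with care for non-ASCII files.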
- api/server.ts:110-121 (schema) — input-schema validation for the `fast_read_file` parameters (English translations of the Korean descriptions added as comments).

  ```typescript
  inputSchema: {
    type: 'object',
    properties: {
      path: { type: 'string', description: '읽을 파일 경로' },            // "File path to read"
      start_offset: { type: 'number', description: '시작 바이트 위치' },   // "Starting byte offset"
      max_size: { type: 'number', description: '읽을 최대 크기' },         // "Maximum size to read"
      line_start: { type: 'number', description: '시작 라인 번호' },       // "Starting line number"
      line_count: { type: 'number', description: '읽을 라인 수' },         // "Number of lines to read"
      encoding: { type: 'string', description: '텍스트 인코딩', default: 'utf-8' } // "Text encoding"
    },
    required: ['path']
  }
  ```
- api/server.ts:107-122 (registration) — tool registration in the `MCP_TOOLS` array, exposed via the `tools/list` endpoint (English translations of the Korean descriptions added as comments).

  ```typescript
  {
    name: 'fast_read_file',
    description: '파일을 읽습니다 (청킹 지원)', // "Reads a file (chunking supported)"
    inputSchema: {
      type: 'object',
      properties: {
        path: { type: 'string', description: '읽을 파일 경로' },            // "File path to read"
        start_offset: { type: 'number', description: '시작 바이트 위치' },   // "Starting byte offset"
        max_size: { type: 'number', description: '읽을 최대 크기' },         // "Maximum size to read"
        line_start: { type: 'number', description: '시작 라인 번호' },       // "Starting line number"
        line_count: { type: 'number', description: '읽을 라인 수' },         // "Number of lines to read"
        encoding: { type: 'string', description: '텍스트 인코딩', default: 'utf-8' } // "Text encoding"
      },
      required: ['path']
    }
  },
  ```
- api/server.ts:323-325 (dispatch) — dispatch of the `fast_read_file` handler in the `tools/call` switch statement.

  ```typescript
  case 'fast_read_file':
    result = await handleReadFile(args);
    break;
  ```
- api/server.ts:77-94 (helper) — helper that truncates large file contents to fit response limits; used by the `fast_read_file` handler.

  ```typescript
  function truncateContent(content: string, maxSize: number = CLAUDE_MAX_RESPONSE_SIZE) {
    const contentBytes = Buffer.byteLength(content, 'utf8');
    if (contentBytes <= maxSize) {
      return { content, truncated: false };
    }
    let truncated = content;
    while (Buffer.byteLength(truncated, 'utf8') > maxSize) {
      truncated = truncated.slice(0, -1); // drop one UTF-16 code unit at a time
    }
    return {
      content: truncated,
      truncated: true,
      original_size: contentBytes,
      truncated_size: Buffer.byteLength(truncated, 'utf8')
    };
  }
  ```
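The trailing `while` loop above trims one UTF-16 code unit at a time until the UTF-8 encoding fits, so multibyte characters cannot push the result over the byte limit. A standalone copy to illustrate, with the size cap made a required parameter for the demo (the 10-byte limit is arbitrary):

```typescript
// Standalone copy of truncateContent for demonstration; the real helper
// defaults maxSize to CLAUDE_MAX_RESPONSE_SIZE.
function truncateContent(content: string, maxSize: number) {
  const contentBytes = Buffer.byteLength(content, 'utf8');
  if (contentBytes <= maxSize) {
    return { content, truncated: false };
  }
  let truncated = content;
  while (Buffer.byteLength(truncated, 'utf8') > maxSize) {
    truncated = truncated.slice(0, -1); // drop one UTF-16 code unit at a time
  }
  return {
    content: truncated,
    truncated: true,
    original_size: contentBytes,
    truncated_size: Buffer.byteLength(truncated, 'utf8'),
  };
}

// '한' is 3 bytes in UTF-8, so five of them (15 bytes) must be trimmed
// to fit a 10-byte cap: 15 → 12 → 9 bytes.
const r = truncateContent('한한한한한', 10);
console.log(r.truncated, r.truncated_size); // true 9
```

Trimming character by character is O(n) in the worst case, but it stays simple and never emits a byte count over the cap regardless of encoding width.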