
read_context

Analyze and read code files with customizable filters, recursive directory traversal, and chunked handling for large files. Automatically excludes common artifact directories like .git/, node_modules/, and .venv/ for efficient processing.

Instructions

Read and analyze code files with advanced filtering and chunking. The server automatically ignores common artifact directories and files:

  • Version Control: .git/

  • Python: .venv/, __pycache__/, *.pyc, etc.

  • JavaScript/Node.js: node_modules/, bower_components/, .next/, dist/, etc.

  • IDE/Editor: .idea/, .vscode/, .env, etc.

For large files or directories, call get_chunk_count first to determine the total number of chunks, then request specific chunks with the chunkNumber parameter.
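The chunked read loop can be sketched from the client side. This is an illustrative sketch only: `callTool` stands in for whatever tool-call mechanism your MCP client provides, and is mocked here so the control flow runs standalone; the `{ content, hasMore, nextChunk }` response shape follows the handler documented below.

```typescript
// Sketch of the read_context chunk loop. `callTool` is a stand-in for an
// MCP client's tool-call method, mocked here so the loop runs standalone.
type ChunkResponse = { content: string; hasMore: boolean; nextChunk: number | null };

const mockChunks = ['chunk-0 ', 'chunk-1 ', 'chunk-2'];

async function callTool(name: string, args: Record<string, unknown>): Promise<ChunkResponse> {
  // Mocked server: returns one chunk per call, mirroring read_context's
  // { content, hasMore, nextChunk } response shape.
  const n = (args.chunkNumber as number) ?? 0;
  const last = n >= mockChunks.length - 1;
  return { content: mockChunks[n], hasMore: !last, nextChunk: last ? null : n + 1 };
}

async function readAll(path: string): Promise<string> {
  let chunkNumber: number | null = 0;
  let full = '';
  while (chunkNumber !== null) {
    const res: ChunkResponse = await callTool('read_context', { path, chunkNumber });
    full += res.content;
    chunkNumber = res.nextChunk; // null once hasMore is false
  }
  return full;
}
```

Following `nextChunk` until it is `null` avoids off-by-one errors compared to tracking a separate counter against the chunk total.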

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| chunkNumber | No | Which chunk to return (0-based). Use with get_chunk_count to handle large files/directories. | 0 |
| encoding | No | File encoding (e.g., utf8, ascii, latin1) | utf8 |
| fileTypes | No | File extension(s) to include WITHOUT dots (e.g. ["ts", "js", "py"] or just "ts"). Empty/undefined means all files. | [] |
| maxSize | No | Maximum file size in bytes. Files larger than this will be chunked. | 1048576 |
| path | Yes | Path to file or directory to read | — |
| recursive | No | Whether to read directories recursively (includes subdirectories) | true |
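The schema above can be restated as a TypeScript interface with a sample arguments object. The interface name is illustrative, not something the server exports:

```typescript
// Illustrative typing of the read_context input schema; the interface name
// is hypothetical, not part of the server's API surface.
interface ReadContextArgs {
  path: string;                  // required
  maxSize?: number;              // default 1048576
  encoding?: string;             // default 'utf8'
  recursive?: boolean;           // default true
  fileTypes?: string[] | string; // default [] (all files)
  chunkNumber?: number;          // default 0
}

// Example: read only TypeScript and JavaScript files under src/, first chunk.
const args: ReadContextArgs = {
  path: 'src',
  fileTypes: ['ts', 'js'],
  recursive: true,
  chunkNumber: 0,
};
```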

Implementation Reference

  • src/index.ts:267-306 (registration)
    Server capabilities registration defining the read_context tool with description and input schema.
```typescript
read_context: {
  description:
    'WARNING: Run get_chunk_count first to determine total chunks, then request specific chunks using chunkNumber parameter.\nRead and analyze code files with advanced filtering and chunking. The server automatically ignores common artifact directories and files:\n- Version Control: .git/\n- Python: .venv/, __pycache__/, *.pyc, etc.\n- JavaScript/Node.js: node_modules/, bower_components/, .next/, dist/, etc.\n- IDE/Editor: .idea/, .vscode/, .env, etc.\n\n**WARNING** use get_chunk_count first to determine total chunks, then request specific chunks using chunkNumber parameter.',
  inputSchema: {
    type: 'object',
    properties: {
      path: { type: 'string', description: 'Path to file or directory to read' },
      maxSize: {
        type: 'number',
        description: 'Maximum file size in bytes. Files larger than this will be chunked.',
        default: 1048576
      },
      encoding: {
        type: 'string',
        description: 'File encoding (e.g., utf8, ascii, latin1)',
        default: 'utf8'
      },
      recursive: {
        type: 'boolean',
        description: 'Whether to read directories recursively (includes subdirectories)',
        default: true
      },
      fileTypes: {
        type: ['array', 'string'],
        items: { type: 'string' },
        description: 'File extension(s) to include WITHOUT dots (e.g. ["ts", "js", "py"] or just "ts"). Empty/undefined means all files.',
        default: []
      },
      chunkNumber: {
        type: 'number',
        description: 'Which chunk to return (0-based). Use with get_chunk_count to handle large files/directories.',
        default: 0
      }
    },
    required: ['path']
  }
},
```
  • Primary handler function for read_context tool: parses args, calls readContent, chunks output, returns JSON response.
```typescript
private async handleReadFile(args: any) {
  const { path: filePath, encoding = 'utf8', maxSize, recursive = true, fileTypes, chunkNumber = 0 } = args;
  try {
    const filesInfo = await this.readContent(filePath, encoding as BufferEncoding, maxSize, recursive, fileTypes);
    const { content, hasMore } = this.getContentChunk(filesInfo, chunkNumber * this.config.chunkSize);
    return this.createJsonResponse({ content, hasMore, nextChunk: hasMore ? chunkNumber + 1 : null });
  } catch (error) {
    throw this.handleFileOperationError(error, 'read file', filePath);
  }
}
```
  • Core helper implementing file/directory reading logic: globbing, filtering by type/size/ignores, caching, hashing for read_context.
```typescript
private async readContent(
  filePath: string,
  encoding: BufferEncoding = 'utf8',
  maxSize?: number,
  recursive: boolean = true,
  fileTypes?: string[] | string
): Promise<FilesInfo> {
  const filesInfo: FilesInfo = {};
  const absolutePath = path.resolve(filePath);

  const cleanFileTypes = Array.isArray(fileTypes)
    ? fileTypes.map(ext => ext.toLowerCase().replace(/^\./, ''))
    : fileTypes
      ? [fileTypes.toLowerCase().replace(/^\./, '')]
      : undefined;

  await this.loggingService.debug('Reading content with file type filtering', {
    cleanFileTypes,
    absolutePath,
    operation: 'read_content'
  });

  // Handle single file
  if ((await fs.stat(absolutePath)).isFile()) {
    if (cleanFileTypes && !cleanFileTypes.some(ext => absolutePath.toLowerCase().endsWith(`.${ext}`))) {
      return filesInfo;
    }
    const stat = await fs.stat(absolutePath);
    if (maxSize && stat.size > maxSize) {
      throw new FileOperationError(
        FileErrorCode.FILE_TOO_LARGE,
        `File ${absolutePath} exceeds maximum size limit of ${maxSize} bytes`,
        absolutePath
      );
    }
    // Check cache first
    const cached = this.fileContentCache.get(absolutePath);
    let content: string;
    if (cached && cached.lastModified === stat.mtimeMs) {
      content = cached.content;
    } else {
      content = await fs.readFile(absolutePath, encoding);
      this.fileContentCache.set(absolutePath, { content, lastModified: stat.mtimeMs });
    }
    const hash = createHash('md5').update(content).digest('hex');
    filesInfo[absolutePath] = { path: absolutePath, content, hash, size: stat.size, lastModified: stat.mtimeMs };
    return filesInfo;
  }

  // Handle directory: use POSIX join for glob
  const pattern = recursive ? '**/*' : '*';
  const globPattern = path.posix.join(absolutePath.split(path.sep).join(path.posix.sep), pattern);
  const files = await this.globPromise(globPattern, {
    ignore: DEFAULT_IGNORE_PATTERNS,
    nodir: true,
    dot: false,
    cache: true,
    follow: false
  });

  await Promise.all(files.map(async (file) => {
    if (cleanFileTypes && !cleanFileTypes.some(ext => file.toLowerCase().endsWith(`.${ext}`))) {
      return;
    }
    try {
      const stat = await fs.stat(file);
      if (maxSize && stat.size > maxSize) {
        return;
      }
      // Check cache first
      const cached = this.fileContentCache.get(file);
      let content: string;
      if (cached && cached.lastModified === stat.mtimeMs) {
        content = cached.content;
      } else {
        content = await fs.readFile(file, encoding);
        this.fileContentCache.set(file, { content, lastModified: stat.mtimeMs });
      }
      const hash = createHash('md5').update(content).digest('hex');
      filesInfo[file] = { path: file, content, hash, size: stat.size, lastModified: stat.mtimeMs };
    } catch (error) {
      await this.loggingService.error('Error reading file for info collection', error as Error, {
        filePath: file,
        operation: 'get_files_info'
      });
    }
  }));

  return filesInfo;
}
```
  • src/index.ts:1598-1639 (registration)
    Tool dispatch registration in CallToolRequestHandler: maps 'read_context' calls to handleReadFile method.
```typescript
this.server.setRequestHandler(CallToolRequestSchema, async (request) => {
  try {
    if (!request.params.arguments) {
      throw new McpError(ErrorCode.InvalidParams, 'Missing arguments');
    }
    switch (request.params.name) {
      case 'list_context_files': return await this.handleListFiles(request.params.arguments);
      case 'read_context': return await this.handleReadFile(request.params.arguments);
      case 'search_context': return await this.handleSearchFiles(request.params.arguments);
      case 'get_chunk_count': return await this.handleGetChunkCount(request.params.arguments);
      case 'set_profile': return await this.handleSetProfile(request.params.arguments);
      case 'get_profile_context': return await this.handleGetProfileContext(request.params.arguments);
      case 'generate_outline': return await this.handleGenerateOutline(request.params.arguments);
      case 'getFiles': return await this.handleGetFiles(request.params.arguments);
      default:
        throw new McpError(
          ErrorCode.MethodNotFound,
          `Unknown tool: ${request.params.name}`
        );
    }
  } catch (error) {
    if (error instanceof FileOperationError) {
      return {
        content: [{ type: 'text', text: `File operation error: ${error.message} (${error.code})` }],
        isError: true
      };
    }
    throw error;
  }
});
```
  • Tool schema definition returned by ListToolsRequestHandler.
```typescript
{
  name: 'read_context',
  description:
    'Read and analyze code files with advanced filtering and chunking. The server automatically ignores common artifact directories and files:\n- Version Control: .git/\n- Python: .venv/, __pycache__/, *.pyc, etc.\n- JavaScript/Node.js: node_modules/, bower_components/, .next/, dist/, etc.\n- IDE/Editor: .idea/, .vscode/, .env, etc.\n\nFor large files or directories, use get_chunk_count first to determine total chunks, then request specific chunks using chunkNumber parameter.',
  inputSchema: {
    type: 'object',
    properties: {
      path: { type: 'string', description: 'Path to file or directory to read' },
      maxSize: {
        type: 'number',
        description: 'Maximum file size in bytes. Files larger than this will be chunked.',
        default: 1048576
      },
      encoding: {
        type: 'string',
        description: 'File encoding (e.g., utf8, ascii, latin1)',
        default: 'utf8'
      },
      recursive: {
        type: 'boolean',
        description: 'Whether to read directories recursively (includes subdirectories)',
        default: true
      },
      fileTypes: {
        type: ['array', 'string'],
        items: { type: 'string' },
        description: 'File extension(s) to include WITHOUT dots (e.g. ["ts", "js", "py"] or just "ts"). Empty/undefined means all files.',
        default: []
      },
      chunkNumber: {
        type: 'number',
        description: 'Which chunk to return (0-based). Use with get_chunk_count to handle large files/directories.',
        default: 0
      }
    },
    required: ['path']
  }
},
```
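Two pieces of the implementation above lend themselves to standalone sketches: the fileTypes normalization at the top of readContent, and the slicing that getContentChunk (not shown on this page) performs for handleReadFile. The function names and the tiny chunk size below are illustrative assumptions, not the server's exports:

```typescript
// 1. Extension normalization, mirroring readContent: accept a single string
// or an array, lowercase, and strip any leading dot, so both "ts" and ".TS"
// end up matching files that end in ".ts".
function normalizeFileTypes(fileTypes?: string[] | string): string[] | undefined {
  return Array.isArray(fileTypes)
    ? fileTypes.map(ext => ext.toLowerCase().replace(/^\./, ''))
    : fileTypes
      ? [fileTypes.toLowerCase().replace(/^\./, '')]
      : undefined;
}

// 2. Chunk slicing in the spirit of getContentChunk: the handler passes
// chunkNumber * chunkSize as a character offset and receives one slice plus
// a hasMore flag. The real implementation may differ in detail.
const CHUNK_SIZE = 4; // tiny, for illustration; the server's configured size is larger

function sliceChunk(serialized: string, offset: number): { content: string; hasMore: boolean } {
  return {
    content: serialized.slice(offset, offset + CHUNK_SIZE),
    hasMore: offset + CHUNK_SIZE < serialized.length,
  };
}
```

With this slicing scheme, `hasMore` is false exactly when the current slice reaches the end of the serialized content, which is what lets the handler return `nextChunk: null` as the loop terminator.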

MCP directory API

We provide all the information about MCP servers via our MCP API.

```shell
curl -X GET 'https://glama.ai/api/mcp/v1/servers/bsmi021/mcp-file-context-server'
```
