
MCP File Context Server

by bsmi021

read_context

Read and analyze code files with advanced filtering and chunking, automatically ignoring common artifact directories and files for efficient code examination.

Instructions

Read and analyze code files with advanced filtering and chunking. The server automatically ignores common artifact directories and files:

  • Version Control: .git/

  • Python: .venv/, __pycache__/, *.pyc, etc.

  • JavaScript/Node.js: node_modules/, bower_components/, .next/, dist/, etc.

  • IDE/Editor: .idea/, .vscode/, .env, etc.

For large files or directories, call get_chunk_count first to determine the total number of chunks, then request specific chunks with the chunkNumber parameter.
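In isolation, the chunk arithmetic behind this workflow can be sketched as follows. The `chunkSize` value here is illustrative, not the server's actual configuration:

```typescript
// Sketch of the chunking model: content is split into fixed-size chunks
// addressed by a 0-based chunkNumber. chunkSize is an assumed example value.
const chunkSize = 4;

function getChunkCount(content: string): number {
    // Empty content still counts as one (empty) chunk.
    return Math.max(1, Math.ceil(content.length / chunkSize));
}

function getChunk(content: string, chunkNumber: number): { content: string; hasMore: boolean } {
    const start = chunkNumber * chunkSize;
    return {
        content: content.slice(start, start + chunkSize),
        hasMore: start + chunkSize < content.length,
    };
}

const text = "0123456789";
console.log(getChunkCount(text));  // 3
console.log(getChunk(text, 2));    // { content: "89", hasMore: false }
```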

Input Schema

Name        | Required | Description                                                                                                       | Default
path        | Yes      | Path to file or directory to read                                                                                 | —
maxSize     | No       | Maximum file size in bytes. Files larger than this will be chunked.                                               | 1048576
encoding    | No       | File encoding (e.g., utf8, ascii, latin1)                                                                         | utf8
recursive   | No       | Whether to read directories recursively (includes subdirectories)                                                 | true
fileTypes   | No       | File extension(s) to include WITHOUT dots (e.g. ["ts", "js", "py"] or just "ts"). Empty/undefined means all files. | []
chunkNumber | No       | Which chunk to return (0-based). Use with get_chunk_count to handle large files/directories.                      | 0
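A call using these parameters might look like the following (illustrative values):

```json
{
    "path": "./src",
    "recursive": true,
    "fileTypes": ["ts", "js"],
    "chunkNumber": 0
}
```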

Implementation Reference

  • Handler method that processes read_context tool requests, reads file/directory content using readContent, chunks it, and returns JSON response.
    private async handleReadFile(args: any) {
        const {
            path: filePath,
            encoding = 'utf8',
            maxSize,
            recursive = true,
            fileTypes,
            chunkNumber = 0
        } = args;
    
        try {
            const filesInfo = await this.readContent(filePath, encoding as BufferEncoding, maxSize, recursive, fileTypes);
            const { content, hasMore } = this.getContentChunk(filesInfo, chunkNumber * this.config.chunkSize);
    
            return this.createJsonResponse({
                content,
                hasMore,
                nextChunk: hasMore ? chunkNumber + 1 : null
            });
        } catch (error) {
            throw this.handleFileOperationError(error, 'read file', filePath);
        }
    }
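The `hasMore`/`nextChunk` contract returned above lets a caller drain a large file with a simple loop. A minimal sketch against a stand-in `readContext` function (not the real handler, which performs the actual file I/O):

```typescript
interface ChunkResponse {
    content: string;
    hasMore: boolean;
    nextChunk: number | null;
}

// Stand-in for the server call: returns one pre-split chunk per invocation.
function readContext(chunks: string[], chunkNumber: number): ChunkResponse {
    const hasMore = chunkNumber < chunks.length - 1;
    return {
        content: chunks[chunkNumber],
        hasMore,
        nextChunk: hasMore ? chunkNumber + 1 : null,
    };
}

// Follow nextChunk until it is null to reassemble the full content.
function readAll(chunks: string[]): string {
    let out = "";
    let next: number | null = 0;
    while (next !== null) {
        const resp = readContext(chunks, next);
        out += resp.content;
        next = resp.nextChunk;
    }
    return out;
}

console.log(readAll(["foo", "bar", "baz"])); // "foobarbaz"
```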
  • Input schema and description for the read_context tool defined in server capabilities.
    read_context: {
        description: 'WARNING: Run get_chunk_count first to determine total chunks, then request specific chunks using chunkNumber parameter.\nRead and analyze code files with advanced filtering and chunking. The server automatically ignores common artifact directories and files:\n- Version Control: .git/\n- Python: .venv/, __pycache__/, *.pyc, etc.\n- JavaScript/Node.js: node_modules/, bower_components/, .next/, dist/, etc.\n- IDE/Editor: .idea/, .vscode/, .env, etc.\n\n**WARNING** use get_chunk_count first to determine total chunks, then request specific chunks using chunkNumber parameter.',
        inputSchema: {
            type: 'object',
            properties: {
                path: {
                    type: 'string',
                    description: 'Path to file or directory to read'
                },
                maxSize: {
                    type: 'number',
                    description: 'Maximum file size in bytes. Files larger than this will be chunked.',
                    default: 1048576
                },
                encoding: {
                    type: 'string',
                    description: 'File encoding (e.g., utf8, ascii, latin1)',
                    default: 'utf8'
                },
                recursive: {
                    type: 'boolean',
                    description: 'Whether to read directories recursively (includes subdirectories)',
                    default: true
                },
                fileTypes: {
                    type: ['array', 'string'],
                    items: { type: 'string' },
                    description: 'File extension(s) to include WITHOUT dots (e.g. ["ts", "js", "py"] or just "ts"). Empty/undefined means all files.',
                    default: []
                },
                chunkNumber: {
                    type: 'number',
                    description: 'Which chunk to return (0-based). Use with get_chunk_count to handle large files/directories.',
                    default: 0
                }
            },
            required: ['path']
        }
    },
  • src/index.ts:1604-1626 (registration)
    Registration of read_context tool in the CallToolRequestSchema request handler switch statement, mapping to handleReadFile.
    switch (request.params.name) {
        case 'list_context_files':
            return await this.handleListFiles(request.params.arguments);
        case 'read_context':
            return await this.handleReadFile(request.params.arguments);
        case 'search_context':
            return await this.handleSearchFiles(request.params.arguments);
        case 'get_chunk_count':
            return await this.handleGetChunkCount(request.params.arguments);
        case 'set_profile':
            return await this.handleSetProfile(request.params.arguments);
        case 'get_profile_context':
            return await this.handleGetProfileContext(request.params.arguments);
        case 'generate_outline':
            return await this.handleGenerateOutline(request.params.arguments);
        case 'getFiles':
            return await this.handleGetFiles(request.params.arguments);
        default:
            throw new McpError(
                ErrorCode.MethodNotFound,
                `Unknown tool: ${request.params.name}`
            );
    }
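The switch above is a plain name-to-handler dispatch; the same shape can be expressed as a lookup table, which keeps the unknown-tool error in one place. This is a sketch with stand-in handlers, not the server's code:

```typescript
type Handler = (args: unknown) => string;

// Stand-in handlers that just report which method would run.
const handlers: Record<string, Handler> = {
    read_context: () => "handleReadFile",
    get_chunk_count: () => "handleGetChunkCount",
};

function dispatch(name: string, args: unknown): string {
    const handler = handlers[name];
    // Mirrors the default case: unknown tool names are an error.
    if (!handler) throw new Error(`Unknown tool: ${name}`);
    return handler(args);
}

console.log(dispatch("read_context", {})); // "handleReadFile"
```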
  • Core helper method implementing the file reading logic for read_context, handling single files and directories with glob, filtering, caching, and ignoring patterns.
    private async readContent(
        filePath: string,
        encoding: BufferEncoding = 'utf8',
        maxSize?: number,
        recursive: boolean = true,
        fileTypes?: string[] | string
    ): Promise<FilesInfo> {
        const filesInfo: FilesInfo = {};
        const absolutePath = path.resolve(filePath);
        const cleanFileTypes = Array.isArray(fileTypes)
            ? fileTypes.map(ext => ext.toLowerCase().replace(/^\./, ''))
            : fileTypes
                ? [fileTypes.toLowerCase().replace(/^\./, '')]
                : undefined;
    
        await this.loggingService.debug('Reading content with file type filtering', {
            cleanFileTypes,
            absolutePath,
            operation: 'read_content'
        });
    
        // Handle single file
        if ((await fs.stat(absolutePath)).isFile()) {
            if (cleanFileTypes && !cleanFileTypes.some(ext => absolutePath.toLowerCase().endsWith(`.${ext}`))) {
                return filesInfo;
            }
    
            const stat = await fs.stat(absolutePath);
            if (maxSize && stat.size > maxSize) {
                throw new FileOperationError(
                    FileErrorCode.FILE_TOO_LARGE,
                    `File ${absolutePath} exceeds maximum size limit of ${maxSize} bytes`,
                    absolutePath
                );
            }
    
            // Check cache first
            const cached = this.fileContentCache.get(absolutePath);
            let content: string;
            if (cached && cached.lastModified === stat.mtimeMs) {
                content = cached.content;
            } else {
                content = await fs.readFile(absolutePath, encoding);
                this.fileContentCache.set(absolutePath, {
                    content,
                    lastModified: stat.mtimeMs
                });
            }
    
            const hash = createHash('md5').update(content).digest('hex');
            filesInfo[absolutePath] = {
                path: absolutePath,
                content,
                hash,
                size: stat.size,
                lastModified: stat.mtimeMs
            };
    
            return filesInfo;
        }
    
        // Handle directory: use POSIX join for glob
        const pattern = recursive ? '**/*' : '*';
        const globPattern = path.posix.join(absolutePath.split(path.sep).join(path.posix.sep), pattern);
    
        const files = await this.globPromise(globPattern, {
            ignore: DEFAULT_IGNORE_PATTERNS,
            nodir: true,
            dot: false,
            cache: true,
            follow: false
        });
    
        await Promise.all(files.map(async (file) => {
            if (cleanFileTypes && !cleanFileTypes.some(ext => file.toLowerCase().endsWith(`.${ext}`))) {
                return;
            }
    
            try {
                const stat = await fs.stat(file);
                if (maxSize && stat.size > maxSize) {
                    return;
                }
    
                // Check cache first
                const cached = this.fileContentCache.get(file);
                let content: string;
                if (cached && cached.lastModified === stat.mtimeMs) {
                    content = cached.content;
                } else {
                    content = await fs.readFile(file, encoding);
                    this.fileContentCache.set(file, {
                        content,
                        lastModified: stat.mtimeMs
                    });
                }
    
                const hash = createHash('md5').update(content).digest('hex');
                filesInfo[file] = {
                    path: file,
                    content,
                    hash,
                    size: stat.size,
                    lastModified: stat.mtimeMs
                };
            } catch (error) {
                await this.loggingService.error('Error reading file for info collection', error as Error, {
                    filePath: file,
                    operation: 'get_files_info'
                });
            }
        }));
    
        return filesInfo;
    }
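The extension filtering above normalizes user input (drops a leading dot, lowercases) before matching on the file suffix. That logic in isolation:

```typescript
// Mirrors the cleanFileTypes normalization in readContent: accept "ts",
// ".ts", "TS", or an array of such, and match case-insensitively.
function normalizeFileTypes(fileTypes?: string[] | string): string[] | undefined {
    return Array.isArray(fileTypes)
        ? fileTypes.map(ext => ext.toLowerCase().replace(/^\./, ''))
        : fileTypes
            ? [fileTypes.toLowerCase().replace(/^\./, '')]
            : undefined;
}

function matches(filePath: string, cleanFileTypes?: string[]): boolean {
    if (!cleanFileTypes) return true; // undefined means all files
    return cleanFileTypes.some(ext => filePath.toLowerCase().endsWith(`.${ext}`));
}

console.log(matches("src/App.TSX", normalizeFileTypes([".tsx", "js"]))); // true
```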
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden and does an excellent job disclosing behavioral traits: it specifies automatic directory exclusions (version control, Python artifacts, JavaScript/Node.js, IDE/editor files), describes chunking behavior for large files, and explains the relationship with get_chunk_count. It doesn't mention error handling, performance characteristics, or authentication needs, but provides substantial operational context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded with the core purpose. The bulleted list of exclusions is efficient, and the guidance about get_chunk_count is necessary context. While slightly longer than minimal, every sentence earns its place by providing essential operational information that isn't in the schema.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (6 parameters, chunking behavior, filtering logic) and no annotations/output schema, the description does an excellent job covering operational context. It explains the automatic exclusions, chunking workflow, and relationship with sibling tools. The main gap is lack of information about return format/content, but this is reasonable given the tool's primary focus on reading operations.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all 6 parameters thoroughly. The description adds some context about chunkNumber usage ('Use with get_chunk_count to handle large files/directories') and implies filtering through the automatic exclusions list, but doesn't provide additional parameter semantics beyond what's in the schema. This meets the baseline expectation when schema coverage is complete.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('read and analyze code files') and resources ('code files'), and distinguishes it from siblings by mentioning advanced filtering/chunking capabilities and automatic directory exclusions. It goes beyond a simple read operation by describing analysis and filtering features.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool versus alternatives: it mentions using 'get_chunk_count first to determine total chunks' for large files/directories, and the automatic exclusion list helps users understand when this tool is appropriate versus when manual filtering might be needed elsewhere. It also distinguishes from 'getFiles' by focusing on content reading rather than just file listing.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
