Glama
rawr-ai

Filesystem MCP Server

regex_search_content

Search file content recursively using a regex pattern. Scan subdirectories from a specified path, return matching files with line details. Optional filters: file pattern, depth, size, and result limits. Works within secure directories.

Instructions

Recursively search file content using a regex pattern. Searches through subdirectories from the starting path. Returns a list of files containing matches, including line numbers and the matching lines. Requires regex and path. Optional: filePattern, maxDepth, maxFileSize, maxResults. Only searches within allowed directories.
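As a concrete illustration, here is a hedged sketch of the argument object a client might send for this tool. The path and pattern values are hypothetical, chosen only for the example; only the field names come from the schema.

```typescript
// Hypothetical arguments for a regex_search_content tool call.
// All values are illustrative; only the field names come from the schema.
const searchArgs = {
  path: "/workspace/src",   // required: directory to start the search from
  regex: "TODO\\s*\\(",     // required: pattern applied to each line of content
  filePattern: "*.ts",      // optional: restrict the search to TypeScript files
  maxDepth: 3,              // optional: recurse at most 3 directory levels
  maxResults: 20,           // optional: return at most 20 matching files
};

// An MCP client would serialize these as the tool-call arguments.
console.log(JSON.stringify(searchArgs));
```

Note that regex backslashes must be escaped in the source string (`"TODO\\s*\\("` for the pattern `TODO\s*\(`), and escaped again if the arguments are hand-written as JSON.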

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| `filePattern` | No | Glob pattern to filter files to search within (e.g., `"*.ts"`, `"data/**.json"`). Defaults to searching all files. | `*` |
| `maxDepth` | No | Maximum directory depth to search recursively. | `2` |
| `maxFileSize` | No | Maximum file size in bytes to read for searching. | 10 MB |
| `maxResults` | No | Maximum number of files with matches to return. | `50` |
| `path` | Yes | Directory path to start the search from. | (none) |
| `regex` | Yes | The regular expression pattern to search for within file content. | (none) |
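Since four of the six parameters are optional, it may help to see how omitted values fall back to the documented defaults. This is a rough, illustrative sketch in plain TypeScript; the real server derives these defaults from a TypeBox schema rather than a helper like this.

```typescript
// Illustrative sketch of the documented defaults for regex_search_content.
// The real server applies these via its TypeBox schema, not this helper.
interface RegexSearchArgs {
  path: string;
  regex: string;
  filePattern?: string;
  maxDepth?: number;
  maxFileSize?: number;
  maxResults?: number;
}

function applyDefaults(args: RegexSearchArgs): Required<RegexSearchArgs> {
  return {
    path: args.path,
    regex: args.regex,
    filePattern: args.filePattern ?? "*",              // search all files
    maxDepth: args.maxDepth ?? 2,                      // two directory levels
    maxFileSize: args.maxFileSize ?? 10 * 1024 * 1024, // 10 MB
    maxResults: args.maxResults ?? 50,                 // 50 matching files
  };
}

console.log(applyDefaults({ path: "/tmp", regex: "foo" }).maxDepth); // 2
```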

Implementation Reference

  • The main handler function that parses arguments, validates path, calls the regexSearchContent helper, formats results, and returns the response.
    export async function handleRegexSearchContent(
      args: unknown,
      allowedDirectories: string[],
      symlinksMap: Map<string, string>,
      noFollowSymlinks: boolean
    ) {
      const parsed = parseArgs(RegexSearchContentArgsSchema, args, 'regex_search_content');
      const {
        path: startPath,
        regex,
        filePattern,
        maxDepth,
        maxFileSize,
        maxResults
      } = parsed;
    
      const validPath = await validatePath(startPath, allowedDirectories, symlinksMap, noFollowSymlinks);
    
      try {
        const results = await regexSearchContent(
          validPath,
          regex,
          filePattern,
          maxDepth,
          maxFileSize,
          maxResults
        );
    
        if (results.length === 0) {
          return { content: [{ type: "text", text: "No matches found for the given regex pattern." }] };
        }
    
        // Format the output
        const formattedResults = results.map(fileResult => {
          const matchesText = fileResult.matches
            .map(match => `  Line ${match.lineNumber}: ${match.lineContent.trim()}`)
            .join('\n');
          return `File: ${fileResult.path}\n${matchesText}`;
        }).join('\n\n');
    
        return {
          content: [{ type: "text", text: formattedResults }],
        };
      } catch (error: any) {
        // Catch errors from regexSearchContent (e.g., invalid regex)
        throw new Error(`Error during regex content search: ${error.message}`);
      }
    }
  • TypeBox schema definition for the tool's input arguments and corresponding TypeScript type.
    export const RegexSearchContentArgsSchema = Type.Object({
      path: Type.String({ description: 'Directory path to start the search from.' }),
      regex: Type.String({ description: 'The regular expression pattern to search for within file content.' }),
      filePattern: Type.Optional(Type.String({ default: '*', description: 'Glob pattern to filter files to search within (e.g., "*.ts", "data/**.json"). Defaults to searching all files.' })),
      maxDepth: Type.Optional(Type.Integer({ minimum: 1, default: 2, description: 'Maximum directory depth to search recursively. Defaults to 2.' })),
      maxFileSize: Type.Optional(Type.Integer({ minimum: 1, default: 10 * 1024 * 1024, description: 'Maximum file size in bytes to read for searching. Defaults to 10MB.' })),
      maxResults: Type.Optional(Type.Integer({ minimum: 1, default: 50, description: 'Maximum number of files with matches to return. Defaults to 50.' }))
    });
    export type RegexSearchContentArgs = Static<typeof RegexSearchContentArgsSchema>;
  • Core utility function that performs the recursive regex search on file contents using streaming readline for large files, respects limits, and returns structured match results.
    export async function regexSearchContent(
      rootPath: string,
      regexPattern: string,
      filePattern: string = '*',
      maxDepth: number = 2,
      maxFileSize: number = 10 * 1024 * 1024, // 10MB default
      maxResults: number = 50
    ): Promise<ReadonlyArray<RegexSearchResult>> {
      const results: RegexSearchResult[] = [];
      let regex: RegExp;
    
      try {
        regex = new RegExp(regexPattern, 'g'); // Global flag to find all matches
      } catch (error: any) {
        throw new Error(`Invalid regex pattern provided: ${error.message}`);
      }
    
      async function search(currentPath: string, currentDepth: number) {
        if (currentDepth >= maxDepth || results.length >= maxResults) {
          return;
        }
    
        let entries;
        try {
          entries = await fsPromises.readdir(currentPath, { withFileTypes: true });
        } catch (error: any) {
          console.warn(`Skipping directory ${currentPath}: ${error.message}`);
          return; // Skip directories we can't read
        }
    
        for (const entry of entries) {
          if (results.length >= maxResults) return; // Check results limit again
    
          const fullPath = path.join(currentPath, entry.name);
          const relativePath = path.relative(rootPath, fullPath);
    
          if (entry.isDirectory()) {
            await search(fullPath, currentDepth + 1);
          } else if (entry.isFile()) {
            // Check if file matches the filePattern glob
            // Match file pattern against the relative path (removed matchBase: true)
            if (!minimatch(relativePath, filePattern, { dot: true })) {
              continue;
            }
    
            try {
              const stats = await fsPromises.stat(fullPath);
              if (stats.size > maxFileSize) {
                console.warn(`Skipping large file ${fullPath}: size ${stats.size} > max ${maxFileSize}`);
                continue;
              }
    
              // Use streaming approach for large files
              const fileStream = createReadStream(fullPath, { encoding: 'utf-8' });
              const rl = readline.createInterface({
                input: fileStream,
                crlfDelay: Infinity, // Handle different line endings
              });
    
              const fileMatches: { lineNumber: number; lineContent: string }[] = [];
              let currentLineNumber = 0;
    
              // Wrap readline processing in a promise
              await new Promise<void>((resolve, reject) => {
                rl.on('line', (line) => {
                  currentLineNumber++;
                  // Reset regex lastIndex before each test if using global flag
                  regex.lastIndex = 0;
                  if (regex.test(line)) {
                    fileMatches.push({ lineNumber: currentLineNumber, lineContent: line });
                  }
                });
    
                rl.on('close', () => {
                  resolve();
                });
    
                rl.on('error', (err) => {
                  // Don't reject, just warn and resolve to continue processing other files
                  console.warn(`Error reading file ${fullPath}: ${err.message}`);
                  resolve();
                });
    
                fileStream.on('error', (err) => {
                  // Handle stream errors (e.g., file not found during read)
                  console.warn(`Error reading file stream ${fullPath}: ${err.message}`);
                  resolve(); // Resolve to allow processing to continue
                });
              });
    
              if (fileMatches.length > 0) {
                if (results.length < maxResults) {
                  results.push({ path: fullPath, matches: fileMatches });
                }
                if (results.length >= maxResults) return; // Stop searching this branch
              }
            } catch (error: any) {
              console.warn(`Skipping file ${fullPath}: ${error.message}`);
              // Continue searching other files even if one fails
            }
          }
        }
      }
    
      await search(rootPath, 0);
      return results;
    }
  • index.ts:295-301 (registration)
    Registers the tool handler in the toolHandlers object, binding it to the FastMCP server execution.
    regex_search_content: (a: unknown) =>
      handleRegexSearchContent(
        a,
        allowedDirectories,
        symlinksMap,
        noFollowSymlinks,
      ),
  • index.ts:334-337 (registration)
    Defines the tool metadata (name and description) in the allTools array used for server.addTool.
    {
      name: "regex_search_content",
      description: "Search file content with regex",
    },
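One subtlety in the search helper above is the `regex.lastIndex = 0` reset before each `test()` call: a RegExp created with the `g` flag is stateful across calls, so without the reset, matches can be silently skipped. A minimal standalone sketch of the pitfall:

```typescript
// Why regexSearchContent resets lastIndex before each test():
// a 'g'-flagged RegExp resumes matching from where the last test() left off.
const pattern = /foo/g;
const lines = ["foo bar", "foo baz"];

// Without resetting, the second test() starts at lastIndex = 3 ("foo " already
// consumed), finds no match in " baz", and the second line is missed.
const naive = lines.filter((line) => pattern.test(line));

// Resetting lastIndex before each test() checks every line from column 0.
const correct = lines.filter((line) => {
  pattern.lastIndex = 0;
  return pattern.test(line);
});

console.log(naive.length, correct.length); // 1 2
```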
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key behaviors: recursive searching through subdirectories, returning match details (files, line numbers, matching lines), directory restrictions ('only searches within allowed directories'), and the required 'regex' parameter. It doesn't mention error conditions, performance characteristics, or authentication needs, but covers the core operational behavior well.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured in three sentences: the first states the core purpose and behavior, the second lists parameters, and the third adds an important constraint. Every sentence earns its place by providing essential information without redundancy. It's appropriately sized for a tool with 6 parameters and complex behavior.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a search tool with no annotations and no output schema, the description provides good coverage of what the tool does, how it behaves, and its constraints. It doesn't describe the exact return format structure (though it mentions 'list of files containing matches, including line numbers and matching lines'), and lacks information about error handling or performance limits beyond the parameter defaults. However, given the complexity and lack of structured metadata, it's reasonably complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already fully documents all 6 parameters. The description mentions all parameters by name and indicates which is required ('regex') and which are optional, but doesn't add meaningful semantic context beyond what the schema provides. The baseline of 3 is appropriate when the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('recursively search file content using a regex pattern') and resource ('files'). It distinguishes itself from siblings like 'search_files' (which likely searches by filename) by specifying content-based regex searching, and from 'read_file' by being a search rather than read operation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context about when to use this tool ('search file content using a regex pattern' and 'recursively search through subdirectories'). It doesn't explicitly mention when NOT to use it or name specific alternatives, but the context is sufficient to understand its specialized regex content search purpose versus other file operations.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
