
Filesystem MCP Server

Official

Read Multiple Files

read_multiple_files
Read-only

Read contents of multiple files at once to analyze or compare them. Returns each file's content with its path; individual failures do not halt the operation.

Instructions

Read the contents of multiple files simultaneously. This is more efficient than reading files one by one when you need to analyze or compare multiple files. Each file's content is returned with its path as a reference. Failed reads for individual files won't stop the entire operation. Only works within allowed directories.

Input Schema

  • paths (required; no default): Array of file paths to read. Each path must be a string pointing to a valid file within allowed directories.

Output Schema

  • content (required): The concatenated output text: each file's content prefixed with its path, with entries separated by "---".
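For illustration only (the paths here are hypothetical), the arguments for a call to this tool take the shape the input schema describes:

```typescript
// Hypothetical example arguments for a read_multiple_files call.
const args = {
  name: "read_multiple_files",
  arguments: {
    paths: ["/projects/app/src/index.ts", "/projects/app/README.md"],
  },
};
```

The result carries the same text in both `content` (as a text block) and `structuredContent.content`.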

Implementation Reference

  • Zod schema for ReadMultipleFilesArgs - validates that the 'paths' input is an array of strings with at least 1 element.
    const ReadMultipleFilesArgsSchema = z.object({
      paths: z
        .array(z.string())
        .min(1, "At least one file path must be provided")
        .describe("Array of file paths to read. Each path must be a string pointing to a valid file within allowed directories."),
    });
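  • Example (an illustrative stand-in, not part of the server): the same "non-empty array of strings" rule expressed in plain TypeScript, to show which inputs the Zod schema accepts or rejects:

```typescript
// Plain-TypeScript stand-in for ReadMultipleFilesArgsSchema's rule:
// 'paths' must be an array of strings with at least one element.
function validatePathsArg(input: unknown): string[] {
  if (!Array.isArray(input)) {
    throw new Error("paths must be an array");
  }
  if (input.length < 1) {
    throw new Error("At least one file path must be provided");
  }
  if (!input.every((p) => typeof p === "string")) {
    throw new Error("Each path must be a string");
  }
  return input as string[];
}
```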
  • Registration of the 'read_multiple_files' tool with the MCP server, including title, description, input/output schemas, and annotations.
    server.registerTool(
      "read_multiple_files",
      {
        title: "Read Multiple Files",
        description:
          "Read the contents of multiple files simultaneously. This is more " +
          "efficient than reading files one by one when you need to analyze " +
          "or compare multiple files. Each file's content is returned with its " +
          "path as a reference. Failed reads for individual files won't stop " +
          "the entire operation. Only works within allowed directories.",
        inputSchema: {
          paths: z.array(z.string())
            .min(1)
            .describe("Array of file paths to read. Each path must be a string pointing to a valid file within allowed directories.")
        },
        outputSchema: { content: z.string() },
        annotations: { readOnlyHint: true }
      },
      async (args: z.infer<typeof ReadMultipleFilesArgsSchema>) => {
        const results = await Promise.all(
          args.paths.map(async (filePath: string) => {
            try {
              const validPath = await validatePath(filePath);
              const content = await readFileContent(validPath);
              return `${filePath}:\n${content}\n`;
            } catch (error) {
              const errorMessage = error instanceof Error ? error.message : String(error);
              return `${filePath}: Error - ${errorMessage}`;
            }
          }),
        );
        const text = results.join("\n---\n");
        return {
          content: [{ type: "text" as const, text }],
          structuredContent: { content: text }
        };
      }
    );
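  • Output format, made concrete: a small self-contained sketch (canned entries stand in for real file reads) of how the handler's per-file strings are assembled and joined with "---":

```typescript
// Mirrors the handler's formatting: successes become "<path>:\n<content>\n",
// failures become "<path>: Error - <message>", all joined with "\n---\n".
function formatResults(
  entries: Array<{ path: string; content?: string; error?: string }>,
): string {
  const results = entries.map((e) =>
    e.error !== undefined ? `${e.path}: Error - ${e.error}` : `${e.path}:\n${e.content}\n`,
  );
  return results.join("\n---\n");
}

const text = formatResults([
  { path: "a.txt", content: "hello" },
  { path: "missing.txt", error: "ENOENT: no such file or directory" },
]);
// text:
// a.txt:
// hello
//
// ---
// missing.txt: Error - ENOENT: no such file or directory
```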
  • Handler function for read_multiple_files (shown inline in the registration above): reads the requested files concurrently with Promise.all, validates each path, returns each file's content prefixed with its path, and catches errors per file so one failure cannot abort the rest.
  • The readFileContent helper function that reads file content from disk using fs.readFile with UTF-8 encoding.
    export async function readFileContent(filePath: string, encoding: string = 'utf-8'): Promise<string> {
      return await fs.readFile(filePath, encoding as BufferEncoding);
    }
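  • A quick round-trip check of the helper's behavior using only Node's standard library (the temp-file name and contents are made up for the demo, not taken from the server):

```typescript
import { promises as fs } from "node:fs";
import * as os from "node:os";
import * as path from "node:path";

// Same signature as the helper above; reads a file as UTF-8 text.
async function readFileContent(filePath: string, encoding: string = "utf-8"): Promise<string> {
  return await fs.readFile(filePath, encoding as BufferEncoding);
}

// Write a temp file, then read it back: non-ASCII text round-trips under UTF-8.
async function demo(): Promise<string> {
  const dir = await fs.mkdtemp(path.join(os.tmpdir(), "rmf-"));
  const file = path.join(dir, "note.txt");
  await fs.writeFile(file, "héllo, filesystem");
  return readFileContent(file);
}
```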
  • The validatePath helper function that resolves and validates that a requested path is within allowed directories, following symlinks for security.
    export async function validatePath(requestedPath: string): Promise<string> {
      const expandedPath = expandHome(requestedPath);
      const absolute = path.isAbsolute(expandedPath)
        ? path.resolve(expandedPath)
        : resolveRelativePathAgainstAllowedDirectories(expandedPath);
    
      const normalizedRequested = normalizePath(absolute);
    
      // Security: Check if path is within allowed directories before any file operations
      const isAllowed = isPathWithinAllowedDirectories(normalizedRequested, allowedDirectories);
      if (!isAllowed) {
        throw new Error(`Access denied - path outside allowed directories: ${absolute} not in ${allowedDirectories.join(', ')}`);
      }
    
      // Security: Handle symlinks by checking their real path to prevent symlink attacks
      // This prevents attackers from creating symlinks that point outside allowed directories
      try {
        const realPath = await fs.realpath(absolute);
        const normalizedReal = normalizePath(realPath);
        if (!isPathWithinAllowedDirectories(normalizedReal, allowedDirectories)) {
          throw new Error(`Access denied - symlink target outside allowed directories: ${realPath} not in ${allowedDirectories.join(', ')}`);
        }
        return realPath;
      } catch (error) {
        // Security: For new files that don't exist yet, verify parent directory
        // This ensures we can't create files in unauthorized locations
        if ((error as NodeJS.ErrnoException).code === 'ENOENT') {
          const parentDir = path.dirname(absolute);
          try {
            const realParentPath = await fs.realpath(parentDir);
            const normalizedParent = normalizePath(realParentPath);
            if (!isPathWithinAllowedDirectories(normalizedParent, allowedDirectories)) {
              throw new Error(`Access denied - parent directory outside allowed directories: ${realParentPath} not in ${allowedDirectories.join(', ')}`);
            }
            return absolute;
          } catch {
            throw new Error(`Parent directory does not exist: ${parentDir}`);
          }
        }
        throw error;
      }
    }
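The page does not show isPathWithinAllowedDirectories itself; the following is a minimal sketch of what such a containment check typically does, under the assumption that it resolves both paths and then requires the candidate to equal an allowed root or sit beneath it:

```typescript
import * as path from "node:path";

// Sketch only - the real isPathWithinAllowedDirectories is not shown on this page.
function isWithinAllowed(candidate: string, allowedDirs: string[]): boolean {
  const resolved = path.resolve(candidate);
  return allowedDirs.some((dir) => {
    const root = path.resolve(dir);
    // Exact match, or strictly inside the root. The trailing separator
    // prevents "/data/project-evil" from matching the root "/data/project".
    return resolved === root || resolved.startsWith(root + path.sep);
  });
}
```

Note that path.resolve also collapses ".." segments, so a traversal attempt like "/data/project/../secret" is normalized before the prefix check runs.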
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds value beyond annotations by disclosing key behaviors: failed reads for individual files won't stop the entire operation, and it only works within allowed directories. Annotations already mark it as read-only, and the description confirms a safe read operation while adding important fault-tolerance detail.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise with three sentences, all serving a distinct purpose: stating the action, providing usage context, and noting behavioral traits. It is front-loaded with the core purpose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given that an output schema exists, the description does not need to detail return values. It covers all essential aspects: the tool's function, efficiency benefit, failure handling, and constraints. This is complete for a tool with one parameter and read-only annotations.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% coverage for the single parameter 'paths', with a clear description in the schema itself. The tool description does not add new meaning to the parameter beyond what the schema provides, so a baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool reads multiple files simultaneously, using a specific verb ('Read') and resource ('contents of multiple files'). It distinguishes itself from reading files one by one, which is a sibling tool, by highlighting efficiency for analysis or comparison.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear guidance on when to use this tool: when you need to analyze or compare multiple files, as it is more efficient. It also notes a constraint ('Only works within allowed directories'). It does not explicitly list alternatives or exclusions, but the context is sufficient.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
