mcp-claude-code by SDGLBL

read

Access and read files from the local filesystem to analyze content, supporting large files with optional line-based control for efficient data retrieval.

Instructions

Reads a file from the local filesystem. You can access any file directly by using this tool. Assume this tool is able to read all files on the machine. If the User provides a path to a file assume that path is valid. It is okay to read a file that does not exist; an error will be returned.

Usage:

  • The file_path parameter must be an absolute path, not a relative path

  • By default, it reads up to 2000 lines starting from the beginning of the file

  • You can optionally specify a line offset and limit (especially handy for long files), but it's recommended to read the whole file by not providing these parameters

  • Any lines longer than 2000 characters will be truncated

  • Results are returned using cat -n format, with line numbers starting at 1

  • For Jupyter notebooks (.ipynb files), use the notebook_read tool instead

  • When reading multiple files, you MUST use the batch tool to read them all at once
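The numbering, offset, and truncation rules above can be sketched in a few lines of Python. This is an illustration only: the defaults mirror the documented 2000-line and 2000-character limits, and the truncation indicator text is an assumption, not the tool's actual string.

```python
def format_lines(text: str, offset: int = 0, limit: int = 2000,
                 max_line_length: int = 2000) -> str:
    """Mimic the tool's cat -n style output: 1-based global line
    numbers, `offset` leading lines skipped, at most `limit` lines
    returned, and overlong lines truncated."""
    out = []
    for i, line in enumerate(text.splitlines()):
        if i < offset:
            continue            # skip lines before the offset
        if len(out) >= limit:
            break               # stop once `limit` lines are collected
        if len(line) > max_line_length:
            line = line[:max_line_length] + "... [truncated]"
        out.append(f"{i + 1:6d}  {line}")  # keep the original line number
    return "\n".join(out)
```

Note that the line numbers stay global: reading with `offset=1` still labels the first returned line `2`, matching the cat -n convention described above.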

Input Schema

Name      | Required | Description                                                                                  | Default
file_path | Yes      | The absolute path to the file to read                                                        | (none)
offset    | No       | The line number to start reading from. Only provide if the file is too large to read at once | 0
limit     | No       | The number of lines to read. Only provide if the file is too large to read at once           | 2000
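For illustration, two hypothetical parameter payloads (the path is made up). Note that the handler's `i < offset` check treats offset as a count of lines to skip, so `offset=2000` begins output at line 2001:

```python
# Default read: first 2000 lines of the file.
read_whole = {"file_path": "/var/log/app.log"}

# Paged read of a large file: lines 2001-4000.
# offset is the number of leading lines skipped, per the handler's
# `if i < offset: continue` logic.
read_page = {"file_path": "/var/log/app.log", "offset": 2000, "limit": 2000}
```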

Output Schema

Name   | Required | Description | Default
result | Yes      | (none)      | (none)

Implementation Reference

  • Core handler function for the 'read' tool. Validates parameters, checks permissions, reads the file with line numbering (cat -n style), handles large files with offset/limit, truncates long lines, supports UTF-8 and fallback to latin-1 encoding, and provides detailed logging.
    async def call(
        self,
        ctx: MCPContext,
        **params: Unpack[ReadToolParams],
    ) -> str:
        """Execute the tool with the given parameters.
    
        Args:
            ctx: MCP context
            **params: Tool parameters
    
        Returns:
            Tool result
        """
        tool_ctx = self.create_tool_context(ctx)
        self.set_tool_context_info(tool_ctx)
    
        # Extract parameters
        file_path = params.get("file_path")
        offset = params.get("offset", 0)
        limit = params.get("limit", self.DEFAULT_LINE_LIMIT)
    
        # Validate required parameters for direct calls (not through MCP framework)
        if not file_path:
            await tool_ctx.error("Parameter 'file_path' is required but was None")
            return "Error: Parameter 'file_path' is required but was None"
    
        await tool_ctx.info(
            f"Reading file: {file_path} (offset: {offset}, limit: {limit})"
        )
    
        # Check if path is allowed
        if not self.is_path_allowed(file_path):
            await tool_ctx.error(
                f"Access denied - path outside allowed directories: {file_path}"
            )
            return (
                f"Error: Access denied - path outside allowed directories: {file_path}"
            )
    
        try:
            file_path_obj = Path(file_path)
    
            if not file_path_obj.exists():
                await tool_ctx.error(f"File does not exist: {file_path}")
                return f"Error: File does not exist: {file_path}"
    
            if not file_path_obj.is_file():
                await tool_ctx.error(f"Path is not a file: {file_path}")
                return f"Error: Path is not a file: {file_path}"
    
            # Read the file
            try:
                # Read and process the file with line numbers and truncation
                lines = []
                current_line = 0
                truncated_lines = False
    
                # Try with utf-8 encoding first
                try:
                    with open(file_path_obj, "r", encoding="utf-8") as f:
                        for i, line in enumerate(f):
                            # Skip lines before offset
                            if i < offset:
                                continue
    
                            # Stop after reading 'limit' lines
                            if current_line >= limit:
                                truncated_lines = True
                                break
    
                            current_line += 1
    
                            # Truncate long lines
                            if len(line) > self.MAX_LINE_LENGTH:
                                line = (
                                    line[: self.MAX_LINE_LENGTH]
                                    + self.LINE_TRUNCATION_INDICATOR
                                )
    
                            # Add line with line number (1-based)
                            lines.append(f"{i + 1:6d}  {line.rstrip()}")
    
                except UnicodeDecodeError:
                    # Try with latin-1 encoding
                    try:
                        lines = []
                        current_line = 0
                        truncated_lines = False
    
                        with open(file_path_obj, "r", encoding="latin-1") as f:
                            for i, line in enumerate(f):
                                # Skip lines before offset
                                if i < offset:
                                    continue
    
                                # Stop after reading 'limit' lines
                                if current_line >= limit:
                                    truncated_lines = True
                                    break
    
                                current_line += 1
    
                                # Truncate long lines
                                if len(line) > self.MAX_LINE_LENGTH:
                                    line = (
                                        line[: self.MAX_LINE_LENGTH]
                                        + self.LINE_TRUNCATION_INDICATOR
                                    )
    
                                # Add line with line number (1-based)
                                lines.append(f"{i + 1:6d}  {line.rstrip()}")
    
                        await tool_ctx.warning(
                            f"File read with latin-1 encoding: {file_path}"
                        )
    
                    except Exception:
                        await tool_ctx.error(f"Cannot read binary file: {file_path}")
                        return f"Error: Cannot read binary file: {file_path}"
    
                # Format the result
                result = "\n".join(lines)
    
                # Add truncation message if necessary
                if truncated_lines:
                    result += f"\n... (output truncated at {limit} lines; use offset to read beyond line {offset + limit})"
    
                await tool_ctx.info(f"Successfully read file: {file_path}")
                return result
    
            except Exception as e:
                await tool_ctx.error(f"Error reading file: {str(e)}")
                return f"Error: {str(e)}"
    
        except Exception as e:
            await tool_ctx.error(f"Error reading file: {str(e)}")
            return f"Error: {str(e)}"
    
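The UTF-8 to latin-1 fallback in the handler above can be isolated into a small helper. This is a sketch, not the project's actual API; latin-1 maps every byte to a code point, so the fallback path never raises UnicodeDecodeError, and binary files decode to mojibake rather than failing outright.

```python
from pathlib import Path

def read_text_with_fallback(path: str) -> tuple[str, str]:
    """Return (text, encoding_used): try UTF-8 first, then latin-1.

    Because latin-1 accepts any byte sequence, the second attempt
    only fails on I/O errors, which mirrors the handler's behaviour
    of erroring only on genuinely unreadable files.
    """
    p = Path(path)
    try:
        return p.read_text(encoding="utf-8"), "utf-8"
    except UnicodeDecodeError:
        return p.read_text(encoding="latin-1"), "latin-1"
```

A consequence worth noting: the handler's "Cannot read binary file" branch is reached only via a non-decoding exception, since latin-1 itself cannot fail to decode.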
  • Pydantic schema definitions for input parameters: file_path (absolute path), offset (start line, default 0), limit (max lines, default 2000). Used for validation and type hints in the tool call.
    FilePath = Annotated[
        str,
        Field(
            description="The absolute path to the file to read",
        ),
    ]
    
    Offset = Annotated[
        int,
        Field(
            description="The line number to start reading from. Only provide if the file is too large to read at once",
            default=0,
        ),
    ]
    
    Limit = Annotated[
        int,
        Field(
            description="The number of lines to read. Only provide if the file is too large to read at once",
            default=2000,
        ),
    ]
    
    
    class ReadToolParams(TypedDict):
        """Parameters for the ReadTool.
    
        Attributes:
            file_path: The absolute path to the file to read
            offset: The line number to start reading from. Only provide if the file is too large to read at once
            limit: The number of lines to read. Only provide if the file is too large to read at once
        """
    
        file_path: FilePath
        offset: Offset
        limit: Limit
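The Annotated-plus-Field pattern above attaches parameter metadata to the type itself. A minimal sketch with a stand-in FieldInfo (pydantic's real Field carries much more machinery) shows how such metadata can be recovered at runtime:

```python
from dataclasses import dataclass
from typing import Annotated, TypedDict, get_type_hints

# Stand-in for pydantic's Field: just enough to carry metadata.
@dataclass
class FieldInfo:
    description: str = ""
    default: object = None

Offset = Annotated[int, FieldInfo(description="Start line", default=0)]

class Params(TypedDict):
    offset: Offset

# include_extras=True keeps the Annotated wrapper so the metadata
# (our FieldInfo) remains reachable via __metadata__.
hints = get_type_hints(Params, include_extras=True)
meta = hints["offset"].__metadata__[0]
```

Without `include_extras=True`, `get_type_hints` strips the Annotated wrapper and returns bare `int`, which is why frameworks that consume this metadata must opt in explicitly.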
  • The ReadTool's register method creates and decorates a wrapper function named 'read' with @mcp_server.tool, which delegates to the tool's call method. This is invoked by ToolRegistry.
    def register(self, mcp_server: FastMCP) -> None:
        """Register this tool with the MCP server.
    
        Creates a wrapper function with explicitly defined parameters that match
        the tool's parameter schema and registers it with the MCP server.
    
        Args:
            mcp_server: The FastMCP server instance
        """
        tool_self = self
    
        @mcp_server.tool(name=self.name, description=self.description)
        async def read(
            ctx: MCPContext,
            file_path: FilePath,
            offset: Offset,
            limit: Limit,
        ) -> str:
            ctx = get_context()
            return await tool_self.call(
                ctx, file_path=file_path, offset=offset, limit=limit
            )
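The registration pattern above, a decorator that captures a wrapper function and delegates to the tool instance via a closure, can be illustrated with a toy registry. `ToyServer` and `EchoTool` below are hypothetical stand-ins, not FastMCP's API:

```python
import asyncio

class ToyServer:
    """Minimal stand-in for an MCP server's tool registry."""
    def __init__(self):
        self.tools = {}

    def tool(self, name, description=""):
        # Decorator factory: store the async handler under `name`.
        def decorator(fn):
            self.tools[name] = fn
            return fn
        return decorator

class EchoTool:
    name = "echo"

    async def call(self, **params):
        return params.get("text", "")

    def register(self, server: ToyServer) -> None:
        tool_self = self  # closure over the instance, as in register() above

        @server.tool(name=self.name, description="Echo text back")
        async def echo(text: str) -> str:
            return await tool_self.call(text=text)

server = ToyServer()
EchoTool().register(server)
```

The `tool_self = self` binding matters: the decorated wrapper is a plain function, so the closure is what routes the call back to the tool object's `call` method.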
  • register_filesystem_tools instantiates ReadTool (via get_filesystem_tools) with permission_manager and registers all filesystem tools (including 'read') via ToolRegistry.register_tools.
    def register_filesystem_tools(
        mcp_server: FastMCP,
        permission_manager: PermissionManager,
    ) -> list[BaseTool]:
        """Register all filesystem tools with the MCP server.
    
        Args:
            mcp_server: The FastMCP server instance
            permission_manager: Permission manager for access control
    
        Returns:
            List of registered tools
        """
        tools = get_filesystem_tools(permission_manager)
        ToolRegistry.register_tools(mcp_server, tools)
        return tools
  • As part of register_all_tools (called from server.py), invokes register_filesystem_tools to register the 'read' tool and adds it to the all_tools registry.
    # Register all filesystem tools
    filesystem_tools = register_filesystem_tools(mcp_server, permission_manager)
    for tool in filesystem_tools:
        all_tools[tool.name] = tool
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden and does so effectively. It discloses key behavioral traits: can read any file on the machine, assumes paths are valid, returns errors for non-existent files, truncates lines longer than 2000 characters, returns results in cat -n format with line numbers starting at 1, and has a default limit of 2000 lines. No contradictions with annotations since none exist.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with a clear purpose statement followed by usage guidelines in bullet points. It's appropriately sized for the tool's complexity, though some sentences could be more concise (e.g., 'Assume this tool is able to read all files on the machine' could be simplified). Overall efficient with minimal waste.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (file reading with offset/limit capabilities), no annotations, but with an output schema present, the description is complete enough. It covers purpose, usage guidelines, behavioral traits, parameter guidance, and sibling tool relationships. The output schema handles return values, so the description doesn't need to explain them.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all parameters thoroughly. The description adds some context about when to use offset/limit ('especially handy for long files') and recommends not providing them to read the whole file, but doesn't add significant semantic value beyond what the schema provides. Baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose as 'Reads a file from the local filesystem' with the specific verb 'reads' and resource 'file'. It distinguishes from siblings like 'notebook_read' for Jupyter notebooks and 'batch' for multiple files, providing clear differentiation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit usage guidelines: use 'notebook_read' for .ipynb files, use 'batch' for reading multiple files, and recommends not providing offset/limit unless the file is too large. It clearly states when to use alternatives and when to avoid certain parameters.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
