
MCP Filesystem Server

read_file_lines

Read specific lines from text files by specifying offset and limit parameters to extract targeted content efficiently.

Instructions

Read specific lines from a text file.

Args:
    path: Path to the file
    offset: Line offset (0-based, starts at first line)
    limit: Maximum number of lines to read (None for all remaining)
    encoding: Text encoding (default: utf-8)
    ctx: MCP context

Returns:
    File content and metadata
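The offset/limit semantics mirror a Python list slice over the file's lines; a minimal sketch with illustrative data (not the server's implementation):

```python
# Hypothetical data; offset is 0-based, limit caps the line count.
lines = ["alpha\n", "beta\n", "gamma\n", "delta\n", "epsilon\n"]

offset, limit = 1, 2
# limit=None would mean "all remaining lines", i.e. lines[offset:]
selected = lines[offset:] if limit is None else lines[offset:offset + limit]
print(selected)  # ['beta\n', 'gamma\n'] -> lines 2-3 in 1-based display
```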

Input Schema

Name      Required  Description  Default
path      Yes
offset    No
limit     No
encoding  No                     utf-8
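An example arguments payload for this schema (a sketch; the path and values are hypothetical, and only "path" is required):

```python
import json

# Illustrative tool-call arguments matching the input schema.
arguments = {
    "path": "/workspace/notes.txt",  # hypothetical file path
    "offset": 99,                    # skip the first 99 lines (0-based)
    "limit": 20,                     # read at most 20 lines
    "encoding": "utf-8",             # default, shown explicitly
}
print(json.dumps(arguments, indent=2))
```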

Implementation Reference

  • The MCP tool handler function for 'read_file_lines'. Decorated with @mcp.tool(), it handles the tool invocation, calls the core operations method, formats the output with a metadata header, and manages errors. This is the entry point called by the MCP server.
    @mcp.tool()
    async def read_file_lines(
        path: str,
        ctx: Context,
        offset: int = 0,
        limit: Optional[int] = None,
        encoding: str = "utf-8",
    ) -> str:
        """Read specific lines from a text file.
    
        Args:
            path: Path to the file
            offset: Line offset (0-based, starts at first line)
            limit: Maximum number of lines to read (None for all remaining)
            encoding: Text encoding (default: utf-8)
            ctx: MCP context
    
        Returns:
            File content and metadata
        """
        try:
            components = get_components()
            content, metadata = await components["operations"].read_file_lines(
                path, offset, limit, encoding
            )
    
            if not content:
                last_line_desc = "end" if limit is None else f"offset+{limit}"
                return f"No content found between offset {offset} and {last_line_desc}"
    
            # Calculate display lines (1-based for human readability)
            display_start = offset + 1
            display_end = offset + metadata["lines_read"]
    
            header = (
                f"File: {path}\n"
                f"Lines: {display_start} to {display_end} "
                f"(of {metadata['total_lines']} total)\n"
                f"----------------------------------------\n"
            )
    
            return header + content
    
        except Exception as e:
            return f"Error reading file lines: {str(e)}"
  • Core helper method in the FileOperations class that implements reading specific lines from a file efficiently by precomputing line-start positions. Returns the content and detailed metadata. Called by the MCP tool handler.
    async def read_file_lines(
        self,
        path: Union[str, Path],
        offset: int = 0,
        limit: Optional[int] = None,
        encoding: str = "utf-8",
    ) -> Tuple[str, Dict[str, Any]]:
        """Read specific lines from a text file using offset and limit.
    
        Args:
            path: Path to the file
            offset: Line offset (0-based, starts at first line)
            limit: Maximum number of lines to read (None for all remaining)
            encoding: Text encoding (default: utf-8)
    
        Returns:
            Tuple of (file content, metadata)
    
        Raises:
            ValueError: If path is outside allowed directories
            FileNotFoundError: If file does not exist
        """
        abs_path, allowed = await self.validator.validate_path(path)
        if not allowed:
            raise ValueError(f"Path outside allowed directories: {path}")
    
        # Parameter validation
        if offset < 0:
            raise ValueError("offset must be non-negative")
        if limit is not None and limit < 0:
            raise ValueError("limit must be non-negative")
    
        try:
            # Get file stats for metadata
            stats = await anyio.to_thread.run_sync(partial(abs_path.stat))
            total_size = stats.st_size
    
            # Count total lines in file - we'll need this for context
            total_lines = 0
            line_positions = []  # Store byte position of each line start
    
            async with await anyio.open_file(abs_path, "rb") as f:
                pos = 0
                line_positions.append(pos)
    
                while True:
                    line = await f.readline()
                    if not line:
                        break
    
                    pos += len(line)
                    total_lines += 1
                    # Always store the position of the start of each line
                    # This ensures we have accurate line positions for all lines
                    line_positions.append(pos)
    
            # Calculate the effective end offset if limit is specified
            end_offset = None
            if limit is not None:
                end_offset = offset + limit - 1  # Convert limit to inclusive end offset
    
            # Make sure we don't go beyond the file
            if offset >= total_lines:
                content = ""  # Nothing to read
            else:
                # Adjust end_offset if it exceeds total lines
                if end_offset is None or end_offset >= total_lines:
                    end_offset = total_lines - 1
    
                # Determine byte positions to read
                start_pos = line_positions[offset]  # Use 0-based offset directly
    
                # Calculate end position
                if end_offset >= len(line_positions) - 1:
                    # If we're requesting the last line
                    end_pos = total_size
                else:
                    # Normal case - use the position of the line AFTER the end offset
                    end_pos = line_positions[end_offset + 1]
    
                # Read the content
                async with await anyio.open_file(abs_path, "rb") as f:
                    await f.seek(start_pos)
                    content_bytes = await f.read(end_pos - start_pos)
    
                    try:
                        content = content_bytes.decode(encoding)
                    except UnicodeDecodeError:
                        raise ValueError(f"Cannot decode file as {encoding}")
    
            # Calculate the number of lines read
            if offset >= total_lines:
                lines_read = 0
            elif end_offset is None:
                lines_read = total_lines - offset
            else:
                lines_read = min((end_offset - offset + 1), (total_lines - offset))
    
            # Prepare metadata
            metadata = {
                "path": str(abs_path),
                "offset": offset,
                "limit": limit,
                "end_offset": end_offset,
                "total_lines": total_lines,
                "lines_read": lines_read,
                "total_size": total_size,
                "size_read": len(content),
                "encoding": encoding,
            }
    
            return content, metadata
    
        except FileNotFoundError:
            raise FileNotFoundError(f"File not found: {path}")
        except PermissionError:
            raise ValueError(f"Permission denied: {path}")
  • The @mcp.tool() decorator registers the read_file_lines function as an MCP tool.
    @mcp.tool()
    async def read_file_lines(
  • The docstring provides the schema definition for the tool's parameters and return value, used by FastMCP for input/output validation.
    """Read specific lines from a text file.
    
    Args:
        path: Path to the file
        offset: Line offset (0-based, starts at first line)
        limit: Maximum number of lines to read (None for all remaining)
        encoding: Text encoding (default: utf-8)
        ctx: MCP context
    
    Returns:
        File content and metadata
    """
Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden for behavioral disclosure. It states what the tool does but lacks critical behavioral details: no mention of file size limits, error handling (e.g., for missing files or invalid encodings), performance characteristics, or what 'metadata' in the return includes. For a file I/O tool with zero annotation coverage, this leaves significant gaps.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is perfectly structured and concise: a clear purpose statement followed by well-organized parameter explanations and return information. Every sentence earns its place, with no redundant or vague language. The information is front-loaded with the core functionality.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (file I/O with line selection), no annotations, and no output schema, the description is partially complete. It excels at parameter documentation but lacks behavioral context (error handling, limits) and details about the return structure ('metadata' is vague). For a tool with these characteristics, more completeness would be expected.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description provides comprehensive parameter semantics beyond the 0% schema description coverage. It explains each parameter's purpose: 'path' as file location, 'offset' as 0-based line starting point, 'limit' as maximum lines (with None meaning all remaining), and 'encoding' as text encoding with default. This fully compensates for the schema's lack of descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Read specific lines') and resource ('from a text file'), distinguishing it from siblings like 'read_file' (which reads entire files) and 'head_file'/'tail_file' (which read from beginning/end). The verb+resource combination is precise and unambiguous.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context through the parameter explanations (offset, limit), suggesting this tool is for selective line reading rather than full-file reading. However, it doesn't explicitly state when to choose this over alternatives like 'read_file', 'head_file', or 'tail_file', nor does it mention any prerequisites or exclusions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
