read_file_lines

Retrieve a specific line range from a file using 1-indexed inclusive bounds. Ideal for inspecting stack-trace regions or large file chunks without loading the entire content.

Instructions

Return a 1-indexed inclusive line slice of a file. Out-of-range bounds clamp silently to the file's actual length; to < from throws. Read-only; no side effects, auth, or rate limits. Returns the snippet plus its size_bytes and est_tokens. Use to inspect a stack-trace region or a chunk of a large file without pulling the whole body. Prefer read_section if you know the heading, grep_in_file if you know a pattern but not the line number.
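The clamping and error semantics described above can be sketched in isolation (a minimal sketch, independent of the actual server code):

```typescript
// Sketch of the documented slice semantics: 1-indexed, inclusive,
// with out-of-range bounds silently clamped to the file's length.
function sliceLines(content: string, from: number, to: number): string {
  if (to < from) throw new Error("`to` must be >= `from`");
  const lines = content.split("\n");
  const start = Math.max(0, from - 1);    // clamp the low bound
  const end = Math.min(lines.length, to); // clamp the high bound
  return lines.slice(start, end).join("\n");
}

// e.g. on a 3-line file, from=2, to=99 clamps to lines 2-3:
// sliceLines("a\nb\nc", 2, 99) === "b\nc"
```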

Input Schema

Name   Required   Description                         Default
id     Yes        File ID                             —
from   Yes        First line (1-indexed, inclusive)   —
to     Yes        Last line (1-indexed, inclusive)    —
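For example, a call targeting the region around a stack trace might pass arguments like these (the file ID and line range here are illustrative, not from the source):

```typescript
// Illustrative read_file_lines arguments. All three fields are required;
// `from` and `to` are positive integers, 1-indexed and inclusive.
const args = { id: 42, from: 120, to: 160 };
const lineCount = args.to - args.from + 1; // 41 lines requested
```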

Implementation Reference

  • The handler function for the read_file_lines tool. Reads a file by ID, extracts a 1-indexed line slice from 'from' to 'to', and returns the snippet with metadata (file_id, path, line range, total_lines, content, size_bytes, est_tokens). Bounds are clamped silently; throws if to < from.
      async ({ id, from, to }) => {
        try {
          if (to < from) throw new Error("`to` must be >= `from`");
          const file = readFile(id);
          const lines = file.content.split("\n");
          const start = Math.max(0, from - 1);
          const end = Math.min(lines.length, to);
          const slice = lines.slice(start, end).join("\n");
          return {
            content: [
              {
                type: "text",
                text: JSON.stringify(
                  {
                    file_id: id,
                    path: file.path,
                    from: start + 1,
                    to: end,
                    total_lines: lines.length,
                    content: slice,
                    size_bytes: Buffer.byteLength(slice, "utf8"),
                    est_tokens: estimateTokensFromBuffer(Buffer.from(slice, "utf8")),
                  },
                  null,
                  2
                ),
              },
            ],
          };
        } catch (e: any) {
          return {
            isError: true,
            content: [{ type: "text", text: JSON.stringify({ error: e?.message ?? String(e) }, null, 2) }],
          };
        }
      }
  • Zod schema for read_file_lines input parameters: id (number), from (positive int), to (positive int).
      id: z.number().describe("File ID"),
      from: z.number().int().positive().describe("First line (1-indexed, inclusive)"),
      to: z.number().int().positive().describe("Last line (1-indexed, inclusive)"),
  • Registration of the read_file_lines tool via server.tool() with name 'read_file_lines', description, schema, and handler.
    server.tool(
      "read_file_lines",
      "Return a 1-indexed inclusive line slice of a file. Out-of-range bounds clamp silently to the file's actual length; `to < from` throws. Read-only; no side effects, auth, or rate limits. Returns the snippet plus its size_bytes and est_tokens. Use to inspect a stack-trace region or a chunk of a large file without pulling the whole body. Prefer `read_section` if you know the heading, `grep_in_file` if you know a pattern but not the line number.",
      {
        id: z.number().describe("File ID"),
        from: z.number().int().positive().describe("First line (1-indexed, inclusive)"),
        to: z.number().int().positive().describe("Last line (1-indexed, inclusive)"),
      },
      async ({ id, from, to }) => {
        try {
          if (to < from) throw new Error("`to` must be >= `from`");
          const file = readFile(id);
          const lines = file.content.split("\n");
          const start = Math.max(0, from - 1);
          const end = Math.min(lines.length, to);
          const slice = lines.slice(start, end).join("\n");
          return {
            content: [
              {
                type: "text",
                text: JSON.stringify(
                  {
                    file_id: id,
                    path: file.path,
                    from: start + 1,
                    to: end,
                    total_lines: lines.length,
                    content: slice,
                    size_bytes: Buffer.byteLength(slice, "utf8"),
                    est_tokens: estimateTokensFromBuffer(Buffer.from(slice, "utf8")),
                  },
                  null,
                  2
                ),
              },
            ],
          };
        } catch (e: any) {
          return {
            isError: true,
            content: [{ type: "text", text: JSON.stringify({ error: e?.message ?? String(e) }, null, 2) }],
          };
        }
      }
    );
  • The estimateTokensFromBuffer helper (imported from ctxnest-core) is used by the handler to compute the estimated token count of the line slice. A related helper, estimateTokensFromFile, is defined locally:
    function estimateTokensFromFile(filePath: string, sizeBytes: number): number {
      if (sizeBytes <= 0) return 1;
      const sampleSize = Math.min(4096, sizeBytes);
      if (sampleSize < 256) return Math.max(1, Math.ceil(sizeBytes / 4));
      let mostlyAscii = true;
      let fd: number | null = null;
      try {
        fd = openSync(filePath, "r");
        const buf = Buffer.alloc(sampleSize);
        readSync(fd, buf, 0, sampleSize, 0);
        mostlyAscii = buf.toString("utf-8").length > sampleSize * 0.7;
      } catch {} finally {
        if (fd !== null) try { closeSync(fd); } catch {}
      }
      return Math.max(1, Math.ceil(sizeBytes / (mostlyAscii ? 4 : 3)));
    }
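The sampling logic above feeds a simple bytes-per-token ratio in the final return line. That step can be sketched on its own (assuming the same ratios as the code: roughly 4 bytes per token for mostly-ASCII text, 3 otherwise):

```typescript
// Bytes-per-token heuristic mirroring the function's return expression:
// mostly-ASCII content averages ~4 bytes per token, denser content ~3.
function estimateTokens(sizeBytes: number, mostlyAscii: boolean): number {
  if (sizeBytes <= 0) return 1;
  return Math.max(1, Math.ceil(sizeBytes / (mostlyAscii ? 4 : 3)));
}

// A 10,240-byte ASCII file estimates to 2,560 tokens; the same number
// of bytes of non-ASCII text estimates to 3,414.
```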
  • Categorizes read_file_lines under the 'Read' category for the web UI.
    read_file_lines: "Read",
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Discloses the out-of-range clamping behavior and the error condition for to < from, and confirms the tool is read-only with no side effects, auth, or rate limits. No annotations are provided, so the description fully covers the tool's behavioral traits.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three concise sentences: purpose and behavior, use case, alternatives. No redundancy; every sentence earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple tool with three parameters and no output schema, the description covers error handling, return fields, use cases, and alternatives. Fully sufficient for correct agent use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Adds meaning beyond the schema: explains the 1-indexed, inclusive bounds, the clamping behavior, and the throw condition. Also mentions return fields (snippet, size_bytes, est_tokens) not present in the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states that the tool returns a 1-indexed, inclusive line slice of a file. Distinguishes it from siblings by naming the alternatives (read_section, grep_in_file) and their use cases.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit when-to-use (inspect stack-trace region or chunk of large file) and when-not (prefer alternatives for known heading or pattern). Names sibling tools and conditions for choosing them.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
