
Read File

read_file
Read-only

Read file contents or specific line ranges from codebases to access text data when symbol-level operations aren't applicable.

Instructions

Reads the given file or a chunk of it. Generally, symbolic operations like find_symbol or find_referencing_symbols should be preferred if you know which symbols you are looking for. Returns the full text of the file at the given relative path.

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| relative_path | Yes | The relative path to the file to read. | |
| start_line | No | The 0-based index of the first line to be retrieved. | |
| end_line | No | The 0-based index of the last line to be retrieved (inclusive). If None, read until the end of the file. | |
| max_answer_chars | No | If the file (chunk) is longer than this number of characters, no content will be returned. Don't adjust unless there is really no other way to get the content required for the task. | |

Output Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| result | Yes | | |
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true and destructiveHint=false, so the agent knows this is a safe read operation. The description adds valuable context beyond annotations: it explains that the tool can read 'a chunk of it' (implying partial file reading), mentions a constraint ('If the file (chunk) is longer than this number of characters, no content will be returned'), and hints at performance considerations ('Don't adjust unless there is really no other way'). This provides useful behavioral insights not covered by annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured in three sentences: the first states the core functionality, the second provides crucial usage guidance, and the third clarifies the return value. Every sentence adds essential information with zero waste, making it appropriately sized and front-loaded with the most important information first.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity, rich annotations (readOnlyHint, destructiveHint), 100% schema coverage, and the presence of an output schema (which handles return value documentation), the description is complete enough. It covers purpose, usage guidelines, key behavioral aspects, and integrates well with the structured data, leaving no significant gaps for agent understanding.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already fully documents all four parameters. The description adds minimal parameter semantics beyond the schema: it mentions 'chunk' reading which relates to start_line/end_line parameters, and hints at max_answer_chars usage. However, it doesn't provide significant additional meaning beyond what's in the schema descriptions, warranting the baseline score of 3.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('Reads', 'Returns') and resources ('the given file or a chunk of it', 'full text of the file'). It distinguishes from siblings by explicitly mentioning alternatives ('symbolic operations like find_symbol or find_referencing_symbols should be preferred if you know which symbols you are looking for'), making the scope and differentiation explicit.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool vs. alternatives: 'Generally, symbolic operations like find_symbol or find_referencing_symbols should be preferred if you know which symbols you are looking for.' This clearly indicates when not to use this tool and names specific sibling tools as better options in certain contexts.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/chrisgreenx-ctrl/serena'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.