
read_s3_file

Read files from Amazon S3 storage to display CSV, JSON, logs, or Parquet data directly in chat with filtering and preview options.

Instructions

Read any file from S3 by its full URI and display it in chat.

Supports CSV, TXT, JSON, log files, .gz compressed files, and Parquet. Files larger than 5 MB are rejected to avoid crashing the server.
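The URI handling and size guard described above can be sketched in pure Python. This is a minimal illustration, not the server's actual implementation; the helper names `parse_s3_uri` and `check_size` are assumptions introduced here for clarity.

```python
from urllib.parse import urlparse

MAX_BYTES = 5 * 1024 * 1024  # the 5 MB limit described above


def parse_s3_uri(s3_uri: str) -> tuple[str, str]:
    """Split a full S3 URI into (bucket, key)."""
    parsed = urlparse(s3_uri)
    if parsed.scheme != "s3" or not parsed.netloc:
        raise ValueError(f"not a valid S3 URI: {s3_uri!r}")
    return parsed.netloc, parsed.path.lstrip("/")


def check_size(size_bytes: int) -> None:
    """Reject files larger than 5 MB before downloading."""
    if size_bytes > MAX_BYTES:
        raise ValueError(f"file is {size_bytes} bytes; limit is {MAX_BYTES}")


bucket, key = parse_s3_uri("s3://bucket-name/path/to/file.csv")
# bucket is "bucket-name", key is "path/to/file.csv"
```

In practice the size would come from the object's metadata (e.g. a HEAD request) so the rejection happens before any bytes are transferred.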

For Parquet files: reads the file and displays the first N rows as a formatted table (default 50 rows). Parquet files are binary so they cannot be tailed or searched — use head_rows to control output.
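The "first N rows as a formatted table" behavior might look like the sketch below. This is a hypothetical pure-Python renderer (the real server presumably reads the Parquet file with a library such as pyarrow first); `format_head` is a name invented here.

```python
def format_head(rows: list[dict], head_rows: int = 50) -> str:
    """Render the first head_rows records as a fixed-width text table."""
    rows = rows[:head_rows]
    if not rows:
        return "(empty)"
    cols = list(rows[0])
    # Column width = widest of the header and every value in that column.
    widths = {c: max(len(c), *(len(str(r[c])) for r in rows)) for c in cols}
    header = " | ".join(c.ljust(widths[c]) for c in cols)
    sep = "-+-".join("-" * widths[c] for c in cols)
    body = "\n".join(
        " | ".join(str(r[c]).ljust(widths[c]) for c in cols) for r in rows
    )
    return "\n".join([header, sep, body])
```

Truncating to `head_rows` before formatting keeps the chat output bounded regardless of how large the Parquet file is.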

Args:
- s3_uri: Full S3 URI (e.g. 's3://bucket-name/path/to/file.csv').
- tail_lines: Lines from the end for text files (default 100); -1 for all.
- search_text: Filter matching lines (text files only).
- head_rows: Rows to display for Parquet files (default 50).
- env: Target environment: 'dev', 'uat', 'test', or 'prod'. IMPORTANT: Do NOT guess or default; ask the user which environment to use if not specified.

Returns the file contents, optionally filtered and tailed.

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| s3_uri | Yes | Full S3 URI of the file to read | |
| tail_lines | No | Lines from the end for text files; -1 for all | 100 |
| search_text | No | Filter matching lines (text files only) | |
| head_rows | No | Rows to display for Parquet files | 50 |
| env | No | Target environment: dev, uat, test, or prod | |

Output Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| result | Yes | The file contents, optionally filtered and tailed | |
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden and does so effectively. It discloses critical behavioral traits: file size limitations (5 MB rejection), format-specific behaviors (Parquet vs text file handling), default values for parameters, and the important requirement to ask about the 'env' parameter rather than guessing. It doesn't contradict any annotations since none exist.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with purpose statement first, then format support and limitations, then Parquet-specific details, followed by parameter explanations. Every sentence earns its place, though the final return statement is somewhat redundant given the tool name and purpose.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (5 parameters, file format variations, size limits) and the presence of an output schema, the description is complete enough. It covers all critical aspects: purpose, limitations, format-specific behaviors, parameter semantics, and usage guidance without needing to explain return values since an output schema exists.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage, the description fully compensates by explaining all 5 parameters in detail. It provides meaning beyond the schema: s3_uri format examples, tail_lines behavior and default, search_text applicability, head_rows purpose for Parquet, and critical guidance about the env parameter. This adds substantial value over the bare schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Read any file from S3') and resource ('by its full URI'), distinguishing it from sibling tools like browse_s3 or get_s3_object_info which don't read file contents. It specifies the verb+resource combination precisely.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool vs alternatives: it mentions file size limits (5 MB), supported formats, and specific handling for Parquet files. It also warns against guessing the 'env' parameter and instructs to ask the user if not specified, creating clear usage boundaries.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
