
read_crawl_data

Read and filter exported crawl data from Screaming Frog SEO Spider. Access CSV files with pagination and column filtering for analysis.

Instructions

Read CSV data from an export. Use after export_crawl.

Args:
- export_id: The export_id from export_crawl
- file: CSV filename to read (from the file list in export_crawl output)
- limit: Max rows to return (default 100)
- offset: Number of rows to skip (for pagination)
- filter_column: Optional column name to filter by
- filter_value: Optional value to match in the filter column (case-insensitive substring)

Returns: CSV data as formatted text with column headers.
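The filter and pagination semantics described above can be sketched in Python. This is an illustrative model of the documented behavior, not the server's actual implementation; the assumption that filtering is applied before offset/limit is mine, and the sample CSV values are made up:

```python
import csv
import io

def read_crawl_data(csv_text, limit=100, offset=0,
                    filter_column=None, filter_value=None):
    """Model of the documented semantics: optional case-insensitive
    substring filter on one column, then offset/limit pagination."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    if filter_column and filter_value is not None:
        needle = filter_value.lower()
        rows = [r for r in rows
                if needle in (r.get(filter_column) or "").lower()]
    return rows[offset:offset + limit]

# Made-up sample resembling a Screaming Frog internal export.
data = ("Address,Status Code\n"
        "https://a.example/,200\n"
        "https://b.example/x,404\n")

print(read_crawl_data(data, filter_column="Status Code", filter_value="404"))
```

Under this reading, a client pages through large exports by holding `limit` fixed and stepping `offset`, and narrows results with `filter_column`/`filter_value` before pagination is applied.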

Input Schema

Name           Required  Description  Default
export_id      Yes
file           Yes
limit          No
offset         No
filter_column  No
filter_value   No
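Given the schema above, a minimal and a fully specified argument set might look like the following. The `export_id` and `file` values are made-up placeholders; only the key names and which are required come from the schema:

```python
# Minimal call: only the two required fields.
minimal_args = {
    "export_id": "exp_123",       # placeholder id from export_crawl
    "file": "internal_all.csv",   # placeholder filename from the export's file list
}

# Full call: optional pagination and filter fields included.
full_args = {
    "export_id": "exp_123",
    "file": "internal_all.csv",
    "limit": 50,
    "offset": 100,
    "filter_column": "Status Code",
    "filter_value": "404",
}

required = {"export_id", "file"}
print(required.issubset(minimal_args), required.issubset(full_args))
```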

Output Schema

Name    Required  Description  Default
result  Yes
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It discloses that the tool reads data (non-destructive) and returns formatted text, but lacks details on permissions, rate limits, error handling, or data format specifics. It adds basic context but misses key behavioral traits for a read operation with filtering capabilities.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured and front-loaded: purpose first, then Args and Returns sections. Every sentence earns its place—no fluff. The bullet-point style for parameters is efficient, and the text is appropriately sized for the tool's complexity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

With 6 parameters, 0% schema description coverage, and no annotations (though an output schema exists), the description does well. It explains the tool's purpose, usage context, and parameter semantics thoroughly. The output schema handles return values, so the description doesn't need to detail them. It could improve by addressing error cases or authentication needs.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description must compensate. It provides clear semantics for all 6 parameters: export_id links to export_crawl, file specifies the CSV filename, limit/offset handle pagination, and filter_column/filter_value enable case-insensitive substring filtering. This adds substantial meaning beyond the bare schema, though it doesn't cover all edge cases.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Read CSV data from an export.' It specifies the verb ('Read'), resource ('CSV data'), and source ('from an export'), distinguishing it from siblings like crawl_site or export_crawl. The mention of 'Use after export_crawl' further clarifies its role in the workflow.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly provides usage guidance: 'Use after export_crawl.' This indicates a prerequisite and timing context, distinguishing it from alternatives like list_crawls or crawl_status. It effectively tells the agent when to invoke this tool in relation to other tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/marykovziridze/screaming-frog-mcp'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.