AIMLPM/markcrawl

read_page

Fetch complete page content from crawled URLs. Retrieve full Markdown, titles, and source text from local files to analyze specific results after searching your web crawl.

Instructions

Read the full extracted content of a specific crawled page by its URL.

Returns the complete Markdown or text content of a single page, including
its title and source URL. Use this after search_pages to read the full
content of a relevant result.

This is a read-only operation on local files — no network requests are made.
URL matching is case-insensitive and tolerates trailing slashes.

Args:
    url: The exact URL of the page to read. Must match a URL from a previous
        crawl. Case-insensitive. Example: "https://docs.example.com/auth".
    jsonl_path: Full path to the pages.jsonl file. If empty, defaults to
        <WEBCRAWLER_OUTPUT_DIR>/pages.jsonl.

Input Schema

| Name       | Required | Description | Default |
|------------|----------|-------------|---------|
| url        | Yes      |             |         |
| jsonl_path | No       |             |         |

Output Schema

| Name   | Required | Description | Default |
|--------|----------|-------------|---------|
| result | Yes      |             |         |
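
Read as JSON Schema, the two tables might correspond to something like the following sketch. This is a hypothetical reconstruction from the tables alone; the live server's schemas may include fields or constraints not shown here:

```python
# Hypothetical input schema: url is the only required field, and
# neither property carries a description (the review below notes
# 0% description coverage).
input_schema = {
    "type": "object",
    "properties": {
        "url": {"type": "string"},
        "jsonl_path": {"type": "string"},
    },
    "required": ["url"],
}

# Hypothetical output schema: a single required result field.
output_schema = {
    "type": "object",
    "properties": {
        "result": {"type": "string"},
    },
    "required": ["result"],
}
```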
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. Discloses critical safety info ('read-only operation on local files — no network requests'), return format ('Markdown or text'), and matching behavior ('case-insensitive and tolerates trailing slashes'). Could add error conditions for missing URLs.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Well-structured with purpose front-loaded, followed by return value, usage context, behavioral notes, and Args section. Each sentence earns its place. Slightly verbose in Args section but necessary given zero schema coverage.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

With output schema present, description appropriately focuses on invocation context rather than return values. Covers purpose, prerequisites, parameter semantics, and safety properties. Missing only edge case handling (e.g., what happens if URL not found in index).

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 0% description coverage. Description fully compensates: url includes constraints ('exact URL', 'must match previous crawl', 'case-insensitive') and example; jsonl_path explains default behavior ('defaults to <WEBCRAWLER_OUTPUT_DIR>/pages.jsonl'). Adds essential meaning missing from schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Specific verb 'Read' + resource 'extracted content of a specific crawled page' + scope 'full content of a single page'. Clearly distinguishes from sibling search_pages (finds results vs reads them) and crawl_site (crawls vs reads existing data).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states 'Use this after search_pages to read the full content of a relevant result', establishing clear workflow order. Also notes prerequisite 'Must match a URL from a previous crawl'. Lacks explicit 'when not to use' alternatives, but context is clear.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/AIMLPM/markcrawl'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.