Robot Resources Scraper

scraper_crawl_url

Crawl multiple web pages from a starting URL using BFS link discovery and return compressed markdown with 70-90% fewer tokens than raw HTML.

Instructions

Crawl multiple pages from a starting URL using BFS link discovery. Returns compressed markdown for each page with 70-90% fewer tokens than raw HTML.

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| url | Yes | Starting URL to crawl | |
| maxPages | No | Max pages to crawl | 10 |
| maxDepth | No | Max link depth | 2 |
| mode | No | Fetch mode: 'fast' (plain HTTP), 'stealth' (TLS fingerprint), 'render' (headless browser), 'auto' (fast with fallback) | 'auto' |
| include | No | URL patterns to include (glob) | |
| exclude | No | URL patterns to exclude (glob) | |
| timeout | No | Per-page timeout in milliseconds | 10000 |
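
For illustration, a call that exercises every optional parameter might pass arguments like the sketch below. The documentation URL, glob patterns, and numeric values are invented examples rather than values from the server, and the fields map one-to-one onto the parameters accepted by the `crawlUrl` handler shown in the Implementation Reference.

    // Hypothetical arguments for scraper_crawl_url; every value is illustrative.
    const args = {
      url: 'https://example.com/docs/intro',     // starting point for BFS link discovery
      maxPages: 25,                              // stop after 25 pages (schema allows 1-100)
      maxDepth: 3,                               // follow links up to 3 hops from the start URL
      mode: 'auto',                              // plain HTTP first, with fallback
      include: ['https://example.com/docs/**'],  // only crawl pages under /docs (glob)
      exclude: ['**/changelog/**'],              // skip changelog pages (glob)
      timeout: 15000,                            // 15 s per page instead of the 10 s default
    };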

Implementation Reference

  • The handler function for the `scraper_crawl_url` tool, which orchestrates the crawling process using the `crawl` function from `@robot-resources/scraper`.
    export async function crawlUrl({
      url,
      maxPages,
      maxDepth,
      mode,
      include,
      exclude,
      timeout,
    }: {
      url: string;
      maxPages?: number;
      maxDepth?: number;
      mode?: FetchMode;
      include?: string[];
      exclude?: string[];
      timeout?: number;
    }) {
      try {
        const result = await crawl({
          url,
          limit: maxPages ?? 10,
          depth: maxDepth ?? 2,
          mode,
          include,
          exclude,
          timeout,
        });
    
        const host = new URL(url).host;
        const errorSuffix = result.errors.length > 0
          ? ` (${result.errors.length} error${result.errors.length > 1 ? 's' : ''})`
          : '';
        const summary = `Crawled ${result.totalCrawled} pages from ${host}${errorSuffix}`;
    
        const content: Array<{ type: 'text'; text: string }> = [
          { type: 'text' as const, text: summary },
        ];
    
        for (const page of result.pages) {
          const header = page.title ? `## ${page.title}\n\n` : '';
          content.push({
            type: 'text' as const,
            text: `${header}${page.markdown}`,
          });
        }
    
        return {
          content,
          structuredContent: {
            pages: result.pages,
            totalCrawled: result.totalCrawled,
            totalDiscovered: result.totalDiscovered,
            totalSkipped: result.totalSkipped,
            errors: result.errors,
            duration: result.duration,
          },
        };
      } catch (error) {
        return formatError(url, error);
      }
    }
  • src/server.ts:50-88 (registration)
    Registration of the `scraper_crawl_url` tool in the MCP server, including the schema definition using Zod; a client-side invocation sketch follows the code below.
    server.tool(
      'scraper_crawl_url',
      'Crawl multiple pages from a starting URL using BFS link discovery. Returns compressed markdown for each page with 70-90% fewer tokens than raw HTML.',
      {
        url: z.string().url().describe('Starting URL to crawl'),
        maxPages: z
          .number()
          .int()
          .min(1)
          .max(100)
          .optional()
          .describe('Max pages to crawl (default: 10)'),
        maxDepth: z
          .number()
          .int()
          .min(0)
          .max(5)
          .optional()
          .describe('Max link depth (default: 2)'),
        mode: z
          .enum(['fast', 'stealth', 'render', 'auto'])
          .optional()
          .describe("Fetch mode: 'fast' (plain HTTP), 'stealth' (TLS fingerprint), 'render' (headless browser), 'auto' (fast with fallback). Default: 'auto'"),
        include: z
          .array(z.string())
          .optional()
          .describe('URL patterns to include (glob)'),
        exclude: z
          .array(z.string())
          .optional()
          .describe('URL patterns to exclude (glob)'),
        timeout: z
          .number()
          .positive()
          .optional()
          .describe('Per-page timeout in milliseconds (default: 10000)'),
      },
      async (args) => crawlUrl(args),
    );
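
To show how the registered tool is exercised end to end, here is a minimal client-side sketch using the `@modelcontextprotocol/sdk` client over stdio. The server command, entry-point path, and client name are assumptions for illustration, not values taken from this repository.

    // Minimal sketch of invoking scraper_crawl_url from an MCP client over stdio.
    import { Client } from '@modelcontextprotocol/sdk/client/index.js';
    import { StdioClientTransport } from '@modelcontextprotocol/sdk/client/stdio.js';

    async function main() {
      // Assumed entry point for the scraper server; adjust to the real build output.
      const transport = new StdioClientTransport({
        command: 'node',
        args: ['dist/server.js'],
      });
      const client = new Client({ name: 'example-client', version: '1.0.0' });
      await client.connect(transport);

      const result = await client.callTool({
        name: 'scraper_crawl_url',
        arguments: {
          url: 'https://example.com/docs/intro', // illustrative starting URL
          maxPages: 5,
          maxDepth: 1,
        },
      });

      // Per the handler above, the first content block is the crawl summary and
      // structuredContent carries the page data, totals, errors, and duration.
      console.log(result);

      await client.close();
    }

    main().catch(console.error);
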
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden and does well by disclosing key behavioral traits: the BFS crawling algorithm, multi-page scope, and the 70-90% token reduction in output. It doesn't mention rate limits, authentication needs, or error handling, but it provides substantial operational context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences that efficiently convey the core functionality and key output characteristic. Every word earns its place with no redundancy or unnecessary elaboration. The description is front-loaded with the primary action and scope.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a 7-parameter tool with no annotations and no output schema, the description provides good operational context but lacks details about return format structure, error conditions, or performance characteristics. It mentions compressed markdown output but doesn't describe the data structure or pagination approach.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the baseline is 3. The description adds no parameter-specific information beyond what's already in the schema descriptions. It mentions 'starting URL', which is already covered by the url parameter description, but doesn't elaborate on parameter interactions or usage patterns.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('crawl multiple pages', 'returns compressed markdown') and resources ('starting URL', 'BFS link discovery'). It distinguishes itself from the sibling tool `scraper_compress_url` by emphasizing multi-page crawling rather than single-page compression.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context through 'crawl multiple pages' and 'starting URL', suggesting this is for web scraping tasks. However, it doesn't explicitly state when to use this tool versus the sibling `scraper_compress_url` or other alternatives, nor does it provide exclusion criteria.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
