
Firecrawl MCP Server

by NYO2008

firecrawl_crawl

Extract content from multiple pages of a website by starting an asynchronous crawl job that comprehensively gathers data across related pages.

Instructions

Starts an asynchronous crawl job on a website and extracts content from all pages.

Best for: Extracting content from multiple related pages, when you need comprehensive coverage.

Not recommended for: Extracting content from a single page (use scrape); when token limits are a concern (use map + batch_scrape); when you need fast results (crawling can be slow).

Warning: Crawl responses can be very large and may exceed token limits. Limit the crawl depth and number of pages, or use map + batch_scrape for better control.

Common mistakes: Setting limit or maxDepth too high (causes token overflow); using crawl for a single page (use scrape instead).

Prompt Example: "Get all blog posts from the first two levels of example.com/blog."

Usage Example:

{
  "name": "firecrawl_crawl",
  "arguments": {
    "url": "https://example.com/blog/*",
    "maxDepth": 2,
    "limit": 100,
    "allowExternalLinks": false,
    "deduplicateSimilarURLs": true
  }
}

Returns: Operation ID for status checking; use firecrawl_check_crawl_status to check progress.
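Because the crawl runs asynchronously, a typical workflow is to start the job, capture the job ID from the returned text, and then poll firecrawl_check_crawl_status. The sketch below shows this flow with the MCP TypeScript SDK client; the stdio transport setup, the FIRECRAWL_API_KEY environment variable, the job-ID parsing, and the id argument of the status tool are assumptions for illustration rather than documented details of this server.

import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// Assumed setup: launch the server locally over stdio with an API key.
const transport = new StdioClientTransport({
  command: "npx",
  args: ["-y", "firecrawl-mcp"],
  env: { FIRECRAWL_API_KEY: process.env.FIRECRAWL_API_KEY ?? "" },
});
const client = new Client({ name: "example-client", version: "1.0.0" });
await client.connect(transport);

// Start the crawl (mirrors the usage example above).
const started = await client.callTool({
  name: "firecrawl_crawl",
  arguments: {
    url: "https://example.com/blog/*",
    maxDepth: 2,
    limit: 100,
    allowExternalLinks: false,
    deduplicateSimilarURLs: true,
  },
});

// The tool answers with text such as "Started crawl for <url> with job ID: <id>. ...";
// extracting the ID from that message is an assumption about its exact format.
const reply = (started.content as Array<{ type: string; text: string }>)[0].text;
const jobId = reply.match(/job ID: (\S+)\./)?.[1];

// Poll the sibling tool for progress (argument name `id` is assumed here).
const status = await client.callTool({
  name: "firecrawl_check_crawl_status",
  arguments: { id: jobId },
});
console.log(status.content);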

Input Schema

Name | Required | Description | Default
url | Yes | Starting URL for the crawl | —
excludePaths | No | URL paths to exclude from crawling | —
includePaths | No | Only crawl these URL paths | —
maxDepth | No | Maximum link depth to crawl | —
ignoreSitemap | No | Skip sitemap.xml discovery | —
limit | No | Maximum number of pages to crawl | —
allowBackwardLinks | No | Allow crawling links that point to parent directories | —
allowExternalLinks | No | Allow crawling links to external domains | —
webhook | No | Webhook URL (string) or object with url and optional headers, notified when the crawl completes | —
deduplicateSimilarURLs | No | Remove similar URLs during crawl | —
ignoreQueryParameters | No | Ignore query parameters when comparing URLs | —
scrapeOptions | No | Options for scraping each page | —
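For orientation, the parameters above map onto a TypeScript shape roughly like the following. This is a sketch derived from the input schema on this page; the type and interface names are made up for illustration (they are not the Firecrawl SDK's own types), and the format list mirrors the enum in the implementation reference below.

// Sketch of the firecrawl_crawl arguments, derived from the schema above.
type ScrapeFormat =
  | "markdown"
  | "html"
  | "rawHtml"
  | "screenshot"
  | "links"
  | "screenshot@fullPage"
  | "extract";

interface CrawlScrapeOptions {
  formats?: ScrapeFormat[];   // output formats for each scraped page
  onlyMainContent?: boolean;  // keep only the main page content
  includeTags?: string[];
  excludeTags?: string[];
  waitFor?: number;           // wait before scraping (assumed to be milliseconds)
}

interface FirecrawlCrawlArgs {
  url: string;                          // starting URL for the crawl (required)
  excludePaths?: string[];              // URL paths to exclude from crawling
  includePaths?: string[];              // only crawl these URL paths
  maxDepth?: number;                    // maximum link depth to crawl
  ignoreSitemap?: boolean;              // skip sitemap.xml discovery
  limit?: number;                       // maximum number of pages to crawl
  allowBackwardLinks?: boolean;         // allow links that point to parent directories
  allowExternalLinks?: boolean;         // allow links to external domains
  webhook?: string | { url: string; headers?: Record<string, string> };
  deduplicateSimilarURLs?: boolean;     // remove similar URLs during the crawl
  ignoreQueryParameters?: boolean;      // ignore query parameters when comparing URLs
  scrapeOptions?: CrawlScrapeOptions;   // options for scraping each page
}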

Implementation Reference

  • The handler for the 'firecrawl_crawl' tool. Validates input using isCrawlOptions, initiates an asynchronous crawl using Firecrawl's client.asyncCrawlUrl with retry logic (see the withRetry sketch after this list), and returns the crawl job ID for status checking.
    case 'firecrawl_crawl': {
      if (!isCrawlOptions(args)) {
        throw new Error('Invalid arguments for firecrawl_crawl');
      }
      const { url, ...options } = args;
      const response = await withRetry(
        async () =>
          // @ts-expect-error Extended API options including origin
          client.asyncCrawlUrl(url, { ...options, origin: 'mcp-server' }),
        'crawl operation'
      );
    
      if (!response.success) {
        throw new Error(response.error);
      }
    
      return {
        content: [
          {
            type: 'text',
            text: trimResponseText(
              `Started crawl for ${url} with job ID: ${response.id}. Use firecrawl_check_crawl_status to check progress.`
            ),
          },
        ],
        isError: false,
      };
    }
  • The Tool object definition for 'firecrawl_crawl', including name, detailed description, and comprehensive inputSchema with parameters for crawling configuration such as url, maxDepth, limit, scrapeOptions, etc.
    const CRAWL_TOOL: Tool = {
      name: 'firecrawl_crawl',
      description: `
    Starts an asynchronous crawl job on a website and extracts content from all pages.
    
    **Best for:** Extracting content from multiple related pages, when you need comprehensive coverage.
    **Not recommended for:** Extracting content from a single page (use scrape); when token limits are a concern (use map + batch_scrape); when you need fast results (crawling can be slow).
    **Warning:** Crawl responses can be very large and may exceed token limits. Limit the crawl depth and number of pages, or use map + batch_scrape for better control.
    **Common mistakes:** Setting limit or maxDepth too high (causes token overflow); using crawl for a single page (use scrape instead).
    **Prompt Example:** "Get all blog posts from the first two levels of example.com/blog."
    **Usage Example:**
    \`\`\`json
    {
      "name": "firecrawl_crawl",
      "arguments": {
        "url": "https://example.com/blog/*",
        "maxDepth": 2,
        "limit": 100,
        "allowExternalLinks": false,
        "deduplicateSimilarURLs": true
      }
    }
    \`\`\`
    **Returns:** Operation ID for status checking; use firecrawl_check_crawl_status to check progress.
    `,
      inputSchema: {
        type: 'object',
        properties: {
          url: {
            type: 'string',
            description: 'Starting URL for the crawl',
          },
          excludePaths: {
            type: 'array',
            items: { type: 'string' },
            description: 'URL paths to exclude from crawling',
          },
          includePaths: {
            type: 'array',
            items: { type: 'string' },
            description: 'Only crawl these URL paths',
          },
          maxDepth: {
            type: 'number',
            description: 'Maximum link depth to crawl',
          },
          ignoreSitemap: {
            type: 'boolean',
            description: 'Skip sitemap.xml discovery',
          },
          limit: {
            type: 'number',
            description: 'Maximum number of pages to crawl',
          },
          allowBackwardLinks: {
            type: 'boolean',
            description: 'Allow crawling links that point to parent directories',
          },
          allowExternalLinks: {
            type: 'boolean',
            description: 'Allow crawling links to external domains',
          },
          webhook: {
            oneOf: [
              {
                type: 'string',
                description: 'Webhook URL to notify when crawl is complete',
              },
              {
                type: 'object',
                properties: {
                  url: {
                    type: 'string',
                    description: 'Webhook URL',
                  },
                  headers: {
                    type: 'object',
                    description: 'Custom headers for webhook requests',
                  },
                },
                required: ['url'],
              },
            ],
          },
          deduplicateSimilarURLs: {
            type: 'boolean',
            description: 'Remove similar URLs during crawl',
          },
          ignoreQueryParameters: {
            type: 'boolean',
            description: 'Ignore query parameters when comparing URLs',
          },
          scrapeOptions: {
            type: 'object',
            properties: {
              formats: {
                type: 'array',
                items: {
                  type: 'string',
                  enum: [
                    'markdown',
                    'html',
                    'rawHtml',
                    'screenshot',
                    'links',
                    'screenshot@fullPage',
                    'extract',
                  ],
                },
              },
              onlyMainContent: {
                type: 'boolean',
              },
              includeTags: {
                type: 'array',
                items: { type: 'string' },
              },
              excludeTags: {
                type: 'array',
                items: { type: 'string' },
              },
              waitFor: {
                type: 'number',
              },
            },
            description: 'Options for scraping each page',
          },
        },
        required: ['url'],
      },
    };
  • src/index.ts:955-966 (registration)
    Registration of all tools including CRAWL_TOOL (firecrawl_crawl) in the MCP server's ListToolsRequestHandler.
    server.setRequestHandler(ListToolsRequestSchema, async () => ({
      tools: [
        SCRAPE_TOOL,
        MAP_TOOL,
        CRAWL_TOOL,
        CHECK_CRAWL_STATUS_TOOL,
        SEARCH_TOOL,
        EXTRACT_TOOL,
        DEEP_RESEARCH_TOOL,
        GENERATE_LLMSTXT_TOOL,
      ],
    }));
  • Type guard helper function used in the handler to validate arguments for the firecrawl_crawl tool.
    function isCrawlOptions(args: unknown): args is CrawlParams & { url: string } {
      return (
        typeof args === 'object' &&
        args !== null &&
        'url' in args &&
        typeof (args as { url: unknown }).url === 'string'
      );
    }
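The crawl handler above wraps the API call in a withRetry helper that is not excerpted on this page. The following is a minimal sketch of the kind of generic retry wrapper it assumes; the attempt count, backoff schedule, and logging are illustrative assumptions and not taken from the server's source.

// Illustrative retry wrapper: re-runs a failing async operation with
// exponential backoff and rethrows the last error once attempts are exhausted.
async function withRetry<T>(
  operation: () => Promise<T>,
  context: string,
  maxAttempts = 3,
  initialDelayMs = 1000
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return await operation();
    } catch (error) {
      lastError = error;
      if (attempt === maxAttempts) break;
      const delayMs = initialDelayMs * 2 ** (attempt - 1);
      console.error(`${context} failed (attempt ${attempt}/${maxAttempts}); retrying in ${delayMs}ms`);
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
  throw lastError;
}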
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Since no annotations are provided, the description carries the full burden of behavioral disclosure and excels at this. It clearly explains that this is an asynchronous operation, warns about potential token limit issues, mentions that crawling can be slow, describes the return format (operation ID), and explains the need to use a separate tool (firecrawl_check_crawl_status) to check progress. This provides rich behavioral context beyond basic functionality.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is exceptionally well-structured with clear sections (Best for, Not recommended for, Warning, Common mistakes, Prompt Example, Usage Example, Returns) that make information easy to find. Every sentence earns its place by providing essential guidance, warnings, or examples without redundancy. The formatting with bold headers and code blocks enhances readability.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (12 parameters, asynchronous operation, no output schema, no annotations), the description provides comprehensive context. It explains the asynchronous nature, return format, progress checking mechanism, performance considerations, token limit warnings, and sibling tool relationships. For a complex tool with no annotations or output schema, this description provides all necessary contextual information.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 92% schema description coverage, the baseline would be 3, but the description adds significant value through the usage example that shows practical parameter combinations and the prompt example that contextualizes parameter use. While it doesn't explain individual parameters, it provides semantic guidance about how parameters work together in real scenarios, elevating the score above baseline.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('starts an asynchronous crawl job', 'extracts content from all pages') and distinguishes it from siblings by explicitly mentioning when to use scrape instead. The opening sentence provides a complete, unambiguous statement of what the tool does.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides excellent usage guidance with explicit 'Best for' and 'Not recommended for' sections, names specific alternative tools (scrape, map + batch_scrape), and includes a 'Common mistakes' section with concrete examples. This gives comprehensive context about when to use this tool versus alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

