
Firecrawl MCP Server

by Krieg2065

firecrawl_scrape

Extract webpage content in multiple formats like markdown or HTML, execute actions before scraping, and filter specific elements for precise data collection.

Instructions

Scrape a single webpage with advanced options for content extraction. Supports various formats including markdown, HTML, and screenshots. Can execute custom actions like clicking or scrolling before scraping.

Input Schema

url (required): The URL to scrape
formats: Content formats to extract (default: ['markdown'])
onlyMainContent: Extract only the main content, filtering out navigation, footers, etc.
includeTags: HTML tags to specifically include in extraction
excludeTags: HTML tags to exclude from extraction
waitFor: Time in milliseconds to wait for dynamic content to load
timeout: Maximum time in milliseconds to wait for the page to load
actions: List of actions to perform before scraping
extract: Configuration for structured data extraction
mobile: Use mobile viewport
skipTlsVerification: Skip TLS certificate verification
removeBase64Images: Remove base64 encoded images from output
location: Location settings for scraping
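
For illustration, here is a hypothetical set of arguments exercising several of these parameters. The parameter names come from the schema; the URL and all values are invented:

```typescript
// Hypothetical arguments for firecrawl_scrape; names match the input
// schema, values are illustrative only.
const exampleArgs = {
  url: 'https://example.com/article',      // required
  formats: ['markdown', 'links'],          // defaults to ['markdown'] if omitted
  onlyMainContent: true,                   // strip navigation, footers, etc.
  waitFor: 2000,                           // ms to wait for dynamic content
  actions: [
    { type: 'scroll', direction: 'down' }, // scroll before scraping
    { type: 'wait', milliseconds: 1000 },  // then pause briefly
  ],
};

console.log(JSON.stringify(exampleArgs.formats)); // ["markdown","links"]
```

Only url is required; everything else falls back to server-side defaults.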

Implementation Reference

  • The primary handler for the 'firecrawl_scrape' tool within the CallToolRequestSchema switch statement. Validates input using isScrapeOptions, invokes the Firecrawl client's scrapeUrl method with user-provided options, processes the response by extracting and formatting content according to specified formats (markdown, html, etc.), handles errors, logs performance metrics, and returns the formatted content.
    case 'firecrawl_scrape': {
      if (!isScrapeOptions(args)) {
        throw new Error('Invalid arguments for firecrawl_scrape');
      }
      const { url, ...options } = args;
      try {
        const scrapeStartTime = Date.now();
        safeLog(
          'info',
          `Starting scrape for URL: ${url} with options: ${JSON.stringify(options)}`
        );
    
        const response = await client.scrapeUrl(url, {
          ...options,
          // @ts-expect-error Extended API options including origin
          origin: 'mcp-server',
        });
    
        // Log performance metrics
        safeLog(
          'info',
          `Scrape completed in ${Date.now() - scrapeStartTime}ms`
        );
    
        if ('success' in response && !response.success) {
          throw new Error(response.error || 'Scraping failed');
        }
    
        // Default to markdown when no formats were requested; this must
        // happen before content assembly or the default has no effect
        if (!options.formats || options.formats.length === 0) {
          options.formats = ['markdown'];
        }
    
        // Format content based on requested formats
        const contentParts = [];
    
        if (options.formats.includes('markdown') && response.markdown) {
          contentParts.push(response.markdown);
        }
        if (options.formats.includes('html') && response.html) {
          contentParts.push(response.html);
        }
        if (options.formats.includes('rawHtml') && response.rawHtml) {
          contentParts.push(response.rawHtml);
        }
        if (options.formats.includes('links') && response.links) {
          contentParts.push(response.links.join('\n'));
        }
        if (options.formats.includes('screenshot') && response.screenshot) {
          contentParts.push(response.screenshot);
        }
        if (options.formats.includes('extract') && response.extract) {
          contentParts.push(JSON.stringify(response.extract, null, 2));
        }
    
        // Add warning to response if present
        if (response.warning) {
          safeLog('warning', response.warning);
        }
    
        return {
          content: [
            {
              type: 'text',
              text: trimResponseText(
                contentParts.join('\n\n') || 'No content available'
              ),
            },
          ],
          isError: false,
        };
      } catch (error) {
        const errorMessage =
          error instanceof Error ? error.message : String(error);
        return {
          content: [{ type: 'text', text: trimResponseText(errorMessage) }],
          isError: true,
        };
      }
    }
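The handler depends on two helpers, safeLog and trimResponseText, that are defined elsewhere in src/index.ts. A minimal sketch of what they plausibly do; the stderr destination and the 50,000-character limit are assumptions, not taken from the source:

```typescript
type LogLevel = 'debug' | 'info' | 'warning' | 'error';

// Sketch: log to stderr so output never interleaves with the MCP stdio stream.
function safeLog(level: LogLevel, message: string): void {
  console.error(`[${level}] ${message}`);
}

// Sketch: cap tool output so responses stay within context limits.
// The 50k limit is an assumed value, not from the source.
const MAX_RESPONSE_LENGTH = 50_000;

function trimResponseText(text: string): string {
  return text.length > MAX_RESPONSE_LENGTH
    ? `${text.slice(0, MAX_RESPONSE_LENGTH)}\n[truncated]`
    : text;
}

console.log(trimResponseText('short text')); // short text
```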
  • Tool definition including name, detailed description, and comprehensive inputSchema specifying parameters like url (required), formats, actions, extraction schema, etc., for validating and documenting the firecrawl_scrape tool inputs.
    const SCRAPE_TOOL: Tool = {
      name: 'firecrawl_scrape',
      description:
        'Scrape a single webpage with advanced options for content extraction. ' +
        'Supports various formats including markdown, HTML, and screenshots. ' +
        'Can execute custom actions like clicking or scrolling before scraping.',
      inputSchema: {
        type: 'object',
        properties: {
          url: {
            type: 'string',
            description: 'The URL to scrape',
          },
          formats: {
            type: 'array',
            items: {
              type: 'string',
              enum: [
                'markdown',
                'html',
                'rawHtml',
                'screenshot',
                'links',
                'screenshot@fullPage',
                'extract',
              ],
            },
            default: ['markdown'],
            description: "Content formats to extract (default: ['markdown'])",
          },
          onlyMainContent: {
            type: 'boolean',
            description:
              'Extract only the main content, filtering out navigation, footers, etc.',
          },
          includeTags: {
            type: 'array',
            items: { type: 'string' },
            description: 'HTML tags to specifically include in extraction',
          },
          excludeTags: {
            type: 'array',
            items: { type: 'string' },
            description: 'HTML tags to exclude from extraction',
          },
          waitFor: {
            type: 'number',
            description: 'Time in milliseconds to wait for dynamic content to load',
          },
          timeout: {
            type: 'number',
            description:
              'Maximum time in milliseconds to wait for the page to load',
          },
          actions: {
            type: 'array',
            items: {
              type: 'object',
              properties: {
                type: {
                  type: 'string',
                  enum: [
                    'wait',
                    'click',
                    'screenshot',
                    'write',
                    'press',
                    'scroll',
                    'scrape',
                    'executeJavascript',
                  ],
                  description: 'Type of action to perform',
                },
                selector: {
                  type: 'string',
                  description: 'CSS selector for the target element',
                },
                milliseconds: {
                  type: 'number',
                  description: 'Time to wait in milliseconds (for wait action)',
                },
                text: {
                  type: 'string',
                  description: 'Text to write (for write action)',
                },
                key: {
                  type: 'string',
                  description: 'Key to press (for press action)',
                },
                direction: {
                  type: 'string',
                  enum: ['up', 'down'],
                  description: 'Scroll direction',
                },
                script: {
                  type: 'string',
                  description: 'JavaScript code to execute',
                },
                fullPage: {
                  type: 'boolean',
                  description: 'Take full page screenshot',
                },
              },
              required: ['type'],
            },
            description: 'List of actions to perform before scraping',
          },
          extract: {
            type: 'object',
            properties: {
              schema: {
                type: 'object',
                description: 'Schema for structured data extraction',
              },
              systemPrompt: {
                type: 'string',
                description: 'System prompt for LLM extraction',
              },
              prompt: {
                type: 'string',
                description: 'User prompt for LLM extraction',
              },
            },
            description: 'Configuration for structured data extraction',
          },
          mobile: {
            type: 'boolean',
            description: 'Use mobile viewport',
          },
          skipTlsVerification: {
            type: 'boolean',
            description: 'Skip TLS certificate verification',
          },
          removeBase64Images: {
            type: 'boolean',
            description: 'Remove base64 encoded images from output',
          },
          location: {
            type: 'object',
            properties: {
              country: {
                type: 'string',
                description: 'Country code for geolocation',
              },
              languages: {
                type: 'array',
                items: { type: 'string' },
                description: 'Language codes for content',
              },
            },
            description: 'Location settings for scraping',
          },
        },
        required: ['url'],
      },
    };
  • src/index.ts:960-973 (registration)
    Registration of the firecrawl_scrape tool (as SCRAPE_TOOL) in the list of available tools returned by the ListToolsRequestHandler.
    server.setRequestHandler(ListToolsRequestSchema, async () => ({
      tools: [
        SCRAPE_TOOL,
        MAP_TOOL,
        CRAWL_TOOL,
        BATCH_SCRAPE_TOOL,
        CHECK_BATCH_STATUS_TOOL,
        CHECK_CRAWL_STATUS_TOOL,
        SEARCH_TOOL,
        EXTRACT_TOOL,
        DEEP_RESEARCH_TOOL,
        GENERATE_LLMSTXT_TOOL,
      ],
    }));
  • Type guard function used in the handler to validate that the arguments match ScrapeParams with a required 'url' string, ensuring input validity before calling the Firecrawl API.
    function isScrapeOptions(
      args: unknown
    ): args is ScrapeParams & { url: string } {
      return (
        typeof args === 'object' &&
        args !== null &&
        'url' in args &&
        typeof (args as { url: unknown }).url === 'string'
      );
    }
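Because the guard checks only url, it is worth seeing what it accepts and rejects; the sample inputs below are invented:

```typescript
// Standalone copy of the guard for demonstration; note it validates only the
// presence and type of `url` — all other options pass through unchecked.
function isScrapeOptions(args: unknown): args is { url: string } {
  return (
    typeof args === 'object' &&
    args !== null &&
    'url' in args &&
    typeof (args as { url: unknown }).url === 'string'
  );
}

console.log(isScrapeOptions({ url: 'https://example.com' })); // true
console.log(isScrapeOptions({ formats: ['markdown'] }));      // false (no url)
console.log(isScrapeOptions({ url: 42 }));                    // false (url not a string)
console.log(isScrapeOptions(null));                           // false
```

Any deeper validation of formats, actions, and the other options is deferred to the Firecrawl API itself.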
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It describes key capabilities (content extraction formats, action execution) and scope (single webpage), but lacks information about rate limits, authentication needs, error handling, or what happens with dynamic content. The mention of 'advanced options' is vague without specifics on limitations or performance characteristics.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized with two sentences that efficiently convey core functionality. The first sentence states the primary purpose and key features, while the second adds important behavioral context. There's no wasted language, though it could be slightly more front-loaded with the most critical information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a complex tool with 13 parameters, nested objects, and no output schema or annotations, the description provides adequate but incomplete context. It covers the 'what' (scraping with options) but lacks information about return values, error conditions, performance expectations, or practical limitations. The absence of output schema means the description should ideally address what the tool returns, which it doesn't.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the input schema already documents all 13 parameters thoroughly. The description adds minimal value beyond the schema, mentioning 'advanced options', 'various formats', and 'custom actions' which are already detailed in the schema properties. It doesn't provide additional syntax, examples, or constraints beyond what's in the structured data.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('scrape', 'extract', 'execute') and resources ('single webpage', 'content extraction', 'custom actions'). It distinguishes from sibling tools by emphasizing 'single webpage' versus batch/crawl operations, and mentions advanced options not implied by the name alone.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool ('scrape a single webpage with advanced options'), but does not explicitly state when not to use it or name specific alternatives. It implies usage for single-page scraping versus batch operations, but lacks explicit exclusions or comparisons to siblings like firecrawl_extract or firecrawl_crawl.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
