Firecrawl MCP Server

by NYO2008

firecrawl_scrape

Extract content from a specific webpage using customizable options for formats, element filtering, and dynamic content handling.

Instructions

Scrape content from a single URL with advanced options.

Best for: Single page content extraction, when you know exactly which page contains the information.
Not recommended for: Multiple pages (use batch_scrape), unknown page (use search), structured data (use extract).
Common mistakes: Using scrape for a list of URLs (use batch_scrape instead).
Prompt Example: "Get the content of the page at https://example.com."
Usage Example:

{
  "name": "firecrawl_scrape",
  "arguments": {
    "url": "https://example.com",
    "formats": ["markdown"]
  }
}

Returns: Markdown, HTML, or other formats as specified.
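
For context, here is a minimal sketch of invoking this tool from the official TypeScript MCP SDK. The stdio launch command (npx -y firecrawl-mcp) and the API key value are assumptions; adjust them to your installation.

import { Client } from '@modelcontextprotocol/sdk/client/index.js';
import { StdioClientTransport } from '@modelcontextprotocol/sdk/client/stdio.js';

// Launch the server over stdio (command and env are assumptions).
const transport = new StdioClientTransport({
  command: 'npx',
  args: ['-y', 'firecrawl-mcp'],
  env: { FIRECRAWL_API_KEY: 'fc-YOUR-KEY' },
});

const client = new Client({ name: 'example-client', version: '1.0.0' });
await client.connect(transport);

// Call the tool; the result is text content (markdown by default).
const result = await client.callTool({
  name: 'firecrawl_scrape',
  arguments: { url: 'https://example.com', formats: ['markdown'] },
});
console.log(result.content);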

Input Schema

Name                 Required  Default       Description
url                  Yes                     The URL to scrape
formats              No        ['markdown']  Content formats to extract
onlyMainContent      No                      Extract only the main content, filtering out navigation, footers, etc.
includeTags          No                      HTML tags to specifically include in extraction
excludeTags          No                      HTML tags to exclude from extraction
waitFor              No                      Time in milliseconds to wait for dynamic content to load
timeout              No                      Maximum time in milliseconds to wait for the page to load
actions              No                      List of actions to perform before scraping
extract              No                      Configuration for structured data extraction
mobile               No                      Use mobile viewport
skipTlsVerification  No                      Skip TLS certificate verification
removeBase64Images   No                      Remove base64 encoded images from output
location             No                      Location settings for scraping
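
As an illustration of how these options combine (the URL and values below are hypothetical), a request that extracts only the main content of a dynamic page might look like:

{
  "name": "firecrawl_scrape",
  "arguments": {
    "url": "https://example.com/pricing",
    "formats": ["markdown", "links"],
    "onlyMainContent": true,
    "excludeTags": ["nav", "footer"],
    "waitFor": 2000,
    "timeout": 30000,
    "location": { "country": "US", "languages": ["en"] }
  }
}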

Implementation Reference

  • The handler logic for the 'firecrawl_scrape' tool, implemented as a case in the CallToolRequestSchema handler's switch statement. It validates arguments, calls Firecrawl's scrapeUrl API, processes the requested output formats, logs performance metrics, and returns formatted content or an error.
    case 'firecrawl_scrape': {
      if (!isScrapeOptions(args)) {
        throw new Error('Invalid arguments for firecrawl_scrape');
      }
      const { url, ...options } = args;
      try {
        const scrapeStartTime = Date.now();
        safeLog(
          'info',
          `Starting scrape for URL: ${url} with options: ${JSON.stringify(options)}`
        );
    
        const response = await client.scrapeUrl(url, {
          ...options,
          // @ts-expect-error Extended API options including origin
          origin: 'mcp-server',
        });
    
        // Log performance metrics
        safeLog(
          'info',
          `Scrape completed in ${Date.now() - scrapeStartTime}ms`
        );
    
        if ('success' in response && !response.success) {
          throw new Error(response.error || 'Scraping failed');
        }
    
        // Default to markdown when no formats were requested. Note: this
        // must run before the format checks below, or the default would
        // never take effect.
        if (!options.formats || options.formats.length === 0) {
          options.formats = ['markdown'];
        }

        // Format content based on requested formats
        const contentParts = [];

        if (options.formats?.includes('markdown') && response.markdown) {
          contentParts.push(response.markdown);
        }
        if (options.formats?.includes('html') && response.html) {
          contentParts.push(response.html);
        }
        if (options.formats?.includes('rawHtml') && response.rawHtml) {
          contentParts.push(response.rawHtml);
        }
        if (options.formats?.includes('links') && response.links) {
          contentParts.push(response.links.join('\n'));
        }
        if (options.formats?.includes('screenshot') && response.screenshot) {
          contentParts.push(response.screenshot);
        }
        if (options.formats?.includes('extract') && response.extract) {
          contentParts.push(JSON.stringify(response.extract, null, 2));
        }
    
        // Log any warning returned by the API
        if (response.warning) {
          safeLog('warning', response.warning);
        }
    
        return {
          content: [
            {
              type: 'text',
              text: trimResponseText(
                contentParts.join('\n\n') || 'No content available'
              ),
            },
          ],
          isError: false,
        };
      } catch (error) {
        const errorMessage =
          error instanceof Error ? error.message : String(error);
        return {
          content: [{ type: 'text', text: trimResponseText(errorMessage) }],
          isError: true,
        };
      }
    }
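
The handler references two helpers, safeLog and trimResponseText, that are defined elsewhere in src/index.ts. Their exact bodies are not shown here; a minimal sketch of plausible stand-ins (assumptions, not the verbatim source) would be:

type LogLevel = 'info' | 'warning' | 'error';

// Log through the MCP logging channel, falling back to stderr if the
// channel is unavailable (assumed behavior).
function safeLog(level: LogLevel, message: string): void {
  try {
    server.sendLoggingMessage({ level, data: message });
  } catch {
    console.error(`[${level}] ${message}`);
  }
}

// Normalize whitespace before returning text to the client (assumed behavior).
function trimResponseText(text: string): string {
  return text.trim();
}
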
  • Complete Tool schema definition for 'firecrawl_scrape' including detailed description, inputSchema with all parameters like url, formats, actions, extract schema, etc.
    const SCRAPE_TOOL: Tool = {
      name: 'firecrawl_scrape',
      description: `
    Scrape content from a single URL with advanced options.
    
    **Best for:** Single page content extraction, when you know exactly which page contains the information.
    **Not recommended for:** Multiple pages (use batch_scrape), unknown page (use search), structured data (use extract).
    **Common mistakes:** Using scrape for a list of URLs (use batch_scrape instead).
    **Prompt Example:** "Get the content of the page at https://example.com."
    **Usage Example:**
    \`\`\`json
    {
      "name": "firecrawl_scrape",
      "arguments": {
        "url": "https://example.com",
        "formats": ["markdown"]
      }
    }
    \`\`\`
    **Returns:** Markdown, HTML, or other formats as specified.
    `,
      inputSchema: {
        type: 'object',
        properties: {
          url: {
            type: 'string',
            description: 'The URL to scrape',
          },
          formats: {
            type: 'array',
            items: {
              type: 'string',
              enum: [
                'markdown',
                'html',
                'rawHtml',
                'screenshot',
                'links',
                'screenshot@fullPage',
                'extract',
              ],
            },
            default: ['markdown'],
            description: "Content formats to extract (default: ['markdown'])",
          },
          onlyMainContent: {
            type: 'boolean',
            description:
              'Extract only the main content, filtering out navigation, footers, etc.',
          },
          includeTags: {
            type: 'array',
            items: { type: 'string' },
            description: 'HTML tags to specifically include in extraction',
          },
          excludeTags: {
            type: 'array',
            items: { type: 'string' },
            description: 'HTML tags to exclude from extraction',
          },
          waitFor: {
            type: 'number',
            description: 'Time in milliseconds to wait for dynamic content to load',
          },
          timeout: {
            type: 'number',
            description:
              'Maximum time in milliseconds to wait for the page to load',
          },
          actions: {
            type: 'array',
            items: {
              type: 'object',
              properties: {
                type: {
                  type: 'string',
                  enum: [
                    'wait',
                    'click',
                    'screenshot',
                    'write',
                    'press',
                    'scroll',
                    'scrape',
                    'executeJavascript',
                  ],
                  description: 'Type of action to perform',
                },
                selector: {
                  type: 'string',
                  description: 'CSS selector for the target element',
                },
                milliseconds: {
                  type: 'number',
                  description: 'Time to wait in milliseconds (for wait action)',
                },
                text: {
                  type: 'string',
                  description: 'Text to write (for write action)',
                },
                key: {
                  type: 'string',
                  description: 'Key to press (for press action)',
                },
                direction: {
                  type: 'string',
                  enum: ['up', 'down'],
                  description: 'Scroll direction',
                },
                script: {
                  type: 'string',
                  description: 'JavaScript code to execute',
                },
                fullPage: {
                  type: 'boolean',
                  description: 'Take full page screenshot',
                },
              },
              required: ['type'],
            },
            description: 'List of actions to perform before scraping',
          },
          extract: {
            type: 'object',
            properties: {
              schema: {
                type: 'object',
                description: 'Schema for structured data extraction',
              },
              systemPrompt: {
                type: 'string',
                description: 'System prompt for LLM extraction',
              },
              prompt: {
                type: 'string',
                description: 'User prompt for LLM extraction',
              },
            },
            description: 'Configuration for structured data extraction',
          },
          mobile: {
            type: 'boolean',
            description: 'Use mobile viewport',
          },
          skipTlsVerification: {
            type: 'boolean',
            description: 'Skip TLS certificate verification',
          },
          removeBase64Images: {
            type: 'boolean',
            description: 'Remove base64 encoded images from output',
          },
          location: {
            type: 'object',
            properties: {
              country: {
                type: 'string',
                description: 'Country code for geolocation',
              },
              languages: {
                type: 'array',
                items: { type: 'string' },
                description: 'Language codes for content',
              },
            },
            description: 'Location settings for scraping',
          },
        },
        required: ['url'],
      },
    };
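
To illustrate the actions schema above, a hypothetical call that dismisses a cookie banner, scrolls, and captures a full-page screenshot before scraping could pass:

{
  "name": "firecrawl_scrape",
  "arguments": {
    "url": "https://example.com/docs",
    "formats": ["markdown", "screenshot"],
    "actions": [
      { "type": "wait", "milliseconds": 1500 },
      { "type": "click", "selector": "#accept-cookies" },
      { "type": "scroll", "direction": "down" },
      { "type": "screenshot", "fullPage": true }
    ]
  }
}
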
  • src/index.ts:955-966 (registration)
    Registration of the firecrawl_scrape tool (SCRAPE_TOOL) in the listTools request handler, making it available to MCP clients.
    server.setRequestHandler(ListToolsRequestSchema, async () => ({
      tools: [
        SCRAPE_TOOL,
        MAP_TOOL,
        CRAWL_TOOL,
        CHECK_CRAWL_STATUS_TOOL,
        SEARCH_TOOL,
        EXTRACT_TOOL,
        DEEP_RESEARCH_TOOL,
        GENERATE_LLMSTXT_TOOL,
      ],
    }));
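
The case 'firecrawl_scrape' block shown earlier sits inside a CallToolRequestSchema handler whose surrounding structure is not reproduced above. A sketch of its assumed shape:

server.setRequestHandler(CallToolRequestSchema, async (request) => {
  const { name, arguments: args } = request.params;
  switch (name) {
    // case 'firecrawl_scrape': { ... }  // handler shown earlier
    // ...cases for the other registered tools...
    default:
      return {
        content: [{ type: 'text', text: `Unknown tool: ${name}` }],
        isError: true,
      };
  }
});
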
  • Type guard helper function used to validate arguments for the firecrawl_scrape handler.
    function isScrapeOptions(
      args: unknown
    ): args is ScrapeParams & { url: string } {
      return (
        typeof args === 'object' &&
        args !== null &&
        'url' in args &&
        typeof (args as { url: unknown }).url === 'string'
      );
    }
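
Note that the guard only validates url; all other options pass through unchecked. A stricter variant (hypothetical, not in the source) could also verify formats:

function isScrapeOptionsStrict(
  args: unknown
): args is ScrapeParams & { url: string } {
  if (!isScrapeOptions(args)) return false;
  // Additionally require `formats`, when present, to be an array of strings.
  const { formats } = args as { formats?: unknown };
  return (
    formats === undefined ||
    (Array.isArray(formats) && formats.every((f) => typeof f === 'string'))
  );
}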

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes what the tool does (scrapes content from a URL with advanced options), mentions output formats (markdown, HTML, etc.), and includes practical examples and common pitfalls. However, it lacks details on potential side effects like rate limits, authentication needs, or error handling, which are important for a scraping tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured and front-loaded with a clear purpose statement, followed by organized sections like 'Best for:', 'Not recommended for:', 'Common mistakes:', 'Prompt Example:', 'Usage Example:', and 'Returns:'. Each sentence adds value without redundancy, making it efficient and easy to scan for key information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (13 parameters, nested objects, no output schema, and no annotations), the description does a good job of providing context. It covers purpose, usage guidelines, examples, and return formats, which helps an agent understand when and how to use it. However, without annotations or output schema, it could benefit from more details on behavioral traits like error handling or performance constraints.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all 13 parameters. The description adds little parameter-specific information beyond it: the 'url' parameter is only implied by the prompt example, and output formats are mentioned in passing. This meets the baseline of 3; the schema does the heavy lifting, but the description does not significantly enhance parameter understanding.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose as 'Scrape content from a single URL with advanced options,' which is a specific verb+resource combination. It explicitly distinguishes from siblings by naming alternatives like 'batch_scrape' for multiple pages, 'search' for unknown pages, and 'extract' for structured data, making the distinction clear and actionable.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance with sections like 'Best for:' (single page content extraction when the page is known), 'Not recommended for:' (multiple pages, unknown pages, structured data), and 'Common mistakes:' (using it for a list of URLs). It names specific sibling tools as alternatives, offering clear when-to-use and when-not-to-use scenarios.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
