
Firecrawl MCP Server

by Krieg2065

firecrawl_extract

Extract structured data from web pages using LLM prompts and JSON schemas. Supports cloud and self-hosted AI for web content analysis.

Instructions

Extract structured information from web pages using LLM. Supports both cloud AI and self-hosted LLM extraction.

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| urls | Yes | List of URLs to extract information from | — |
| prompt | No | Prompt for the LLM extraction | — |
| systemPrompt | No | System prompt for LLM extraction | — |
| schema | No | JSON schema for structured data extraction | — |
| allowExternalLinks | No | Allow extraction from external links | — |
| enableWebSearch | No | Enable web search for additional context | — |
| includeSubdomains | No | Include subdomains in extraction | — |
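
For illustration, the arguments for a call might look like the following TypeScript object. The URL, prompt, and schema values here are hypothetical examples, not taken from the project:

    // Hypothetical example arguments for a firecrawl_extract call.
    // Only 'urls' is required; all other fields are optional.
    const exampleArgs = {
      urls: ['https://example.com/products/widget'],
      prompt: 'Extract the product name, price, and availability.',
      schema: {
        type: 'object',
        properties: {
          name: { type: 'string' },
          price: { type: 'number' },
          inStock: { type: 'boolean' },
        },
        required: ['name', 'price'],
      },
      allowExternalLinks: false,
      enableWebSearch: false,
    };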

Implementation Reference

  • Main handler for the 'firecrawl_extract' tool in the switch statement of the CallToolRequestSchema handler. It validates arguments with isExtractOptions, calls client.extract from FirecrawlApp, and handles success/error responses, credit tracking, and logging.
    case 'firecrawl_extract': {
      if (!isExtractOptions(args)) {
        throw new Error('Invalid arguments for firecrawl_extract');
      }
    
      try {
        const extractStartTime = Date.now();
    
        safeLog(
          'info',
          `Starting extraction for URLs: ${args.urls.join(', ')}`
        );
    
        // Log if using self-hosted instance
        if (FIRECRAWL_API_URL) {
          safeLog('info', 'Using self-hosted instance for extraction');
        }
    
        const extractResponse = await withRetry(
          async () =>
            client.extract(args.urls, {
              prompt: args.prompt,
              systemPrompt: args.systemPrompt,
              schema: args.schema,
              allowExternalLinks: args.allowExternalLinks,
              enableWebSearch: args.enableWebSearch,
              includeSubdomains: args.includeSubdomains,
              origin: 'mcp-server',
            } as ExtractParams),
          'extract operation'
        );
    
        // Type guard for successful response
        if (!('success' in extractResponse) || !extractResponse.success) {
          throw new Error(extractResponse.error || 'Extraction failed');
        }
    
        const response = extractResponse as ExtractResponse;
    
        // Monitor credits for cloud API
        if (!FIRECRAWL_API_URL && hasCredits(response)) {
          await updateCreditUsage(response.creditsUsed || 0);
        }
    
        // Log performance metrics
        safeLog(
          'info',
          `Extraction completed in ${Date.now() - extractStartTime}ms`
        );
    
        // Log any warning returned with the response
        const result = {
          content: [
            {
              type: 'text',
              text: trimResponseText(JSON.stringify(response.data, null, 2)),
            },
          ],
          isError: false,
        };
    
        if (response.warning) {
          safeLog('warning', response.warning);
        }
    
        return result;
      } catch (error) {
        const errorMessage =
          error instanceof Error ? error.message : String(error);
    
        // Special handling for self-hosted instance errors
        if (
          FIRECRAWL_API_URL &&
          errorMessage.toLowerCase().includes('not supported')
        ) {
          safeLog(
            'error',
            'Extraction is not supported by this self-hosted instance'
          );
          return {
            content: [
              {
                type: 'text',
                text: trimResponseText(
                  'Extraction is not supported by this self-hosted instance. Please ensure LLM support is configured.'
                ),
              },
            ],
            isError: true,
          };
        }
    
        return {
          content: [{ type: 'text', text: trimResponseText(errorMessage) }],
          isError: true,
        };
      }
    }
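
The handler depends on helpers such as withRetry, safeLog, hasCredits, updateCreditUsage, and trimResponseText that are defined elsewhere in src/index.ts and not shown here. As a rough sketch only, assuming a simple exponential-backoff policy (the actual implementation may differ), withRetry could look like:

    // Hypothetical sketch of the withRetry helper; the real implementation
    // in src/index.ts is not shown in this reference and may differ.
    async function withRetry<T>(
      operation: () => Promise<T>,
      context: string,
      maxRetries = 3
    ): Promise<T> {
      let lastError: unknown;
      for (let attempt = 1; attempt <= maxRetries; attempt++) {
        try {
          return await operation();
        } catch (error) {
          lastError = error;
          if (attempt === maxRetries) break;
          // Exponential backoff between attempts (assumed policy).
          const delayMs = 1000 * 2 ** (attempt - 1);
          await new Promise((resolve) => setTimeout(resolve, delayMs));
        }
      }
      throw lastError instanceof Error
        ? lastError
        : new Error(`${context} failed after ${maxRetries} attempts`);
    }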
  • Tool definition for 'firecrawl_extract' including name, description, and detailed inputSchema for parameters like urls, prompt, schema, etc.
    const EXTRACT_TOOL: Tool = {
      name: 'firecrawl_extract',
      description:
        'Extract structured information from web pages using LLM. ' +
        'Supports both cloud AI and self-hosted LLM extraction.',
      inputSchema: {
        type: 'object',
        properties: {
          urls: {
            type: 'array',
            items: { type: 'string' },
            description: 'List of URLs to extract information from',
          },
          prompt: {
            type: 'string',
            description: 'Prompt for the LLM extraction',
          },
          systemPrompt: {
            type: 'string',
            description: 'System prompt for LLM extraction',
          },
          schema: {
            type: 'object',
            description: 'JSON schema for structured data extraction',
          },
          allowExternalLinks: {
            type: 'boolean',
            description: 'Allow extraction from external links',
          },
          enableWebSearch: {
            type: 'boolean',
            description: 'Enable web search for additional context',
          },
          includeSubdomains: {
            type: 'boolean',
            description: 'Include subdomains in extraction',
          },
        },
        required: ['urls'],
      },
    };
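
The tool defines no outputSchema; results are returned as JSON-serialized text. Based solely on the fields the handler above reads (success, error, data, warning, creditsUsed), a minimal response shape can be inferred. This is an inference, not the Firecrawl SDK's full ExtractResponse type:

    // Minimal response shape inferred from the handler's usage; the actual
    // ExtractResponse type in the Firecrawl SDK may include more fields.
    interface MinimalExtractResponse {
      success: boolean;
      data?: unknown; // extracted structured data, serialized for the reply
      error?: string; // populated on failure
      warning?: string; // logged via safeLog when present
      creditsUsed?: number; // used for cloud-API credit tracking
    }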
  • src/index.ts:960-973 (registration)
    Registration of the 'firecrawl_extract' tool (as EXTRACT_TOOL) in the list of tools returned by ListToolsRequestSchema handler.
    server.setRequestHandler(ListToolsRequestSchema, async () => ({
      tools: [
        SCRAPE_TOOL,
        MAP_TOOL,
        CRAWL_TOOL,
        BATCH_SCRAPE_TOOL,
        CHECK_BATCH_STATUS_TOOL,
        CHECK_CRAWL_STATUS_TOOL,
        SEARCH_TOOL,
        EXTRACT_TOOL,
        DEEP_RESEARCH_TOOL,
        GENERATE_LLMSTXT_TOOL,
      ],
    }));
  • Type guard function 'isExtractOptions' used to validate input arguments for the firecrawl_extract handler, ensuring 'urls' is an array of strings. Note that as written the guard does not reject an empty array; see the note after the code.
    function isExtractOptions(args: unknown): args is ExtractArgs {
      if (typeof args !== 'object' || args === null) return false;
      const { urls } = args as { urls?: unknown };
      return (
        Array.isArray(urls) &&
        urls.every((url): url is string => typeof url === 'string')
      );
    }
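
The guard accepts an empty urls array, since Array.prototype.every returns true for an empty array. A stricter variant that also rejects empty lists, offered here as a suggestion rather than the project's actual code, could be:

    // Stricter variant (suggestion only): also rejects an empty urls array.
    function isExtractOptionsStrict(args: unknown): args is ExtractArgs {
      if (typeof args !== 'object' || args === null) return false;
      const { urls } = args as { urls?: unknown };
      return (
        Array.isArray(urls) &&
        urls.length > 0 &&
        urls.every((url): url is string => typeof url === 'string')
      );
    }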
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It states that the tool uses an LLM for extraction and supports different deployment modes, but it lacks critical details: it doesn't specify whether the operation is read-only or mutating, what rate limits apply, what authentication is required, how errors are handled, or what the output looks like (no output schema exists). For a tool with 7 parameters and no annotations, this is a significant gap.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately concise with two sentences that directly state the tool's core functionality and a key feature. It's front-loaded with the main purpose, and every sentence adds value (the second sentence clarifies deployment options). There's no wasted verbiage or redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity (7 parameters, no annotations, no output schema), the description is incomplete. It lacks output format details, error conditions, prerequisites (e.g., authentication), and behavioral constraints. For a tool that likely involves network calls and LLM processing, this leaves significant gaps for an agent to use it correctly.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all 7 parameters thoroughly. The description adds no meaning beyond the schema; it doesn't explain parameter interactions or default behaviors, and it gives no examples. With high schema coverage the baseline is 3, as the description doesn't compensate for gaps but also doesn't detract.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Extract structured information from web pages using LLM.' It specifies the verb ('extract'), the resource ('structured information from web pages'), and the method ('using LLM'). However, it doesn't explicitly differentiate the tool from siblings like 'firecrawl_scrape' or 'firecrawl_deep_research', which likely have overlapping functionality.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It mentions support for 'cloud AI and self-hosted LLM extraction,' but this is a feature detail, not usage context. There are no explicit when/when-not instructions or references to sibling tools, leaving the agent to infer usage scenarios.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
