Firecrawl MCP Server

by Jaycee1996

firecrawl_extract

Extract structured data like prices, names, and details from web pages using AI. Define custom schemas to retrieve specific information from URLs.

Instructions

Extract structured information from web pages using LLM capabilities. Supports both cloud AI and self-hosted LLM extraction.

Best for: Extracting specific structured data like prices, names, and details from web pages.

Not recommended for: When you need the full content of a page (use scrape instead), or when you are not looking for specific structured data.

Arguments:

  • urls: Array of URLs to extract information from

  • prompt: Custom prompt for the LLM extraction

  • schema: JSON schema for structured data extraction

  • allowExternalLinks: Allow extraction from external links

  • enableWebSearch: Enable web search for additional context

  • includeSubdomains: Include subdomains in extraction

Prompt Example: "Extract the product name, price, and description from these product pages."

Usage Example:

{ "name": "firecrawl_extract", "arguments": { "urls": ["https://example.com/page1", "https://example.com/page2"], "prompt": "Extract product information including name, price, and description", "schema": { "type": "object", "properties": { "name": { "type": "string" }, "price": { "type": "number" }, "description": { "type": "string" } }, "required": ["name", "price"] }, "allowExternalLinks": false, "enableWebSearch": false, "includeSubdomains": false } }

Returns: Extracted structured data as defined by your schema.
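
For orientation, here is a minimal sketch of invoking this tool from an MCP client. It assumes the @modelcontextprotocol/sdk TypeScript client over stdio; the server command (npx -y firecrawl-mcp), the API key, and the URLs are illustrative placeholders rather than values confirmed by this page.

import { Client } from '@modelcontextprotocol/sdk/client/index.js';
import { StdioClientTransport } from '@modelcontextprotocol/sdk/client/stdio.js';

async function main(): Promise<void> {
  // Launch the MCP server over stdio (command, args, and key are illustrative).
  const transport = new StdioClientTransport({
    command: 'npx',
    args: ['-y', 'firecrawl-mcp'],
    env: { ...(process.env as Record<string, string>), FIRECRAWL_API_KEY: 'fc-YOUR-KEY' },
  });

  const client = new Client({ name: 'example-client', version: '1.0.0' });
  await client.connect(transport);

  // Call firecrawl_extract with the same arguments as the usage example above.
  const result = await client.callTool({
    name: 'firecrawl_extract',
    arguments: {
      urls: ['https://example.com/page1'],
      prompt: 'Extract product information including name, price, and description',
      schema: {
        type: 'object',
        properties: {
          name: { type: 'string' },
          price: { type: 'number' },
          description: { type: 'string' },
        },
        required: ['name', 'price'],
      },
    },
  });

  // The tool returns the extracted data as JSON-formatted text.
  console.log(result.content);
  await client.close();
}

main().catch(console.error);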

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| urls | Yes | Array of URLs to extract information from | |
| prompt | No | Custom prompt for the LLM extraction | |
| schema | No | JSON schema for structured data extraction | |
| allowExternalLinks | No | Allow extraction from external links | |
| enableWebSearch | No | Enable web search for additional context | |
| includeSubdomains | No | Include subdomains in extraction | |
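
The JSON Schema view of these inputs is not reproduced above; an approximately equivalent schema, reconstructed from the zod parameters listed under Implementation Reference below, would be:

{
  "type": "object",
  "properties": {
    "urls": {
      "type": "array",
      "items": { "type": "string" }
    },
    "prompt": { "type": "string" },
    "schema": { "type": "object" },
    "allowExternalLinks": { "type": "boolean" },
    "enableWebSearch": { "type": "boolean" },
    "includeSubdomains": { "type": "boolean" }
  },
  "required": ["urls"]
}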

Implementation Reference

  • src/index.ts:550-618 (registration)
    Registration of the 'firecrawl_extract' tool using server.addTool, including description, schema, and handler.
    server.addTool({
      name: 'firecrawl_extract',
      description: `
    Extract structured information from web pages using LLM capabilities.
    Supports both cloud AI and self-hosted LLM extraction.

    **Best for:** Extracting specific structured data like prices, names, details from web pages.
    **Not recommended for:** When you need the full content of a page (use scrape); when you're not looking for specific structured data.
    **Arguments:**
    - urls: Array of URLs to extract information from
    - prompt: Custom prompt for the LLM extraction
    - schema: JSON schema for structured data extraction
    - allowExternalLinks: Allow extraction from external links
    - enableWebSearch: Enable web search for additional context
    - includeSubdomains: Include subdomains in extraction
    **Prompt Example:** "Extract the product name, price, and description from these product pages."
    **Usage Example:**
    \`\`\`json
    {
      "name": "firecrawl_extract",
      "arguments": {
        "urls": ["https://example.com/page1", "https://example.com/page2"],
        "prompt": "Extract product information including name, price, and description",
        "schema": {
          "type": "object",
          "properties": {
            "name": { "type": "string" },
            "price": { "type": "number" },
            "description": { "type": "string" }
          },
          "required": ["name", "price"]
        },
        "allowExternalLinks": false,
        "enableWebSearch": false,
        "includeSubdomains": false
      }
    }
    \`\`\`
    **Returns:** Extracted structured data as defined by your schema.
    `,
      parameters: z.object({
        urls: z.array(z.string()),
        prompt: z.string().optional(),
        schema: z.record(z.string(), z.any()).optional(),
        allowExternalLinks: z.boolean().optional(),
        enableWebSearch: z.boolean().optional(),
        includeSubdomains: z.boolean().optional(),
      }),
      execute: async (
        args: unknown,
        { session, log }: { session?: SessionData; log: Logger }
      ): Promise<string> => {
        const client = getClient(session);
        const a = args as Record<string, unknown>;
        log.info('Extracting from URLs', {
          count: Array.isArray(a.urls) ? a.urls.length : 0,
        });
        const extractBody = removeEmptyTopLevel({
          urls: a.urls as string[],
          prompt: a.prompt as string | undefined,
          schema: (a.schema as Record<string, unknown>) || undefined,
          allowExternalLinks: a.allowExternalLinks as boolean | undefined,
          enableWebSearch: a.enableWebSearch as boolean | undefined,
          includeSubdomains: a.includeSubdomains as boolean | undefined,
          origin: ORIGIN,
        });
        const res = await client.extract(extractBody as any);
        return asText(res);
      },
    });
  • Handler function that gets the Firecrawl client, prepares the extract body by removing empty top-level fields, calls client.extract(), and returns the JSON-stringified result. (The removeEmptyTopLevel helper it uses is not shown on this page; see the sketch after this list.)
    execute: async (
      args: unknown,
      { session, log }: { session?: SessionData; log: Logger }
    ): Promise<string> => {
      const client = getClient(session);
      const a = args as Record<string, unknown>;
      log.info('Extracting from URLs', {
        count: Array.isArray(a.urls) ? a.urls.length : 0,
      });
      const extractBody = removeEmptyTopLevel({
        urls: a.urls as string[],
        prompt: a.prompt as string | undefined,
        schema: (a.schema as Record<string, unknown>) || undefined,
        allowExternalLinks: a.allowExternalLinks as boolean | undefined,
        enableWebSearch: a.enableWebSearch as boolean | undefined,
        includeSubdomains: a.includeSubdomains as boolean | undefined,
        origin: ORIGIN,
      });
      const res = await client.extract(extractBody as any);
      return asText(res);
    },
  • Zod schema defining the input parameters: urls (required array), optional prompt, schema, allowExternalLinks, enableWebSearch, includeSubdomains.
    parameters: z.object({
      urls: z.array(z.string()),
      prompt: z.string().optional(),
      schema: z.record(z.string(), z.any()).optional(),
      allowExternalLinks: z.boolean().optional(),
      enableWebSearch: z.boolean().optional(),
      includeSubdomains: z.boolean().optional(),
    }),
  • Helper function that stringifies data as formatted JSON; used by the handler to return results.
    function asText(data: unknown): string {
      return JSON.stringify(data, null, 2);
    }
  • Helper function that creates and returns a FirecrawlApp client instance based on the session and environment variables. (The createClient helper it calls is not shown on this page; see the sketch after this list.)
    function getClient(session?: SessionData): FirecrawlApp {
      // For cloud service, API key is required
      if (process.env.CLOUD_SERVICE === 'true') {
        if (!session || !session.firecrawlApiKey) {
          throw new Error('Unauthorized');
        }
        return createClient(session.firecrawlApiKey);
      }
      // For self-hosted instances, API key is optional if FIRECRAWL_API_URL is provided
      if (
        !process.env.FIRECRAWL_API_URL &&
        (!session || !session.firecrawlApiKey)
      ) {
        throw new Error(
          'Unauthorized: API key is required when not using a self-hosted instance'
        );
      }
      return createClient(session?.firecrawlApiKey);
    }
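
Two helpers referenced in the code above, removeEmptyTopLevel and createClient, are not shown on this page. The sketches below are plausible reconstructions under stated assumptions, not the server's confirmed implementations. removeEmptyTopLevel evidently drops top-level keys whose values are unset, so optional arguments are omitted from the extract request body:

function removeEmptyTopLevel<T extends Record<string, unknown>>(
  obj: T
): Partial<T> {
  const out: Partial<T> = {};
  for (const [key, value] of Object.entries(obj)) {
    // Keep only keys that carry a real value; optional arguments the
    // caller left unset are omitted from the request entirely.
    if (value !== undefined && value !== null) {
      (out as Record<string, unknown>)[key] = value;
    }
  }
  return out;
}

createClient presumably wraps the FirecrawlApp constructor from @mendable/firecrawl-js and forwards FIRECRAWL_API_URL for self-hosted instances; the fallback to FIRECRAWL_API_KEY is likewise an assumption:

import FirecrawlApp from '@mendable/firecrawl-js';

function createClient(apiKey?: string): FirecrawlApp {
  return new FirecrawlApp({
    // Fall back to the environment when no per-session key is provided (assumed).
    apiKey: apiKey ?? process.env.FIRECRAWL_API_KEY,
    // Optional; only set when targeting a self-hosted Firecrawl instance.
    apiUrl: process.env.FIRECRAWL_API_URL,
  });
}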
