# firecrawl_extract
Extract structured data like prices, names, and details from web pages using AI. Define custom schemas to retrieve specific information from URLs.
## Instructions
Extract structured information from web pages using LLM capabilities. Supports both cloud AI and self-hosted LLM extraction.
**Best for:** Extracting specific structured data like prices, names, and details from web pages.

**Not recommended for:** When you need the full content of a page (use scrape), or when you're not looking for specific structured data.

**Arguments:**
- `urls`: Array of URLs to extract information from
- `prompt`: Custom prompt for the LLM extraction
- `schema`: JSON schema for structured data extraction
- `allowExternalLinks`: Allow extraction from external links
- `enableWebSearch`: Enable web search for additional context
- `includeSubdomains`: Include subdomains in extraction

**Prompt Example:** "Extract the product name, price, and description from these product pages."

**Usage Example:**
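```json
{
  "name": "firecrawl_extract",
  "arguments": {
    "urls": ["https://example.com/page1", "https://example.com/page2"],
    "prompt": "Extract product information including name, price, and description",
    "schema": {
      "type": "object",
      "properties": {
        "name": { "type": "string" },
        "price": { "type": "number" },
        "description": { "type": "string" }
      },
      "required": ["name", "price"]
    },
    "allowExternalLinks": false,
    "enableWebSearch": false,
    "includeSubdomains": false
  }
}
```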
**Returns:** Extracted structured data as defined by your schema.
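For the product schema in the usage example, the extracted object conforms to the schema you supplied. The values below are illustrative only, not real tool output:

```json
{
  "name": "Example Widget",
  "price": 19.99,
  "description": "A sample product description."
}
```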
## Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| urls | Yes | Array of URLs to extract information from | |
| prompt | No | Custom prompt for the LLM extraction | |
| schema | No | JSON schema for structured data extraction | |
| allowExternalLinks | No | Allow extraction from external links | |
| enableWebSearch | No | Enable web search for additional context | |
| includeSubdomains | No | Include subdomains in extraction | |
## Implementation Reference
- `src/index.ts:550-618` (registration) — registers the `firecrawl_extract` tool via `server.addTool`, including the description, parameter schema, and handler:

```ts
server.addTool({
  name: 'firecrawl_extract',
  description: `
Extract structured information from web pages using LLM capabilities.
Supports both cloud AI and self-hosted LLM extraction.

**Best for:** Extracting specific structured data like prices, names, details from web pages.
**Not recommended for:** When you need the full content of a page (use scrape); when you're not looking for specific structured data.
**Arguments:**
- urls: Array of URLs to extract information from
- prompt: Custom prompt for the LLM extraction
- schema: JSON schema for structured data extraction
- allowExternalLinks: Allow extraction from external links
- enableWebSearch: Enable web search for additional context
- includeSubdomains: Include subdomains in extraction
**Prompt Example:** "Extract the product name, price, and description from these product pages."
**Usage Example:**
\`\`\`json
{
  "name": "firecrawl_extract",
  "arguments": {
    "urls": ["https://example.com/page1", "https://example.com/page2"],
    "prompt": "Extract product information including name, price, and description",
    "schema": {
      "type": "object",
      "properties": {
        "name": { "type": "string" },
        "price": { "type": "number" },
        "description": { "type": "string" }
      },
      "required": ["name", "price"]
    },
    "allowExternalLinks": false,
    "enableWebSearch": false,
    "includeSubdomains": false
  }
}
\`\`\`
**Returns:** Extracted structured data as defined by your schema.
`,
  parameters: z.object({
    urls: z.array(z.string()),
    prompt: z.string().optional(),
    schema: z.record(z.string(), z.any()).optional(),
    allowExternalLinks: z.boolean().optional(),
    enableWebSearch: z.boolean().optional(),
    includeSubdomains: z.boolean().optional(),
  }),
  execute: async (
    args: unknown,
    { session, log }: { session?: SessionData; log: Logger }
  ): Promise<string> => {
    const client = getClient(session);
    const a = args as Record<string, unknown>;
    log.info('Extracting from URLs', {
      count: Array.isArray(a.urls) ? a.urls.length : 0,
    });
    const extractBody = removeEmptyTopLevel({
      urls: a.urls as string[],
      prompt: a.prompt as string | undefined,
      schema: (a.schema as Record<string, unknown>) || undefined,
      allowExternalLinks: a.allowExternalLinks as boolean | undefined,
      enableWebSearch: a.enableWebSearch as boolean | undefined,
      includeSubdomains: a.includeSubdomains as boolean | undefined,
      origin: ORIGIN,
    });
    const res = await client.extract(extractBody as any);
    return asText(res);
  },
});
```
- `src/index.ts:597-617` (handler) — gets the Firecrawl client, builds the extract body with `removeEmptyTopLevel` (dropping empty fields), calls `client.extract()`, and returns the JSON-stringified result:

```ts
execute: async (
  args: unknown,
  { session, log }: { session?: SessionData; log: Logger }
): Promise<string> => {
  const client = getClient(session);
  const a = args as Record<string, unknown>;
  log.info('Extracting from URLs', {
    count: Array.isArray(a.urls) ? a.urls.length : 0,
  });
  const extractBody = removeEmptyTopLevel({
    urls: a.urls as string[],
    prompt: a.prompt as string | undefined,
    schema: (a.schema as Record<string, unknown>) || undefined,
    allowExternalLinks: a.allowExternalLinks as boolean | undefined,
    enableWebSearch: a.enableWebSearch as boolean | undefined,
    includeSubdomains: a.includeSubdomains as boolean | undefined,
    origin: ORIGIN,
  });
  const res = await client.extract(extractBody as any);
  return asText(res);
},
```
- `src/index.ts:589-596` (schema) — Zod schema defining the input parameters: a required `urls` array plus optional `prompt`, `schema`, `allowExternalLinks`, `enableWebSearch`, and `includeSubdomains`:

```ts
parameters: z.object({
  urls: z.array(z.string()),
  prompt: z.string().optional(),
  schema: z.record(z.string(), z.any()).optional(),
  allowExternalLinks: z.boolean().optional(),
  enableWebSearch: z.boolean().optional(),
  includeSubdomains: z.boolean().optional(),
}),
```
- `src/index.ts:164-166` (helper) — stringifies data to formatted JSON; used by the handler to return results:

```ts
function asText(data: unknown): string {
  return JSON.stringify(data, null, 2);
}
```
- `src/index.ts:142-162` (helper) — creates and returns a `FirecrawlApp` client instance based on the session and environment variables:

```ts
function getClient(session?: SessionData): FirecrawlApp {
  // For cloud service, API key is required
  if (process.env.CLOUD_SERVICE === 'true') {
    if (!session || !session.firecrawlApiKey) {
      throw new Error('Unauthorized');
    }
    return createClient(session.firecrawlApiKey);
  }
  // For self-hosted instances, API key is optional if FIRECRAWL_API_URL is provided
  if (
    !process.env.FIRECRAWL_API_URL &&
    (!session || !session.firecrawlApiKey)
  ) {
    throw new Error(
      'Unauthorized: API key is required when not using a self-hosted instance'
    );
  }
  return createClient(session?.firecrawlApiKey);
}
```
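The handler above depends on a `removeEmptyTopLevel` helper that is not included in this excerpt. A minimal sketch of what it plausibly does, assuming it simply drops top-level keys whose values are `undefined` or `null` so that omitted optional arguments never reach the Firecrawl API:

```typescript
// Hypothetical reconstruction: removeEmptyTopLevel is not shown in the
// excerpt; this sketch assumes it strips undefined/null top-level keys.
function removeEmptyTopLevel<T extends Record<string, unknown>>(
  obj: T
): Partial<T> {
  const out: Partial<T> = {};
  for (const [key, value] of Object.entries(obj)) {
    if (value !== undefined && value !== null) {
      (out as Record<string, unknown>)[key] = value;
    }
  }
  return out;
}

// Omitted optional arguments disappear from the request body:
const body = removeEmptyTopLevel({
  urls: ['https://example.com'],
  prompt: undefined,
  schema: undefined,
  origin: 'mcp-server',
});
console.log(Object.keys(body)); // → ['urls', 'origin']
```

Cleaning the body before `client.extract()` keeps the request minimal and avoids sending explicit `undefined` values to the API.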