firecrawl_map
Discover all indexed URLs on a website to identify pages for scraping or locate specific sections of a site. Returns an array of found URLs.
Instructions
Map a website to discover all indexed URLs on the site.
**Best for:** Discovering URLs on a website before deciding what to scrape; finding specific sections of a website.

**Not recommended for:** When you already know which specific URL you need (use scrape or batch_scrape); when you need the content of the pages (use scrape after mapping).

**Common mistakes:** Using crawl to discover URLs instead of map.

**Prompt Example:** "List all URLs on example.com."

**Usage Example:**
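A minimal call, as shown in the tool's registration block in src/index.ts, maps a single site:

```json
{
  "name": "firecrawl_map",
  "arguments": {
    "url": "https://example.com"
  }
}
```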
Returns: Array of URLs found on the site.
Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| url | Yes | The URL of the site to map. Must be a valid URL string. | |
| search | No | Search term used to filter the discovered URLs (string). | |
| sitemap | No | How to use the site's sitemap during discovery: `include`, `skip`, or `only`. | |
| includeSubdomains | No | Whether to include URLs from subdomains (boolean). | |
| limit | No | Maximum number of URLs to return (number). | |
| ignoreQueryParameters | No | Whether to ignore query parameters when collecting URLs (boolean). | |
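Assuming the option semantics implied by the parameter names and the schema, a fuller call might combine the optional filters (the specific values here are purely illustrative):

```json
{
  "name": "firecrawl_map",
  "arguments": {
    "url": "https://example.com",
    "search": "docs",
    "sitemap": "include",
    "includeSubdomains": false,
    "limit": 100,
    "ignoreQueryParameters": true
  }
}
```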
Implementation Reference
- **src/index.ts:340-356 (handler)** — Handler function that executes the firecrawl_map tool by calling client.map() with the provided URL and options.

```typescript
execute: async (
  args: unknown,
  { session, log }: { session?: SessionData; log: Logger }
): Promise<string> => {
  const { url, ...options } = args as { url: string } & Record<
    string,
    unknown
  >;
  const client = getClient(session);
  const cleaned = removeEmptyTopLevel(options as Record<string, unknown>);
  log.info('Mapping URL', { url: String(url) });
  const res = await client.map(String(url), {
    ...cleaned,
    origin: ORIGIN,
  } as any);
  return asText(res);
},
```
- **src/index.ts:332-339 (schema)** — Zod schema defining the input parameters for the firecrawl_map tool.

```typescript
parameters: z.object({
  url: z.string().url(),
  search: z.string().optional(),
  sitemap: z.enum(['include', 'skip', 'only']).optional(),
  includeSubdomains: z.boolean().optional(),
  limit: z.number().optional(),
  ignoreQueryParameters: z.boolean().optional(),
}),
```
- **src/index.ts:312-357 (registration)** — Registration of the firecrawl_map tool via server.addTool, including name, description, parameters, and execute handler.

```typescript
server.addTool({
  name: 'firecrawl_map',
  description: `
Map a website to discover all indexed URLs on the site.

**Best for:** Discovering URLs on a website before deciding what to scrape; finding specific sections of a website.
**Not recommended for:** When you already know which specific URL you need (use scrape or batch_scrape); when you need the content of the pages (use scrape after mapping).
**Common mistakes:** Using crawl to discover URLs instead of map.
**Prompt Example:** "List all URLs on example.com."
**Usage Example:**
\`\`\`json
{
  "name": "firecrawl_map",
  "arguments": {
    "url": "https://example.com"
  }
}
\`\`\`
**Returns:** Array of URLs found on the site.
`,
  parameters: z.object({
    url: z.string().url(),
    search: z.string().optional(),
    sitemap: z.enum(['include', 'skip', 'only']).optional(),
    includeSubdomains: z.boolean().optional(),
    limit: z.number().optional(),
    ignoreQueryParameters: z.boolean().optional(),
  }),
  execute: async (
    args: unknown,
    { session, log }: { session?: SessionData; log: Logger }
  ): Promise<string> => {
    const { url, ...options } = args as { url: string } & Record<
      string,
      unknown
    >;
    const client = getClient(session);
    const cleaned = removeEmptyTopLevel(options as Record<string, unknown>);
    log.info('Mapping URL', { url: String(url) });
    const res = await client.map(String(url), {
      ...cleaned,
      origin: ORIGIN,
    } as any);
    return asText(res);
  },
});
```
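The handler strips unset options via removeEmptyTopLevel before forwarding them to client.map(). That helper's source is not part of this reference, so the version below is a hypothetical sketch of the behavior its name and usage suggest (dropping top-level keys that are undefined, null, or empty strings), not the repository's actual implementation:

```typescript
// Hypothetical sketch of removeEmptyTopLevel (the real helper lives elsewhere
// in the repo): drop top-level keys whose values are unset or empty strings,
// so the options object passed to client.map() contains only values the
// caller actually provided.
function removeEmptyTopLevel(
  obj: Record<string, unknown>
): Record<string, unknown> {
  const out: Record<string, unknown> = {};
  for (const [key, value] of Object.entries(obj)) {
    if (value === undefined || value === null) continue;
    if (typeof value === 'string' && value.length === 0) continue;
    out[key] = value;
  }
  return out;
}

// Only `search` survives: `limit` is undefined and `sitemap` is empty.
console.log(removeEmptyTopLevel({ search: 'docs', limit: undefined, sitemap: '' }));
```

This keeps the request payload minimal, so the Firecrawl API sees only explicitly set options rather than a batch of empty fields.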