
firecrawl_generate_llmstxt

Generate a standardized LLMs.txt file for any URL to define how large language models should interact with the website content.

Instructions

Generate standardized LLMs.txt file for a given URL, which provides context about how LLMs should interact with the website.

Input Schema

Name          Required  Description                                              Default
url           Yes       The URL to generate LLMs.txt from                        -
maxUrls       No        Maximum number of URLs to process (1-100)                10
showFullText  No        Whether to show the full LLMs-full.txt in the response   false
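
To make the schema concrete, here is a hedged sketch of calling the tool from a TypeScript MCP client. The launch command (npx firecrawl-mcp), API key, target URL, and option values are placeholders, not taken from this page; adjust them to however you run this server.

    // Sketch: invoking firecrawl_generate_llmstxt via @modelcontextprotocol/sdk.
    import { Client } from '@modelcontextprotocol/sdk/client/index.js';
    import { StdioClientTransport } from '@modelcontextprotocol/sdk/client/stdio.js';

    // Spawn the server over stdio (command and API key are placeholders).
    const transport = new StdioClientTransport({
      command: 'npx',
      args: ['-y', 'firecrawl-mcp'],
      env: { ...process.env, FIRECRAWL_API_KEY: 'fc-YOUR-KEY' } as Record<string, string>,
    });

    const mcpClient = new Client({ name: 'example-client', version: '1.0.0' });
    await mcpClient.connect(transport);

    // Call the tool with illustrative arguments matching the input schema.
    const result = await mcpClient.callTool({
      name: 'firecrawl_generate_llmstxt',
      arguments: {
        url: 'https://example.com', // required
        maxUrls: 10,                // optional, 1-100 (default 10)
        showFullText: false,        // optional, include LLMs-full.txt when true
      },
    });
    // The generated LLMs.txt comes back as text content in result.content.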

Implementation Reference

  • Handler for the firecrawl_generate_llmstxt tool. Validates arguments, calls client.generateLLMsText, then formats and returns the LLMs.txt content (a sketch of the withRetry helper it relies on follows this reference list).
    case 'firecrawl_generate_llmstxt': {
      if (!isGenerateLLMsTextOptions(args)) {
        throw new Error('Invalid arguments for firecrawl_generate_llmstxt');
      }
      try {
        const { url, ...params } = args;
        const generateStartTime = Date.now();
        safeLog('info', `Starting LLMs.txt generation for URL: ${url}`);

        // Start the generation process
        const response = await withRetry(
          async () =>
            // @ts-expect-error Extended API options including origin
            client.generateLLMsText(url, { ...params, origin: 'mcp-server' }),
          'LLMs.txt generation'
        );

        if (!response.success) {
          throw new Error(response.error || 'LLMs.txt generation failed');
        }

        // Log performance metrics
        safeLog(
          'info',
          `LLMs.txt generation completed in ${Date.now() - generateStartTime}ms`
        );

        // Format the response
        let resultText = '';
        if ('data' in response) {
          resultText = `LLMs.txt content:\n\n${response.data.llmstxt}`;
          if (args.showFullText && response.data.llmsfulltxt) {
            resultText += `\n\nLLMs-full.txt content:\n\n${response.data.llmsfulltxt}`;
          }
        }

        return {
          content: [{ type: 'text', text: trimResponseText(resultText) }],
          isError: false,
        };
      } catch (error) {
        const errorMessage =
          error instanceof Error ? error.message : String(error);
        return {
          content: [{ type: 'text', text: trimResponseText(errorMessage) }],
          isError: true,
        };
      }
    }
  • Tool schema definition including name, description, and inputSchema for firecrawl_generate_llmstxt.
    const GENERATE_LLMSTXT_TOOL: Tool = {
      name: 'firecrawl_generate_llmstxt',
      description:
        'Generate standardized LLMs.txt file for a given URL, which provides context about how LLMs should interact with the website.',
      inputSchema: {
        type: 'object',
        properties: {
          url: {
            type: 'string',
            description: 'The URL to generate LLMs.txt from',
          },
          maxUrls: {
            type: 'number',
            description: 'Maximum number of URLs to process (1-100, default: 10)',
          },
          showFullText: {
            type: 'boolean',
            description: 'Whether to show the full LLMs-full.txt in the response',
          },
        },
        required: ['url'],
      },
    };
  • src/index.ts:960-973 (registration)
    Registration of the firecrawl_generate_llmstxt tool (as GENERATE_LLMSTXT_TOOL) in the listTools request handler; a sketch of the companion call-tool dispatch follows this reference list.
    server.setRequestHandler(ListToolsRequestSchema, async () => ({
      tools: [
        SCRAPE_TOOL,
        MAP_TOOL,
        CRAWL_TOOL,
        BATCH_SCRAPE_TOOL,
        CHECK_BATCH_STATUS_TOOL,
        CHECK_CRAWL_STATUS_TOOL,
        SEARCH_TOOL,
        EXTRACT_TOOL,
        DEEP_RESEARCH_TOOL,
        GENERATE_LLMSTXT_TOOL,
      ],
    }));
  • Type guard helper function that validates arguments for the firecrawl_generate_llmstxt tool (a short usage example follows this reference list).
    function isGenerateLLMsTextOptions(
      args: unknown
    ): args is { url: string } & Partial<GenerateLLMsTextParams> {
      return (
        typeof args === 'object' &&
        args !== null &&
        'url' in args &&
        typeof (args as { url: unknown }).url === 'string'
      );
    }
  • TypeScript interface defining the LLMs.txt generation parameters used by the tool (a sketch of the equivalent direct SDK call follows this reference list).
    interface GenerateLLMsTextParams {
      /**
       * Maximum number of URLs to process (1-100)
       * @default 10
       */
      maxUrls?: number;
      /**
       * Whether to show the full LLMs-full.txt in the response
       * @default false
       */
      showFullText?: boolean;
      /**
       * Experimental flag for streaming
       */
      __experimental_stream?: boolean;
    }
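
The handler above wraps its Firecrawl call in a withRetry helper that is not reproduced on this page. A minimal, hypothetical sketch of such a wrapper is shown below; the real implementation in src/index.ts may use different attempt counts, delays, and rate-limit handling.

    // Hypothetical sketch of a retry wrapper matching the call site in the handler.
    async function withRetry<T>(
      operation: () => Promise<T>,
      context: string,
      maxAttempts = 3
    ): Promise<T> {
      let lastError: unknown;
      for (let attempt = 1; attempt <= maxAttempts; attempt++) {
        try {
          return await operation();
        } catch (error) {
          lastError = error;
          if (attempt < maxAttempts) {
            // Simple exponential backoff between attempts
            await new Promise((resolve) => setTimeout(resolve, 1000 * 2 ** (attempt - 1)));
          }
        }
      }
      throw new Error(`${context} failed after ${maxAttempts} attempts: ${String(lastError)}`);
    }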
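
Registration only advertises the tool; execution goes through a separate CallToolRequestSchema handler whose switch contains the case shown earlier. The simplified sketch below shows that wiring only; the real handler covers every registered tool, and the placeholder return stands in for the elided case body.

    // Simplified sketch of the companion call-tool dispatch, defined alongside
    // the listTools registration above.
    server.setRequestHandler(CallToolRequestSchema, async (request) => {
      const { name } = request.params;
      switch (name) {
        case 'firecrawl_generate_llmstxt':
          // request.params.arguments is validated and processed by the handler
          // excerpt shown earlier; a placeholder return stands in for it here.
          return { content: [{ type: 'text', text: '...' }], isError: false };
        default:
          return {
            content: [{ type: 'text', text: `Unknown tool: ${name}` }],
            isError: true,
          };
      }
    });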
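
For completeness, a quick illustration of how the type guard narrows the raw arguments before the handler reads url (the values are placeholders):

    // After the guard passes, TypeScript treats rawArgs as
    // { url: string } & Partial<GenerateLLMsTextParams>.
    const rawArgs: unknown = { url: 'https://example.com', maxUrls: 5 };
    if (isGenerateLLMsTextOptions(rawArgs)) {
      console.log(rawArgs.url);     // typed as string
      console.log(rawArgs.maxUrls); // typed as number | undefined
    }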
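
These parameters are forwarded to the Firecrawl client that the server wraps. The sketch below shows the equivalent direct SDK call, assuming the Firecrawl JS SDK (@mendable/firecrawl-js); the API key and target URL are placeholders, and the response fields mirror what the handler reads (data.llmstxt and data.llmsfulltxt).

    // Sketch of the underlying SDK call the MCP handler wraps.
    import FirecrawlApp from '@mendable/firecrawl-js';

    const firecrawl = new FirecrawlApp({ apiKey: process.env.FIRECRAWL_API_KEY ?? '' });
    const res = await firecrawl.generateLLMsText('https://example.com', {
      maxUrls: 10,
      showFullText: true,
    });
    if (res.success && 'data' in res) {
      console.log(res.data.llmstxt);     // the generated LLMs.txt
      console.log(res.data.llmsfulltxt); // full text, when requested
    }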

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/Krieg2065/firecrawl-mcp-server'
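
The same lookup can be done programmatically, for example with the built-in fetch available in Node 18+ or the browser:

    // Fetch this server's MCP directory entry as JSON.
    const response = await fetch(
      'https://glama.ai/api/mcp/v1/servers/Krieg2065/firecrawl-mcp-server'
    );
    const entry = await response.json();
    console.log(entry);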

If you have feedback or need assistance with the MCP directory API, please join our Discord server.