
MCP Server for Crawl4AI

by omgwtfwow

crawl

Crawl web pages with browser persistence for multi-step interactions like form filling and JavaScript execution, maintaining state across sessions.

Instructions

[SUPPORTS SESSIONS] THE ONLY TOOL WITH BROWSER PERSISTENCE

RECOMMENDED PATTERNS:

• Inspect-first workflow:

  1. get_html(url) → find selectors & verify elements exist

  2. create_session() → "session-123"

  3. crawl({url, session_id: "session-123", js_code: ["action 1"]})

  4. crawl({url: "/page2", session_id: "session-123", js_code: ["action 2"]})

• Multi-step with state:

  1. create_session() → "session-123"

  2. crawl({url, session_id: "session-123"}) → inspect current state

  3. crawl({url, session_id: "session-123", js_code: ["verified actions"]})
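As concrete payloads, the multi-step pattern above might look like the following sequence (the URL, session ID, and the `tool`/`arguments` wrapper are illustrative — your MCP client supplies its own call format):

```json
[
  { "tool": "create_session", "arguments": {} },
  { "tool": "crawl", "arguments": { "url": "https://example.com/login", "session_id": "session-123" } },
  { "tool": "crawl", "arguments": {
      "url": "https://example.com/login",
      "session_id": "session-123",
      "js_code": ["if (document.querySelector('form')) document.querySelector('form').submit();"],
      "screenshot": true
  } }
]
```

The second call inspects the page state before the third call runs verified actions in the same browser.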

WITH session_id: Maintains browser state (cookies, localStorage, page) across calls.
WITHOUT session_id: Creates a fresh browser each time (like the other tools).

WHEN TO USE SESSIONS vs STATELESS:
• Need state between calls? → create_session + crawl
• Just extracting data? → Use stateless tools
• Filling forms? → Inspect first, then use sessions
• Taking a screenshot after JS? → Must use crawl with a session
• Unsure if elements exist? → Always use get_html first

CRITICAL FOR js_code: Always use screenshot: true when running js_code. This avoids server serialization errors and gives visual confirmation of the result.
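For example, a first call that runs JavaScript can combine js_code with screenshot: true (URL, session ID, and selectors are illustrative):

```json
{
  "url": "https://example.com/login",
  "session_id": "session-123",
  "js_code": [
    "const emailInput = document.querySelector('input[name=\"email\"]');",
    "if (emailInput) emailInput.value = 'user@example.com';"
  ],
  "screenshot": true
}
```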

Input Schema

All parameters are optional except url; where a parameter has a default, it is noted inline.

• url (required): The URL to crawl
• session_id: ENABLES PERSISTENCE: Use the SAME ID across all crawl calls to maintain browser state.
  • First call with ID: Creates a persistent browser
  • Subsequent calls with the SAME ID: Reuses the browser with all state intact
  • Different/no ID: Fresh browser (stateless)
  WARNING: ONLY works with the crawl tool - other tools ignore this parameter
• browser_type: Browser engine for crawling. Chromium offers the best compatibility, Firefox suits specific use cases, WebKit gives Safari-like behavior. Default: chromium
• viewport_width: Browser window width in pixels. Affects responsive layouts and content visibility
• viewport_height: Browser window height in pixels. Impacts content loading and screenshot dimensions
• user_agent: Custom browser identity. Use for: mobile sites (include "Mobile"), avoiding bot detection, or specific browser requirements. Example: "Mozilla/5.0 (iPhone...)"
• proxy_server: Proxy server URL (e.g., "http://proxy.example.com:8080")
• proxy_username: Proxy authentication username
• proxy_password: Proxy authentication password
• cookies: Pre-set cookies for authentication or personalization
• headers: Custom HTTP headers for API keys, auth tokens, or specific server requirements
• word_count_threshold: Minimum words per text block. Filters out menus, footers, and short snippets. Lower = more content but more noise. Higher = only substantial paragraphs
• excluded_tags: HTML tags to remove completely. Common: ["nav", "footer", "aside", "script", "style"]. Cleans up content before extraction
• remove_overlay_elements: Automatically remove popups, modals, and overlays that obscure content
• js_code: JavaScript to execute. Each string runs separately. Use return to get values. IMPORTANT: Always verify elements exist before acting on them! Use get_html first to find correct selectors, then:
  GOOD: ["if (document.querySelector('input[name=\"email\"]')) { ... }"]
  BAD: ["document.querySelector('input[name=\"email\"]').value = '...'"]
  USAGE PATTERNS:
  1. WITH screenshot/pdf: {js_code: [...], screenshot: true} ✓
  2. MULTI-STEP: First {js_code: [...], session_id: "x"}, then {js_only: true, session_id: "x"}
  3. AVOID: {js_code: [...], js_only: true} on the first call ✗
  SELECTOR TIPS: Use get_html first to find:
  • name="..." (best for forms)
  • id="..." (if unique)
  • class="..." (careful, may repeat)
  FORM EXAMPLE WITH VERIFICATION: ["const emailInput = document.querySelector('input[name=\"email\"]');", "if (emailInput) emailInput.value = 'user@example.com';", "const submitBtn = document.querySelector('button[type=\"submit\"]');", "if (submitBtn) submitBtn.click();"]
• js_only: FOR SUBSEQUENT CALLS ONLY: Reuse an existing session without navigation. First call: use js_code WITHOUT js_only (or with screenshot/pdf). Later calls: use js_only=true to run more JS in the same session. ERROR: Using js_only=true on the first call causes server errors
• wait_for: Wait for an element that loads AFTER the initial page load. Format: "css:.selector" or "js:() => condition"
  WHEN TO USE: dynamic content that loads after the page (AJAX, lazy load); elements that appear after animations/transitions; content loaded by JavaScript frameworks
  WHEN NOT TO USE: elements already in the initial HTML (forms, static content); standard page elements (just use wait_until: "load"). Can cause timeouts/errors if the element already exists!
  SELECTOR TIPS: Use get_html first to check whether the element exists. Examples: "css:.ajax-content", "js:() => document.querySelector('.lazy-loaded')"
• wait_for_timeout: Maximum milliseconds to wait for the condition
• delay_before_scroll: Milliseconds to wait before scrolling. Allows initial content to render
• scroll_delay: Milliseconds between scroll steps for lazy-loaded content
• process_iframes: Extract content from embedded iframes, including videos and forms
• exclude_external_links: Remove links pointing to different domains for cleaner content
• screenshot: Capture a full-page screenshot as base64 PNG
• screenshot_directory: Directory path to save the screenshot (e.g., ~/Desktop, /tmp). Do NOT include a filename - it will be auto-generated. Large screenshots (>800KB) won't be returned inline when saved
• pdf: Generate a PDF as base64, preserving exact layout
• cache_mode: Cache strategy. ENABLED: use cache if available. BYPASS: fetch fresh (recommended). DISABLED: no cache. Default: BYPASS
• timeout: Overall request timeout in milliseconds
• verbose: Enable server-side debug logging (not shown in output). Only for troubleshooting; does not affect extraction results
• wait_until: When to consider the page loaded (use INSTEAD of wait_for for the initial load): "domcontentloaded" (default): fast, DOM ready, use for forms/static content; "load": all resources loaded, use if you need images; "networkidle": wait for network quiet, use for heavy JS apps. WARNING: Don't use wait_for for elements in the initial HTML! Default: domcontentloaded
• page_timeout: Page navigation timeout in milliseconds
• wait_for_images: Wait for all images to load before extraction
• ignore_body_visibility: Skip checking whether the body element is visible
• scan_full_page: Auto-scroll the entire page to trigger lazy loading. WARNING: can be slow on long pages. Avoid combining with wait_until: "networkidle" or CSS extraction on dynamic sites. Better to use virtual_scroll_config for infinite feeds
• remove_forms: Remove all form elements from extracted content
• keep_data_attributes: Preserve data-* attributes in cleaned HTML
• excluded_selector: CSS selector for elements to remove. Comma-separate multiple selectors.
  SELECTOR STRATEGY: Use get_html first to inspect page structure. Look for: id attributes (e.g., #cookie-banner); CSS classes (e.g., .advertisement, .popup); data-* attributes (e.g., [data-type="ad"]); element type + attributes (e.g., div[role="banner"]). Example: "#cookie-banner, .advertisement, .social-share"
• only_text: Extract only text content, no HTML structure
• image_description_min_word_threshold: Minimum words for image alt text to be considered valid
• image_score_threshold: Minimum relevance score for images (filters low-quality images)
• exclude_external_images: Exclude images from external domains
• screenshot_wait_for: Extra wait time in seconds before taking the screenshot
• exclude_social_media_links: Remove links to social media platforms
• exclude_domains: List of domains to exclude from links (e.g., ["ads.com", "tracker.io"])
• simulate_user: Mimic human behavior with random mouse movements and delays. Helps bypass bot detection on protected sites. Slows crawling but improves success rate
• override_navigator: Override navigator properties for stealth
• magic: EXPERIMENTAL: Auto-handles popups, cookies, and overlays. Use as a LAST RESORT - can conflict with wait_for and CSS extraction. Try first: remove_overlay_elements, excluded_selector. Avoid with: CSS extraction, precise timing needs
• virtual_scroll_config: For infinite-scroll sites that REPLACE content (Twitter/Instagram feeds). USE when content disappears as you scroll (virtual scrolling). DON'T USE when content appends (use scan_full_page instead). Example: {container_selector: "#timeline", scroll_count: 10, wait_after_scroll: 1}
• log_console: Capture browser console logs for debugging
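Putting several of these parameters together, a typical stateless extraction call might look like this (all values are illustrative):

```json
{
  "url": "https://example.com/article",
  "excluded_tags": ["nav", "footer", "aside", "script", "style"],
  "remove_overlay_elements": true,
  "word_count_threshold": 10,
  "cache_mode": "BYPASS",
  "wait_until": "domcontentloaded",
  "timeout": 30000
}
```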

Implementation Reference

  • The core handler function that implements the 'crawl' tool logic. It processes a wide range of input options, builds browser and crawler configurations, calls the underlying Crawl4AIService.crawl(), handles various response types (markdown, screenshot base64, PDF, metadata, links, JS results), saves screenshots locally if requested, and constructs the MCP-standard content array response.
    async crawl(options: Record<string, unknown>) {
      try {
        // Ensure options is an object
        if (!options || typeof options !== 'object') {
          throw new Error('crawl requires options object with at least a url parameter');
        }

        // Build browser_config
        const browser_config: Record<string, unknown> = {
          headless: true, // Always true as noted
        };
        if (options.browser_type) browser_config.browser_type = options.browser_type;
        if (options.viewport_width) browser_config.viewport_width = options.viewport_width;
        if (options.viewport_height) browser_config.viewport_height = options.viewport_height;
        if (options.user_agent) browser_config.user_agent = options.user_agent;
        if (options.headers) browser_config.headers = options.headers;
        if (options.cookies) browser_config.cookies = options.cookies;

        // Handle proxy configuration - support both unified and legacy formats
        if (options.proxy) {
          // New unified format (0.7.3/0.7.4)
          browser_config.proxy = options.proxy;
        } else if (options.proxy_server) {
          // Legacy format for backward compatibility
          browser_config.proxy_config = {
            server: options.proxy_server,
            username: options.proxy_username,
            password: options.proxy_password,
          };
        }

        // Build crawler_config
        const crawler_config: Record<string, unknown> = {};

        // Content filtering
        if (options.word_count_threshold !== undefined) crawler_config.word_count_threshold = options.word_count_threshold;
        if (options.excluded_tags) crawler_config.excluded_tags = options.excluded_tags;
        if (options.remove_overlay_elements) crawler_config.remove_overlay_elements = options.remove_overlay_elements;

        // JavaScript execution
        if (options.js_code !== undefined && options.js_code !== null) {
          // If js_code is an array, join it with newlines for the server
          crawler_config.js_code = Array.isArray(options.js_code) ? options.js_code.join('\n') : options.js_code;
        } else if (options.js_code === null) {
          // If js_code is explicitly null, throw a helpful error
          throw new Error('js_code parameter is null. Please provide JavaScript code as a string or array of strings.');
        }
        if (options.wait_for) crawler_config.wait_for = options.wait_for;
        if (options.wait_for_timeout) crawler_config.wait_for_timeout = options.wait_for_timeout;

        // Dynamic content
        if (options.delay_before_scroll) crawler_config.delay_before_scroll = options.delay_before_scroll;
        if (options.scroll_delay) crawler_config.scroll_delay = options.scroll_delay;

        // Content processing
        if (options.process_iframes) crawler_config.process_iframes = options.process_iframes;
        if (options.exclude_external_links) crawler_config.exclude_external_links = options.exclude_external_links;

        // Export options
        if (options.screenshot) crawler_config.screenshot = options.screenshot;
        if (options.pdf) crawler_config.pdf = options.pdf;

        // Session and cache
        if (options.session_id) {
          crawler_config.session_id = options.session_id;
          // Update session last_used time
          const session = this.sessions.get(String(options.session_id));
          if (session) {
            session.last_used = new Date();
          }
        }
        if (options.cache_mode) crawler_config.cache_mode = String(options.cache_mode).toLowerCase();

        // Performance
        if (options.timeout) crawler_config.timeout = options.timeout;
        if (options.verbose) crawler_config.verbose = options.verbose;

        // Additional crawler parameters
        if (options.wait_until) crawler_config.wait_until = options.wait_until;
        if (options.page_timeout) crawler_config.page_timeout = options.page_timeout;
        if (options.wait_for_images) crawler_config.wait_for_images = options.wait_for_images;
        if (options.ignore_body_visibility) crawler_config.ignore_body_visibility = options.ignore_body_visibility;
        if (options.scan_full_page) crawler_config.scan_full_page = options.scan_full_page;
        if (options.remove_forms) crawler_config.remove_forms = options.remove_forms;
        if (options.keep_data_attributes) crawler_config.keep_data_attributes = options.keep_data_attributes;
        if (options.excluded_selector) crawler_config.excluded_selector = options.excluded_selector;
        if (options.only_text) crawler_config.only_text = options.only_text;

        // Media handling
        if (options.image_description_min_word_threshold !== undefined) crawler_config.image_description_min_word_threshold = options.image_description_min_word_threshold;
        if (options.image_score_threshold !== undefined) crawler_config.image_score_threshold = options.image_score_threshold;
        if (options.exclude_external_images) crawler_config.exclude_external_images = options.exclude_external_images;
        if (options.screenshot_wait_for !== undefined) crawler_config.screenshot_wait_for = options.screenshot_wait_for;

        // Link filtering
        if (options.exclude_social_media_links) crawler_config.exclude_social_media_links = options.exclude_social_media_links;
        if (options.exclude_domains) crawler_config.exclude_domains = options.exclude_domains;

        // Page interaction
        if (options.js_only) crawler_config.js_only = options.js_only;
        if (options.simulate_user) crawler_config.simulate_user = options.simulate_user;
        if (options.override_navigator) crawler_config.override_navigator = options.override_navigator;
        if (options.magic) crawler_config.magic = options.magic;

        // Virtual scroll
        if (options.virtual_scroll_config) crawler_config.virtual_scroll_config = options.virtual_scroll_config;

        // Cache control
        if (options.cache_mode) crawler_config.cache_mode = options.cache_mode;

        // Other
        if (options.log_console) crawler_config.log_console = options.log_console;
        if (options.capture_mhtml) crawler_config.capture_mhtml = options.capture_mhtml;

        // New parameters from 0.7.3/0.7.4
        if (options.delay_before_return_html) crawler_config.delay_before_return_html = options.delay_before_return_html;
        if (options.css_selector) crawler_config.css_selector = options.css_selector;
        if (options.include_links !== undefined) crawler_config.include_links = options.include_links;
        if (options.resolve_absolute_urls !== undefined) crawler_config.resolve_absolute_urls = options.resolve_absolute_urls;

        // Call service with proper configuration
        const crawlConfig: AdvancedCrawlConfig = {
          url: options.url ? String(options.url) : undefined,
          crawler_config,
        };

        // Add extraction strategy passthrough objects if provided
        if (options.extraction_strategy) crawlConfig.extraction_strategy = options.extraction_strategy as ExtractionStrategy;
        if (options.table_extraction_strategy) crawlConfig.table_extraction_strategy = options.table_extraction_strategy as TableExtractionStrategy;
        if (options.markdown_generator_options) crawlConfig.markdown_generator_options = options.markdown_generator_options as MarkdownGeneratorOptions;

        // Only include browser_config if we're not using a session
        if (!options.session_id) {
          crawlConfig.browser_config = browser_config;
        }

        const response: CrawlEndpointResponse = await this.service.crawl(crawlConfig);

        // Validate response structure
        if (!response || !response.results || response.results.length === 0) {
          throw new Error('Invalid response from server: no results received');
        }
        const result: CrawlResultItem = response.results[0];

        // Build response content
        const content = [];

        // Main content - use markdown.raw_markdown as primary content
        let mainContent = 'No content extracted';
        if (result.extracted_content) {
          // Handle extraction results which might be objects or strings
          if (typeof result.extracted_content === 'string') {
            mainContent = result.extracted_content;
          } else if (typeof result.extracted_content === 'object') {
            mainContent = JSON.stringify(result.extracted_content, null, 2);
          }
        } else if (result.markdown?.raw_markdown) {
          mainContent = result.markdown.raw_markdown;
        } else if (result.html) {
          mainContent = result.html;
        } else if (result.fit_html) {
          mainContent = result.fit_html;
        }
        content.push({
          type: 'text',
          text: mainContent,
        });

        // Screenshot if available
        if (result.screenshot) {
          // Save to local directory if requested
          let savedFilePath: string | undefined;
          if (options.screenshot_directory && typeof options.screenshot_directory === 'string') {
            try {
              // Resolve home directory path
              let screenshotDir = options.screenshot_directory;
              if (screenshotDir.startsWith('~')) {
                const homedir = os.homedir();
                screenshotDir = path.join(homedir, screenshotDir.slice(1));
              }
              // Check if user provided a file path instead of directory
              if (screenshotDir.endsWith('.png') || screenshotDir.endsWith('.jpg')) {
                console.warn(
                  `Warning: screenshot_directory should be a directory path, not a file path. Using parent directory.`,
                );
                screenshotDir = path.dirname(screenshotDir);
              }
              // Ensure directory exists
              await fs.mkdir(screenshotDir, { recursive: true });
              // Generate filename from URL and timestamp
              const url = new URL(String(options.url));
              const hostname = url.hostname.replace(/[^a-z0-9]/gi, '-');
              const timestamp = new Date().toISOString().replace(/[:.]/g, '-').slice(0, -5);
              const filename = `${hostname}-${timestamp}.png`;
              savedFilePath = path.join(screenshotDir, filename);
              // Convert base64 to buffer and save
              const buffer = Buffer.from(result.screenshot, 'base64');
              await fs.writeFile(savedFilePath, buffer);
            } catch (saveError) {
              // Log error but don't fail the operation
              console.error('Failed to save screenshot locally:', saveError);
            }
          }

          // If saved locally and screenshot is large (>800KB), don't return the base64 data
          const screenshotSize = Buffer.from(result.screenshot, 'base64').length;
          const shouldReturnImage = !savedFilePath || screenshotSize < 800 * 1024; // 800KB threshold
          if (shouldReturnImage) {
            content.push({
              type: 'image',
              data: result.screenshot,
              mimeType: 'image/png',
            });
          }
          if (savedFilePath) {
            const sizeInfo = !shouldReturnImage
              ? ` (${Math.round(screenshotSize / 1024)}KB - too large to display inline)`
              : '';
            content.push({
              type: 'text',
              text: `\n---\nScreenshot saved to: ${savedFilePath}${sizeInfo}`,
            });
          }
        }

        // PDF if available
        if (result.pdf) {
          content.push({
            type: 'resource',
            resource: {
              uri: `data:application/pdf;name=${encodeURIComponent(new URL(String(options.url)).hostname)}.pdf;base64,${result.pdf}`,
              mimeType: 'application/pdf',
              blob: result.pdf,
            },
          });
        }

        // Metadata
        if (result.metadata) {
          content.push({
            type: 'text',
            text: `\n---\nMetadata: ${JSON.stringify(result.metadata, null, 2)}`,
          });
        }

        // Links
        if (result.links && (result.links.internal.length > 0 || result.links.external.length > 0)) {
          content.push({
            type: 'text',
            text: `\n---\nLinks: Internal: ${result.links.internal.length}, External: ${result.links.external.length}`,
          });
        }

        // JS execution results if available
        if (result.js_execution_result && result.js_execution_result.results.length > 0) {
          const jsResults = result.js_execution_result.results
            .map((res: unknown, idx: number) => {
              return `Result ${idx + 1}: ${JSON.stringify(res, null, 2)}`;
            })
            .join('\n');
          content.push({
            type: 'text',
            text: `\n---\nJavaScript Execution Results:\n${jsResults}`,
          });
        }

        // Add memory metrics if available
        if (response.server_memory_delta_mb !== undefined || response.server_peak_memory_mb !== undefined) {
          const memoryInfo = [];
          if (response.server_processing_time_s !== undefined) {
            memoryInfo.push(`Processing time: ${response.server_processing_time_s.toFixed(2)}s`);
          }
          if (response.server_memory_delta_mb !== undefined) {
            memoryInfo.push(`Memory delta: ${response.server_memory_delta_mb.toFixed(1)}MB`);
          }
          if (response.server_peak_memory_mb !== undefined) {
            memoryInfo.push(`Peak memory: ${response.server_peak_memory_mb.toFixed(1)}MB`);
          }
          if (memoryInfo.length > 0) {
            content.push({
              type: 'text',
              text: `\n---\nServer metrics: ${memoryInfo.join(', ')}`,
            });
          }
        }

        return { content };
      } catch (error) {
        throw this.formatError(error, 'crawl');
      }
    }
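The handler's inline-screenshot rule can be sketched as a small predicate (the helper name and constant binding are illustrative, not part of the server's exported API): the image is returned inline unless it was saved to disk AND its decoded size crosses the 800KB threshold.

```typescript
const INLINE_LIMIT_BYTES = 800 * 1024; // 800KB threshold used by the handler

// Hypothetical helper mirroring the handler's decision above.
// sizeBytes is the decoded (binary) size of the screenshot.
function shouldReturnInline(sizeBytes: number, savedFilePath?: string): boolean {
  return !savedFilePath || sizeBytes < INLINE_LIMIT_BYTES;
}

shouldReturnInline(1024 * 1024);                  // true: nothing saved, always inline
shouldReturnInline(1024 * 1024, '/tmp/shot.png'); // false: saved and over 800KB
shouldReturnInline(100 * 1024, '/tmp/shot.png');  // true: saved but under the limit
```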
  • Zod validation schema (CrawlSchema) for the 'crawl' tool input parameters. Defines all optional fields with types, descriptions, defaults, and custom refinements (e.g., js_only requires session_id, non-empty js_code array).
    export const CrawlSchema = z
      .object({
        url: z.string().url(),
        // Browser configuration
        browser_type: z.enum(['chromium', 'firefox', 'webkit']).optional(),
        viewport_width: z.number().optional(),
        viewport_height: z.number().optional(),
        user_agent: z.string().optional(),
        proxy_server: z.string().optional(),
        proxy_username: z.string().optional(),
        proxy_password: z.string().optional(),
        cookies: z
          .array(
            z.object({
              name: z.string(),
              value: z.string(),
              domain: z.string(),
              path: z.string().optional(),
            }),
          )
          .optional(),
        headers: z.record(z.string()).optional(),
        extra_args: z.array(z.string()).optional(),
        // Content filtering
        word_count_threshold: z.number().optional(),
        excluded_tags: z.array(z.string()).optional(),
        excluded_selector: z.string().optional(),
        remove_overlay_elements: z.boolean().optional(),
        only_text: z.boolean().optional(),
        remove_forms: z.boolean().optional(),
        keep_data_attributes: z.boolean().optional(),
        // JavaScript execution
        js_code: JsCodeSchema.optional(),
        js_only: z.boolean().optional(),
        wait_for: z.string().optional(),
        wait_for_timeout: z.number().optional(),
        // Page navigation & timing
        wait_until: z.enum(['domcontentloaded', 'networkidle', 'load']).optional(),
        page_timeout: z.number().optional(),
        wait_for_images: z.boolean().optional(),
        ignore_body_visibility: z.boolean().optional(),
        // Dynamic content
        delay_before_scroll: z.number().optional(),
        scroll_delay: z.number().optional(),
        scan_full_page: z.boolean().optional(),
        virtual_scroll_config: VirtualScrollConfigSchema.optional(),
        // Content processing
        process_iframes: z.boolean().optional(),
        exclude_external_links: z.boolean().optional(),
        // Media handling
        screenshot: z.boolean().optional(),
        screenshot_wait_for: z.number().optional(),
        screenshot_directory: z
          .string()
          .optional()
          .describe('Local directory to save screenshot file when screenshot=true'),
        pdf: z.boolean().optional(),
        capture_mhtml: z.boolean().optional(),
        image_description_min_word_threshold: z.number().optional(),
        image_score_threshold: z.number().optional(),
        exclude_external_images: z.boolean().optional(),
        // Link filtering
        exclude_social_media_links: z.boolean().optional(),
        exclude_domains: z.array(z.string()).optional(),
        // Page interaction
        simulate_user: z.boolean().optional(),
        override_navigator: z.boolean().optional(),
        magic: z.boolean().optional(),
        // Session and cache
        session_id: z.string().optional(),
        cache_mode: z.enum(['ENABLED', 'BYPASS', 'DISABLED']).optional(),
        // Performance options
        timeout: z.number().optional(),
        verbose: z.boolean().optional(),
        // Debug
        log_console: z.boolean().optional(),
        // New parameters from 0.7.3/0.7.4
        delay_before_return_html: z.number().optional(),
        css_selector: z.string().optional(),
        include_links: z.boolean().optional(),
        resolve_absolute_urls: z.boolean().optional(),
      })
      .refine(
        (data) => {
          // js_only is for subsequent calls in same session, not first call
          // Using it incorrectly causes server errors
          if (data.js_only && !data.session_id) {
            return false;
          }
          return true;
        },
        {
          message:
            "Error: js_only requires session_id (it's for continuing existing sessions).\n" +
            'For first call with js_code, use: {js_code: [...], screenshot: true}\n' +
            'For multi-step: First {js_code: [...], session_id: "x"}, then {js_only: true, session_id: "x"}',
        },
      )
      .refine(
        (data) => {
          // Empty js_code array is not allowed
          if (Array.isArray(data.js_code) && data.js_code.length === 0) {
            return false;
          }
          return true;
        },
        {
          message:
            'Error: js_code array cannot be empty. Either provide JavaScript code to execute or remove the js_code parameter entirely.',
        },
      );
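The two refinements boil down to a pair of checks that can be expressed in plain TypeScript (the helper and interface below are illustrative, not exported by the package):

```typescript
// Minimal sketch of the schema's two custom refinements.
interface CrawlArgs {
  js_code?: string | string[];
  js_only?: boolean;
  session_id?: string;
}

function refinementErrors(args: CrawlArgs): string[] {
  const errors: string[] = [];
  // js_only is only valid when continuing an existing session
  if (args.js_only && !args.session_id) {
    errors.push('js_only requires session_id');
  }
  // An empty js_code array is rejected outright
  if (Array.isArray(args.js_code) && args.js_code.length === 0) {
    errors.push('js_code array cannot be empty');
  }
  return errors;
}

refinementErrors({ js_only: true });                             // returns ['js_only requires session_id']
refinementErrors({ js_code: [] });                               // returns ['js_code array cannot be empty']
refinementErrors({ js_code: ['1+1'], session_id: 's-1', js_only: true }); // returns []
```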
  • src/server.ts:880-883 (registration)
    Registration of the 'crawl' tool handler in the MCP server's CallToolRequestSchema handler switch statement. Validates args using CrawlSchema and delegates to CrawlHandlers.crawl().
    case 'crawl':
      return await this.validateAndExecute('crawl', args, CrawlSchema, async (validatedArgs) =>
        this.crawlHandlers.crawl(validatedArgs),
      );
  • src/server.ts:86-86 (registration)
    Instantiation of CrawlHandlers instance used for all crawl-related tools including 'crawl'.
    this.crawlHandlers = new CrawlHandlers(this.service, this.axiosClient, this.sessions);
  • Low-level service method that makes the HTTP POST /crawl request to the Crawl4AI backend, handling config transformation and error handling. Called by the 'crawl' handler.
    async crawl(options: AdvancedCrawlConfig): Promise<CrawlEndpointResponse> {
      // Validate JS code if present
      if (options.crawler_config?.js_code) {
        const scripts = Array.isArray(options.crawler_config.js_code)
          ? options.crawler_config.js_code
          : [options.crawler_config.js_code];
        for (const script of scripts) {
          if (!validateJavaScriptCode(script)) {
            throw new Error(
              'Invalid JavaScript: Contains HTML entities (&quot;), literal \\n outside strings, or HTML tags. Use proper JS syntax with real quotes and newlines.',
            );
          }
        }
      }

      // Server only accepts urls array, not url string
      const urls = options.url ? [options.url] : options.urls || [];
      const requestBody: CrawlEndpointOptions & {
        extraction_strategy?: unknown;
        table_extraction_strategy?: unknown;
        markdown_generator_options?: unknown;
      } = {
        urls,
        browser_config: options.browser_config,
        crawler_config: options.crawler_config || {}, // Always include crawler_config, even if empty
      };

      // Add extraction strategy passthrough fields if present
      if (options.extraction_strategy) {
        requestBody.extraction_strategy = options.extraction_strategy;
      }
      if (options.table_extraction_strategy) {
        requestBody.table_extraction_strategy = options.table_extraction_strategy;
      }
      if (options.markdown_generator_options) {
        requestBody.markdown_generator_options = options.markdown_generator_options;
      }

      try {
        const response = await this.axiosClient.post('/crawl', requestBody);
        return response.data;
      } catch (error) {
        return handleAxiosError(error);
      }
    }
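Based on the service method above, the POST /crawl body for a simple stateless call would be shaped roughly like this (field values are illustrative):

```json
{
  "urls": ["https://example.com"],
  "browser_config": { "headless": true },
  "crawler_config": { "cache_mode": "bypass", "screenshot": true }
}
```

Note that a single url option is wrapped into the urls array, and crawler_config is always present even when empty.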
