
Crawl4AI MCP Server

post-tool-linter-hook.log (28.3 kB)
[2025-07-29T23:09:39.652Z] === POST-TOOL LINTER HOOK LOG ===
[2025-07-29T23:09:39.653Z] Date: 2025-07-29T23:09:39.653Z
[2025-07-29T23:09:39.653Z] Project: /Users/jeremyparker/Desktop/Claude Coding Projects/crawl4ai-mcp
[2025-07-29T23:09:39.653Z] Node Version: v23.9.0
[2025-07-29T23:09:39.653Z] Platform: darwin
[2025-07-29T23:09:39.653Z]
[2025-07-29T23:09:39.653Z] === HOOK EXECUTION START ===
[2025-07-29T23:09:39.653Z] Tool: Edit
[2025-07-29T23:09:39.653Z] Session ID: ecf8c84b-5cbf-48bb-b092-1762efbc44f2
[2025-07-29T23:09:39.653Z] Hook input: Data: {
  "session_id": "ecf8c84b-5cbf-48bb-b092-1762efbc44f2",
  "transcript_path": "/Users/jeremyparker/.claude/projects/-Users-jeremyparker-Desktop-Claude-Coding-Projects-crawl4ai-mcp/ecf8c84b-5cbf-48bb-b092-1762efbc44f2.jsonl",
  "cwd": "/Users/jeremyparker/Desktop/Claude Coding Projects/crawl4ai-mcp",
  "hook_event_name": "PostToolUse",
  "tool_name": "Edit",
  "tool_input": {
    "file_path": "/Users/jeremyparker/Desktop/Claude Coding Projects/crawl4ai-mcp/tests/test_tools.py",
    "old_string": "import pytest\nimport json\nfrom pathlib import Path\n\nfrom tests.mocks import (\n MockCrawlerRunConfig,\n create_mock_crawler,\n create_mock_extraction_strategy\n)",
    "new_string": "import pytest\nimport json\nfrom unittest.mock import patch, MagicMock\nfrom pathlib import Path\n\nimport fastmcp\nfrom crawl4ai_mcp_server import mcp\nfrom tests.mocks import (\n MockAsyncWebCrawler,\n MockJsonCssExtractionStrategy,\n MockCrawlerRunConfig,\n create_mock_crawler,\n create_mock_extraction_strategy\n)"
  },
  "tool_response": {
    "filePath": "/Users/jeremyparker/Desktop/Claude Coding Projects/crawl4ai-mcp/tests/test_tools.py",
    "oldString": "import pytest\nimport json\nfrom pathlib import Path\n\nfrom tests.mocks import (\n MockCrawlerRunConfig,\n create_mock_crawler,\n create_mock_extraction_strategy\n)",
    "newString": "import pytest\nimport json\nfrom unittest.mock import patch, MagicMock\nfrom pathlib import Path\n\nimport fastmcp\nfrom crawl4ai_mcp_server import mcp\nfrom tests.mocks import (\n MockAsyncWebCrawler,\n MockJsonCssExtractionStrategy,\n MockCrawlerRunConfig,\n create_mock_crawler,\n create_mock_extraction_strategy\n)",
    "originalFile": "\"\"\"\nUnit tests for all MCP tools in the Crawl4AI MCP Server.\n\nThis module tests server_status, get_page_structure, crawl_with_schema, \nand take_screenshot tools using in-memory FastMCP client.\n\"\"\"\n\nimport pytest\nimport json\nfrom pathlib import Path\n\nfrom tests.mocks import (\n MockCrawlerRunConfig,\n create_mock_crawler,\n create_mock_extraction_strategy\n)\n\n\nclass TestServerStatusTool:\n \"\"\"Test cases for the server_status tool.\"\"\"\n \n @pytest.mark.asyncio\n @pytest.mark.unit\n async def test_server_status_success(self):\n \"\"\"Test server_status tool returns correct status information.\"\"\"\n async with fastmcp.Client(mcp) as client:\n result = await client.call_tool(\"server_status\", {})\n \n assert result is not None\n assert hasattr(result, 'content')\n assert result.content is not None\n \n # Get the content (handle both list and single content formats)\n content = result.content[0] if isinstance(result.content, list) else result.content\n data = json.loads(content.text)\n \n # Verify required fields\n assert \"server_name\" in data\n assert \"version\" in data\n assert \"status\" in data\n assert \"transport\" in data\n assert \"working_directory\" in data\n assert \"capabilities\" in data\n assert \"dependencies\" in data\n assert \"message\" in data\n \n # Verify values\n assert data[\"server_name\"] == \"Crawl4AI-MCP-Server\"\n assert data[\"version\"] == \"1.0.0\"\n assert data[\"status\"] == \"operational\"\n assert data[\"transport\"] == \"stdio\"\n assert isinstance(data[\"capabilities\"], list)\n assert isinstance(data[\"dependencies\"], dict)\n \n # Verify capabilities\n expected_capabilities = [\n \"web_crawling\",\n \"content_extraction\", \n \"screenshot_capture\",\n \"schema_based_extraction\"\n ]\n for capability in expected_capabilities:\n assert capability in data[\"capabilities\"]\n \n # Verify dependencies\n assert \"crawl4ai\" in data[\"dependencies\"]\n assert \"fastmcp\" in data[\"dependencies\"]\n assert \"playwright\" in data[\"dependencies\"]\n \n @pytest.mark.asyncio\n @pytest.mark.unit\n async def test_server_status_working_directory(self, mcp_client):\n \"\"\"Test that server_status includes correct working directory.\"\"\"\n result = await mcp_client.call_tool(\"server_status\", {})\n content = result.content[0] if isinstance(result.content, list) else result.content\n data = json.loads(content.text)\n \n # Verify working directory is a valid path\n working_dir = data[\"working_directory\"]\n assert working_dir is not None\n assert len(working_dir) > 0\n \n # The working directory should be the current directory\n expected_cwd = str(Path.cwd())\n assert working_dir == expected_cwd\n\n\nclass TestGetPageStructureTool:\n \"\"\"Test cases for the get_page_structure tool.\"\"\"\n \n @pytest.mark.asyncio\n @pytest.mark.unit\n async def test_get_page_structure_html_format(self, mcp_client, crawl4ai_patches):\n \"\"\"Test get_page_structure with HTML format.\"\"\"\n # Configure mock crawler\n mock_crawler = create_mock_crawler(success=True)\n crawl4ai_patches['crawler'].return_value = mock_crawler\n \n result = await mcp_client.call_tool(\"get_page_structure\", {\n \"url\": \"https://example.com\",\n \"format\": \"html\"\n })\n \n assert result is not None\n content = result.content[0] if isinstance(result.content, list) else result.content\n html_content = content.text\n \n # Verify HTML content\n assert len(html_content) > 100\n assert \"<html>\" in html_content\n assert \"</html>\" in html_content\n assert \"<title>\" in html_content\n \n @pytest.mark.asyncio\n @pytest.mark.unit\n async def test_get_page_structure_markdown_format(self, mcp_client, crawl4ai_patches):\n \"\"\"Test get_page_structure with markdown format.\"\"\"\n mock_crawler = create_mock_crawler(success=True)\n crawl4ai_patches['crawler'].return_value = mock_crawler\n \n result = await mcp_client.call_tool(\"get_page_structure\", {\n \"url\": \"https://example.com\",\n \"format\": \"markdown\"\n })\n \n assert result is not None\n content = result.content[0] if isinstance(result.content, list) else result.content\n markdown_content = content.text\n \n # Verify markdown content\n assert len(markdown_content) > 50\n assert \"#\" in markdown_content # Markdown headers\n \n @pytest.mark.asyncio\n @pytest.mark.unit\n async def test_get_page_structure_default_format(self, mcp_client, crawl4ai_patches):\n \"\"\"Test get_page_structure with default (HTML) format.\"\"\"\n mock_crawler = create_mock_crawler(success=True)\n crawl4ai_patches['crawler'].return_value = mock_crawler\n \n result = await mcp_client.call_tool(\"get_page_structure\", {\n \"url\": \"https://example.com\"\n })\n \n assert result is not None\n content = result.content[0] if isinstance(result.content, list) else result.content\n html_content = content.text\n \n # Should default to HTML\n assert \"<html>\" in html_content\n \n @pytest.mark.asyncio\n @pytest.mark.unit\n async def test_get_page_structure_invalid_url(self, mcp_client, crawl4ai_patches):\n \"\"\"Test get_page_structure with invalid URL.\"\"\"\n mock_crawler = create_mock_crawler(success=False)\n crawl4ai_patches['crawler'].return_value = mock_crawler\n \n with pytest.raises(Exception):\n await mcp_client.call_tool(\"get_page_structure\", {\n \"url\": \"https://invalid-url-test.com\"\n })\n \n @pytest.mark.asyncio \n @pytest.mark.unit\n async def test_get_page_structure_invalid_format(self, mcp_client):\n \"\"\"Test get_page_structure with invalid format parameter.\"\"\"\n with pytest.raises(Exception):\n await mcp_client.call_tool(\"get_page_structure\", {\n \"url\": \"https://example.com\",\n \"format\": \"invalid_format\"\n })\n\n\nclass TestCrawlWithSchemaTool:\n \"\"\"Test cases for the crawl_with_schema tool.\"\"\"\n \n @pytest.mark.asyncio\n @pytest.mark.unit\n async def test_crawl_with_schema_valid_schema(self, mcp_client, crawl4ai_patches):\n \"\"\"Test crawl_with_schema with valid schema.\"\"\"\n # Configure mocks\n mock_crawler = create_mock_crawler(success=True)\n mock_strategy = create_mock_extraction_strategy(\n extracted_data=[{\"title\": \"Test Title\", \"price\": \"$19.99\"}]\n )\n \n crawl4ai_patches['crawler'].return_value = mock_crawler\n crawl4ai_patches['strategy'].return_value = mock_strategy\n \n schema = json.dumps({\n \"title\": \"h1\",\n \"price\": \".price\"\n })\n \n result = await mcp_client.call_tool(\"crawl_with_schema\", {\n \"url\": \"https://example.com\",\n \"extraction_schema\": schema\n })\n \n assert result is not None\n content = result.content[0] if isinstance(result.content, list) else result.content\n response = json.loads(content.text)\n \n # Verify response structure\n assert \"success\" in response\n assert \"extracted_data\" in response\n assert response[\"success\"] is True\n assert isinstance(response[\"extracted_data\"], list)\n assert len(response[\"extracted_data\"]) > 0\n \n # Verify extracted data\n extracted = response[\"extracted_data\"][0]\n assert \"title\" in extracted\n assert \"price\" in extracted\n \n @pytest.mark.asyncio\n @pytest.mark.unit\n async def test_crawl_with_schema_invalid_json(self, mcp_client):\n \"\"\"Test crawl_with_schema with invalid JSON schema.\"\"\"\n with pytest.raises(Exception):\n await mcp_client.call_tool(\"crawl_with_schema\", {\n \"url\": \"https://example.com\",\n \"extraction_schema\": \"invalid json {\"\n })\n \n @pytest.mark.asyncio\n @pytest.mark.unit\n async def test_crawl_with_schema_empty_schema(self, mcp_client, crawl4ai_patches):\n \"\"\"Test crawl_with_schema with empty schema.\"\"\"\n mock_crawler = create_mock_crawler(success=True)\n mock_strategy = create_mock_extraction_strategy(extracted_data=[])\n \n crawl4ai_patches['crawler'].return_value = mock_crawler\n crawl4ai_patches['strategy'].return_value = mock_strategy\n \n result = await mcp_client.call_tool(\"crawl_with_schema\", {\n \"url\": \"https://example.com\",\n \"extraction_schema\": \"{}\"\n })\n \n assert result is not None\n content = result.content[0] if isinstance(result.content, list) else result.content\n response = json.loads(content.text)\n \n assert \"success\" in response\n assert \"extracted_data\" in response\n # Empty schema should still succeed but return empty data\n assert isinstance(response[\"extracted_data\"], list)\n \n @pytest.mark.asyncio\n @pytest.mark.unit\n async def test_crawl_with_schema_crawl_failure(self, mcp_client, crawl4ai_patches):\n \"\"\"Test crawl_with_schema when crawling fails.\"\"\"\n mock_crawler = create_mock_crawler(success=False)\n crawl4ai_patches['crawler'].return_value = mock_crawler\n \n schema = json.dumps({\"title\": \"h1\"})\n \n with pytest.raises(Exception):\n await mcp_client.call_tool(\"crawl_with_schema\", {\n \"url\": \"https://invalid-url-test.com\",\n \"extraction_schema\": schema\n })\n \n @pytest.mark.asyncio\n @pytest.mark.unit\n async def test_crawl_with_schema_complex_schema(self, mcp_client, crawl4ai_patches):\n \"\"\"Test crawl_with_schema with complex schema.\"\"\"\n mock_crawler = create_mock_crawler(success=True)\n complex_data = [{\n \"title\": \"Product Title\",\n \"price\": \"$29.99\", \n \"description\": \"Product description\",\n \"features\": [\"Feature 1\", \"Feature 2\"],\n \"link\": \"https://example.com/product\"\n }]\n mock_strategy = create_mock_extraction_strategy(extracted_data=complex_data)\n \n crawl4ai_patches['crawler'].return_value = mock_crawler\n crawl4ai_patches['strategy'].return_value = mock_strategy\n \n complex_schema = json.dumps({\n \"title\": \"h1\",\n \"price\": \".price\",\n \"description\": \".description\", \n \"features\": \".features li\",\n \"link\": \"a.product-link\"\n })\n \n result = await mcp_client.call_tool(\"crawl_with_schema\", {\n \"url\": \"https://example.com\",\n \"extraction_schema\": complex_schema\n })\n \n assert result is not None\n content = result.content[0] if isinstance(result.content, list) else result.content\n response = json.loads(content.text)\n \n assert response[\"success\"] is True\n extracted = response[\"extracted_data\"][0]\n assert all(key in extracted for key in [\"title\", \"price\", \"description\", \"features\", \"link\"])\n\n\nclass TestTakeScreenshotTool:\n \"\"\"Test cases for the take_screenshot tool.\"\"\"\n \n @pytest.mark.asyncio\n @pytest.mark.unit\n async def test_take_screenshot_success(self, mcp_client, crawl4ai_patches):\n \"\"\"Test take_screenshot tool successful capture.\"\"\"\n mock_crawler = create_mock_crawler(success=True, screenshot_enabled=True)\n crawl4ai_patches['crawler'].return_value = mock_crawler\n crawl4ai_patches['config'].return_value = MockCrawlerRunConfig(screenshot=True)\n \n result = await mcp_client.call_tool(\"take_screenshot\", {\n \"url\": \"https://example.com\"\n })\n \n assert result is not None\n content = result.content[0] if isinstance(result.content, list) else result.content\n response = json.loads(content.text)\n \n # Verify response structure\n assert \"success\" in response\n assert \"screenshot_data\" in response\n assert \"format\" in response\n assert \"url\" in response\n \n assert response[\"success\"] is True\n assert response[\"url\"] == \"https://example.com\"\n assert response[\"format\"] == \"base64\"\n assert len(response[\"screenshot_data\"]) > 50 # Base64 data should be substantial\n \n @pytest.mark.asyncio\n @pytest.mark.unit\n async def test_take_screenshot_invalid_url(self, mcp_client, crawl4ai_patches):\n \"\"\"Test take_screenshot with invalid URL.\"\"\"\n mock_crawler = create_mock_crawler(success=False)\n crawl4ai_patches['crawler'].return_value = mock_crawler\n \n with pytest.raises(Exception):\n await mcp_client.call_tool(\"take_screenshot\", {\n \"url\": \"https://invalid-url-test.com\"\n })\n \n @pytest.mark.asyncio\n @pytest.mark.unit\n async def test_take_screenshot_no_screenshot_data(self, mcp_client, crawl4ai_patches):\n \"\"\"Test take_screenshot when no screenshot data is returned.\"\"\"\n # Create a mock crawler that succeeds but returns no screenshot\n mock_crawler = create_mock_crawler(success=True, screenshot_enabled=False)\n crawl4ai_patches['crawler'].return_value = mock_crawler\n crawl4ai_patches['config'].return_value = MockCrawlerRunConfig(screenshot=True)\n \n with pytest.raises(Exception):\n await mcp_client.call_tool(\"take_screenshot\", {\n \"url\": \"https://example.com\"\n })\n\n\nclass TestToolIntegration:\n \"\"\"Integration tests for tool interactions.\"\"\"\n \n @pytest.mark.asyncio\n @pytest.mark.integration\n async def test_all_tools_available(self, mcp_client):\n \"\"\"Test that all expected tools are available.\"\"\"\n # This test verifies tool registration without calling them\n tools = await mcp_client.list_tools()\n \n tool_names = [tool.name for tool in tools]\n expected_tools = [\n \"server_status\",\n \"get_page_structure\", \n \"crawl_with_schema\",\n \"take_screenshot\"\n ]\n \n for tool in expected_tools:\n assert tool in tool_names\n \n @pytest.mark.asyncio\n @pytest.mark.integration\n async def test_tools_have_proper_schemas(self, mcp_client):\n \"\"\"Test that all tools have proper input schemas.\"\"\"\n tools = await mcp_client.list_tools()\n \n for tool in tools:\n assert hasattr(tool, 'inputSchema')\n assert tool.inputSchema is not None\n assert 'type' in tool.inputSchema\n assert tool.inputSchema['type'] == 'object'\n \n @pytest.mark.asyncio\n @pytest.mark.unit\n async def test_sequential_tool_calls(self, mcp_client, crawl4ai_patches):\n \"\"\"Test calling multiple tools in sequence.\"\"\"\n # Configure mocks\n mock_crawler = create_mock_crawler(success=True, screenshot_enabled=True)\n crawl4ai_patches['crawler'].return_value = mock_crawler\n crawl4ai_patches['config'].return_value = MockCrawlerRunConfig(screenshot=True)\n \n # Call server_status first\n status_result = await mcp_client.call_tool(\"server_status\", {})\n assert status_result is not None\n \n # Then get page structure\n page_result = await mcp_client.call_tool(\"get_page_structure\", {\n \"url\": \"https://example.com\"\n })\n assert page_result is not None\n \n # Finally take screenshot\n screenshot_result = await mcp_client.call_tool(\"take_screenshot\", {\n \"url\": \"https://example.com\"\n })\n assert screenshot_result is not None\n \n # All should succeed independently\n assert all(result is not None for result in [status_result, page_result, screenshot_result])\n\n\n# Additional parametrized tests\n@pytest.mark.asyncio\n@pytest.mark.unit\n@pytest.mark.parametrize(\"format_type\", [\"html\", \"markdown\"])\nasync def test_get_page_structure_formats(mcp_client, crawl4ai_patches, format_type):\n \"\"\"Test get_page_structure with different formats.\"\"\"\n mock_crawler = create_mock_crawler(success=True)\n crawl4ai_patches['crawler'].return_value = mock_crawler\n \n result = await mcp_client.call_tool(\"get_page_structure\", {\n \"url\": \"https://example.com\",\n \"format\": format_type\n })\n \n assert result is not None\n content = result.content[0] if isinstance(result.content, list) else result.content\n assert len(content.text) > 50\n\n\n@pytest.mark.asyncio\n@pytest.mark.unit\n@pytest.mark.parametrize(\"url\", [\n \"https://example.com\",\n \"https://httpbin.org/html\",\n \"https://www.google.com\"\n])\nasync def test_tools_with_different_urls(mcp_client, crawl4ai_patches, url):\n \"\"\"Test tools with different valid URLs.\"\"\"\n mock_crawler = create_mock_crawler(success=True)\n crawl4ai_patches['crawler'].return_value = mock_crawler\n \n # Test get_page_structure\n result = await mcp_client.call_tool(\"get_page_structure\", {\"url\": url})\n assert result is not None\n \n # Test crawl_with_schema\n schema = json.dumps({\"title\": \"h1\"})\n result = await mcp_client.call_tool(\"crawl_with_schema\", {\n \"url\": url,\n \"extraction_schema\": schema\n })\n assert result is not None",
    "structuredPatch": [
      {
        "oldStart": 7,
        "oldLines": 9,
        "newStart": 7,
        "newLines": 14,
        "lines": [
          " ",
          " import pytest",
          " import json",
          "+from unittest.mock import patch, MagicMock",
          " from pathlib import Path",
          " ",
          "+import fastmcp",
          "+from crawl4ai_mcp_server import mcp",
          " from tests.mocks import (",
          "+ MockAsyncWebCrawler,",
          "+ MockJsonCssExtractionStrategy,",
          " MockCrawlerRunConfig,",
          " create_mock_crawler,",
          " create_mock_extraction_strategy"
        ]
      }
    ],
    "userModified": false,
    "replaceAll": false
  }
}
[2025-07-29T23:09:39.653Z] Extracting file paths from tool: Edit
[2025-07-29T23:09:39.653Z] Found file path: /Users/jeremyparker/Desktop/Claude Coding Projects/crawl4ai-mcp/tests/test_tools.py
[2025-07-29T23:09:39.654Z] Checking path existence: /Users/jeremyparker/Desktop/Claude Coding Projects/crawl4ai-mcp/tests/test_tools.py - EXISTS
[2025-07-29T23:09:39.654Z] Total paths found before filtering: 1
[2025-07-29T23:09:39.654Z] Filtering 1 files with ignore rules
[2025-07-29T23:09:39.654Z] Getting file type for extension: .py
[2025-07-29T23:09:39.654Z] File type detected: python
[2025-07-29T23:09:39.654Z] Loading ignore patterns for python linter
[2025-07-29T23:09:39.654Z] Reading ignore file: /Users/jeremyparker/Desktop/Claude Coding Projects/crawl4ai-mcp/.ruffignore
[2025-07-29T23:09:39.654Z] Ignore file not found: /Users/jeremyparker/Desktop/Claude Coding Projects/crawl4ai-mcp/.ruffignore
[2025-07-29T23:09:39.654Z] Reading ignore file: /Users/jeremyparker/Desktop/Claude Coding Projects/crawl4ai-mcp/.gitignore
[2025-07-29T23:09:39.654Z] Ignore file not found: /Users/jeremyparker/Desktop/Claude Coding Projects/crawl4ai-mcp/.gitignore
[2025-07-29T23:09:39.654Z] Total ignore patterns loaded for python: 0
[2025-07-29T23:09:39.654Z] Filtered 1 files down to 1 files
[2025-07-29T23:09:39.654Z] Total paths after ignore filtering: 1
[2025-07-29T23:09:39.654Z] Starting auto-fix and linting for 1 file(s)...
[2025-07-29T23:09:39.654Z] Detecting all project types for: /Users/jeremyparker/Desktop/Claude Coding Projects/crawl4ai-mcp
[2025-07-29T23:09:39.654Z] Validating python config file: /Users/jeremyparker/Desktop/Claude Coding Projects/crawl4ai-mcp/pyproject.toml
[2025-07-29T23:09:39.656Z] pyproject.toml validation: VALID
[2025-07-29T23:09:39.656Z] Validating python config file: /Users/jeremyparker/Desktop/Claude Coding Projects/crawl4ai-mcp/requirements.txt
[2025-07-29T23:09:39.656Z] requirements.txt validation: VALID (existence check)
[2025-07-29T23:09:39.656Z] Validating javascript config file: /Users/jeremyparker/Desktop/Claude Coding Projects/crawl4ai-mcp/package.json
[2025-07-29T23:09:39.657Z] package.json validation: VALID
[2025-07-29T23:09:39.657Z] All detected project types: python, javascript
[2025-07-29T23:09:39.657Z] Getting file type for extension: .py
[2025-07-29T23:09:39.657Z] File type detected: python
[2025-07-29T23:09:39.657Z] Hybrid mode analysis: edited file types [python], project types [python, javascript]
[2025-07-29T23:09:39.657Z] All file types match project: true
[2025-07-29T23:09:39.657Z] === RUNNING AUTO-FIX BEFORE LINTING ===
[2025-07-29T23:09:39.657Z] Using project-wide auto-fix mode (1 files, types: python, javascript)
[2025-07-29T23:09:39.658Z] --- Auto-fixing entire project: /Users/jeremyparker/Desktop/Claude Coding Projects/crawl4ai-mcp ---
[2025-07-29T23:09:39.658Z] Linter types: python, javascript
[2025-07-29T23:09:39.658Z] Running Python project auto-fix (ruff --fix) on: /Users/jeremyparker/Desktop/Claude Coding Projects/crawl4ai-mcp
[2025-07-29T23:09:39.658Z] Executing project auto-fix command: ruff check . --fix --respect-gitignore
[2025-07-29T23:09:40.736Z] Ruff project auto-fix executed successfully
[2025-07-29T23:09:40.738Z] Running JavaScript project auto-fix (eslint --fix) on: /Users/jeremyparker/Desktop/Claude Coding Projects/crawl4ai-mcp
[2025-07-29T23:09:40.740Z] Executing ESLint project auto-fix command: "/Users/jeremyparker/Desktop/Claude Coding Projects/crawl4ai-mcp/node_modules/.bin/eslint" . --fix --ignore-pattern "**/*.json" --ignore-pattern "**/*.md" --ignore-pattern "**/*.txt" --ignore-pattern "**/*.yml" --ignore-pattern "**/*.yaml" --ignore-pattern "**/*.xml" --ignore-pattern "**/*.csv" --ignore-pattern "**/*.log"
[2025-07-29T23:09:46.732Z] ESLint project auto-fix executed successfully
[2025-07-29T23:09:46.732Z] Project auto-fix completed with 2 result(s)
[2025-07-29T23:09:46.733Z] === AUTO-FIX RESULTS ===
[2025-07-29T23:09:46.735Z] Auto-fix 1: /Users/jeremyparker/Desktop/Claude Coding Projects/crawl4ai-mcp
[2025-07-29T23:09:46.735Z]   Success: true
[2025-07-29T23:09:46.735Z]   Fixed: true
[2025-07-29T23:09:46.735Z]   Linter: ruff
[2025-07-29T23:09:46.735Z] Auto-fix 2: /Users/jeremyparker/Desktop/Claude Coding Projects/crawl4ai-mcp
[2025-07-29T23:09:46.735Z]   Success: true
[2025-07-29T23:09:46.735Z]   Fixed: true
[2025-07-29T23:09:46.735Z]   Linter: eslint
[2025-07-29T23:09:46.735Z] Using project-wide linting mode (1 files, types: python, javascript)
[2025-07-29T23:09:46.736Z] --- Linting entire project: /Users/jeremyparker/Desktop/Claude Coding Projects/crawl4ai-mcp ---
[2025-07-29T23:09:46.736Z] Linter types: python, javascript
[2025-07-29T23:09:46.736Z] Running Python project linter (ruff) on: /Users/jeremyparker/Desktop/Claude Coding Projects/crawl4ai-mcp
[2025-07-29T23:09:46.736Z] Executing project command: ruff check . --output-format json --respect-gitignore
[2025-07-29T23:09:47.753Z] Ruff project linting executed successfully, parsing output...
[2025-07-29T23:09:47.753Z] Found 0 total project violations
[2025-07-29T23:09:47.754Z] Running JavaScript project linter (eslint) on: /Users/jeremyparker/Desktop/Claude Coding Projects/crawl4ai-mcp
[2025-07-29T23:09:47.754Z] Executing ESLint project command: "/Users/jeremyparker/Desktop/Claude Coding Projects/crawl4ai-mcp/node_modules/.bin/eslint" . --format json --ignore-pattern "**/*.json" --ignore-pattern "**/*.md" --ignore-pattern "**/*.txt" --ignore-pattern "**/*.yml" --ignore-pattern "**/*.yaml" --ignore-pattern "**/*.xml" --ignore-pattern "**/*.csv" --ignore-pattern "**/*.log"
[2025-07-29T23:09:48.464Z] ESLint project linting executed successfully, parsing output...
[2025-07-29T23:09:48.464Z] Found 1 files in project ESLint results
[2025-07-29T23:09:48.464Z] Project linting completed with 2 result(s)
[2025-07-29T23:09:48.464Z] === LINTING RESULTS ===
[2025-07-29T23:09:48.464Z] File 1: /Users/jeremyparker/Desktop/Claude Coding Projects/crawl4ai-mcp
[2025-07-29T23:09:48.464Z]   Success: true
[2025-07-29T23:09:48.464Z]   Linter: ruff
[2025-07-29T23:09:48.464Z]   Violations: 0
[2025-07-29T23:09:48.464Z] File 2: /Users/jeremyparker/Desktop/Claude Coding Projects/crawl4ai-mcp
[2025-07-29T23:09:48.464Z]   Success: true
[2025-07-29T23:09:48.464Z]   Linter: eslint
[2025-07-29T23:09:48.464Z]   Violations: 0
[2025-07-29T23:09:48.464Z] Execution failures: 0
[2025-07-29T23:09:48.464Z] Has linting violations: false
[2025-07-29T23:09:48.464Z] No linting issues found or all skipped
[2025-07-29T23:09:48.464Z] Exiting with code 0 - success
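
The hook implementation itself is not shown on this page, only its log. As a rough illustration of the pipeline the log records (read the hook payload from stdin, pull the edited file path out of tool_input, run an auto-fix pass, then lint with machine-readable output and choose an exit code), here is a minimal Python sketch. The real hook is a Node.js script; apart from the two ruff commands quoted verbatim from the log, every name and detail below is an assumption:

    #!/usr/bin/env python3
    # Illustrative sketch only -- the logged hook is a Node.js script,
    # and this Python version is a hypothetical reconstruction.
    import json
    import subprocess
    import sys
    from pathlib import Path


    def main() -> int:
        # Claude Code delivers the PostToolUse payload as JSON on stdin,
        # shaped like the "Hook input" object in the log above.
        payload = json.load(sys.stdin)
        file_path = (payload.get("tool_input") or {}).get("file_path", "")
        if Path(file_path).suffix != ".py":
            return 0  # nothing for the Python linter to do

        project_dir = payload.get("cwd", ".")

        # Auto-fix pass, mirroring the command recorded in the log.
        subprocess.run(
            ["ruff", "check", ".", "--fix", "--respect-gitignore"],
            cwd=project_dir,
            capture_output=True,
            check=False,
        )

        # Lint pass with machine-readable output, also as recorded in the log.
        result = subprocess.run(
            ["ruff", "check", ".", "--output-format", "json", "--respect-gitignore"],
            cwd=project_dir,
            capture_output=True,
            text=True,
            check=False,
        )
        violations = json.loads(result.stdout or "[]")
        print(f"Found {len(violations)} total project violations", file=sys.stderr)

        # The log ends with "Exiting with code 0 - success"; a non-zero
        # code is how a hook would surface remaining violations.
        return 0 if not violations else 1


    if __name__ == "__main__":
        sys.exit(main())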
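
The edit recorded in the hook input wires the test module to FastMCP's in-memory client (fastmcp.Client(mcp)), which exercises the server in-process rather than over the stdio transport. A minimal standalone sketch of that pattern, assuming pytest with the pytest-asyncio plugin and borrowing the assertions visible in the file above:

    import json

    import fastmcp
    import pytest

    from crawl4ai_mcp_server import mcp  # the server instance under test


    @pytest.mark.asyncio
    async def test_server_status_is_operational():
        # In-memory transport: the client talks to the FastMCP server
        # object directly; no stdio subprocess is launched.
        async with fastmcp.Client(mcp) as client:
            result = await client.call_tool("server_status", {})
            # Tool results may arrive as a list of content blocks or a single block.
            content = result.content[0] if isinstance(result.content, list) else result.content
            data = json.loads(content.text)
            assert data["status"] == "operational"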

MCP directory API

We provide all the information about MCP servers via our MCP directory API. For example, to fetch this server's entry:

curl -X GET 'https://glama.ai/api/mcp/v1/servers/Nexus-Digital-Automations/crawl4ai-mcp'
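
The same endpoint can be queried from code. A minimal Python sketch using the third-party requests library; the response schema isn't documented on this page, so the sketch simply prints the parsed JSON:

    import requests  # third-party: pip install requests

    # Endpoint taken from the curl example above.
    URL = "https://glama.ai/api/mcp/v1/servers/Nexus-Digital-Automations/crawl4ai-mcp"

    response = requests.get(URL, timeout=10)
    response.raise_for_status()
    print(response.json())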

If you have feedback or need assistance with the MCP directory API, please join our Discord server.