114,688 tools. Last updated 2026-04-22 02:11
- List and retrieve pages from a ClickUp document, including nested subpages and optional content, to organize and access documentation structure efficiently. (Apache 2.0)
- Conduct comprehensive web research by crawling, searching, and analyzing content to gather detailed information on any topic. (MIT)
- Conduct deep web research on any query using crawling, search, and AI analysis to gather comprehensive information from multiple sources. (MIT)
- Extract content from multiple web pages simultaneously by crawling specified URLs, with options to retrieve main text and links. (MIT)
- Submit URLs to Google Indexing API for fast crawling and indexing. Use to notify Google about new, updated, or deleted web pages. (MIT)
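One of the entries above submits URLs to the Google Indexing API. A minimal sketch of the notification such a tool would send — the publish endpoint and body shape follow Google's documented API, while authentication is assumed and omitted:

```python
import json

# Documented publish endpoint of the Google Indexing API.
INDEXING_ENDPOINT = "https://indexing.googleapis.com/v3/urlNotifications:publish"

def build_notification(url: str, deleted: bool = False) -> dict:
    """Build the JSON body for a URL notification.

    "URL_UPDATED" covers both new and updated pages;
    "URL_DELETED" tells Google the page was removed.
    """
    return {
        "url": url,
        "type": "URL_DELETED" if deleted else "URL_UPDATED",
    }

body = build_notification("https://example.com/new-page")
print(json.dumps(body))
```

An actual submission would POST this body to the endpoint with an OAuth 2.0 bearer token scoped for the Indexing API; that token-handling step is left out here.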
Matching MCP Servers

EdgeOne Pages MCP (official)

- Security: A · License: A · Quality: B. A service that enables rapid deployment of HTML content to EdgeOne Pages and automatically generates publicly accessible URLs for the deployed content. (MIT)
- Security: A · License: A · Quality: B. Enables deployment of HTML content, folders, and full-stack projects to EdgeOne Pages to generate publicly accessible URLs. It utilizes EdgeOne Pages Functions and KV storage for high-performance edge delivery of web applications. (MIT)
Matching MCP Connectors
MCP server for SEO and web analysis data including keyword rankings, backlink profiles, site audits, and traffic analytics for AI agents.
Hire real humans for tasks agents can't do alone. 36 tools for the full hiring lifecycle.
- Search the web using DuckDuckGo to find current information, articles, documentation, and web content. Returns titles, URLs, and snippets for research and fact-finding.
- Identify broken links and redirects on web pages to maintain website integrity and improve user experience. (MIT)
- Crawl website pages to generate a sitemap by specifying URL, crawl depth, page limits, and external link inclusion for SEO and site structure analysis. (MIT)
- Discover website URLs by crawling pages or reading sitemap.xml to map site structure and content. (AGPL 3.0)
- Activate crawling for a specific document to make its content searchable and accessible within the document management system.
- Stop web crawlers from indexing a specific document by disabling its crawling functionality.
- Extract content from web pages by crawling URLs to retrieve text, links, and images for research and analysis. (MIT)
- Start multi-page web crawling to extract structured data with AI or convert content to markdown from a starting URL. (MIT)
- Create a new analysis suite and dispatch an AI crawling agent to automatically discover and map web application structure by navigating from a starting URL. (MIT)
- Extract website content into clean Markdown or text files by crawling pages and removing navigation, scripts, and boilerplate. Build searchable archives for research and data analysis. (MIT)
- Search the web and fetch top result pages as markdown in one call to reduce token usage and improve efficiency for research and content extraction tasks.
- Crawl multiple URLs concurrently to process URL lists, compare pages, or extract bulk web data efficiently with parallel requests. (MIT)
- Capture web page screenshots for verifying updates, with automatic tiling for full pages and optimized processing for CLI tools. (MIT)
- Conduct deep web research on complex queries using intelligent crawling, search, and LLM analysis to generate comprehensive insights from multiple sources. (MIT)
- Analyze complex research questions by crawling multiple web sources and generating comprehensive LLM-based analysis. (MIT)
- Extract sanitized HTML from web pages to analyze structure, identify form fields, and plan automation selectors for web crawling operations. (MIT)
- Search the web with AI-powered results for research, fact-checking, and finding current information. Returns structured data with titles, URLs, and snippets.
- Extract clean content from multiple web pages simultaneously to compare information across sources or gather data from several pages at once. (Apache 2.0)
- Extract and analyze hyperlinks from web pages, organizing URLs, anchor text, and contextual information into a structured format. Supports site mapping, SEO analysis, broken link checking, and targeted crawling preparation. Handles relative and absolute URLs with optional base URL and output limits. (MIT)
- Gracefully shut down the web server and disconnect all players to end a Dungeons & Dragons session managed through the DM20 Protocol.
- Analyze and compare content from two web pages using AI to identify similarities and differences.
- Extract main content from web pages and convert it to clean Markdown format, removing navigation menus and peripheral elements for focused reading. (MIT)
- Convert web pages to PDF documents with customizable page size, orientation, margins, and headers/footers. Generate reports, archive web content, or create printable documentation from any URL. (MIT)
- Extract clean content from web pages by removing ads and navigation elements. Use this tool to retrieve main content from multiple URLs for research or analysis.
- Analyze a website's robots.txt file to determine crawl permissions and ensure compliance with ethical web scraping practices. Provides insights into allowed and disallowed paths for crawling. (MIT)
- Extract heading hierarchy and document structure from web pages to analyze content organization and navigation. (MIT)
- Stop the web GUI server for managing Claude Code conversation sessions. (MIT)
- Analyze web page cookies to identify privacy risks and security vulnerabilities by examining cookie attributes and third-party tracking. (MIT)
- Discover and parse RSS/Atom feeds from web pages to extract structured content for monitoring or analysis. (MIT)
- Extract meta tags, title, description, and keywords from web pages to analyze content structure and SEO elements. (MIT)
- Crawl websites to discover endpoints, forms, and hidden URLs for security testing and vulnerability assessment in bug bounty programs.
- Search and retrieve web content using multiple search engines. Specify queries, categories, and time ranges to find relevant information from web pages. (MIT)
- Retrieve web page content and explore linked pages up to a defined depth to extract comprehensive documentation insights for LLMs. (MIT)
- Retrieve and process web pages for LLM context using a URL, with options to include screenshots or limit content length. Part of the Web Content MCP Server for enhanced data extraction. (MIT)
- Extract structured data from web pages using LLM prompts and JSON schemas. Supports cloud and self-hosted AI for web content analysis. (MIT)
- Extract social media links and metadata from web pages to identify platform connections and contact information for analysis or integration. (MIT)
- Identify security vulnerabilities in web pages by scanning for XSS, CSRF, header issues, form weaknesses, and cookie problems to enhance website protection. (MIT)
- Transform multiple web page URLs into Markdown format using AI to create LLM-friendly content. Streamline content extraction and conversion for web pages via ReviewWebsite API. (MIT)
- Extract structured data from web pages using AI, converting URLs into organized information with custom schemas and prompts. (MIT)
- Extract structured data from web pages using LLM prompts and JSON schemas to organize information from URLs. (MIT)
- Extract structured data from web pages by defining a schema and providing a URL, using AI to process and organize information from online sources. (MIT)
- Extract emails, phone numbers, and addresses from web pages to collect contact information for business or research purposes. (MIT)
- Extract form elements and structure from web pages to analyze input fields, buttons, and form layouts for web scraping and automation tasks. (MIT)
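Several entries in the list above check robots.txt before crawling. The core of that check fits in Python's standard-library `urllib.robotparser`; the sample robots.txt and user-agent name here are illustrative only:

```python
from urllib.robotparser import RobotFileParser

def crawl_allowed(robots_txt: str, user_agent: str, url: str) -> bool:
    """Return True if the given robots.txt text permits user_agent to fetch url."""
    parser = RobotFileParser()
    parser.parse(robots_txt.splitlines())  # parse rules from raw text, no network fetch
    return parser.can_fetch(user_agent, url)

# Illustrative robots.txt: all crawlers are barred from /private/.
ROBOTS = """\
User-agent: *
Disallow: /private/
"""

print(crawl_allowed(ROBOTS, "any-bot", "https://example.com/private/page"))  # False
print(crawl_allowed(ROBOTS, "any-bot", "https://example.com/public/page"))   # True
```

A real crawler would fetch `https://<host>/robots.txt` first and cache the parsed rules per host, then consult `can_fetch` before every request.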