Analyzes a website's robots.txt file to determine crawl permissions and support compliance with ethical web scraping practices. Reports which paths are allowed or disallowed for a given user agent.
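As an illustration of the kind of permission check such a tool performs, here is a minimal sketch using Python's standard `urllib.robotparser`; the site URL and user-agent string are hypothetical placeholders, not details from this server.

```python
# Minimal sketch: checking crawl permissions with the standard library.
# The site URL and user agent below are illustrative placeholders.
from urllib.robotparser import RobotFileParser

parser = RobotFileParser()
parser.set_url("https://example.com/robots.txt")  # hypothetical site
parser.read()  # fetch and parse the robots.txt file

# can_fetch() answers: may this user agent crawl this path?
for path in ("https://example.com/", "https://example.com/admin/"):
    allowed = parser.can_fetch("MyCrawler/1.0", path)
    print(f"{path} -> {'allowed' if allowed else 'disallowed'}")

# A declared Crawl-delay is also exposed (returns None if absent).
print("crawl delay:", parser.crawl_delay("MyCrawler/1.0"))
```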
A Model Context Protocol server that allows LLMs to interact with web content through standardized tools, currently supporting web scraping.
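A scraping tool exposed over MCP might look like the following sketch, assuming the official `mcp` Python SDK's FastMCP helper; the server name, tool name, and the use of `requests` with BeautifulSoup are illustrative choices, not this server's actual implementation.

```python
# Sketch of an MCP server exposing one scraping tool, assuming the
# `mcp` Python SDK (FastMCP). Tool internals here are hypothetical.
import requests
from bs4 import BeautifulSoup
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("web-tools")  # illustrative server name

@mcp.tool()
def scrape(url: str) -> str:
    """Fetch a page and return its visible text."""
    resp = requests.get(url, timeout=10)
    resp.raise_for_status()
    soup = BeautifulSoup(resp.text, "html.parser")
    return soup.get_text(separator="\n", strip=True)

if __name__ == "__main__":
    mcp.run()  # serves the tool over stdio by default
```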
Provides local web search and content fetching capabilities for AI assistants, enabling them to search DuckDuckGo and extract clean text from web pages. All requests originate from the user's machine, giving direct control over network traffic without routing through external proxy services.
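A minimal sketch of that search-then-fetch flow follows, assuming the third-party `duckduckgo_search` and `beautifulsoup4` packages; the query and result handling are illustrative, not this server's actual API.

```python
# Sketch: local DuckDuckGo search plus clean-text extraction, assuming
# the duckduckgo_search and beautifulsoup4 packages. Query is made up.
import requests
from bs4 import BeautifulSoup
from duckduckgo_search import DDGS

# The search request runs directly from this machine; no intermediary.
results = DDGS().text("model context protocol", max_results=3)
for hit in results:
    print(hit["title"], "->", hit["href"])

# Fetch the first hit and strip markup down to readable text.
resp = requests.get(results[0]["href"], timeout=10)
resp.raise_for_status()
soup = BeautifulSoup(resp.text, "html.parser")
print(soup.get_text(separator="\n", strip=True)[:500])
```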