Why this server?
This server implements the Model Context Protocol (MCP) for the RAG Web Browser Actor, functioning as a web browser for large language models (LLMs) and RAG pipelines, similar to web search in ChatGPT.
Why this server?
This server enables LLMs to interact with web pages, take screenshots, generate test code, scrape web pages, and execute JavaScript in a real browser environment.
Why this server?
Provides web search functionality via DuckDuckGo for Claude Code and MCP-compatible clients, featuring advanced content exploration, navigation across search results, and detailed webpage analysis.
Why this server?
An MCP server implementation that provides tools for retrieving and processing documentation through vector search, enabling AI assistants to augment their responses with relevant documentation context.
Why this server?
Enables retrieval and processing of web page content for LLMs by converting HTML to markdown, with support for content truncation and pagination.
Why this server?
A Cloudflare Workers-based server that extracts clean, formatted text from web pages using WebforAI and makes it accessible to AI models through the Model Context Protocol.
Why this server?
A middleware API that connects AI assistants like ChatGPT to Captain Data tools for extracting information from LinkedIn company and profile pages.
Why this server?
Provides functionality to fetch web content in various formats (HTML, JSON, plain text, and Markdown) through simple API calls.
Why this server?
Provides functionality to fetch web content in various formats, including HTML, JSON, plain text, and Markdown.
Why this server?
Allows Claude or other MCP-compatible AI assistants to search the web and get up-to-date information using the Perplexity API, with features for filtering results by time period.
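All of the servers above expose their capabilities as MCP tools, so a client connects to each of them in the same way. The sketch below is a minimal, hypothetical example using the official @modelcontextprotocol/sdk TypeScript client; the spawned package name, the tool name (`fetch_url`), and its arguments are placeholder assumptions for illustration, not the actual interface of any server listed here.

```typescript
// Minimal sketch: connect an MCP client to a locally spawned web-fetching
// server over stdio and call one of its tools. The command, tool name, and
// argument shape are illustrative assumptions, not any specific server's API.
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

async function main() {
  // Spawn the server as a subprocess and communicate over stdio.
  const transport = new StdioClientTransport({
    command: "npx",
    args: ["-y", "example-web-fetch-mcp-server"], // hypothetical package name
  });

  const client = new Client({ name: "example-client", version: "1.0.0" });
  await client.connect(transport);

  // Discover which tools the server exposes (e.g. search, fetch, scrape).
  const { tools } = await client.listTools();
  console.log(tools.map((t) => t.name));

  // Call a tool; "fetch_url" and its arguments are assumed for illustration.
  const result = await client.callTool({
    name: "fetch_url",
    arguments: { url: "https://example.com", format: "markdown" },
  });
  console.log(result.content);

  await client.close();
}

main().catch(console.error);
```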