A Model Context Protocol server that provides web and image search capabilities through Google's Custom Search API, allowing AI assistants like Claude to access current information from the internet.
An MCP server that uses PyPDF2 to:
• merge-pdfs
• extract-pages
• search-pdfs
• merge-pdfs-ordered (merge PDFs in a user-specified order)
• find-related-pdfs (search extracted text with regular expressions to find related PDF files)
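A minimal sketch of the library calls a tool like merge-pdfs might wrap, assuming PyPDF2's PdfMerger (the file names and helper function below are hypothetical, not the server's actual tool schema):

```python
# Sketch of merging PDFs with PyPDF2; file names are placeholders and the
# real server exposes this as an MCP tool rather than a standalone script.
from PyPDF2 import PdfMerger

def merge_pdfs(paths, output_path):
    """Append each input PDF in the given order and write a single file."""
    merger = PdfMerger()
    for path in paths:
        merger.append(path)  # pages are kept in the order the paths are given
    merger.write(output_path)
    merger.close()

merge_pdfs(["chapter1.pdf", "chapter2.pdf"], "merged.pdf")
```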
A Model Context Protocol server that enables intelligent searching across documentation for 30+ programming libraries and frameworks, fetching relevant information from official sources.
An MCP-based service that analyzes user search keywords to determine their intent, providing classifications, reasoning, references, and search suggestions to support SEO analysis.
An MCP (Model Context Protocol) server that provides Google search and webpage content analysis tools, enabling AI models to perform Google searches and analyze webpage content programmatically.
Enables Google search and webpage content extraction via Chrome for macOS, allowing access to both unauthenticated and authenticated content, and integrates with Claude for secure and automated browsing tasks.
Enables search through a Google Custom Search Engine, letting users submit a query and retrieve result titles, links, and snippets, and integrates with other tools for content extraction and advanced search strategies.
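As a rough sketch of the kind of call such a server makes under the hood, assuming the public Custom Search JSON API (the API key and engine ID below are placeholder environment variables, not this server's documented configuration):

```python
# Sketch of a Google Custom Search JSON API call returning titles, links,
# and snippets. GOOGLE_API_KEY and CSE_ID are placeholder credentials.
import os
import requests

def cse_search(query: str, num: int = 5):
    resp = requests.get(
        "https://www.googleapis.com/customsearch/v1",
        params={
            "key": os.environ["GOOGLE_API_KEY"],  # API key from Google Cloud
            "cx": os.environ["CSE_ID"],           # Custom Search Engine ID
            "q": query,
            "num": num,
        },
        timeout=10,
    )
    resp.raise_for_status()
    return [
        {"title": item["title"], "link": item["link"], "snippet": item.get("snippet", "")}
        for item in resp.json().get("items", [])
    ]
```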
Enables efficient web search integration with Jina.ai's Search API, offering clean, LLM-optimized content retrieval with support for various content types and configurable caching.
A Model Context Protocol server that enables AI models to perform real-time internet and knowledge searches through Higress, enhancing model responses with up-to-date information from Google, Bing, Arxiv, and internal knowledge bases.
Provides real-time stock news search capabilities via Tavily API, allowing MCP clients to retrieve filtered and customized stock news with various search parameters.
A server that leverages Cloudflare Browser Rendering to extract and process web content for use as context in LLMs, offering tools for fetching pages, searching documentation, extracting structured content, and summarizing content.
An MCP server that enables web search capabilities using OpenAI's o3 model, allowing AI assistants to perform text-based web searches and return AI-powered results.
Enables real-time search and retrieval of academic paper information from multiple sources, providing access to paper metadata, abstracts, and full-text content when available, with structured data responses for integration with AI models that support tool/function calling.
Enables integration of Google search functionality into MCP-enabled applications using the Serper API, providing rich search results, configurable parameters, and efficient response handling.
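A rough sketch of a Serper request such a server could issue, assuming Serper's JSON search endpoint and X-API-KEY header (the key is a placeholder and the response handling is illustrative, not this server's actual code):

```python
# Sketch of a Serper.dev search request; SERPER_API_KEY is a placeholder.
import os
import requests

def serper_search(query: str):
    resp = requests.post(
        "https://google.serper.dev/search",
        headers={
            "X-API-KEY": os.environ["SERPER_API_KEY"],
            "Content-Type": "application/json",
        },
        json={"q": query},
        timeout=10,
    )
    resp.raise_for_status()
    # "organic" holds the standard results; rich blocks such as the knowledge
    # graph or answer box appear under their own keys when present.
    return resp.json().get("organic", [])
```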
A server that enables document searching using Vertex AI with Gemini grounding, improving search results by grounding responses in private data stored in Vertex AI Datastore.
An MCP server for the Google Cloud Healthcare API that enables agentic AI for a variety of FHIR-based digital health solutions, from smarter clinical workflows for health systems to pre-authorization frameworks for payers.
A server implementing the Model Context Protocol (MCP) to support Agent8 SDK development by providing system prompts and code example search capabilities through stdio and SSE transports.
A privacy-friendly MCP server that enables web searches and URL content extraction using DuckDuckGo, allowing AI assistants to access real-time web information without API keys.
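One common key-free approach, sketched below, uses the third-party duckduckgo_search package; whether this particular server relies on it rather than DuckDuckGo's HTML endpoints is an assumption:

```python
# Sketch of key-free DuckDuckGo search via the duckduckgo_search package
# (an assumption; the server may query DuckDuckGo's HTML endpoint instead).
from duckduckgo_search import DDGS

def ddg_search(query: str, max_results: int = 5):
    with DDGS() as ddgs:
        # Each result is a dict with "title", "href", and "body" fields.
        return list(ddgs.text(query, max_results=max_results))

for r in ddg_search("model context protocol"):
    print(r["title"], r["href"])
```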
Provides web search capabilities through Baidu with content fetching and parsing features, allowing LLMs to search the web and extract webpage content.
Facilitates searching and accessing programming resources across platforms like Stack Overflow, MDN, GitHub, npm, and PyPI, aiding LLMs in finding code examples and documentation.
Provides offline access to SAP documentation and real-time SAP Community content, integrating official documentation with community-driven solutions for comprehensive developer support.
Provides Google Search functionality for AI models using Gemini's built-in Grounding with Google Search feature, returning real-time web search results with source citations.
Enhances large language models with competitive programming knowledge by leveraging OI-Wiki content through vector search, allowing models to retrieve relevant algorithms and techniques.
An MCP server that runs parallel Google searches for multiple keywords at once, providing structured results while handling CAPTCHAs and simulating user browsing patterns.