- Provides RAG capabilities for semantic document search using the Qdrant vector database, allowing users to add, search, list, and delete documentation with metadata support.
- Enables integration with Google Drive for listing, reading, and searching files, supporting various file types with automatic export of Google Workspace files.
- A Model Context Protocol server that enables LLMs to read, search, and analyze code files with advanced caching and real-time file watching capabilities.
- A Model Context Protocol server that provides Claude and other LLMs with read-only access to the Hugging Face Hub APIs, enabling interaction with models, datasets, spaces, papers, and collections through natural language.
- Enables access to the Fireflies.ai API for retrieving, searching, and summarizing meeting transcripts, with various filtering options and output formats.
- A server that allows AI assistants to browse and read files from specified GitHub repositories, providing access to repository contents via the Model Context Protocol.
- A Model Context Protocol server that provides tools for interacting with databases, including PostgreSQL, DuckDB, and Parquet files stored in Google Cloud Storage.
- Enables querying and retrieving content from Confluence through CQL searches and page content fetching, allowing Claude to seamlessly access information stored in Confluence workspaces.
- Implements a Model Context Protocol (MCP) server for connecting AI models with Obsidian knowledge bases. Through this server, AI models can directly read, create, update, and delete Obsidian notes, as well as manage folder structures (a minimal client sketch follows this list).
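
Every entry above speaks the same Model Context Protocol, so a client connects to each of them the same way: launch the server, initialize a session, discover its tools, and call one. Below is a minimal sketch using the official MCP Python SDK, with the Obsidian server from the last entry as the example; the launch command and the `read_note` tool name are illustrative assumptions, not that server's documented interface.

```python
# Minimal MCP client sketch (requires the official SDK: pip install mcp).
# The server command and tool name below are placeholders -- check the
# chosen server's README for its actual launch command and tool schema.
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client


async def main() -> None:
    # Launch the server as a subprocess speaking MCP over stdio
    # (hypothetical command; substitute the server you want to use).
    params = StdioServerParameters(command="uvx", args=["mcp-obsidian"])
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # Discover what the server exposes.
            tools = await session.list_tools()
            for tool in tools.tools:
                print(tool.name, "-", tool.description)

            # Invoke a tool; the name and arguments here are assumptions
            # for illustration, not the server's documented API.
            result = await session.call_tool(
                "read_note", arguments={"path": "Inbox/ideas.md"}
            )
            print(result.content)


if __name__ == "__main__":
    asyncio.run(main())
```

The same session pattern applies to any server in the list; only the launch command and the tool names returned by `list_tools` differ.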