Browse 505 MCP Connectors from the official MCP Registry. Connect to these servers directly without local installation.
MCP server for aerospace calculations: orbital mechanics, ephemeris, DSN operations, ...
The Google Compute Engine MCP server is a fully-managed Model Context Protocol server that provides tools to manage Google Compute Engine resources through AI agents. It enables capabilities including instance management (creating, starting, stopping, resetting, listing), disk management, handling instance templates and group managers, viewing machine and accelerator types, managing images, and accessing reservation and commitment information. The server operates as a zero-deployment, enterprise-grade endpoint at https://compute.googleapis.com/mcp with built-in IAM-based security.
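As a rough sketch of how a client might reach that endpoint, the snippet below uses the official MCP TypeScript SDK to connect over streamable HTTP and list the exposed tools. The environment variable for the IAM access token and the use of the SDK's requestInit option to attach it are assumptions for illustration, not details from the entry above.

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

// Assumption: a Google IAM access token is obtained out of band (e.g. via gcloud or ADC).
const accessToken = process.env.GOOGLE_ACCESS_TOKEN!;

const client = new Client({ name: "gce-mcp-example", version: "1.0.0" });
const transport = new StreamableHTTPClientTransport(
  new URL("https://compute.googleapis.com/mcp"),
  { requestInit: { headers: { Authorization: `Bearer ${accessToken}` } } },
);

await client.connect(transport);

// Discover the instance, disk, template, and image tools the server exposes.
const { tools } = await client.listTools();
console.log(tools.map((t) => t.name));

await client.close();
```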
A Notion workspace is a collaborative environment where teams can organize work, manage projects,…
AgentMail is the email inbox API for AI agents. It gives agents their own email inboxes, like Gmail.
The Google GKE MCP server is a managed Model Context Protocol server that provides AI applications with tools to manage Google Kubernetes Engine (GKE) clusters and Kubernetes resources. It exposes a structured, discoverable interface that allows AI agents to interact with GKE and Kubernetes APIs, enabling them to inspect cluster configurations, retrieve Kubernetes resource YAMLs, monitor operations like cluster upgrades, diagnose issues, and optimize costs—all without needing to parse text output or use complex kubectl commands.
VibeMarketing (https://vibemarketing.ninja/mcp) is a directory service that catalogs and provides information about various MCP (Model Context Protocol) servers. It serves as a centralized resource where users can discover different MCP servers and their capabilities. Examples of servers listed in the directory include Sequential Thinking MCP (for dynamic problem-solving through structured thought sequences) and Memory MCP (a knowledge graph-based persistent memory system).
The AWS Knowledge MCP server is a fully managed remote Model Context Protocol server that provides real-time access to official AWS content in an LLM-compatible format. It offers structured access to AWS documentation, code samples, blog posts, What's New announcements, Well-Architected best practices, and regional availability information for AWS APIs and CloudFormation resources. Key capabilities include searching and reading documentation in markdown format, getting content recommendations, listing AWS regions, and checking regional availability for services and features.
Jinko is a travel MCP server that provides hotel search and booking capabilities.
Fast, intelligent web search and web crawling. New MCP tool: Exa-code is a context tool for coding.
The Ferryhopper MCP server is a connector for LLMs and AI Agents in maritime travel that exposes ferry routes, schedules, and booking options. It enables AI assistants to search ports and connections across 33 countries and 190+ ferry operators, provide real-time ferry itineraries with indicative prices, and assist users with planning island-hopping or multi-leg journeys by processing natural language queries about ferry times, passenger counts, and travel durations.
The Google Maps MCP server is a fully-managed server provided by the Maps Grounding Lite API that connects AI applications to Google Maps Platform services. It provides three main tools for building LLM applications: searching for places, looking up weather information, and computing routes with details like distance and travel time. The server acts as a proxy that translates Google Maps data into a format that AI applications can understand, enabling agents to accurately answer real-world location and travel queries.
MCP server to assist with JxBrowser development.
The Remote MCP server acts as a standardized bridge between LLM applications (like Claude, ChatGPT, and Cursor) and external services, enabling AI agents to access external tools and resources. Its primary capability is providing a centralized search tool to discover other MCP servers and their respective tools. Unlike local implementations, it runs remotely with OAuth authentication and permission controls for security.
Agent Interviews is an AI-powered research platform designed to conduct qualitative interviews an...
Greet anyone with a friendly, personalized hello. Explore the origin story of 'Hello, World.' Jump…
The Box MCP server is a secure gateway that connects external AI agents to enterprise content stored in Box, enabling agent-based document access, advanced search, and multi-file analysis while preserving Box security policies. It provides capabilities including keyword search, Box AI-powered Q&A across files, metadata extraction, file management, and authentication, all validated against Box's granular permission controls. The server integrates with major AI platforms like Anthropic Claude, Microsoft Copilot Studio, and Mistral Le Chat, and is available both as a Box-hosted remote server and a self-hosted open-source Python project.
The CustomGPT.ai MCP server is a fully managed, RAG-powered endpoint that connects large language models with private knowledge bases and external data sources. It provides tools for retrieval-augmented generation queries (send_message), data ingestion (upload_file), and source listing, enabling AI agents to query private documents like PDFs with high accuracy and real-time citations.
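A minimal sketch of calling the send_message tool named above with the MCP TypeScript SDK; the server URL and the argument names (project_id, prompt) are assumptions for illustration, not documented in this listing.

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

const client = new Client({ name: "customgpt-example", version: "1.0.0" });
// Hypothetical endpoint; the real URL comes from your CustomGPT.ai project settings.
await client.connect(
  new StreamableHTTPClientTransport(new URL("https://mcp.customgpt.example/mcp")),
);

// Ask a RAG question against a private knowledge base via send_message.
// The argument names below are assumed.
const answer = await client.callTool({
  name: "send_message",
  arguments: { project_id: "1234", prompt: "Summarize the onboarding PDF." },
});
console.log(answer.content);

await client.close();
```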
The BigQuery remote MCP server is a fully managed service that uses the Model Context Protocol to connect AI applications and LLMs to BigQuery data sources. It provides secure, standardized tools for AI agents to list datasets and tables, retrieve schemas, generate and execute SQL queries through natural language, and analyze data—enabling direct access to enterprise analytics data without requiring manual SQL coding.
The Instant MCP server is a wrapper around the Instant Platform SDK that enables creating, managing, and updating InstantDB applications directly within an editor. It provides tools for fetching rules files for LLMs, retrieving and pushing app schemas, managing permission rules, and executing database queries. Key capabilities include schema management (get-schema, push-schema), permission management (get-perms, push-perms), query execution, and listing recent query history.
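To make the schema round trip concrete, here is a hedged sketch with the MCP TypeScript SDK that reads the current schema with get-schema and pushes a revision with push-schema; the server URL, the app_id argument, and the payload shape are assumptions.

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

const client = new Client({ name: "instant-example", version: "1.0.0" });
// Hypothetical endpoint for the Instant MCP server.
await client.connect(
  new StreamableHTTPClientTransport(new URL("https://mcp.instantdb.example/mcp")),
);

// Read the current schema for an app (the app_id argument name is assumed).
const current = await client.callTool({
  name: "get-schema",
  arguments: { app_id: "my-app" },
});
console.log(current.content);

// After editing the schema locally, push the revision back (payload shape assumed).
await client.callTool({
  name: "push-schema",
  arguments: { app_id: "my-app", schema: { /* edited schema goes here */ } },
});

await client.close();
```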
The Listenetic MCP server is a remote, cloud-hosted server that enables AI assistants like ChatGPT and Claude to convert articles, documents, websites, and videos into high-quality AI-generated audio. It provides multi-format support for text and binary files, natural-sounding text-to-audio conversion using AI, and specialized processing for SSML, markup, markdown, and various media formats through three core tools: listentic_supported_mimetypes, listentic_add_content_text, and listentic_add_content_binary.
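The two-step flow those tool names suggest might look like the sketch below: query listentic_supported_mimetypes, then submit an article with listentic_add_content_text. The endpoint URL and the title/text argument names are assumptions.

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

const client = new Client({ name: "listenetic-example", version: "1.0.0" });
// Hypothetical endpoint for the hosted Listenetic server.
await client.connect(
  new StreamableHTTPClientTransport(new URL("https://mcp.listenetic.example/mcp")),
);

// 1. Ask which content types the server can convert to audio.
const mimetypes = await client.callTool({
  name: "listentic_supported_mimetypes",
  arguments: {},
});
console.log(mimetypes.content);

// 2. Submit plain text for conversion (argument names are assumed).
await client.callTool({
  name: "listentic_add_content_text",
  arguments: { title: "Weekly digest", text: "Full article text goes here..." },
});

await client.close();
```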
One MCP for the Web. Easily search, crawl, navigate, and extract websites without getting blocked.…
An MCP server that provides tools for Trunk CI Autopilot to apply fixes to failing tests.
A simple Typescript MCP server built using the official MCP Typescript SDK and smithery/cli. This…
The Stytch MCP server is a reference implementation that demonstrates remote MCP server authentication and authorization using Stytch Connected Apps. It provides OAuth 2.1-compliant authorization (including PKCE), Dynamic Client Registration, and validates Stytch-issued access tokens to enable AI agents to securely interact with external services through permissioned access, supporting scopes like openid, email, profile, and manage:project_data.
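A hedged sketch of the Dynamic Client Registration step that flow implies, written as a plain RFC 7591 request in TypeScript; the registration URL and redirect URI are hypothetical, while the scope list comes from the entry above. In practice the real endpoint is discovered from the server's OAuth authorization-server metadata.

```typescript
// Assumption: the registration endpoint below is a placeholder; discover the real one
// from the server's OAuth metadata before registering.
const registrationEndpoint = "https://stytch-mcp.example.com/oauth2/register";

const res = await fetch(registrationEndpoint, {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    client_name: "example-agent",
    redirect_uris: ["http://localhost:8976/callback"], // hypothetical local callback
    grant_types: ["authorization_code"],
    response_types: ["code"],
    token_endpoint_auth_method: "none", // public client; PKCE protects the code exchange
    scope: "openid email profile manage:project_data",
  }),
});

const registration = await res.json();
// The returned client_id is then used in the OAuth 2.1 authorization-code + PKCE flow.
console.log(registration.client_id);
```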
The Telnyx MCP server is an official implementation of the Model Context Protocol that enables AI clients (like Claude Desktop, Cursor, and OpenAI Agents) to interact with Telnyx's telephony, messaging, and AI assistant APIs. It provides comprehensive capabilities including making and managing phone calls, sending SMS/MMS messages, purchasing and configuring phone numbers, creating AI assistants with custom instructions, managing cloud storage buckets, scraping and embedding website content, and handling integration secrets. The server exists as both a local implementation and a remotely hosted version, allowing developers to integrate real-world communication infrastructure directly into AI applications.