The Model Context Protocol registry, hosting, and gateway.
Glama indexes every MCP server in the ecosystem, hosts the connectors you need, and proxies every call through a gateway you control – with full call logging, per-tool access control, and managed credentials for every agent you run in production.
21,371 MCP servers · 1,787 hosted connectors · Last indexed
Put the Glama MCP Gateway in front of your agents.
Every MCP call from your agents flows through Glama – so you know exactly what your AI is doing, and you decide what it's allowed to do.
Every call logged · Every tool gated · Every credential managed
Full call logging
Every tool call is logged with complete inputs and outputs. Debug broken agents, inspect every call, and audit exactly what your AI is doing.
Tool access control
Enable or disable individual tools per connector. You decide what your agents can and cannot do – down to the specific tool.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation. Credentials stay centralized in Glama – your clients never have to store them, and they never expire mid-call.
Usage analytics
See which tools your agents call, how often, and when. Understand usage patterns, catch anomalies, and attribute costs.
Plus hosted MCP deployments on dedicated infrastructure, an OpenAI-compatible LLM gateway, cost attribution labels, and team workspaces. See full pricing →
Popular MCP servers
The most-used MCP servers across the Glama registry.
- ragmap
MapRag is a discovery + routing layer for retrieval. It indexes RAG-capable MCP servers, enriches them with structured metadata, and helps agents (and humans) quickly find the right retrieval server for a task under constraints like citations, freshness, privacy, domain, and latency. MapRag does not do RAG itself. It helps you choose the best RAG tool/server to do the retrieval.
- Unofficial PubChem MCP Server
A comprehensive Model Context Protocol server providing access to over 110 million chemical compounds with extensive molecular properties, bioassay data, and chemical informatics tools from the PubChem database.
- PubMed MCP Server
A comprehensive Model Context Protocol server that enables advanced PubMed literature search, citation formatting, and research analysis through natural language interactions.
- Advanced Reasoning MCP Server
An MCP server that enhances sequential thinking with meta-cognitive capabilities including confidence tracking, hypothesis testing, and organized memory storage through graph-based libraries and structured JSON documents.
- Fetch MCP Server
This server enables LLMs to retrieve and process content from web pages, converting HTML to markdown for easier consumption.
- Envato Downloader API
Provides access to download files from Envato Elements platform by URL through the Envato Downloader API.
- Google Analytics MCP Server
Official Google Analytics MCP Server
- Social Media Handle Checker
Checks username availability and retrieves account information across major social media platforms including YouTube, TikTok, Threads, X (Twitter), and Instagram, perfect for brand name qualification and competitive research.
Popular MCP connectors
Hosted connectors ready to plug into any MCP-compatible client – proxied through the Glama Gateway.
- linear
MCP server for Linear project management and issue tracking
- Bitrise
MCP Server for Bitrise, enabling app management, build operations, artifact management, and more.
- omni-service-node
AI-to-AI petrol station. 56 pay-per-call endpoints covering market signals, crypto/DeFi, geopolitics, earnings, insider trades, SEC filings, sanctions screening, ArXiv research, whale tracking, and more. Micropayments in USDC on Base Mainnet via x402 protocol.
- FreelanceOS
Freelance business manager — clients, proposals, invoices, time tracking, scope, and follow-ups.
- GoldenMatch
Find duplicate records in 30 seconds. Zero-config entity resolution, 97.2% F1 out of the box.
- mcp
AI-powered design and management for Webflow Sites
- GoldenCheck
Auto-discover validation rules from data — scan, profile, health-score. No rules to write.
- mcp
A Model Context Protocol server for Wix AI tools
Discover, host, and build with MCP
Every surface of the Glama platform – from the public registry to the in-browser inspector.
MCP Servers
Find an MCP server to power your Claude, ChatGPT, or Cursor agent. 21,371 indexed, updated daily.
MCP Connectors
Plug a hosted MCP connector into any client in one click. 1,787 proxied through Glama – with full call logging, per-tool access control, and managed OAuth.
MCP Tools
Search across tools from every MCP server. Find the exact capability you need – like "query Postgres" or "send email".
MCP Clients
Compare MCP-compatible clients side by side. See which features each one supports before you pick.
MCP Inspector
Test and debug any MCP server from your browser. No install, no login – paste a URL and go.
AI Playground
Chat with an AI that can use any MCP server you own. Routed through the Glama Gateway with full visibility and control over every tool call.
What is the Model Context Protocol?
MCP is an open protocol introduced by Anthropic in November 2024. It lets AI applications connect to external tools, data sources, and workflows through a single standardized interface – replacing the "integrations for every AI app times every tool" problem with an "implement once, run anywhere" one. Think of it as USB-C for AI: one shape, every device.
An MCP setup has three roles. The host is the AI application – Claude Desktop, ChatGPT, Cursor, VS Code. The client is the host's connection to one specific server. An MCP server exposes three primitives: tools (functions the model can call, like send email or query Postgres), resources (data the model can read, like a config file or database schema), and prompts (reusable templated workflows). Servers speak to clients over stdio, Server-Sent Events, or the modern Streamable HTTP transport using JSON-RPC 2.0.
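Concretely, a tool invocation over any of those transports is a pair of JSON-RPC 2.0 messages. A minimal sketch of a tools/call exchange (the send_email tool and its arguments are illustrative, not any real server's tool):

```python
import json

# JSON-RPC 2.0 request a client sends to invoke a tool.
# "send_email" and its arguments are made up for illustration.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "send_email",
        "arguments": {"to": "team@example.com", "subject": "Build failed"},
    },
}

# The server's reply carries the result as content blocks and
# echoes the request id, per JSON-RPC 2.0.
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {"content": [{"type": "text", "text": "Email queued."}]},
}

print(json.dumps(request, indent=2))
```

Resources and prompts follow the same pattern with resources/read and prompts/get methods.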
MCP is supported across every major AI client – Claude Desktop, Claude Code, ChatGPT, Cursor, VS Code, Windsurf, Zed, Replit, Sourcegraph, JetBrains IDEs, and dozens more. Glama sits at the infrastructure layer: we index every MCP server and hosted connector in the ecosystem, search every tool they expose, and proxy every call through a gateway that logs inputs and outputs, enforces per-tool access control, and manages OAuth credentials – so your agents get the MCP ecosystem plus production-grade observability.
Frequently asked questions
Answers to the most common questions about MCP and how Glama fits into it.
MCP is an open protocol introduced by Anthropic in November 2024 that standardizes how AI applications connect to external tools, data sources, and workflows. Think of it as USB-C for AI — one protocol replaces custom integrations between every AI app and every tool.
An MCP setup has three roles: the host (the AI application), a client (the host's connection to one server), and the server (which exposes tools, resources, and prompts over JSON-RPC 2.0). Clients like Claude, ChatGPT, and Cursor speak MCP natively.
An MCP Gateway is a reverse proxy that sits between AI clients and MCP servers. It appears as an MCP server to the client while acting as an MCP client to the backend.
Without a gateway, every agent manages its own connections to every server — a tangled point-to-point mess that breaks at enterprise scale. A gateway consolidates all of that into one control plane that:
- Authenticates — one identity in front of every connected server
- Injects credentials — OAuth tokens and API keys stored once, refreshed automatically
- Controls tool access — enable or disable individual tools per connection
- Manages sessions — Mcp-Session-Id tracking across reconnects
- Logs every call — full JSON-RPC audit trail
Glama's MCP Gateway turns the open MCP ecosystem into a production-grade control plane — every connector in the registry is fronted by the gateway, so every call your agents make is visible, auditable, and revocable.
An MCP Proxy is a thin pass-through — it takes a JSON-RPC request from a client, forwards it to an upstream MCP server, and returns the response. Nothing else.
An MCP Gateway includes a proxy and adds the control plane on top:
- Access control — enable or disable individual tools per connection
- Credential management — encrypted OAuth storage and automatic token refresh
- Session lifecycle —
Mcp-Session-Idtracking across reconnects - Usage analytics — aggregated tool call counts, latency, cost attribution
- Full call logging — every JSON-RPC message persisted for audit
Use a proxy when you need basic routing for a single upstream server. Use a gateway when you are running MCP at enterprise scale and need governance. Glama is a full gateway — not a roll-your-own routing layer.
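The difference fits in a few lines of code. A bare proxy forwards everything; a gateway consults a per-connection tool policy first and short-circuits disabled tools with a JSON-RPC error, so the upstream never sees the call. A sketch, where the upstream callable and the choice of error code -32601 are assumptions for illustration:

```python
from typing import Any, Callable

JsonRpc = dict[str, Any]

def proxy(request: JsonRpc, upstream: Callable[[JsonRpc], JsonRpc]) -> JsonRpc:
    """A bare MCP proxy: pass the request through untouched."""
    return upstream(request)

def gateway(request: JsonRpc, upstream: Callable[[JsonRpc], JsonRpc],
            disabled_tools: set[str]) -> JsonRpc:
    """Gateway sketch: block disabled tools before they reach the upstream."""
    if request.get("method") == "tools/call":
        tool = request.get("params", {}).get("name")
        if tool in disabled_tools:
            # Short-circuit with a JSON-RPC error; the upstream is never called.
            return {"jsonrpc": "2.0", "id": request.get("id"),
                    "error": {"code": -32601, "message": f"Tool disabled: {tool}"}}
    return upstream(request)

# Demo with a stubbed upstream server:
fake_upstream = lambda req: {"jsonrpc": "2.0", "id": req["id"], "result": {"content": []}}
call = {"jsonrpc": "2.0", "id": 7, "method": "tools/call",
        "params": {"name": "delete_repo", "arguments": {}}}
blocked = gateway(call, fake_upstream, disabled_tools={"delete_repo"})
allowed = gateway(call, fake_upstream, disabled_tools=set())
```

A production gateway layers credential injection, session tracking, and logging around that same choke point.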
An MCP connector is a pre-configured MCP server you can plug into an AI client in one step. In Claude, ChatGPT, and other clients, any remote MCP server a user has added to their account shows up as a "connector".
A Glama connector goes further. It adds hosting, managed credentials, and per-tool access control on top of the underlying server — so you don't have to run the server, store the tokens, or touch any config. Paste the Glama URL into Claude, ChatGPT, or Cursor, and the Glama Gateway does the rest.
MCP open-source server hosting is a managed service that takes an open-source MCP server — someone else's code, published on GitHub — and runs it on infrastructure you don't have to provision. You get:
- A public URL with TLS
- OAuth and credential management
- Health checks, monitoring, and scaling
- No code of your own
Glama hosts open-source MCP servers from its registry: for any server with a valid build spec, you can deploy it in one click and get a streamable-http endpoint fronted by the Glama Gateway.
Every JSON-RPC message — request, response, and SSE server event — is persisted to Glama's database with the full payload, including:
- Tool name
- Input arguments
- Output result
- Timestamps and session IDs
Call logs are scoped to the connection profile owner — workspace members with the right role can view shared logs.
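Put together, a persisted log entry carries all of those fields. A sketch of the shape, where every field name is an assumption for illustration, not Glama's actual storage schema:

```python
# Illustrative shape of one persisted call-log entry.
# Field names and values are made up; this is not Glama's real schema.
log_entry = {
    "session_id": "sess-01",                  # ties the call to one MCP session
    "timestamp": "2025-01-01T12:00:00Z",
    "tool_name": "query_postgres",            # hypothetical tool
    "input_arguments": {"sql": "SELECT 1"},   # full request payload
    "output_result": {"content": [{"type": "text", "text": "1"}]},
}
```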
Per-tool access control is a per-connection on/off flag. Disable a tool and the gateway returns a JSON-RPC error to the client without ever forwarding the request upstream — the tool author never knows the call was attempted.
OAuth credentials are stored encrypted and automatically refreshed, so your client never holds the tokens. Revoking access is one click — no credential rotation required on the client side.
Heads up: the gateway captures full JSON-RPC payloads, so tool arguments containing sensitive data (API keys, PII, customer data) are visible in the audit log. For sensitive workloads, mask those arguments on the client side before calling.
Credentials. Glama stores OAuth access tokens, refresh tokens, and API keys in encrypted form. They are decrypted in memory only at the moment of making an upstream request, and automatically refreshed before expiration. Your client never sees the raw credentials — only the Glama Gateway does.
Call logs. Full JSON-RPC payloads — including tool name, input arguments, and upstream output — are stored per connection profile so you can audit exactly what your agents did, with timestamps and session IDs for traceability.
Usage aggregates. Separately from the raw audit trail, Glama computes aggregated analytics (tool call counts, latency, cost attribution) that can be queried without reading individual call payloads — useful for dashboards and anomaly detection.
Use the Glama MCP Inspector — an in-browser tool that connects to any MCP server URL and lets you:
- List tools, resources, and prompts
- Call them with structured inputs
- See raw JSON-RPC responses
- Handle OAuth flows, bearer tokens, and custom headers
- Exercise advanced spec features — tasks, elicitations, sampling, progress notifications, audio, and images
All state is encoded in the URL so you can share or bookmark a debug session. No install, no login, no local MCP client required — paste a URL, click inspect, and you're debugging.
The official reference inspector (@modelcontextprotocol/inspector) is an npm-installed local tool. Glama's is web-based — it works against remote MCP servers directly, and requests go from your browser to the MCP server without Glama ever logging them.
Yes. When you know what capability you need but don't know which server provides it, tool-level search gets you there in one query.
Glama indexes every tool exposed by every server in its registry — names, descriptions, input schemas, and the MCP annotation hints (readOnlyHint, destructiveHint, idempotentHint) that tell you whether a tool is safe to run in automated agent loops. You can search for capabilities like:
- query Postgres — find every database-query tool
- send email — find every email-sending tool
- generate Figma component — find every Figma-integration tool
Once you've found a matching tool, you can either install the underlying server yourself or plug in the Glama-hosted connector for that server in one click. Few other MCP directories index at the tool level.
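The annotation hints mentioned above are part of a tool's definition in the MCP spec, returned alongside its name, description, and input schema. A sketch of an indexed tool entry (the tool itself is made up for illustration):

```python
# A tool definition as a server would list it via tools/list,
# including the MCP annotation hints. The tool name, description,
# and schema here are invented for illustration.
tool = {
    "name": "query_postgres",
    "description": "Run a read-only SQL query against Postgres",
    "inputSchema": {
        "type": "object",
        "properties": {"sql": {"type": "string"}},
        "required": ["sql"],
    },
    "annotations": {
        "readOnlyHint": True,      # does not modify its environment
        "destructiveHint": False,  # no irreversible side effects
        "idempotentHint": True,    # repeated calls add no extra effect
    },
}

# An agent loop can use the hints to decide what is safe to auto-run.
safe_to_autorun = (tool["annotations"]["readOnlyHint"]
                   and not tool["annotations"]["destructiveHint"])
```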
A Glama connector URL works across every MCP-compatible client. Depending on your client:
- Claude — Customize → Connectors → add a custom connector
- ChatGPT — Settings → Apps & Connectors → Create (Developer Mode must be enabled)
- Cursor / VS Code — add the URL to the MCP configuration in settings
- Windsurf / Zed / JetBrains — add the URL to the client's MCP server list
The same URL works everywhere — one connector, every agent.
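For the file-based clients, the entry is a few lines of JSON. A sketch of a Cursor-style MCP configuration, where the connector name and URL are placeholders you replace with your own:

```json
{
  "mcpServers": {
    "my-glama-connector": {
      "url": "https://<your-glama-connector-url>"
    }
  }
}
```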
Submit open-source MCP servers to the Glama registry straight from a GitHub repository. On the servers page, click Add MCP Server and fill in:
- The GitHub repository URL for your server
- A display name and short description
Glama runs automated quality checks (license detection, security scan, health test) during indexing. Most submissions pass automatically within minutes and become discoverable through Glama's search, category pages, and recommendation feeds.
Control how your server is indexed by adding a glama.json metadata file to your repo — it lets you set the display name, description, category, environment variables, and build spec. Servers must be on GitHub today; for other sources, reach out on Discord.
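As a rough sketch, a glama.json might look like the following. Every field name here is a guess for illustration only; consult Glama's documentation for the actual schema:

```json
{
  "name": "my-mcp-server",
  "description": "Short description shown in the registry",
  "category": "developer-tools",
  "env": ["API_KEY"]
}
```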
Connectors are remote MCP servers you've already deployed somewhere with a public endpoint. On the connectors page, click Add MCP Server → Connector and provide:
- A name and short description
- The server URL — must be HTTPS and speak the streamable-http transport
- Optional private test credentials (API keys, OAuth details) so Glama can verify the connector is reachable
If your server implements OAuth 2.1 dynamic client registration (RFC 7591), you can skip the test credentials — Glama will register automatically.
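Dynamic client registration is a single HTTP exchange defined by RFC 7591: the would-be client POSTs its metadata to the server's registration endpoint and receives a client_id (and usually a client_secret) back. A sketch of the two message bodies, wrapped in one object for readability; field names come from RFC 7591, and the values are illustrative:

```json
{
  "request": {
    "client_name": "Glama Gateway",
    "redirect_uris": ["https://example.com/oauth/callback"],
    "grant_types": ["authorization_code"],
    "token_endpoint_auth_method": "client_secret_basic"
  },
  "response": {
    "client_id": "s6BhdRkqt3",
    "client_secret": "cf136dc3c1fc93f31185e5885805d",
    "client_id_issued_at": 2893256800
  }
}
```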
Only healthy connectors are indexed for search. Unhealthy connectors stay in pending state until they become reachable. Submitted connectors are public by default and discoverable immediately once healthy. You can mark a connector as deprecated later if you're retiring the upstream server — users browsing Glama will see a notice.
The following AI clients support MCP natively or via adapter:
- Claude Desktop, Claude Code
- ChatGPT
- Cursor, Windsurf, Zed
- VS Code (with the MCP extension)
- JetBrains IDEs
- Replit
- Sourcegraph (via Cody)
Glama tracks every MCP-compatible client with its supported features listed in the client directory.
The official MCP Registry is a vendor-neutral index of MCP server metadata maintained by the MCP steering group — the canonical source of truth for publicly published servers.
Glama builds on top of it with much deeper per-connector data and a full control plane:
- Rich metadata on every connector — health checks, quality scores, security audits, tool schemas with annotations, usage telemetry, license info, and maintainer notes
- One-click hosting on managed infrastructure
- Full observability and control over every call — JSON-RPC logging, per-tool access control, managed OAuth credentials, and usage analytics
Use the official registry for vendor-neutral metadata. Use Glama when you need depth, observability, and control over production MCP traffic.
Browse MCP servers by category
86 curated categories spanning databases, developer tools, agents, and more.