Glama

The Model Context Protocol registry, hosting, and gateway.

Glama indexes every MCP server in the ecosystem, hosts the connectors you need, and proxies every call through a gateway you control – with full call logging, per-tool access control, and managed credentials for every agent you run in production.

21,371 MCP servers · 1,787 hosted connectors

Put the Glama MCP Gateway in front of your agents.

Every MCP call from your agents flows through Glama – so you know exactly what your AI is doing, and you decide what it's allowed to do.

MCP Client
Glama MCP Gateway
MCP Server

Every call logged · Every tool gated · Every credential managed

Full call logging

Every tool call is logged with complete inputs and outputs. Debug broken agents, inspect every call, and audit exactly what your AI is doing.

Tool access control

Enable or disable individual tools per connector. You decide what your agents can and cannot do – down to the specific tool.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation. Credentials stay centralized in Glama – your clients never have to store them, and they never expire mid-call.

Usage analytics

See which tools your agents call, how often, and when. Understand usage patterns, catch anomalies, and attribute costs.

Plus hosted MCP deployments on dedicated infrastructure, an OpenAI-compatible LLM gateway, cost attribution labels, and team workspaces. See full pricing →

The most-used MCP servers across the Glama registry.

Explore all 21,371 servers →

Hosted connectors ready to plug into any MCP-compatible client – proxied through the Glama Gateway.

Explore all 1,787 connectors →

Discover, host, and build with MCP

Every surface of the Glama platform – from the public registry to the in-browser inspector.

What is the Model Context Protocol?

MCP is an open protocol introduced by Anthropic in November 2024. It lets AI applications connect to external tools, data sources, and workflows through a single standardized interface – replacing the N×M problem of building a custom integration for every AI app and tool pair with a single implement-once, run-anywhere interface. Think of it as USB-C for AI: one shape, every device.

An MCP setup has three roles. The host is the AI application – Claude Desktop, ChatGPT, Cursor, VS Code. The client is the host's connection to one specific server. An MCP server exposes three primitives: tools (functions the model can call, like send email or query Postgres), resources (data the model can read, like a config file or database schema), and prompts (reusable templated workflows). Servers speak to clients over stdio, Server-Sent Events, or the modern Streamable HTTP transport using JSON-RPC 2.0.
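To make the wire format concrete, here is a minimal sketch of the JSON-RPC 2.0 envelope an MCP client sends when the model invokes a tool. The `tools/call` method and `params` shape follow the MCP specification; the tool name (`send_email`) and its arguments are hypothetical placeholders.

```python
import json

def make_tool_call(request_id, tool_name, arguments):
    """Build an MCP tools/call request envelope (JSON-RPC 2.0)."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }

# Example: the client asks the server to run a hypothetical email tool.
request = make_tool_call(1, "send_email", {"to": "team@example.com", "subject": "Hi"})
print(json.dumps(request, indent=2))
```

Over the stdio transport this message is written to the server process's stdin (newline-delimited); over Streamable HTTP it is POSTed to the server's MCP endpoint. The server replies with a JSON-RPC response carrying the same `id`.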

MCP is supported across every major AI client – Claude Desktop, Claude Code, ChatGPT, Cursor, VS Code, Windsurf, Zed, Replit, Sourcegraph, JetBrains IDEs, and dozens more. Glama sits at the infrastructure layer: we index every MCP server and hosted connector in the ecosystem, make every tool they expose searchable, and proxy every call through a gateway that logs inputs and outputs, enforces per-tool access control, and manages OAuth credentials – so your agents get the MCP ecosystem plus production-grade observability.

Frequently asked questions

Answers to the most common questions about MCP and how Glama fits into it.

Browse MCP servers by category

86 curated categories spanning databases, developer tools, agents, and more.

See all 86 categories →