Glama

MCP servers, inspector, and gateway – in one place.

Every open-source MCP server and every MCP connector, deployable to Claude, Cursor, or VS Code in one click – all routed through a gateway you control.

21,460 MCP servers · 2,124 MCP connectors · 118,987 MCP tools · Last indexed

Put the Glama MCP Gateway in front of your agents.

Every MCP call from your agents flows through Glama – so you know exactly what your AI is doing, and you decide what it's allowed to do.

MCP Client
Glama MCP Gateway
MCP Server

Every call logged · Every tool gated · Every credential managed

Full call logging

Every tool call is logged with complete inputs and outputs. Debug broken agents, inspect every call, and audit exactly what your AI is doing.

Tool access control

Enable or disable individual tools per connector. You decide what your agents can and cannot do – down to the specific tool.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation. Credentials stay centralized in Glama – your clients never have to store them, and they never expire mid-call.

Usage analytics

See which tools your agents call, how often, and when. Understand usage patterns, catch anomalies, and attribute costs.

Plus hosted MCP deployments on dedicated infrastructure, an OpenAI-compatible LLM gateway, cost attribution labels, and team workspaces. See full pricing →

The most-used MCP servers across the Glama registry.

Explore all 21,460 servers →

Hosted connectors ready to plug into any MCP-compatible client – proxied through the Glama Gateway.

Explore all 2,124 connectors →

Discover, host, and build with MCP

Every surface of the Glama platform – from the public registry to the in-browser inspector.

Frequently asked questions

Answers to the most common questions about MCP and how Glama fits into it.

What is an MCP server?

An MCP server is a small program that exposes tools, resources, and prompts to an AI client over the Model Context Protocol. It speaks JSON-RPC 2.0 over stdio, Server-Sent Events, or Streamable HTTP – and once connected, the AI can call any tool the server defines.

MCP-compatible clients include Claude Desktop, Claude Code, ChatGPT, Cursor, Windsurf, VS Code, Zed, and JetBrains IDEs. One MCP server works across all of them – you install it once, every client can use it.

Glama indexes every MCP server in the ecosystem – browse by category, search across every tool they expose, or deploy a hosted connector in one click.

How do I add an MCP server to Cursor?

Open Cursor Settings → Features → MCP → + Add New MCP Server, or edit ~/.cursor/mcp.json directly. Each server takes a command, an args array, and an optional env map – for example, command: "npx" with args: ["-y", "@example/mcp-server"].
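Putting that together, a minimal ~/.cursor/mcp.json looks like this (the server name, package, and environment variable are placeholders):

```json
{
  "mcpServers": {
    "example-server": {
      "command": "npx",
      "args": ["-y", "@example/mcp-server"],
      "env": {
        "EXAMPLE_API_KEY": "your-key-here"
      }
    }
  }
}
```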

Pick any server from the Glama registry – each server page shows the exact JSON block to paste, pre-filled for Cursor, Claude Desktop, Claude Code, and VS Code. To test a remote server before adding it, use the Glama MCP Inspector – no install required.

How do I add an MCP server to Claude Desktop or Claude Code?

In Claude Desktop, open Settings → Developer → Edit Config – that opens claude_desktop_config.json. Add your server under the mcpServers key using the same command/args/env structure Cursor uses, then quit and relaunch Claude Desktop.
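A minimal claude_desktop_config.json entry follows the same shape (the server name and package are placeholders):

```json
{
  "mcpServers": {
    "example-server": {
      "command": "npx",
      "args": ["-y", "@example/mcp-server"]
    }
  }
}
```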

In Claude Code, run claude mcp add <name> <command> from your terminal – it writes the entry to your project's MCP config directly.

Every server on the Glama registry ships with a pre-filled config block for both clients – copy it, paste it, done. To test a server remotely before wiring it up, open the Glama MCP Inspector and point it at the server URL.

Why won't my MCP server start?

Three causes account for almost every "MCP server won't connect" report:

  • Missing -y flag – npx refuses to auto-install without it. Change "args": ["@example/mcp-server"] to "args": ["-y", "@example/mcp-server"].
  • Windows path escaping – inside claude_desktop_config.json, use forward slashes (C:/Users/...) or doubled backslashes. Single backslashes break JSON parsing.
  • spawn ENOENT – the client can't find the binary. Give command an absolute path, or make sure the binary is on PATH for the shell the client launches from – Claude Desktop launches without your login shell's PATH.
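A Windows config entry that avoids all three pitfalls – -y present, forward slashes in paths, and an absolute path for the command (the paths and package name are illustrative):

```json
{
  "mcpServers": {
    "example-server": {
      "command": "C:/Program Files/nodejs/npx.cmd",
      "args": ["-y", "@example/mcp-server", "C:/Users/me/projects"]
    }
  }
}
```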

If the handshake completes but individual tool calls fail, open the Glama MCP Inspector and point it at the server URL – it shows raw JSON-RPC requests and responses so you can see exactly where the error originates.

What is the Model Context Protocol (MCP)?

MCP is an open-source standard introduced by Anthropic in November 2024 that standardizes how AI applications connect to external tools and data sources. Think of it as USB-C for AI – one protocol replaces custom integrations between every AI app and every tool.

Before MCP, every AI application needed custom code to talk to every tool – an N×M integration problem. MCP reduces it to N+M: each app implements MCP once, each tool implements MCP once.

An MCP setup has three roles:

  • The host – the AI application (Claude Desktop, ChatGPT, Cursor, VS Code)
  • A client – the host's internal connection to one specific server
  • An MCP server – exposes three primitives to the client:
    • tools – functions the model can call (like "send email" or "query Postgres")
    • resources – data the model can read (like a config file or database schema)
    • prompts – templated messages the user can pick from

Servers speak to clients over stdio, Server-Sent Events, or the modern Streamable HTTP transport using JSON-RPC 2.0.
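Concretely, a tool call is one JSON-RPC 2.0 request/response pair. Here is a request for a hypothetical send_email tool (the tool name and arguments are illustrative):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "send_email",
    "arguments": { "to": "alice@example.com", "subject": "Hello" }
  }
}
```

The server replies with a result containing a content array of text, image, or other blocks, plus an isError flag indicating whether the tool itself failed.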

Glama sits at the infrastructure layer. We index every MCP server and hosted connector in the ecosystem, search every tool they expose, and proxy every call through a gateway that logs inputs and outputs, enforces per-tool access control, and manages OAuth credentials – so your agents get the MCP ecosystem plus production-grade observability.

What is an MCP Gateway?

An MCP Gateway is a reverse proxy that sits between AI clients and MCP servers. It appears as an MCP server to the client while acting as an MCP client to the backend.

Without a gateway, every agent manages its own connections to every server – a tangled point-to-point mess that breaks at enterprise scale. A gateway consolidates all of that into one control plane that:

  • Authenticates – one identity in front of every connected server
  • Injects credentials – OAuth tokens and API keys stored once, refreshed automatically
  • Controls tool access – enable or disable individual tools per connection
  • Manages sessions – Mcp-Session-Id tracking across reconnects
  • Logs every call – full JSON-RPC audit trail

Glama's MCP Gateway turns the open MCP ecosystem into a production-grade control plane – every connector in the registry is fronted by the gateway, so every call your agents make is visible, auditable, and revocable.

What is an MCP Proxy, and how is it different from an MCP Gateway?

An MCP Proxy is a thin pass-through – it takes a JSON-RPC request from a client, forwards it to an upstream MCP server, and returns the response. Nothing else.

An MCP Gateway includes a proxy and adds the control plane on top:

  • Access control – enable or disable individual tools per connection
  • Credential management – encrypted OAuth storage and automatic token refresh
  • Session lifecycle – Mcp-Session-Id tracking across reconnects
  • Usage analytics – aggregated tool call counts, latency, cost attribution
  • Full call logging – every JSON-RPC message persisted for audit

Use a proxy when you need basic routing for a single upstream server. Use a gateway when you are running MCP at enterprise scale and need governance. Glama is a full gateway – not a roll-your-own routing layer.

What are MCP connectors?

An MCP connector is a pre-configured MCP server you can plug into an AI client in one step. In Claude, ChatGPT, and other clients, any remote MCP server a user has added to their account shows up as a "connector".

A Glama connector goes further. It adds hosting, managed credentials, and per-tool access control on top of the underlying server – so you don't have to run the server, store the tokens, or touch any config. Paste the Glama URL into Claude, ChatGPT, or Cursor, and the Glama Gateway does the rest.

What is MCP hosting?

MCP hosting is a managed service that runs an MCP server for you on infrastructure you don't have to provision. You get a public URL with TLS, OAuth and credential management, health checks, monitoring, and scaling – without running the server yourself.

Most MCP hosting platforms – Cloudflare Workers, Google Cloud Run, Azure Functions, AWS Bedrock AgentCore, Heroku – require you to bring your own server code. You write the MCP server, they run it.

Glama is different. We host open-source MCP servers from the registry directly: pick a server someone else already published, click deploy, and get a streamable-http endpoint fronted by the Glama Gateway – without writing a line of code yourself. Full call logging, per-tool access control, and managed OAuth credentials come out of the box.

How does Glama's MCP Gateway log tool calls and control tool access?

Every JSON-RPC message – request, response, and SSE server event – is persisted to Glama's database with the full payload, including:

  • tool name
  • Input arguments
  • Output result
  • Timestamps and session IDs

Call logs are scoped to the connection profile owner – workspace members with the right role can view shared logs.

Per-tool access control is a per-connection on/off flag. Disable a tool and the gateway returns a JSON-RPC error to the client without ever forwarding the request upstream – the upstream server never knows the call was attempted.
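Assuming standard JSON-RPC error framing, a gated call comes back to the client as something like this (the exact code, message, and tool name are illustrative, not Glama's actual wire format):

```json
{
  "jsonrpc": "2.0",
  "id": 7,
  "error": {
    "code": -32602,
    "message": "Tool 'send_email' is disabled for this connection"
  }
}
```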

OAuth credentials are stored encrypted and automatically refreshed, so your client never holds the tokens. Revoking access is one click – no credential rotation required on the client side.

Heads up: the gateway captures full JSON-RPC payloads, so tool arguments containing sensitive data (API keys, PII, customer data) are visible in the audit log. For sensitive workloads, mask those arguments on the client side before calling.

Where are MCP credentials and call logs stored?

Credentials. Glama stores OAuth access tokens, refresh tokens, and API keys encrypted with AES-256-GCM – each value gets its own random 16-byte IV and authentication tag, and the 256-bit key is held server-side only. They are decrypted in memory only at the moment of making an upstream request, and automatically refreshed before expiration. Your client never sees the raw credentials – only the Glama Gateway does.

Call logs. Full JSON-RPC payloads – including tool name, input arguments, and upstream output – are stored per connection profile so you can audit exactly what your agents did, with timestamps and session IDs for traceability.

How do I test or debug an MCP server?

Use the Glama MCP Inspector – an in-browser tool that connects to any MCP server URL and lets you:

  • List tools, resources, and prompts
  • Call them with structured inputs
  • See raw JSON-RPC responses
  • Handle OAuth flows, bearer tokens, and custom headers
  • Exercise advanced spec features – tasks, elicitations, sampling, progress notifications, audio, and images

All state is encoded in the URL so you can share or bookmark a debug session. No install, no login, no local MCP client required – paste a URL, click inspect, and you're debugging.

The official reference inspector (@modelcontextprotocol/inspector) is an npm-installed local tool. Glama's is web-based – it works against remote MCP servers directly, and requests go from your browser to the MCP server without Glama ever logging them.

Can I search for a specific MCP tool, not just a server?

Yes. When you know what capability you need but don't know which server provides it, tool-level search gets you there in one query.

Glama indexes every tool exposed by every server in its registry – names, descriptions, input schemas, and the MCP annotation hints (readOnlyHint, destructiveHint, idempotentHint) that tell you whether a tool is safe to run in automated agent loops. You can search for capabilities like:

  • query Postgres – find every database-query tool
  • send email – find every email-sending tool
  • generate Figma component – find every Figma-integration tool

Once you've found a matching tool, you can either install the underlying server yourself or plug in the Glama-hosted connector for that server in one click. Few other MCP directories index at the tool level.

How do I add a Glama connector to Claude, ChatGPT, or Cursor?

A Glama connector URL works across every MCP-compatible client. Depending on your client:

  • Claude – Customize → Connectors → click + Add custom connector
  • ChatGPT – Settings → Apps & Connectors → Create (Developer Mode must be enabled)
  • Cursor – Settings → Features → MCP → + Add New MCP Server, or edit ~/.cursor/mcp.json
  • VS Code – Command Palette → MCP: Add Server, or edit .vscode/mcp.json in your workspace

For other MCP-compatible clients (Windsurf, Zed, JetBrains IDEs, Replit, …), setup is client-specific – check your client's MCP configuration docs. Any client that speaks MCP will accept a Glama connector URL – one connector, every agent.

How do I submit an MCP server?

Submit open-source MCP servers to the Glama registry straight from a GitHub repository. On the servers page, click Add MCP Server and fill in:

  • The GitHub repository URL for your server
  • A display name and short description

Glama runs automated quality checks (license detection, security scan, health test) during indexing. Most submissions pass automatically within minutes and become discoverable through Glama's search, category pages, and recommendation feeds.

Control how your server is indexed by adding a glama.json metadata file to your repo – it lets you set the display name, description, category, environment variables, and build spec. Servers must be on GitHub today; for other sources, reach out on Discord.

How do I submit an MCP connector?

Connectors are remote MCP servers you've already deployed somewhere with a public endpoint. On the connectors page, click Add MCP Server → Connector and provide:

  • A name and short description
  • The server URL – must be HTTPS and speak the streamable-http transport
  • Optional private test credentials (API keys, OAuth details) so Glama can verify the connector is reachable

If your server implements OAuth 2.1 dynamic client registration (RFC 7591), you can skip the test credentials – Glama will register automatically.
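Under RFC 7591, registration is a single POST of client metadata to the server's registration endpoint. A minimal request body looks like this (the client name and redirect URI are placeholders):

```json
{
  "client_name": "Glama Gateway",
  "redirect_uris": ["https://example.com/oauth/callback"],
  "grant_types": ["authorization_code", "refresh_token"],
  "token_endpoint_auth_method": "none"
}
```

The server responds with a generated client_id (and optionally a client_secret), which the gateway then uses for the standard authorization flow.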

Only healthy connectors are indexed for search. Unhealthy connectors stay in pending state until they become reachable. Submitted connectors are public by default and discoverable immediately once healthy. You can mark a connector as deprecated later if you're retiring the upstream server – users browsing Glama will see a notice.

Which AI clients support MCP?

The following AI clients support MCP natively or via adapter:

  • Claude Desktop, Claude Code
  • ChatGPT
  • Cursor, Windsurf, Zed
  • VS Code (with the MCP extension)
  • JetBrains IDEs
  • Replit
  • Sourcegraph (via Cody)

Glama tracks every MCP-compatible client with its supported features listed in the client directory.

How is Glama different from the official MCP registry?

The official MCP Registry is a vendor-neutral index of MCP server metadata maintained by the MCP steering group – the canonical source of truth for publicly published servers.

Glama builds on top of it with much deeper per-connector data and a full control plane:

  • Rich metadata on every connector – health checks, quality scores, security audits, tool schemas with annotations, usage telemetry, license info, and maintainer notes
  • One-click hosting on managed infrastructure
  • Full observability and control over every call – JSON-RPC logging, per-tool access control, managed OAuth credentials, and usage analytics

Use the official registry for vendor-neutral metadata. Use Glama when you need depth, observability, and control over production MCP traffic.

Browse MCP servers by category

86 curated categories spanning databases, developer tools, agents, and more.

See all 86 categories →