MCP vs API

Written by Frank Fiegel (@punkpeye).

Tags: mcp, ai, openapi

  1. The HTTP API Problem
  2. MCP: A Wire Protocol, Not Documentation
  3. Why Not Just Use OpenAPI?
  4. Five Fundamental Differences
     • Runtime Discovery vs Static Specs
     • Deterministic Execution vs LLM-Generated Calls
     • Bidirectional Communication
     • Single-Request Human Tasks
     • Local-First by Design
  5. The Training Advantage
  6. They're Layers, Not Competitors
  7. Real-World Example
  8. The Bottom Line
  9. Bonus: MCP vs API video
  10. Bonus: Existing Reddit discussions

                              Every week a new thread emerges on Reddit asking about the difference between MCP and API. I've tried summarizing everything that's been said about MCP vs API in a single post (and a single table).

| Aspect | Traditional APIs (REST/GraphQL) | Model Context Protocol (MCP) |
| --- | --- | --- |
| What it is | Interface styles (REST, GraphQL) with optional spec formats (OpenAPI, GraphQL SDL) | Standardized protocol with enforced message structure |
| Designed for | Human developers writing code | AI agents making decisions |
| Data location | REST: path, headers, query params, body (multiple formats) | Single JSON input/output per tool |
| Discovery | Static docs; regenerate SDKs for changes [1] [2] | Runtime introspection (tools/list) |
| Execution | LLM generates HTTP requests (error-prone) | LLM picks tool; deterministic code runs |
| Direction | Typically client-initiated; server-push exists but is not standardized | Bidirectional as a first-class feature |
| Local access | Requires port, auth, CORS setup | Native stdio support for desktop tools |
| Training target | Impractical at scale due to heterogeneity | Single protocol enables model fine-tuning |

                              I am making several broad generalizations to keep the article length reasonable.

                              I will continue to update this article with feedback from the community. If you have any suggestions, please email me at frank@glama.ai.

                              The HTTP API Problem

                              HTTP APIs suffer from combinatorial chaos. To send data to an endpoint, you might encode it in:

                              • URL path (/users/123)
                              • Request headers (X-User-Id: 123)
                              • Query parameters (?userId=123)
                              • Request body (JSON, XML, form-encoded, CSV)

                              OpenAPI/Swagger documents these variations, but as a specification format, it describes existing patterns rather than enforcing consistency. Building automated tools to reliably use arbitrary APIs remains hard because HTTP wasn't designed for this—it was the only cross-platform, firewall-friendly transport universally available from browsers.
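
To make the chaos concrete, here is the same user ID sent to four hypothetical endpoints in four different places; every API picks its own mix, and a client has to learn each one:

// Four equally valid ways an HTTP API might accept the same user ID
// (endpoints are hypothetical; shown with the Fetch API):
await fetch('https://api.example.com/users/123'); // URL path
await fetch('https://api.example.com/users', {
  headers: { 'X-User-Id': '123' }, // request header
});
await fetch('https://api.example.com/users?userId=123'); // query parameter
await fetch('https://api.example.com/users/lookup', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ userId: 123 }), // request body
});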

                              MCP: A Wire Protocol, Not Documentation

                              Model Context Protocol (MCP) isn't another API standard—it's a wire protocol that enforces consistency. While OpenAPI documents existing interfaces with their variations, MCP mandates specific patterns: JSON-RPC 2.0 transport, single input schema per tool, deterministic execution.

                              Key architecture:

                              • Transport: stdio (local) or streamable HTTP
                              • Discovery: tools/list, resources/list expose capabilities at runtime
                              • Primitives: Tools (actions), Resources (read-only data), Prompts (templates)

There is more to the protocol than the above; refer to the MCP specification for a complete overview.
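
To make the wire-protocol point concrete, here is roughly what discovery and execution look like on the wire. The messages are abridged and the tool is hypothetical; the exact fields are defined in the specification:

// Client asks a server what it can do (JSON-RPC 2.0 request):
{ "jsonrpc": "2.0", "id": 1, "method": "tools/list" }

// Server responds with every tool and its JSON Schema input:
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "tools": [
      {
        "name": "search_prs",
        "description": "Search pull requests in a repository",
        "inputSchema": {
          "type": "object",
          "properties": { "query": { "type": "string" } },
          "required": ["query"]
        }
      }
    ]
  }
}

// Client invokes a tool with a single JSON argument object:
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": { "name": "search_prs", "arguments": { "query": "security" } }
}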

                              Why Not Just Use OpenAPI?

                              The most common question: "Why not extend OpenAPI with AI-specific features?"

                              Three reasons:

                              1. OpenAPI describes; MCP prescribes. You can't fix inconsistency by documenting it better—you need enforcement at the protocol level.
                              2. Retrofitting fails at scale. OpenAPI would need to standardize transport, mandate single-location inputs, require specific schemas, add bidirectional primitives—essentially becoming a different protocol.
                              3. The ecosystem problem. Even if OpenAPI added these features tomorrow, millions of existing APIs wouldn't adopt them. MCP starts fresh with AI-first principles.

                              Five Fundamental Differences

                              1. Runtime Discovery vs Static Specs

                              API: Ship new client code when endpoints change
                              MCP: Agents query capabilities dynamically and adapt automatically

// MCP discovery - works with any server
client.request('tools/list') // Returns all available tools with schemas
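
As a slightly fuller sketch, this is roughly how discovery looks with the official TypeScript SDK (@modelcontextprotocol/sdk); treat the import paths and method names as approximate and check the SDK documentation for the current API:

import { Client } from '@modelcontextprotocol/sdk/client/index.js';
import { StdioClientTransport } from '@modelcontextprotocol/sdk/client/stdio.js';

// Spawn a local MCP server as a child process and talk to it over stdio.
const transport = new StdioClientTransport({ command: 'node', args: ['my-server.js'] });
const client = new Client({ name: 'example-client', version: '1.0.0' }, { capabilities: {} });
await client.connect(transport);

// Runtime discovery: no SDK regeneration; the server tells us what it offers.
const { tools } = await client.listTools();
for (const tool of tools) {
  console.log(tool.name, tool.inputSchema);
}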

                              2. Deterministic Execution vs LLM-Generated Calls

                              API: LLM writes the HTTP request → hallucinated paths, wrong parameters
                              MCP: LLM picks which tool → wrapped code executes deterministically

                              This distinction is critical for production safety. With MCP, you can test, sanitize inputs, and handle errors in actual code, not hope the LLM formats requests correctly.
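
A minimal sketch of that split, with hypothetical names: the model only chooses a tool and supplies arguments, and ordinary code you can test does the validation and execution:

// The model's entire contribution: a tool name plus a JSON argument object.
const modelChoice = { name: 'get_user', arguments: { userId: 123 } };

// Everything below is deterministic, testable application code.
async function callGetUser(args: { userId: unknown }): Promise<unknown> {
  const userId = Number(args.userId);
  if (!Number.isInteger(userId) || userId <= 0) {
    throw new Error('userId must be a positive integer'); // sanitized before anything touches the network
  }
  // The HTTP request is written by us once, not generated by the LLM per call.
  const response = await fetch(`https://api.example.com/users/${userId}`);
  if (!response.ok) {
    throw new Error(`Upstream error: ${response.status}`);
  }
  return response.json();
}

// Dispatch on the model's choice; unknown tools are rejected rather than guessed at.
if (modelChoice.name === 'get_user') {
  await callGetUser(modelChoice.arguments);
} else {
  throw new Error(`Unknown tool: ${modelChoice.name}`);
}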

                              3. Bidirectional Communication

                              API: Server-push exists (WebSockets, SSE, GraphQL subscriptions) but lacks standardization
MCP: Bidirectional communication as a first-class feature (sketched below):

                              • Request LLM completions from server
                              • Ask users for input (elicitation)
                              • Push progress notifications
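
For example, a server can send its own JSON-RPC traffic back to the client mid-task. The messages below are abridged from the specification: the first asks the client's LLM for a completion (sampling), the second reports progress on a long-running operation:

// Server → client: ask the host's LLM for a completion (sampling).
{
  "jsonrpc": "2.0",
  "id": 7,
  "method": "sampling/createMessage",
  "params": {
    "messages": [
      { "role": "user", "content": { "type": "text", "text": "Summarize these release notes" } }
    ],
    "maxTokens": 300
  }
}

// Server → client: progress notification for a long-running tool call.
{
  "jsonrpc": "2.0",
  "method": "notifications/progress",
  "params": { "progressToken": "task-42", "progress": 50, "total": 100 }
}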

                              4. Single-Request Human Tasks

                              REST APIs fragment human tasks across endpoints. Creating a calendar event might require:

                              1. POST /events (create)
                              2. GET /conflicts (check)
                              3. POST /invitations (notify)

                              MCP tools map to complete workflows. One tool, one human task.
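
A hypothetical tool definition for that workflow might advertise one schema covering the whole task, so the agent makes a single call and the server handles conflict checks and invitations internally:

// Hypothetical MCP tool: one human task, one call.
{
  "name": "schedule_event",
  "description": "Create a calendar event, check for conflicts, and invite attendees",
  "inputSchema": {
    "type": "object",
    "properties": {
      "title": { "type": "string" },
      "start": { "type": "string", "format": "date-time" },
      "end": { "type": "string", "format": "date-time" },
      "attendees": { "type": "array", "items": { "type": "string", "format": "email" } }
    },
    "required": ["title", "start", "end"]
  }
}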

                              5. Local-First by Design

                              API: Requires HTTP server (port binding, CORS, auth headers)
                              MCP: Can run as local process via stdio—no network layer needed

                              Why this matters: When MCP servers run locally via stdio, they inherit the host process's permissions.

                              This enables:

                              • Direct filesystem access (read/write files)
                              • Terminal command execution
                              • System-level operations

A local HTTP server could provide the same capabilities. However, I think the fact that MCP led with stdio transport planted the idea that MCP servers are meant to be run as local services, which is not how we typically think of APIs.
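
As an illustration, this is the shape of configuration that MCP hosts such as Claude Desktop use to launch a local server over stdio; the filesystem server and path here are examples, so adjust the command to whatever server you run:

// Host configuration: the host spawns the server as a child process and
// speaks MCP over its stdin/stdout; no port, no CORS, no auth headers.
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/Users/me/Documents"]
    }
  }
}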

                              The Training Advantage

                              MCP's standardization creates a future opportunity: models could be trained on a single, consistent protocol rather than thousands of API variations. While models today use MCP through existing function-calling capabilities, the protocol's uniformity offers immediate practical benefits:

                              Consistent patterns across all servers:

                              • Discovery: tools/list, resources/list, prompts/list
                              • Execution: tools/call with single JSON argument object
                              • Errors: Standard JSON-RPC format with numeric codes

                              Reduced cognitive load for models:

// Every MCP tool follows the same pattern:
{
  "method": "tools/call",
  "params": {
    "name": "github.search_prs",
    "arguments": { "query": "security", "state": "open" }
  }
}

// Versus REST APIs with endless variations:
// GET /api/v2/search?q=security&type=pr
// POST /graphql {"query": "{ search(query: \"security\") { ... } }"}
// GET /repos/owner/repo/pulls?state=open&search=security

                              This standardization means models need to learn one calling convention instead of inferring patterns from documentation. As MCP adoption grows, future models could be specifically optimized for the protocol, similar to how models today are trained on function-calling formats.

                              They're Layers, Not Competitors

                              Most MCP servers wrap existing APIs:

                              [AI Agent] ⟷ MCP Client ⟷ MCP Server ⟷ REST API ⟷ Service

                              The mcp-github server translates repository/list into GitHub REST calls. You keep battle-tested infrastructure while adding AI-friendly ergonomics.
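
A sketch of that layering with a hypothetical tool handler: the MCP tool is a thin, AI-friendly wrapper, and the existing REST API underneath stays untouched:

// Hypothetical MCP tool handler that wraps the existing GitHub REST API.
async function listRepositories(args: { org: string }) {
  // The wrapper owns auth and headers (pagination omitted for brevity); the REST API stays as-is.
  const response = await fetch(`https://api.github.com/orgs/${args.org}/repos?per_page=100`, {
    headers: {
      Accept: 'application/vnd.github+json',
      Authorization: `Bearer ${process.env.GITHUB_TOKEN}`,
    },
  });
  if (!response.ok) {
    throw new Error(`GitHub API error: ${response.status}`);
  }
  // Return a structured result that maps cleanly onto an MCP tool response.
  const repos: Array<{ full_name: string; description: string | null }> = await response.json();
  return repos.map((r) => ({ name: r.full_name, description: r.description }));
}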

                              Real-World Example

                              Consider a task: "Find all pull requests mentioning security issues and create a summary report."

                              With OpenAPI/REST:

                              1. LLM reads API docs, generates: GET /repos/{owner}/{repo}/pulls?state=all
                              2. Hopes it formatted the request correctly
                              3. Parses response, generates: GET /repos/{owner}/{repo}/pulls/{number}
                              4. Repeats for each PR (rate limiting issues)
                              5. Generates search queries for comments
                              6. Assembles report

                              With MCP:

                              1. LLM calls: github.search_issues_and_prs({query: "security", type: "pr"})
                              2. Deterministic code handles pagination, rate limits, error retry
                              3. Returns structured data
                              4. LLM focuses on analysis, not API mechanics
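
Concretely, step 1 above is a single tools/call request; the tool name comes from the example and the response fields are illustrative:

// Step 1 on the wire:
{
  "jsonrpc": "2.0",
  "id": 3,
  "method": "tools/call",
  "params": {
    "name": "github.search_issues_and_prs",
    "arguments": { "query": "security", "type": "pr" }
  }
}

// The server's deterministic code pages through results and returns structured data:
{
  "jsonrpc": "2.0",
  "id": 3,
  "result": {
    "content": [
      { "type": "text", "text": "[{\"number\": 101, \"title\": \"Fix security header handling\", \"state\": \"open\"}]" }
    ]
  }
}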

                              The Bottom Line

                              HTTP APIs evolved to serve human developers and browser-based applications, not AI agents. MCP addresses AI-specific requirements from the ground up: runtime discovery, deterministic execution, and bidirectional communication.

                              For AI-first applications, MCP provides structural advantages—local execution, server-initiated flows, and guaranteed tool reliability—that would require significant workarounds in traditional API architectures. The practical path forward involves using both: maintaining APIs for human developers while adding MCP for AI agent integration.

                              Bonus: MCP vs API video

During my research, I found this video to be one of the easiest ways to digest the differences between MCP and API.

                              Bonus: Existing Reddit discussions

                              During my research, I found these Reddit discussions to be helpful in understanding the differences between MCP and API.

                              Footnotes

1. GraphQL offers schema introspection, but it lacks task-level descriptions and JSON-schema-style validation, so SDKs still need to be regenerated for new fields.

                              2. OpenAPI 3.1+ supports runtime discovery through the OpenAPI document endpoint. The key difference is that MCP mandates runtime discovery while OpenAPI makes it optional.

                              Written by Frank Fiegel (@punkpeye)