MCP-FE (Model Context Protocol - Frontend Edge)
MCP-FE turns the browser runtime into an active, queryable node in the MCP ecosystem. Instead of continuously pushing analytics-style data, your frontend exposes on-demand MCP tools so an AI agent can ask questions about what just happened and what the UI state is right now.
It bridges the gap between AI agents (e.g., Claude or Cursor) and the real-time state of your frontend application using:
a browser worker (SharedWorker / ServiceWorker) that stores events and routes tool calls, and
a Node.js proxy that exposes an MCP endpoint to remote agents.
Why MCP-FE?
AI agents are often runtime-blind: they can read your code, but they can’t see the current DOM, the state of a Redux/Zustand store, or the exact interaction sequence that led to an error.
MCP-FE exposes the browser runtime as a first-class MCP Server so that context is retrievable on demand via tool calls.
Quick Start (Local Live Demo)
This monorepo includes a small demo frontend app and the MCP proxy. Follow the steps below to start a local live demo on your machine.
1. Install dependencies (e.g., `npm install` from the repo root).
2. Start the demo app + MCP Proxy.
3. Open the demo frontend: navigate to http://localhost:4200 (or the port shown in your terminal). The browser worker will automatically register and connect.
4. Connect an AI agent.
Point your MCP-compatible agent to:
MCP endpoint (HTTP):
http://localhost:3001/mcp
Note: the example app connects the worker to the proxy via WebSocket (e.g., ws://localhost:3001).
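For reference, here is a minimal agent-side connection sketch using the official MCP TypeScript SDK. The Streamable HTTP transport is an assumption on our part; if your proxy version only exposes SSE, swap in the SDK's SSE transport instead.

```ts
// Sketch only: assumes the proxy accepts Streamable HTTP connections at /mcp.
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

const client = new Client({ name: "mcp-fe-demo-agent", version: "0.1.0" });
const transport = new StreamableHTTPClientTransport(new URL("http://localhost:3001/mcp"));

await client.connect(transport);

// List the tools the browser worker currently exposes through the proxy.
const { tools } = await client.listTools();
console.log(tools.map((t) => t.name));
```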
How It Works
Traditional MCP integrations are backend-centric. Frontends usually push events continuously, whether anyone needs them or not.
MCP-FE inverts the flow:
Pull, not push: the frontend does not stream context by default.
Worker-based edge: a browser `SharedWorker` (preferred) or `ServiceWorker` stores event history (IndexedDB) and coordinates tool calls.
Proxy for remote agents: a Node.js proxy maintains a WebSocket connection to the worker and exposes MCP tools to agents.
Dynamic tools: register tools from application code; handlers run in the main thread with controlled access to state/DOM/imports.
Key Concepts
MCP Workers: SharedWorker vs ServiceWorker
SharedWorker (preferred):
One shared instance is available to all same-origin windows/iframes.
Good for multi-tab apps and when you want a single MCP edge connection per browser.
ServiceWorker (fallback):
Runs in background, lifecycle managed by the browser.
Useful when SharedWorker is not supported.
WorkerClient in this repo prefers SharedWorker and automatically falls back to ServiceWorker. It also supports passing an explicit ServiceWorkerRegistration to use a previously registered service worker.
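For illustration only (the constructor options below are assumptions, not the package's documented API), reusing an existing registration might look roughly like this:

```ts
// Sketch only: option names are assumed; check the @mcp-fe/mcp-worker README for the real API.
import { WorkerClient } from "@mcp-fe/mcp-worker";

// Reuse a service worker the app has already registered (hypothetical script path).
const registration = await navigator.serviceWorker.register("/mcp-worker.js");

const client = new WorkerClient({
  proxyUrl: "ws://localhost:3001", // assumed option: the proxy's WebSocket endpoint
  registration,                    // skip the SharedWorker path and use this ServiceWorker
});
await client.connect();
```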
Worker as an MCP Edge Server
The Shared/Service Worker acts as a lightweight edge node that enables you to:
Collect UI-level event history (navigation, interactions, errors)
Store events in IndexedDB for later retrieval
Expose data and actions via MCP tools
Maintain a persistent WebSocket connection to the proxy
Register custom tools dynamically with handlers running in the main thread (full browser API access)
Server-Driven Pull Model (Tool Calls)
The MCP worker never sends context proactively to the backend. Context is shared only when an AI agent explicitly requests it by calling a tool.
🛡️ Security by Design
Unlike traditional analytics or logging tools that stream data to third-party servers, MCP-FE is passive and restrictive:
Explicit Exposure Only: The AI agent has zero "magic" access to your app. It can only see data or trigger actions that you explicitly expose via `registerTool` or `useMCPTool`.
Zero-Stream Policy: No data is ever pushed automatically. Context transfer only happens when an AI agent triggers a specific tool call.
Local Execution: Tool handlers run in your application's context, allowing you to implement custom authorization, filtering, or scrubbing before returning data to the agent.
Privacy First: Sensitive fields (PII, passwords, tokens) never leave the client unless the developer intentionally includes them in a tool's return payload.
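As a sketch of that last point (the tool-registration call and the user object below are illustrative assumptions, not the library's documented interface), a handler can whitelist fields before anything is returned:

```ts
// Sketch only: registerTool's shape and the user object are illustrative assumptions.
import { WorkerClient } from "@mcp-fe/mcp-worker";

const client = new WorkerClient({ proxyUrl: "ws://localhost:3001" });
await client.connect();

// Hypothetical application state containing sensitive fields.
const currentUser = {
  id: "u_42",
  plan: "pro",
  email: "jane@example.com", // PII: intentionally never returned below
  authToken: "secret-token", // credential: intentionally never returned below
};

client.registerTool({
  name: "get_current_user",
  description: "Returns a redacted view of the signed-in user",
  handler: async () => ({
    // Only explicitly whitelisted, non-sensitive fields leave the client.
    id: currentUser.id,
    plan: currentUser.plan,
  }),
});
```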
🏗️ Architecture
The MCP-FE architecture is built on three core layers designed to keep the main application thread responsive while providing a persistent link to AI agents.
1. The Proxy Server (Node.js)
The Proxy acts as the gateway. It speaks the standard MCP Protocol towards the AI agent (via HTTP/SSE) and maintains a persistent WebSocket connection to the browser.
Role: It bridges the gap between the internet and the user's local browser session.
Security: Handles Bearer token authentication to ensure only authorized agents can talk to the worker.
2. The MCP Worker (SharedWorker / ServiceWorker)
This is the "Brain" on the Frontend Edge. It runs in its own thread, meaning it doesn't slow down your UI.
Event Logging: Automatically captures interactions and errors into IndexedDB.
Routing: When a tool call comes from the Agent, the Worker routes it to the correct tab or the Main Thread.
Resilience: Implements a Ping-Pong mechanism to keep the WebSocket alive even when the user isn't actively interacting with the page.
3. The Main Thread (Your App)
This is where your React/Vue/JS code lives.
Dynamic Tools: Using hooks like `useMCPTool`, your components register handlers that have direct access to the live DOM, state, and `localStorage` (see the sketch below).
Zero-Push: It only executes logic and sends data when the Worker explicitly asks for it (the Pull Model).
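As a rough illustration (the hook signature below is an assumption; the `@mcp-fe/react-tools` README has the actual API), a component-scoped tool could look like this:

```tsx
// Sketch only: the useMCPTool signature is assumed, not copied from the package docs.
import { useMCPTool } from "@mcp-fe/react-tools";

export function CartDebugBridge({ items }: { items: { sku: string; qty: number }[] }) {
  // Registered when the component mounts, deregistered automatically on unmount.
  useMCPTool({
    name: "get_cart_items",
    description: "Returns the items currently in the shopping cart",
    handler: async () => ({ items }),
  });

  return null; // renders nothing; it only exposes context to the agent
}
```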
Packages
MCP-FE is delivered as a set of packages in this monorepo and can be consumed directly from your applications. For install instructions, APIs, and framework-specific examples, use the package READMEs:
| Package | What it’s for | Docs |
| --- | --- | --- |
| `@mcp-fe/mcp-worker` | Core: worker client + worker scripts + transport + dynamic tool registration | See the package README |
| `@mcp-fe/event-tracker` | Core (optional): framework-agnostic event tracking (navigation/interactions/errors) | See the package README |
| `@mcp-fe/react-event-tracker` | React (optional): drop-in hooks for automatic navigation/click/input tracking | See the package README |
| `@mcp-fe/react-tools` | React (optional): hooks for registering tools with component lifecycle management | See the package README |
| `mcp-server` | Proxy: Node.js MCP server that bridges remote agents ↔ browser worker | See the package README |
Using MCP-FE in Your App
You can adopt MCP-FE incrementally. The smallest useful setup is:
1. Run the proxy (`mcp-server`) somewhere reachable by your users’ browsers.
2. Initialize the worker client in your app and point it at the proxy.
3. Optionally add event tracking and/or custom tools.
Minimal frontend setup:
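The sketch below assumes a `proxyUrl` option and a `connect()` method on `WorkerClient`; the real option names are documented in the `@mcp-fe/mcp-worker` README.

```ts
// Sketch only: option and method names are assumptions based on this README's description.
import { WorkerClient } from "@mcp-fe/mcp-worker";

// Point the browser worker at the proxy you deployed in step 1 (placeholder URL).
const client = new WorkerClient({ proxyUrl: "wss://mcp-proxy.example.com" });

await client.connect();
```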
Typical Integration Paths
Minimal (custom tools only): `@mcp-fe/mcp-worker` + your own `registerTool(...)` handlers.
Observability (events + queries): add `@mcp-fe/event-tracker` or `@mcp-fe/react-event-tracker`.
React-first: `@mcp-fe/mcp-worker` + `@mcp-fe/react-tools` + `@mcp-fe/react-event-tracker`.
Minimal Example (Worker + Tool)
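A hedged sketch of what this could look like, assuming a `registerTool` method on the worker client (the exact signature lives in the package docs):

```ts
// Sketch only: registerTool's shape is assumed from this README, not from the package API docs.
import { WorkerClient } from "@mcp-fe/mcp-worker";

const client = new WorkerClient({ proxyUrl: "ws://localhost:3001" });
await client.connect();

// Expose a read-only snapshot of the current page to the agent.
client.registerTool({
  name: "get_page_summary",
  description: "Returns the current page title and route",
  handler: async () => ({
    title: document.title,
    path: window.location.pathname,
  }),
});
```

When an agent calls `get_page_summary` through the proxy, the worker routes the call to this handler on the main thread and relays the returned JSON back; nothing leaves the browser until that call happens.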
Summary
MCP-FE introduces a worker-based MCP edge server in the browser that enables:
server-driven context access (pull model),
minimal frontend-to-server traffic,
clean separation between UI, transport, and agent logic.
It’s a new frontend application of the Model Context Protocol, not a new protocol.
License
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
⚠️ Project Status: Experimental (PoC)
This project is currently a Proof of Concept. While the architecture is stable and demonstrates the power of Frontend MCP, it is not yet intended for high-stakes production environments.
Current focus:
Finalizing the SharedWorker/ServiceWorker fallback logic.
Refining the React hook lifecycle (auto-deregistration of tools).
Hardening the Proxy-to-Worker authentication flow.
Contributions and architectural discussions are welcome!
👨‍💻 Author
Michal Kopecký - Frontend engineer
I created MCP-FE to solve the "runtime-blindness" of current AI agents. By treating the browser as an active edge-node, we can provide agents with deep, real-time context without sacrificing user privacy or network performance.
Feel free to reach out for architectural discussions or collaboration!