# Custom MCP with Frontend-CLI Integration: Complete Research Guide
## Executive Summary
Building a custom MCP (Model Context Protocol) with frontend-CLI integration involves:
1. **Understanding MCP Prompts** - Predefined templates that the LLM invokes
2. **Transport Layer** - How CLI and frontend communicate (STDIO vs WebSocket)
3. **Prompt Invocation Flow** - How MCP servers expose prompts that CLIs trigger
4. **Architecture Pattern** - Replicating Gemini CLI's modular approach
---
## Part 1: How MCP Prompts Work Inside the CLI
### Core Concept: Prompts in MCP
**Prompts are NOT direct text inputs.** They're structured templates that MCP servers expose as capabilities. Think of them as "smart workflow templates."
#### Key Properties of MCP Prompts
```
Prompts in MCP:
├─ Accept dynamic arguments (customizable inputs)
├─ Include context from resources (files, databases, etc.)
├─ Chain multiple interactions (sequential operations)
├─ Guide specific workflows (predefined paths)
└─ Surface as UI elements (slash commands like /git-commit)
```
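The last bullet, prompts surfacing as slash commands, can be sketched as a tiny parser. The `/name key=value` syntax here is a hypothetical UI convention, not part of the MCP spec:

```typescript
// Parse a hypothetical slash-command syntax like:
//   /code-review language=python focus=security
// into the prompt name and argument map the MCP client will send.
function parseSlashCommand(
  input: string
): { name: string; args: Record<string, string> } | null {
  const match = input.match(/^\/([\w-]+)\s*(.*)$/);
  if (!match) return null; // not a slash command
  const [, name, rest] = match;
  const args: Record<string, string> = {};
  for (const pair of rest.split(/\s+/).filter(Boolean)) {
    const eq = pair.indexOf("=");
    if (eq > 0) args[pair.slice(0, eq)] = pair.slice(eq + 1);
  }
  return { name, args };
}
```

A CLI would run this first and fall back to free-text intent detection (Part 3) when it returns `null`.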
### Server-Side Prompt Implementation
Your MCP server defines available prompts:
```typescript
// Server registers prompts during initialization
const PROMPTS = {
"code-review": {
name: "code-review",
description: "Review code for security and quality",
arguments: [
{
name: "language",
description: "Programming language",
required: true
},
{
name: "focus",
description: "Focus area (security, performance, style)",
required: false
}
]
},
"git-commit": {
name: "git-commit",
description: "Generate a Git commit message",
arguments: [
{
name: "changes",
description: "Git diff or description of changes",
required: true
}
]
}
};
// Register handlers (the request schemas come from "@modelcontextprotocol/sdk/types.js")
server.setRequestHandler(ListPromptsRequestSchema, async () => {
return { prompts: Object.values(PROMPTS) };
});
server.setRequestHandler(GetPromptRequestSchema, async (request) => {
  const { name, arguments: args } = request.params;
  const prompt = PROMPTS[name];
  if (!prompt) throw new Error(`Unknown prompt: ${name}`);
  // Return formatted prompt with context
  return {
    messages: [
      {
        role: "user",
        content: {
          type: "text",
          text: `${prompt.description}\n\nArguments: ${JSON.stringify(args)}`
        }
      }
    ]
  };
});
```
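Because each prompt declares which arguments are required, the client (or server) can validate an argument map against the definition before issuing `prompts/get`. A minimal sketch, assuming the prompt shape defined above:

```typescript
interface PromptArgument {
  name: string;
  description: string;
  required?: boolean;
}

interface PromptDefinition {
  name: string;
  description: string;
  arguments: PromptArgument[];
}

// Returns the names of required arguments missing from `args`;
// an empty array means the prompts/get call is safe to send.
function missingRequiredArgs(
  prompt: PromptDefinition,
  args: Record<string, unknown>
): string[] {
  return prompt.arguments
    .filter((a) => a.required && !(a.name in args))
    .map((a) => a.name);
}
```

Running this check client-side gives the user an immediate error instead of a round-trip to the server.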
### Client-Side Prompt Invocation (CLI)
The CLI requests a prompt from the server:
```python
# Client-side (CLI)
class MCPClient:
async def get_prompt(self, server_name: str, prompt_name: str, arguments: dict):
"""Invoke a prompt from the MCP server"""
        # 1. Connect to server and negotiate capabilities
        # (self.client.session is a stand-in for however your client
        # manages per-server sessions, not a fixed SDK API)
        session = await self.client.session(server_name)
# 2. List available prompts (discovery)
prompts = await session.list_prompts()
# 3. Request specific prompt with arguments
prompt_messages = await session.get_prompt(
name=prompt_name,
arguments=arguments # Dynamic args passed here
)
# 4. Process returned prompt messages
return prompt_messages
    async def execute_prompt_workflow(self, user_query: str):
        """Full workflow: user input → prompt selection → LLM processing"""
        # Step 1: CLI receives user input (passed in as user_query)
        # Step 2: Analyze input to determine which prompt to use
        selected_prompt = self.analyze_input_for_prompt(user_query)
        # Step 3: Extract arguments from user input
        args = self.extract_arguments(user_query, selected_prompt)
        # Step 4: Fetch prompt template from MCP server
        prompt_messages = await self.get_prompt(
            server_name="my-server",
            prompt_name=selected_prompt,
            arguments=args
        )
        # Step 5: Send to LLM (Claude) with the prompt
        response = await self.llm.send_messages(prompt_messages + [
            {"role": "user", "content": user_query}
        ])
        # Step 6: Execute any tool calls the LLM makes
        return response
```
### Flow Diagram: How Prompts Are Invoked Inside the CLI
```
User Types: "review my Python code for security"
↓
[CLI Parser]
↓
Analyzes input → Selects "code-review" prompt
↓
Extracts arguments: {language: "python", focus: "security"}
↓
[MCP Client] calls server.get_prompt("code-review", arguments)
↓
[MCP Server] returns structured prompt template:
{
messages: [
{role: "user", content: "Review code for security and quality...\nArguments: {language, focus}"}
]
}
↓
[CLI] sends prompt template to Claude API
↓
Claude processes the prompt + user context
↓
Claude decides to call tools (if available)
↓
[CLI] executes tool calls via MCP server
↓
Returns results back to Claude
↓
Display final response to user
```
---
## Part 2: Transport Mechanisms (STDIO vs WebSocket)
### STDIO Transport (CLI-to-Server)
**How it works:**
```
Your CLI (Python/Node)
↓
stdin/stdout pipes
↓
MCP Server process
↓
JSON-RPC messages (newline-delimited)
```
**Message Format:**
```json
// CLI sends to STDIN:
{"jsonrpc":"2.0","id":1,"method":"prompts/get","params":{"name":"code-review","arguments":{"language":"python"}}}
// Server sends to STDOUT:
{"jsonrpc":"2.0","id":1,"result":{"messages":[{"role":"user","content":"..."}]}}
```
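The newline delimiting above is the whole framing protocol: each JSON-RPC message is one line of JSON, and the spec forbids embedded newlines inside a message. A sketch of the framing logic a client would apply to the server's stdout stream:

```typescript
// Serialize one JSON-RPC message as a single newline-terminated line
// (the STDIO transport forbids embedded newlines within a message).
function frameMessage(msg: object): string {
  return JSON.stringify(msg) + "\n";
}

// Accumulate raw stdout chunks and extract complete messages; the
// trailing partial line (if any) is carried over to the next chunk.
function splitFrames(
  buffer: string,
  chunk: string
): { messages: object[]; rest: string } {
  const lines = (buffer + chunk).split("\n");
  const rest = lines.pop() ?? ""; // incomplete trailing line
  const messages = lines
    .filter((l) => l.trim() !== "")
    .map((l) => JSON.parse(l));
  return { messages, rest };
}
```

The SDK's transports do this for you; it is shown here only to make the wire format concrete.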
**Pros:**
- No network overhead
- Direct process communication
- Built into CLI tools
- The pattern Gemini CLI itself relies on
**Cons:**
- One-to-one relationship (one CLI instance per server)
- Cannot expose to frontend directly
- Local-only communication
### WebSocket Transport (Frontend-to-Server)
**Architecture for frontend access:**
```
Frontend (React/Vue) [Browser]
↓
WebSocket connection
↓
WebSocket Bridge Server (Node.js on localhost:8021)
↓
STDIO-to-WebSocket proxy
↓
MCP Server process
```
**Available Solution: @mcp-b/websocket-bridge**
```bash
# Install
npm install @mcp-b/websocket-bridge
# Start bridge (auto-proxies STDIO servers)
npx @mcp-b/websocket-bridge
# Or with port specification
npx @mcp-b/websocket-bridge --port 8021
# Include MCP Inspector for debugging
npx @mcp-b/websocket-bridge --with-inspector
```
**Frontend Connection (JavaScript):**
```javascript
// React Component
import { useEffect, useState } from 'react';
export function MCPFrontend() {
const [response, setResponse] = useState('');
const [ws, setWs] = useState(null);
useEffect(() => {
// Connect to WebSocket bridge
const socket = new WebSocket('ws://localhost:8021');
socket.onopen = () => {
console.log('Connected to MCP server via WebSocket');
setWs(socket);
};
socket.onmessage = (event) => {
const data = JSON.parse(event.data);
setResponse(JSON.stringify(data.result, null, 2));
};
return () => socket.close();
}, []);
const invokePrompt = (promptName, args) => {
if (!ws) return;
// Send JSON-RPC request
const request = {
jsonrpc: '2.0',
      id: crypto.randomUUID(), // unique per request; safer than Math.random()
method: 'prompts/get',
params: {
name: promptName,
arguments: args
}
};
ws.send(JSON.stringify(request));
};
return (
<div>
<button onClick={() => invokePrompt('code-review', {language: 'python'})}>
Run Code Review Prompt
</button>
<pre>{response}</pre>
</div>
);
}
```
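The component above matches responses to requests only implicitly (whatever arrives last wins). A more robust pattern correlates each JSON-RPC response to its request by `id`, independent of the transport. A sketch, with `sendFn` standing in for `ws.send`:

```typescript
// Correlate JSON-RPC responses to their requests by id; `sendFn`
// abstracts the transport (e.g. ws.send.bind(ws)).
class JsonRpcCorrelator {
  nextId = 1;
  pending = new Map<number, (result: unknown) => void>();

  request(
    method: string,
    params: unknown,
    sendFn: (json: string) => void
  ): Promise<unknown> {
    const id = this.nextId++;
    const promise = new Promise<unknown>((resolve) =>
      this.pending.set(id, resolve)
    );
    sendFn(JSON.stringify({ jsonrpc: "2.0", id, method, params }));
    return promise;
  }

  // Feed every incoming message here (e.g. from ws.onmessage)
  handleMessage(json: string): void {
    const msg = JSON.parse(json);
    const resolve = this.pending.get(msg.id);
    if (resolve) {
      this.pending.delete(msg.id);
      resolve(msg.result);
    }
  }
}
```

With this in place, `await rpc.request('prompts/get', { name, arguments }, ws.send.bind(ws))` reads like an ordinary async call even though the transport is fire-and-forget.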
---
## Part 3: Custom MCP Architecture (Like Gemini CLI)
### Modular Architecture Pattern
Replicate Gemini CLI's separation of concerns:
```
packages/
├── cli/ # User-facing CLI
│ ├── input-handler.ts # Parse user commands
│ ├── prompt-router.ts # Route to appropriate prompt
│ ├── display-renderer.ts # Format & display output
│ └── main.ts # Entry point
│
├── core/ # Backend orchestration
│ ├── mcp-client.ts # MCP server communication
│ ├── prompt-executor.ts # Execute prompts with args
│ ├── llm-interface.ts # Claude/Gemini API calls
│ └── state-manager.ts # Conversation history
│
├── frontend/ # Optional web interface
│ ├── components/
│ │ ├── PromptSelector.tsx
│ │ ├── ArgumentBuilder.tsx
│ │ └── ResponseDisplay.tsx
│ └── App.tsx
│
└── mcp-server/ # Your custom MCP server
├── prompts/
│ ├── code-review.ts
│ ├── git-commit.ts
│ └── index.ts
├── tools/
│ ├── file-operations.ts
│ ├── shell-executor.ts
│ └── index.ts
└── server.ts
```
### Implementation: Complete Flow
#### 1. Define Your Prompts (MCP Server)
```typescript
// mcp-server/prompts/code-review.ts
export const codeReviewPrompt = {
name: "code-review",
description: "Analyze code for security, performance, and best practices",
arguments: [
{
name: "language",
description: "Programming language (python, javascript, typescript, etc)",
required: true
},
{
name: "focus",
description: "Focus area: security, performance, style, or all",
required: false
},
{
name: "context",
description: "Additional context about the code",
required: false
}
]
};
// mcp-server/prompts/index.ts
export const PROMPTS = {
"code-review": codeReviewPrompt,
"git-commit": gitCommitPrompt,
"documentation": documentationPrompt,
"optimize": optimizationPrompt
};
// mcp-server/server.ts
import { Server } from "@modelcontextprotocol/sdk/server/index.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import {
  ListPromptsRequestSchema,
  GetPromptRequestSchema,
  ListToolsRequestSchema,
  CallToolRequestSchema
} from "@modelcontextprotocol/sdk/types.js";
import { PROMPTS } from "./prompts/index.js";
const server = new Server(
{ name: "my-custom-server", version: "1.0.0" },
  { capabilities: { prompts: { listChanged: false }, tools: {} } }
);
// Handle prompt listing
server.setRequestHandler(ListPromptsRequestSchema, async () => {
return { prompts: Object.values(PROMPTS) };
});
// Handle prompt retrieval
server.setRequestHandler(GetPromptRequestSchema, async (request) => {
  const { name, arguments: args } = request.params;
const prompt = PROMPTS[name];
if (!prompt) throw new Error(`Unknown prompt: ${name}`);
// Build context-aware prompt
const systemPrompt = buildSystemPrompt(prompt, args);
  return {
    messages: [
      {
        role: "user",
        content: { type: "text", text: systemPrompt }
      }
    ]
  };
});
// Register tools
server.setRequestHandler(ListToolsRequestSchema, async () => {
return { tools: TOOLS };
});
server.setRequestHandler(CallToolRequestSchema, async (request) => {
  const { name, arguments: args } = request.params;
  return executeTool(name, args);
});
// Connect via STDIO
const transport = new StdioServerTransport();
await server.connect(transport);
```
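The handler above calls `buildSystemPrompt` without defining it. One plausible implementation, an assumption for this guide rather than SDK API, interpolates the prompt description and the caller-supplied arguments into instruction text:

```typescript
// Hypothetical helper used by the GetPrompt handler: turn a prompt
// definition plus caller-supplied arguments into instruction text.
function buildSystemPrompt(
  prompt: { description: string; arguments: { name: string }[] },
  args: Record<string, string> = {}
): string {
  const lines = [prompt.description, ""];
  for (const def of prompt.arguments) {
    // Only include arguments the caller actually supplied
    if (def.name in args) {
      lines.push(`${def.name}: ${args[def.name]}`);
    }
  }
  return lines.join("\n");
}
```

Richer versions might pull in resource contents (files, diffs) here, which is exactly where the "include context from resources" property of prompts comes into play.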
#### 2. CLI Implementation (Prompt Router)
```typescript
// packages/cli/prompt-router.ts
import { MCPClient } from "../core/mcp-client.js";
export class PromptRouter {
  private client: MCPClient;
  private serverPath: string;
  constructor(serverPath: string) {
    this.serverPath = serverPath;
    this.client = new MCPClient(serverPath);
  }
  async routeAndExecute(userInput: string) {
    // Step 0: Connect to the MCP server (a fuller implementation
    // would connect once at startup and reuse the session)
    await this.client.connect(this.serverPath);
    // Step 1: Parse user input
    const { promptName, arguments: args } = this.parseInput(userInput);
// Step 2: List available prompts for validation
const availablePrompts = await this.client.listPrompts();
if (!availablePrompts.find(p => p.name === promptName)) {
throw new Error(`Prompt not found: ${promptName}`);
}
// Step 3: Get prompt with arguments
const promptMessages = await this.client.getPrompt(promptName, args);
// Step 4: Send to LLM
const response = await this.client.executeLLMWithTools(
promptMessages,
userInput
);
return response;
}
private parseInput(input: string) {
// Smart parsing: "review my code for python security"
// → {promptName: "code-review", arguments: {language: "python", focus: "security"}}
const patterns = {
'code-review': /review|analyze.*code/i,
'git-commit': /commit|git message/i,
'documentation': /document|write.*docs?/i,
'optimize': /optimize|improve|performance/i
};
let promptName = null;
for (const [name, pattern] of Object.entries(patterns)) {
if (pattern.test(input)) {
promptName = name;
break;
}
}
// Extract arguments from input
const args = {
language: this.detectLanguage(input),
focus: this.detectFocus(input),
context: input
};
return { promptName, arguments: args };
}
private detectLanguage(input: string): string {
const langs = ['python', 'javascript', 'typescript', 'rust', 'go', 'java'];
for (const lang of langs) {
if (input.toLowerCase().includes(lang)) return lang;
}
return 'general';
}
  private detectFocus(input: string): string {
    if (/security|vulnerability/i.test(input)) return 'security';
    if (/performance|speed|optimize/i.test(input)) return 'performance';
    if (/style|format|clean/i.test(input)) return 'style';
    return 'all';
  }
}
```
#### 3. Core MCP Client
```typescript
// packages/core/mcp-client.ts
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";
import { Anthropic } from "@anthropic-ai/sdk";
export class MCPClient {
  private client: Client;
  private anthropic: Anthropic;
  private serverScript: string;
  constructor(serverScript: string) {
    this.serverScript = serverScript;
    this.anthropic = new Anthropic(); // reads ANTHROPIC_API_KEY from env
  }
  async connect(serverScript: string = this.serverScript) {
    // StdioClientTransport spawns the server subprocess itself,
    // so no manual spawn() call is needed
    const transport = new StdioClientTransport({
      command: "node",
      args: [serverScript]
    });
    // Create and initialize client
    this.client = new Client({
      name: "my-cli-client",
      version: "1.0.0"
    }, { capabilities: {} });
    // connect() resolves once the initialize handshake completes;
    // subsequent requests go through this.client directly
    await this.client.connect(transport);
  }
  async listPrompts() {
    const response = await this.client.listPrompts();
    return response.prompts;
  }
  async getPrompt(name: string, args: Record<string, any>) {
    const response = await this.client.getPrompt({ name, arguments: args });
    return response.messages;
  }
  async listTools() {
    const response = await this.client.listTools();
    return response.tools;
  }
  async callTool(name: string, args: Record<string, any>) {
    return this.client.callTool({ name, arguments: args });
  }
async executeLLMWithTools(promptMessages: any[], userInput: string) {
// Get available tools
const tools = await this.listTools();
// Convert tools to Claude format
const claudeTools = tools.map(tool => ({
name: tool.name,
description: tool.description,
input_schema: tool.inputSchema
}));
// Message history
let messages = [
...promptMessages,
{ role: "user", content: userInput }
];
// Agentic loop
while (true) {
// Call Claude with tools
const response = await this.anthropic.messages.create({
model: "claude-3-5-sonnet-20241022",
max_tokens: 4096,
tools: claudeTools,
messages: messages
});
      // If Claude produced no tool calls, we are done: return the text
      const toolUseBlocks = response.content.filter(
        block => block.type === "tool_use"
      );
      if (toolUseBlocks.length === 0) {
        return response.content
          .filter(block => block.type === "text")
          .map(block => block.text)
          .join("\n");
      }
// Add assistant's response
messages.push({
role: "assistant",
content: response.content
});
// Execute tools and collect results
const toolResults = [];
for (const toolUse of toolUseBlocks) {
const result = await this.callTool(toolUse.name, toolUse.input);
toolResults.push({
type: "tool_result",
tool_use_id: toolUse.id,
content: JSON.stringify(result)
});
}
// Add tool results
messages.push({
role: "user",
content: toolResults
});
}
}
  async cleanup() {
    if (this.client) {
      await this.client.close();
    }
  }
}
```
#### 4. Frontend Component (Web Interface)
```typescript
// packages/frontend/components/PromptExecutor.tsx
import React, { useState, useEffect } from 'react';
export function PromptExecutor() {
const [ws, setWs] = useState<WebSocket | null>(null);
  const [prompts, setPrompts] = useState<any[]>([]);
  const [selectedPrompt, setSelectedPrompt] = useState('');
  const [args, setArgs] = useState<Record<string, string>>({});
const [output, setOutput] = useState('');
const [loading, setLoading] = useState(false);
useEffect(() => {
// Connect to WebSocket bridge
const socket = new WebSocket('ws://localhost:8021');
socket.onopen = () => {
setWs(socket);
// Request available prompts
socket.send(JSON.stringify({
jsonrpc: '2.0',
id: 1,
method: 'prompts/list'
}));
};
socket.onmessage = (event) => {
const msg = JSON.parse(event.data);
if (msg.result?.prompts) {
setPrompts(msg.result.prompts);
} else if (msg.result?.messages) {
setOutput(JSON.stringify(msg.result.messages, null, 2));
}
setLoading(false);
};
return () => socket?.close();
}, []);
const executePrompt = () => {
if (!ws || !selectedPrompt) return;
setLoading(true);
ws.send(JSON.stringify({
jsonrpc: '2.0',
      id: crypto.randomUUID(), // unique per request; safer than Math.random()
method: 'prompts/get',
params: {
name: selectedPrompt,
arguments: args
}
}));
};
return (
<div className="prompt-executor">
<h2>MCP Prompt Executor</h2>
<select
value={selectedPrompt}
onChange={(e) => setSelectedPrompt(e.target.value)}
>
<option value="">Select a prompt...</option>
{prompts.map(p => (
<option key={p.name} value={p.name}>
{p.description}
</option>
))}
</select>
{/* Dynamic argument builder */}
<div className="arguments">
{selectedPrompt && prompts
.find(p => p.name === selectedPrompt)
?.arguments?.map(arg => (
<div key={arg.name}>
<label>{arg.name} {arg.required ? '*' : ''}</label>
<input
type="text"
value={args[arg.name] || ''}
onChange={(e) => setArgs({
...args,
[arg.name]: e.target.value
})}
placeholder={arg.description}
/>
</div>
))}
</div>
<button
onClick={executePrompt}
disabled={loading || !selectedPrompt}
>
{loading ? 'Executing...' : 'Execute Prompt'}
</button>
{output && (
<div className="output">
<h3>Response:</h3>
<pre>{output}</pre>
</div>
)}
</div>
);
}
```
---
## Part 4: Complete Setup Instructions
### Step 1: Create MCP Server
```bash
mkdir my-custom-mcp
cd my-custom-mcp
npm init -y
npm install @modelcontextprotocol/sdk
```
### Step 2: Start All Three Layers
```bash
# Terminal 1: MCP Server (STDIO)
# (plain `node` cannot run .ts files on most installed versions;
# use tsx, or compile with tsc first and run the emitted .js)
npx tsx mcp-server/server.ts
# Terminal 2: WebSocket Bridge (for frontend)
npx @mcp-b/websocket-bridge --port 8021
# Terminal 3: CLI
npx tsx packages/cli/main.ts
# Terminal 4: Frontend (optional)
npm start --prefix packages/frontend
```
### Step 3: Claude Desktop Integration
```json
{
"mcpServers": {
"my-custom-server": {
"command": "node",
      "args": ["path/to/mcp-server/dist/server.js"]
}
}
}
```
Claude Desktop will then discover and surface your prompts and tools automatically. Point `args` at compiled JavaScript output (or use a TypeScript runner such as `tsx` as the `command`), since `node` cannot execute `.ts` files directly on most installed versions.
---
## Part 5: Key Takeaways & Architecture Decisions
| Component | Technology | Purpose |
|-----------|-----------|---------|
| **MCP Server** | Node.js/Python | Exposes prompts and tools via JSON-RPC |
| **CLI** | Node.js/Python | Interactive prompt router + LLM orchestrator |
| **Frontend** | React/Vue | Web UI for prompt execution |
| **Transport** | STDIO + WebSocket Bridge | CLI uses STDIO (1:1), frontend uses WS (many:1) |
| **LLM Orchestration** | Anthropic SDK | Handles Claude API + tool calling loop |
### Prompt Invocation Flow Summary
```
User Input
↓
[CLI Parser] Detect intent + extract args
↓
[MCP Client] Call server.prompts/get(name, args)
↓
[MCP Server] Return prompt template with context
↓
[CLI] Send prompt + user input to Claude
↓
[Claude] Process prompt → call tools if needed
↓
[CLI] Execute tool calls via MCP server
↓
[Claude] Return final response
↓
Display result to user
```
---
## References
- [MCP Prompts Specification](https://modelcontextprotocol.info/docs/concepts/prompts/)
- [MCP Architecture](https://modelcontextprotocol.info/docs/concepts/architecture/)
- [Gemini CLI Architecture](https://geminicli.com/docs/architecture/)
- [WebSocket Bridge for MCP](https://www.npmjs.com/package/@mcp-b/websocket-bridge)
- [MCP STDIO Transport](https://modelcontextprotocol.io/specification/2025-03-26/basic/transports)
- [Build MCP Client Tutorial](https://modelcontextprotocol.io/docs/develop/build-client)