# 0014. stdio-to-HTTP Bridge Implementation
**Date**: 2026-01-25
**Status**: Accepted
**Backlog Item**: TASK-0074
## Context
Phase 1 (TASK-0072) completed the HTTP MCP server with SSE transport and integrated viewer. However, the stdio-to-HTTP bridge was intentionally deferred, leaving a gap in the architecture.
### Current State
**What exists:**
- ✅ HTTP server with SSE transport (`src/http-server.ts`)
- ✅ Integrated viewer (same process)
- ✅ `/version` and `/shutdown` endpoints
- ✅ `backlog-mcp serve` command
- ✅ Backward compatible stdio mode (unchanged)
**What's missing:**
- ❌ stdio-to-HTTP bridge
- ❌ Auto-spawning of HTTP server
- ❌ Automatic version upgrades
- ❌ Single HTTP server shared by multiple stdio clients
### Problem
Users currently have two separate modes:
- **stdio mode** (default): Uses old `server.ts`, spawns detached viewer (buggy)
- **HTTP mode** (`serve`): New HTTP server, integrated viewer (clean)
Without the bridge:
- Existing users don't benefit from HTTP architecture improvements
- No automatic version management
- Manual mode selection required
- Original ADR-0013 design is incomplete
### Research Findings
**Phase 1 Critical Review:**
- HTTP server implementation is solid and functional
- SSE transport works correctly
- Minor edge case issues identified (not blocking)
- Session management is correct
- Version and shutdown endpoints are ready
**MCP SDK Capabilities:**
- SDK provides both server and client transports
- `@modelcontextprotocol/sdk/client/sse.js` exports SSE client
- Client SDK handles SSE parsing and session management automatically
- No external dependencies needed
**Bridge Requirements:**
1. Read JSON-RPC from stdin
2. Forward to HTTP server via SSE transport
3. Stream responses back to stdout
4. Auto-spawn HTTP server if not running
5. Version check and cooperative upgrade
6. Handle errors gracefully (server crash, network issues)
7. Minimal latency (< 50ms overhead)
## Proposed Solutions
### Option 1: EventSource-based Bridge
**Description**: Use EventSource API (with polyfill) to connect to SSE endpoint.
**Flow**:
1. stdin receives JSON-RPC message
2. If no session, connect to GET /mcp (creates SSE session)
3. POST message to /mcp/message?sessionId=X
4. Listen to EventSource for responses
5. Forward responses to stdout
**Pros**:
- Standard EventSource API (well-tested)
- Session created automatically by SSE transport
- Clean separation of concerns
**Cons**:
- Requires external dependency (eventsource npm package)
- EventSource doesn't expose sessionId easily (need to parse SSE stream)
- First message has higher latency (session creation)
- More complex sessionId extraction
**Implementation Complexity**: Medium
**Critical Issue**: EventSource API doesn't provide direct access to the sessionId generated by the server. We'd need to parse the SSE stream manually anyway, defeating the purpose of using EventSource.
### Option 2: Manual SSE Parsing Bridge
**Description**: Use native http.request to connect to /mcp, manually parse SSE stream.
**Flow**:
1. On startup, GET /mcp to establish SSE connection
2. Parse SSE stream manually (look for "data:" lines)
3. Extract sessionId from SSE transport response
4. For each stdin message, POST to /mcp/message?sessionId=X
5. SSE responses come through the GET /mcp connection
6. Forward to stdout
**Pros**:
- No external dependencies (native Node.js)
- Full control over SSE parsing
- Session created upfront (predictable)
- Can handle sessionId extraction correctly
**Cons**:
- Manual SSE parsing is error-prone
- Need to handle SSE format (data:, event:, id:, etc.)
- More complex implementation
- Need to handle reconnection logic manually
- Higher maintenance burden
**Implementation Complexity**: High
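To make the parsing burden concrete, here is a rough sketch of the incremental SSE parser this option would need (illustrative only; a production version must also handle `id:` fields, comment lines, and CRLF line endings):

```typescript
type SSEEvent = { event: string; data: string };

// Parse complete SSE events out of a buffer, returning any trailing
// partial event as `rest` to be prepended to the next chunk.
function parseSSEChunk(buffer: string): { events: SSEEvent[]; rest: string } {
  const events: SSEEvent[] = [];
  // Events are separated by a blank line; the last block may be incomplete
  const blocks = buffer.split('\n\n');
  const rest = blocks.pop() ?? '';
  for (const block of blocks) {
    let event = 'message'; // SSE default event type
    const dataLines: string[] = [];
    for (const line of block.split('\n')) {
      if (line.startsWith('event:')) event = line.slice(6).trim();
      else if (line.startsWith('data:')) dataLines.push(line.slice(5).trimStart());
    }
    if (dataLines.length) events.push({ event, data: dataLines.join('\n') });
  }
  return { events, rest };
}
```

Even this simplified version must track partial events across chunks; reconnection, `Last-Event-ID` handling, and sessionId extraction all come on top of it.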
### Option 3: MCP Client SDK Bridge (RECOMMENDED)
**Description**: Use MCP SDK's client-side SSE transport to connect to HTTP server.
**Flow**:
1. Import `Client` from `@modelcontextprotocol/sdk/client/index.js`
2. Import `SSEClientTransport` from `@modelcontextprotocol/sdk/client/sse.js`
3. Create client transport pointing to http://localhost:3030/mcp
4. Connect MCP client to transport
5. For each stdin message, use client.request() to send
6. Forward responses to stdout
**Pros**:
- Uses official MCP SDK (well-tested, maintained)
- Handles SSE parsing automatically
- Handles session management automatically
- Spec-compliant
- Minimal code (SDK handles complexity)
- No external dependencies beyond existing SDK
- Easy to maintain (SDK updates benefit us)
**Cons**:
- Less control over low-level details (acceptable trade-off)
- SDK adds a small amount of overhead (acceptable)
**Implementation Complexity**: Low
## Decision
**Selected**: Option 3 - MCP Client SDK Bridge
**Rationale**:
1. **Product Merit**: Leverages official SDK, ensuring spec compliance and future compatibility
2. **User Experience**: Minimal latency, robust error handling from SDK
3. **Technical Merit**: Clean implementation, minimal code, easy to maintain
4. **Alignment with Codebase**: Already using MCP SDK for server, natural to use for client
5. **Long-term Maintainability**: SDK updates automatically improve bridge
**Trade-offs Accepted**:
- Less control over low-level SSE details (acceptable - SDK is well-tested)
- Minimal SDK overhead (acceptable - latency will still be < 50ms)
**Why Not Option 1 (EventSource)?**
- EventSource doesn't expose sessionId easily
- Requires external dependency
- More complex sessionId extraction
- No significant benefit over SDK approach
**Why Not Option 2 (Manual Parsing)?**
- High implementation complexity
- Error-prone SSE parsing
- Higher maintenance burden
- Reinventing what SDK already provides
## Consequences
**Positive**:
- ✅ Clean, minimal bridge implementation
- ✅ Spec-compliant (uses official SDK)
- ✅ Easy to maintain (SDK handles complexity)
- ✅ Robust error handling (provided by the SDK)
- ✅ Future-proof (SDK updates benefit us)
- ✅ No external dependencies beyond existing SDK
**Negative**:
- ⚠️ Less control over low-level details (acceptable)
- ⚠️ Minimal SDK overhead (acceptable)
**Risks**:
- **Risk**: MCP SDK client API changes
- **Mitigation**: SDK follows semver, breaking changes are rare
- **Risk**: SDK has bugs
- **Mitigation**: SDK is well-tested, used by many projects
- **Risk**: Bridge adds latency
- **Mitigation**: Test and measure, should be < 50ms
## Implementation Notes
### Bridge Architecture
```
┌─────────────┐
│  kiro-cli   │
└──────┬──────┘
       │ stdio
       ▼
┌─────────────────────────────────────────┐
│  Bridge (src/cli/bridge.ts)             │
│                                         │
│  1. Auto-spawn HTTP server if needed    │
│  2. Version check & upgrade             │
│  3. MCP Client SDK (SSE transport)      │
│  4. Forward stdin ↔ HTTP ↔ stdout       │
└──────┬──────────────────────────────────┘
       │ HTTP + SSE
       ▼
┌─────────────────────────────────────────┐
│  HTTP Server (src/http-server.ts)       │
│                                         │
│  - SSE transport (/mcp)                 │
│  - Integrated viewer                    │
│  - Version endpoint (/version)          │
│  - Shutdown endpoint (/shutdown)        │
└─────────────────────────────────────────┘
```
### Core Functions
#### 1. Server Management
```typescript
async function isServerRunning(port: number): Promise<boolean> {
  // Try GET /version
  // Return true if 200, false otherwise
}

async function getServerVersion(port: number): Promise<string | null> {
  // GET /version
  // Return version string or null if error
}

async function spawnServer(port: number): Promise<void> {
  // Spawn: node dist/http-server.js
  // Detached process (doesn't block)
  // Redirect stdout/stderr to /dev/null or log file
}

async function shutdownServer(port: number): Promise<void> {
  // POST /shutdown
  // Wait for server to close (poll until port free)
}

async function waitForServer(port: number, timeout: number): Promise<void> {
  // Poll GET /version until success or timeout
  // Exponential backoff
}
```
#### 2. Bridge Logic
```typescript
async function runBridge(port: number): Promise<void> {
  // 1. Ensure server is running and up-to-date
  await ensureServer(port);

  // 2. Create MCP client with SSE transport
  const transport = new SSEClientTransport(
    new URL(`http://localhost:${port}/mcp`)
  );
  const client = new Client({
    name: 'backlog-mcp-bridge',
    version: pkg.version
  }, {
    capabilities: {}
  });

  // 3. Connect client
  await client.connect(transport);

  // 4. Read stdin, forward to client, write to stdout
  // Buffer input: a chunk may hold a partial message or several messages
  let buffer = '';
  process.stdin.on('data', async (chunk) => {
    buffer += chunk.toString();
    let newline;
    while ((newline = buffer.indexOf('\n')) !== -1) {
      const line = buffer.slice(0, newline);
      buffer = buffer.slice(newline + 1);
      if (!line.trim()) continue;
      const message = JSON.parse(line);
      const response = await client.request(message);
      process.stdout.write(JSON.stringify(response) + '\n');
    }
  });
}

async function ensureServer(port: number): Promise<void> {
  const running = await isServerRunning(port);
  if (!running) {
    // Spawn new server
    await spawnServer(port);
    await waitForServer(port, 10000); // 10s timeout
    return;
  }
  // Check version
  const serverVersion = await getServerVersion(port);
  if (serverVersion !== pkg.version) {
    // Upgrade: shutdown old, spawn new
    await shutdownServer(port);
    await spawnServer(port);
    await waitForServer(port, 10000);
  }
}
```
### CLI Integration
**Update `src/cli.ts`**:
```typescript
#!/usr/bin/env node
const args = process.argv.slice(2);

if (args.includes('serve')) {
  // HTTP server mode
  await import('./http-server.js');
} else if (args.includes('--help') || args.includes('-h')) {
  // Help text
  console.log('Usage: backlog-mcp [serve]');
} else {
  // Default: bridge mode
  await import('./cli/bridge.js');
}
```
### Error Handling
**Server Crash During Operation**:
```typescript
transport.onerror = async (error) => {
  console.error('Server connection lost, restarting...');
  await ensureServer(port);
  // Reconnect with a fresh transport rather than reusing the errored one
  transport = new SSEClientTransport(new URL(`http://localhost:${port}/mcp`));
  await client.connect(transport);
};
```
**Port Conflict**:
```typescript
if (error.code === 'EADDRINUSE') {
  console.error(`Port ${port} is already in use by another process`);
  process.exit(1);
}
```
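To surface that diagnosis before spawning rather than after, the bridge could probe the port directly. A sketch (`isPortFree` is an illustrative helper, not existing code, and it probes only the loopback interface):

```typescript
import net from 'node:net';

// Check whether a port can be bound on 127.0.0.1. A failed bind
// (EADDRINUSE/EACCES) means some process already owns the port.
function isPortFree(port: number): Promise<boolean> {
  return new Promise((resolve) => {
    const probe = net.createServer();
    probe.once('error', () => resolve(false)); // bind failed: port is taken
    probe.listen(port, '127.0.0.1', () => {
      probe.close(() => resolve(true));        // we could bind it, so it was free
    });
  });
}
```

Combined with `isServerRunning`, this distinguishes "our HTTP server is already up" from "an unrelated process owns the port", which is the case the error message above targets.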
**Network Timeout**:
```typescript
async function waitForServer(port: number, timeout: number): Promise<void> {
  const start = Date.now();
  let delay = 100; // Start with 100ms
  while (Date.now() - start < timeout) {
    if (await isServerRunning(port)) return;
    await sleep(delay);
    delay = Math.min(delay * 1.5, 1000); // Exponential backoff, max 1s
  }
  throw new Error(`Server failed to start within ${timeout}ms`);
}
```
### Testing Strategy
**Unit Tests**:
- `isServerRunning()` - Mock HTTP requests
- `getServerVersion()` - Mock HTTP responses
- `spawnServer()` - Mock child_process.spawn
- `shutdownServer()` - Mock HTTP POST
**Integration Tests**:
1. **Auto-spawn**: No server running → bridge spawns it
2. **Version upgrade**: Old server running → bridge detects mismatch → shuts down → spawns new
3. **Multi-client**: Start 3 stdio clients simultaneously → all share one server
4. **Server crash**: Kill server during operation → bridge detects → restarts
5. **Port conflict**: Port already in use → bridge fails gracefully
**Performance Tests**:
- Measure latency: stdin → bridge → HTTP → bridge → stdout
- Target: < 50ms overhead
- Test with large responses (streaming)
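One way to harness that measurement (a sketch; `roundTrip` stands in for the real stdin → bridge → HTTP → stdout path, to be wired up in the test harness):

```typescript
import { performance } from 'node:perf_hooks';

// Time many round trips and report the mean in milliseconds.
// A warm-up call keeps connection setup and JIT out of the measurement.
async function meanLatencyMs(
  roundTrip: () => Promise<void>,
  iterations = 100
): Promise<number> {
  await roundTrip(); // warm-up, excluded from timing
  const start = performance.now();
  for (let i = 0; i < iterations; i++) {
    await roundTrip();
  }
  return (performance.now() - start) / iterations;
}
```

A reasonable check is then `meanLatencyMs(bridgeRoundTrip)` against the 50ms target, run for both small and large (streamed) responses.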
### Configuration
**Environment Variables**:
- `BACKLOG_DATA_DIR` - Data directory path
- `BACKLOG_VIEWER_PORT` - HTTP server port (default: 3030)
- `BACKLOG_HTTP_HOST` - HTTP server host (default: localhost)
### Migration Path
**For existing users**:
- No config change needed
- `npx backlog-mcp` now uses bridge (auto-spawns HTTP server)
- Viewer persists across sessions (improvement)
- Automatic version upgrades (improvement)
**For new users**:
- `npx backlog-mcp` - Default mode (bridge)
- `npx backlog-mcp serve` - Explicit HTTP server mode
## Related ADRs
- [0013. HTTP MCP Server Architecture with Built-in stdio Bridge](./0013-http-mcp-server-architecture.md) - Parent ADR defining overall architecture
- [0011. Viewer Version Management](./0011-viewer-version-management.md) - Superseded by HTTP architecture
## References
- [MCP SDK Client Documentation](https://github.com/modelcontextprotocol/typescript-sdk)
- [SSE Specification](https://html.spec.whatwg.org/multipage/server-sent-events.html)
- [Phase 1 Artifact](mcp://backlog/resources/backlog-mcp-engineer/http-architecture-2026-01-25/artifact.md)
- [Phase 1 Critical Review](mcp://backlog/resources/backlog-mcp-engineer/bridge-phase2-2026-01-25/critical-review.md)