
MCP Server Code Execution Mode

by elusznik


An MCP server that executes Python code in isolated rootless containers with optional MCP server proxying.


Overview

This bridge implements the "Code Execution with MCP" pattern, a different approach to using Model Context Protocol tools. Instead of exposing all MCP tools directly to Claude (consuming massive context), the bridge:

  1. Auto-discovers configured MCP servers

  2. Proxies tools into sandboxed code execution

  3. Slashes context overhead (95%+ reduction)

  4. Enables complex workflows through Python code
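Because only one tool lives in Claude's context, every request funnels through it. A sketch of what such a call might carry, based on the run_python(code, servers, timeout) signature documented under Architecture (the dict shape here is illustrative, not the exact MCP wire format):

```python
# Hypothetical request payload for the single exposed tool. The real MCP
# message wraps this in JSON-RPC; field names follow the documented
# run_python(code, servers, timeout) signature and are otherwise assumptions.
request = {
    "tool": "run_python",
    "arguments": {
        "code": "result = await mcp_filesystem.read_file(path='/tmp/test.txt')",
        "servers": ["filesystem"],  # only these MCP servers get proxied in
        "timeout": 30,              # seconds
    },
}
print(request["tool"])
```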

Key Features

🔒 Security First

  • Rootless containers - No privileged helpers required

  • Network isolation - No network access

  • Read-only filesystem - Immutable root

  • Dropped capabilities - No system access

  • Unprivileged user - Runs as UID 65534

  • Resource limits - Memory, PIDs, CPU, time

  • Auto-cleanup - Temporary IPC directories

⚡ Performance

  • Persistent clients - MCP servers stay warm

  • Context efficiency - 95%+ reduction vs traditional MCP

  • Async execution - Proper resource management

  • Single tool - Only run_python in Claude's context

🔧 Developer Experience

  • Multiple access patterns:

```python
mcp_servers["server"]             # Dynamic lookup
mcp_server_name                   # Attribute access
from mcp.servers.server import *  # Module import
```
  • Top-level await - Modern Python patterns

  • Type-safe - Proper signatures and docs

  • TOON responses - Tool outputs are emitted as TOON code blocks for token-efficient prompting

TOON Response Format

  • We encode every MCP bridge response using Token-Oriented Object Notation (TOON).

  • TOON collapses repetitive JSON keys and emits newline-aware arrays, trimming token counts 30-60% for uniform tables so LLM bills stay lower.

  • Clients that expect plain JSON can still recover the structured payload: the TOON code block includes the same fields (status, stdout, stderr, etc.) and we fall back to JSON automatically if the encoder is unavailable.
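As an illustration, a successful bridge response might render along these lines in TOON (a sketch based on TOON's published key/value and tabular-array syntax; the bridge's exact field layout may differ):

```
status: ok
stdout: TODO in /tmp/a.txt
stderr: ""
files[2]{name,size}:
  a.txt,120
  b.txt,64
```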

Quick Start

1. Prerequisites (macOS or Linux)

  • Install a rootless container runtime (Podman or Docker).

    • macOS: brew install podman or brew install --cask docker

    • Ubuntu/Debian: sudo apt-get install -y podman or curl -fsSL https://get.docker.com | sh

  • Install uv to manage this project:

```shell
curl -LsSf https://astral.sh/uv/install.sh | sh
```
  • Pull a Python base image once your runtime is ready:

```shell
podman pull python:3.12-slim
# or
docker pull python:3.12-slim
```

2. Install Dependencies

Use uv to sync the project environment:

```shell
uv sync
```

3. Launch Bridge

```shell
uv run python mcp_server_code_execution_mode.py
```

4. Register with Claude Code

File: ~/.config/mcp/servers/mcp-server-code-execution-mode.json

{ "mcpServers": { "mcp-server-code-execution-mode": { "command": "uv", "args": ["run", "python", "/absolute/path/to/mcp_server_code_execution_mode.py"], "env": { "MCP_BRIDGE_RUNTIME": "podman" } } } }

Restart Claude Code

5. Execute Code

```python
# Use MCP tools in sandboxed code
result = await mcp_filesystem.read_file(path='/tmp/test.txt')

# Complex workflows
data = await mcp_search.search(query="TODO")
await mcp_github.create_issue(repo='owner/repo', title=data.title)
```

Architecture

```
┌──────────────┐
│  MCP Client  │  (Claude Code)
└──────┬───────┘
       │ stdio
       ▼
┌────────────────┐
│ MCP Code Exec  │  ← Discovers, proxies, manages
│     Bridge     │
└──────┬─────────┘
       │ container
       ▼
┌──────────────┐
│  Container   │  ← Executes with strict isolation
│   Sandbox    │
└──────────────┘
```

Process:

  1. Client calls run_python(code, servers, timeout)

  2. Bridge loads requested MCP servers

  3. Prepares a sandbox invocation: collects MCP tool metadata, writes an entrypoint into a shared /ipc volume, and exports MCP_AVAILABLE_SERVERS

  4. Generated entrypoint rewires stdio into JSON-framed messages and proxies MCP calls over the container's stdin/stdout pipe

  5. Runs container with security constraints

  6. Host stream handler processes JSON frames, forwards MCP traffic, enforces timeouts, and cleans up
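Steps 4 and 6 hinge on a simple idea: one JSON object per line on the container's stdin/stdout, so MCP calls and results can be multiplexed over a plain pipe. A minimal sketch of such framing (the bridge's actual wire format and field names are assumptions here):

```python
import json

def encode_frame(msg: dict) -> bytes:
    """Serialize one message as a newline-delimited JSON frame.

    Illustrative only: the real bridge's framing may differ, but the
    principle is the same - one JSON object per line over the pipe.
    """
    return (json.dumps(msg, separators=(",", ":")) + "\n").encode()

def decode_frames(buffer: bytes):
    """Yield parsed messages from a stream of newline-delimited frames."""
    for line in buffer.splitlines():
        if line.strip():
            yield json.loads(line)

# Inside the container, an MCP call becomes a frame on stdout...
frame = encode_frame({"type": "mcp_call", "server": "filesystem",
                      "tool": "read_file", "args": {"path": "/tmp/test.txt"}})
# ...and the host stream handler decodes it and forwards the call.
msgs = list(decode_frames(frame))
```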

Configuration

Environment Variables

| Variable | Default | Description |
| --- | --- | --- |
| `MCP_BRIDGE_RUNTIME` | auto | Container runtime (podman/docker) |
| `MCP_BRIDGE_IMAGE` | `python:3.12-slim` | Container image |
| `MCP_BRIDGE_TIMEOUT` | 30s | Default timeout |
| `MCP_BRIDGE_MAX_TIMEOUT` | 120s | Max timeout |
| `MCP_BRIDGE_MEMORY` | 512m | Memory limit |
| `MCP_BRIDGE_PIDS` | 128 | Process limit |
| `MCP_BRIDGE_CPUS` | - | CPU limit |
| `MCP_BRIDGE_CONTAINER_USER` | 65534:65534 | Run as UID:GID |
| `MCP_BRIDGE_RUNTIME_IDLE_TIMEOUT` | 300s | Shutdown delay |
| `MCP_BRIDGE_STATE_DIR` | `./.mcp-bridge` | Host directory for IPC sockets and temp state |
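The duration variables use the "30s" style shown above. A small sketch of how a per-call timeout could be resolved against these settings (the parsing and clamping logic is an illustration, not the bridge's actual code):

```python
import os

def parse_seconds(value: str) -> int:
    """Parse a '30s'-style duration. Illustrative: only this format is handled."""
    return int(value.rstrip("s"))

def setting(name: str, default: str) -> str:
    """Read a bridge setting from the environment, falling back to the documented default."""
    return os.environ.get(name, default)

default_timeout = parse_seconds(setting("MCP_BRIDGE_TIMEOUT", "30s"))
max_timeout = parse_seconds(setting("MCP_BRIDGE_MAX_TIMEOUT", "120s"))

# A caller-requested timeout is presumably clamped to the configured maximum:
requested = 300
effective = min(requested, max_timeout)
```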

Server Discovery

Scanned Locations:

  • ~/.claude.json

  • ~/Library/Application Support/Claude Code/claude_code_config.json

  • ~/Library/Application Support/Claude/claude_code_config.json (early Claude Code builds)

  • ~/Library/Application Support/Claude/claude_desktop_config.json (Claude Desktop fallback)

  • ~/.config/mcp/servers/*.json

  • ./claude_code_config.json

  • ./claude_desktop_config.json (project-local fallback)

  • ./mcp-servers/*.json
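Discovery boils down to reading each candidate file and collecting its "mcpServers" mapping. A minimal sketch (the merge order, with later files winning, is an assumption rather than the bridge's documented precedence):

```python
import json
import tempfile
from pathlib import Path

def discover(paths):
    """Merge every 'mcpServers' mapping found in the given config files.

    Illustrative sketch: unreadable or malformed files are skipped, and
    later files override earlier ones (an assumed precedence).
    """
    servers = {}
    for p in paths:
        try:
            data = json.loads(Path(p).read_text())
        except (OSError, json.JSONDecodeError):
            continue
        servers.update(data.get("mcpServers", {}))
    return servers

# Demo with a throwaway config file shaped like the example below:
with tempfile.TemporaryDirectory() as d:
    cfg = Path(d) / "filesystem.json"
    cfg.write_text(json.dumps({"mcpServers": {"filesystem": {"command": "npx"}}}))
    found = discover([cfg, Path(d) / "missing.json"])  # missing files are ignored
```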

Example Server (~/.config/mcp/servers/filesystem.json):

{ "mcpServers": { "filesystem": { "command": "npx", "args": ["-y", "@modelcontextprotocol/server-filesystem", "/tmp"] } } }

Docker MCP Gateway Integration

When you rely on docker mcp gateway run to expose third-party MCP servers, the bridge simply executes the gateway binary. The gateway is responsible for pulling tool images and wiring stdio transports, so make sure the host environment is ready:

  • Run docker login for every registry referenced in the gateway catalog (e.g. Docker Hub mcp/* images, ghcr.io/github/github-mcp-server). Without cached credentials the pull step fails before any tools come online.

  • Provide required secrets for those servers: github-official needs github.personal_access_token, others may expect API keys or auth tokens. Use docker mcp secret set <name> (or whichever mechanism your gateway is configured with) so the container sees the values at start-up.

  • Mirror any volume mounts or environment variables that the catalog expects (filesystem paths, storage volumes, etc.). Missing mounts or credentials commonly surface as failed to connect: calling "initialize": EOF during the stdio handshake.

  • If list_tools only returns the internal management helpers (mcp-add, code-mode, …), the gateway never finished initializing the external servers: check the gateway logs for missing secrets or registry access errors.

State Directory & Volume Sharing

  • Runtime artifacts (including the generated /ipc/entrypoint.py and related handshake metadata) live under ./.mcp-bridge/ by default. Set MCP_BRIDGE_STATE_DIR to relocate them.

  • When the selected runtime is Podman, the bridge automatically issues podman machine set --rootful --now --volume <state_dir>:<state_dir> so the VM can mount the directory.

  • Docker Desktop does not expose a CLI for file sharing; ensure the chosen state directory is marked as shared in Docker Desktop → Settings → Resources → File Sharing before running the bridge.

  • To verify a share manually, run docker run --rm -v $PWD/.mcp-bridge:/ipc alpine ls /ipc (or the Podman equivalent) and confirm the files are visible.

Usage Examples

File Processing

```python
# List and filter files
files = await mcp_filesystem.list_directory(path='/tmp')
for file in files:
    content = await mcp_filesystem.read_file(path=file)
    if 'TODO' in content:
        print(f"TODO in {file}")
```

Data Pipeline

```python
# Extract data
transcript = await mcp_google_drive.get_document(documentId='abc123')

# Process
summary = transcript[:500] + "..."

# Store
await mcp_salesforce.update_record(
    objectType='SalesMeeting',
    recordId='00Q5f000001abcXYZ',
    data={'Notes': summary}
)
```

Multi-System Workflow

```python
# Jira → GitHub migration
issues = await mcp_jira.search_issues(project='API', status='Open')
for issue in issues:
    details = await mcp_jira.get_issue(id=issue.id)
    if 'bug' in details.description.lower():
        await mcp_github.create_issue(
            repo='owner/repo',
            title=f"Bug: {issue.title}",
            body=details.description
        )
```

Inspect Available Servers

```python
from mcp import runtime

print("Discovered:", runtime.discovered_servers())
print("Loaded metadata:", runtime.list_loaded_server_metadata())
print("Selectable via RPC:", await runtime.list_servers())

# Peek at tool docs for a server that's already loaded in this run
loaded = runtime.list_loaded_server_metadata()
if loaded:
    first = runtime.describe_server(loaded[0]["name"])
    for tool in first["tools"]:
        print(tool["alias"], "→", tool.get("description", ""))
```

Example output seen by the LLM when running the snippet above with the stub server:

```
Discovered: ('stub',)
Loaded metadata: ({'name': 'stub', 'alias': 'stub', 'tools': [{'name': 'echo', 'alias': 'echo', 'description': 'Echo the provided message', 'input_schema': {...}}]},)
Selectable via RPC: ('stub',)
```

Security

Container Constraints

| Constraint | Setting | Purpose |
| --- | --- | --- |
| Network | `--network none` | No external access |
| Filesystem | `--read-only` | Immutable base |
| Capabilities | `--cap-drop ALL` | No system access |
| Privileges | `no-new-privileges` | No escalation |
| User | `65534:65534` | Unprivileged |
| Memory | `--memory 512m` | Resource cap |
| PIDs | `--pids-limit 128` | Process cap |
| Workspace | tmpfs, noexec | Safe temp storage |
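The table above maps directly onto runtime flags. A sketch of how they could be assembled into a single invocation (the flags are standard podman/docker options; treating this as the bridge's exact command line would be an assumption):

```python
def sandbox_args(image="python:3.12-slim", memory="512m", pids=128):
    """Build an argument list mirroring the documented constraints.

    Illustrative only - the real bridge may order or extend these flags
    differently (e.g. mounting the /ipc state directory).
    """
    return [
        "run", "--rm",
        "--network", "none",                    # no external access
        "--read-only",                          # immutable base
        "--cap-drop", "ALL",                    # drop all capabilities
        "--security-opt", "no-new-privileges",  # block escalation
        "--user", "65534:65534",                # unprivileged nobody user
        "--memory", memory,                     # resource cap
        "--pids-limit", str(pids),              # process cap
        "--tmpfs", "/tmp:rw,noexec,nosuid",     # safe temp storage
        image,
    ]

args = sandbox_args()
```

Prefixing these with `podman` or `docker` yields the kind of command the bridge would run for each sandbox.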

Capabilities Matrix

Action

Allowed

Details

Import stdlib

βœ…

Python standard library

Access MCP tools

βœ…

Via proxies

Memory ops

βœ…

Process data

Write to disk

βœ…

Only /tmp, /workspace

Network

❌

Completely blocked

Host access

❌

No system calls

Privilege escalation

❌

Prevented by sandbox

Container escape

❌

Rootless + isolation


Status

✅ Implemented

  • Rootless container sandbox

  • Single run_python tool

  • MCP server proxying

  • Persistent clients

  • Comprehensive docs

🔄 In Progress

  • Automated testing

  • Observability (logging, metrics)

  • Policy controls

  • Runtime diagnostics

📋 Roadmap

  • Connection pooling

  • Web UI

  • Multi-language support

  • Workflow orchestration

License

GPLv3 License

Support

For issues or questions, see the documentation or file an issue.
