MCP Code Execution Server: Zero-Context Discovery for 100+ MCP Tools
Stop paying 30,000 tokens per query. This bridge implements Anthropic's discovery pattern with rootless security—reducing MCP context from 30K to 200 tokens while proxying any stdio server.
Overview
This bridge implements the "Code Execution with MCP" pattern, a convergence of ideas from industry leaders:
- Apple: "Your LLM Agent Acts Better when Generating Code."
- Anthropic: "Building more efficient agents."
- Cloudflare: "LLMs are better at writing code to call MCP, than at calling MCP directly."
- Docker: "Stop Hardcoding Your Agents' World."
- Terminal Bench: "A realistic terminal environment for evaluating LLM agents."
Instead of exposing hundreds of individual tools to the LLM (which consumes massive context and confuses the model), this bridge exposes one tool: run_python. The LLM writes Python code to discover, call, and compose other tools.
Why This vs. JS "Code Mode"?
While there are JavaScript-based alternatives (like universal-tool-calling-protocol/code-mode), this project is built for Data Science and Security:
| Feature | This Project (Python) | JS Code Mode (Node.js) |
| --- | --- | --- |
| Native Language | Python (the language of AI/ML) | TypeScript/JavaScript |
| Data Science | Native (full scientific Python stack) | Impossible / hacky |
| Isolation | Hard (Podman/Docker containers) | Soft (Node.js VM) |
| Security | Enterprise (rootless, no net, read-only) | Process-level |
| Philosophy | Infrastructure (standalone bridge) | Library (embeddable) |
Choose this if: You want your agent to analyze data, generate charts, use scientific libraries, or if you require strict container-based isolation for running untrusted code.
What This Solves (That Others Don't)
The Pain: MCP Token Bankruptcy
Connect Claude to 11 MCP servers with ~100 tools = 30,000 tokens of tool schemas loaded into every prompt. That's $0.09 per query before you ask a single question. Scale to 50 servers and your context window breaks.
Why Existing "Solutions" Fail
Docker MCP Gateway: Manages containers beautifully, but still streams all tool schemas into Claude's context. No token optimization.
Cloudflare Code Mode: V8 isolates are fast, but you can't proxy your existing MCP servers (Serena, Wolfram, custom tools). Platform lock-in.
Academic Papers: Describe Anthropic's discovery pattern, but provide no hardened implementation.
Proofs of Concept: Skip security (no rootless), skip persistence (cold starts), skip proxying edge cases.
The Fix: Discovery-First Architecture
Constant 200-token overhead regardless of server count
Proxy any stdio MCP server into rootless containers
Fuzzy search across servers without preloading schemas
Production-hardened with capability dropping and security isolation
Architecture: How It Differs
Result: constant overhead. Whether you manage 10 or 1000 tools, the system prompt stays right-sized and schemas flow only when requested.
Comparison At A Glance
| Capability | Docker MCP Gateway | Cloudflare Code Mode | Research Patterns | This Bridge |
| --- | --- | --- | --- | --- |
| Solves token bloat | ❌ Manual preload | ❌ Fixed catalog | ❌ Theory only | ✅ Discovery runtime |
| Universal MCP proxying | ✅ Containers | ⚠️ Platform-specific | ❌ Not provided | ✅ Any stdio server |
| Rootless security | ⚠️ Optional | ✅ V8 isolate | ❌ Not addressed | ✅ Cap-dropped sandbox |
| Auto-discovery | ⚠️ Catalog-bound | ❌ N/A | ❌ Not implemented | ✅ 12+ config paths |
| Tool doc search | ❌ | ❌ | ⚠️ Conceptual | ✅ `search_tool_docs()` |
| Production hardening | ⚠️ Depends on you | ✅ Managed service | ❌ Prototype | ✅ Tested bridge |
Vs. Dynamic Toolsets (Speakeasy)
Speakeasy's Dynamic Toolsets use a three-step flow: `search_tools` → `describe_tools` → `execute_tool`. While this saves tokens, it forces the agent into a "chatty" loop:

1. Search: "Find tools for GitHub issues"
2. Describe: "Get schema for `create_issue`"
3. Execute: "Call `create_issue`"

This Bridge (Code-First) collapses that loop:

- Code: "Import `mcp_github`, search for 'issues', and create one if missing."
The agent writes a single Python script that performs discovery, logic, and execution in one round-trip. It's faster, cheaper (fewer intermediate LLM calls), and handles complex logic (loops, retries) that a simple "execute" tool cannot.
Vs. OneMCP (Gentoro)
OneMCP provides a "Handbook" chat interface where you ask questions and it plans execution. This is great for simple queries but turns the execution into a black box.
This Bridge gives the agent raw, sandboxed control. The agent isn't asking a black box to "do it"; the agent is the programmer, writing the exact code to interact with the API. This allows for precise edge-case handling and complex data processing that a natural language planner might miss.
Unique Features
- Two-stage discovery – `discovered_servers()` reveals what exists; `query_tool_docs(name)` loads only the schemas you need.
- Fuzzy search across servers – let the model find tools without memorising catalog names:

  ```python
  from mcp import runtime

  matches = await runtime.search_tool_docs("calendar events", limit=5)
  for hit in matches:
      print(hit["server"], hit["tool"], hit.get("description", ""))
  ```

- Zero-copy proxying – every tool call stays within the sandbox, mirrored over stdio with strict timeouts.
- Rootless by default – Podman/Docker containers run with `--cap-drop=ALL`, read-only root, no-new-privileges, and explicit memory/PID caps.
- Compact + TOON output – minimal plain-text responses for most runs, with deterministic TOON blocks available via `MCP_BRIDGE_OUTPUT_MODE=toon`.
Who This Helps
Teams juggling double-digit MCP servers who cannot afford context bloat.
Agents that orchestrate loops, retries, and conditionals rather than single tool invocations.
Security-conscious operators who need rootless isolation for LLM-generated code.
Practitioners who want to reuse existing MCP catalogs without hand-curating manifests.
Philosophy: The "No-MCP" Approach
This server aligns with the philosophy that you might not need MCP at all for every little tool. Instead of building rigid MCP servers for simple tasks, you can use this server to give your agent raw, sandboxed access to Bash and Python.
Ad-Hoc Tools: Need a script to scrape a site or parse a file? Just write it and run it. No need to deploy a new MCP server.
Composability: Pipe outputs between commands, save intermediate results to files, and use standard Unix tools.
Safety: Unlike giving an agent raw shell access to your machine, this server runs everything in a secure, rootless container. You get the power of "Bash/Code" without the risk.
Key Features
🛡️ Robustness & Reliability
Lazy Runtime Detection: Starts up instantly even if Podman/Docker isn't ready. Checks for runtime only when code execution is requested.
Self-Reference Prevention: Automatically detects and skips configurations that would launch the bridge recursively.
Noise Filtering: Ignores benign JSON parse errors (like blank lines) from chatty MCP clients.
Smart Volume Sharing: Probes Podman VMs to ensure volume sharing works, even on older versions.
🔒 Security First
Rootless containers - No privileged helpers required
Network isolation - No network access
Read-only filesystem - Immutable root
Dropped capabilities - No system access
Unprivileged user - Runs as UID 65534
Resource limits - Memory, PIDs, CPU, time
Auto-cleanup - Temporary IPC directories
⚡ Performance
Persistent sessions - Variables and state retained across calls
Persistent clients - MCP servers stay warm
Context efficiency - 95%+ reduction vs traditional MCP
Async execution - Proper resource management
Single tool - Only `run_python` in Claude's context
🔧 Developer Experience
Multiple access patterns:

```python
mcp_servers["server"]             # Dynamic lookup
mcp_server_name                   # Attribute access
from mcp.servers.server import *  # Module import
```

Top-level await - Modern Python patterns
Type-safe - Proper signatures and docs
Compact responses - Plain-text output by default with optional TOON blocks when requested
Response Formats
- Default (compact) – responses render as plain text plus a minimal `structuredContent` payload containing only non-empty fields. `stdout`/`stderr` lines stay intact, so prompts remain lean without sacrificing content.
- Optional TOON – set `MCP_BRIDGE_OUTPUT_MODE=toon` to emit Token-Oriented Object Notation blocks. We still drop empty fields and mirror the same structure in `structuredContent`; TOON is handy when you want deterministic tokenisation for downstream prompts.
- Fallback JSON – if the TOON encoder is unavailable, we automatically fall back to pretty-printed JSON blocks while preserving the trimmed payload.
🧠 Persistent Memory System
Cross-session persistence - Memory data survives container restarts and sessions
JSON-based storage - Flexible value types (strings, dicts, lists, etc.)
Metadata support - Add tags and custom metadata to memory entries
Atomic updates - Update memory values with custom functions
Discovery-friendly - List all memories and check existence
Memory files are stored in `/projects/memory/` inside the container, which maps to `~/MCPs/user_tools/memory/` on the host. This persists across sessions and container restarts.
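A minimal sketch of using persistent memory from sandbox code; the helper names (`memory_set`, `memory_update`, `memory_get`) are illustrative assumptions, not a confirmed API:

```python
from mcp import runtime

# Hypothetical helper names - shown for illustration only.
# Store a JSON-serialisable value with optional metadata tags.
await runtime.memory_set("migration_state", {"last_issue": 41}, tags=["jira"])

# Atomic update with a custom function.
await runtime.memory_update(
    "migration_state",
    lambda value: {**value, "last_issue": value["last_issue"] + 1},
)

print(await runtime.memory_get("migration_state"))  # {'last_issue': 42}
```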
Discovery Workflow
- `SANDBOX_HELPERS_SUMMARY` in the tool schema only advertises the discovery helpers (`discovered_servers()`, `list_servers()`, `query_tool_docs()`, `search_tool_docs()`, etc.). It never includes individual server or tool documentation.
- On first use the LLM typically calls `discovered_servers()` (or `list_servers_sync()` for the cached list) to enumerate MCP servers, then `query_tool_docs(server)`/`query_tool_docs_sync(server)` or `search_tool_docs("keyword")`/`search_tool_docs_sync("keyword")` to fetch the relevant subset of documentation.
- Tool metadata is streamed on demand, keeping the system prompt at roughly 200 tokens regardless of how many servers or tools are installed.
- Once the LLM has the docs it needs, it writes Python that uses the generated `mcp_<alias>` proxies or `mcp.runtime` helpers to invoke tools.
Need a short description without probing the helpers? Call `runtime.capability_summary()` to print a one-paragraph overview suitable for replying to questions such as "what can the code-execution MCP do?"
Quick Start
1. Prerequisites (macOS or Linux)
- Check version: `python3 --version` (Python 3.11+ required). If needed, install Python via your package manager or python.org.
- macOS: `brew install podman` or `brew install --cask docker`
- Ubuntu/Debian: `sudo apt-get install -y podman` or `curl -fsSL https://get.docker.com | sh`
Note on Pydantic compatibility: if you use Python 3.14+, ensure you have a modern Pydantic release installed (for example, `pydantic >= 2.12.0`). Some older Pydantic versions, or environments that install a separate `typing` package from PyPI, may raise import errors. If you hit one, upgrade Pydantic or remove the stray `typing` package, then re-run the project setup (e.g. delete `.venv/` and run `uv sync`).
2. Install Dependencies
Use uv to sync the project environment:
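```bash
uv sync
```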
3. Launch Bridge
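A typical launch via `uvx` (the same invocation the self-reference guard below watches for):

```bash
uvx mcp-server-code-execution-mode run
```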
If you prefer to run from a local checkout, the equivalent command is:
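```bash
# Sketch - assumes the console script name matches the published package
uv run mcp-server-code-execution-mode run
```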
4. Register with Your Agent
Add a server entry to your agent's MCP settings file (e.g., `mcp_config.json`, `claude_desktop_config.json`). A minimal sketch, assuming the standard `mcpServers` layout and the `uvx` launch command above:
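```json
{
  "mcpServers": {
    "code-execution": {
      "command": "uvx",
      "args": ["mcp-server-code-execution-mode", "run"]
    }
  }
}
```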
5. Execute Code
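Once registered, the agent calls the single `run_python` tool. A minimal sketch of code it might submit (top-level `await` is supported, and state persists across calls):

```python
# Runs inside the rootless sandbox; `message` stays defined for the next call.
import platform

message = f"Hello from the sandbox (Python {platform.python_version()})"
print(message)
```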
Load Servers Explicitly
run_python only loads the MCP servers you request. Pass them via the servers array when you invoke the tool so proxies such as mcp_serena or mcp_filesystem become available inside the sandbox:
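```json
{
  "name": "run_python",
  "arguments": {
    "servers": ["filesystem", "serena"],
    "code": "print(await mcp_filesystem.list_directory(path='/workspace'))"
  }
}
```

(A sketch only: `list_directory` is an illustrative tool name, and the exact wire format depends on your MCP client.)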
If you omit the list, the discovery helpers still enumerate everything, but any RPC call that targets an unloaded server returns `Server '<name>' is not available`.
Note: The `servers` array only controls which proxies are generated for a sandbox invocation; it does not set server configuration fields. In particular, server configurations can include an optional `cwd` property, and if present the bridge starts the host MCP server process in that working directory. Agents should call `runtime.describe_server(name)` (or inspect `runtime.list_loaded_server_metadata()`) to discover a server's configured `cwd` before making assumptions about its working directory.
Testing
Project environments support CPython 3.11+. Ensure your local environment uses a compatible Python version:
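```bash
python3 --version
```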
Runtime dependencies stay lean; dev dependencies (pytest, etc.) are available via the dev extra:
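```bash
# Install the dev extra, then run the test suite
uv sync --extra dev
uv run pytest
```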
Architecture
Zero-Context Discovery
Unlike traditional MCP servers that preload every tool definition (sometimes 30k+ tokens), this bridge pins its system prompt to roughly 200 tokens and trains the LLM to discover what it needs on demand:
1. LLM calls `discovered_servers()` → learns which bridges are available without loading schemas.
2. LLM calls `query_tool_docs("serena")` → hydrates just that server's tool docs, optionally filtered per tool.
3. LLM writes orchestration code → invokes helpers like `mcp_serena.search()` or `mcp.runtime.call_tool()`.
Result: context usage stays effectively constant no matter how many MCP servers you configure.
Process:

1. Client calls `run_python(code, servers, timeout)`.
2. Bridge loads the requested MCP servers.
3. Bridge prepares a sandbox invocation: collects MCP tool metadata, writes an entrypoint into a shared `/ipc` volume, and exports `MCP_AVAILABLE_SERVERS`.
4. The generated entrypoint rewires stdio into JSON-framed messages and proxies MCP calls over the container's stdin/stdout pipe.
5. Persistent execution: the container is started once (if not running) and stays active.
6. State retention: variables, imports, and functions defined in one call are available in subsequent calls.
7. The host stream handler processes JSON frames, forwards MCP traffic, enforces timeouts, and keeps the container alive for the next request.
Configuration
Environment Variables
| Variable | Default | Description |
| --- | --- | --- |
|  | auto | Container runtime (podman/docker) |
|  | python:3.14-slim | Container image |
|  | 30s | Default timeout |
|  | 120s | Max timeout |
|  | 512m | Memory limit |
|  | 128 | Process limit |
|  | - | CPU limit |
|  | 65534:65534 | Run as UID:GID |
|  | 300s | Shutdown delay |
| `MCP_BRIDGE_STATE_DIR` | `~/MCPs/` | Host directory for IPC sockets and temp state |
| `MCP_BRIDGE_OUTPUT_MODE` | `compact` | Response text format (`compact` or `toon`) |
|  |  | Bridge logging verbosity |
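For example, to relocate runtime state and opt in to TOON-formatted responses:

```bash
export MCP_BRIDGE_STATE_DIR="$HOME/MCPs"
export MCP_BRIDGE_OUTPUT_MODE=toon
```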
Server Discovery
The bridge automatically discovers MCP servers from multiple configuration sources:
Supported Locations:
| Location | Name | Priority |
| --- | --- | --- |
| `~/MCPs/` | User MCPs | Highest |
|  | Standard MCP |  |
|  | Local Project |  |
| `.vscode/mcp.json` | VS Code Workspace |  |
| `~/.claude.json` | Claude CLI |  |
| `~/.cursor/mcp.json` | Cursor |  |
|  | OpenCode CLI |  |
|  | Windsurf |  |
|  | Claude Code (macOS) |  |
| `~/Library/Application Support/Claude/claude_desktop_config.json` | Claude Desktop (macOS) |  |
|  | VS Code Global (macOS) |  |
|  | VS Code Global (Linux) | Lowest |
Note: Earlier sources take precedence. If the same server is defined in multiple locations, the first one wins.
Example Server (`~/MCPs/filesystem.json`) - a sketch assuming the standard MCP config shape and the reference `@modelcontextprotocol/server-filesystem` package:
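```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/Users/you/Documents"]
    }
  }
}
```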
Note: To prevent recursive launches, the bridge automatically skips any config entry that appears to start `mcp-server-code-execution-mode` again (including `uvx … mcp-server-code-execution-mode run`). Set `MCP_BRIDGE_ALLOW_SELF_SERVER=1` if you intentionally need to expose the bridge as a nested MCP server.
Docker MCP Gateway Integration
When you rely on `docker mcp gateway run` to expose third-party MCP servers, the bridge simply executes the gateway binary. The gateway is responsible for pulling tool images and wiring stdio transports, so make sure the host environment is ready:
- Run `docker login` for every registry referenced in the gateway catalog (e.g. Docker Hub `mcp/*` images, `ghcr.io/github/github-mcp-server`). Without cached credentials the pull step fails before any tools come online.
- Provide required secrets for those servers: `github-official` needs `github.personal_access_token`; others may expect API keys or auth tokens. Use `docker mcp secret set <name>` (or whichever mechanism your gateway is configured with) so the container sees the values at start-up.
- Mirror any volume mounts or environment variables that the catalog expects (filesystem paths, storage volumes, etc.). Missing mounts or credentials commonly surface as `failed to connect: calling "initialize": EOF` during the stdio handshake.
- If `list_tools` only returns the internal management helpers (`mcp-add`, `code-mode`, …), the gateway never finished initializing the external servers; check the gateway logs for missing secrets or registry access errors.
State Directory & Volume Sharing
- Runtime artifacts (including the generated `/ipc/entrypoint.py` and related handshake metadata) live under `~/MCPs/` by default. Set `MCP_BRIDGE_STATE_DIR` to relocate them.
- When the selected runtime is Podman, the bridge automatically issues `podman machine set --rootful --now --volume <state_dir>:<state_dir>` so the VM can mount the directory. On older `podman machine` builds that do not support `--volume`, the bridge probes the VM with `podman machine ssh test -d <state_dir>` and proceeds if the share is already available.
- Docker Desktop does not expose a CLI for file sharing; ensure the chosen state directory is marked as shared in Docker Desktop → Settings → Resources → File Sharing before running the bridge.
- To verify a share manually, run `docker run --rm -v ~/MCPs:/ipc alpine ls /ipc` (or the Podman equivalent) and confirm the files are visible.
Usage Examples
File Processing
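A sketch of the kind of code the agent might submit; `mcp_filesystem` and its `read_file` tool are illustrative assumptions:

```python
# Read a CSV through an MCP filesystem proxy and summarise it in-sandbox.
content = await mcp_filesystem.read_file(path="/workspace/report.csv")

rows = [line.split(",") for line in content.splitlines() if line.strip()]
print(f"{len(rows) - 1} data rows, {len(rows[0])} columns")
```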
Data Pipeline
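A hypothetical migration pipeline; the proxy names, tool names, and result shapes are assumptions for illustration:

```python
# Pull open Jira issues and recreate them as GitHub issues in one round-trip.
issues = await mcp_jira.search_issues(jql="project = DEMO AND status = Open")

for issue in issues["results"]:
    await mcp_github.create_issue(
        title=issue["summary"],
        body=issue.get("description", ""),
    )

print(f"Migrated {len(issues['results'])} issues")
```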
Multi-System Workflow
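A sketch composing three systems in a single script; again, the tool names and payload shapes are illustrative:

```python
# Fetch a Drive document, derive a summary, and store it on a Salesforce record.
doc = await mcp_gdrive.get_document(document_id="1AbC...")
summary = doc["content"][:500]

await mcp_salesforce.update_record(
    object_type="Account",
    record_id="001XXXXXXXXXXX",
    fields={"Description": summary},
)
print("Salesforce record updated")
```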
Inspect Available Servers
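Using the documented discovery helpers (shown on `mcp.runtime`, mirroring the fuzzy-search example above; return shapes are illustrative):

```python
from mcp import runtime

# Enumerate configured servers without loading any tool schemas...
servers = await runtime.discovered_servers()
print(servers)

# ...then hydrate docs for just one of them.
print(await runtime.query_tool_docs(servers[0]))
```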
Example output seen by the LLM when running the snippet above with the stub server:
Clients that prefer `listMcpResources` can skip executing the helper snippet and instead request the `resource://mcp-server-code-execution-mode/capabilities` resource. The server advertises it via `resources/list`, and reading it returns the same helper summary plus a short checklist for loading servers explicitly.
Security
Container Constraints
| Constraint | Setting | Purpose |
| --- | --- | --- |
| Network | none | No external access |
| Filesystem | read-only root | Immutable base |
| Capabilities | `--cap-drop=ALL` | No system access |
| Privileges | no-new-privileges | No escalation |
| User | 65534:65534 | Unprivileged |
| Memory | 512m | Resource cap |
| PIDs | 128 | Process cap |
| Workspace | tmpfs, noexec | Safe temp storage |
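The bridge assembles these flags itself; a hand-rolled equivalent for illustration, using the default image and limits from the configuration table:

```bash
podman run --rm \
  --network=none \
  --read-only \
  --cap-drop=ALL \
  --security-opt=no-new-privileges \
  --user 65534:65534 \
  --memory 512m \
  --pids-limit 128 \
  --tmpfs /workspace:rw,noexec \
  python:3.14-slim python3 -c "print('sandboxed')"
```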
Capabilities Matrix
| Action | Allowed | Details |
| --- | --- | --- |
| Import stdlib | ✅ | Python standard library |
| Access MCP tools | ✅ | Via proxies |
| Memory ops | ✅ | Process data |
| Write to disk | ✅ | Only /tmp, /workspace |
| Network | ❌ | Completely blocked |
| Host access | ❌ | No system calls |
| Privilege escalation | ❌ | Prevented by sandbox |
| Container escape | ❌ | Rootless + isolation |
Documentation
README.md - This file, quick start
GUIDE.md - Comprehensive user guide
ARCHITECTURE.md - Technical deep dive
HISTORY.md - Evolution and lessons
STATUS.md - Current state and roadmap
Status
✅ Implemented
Rootless container sandbox
Single `run_python` tool
MCP server proxying
Persistent sessions (state retention)
Persistent clients (warm MCP servers)
Comprehensive docs
🔄 In Progress
Automated testing
Observability (logging, metrics)
Policy controls
Runtime diagnostics
📋 Roadmap
Connection pooling
Web UI
Multi-language support
Workflow orchestration
Agent-visible discovery channel (host-proxied `mcp-find` / `mcp-add`)
Execution telemetry (structured logs, metrics, traces)
Persistent and shareable code-mode artifacts
License
GPLv3 License
Support
For issues or questions, see the documentation or file an issue.