Metadata-Version: 2.4
Name: mcp-code-mode
Version: 0.1.0
Summary: Code Execution MCP Server using DSpy and MCP
Author: Codex
Requires-Python: <3.13,>=3.11
Description-Content-Type: text/markdown
Requires-Dist: fastmcp>=2.0.0
Requires-Dist: dspy-ai>=2.5.0
Requires-Dist: mcp>=1.0.0
Provides-Extra: dev
Requires-Dist: pytest>=7.4.0; extra == "dev"
Requires-Dist: pytest-asyncio>=0.21.0; extra == "dev"
Requires-Dist: pytest-cov>=4.1.0; extra == "dev"
Requires-Dist: black>=23.0.0; extra == "dev"
Requires-Dist: ruff>=0.1.0; extra == "dev"
Requires-Dist: mypy>=1.7.0; extra == "dev"

# MCP Code Mode

Prototype implementation of the Code Execution MCP Server, built with DSPy. This repo follows the implementation plan in `docs/implementation-plan.md`.

## Toolchain Requirements

- Python 3.11 or 3.12 (the package requires `>=3.11,<3.13`)
- Node.js 20+ with `npx` available (needed for the reference MCP servers)
- `pip` for installing the Python dependencies listed in `pyproject.toml` / `requirements*.txt`

## Quick Start

```bash
python3.11 -m venv .venv
source .venv/bin/activate
pip install -r requirements-dev.txt
pip install -e .
```

To keep the Node-based MCP servers current, run:

```bash
npm install -g npm@latest
```

The `mcp_servers.json` file enumerates the default MCP servers (filesystem, memory, fetch). Update this file to point at any additional servers you want available during experimentation.

## Phase 1 Executor Server

The Phase 1 milestone introduces a minimal FastMCP server that exposes a single `execute_code` tool backed by DSPy's sandboxed Python interpreter.

1. Activate your virtual environment.
2. Launch the server with:

   ```bash
   python -m mcp_code_mode.executor_server
   ```

3. Point an MCP-compatible client at the process (stdio transport) and call the `execute_code` tool with arbitrary Python snippets (a minimal client sketch follows below).

Every invocation returns a structured payload:

| Field | Description |
|-------|-------------|
| `success` | `True` if the snippet finished without exceptions or timeouts. |
| `stdout` / `stderr` | Captured output streams (truncated to 64 kB). |
| `duration_ms` | Total runtime in milliseconds. |
| `diagnostics` | Optional metadata describing errors/timeouts. |

Timeouts and invalid arguments are reported cleanly, and failures are echoed through the FastMCP context log for easier debugging.
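For reference, the snippet below is a minimal client-side sketch using the official `mcp` Python SDK over stdio. It is an illustration, not part of this repo: the argument name `code` and the exact shape of the returned content are assumptions, so confirm them against the tool schema reported by `session.list_tools()`.

```python
# Hedged sketch: connect to the executor server over stdio and call execute_code.
# Assumption: the tool takes a single "code" argument; verify via list_tools().
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client


async def main() -> None:
    params = StdioServerParameters(
        command="python",
        args=["-m", "mcp_code_mode.executor_server"],
    )
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            result = await session.call_tool(
                "execute_code",
                {"code": "print('hello from sandbox')"},
            )
            # Each content item should carry the JSON payload described in the
            # table above (success, stdout, stderr, duration_ms, diagnostics).
            for item in result.content:
                print(getattr(item, "text", item))


if __name__ == "__main__":
    asyncio.run(main())
```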
## Testing Status

The Phase 1 executor server has been tested with the following scenarios:

### ✅ Completed Tests

1. **Basic Execution**: Executes simple Python snippets with correct stdout capture
   - Test: `print('hello from sandbox')`
   - Result: `{"success":true,"stdout":"hello from sandbox\n","stderr":"","duration_ms":1978,"diagnostics":null}`
2. **Error Handling**: Captures and reports Python exceptions with diagnostic information
   - Test: `raise ValueError("boom")`
   - Result: `{"success":false,"stdout":"","stderr":"ValueError: ['boom']","duration_ms":20,"diagnostics":{"error_type":"InterpreterError","traceback":"..."}}`
3. **Timeout Detection**: Detects and reports execution timeouts
   - Test: `while True: pass` (2s timeout)
   - Result: `{"success":false,"stdout":"","stderr":"Execution timed out after 2.00s","duration_ms":2001,"diagnostics":{"error_type":"TIMEOUT","timeout_seconds":2.0}}`

### ⚠️ Known Issues

1. **Interpreter State Management**: After a timeout occurs, the interpreter instance enters a bad state in which all subsequent executions immediately time out. The current workaround is to disconnect and reconnect to the MCP server to obtain a fresh interpreter instance.

### 🔄 Next Steps

1. Fix interpreter state management after timeouts
2. Implement proper interpreter recycling/recreation (see the sketch below)
3. Add tool formatter + integration utilities for Phase 2
4. Enable generated code to discover/use remote MCP tools
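One possible direction for items 1–2 is to wrap the interpreter in a small recycler that discards a timed-out instance and builds a fresh one. The sketch below is an illustration only, not the repo's implementation: `make_interpreter` stands for any factory that returns a new DSPy sandbox, and the `shutdown()` call is a placeholder for whatever teardown hook the interpreter actually exposes.

```python
# Hedged sketch of interpreter recycling after a timeout (Next Steps items 1-2).
# Assumptions: `make_interpreter` is a factory returning a fresh sandboxed
# interpreter with an `execute(code)` method; `shutdown()` is a stand-in for
# the real teardown hook, if one exists.
from typing import Any, Callable


class RecyclingExecutor:
    """Wraps a sandboxed interpreter and replaces it whenever a run times out."""

    def __init__(self, make_interpreter: Callable[[], Any]):
        self._make_interpreter = make_interpreter
        self._interpreter = make_interpreter()

    def _recycle(self) -> None:
        # Best-effort teardown of the wedged instance, then start a fresh one.
        shutdown = getattr(self._interpreter, "shutdown", None)
        if callable(shutdown):
            try:
                shutdown()
            except Exception:
                pass  # the old instance is being discarded anyway
        self._interpreter = self._make_interpreter()

    def execute(self, code: str) -> Any:
        try:
            return self._interpreter.execute(code)
        except TimeoutError:
            # A timed-out interpreter stays wedged (see Known Issues), so drop
            # it and surface the timeout against a clean instance.
            self._recycle()
            raise
```

Hooking something like this into the FastMCP tool handler would let the server recover in place instead of requiring clients to disconnect and reconnect.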
