# PRIMS – Python Runtime Interpreter MCP Server
PRIMS is a tiny open-source Model Context Protocol (MCP) server that lets LLM agents run arbitrary Python code in a secure, throw-away sandbox.
- **One tool, one job.** Exposes a single MCP tool – `run_code` – that executes user-supplied Python and streams back stdout/stderr.
- **Isolated & reproducible.** Each call spins up a fresh virtual-env, installs any requested pip packages, mounts optional read-only files, then nukes the workspace.
- **Zero config.** Works over MCP/stdio, or drop it in Docker.
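The lifecycle in the second bullet (fresh venv → run → delete) can be sketched in plain Python. This is an illustrative stand-in for how such a sandbox works, not PRIMS's actual implementation — it skips pip installs and resource limits:

```python
import subprocess
import sys
import tempfile
import venv
from pathlib import Path

def run_in_sandbox(code: str, timeout: int = 30) -> str:
    """Run `code` in a fresh virtual-env inside a throw-away workspace."""
    with tempfile.TemporaryDirectory() as workspace:
        env_dir = Path(workspace) / "venv"
        # Fresh interpreter per call (a real server would also pip-install
        # requested packages here).
        venv.EnvBuilder(with_pip=False).create(env_dir)
        bin_dir = "Scripts" if sys.platform == "win32" else "bin"
        python = env_dir / bin_dir / "python"
        result = subprocess.run(
            [str(python), "-c", code],
            capture_output=True, text=True, cwd=workspace, timeout=timeout,
        )
        return result.stdout + result.stderr
    # Leaving the `with` block deletes the workspace — nothing persists.

print(run_in_sandbox("print(2 + 2)"))
```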
## Quick-start
1. Local development environment
2. Launch the server
3. Docker
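In practice the three steps might look like the following — a sketch only: the repository URL, entry-point module, and image name below are assumptions, not taken from this README:

```shell
# 1. Local development environment
#    (replace <you> with the actual GitHub org/user)
git clone https://github.com/<you>/PRIMS.git && cd PRIMS
python -m venv .venv && source .venv/bin/activate
pip install -r requirements.txt

# 2. Launch the server over stdio
#    (entry-point module name is a placeholder)
python -m server.main

# 3. Docker: build and run the container, keeping stdin open for MCP/stdio
docker build -t prims .
docker run --rm -i prims
```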
## Examples
### List available tools
You can use the provided script to list all tools exposed by the server:
Expected output (tool names and descriptions may vary):
### Run code via the MCP server
### Mount a dataset once & reuse it

This mounts a CSV with `mount_file` and then reads it inside `run_code` without re-supplying the URL.
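The once-per-session behaviour can be approximated like this — an illustrative sketch of the caching idea, not the server's code:

```python
import urllib.request
from pathlib import Path

def mount_file(workspace: Path, url: str, path: str) -> Path:
    """Download `url` into mounts/<path> on first use; later calls reuse the copy."""
    dest = workspace / "mounts" / path
    if not dest.exists():  # fetch only once per session
        dest.parent.mkdir(parents=True, exist_ok=True)
        with urllib.request.urlopen(url) as resp:
            dest.write_bytes(resp.read())
    return dest
```

Subsequent `run_code` calls can then open `mounts/<path>` like any local file.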
### Inspect your session workspace

This shows how to use the `list_dir` and `preview_file` tools to browse files your code created.
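Conceptually the two tools behave roughly like this sketch (the 8 KB preview cap matches the tool description; everything else is an assumption):

```python
import os
from pathlib import Path

def list_dir(workspace: Path, subpath: str = ".") -> list[str]:
    """List entries under the session workspace."""
    return sorted(os.listdir(workspace / subpath))

def preview_file(workspace: Path, subpath: str, max_bytes: int = 8192) -> str:
    """Return at most `max_bytes` of a text file for quick inspection."""
    data = (workspace / subpath).read_bytes()[:max_bytes]
    return data.decode("utf-8", errors="replace")
```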
### Persist an artifact to permanent storage

The `persist_artifact` tool uploads a file from your `output/` directory to a presigned URL.

Example (Python):
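The original snippet is not reproduced here; a minimal sketch of the idea, assuming the presigned URL accepts a plain HTTP `PUT` (as S3-style presigned URLs do):

```python
import urllib.request
from pathlib import Path

def persist_artifact(workspace: Path, relative_path: str, presigned_url: str) -> int:
    """Upload a file from the session's output/ directory via HTTP PUT."""
    artifact = workspace / "output" / relative_path
    req = urllib.request.Request(
        presigned_url, data=artifact.read_bytes(), method="PUT"
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status  # typically 200 on success
```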
### Download an artifact
Small artifacts can be fetched directly:
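A direct fetch can be as simple as the following sketch (the artifact URL shape is an assumption):

```python
import urllib.request
from pathlib import Path

def download_artifact(url: str, dest: Path) -> bytes:
    """Fetch a small artifact and write it to a local path."""
    with urllib.request.urlopen(url) as resp:
        data = resp.read()
    dest.write_bytes(data)
    return data
```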
## Available tools
| Tool | Purpose |
|---|---|
| `run_code` | Execute Python in an isolated sandbox with optional pip deps. |
| `list_dir` | List files/directories inside your session workspace. |
| `preview_file` | Return up to 8 KB of a text file for quick inspection. |
| `persist_artifact` | Upload an `output/` file to a client-provided presigned URL. |
| `mount_file` | Download a remote file once per session to `mounts/<path>`. |
See the `examples/` directory for end-to-end demos.
## Roadmap
- Speed up venv creation by caching venvs
- Strict sandboxing (prevent accessing files beyond the venv folder; user groups; Firecracker VM)
- Harden CPU / memory limits
- Artifact storage backends (S3, local disk)
- Unit tests & CI (GitHub Actions)
- Dependency resolution recommendations via LLM sampling
- Automated code debugging & error-fix suggestions via LLM sampling
- Auth and security
- OAuth 2.0 support for remote deployments
- Health-check and metrics endpoint for orchestration
PRs welcome! See `LICENSE` (MIT).