# jlab-mcp
A Model Context Protocol (MCP) server that enables Claude Code to execute Python code on GPU compute nodes via JupyterLab running on a SLURM cluster.
Inspired by and adapted from goodfire-ai/scribe, which provides notebook-based code execution for Claude. This project adapts that approach for HPC/SLURM environments where GPU resources are allocated via job schedulers.
## Architecture

```
Claude Code (login node)
        ↕ stdio
MCP Server (login node)
        ↕ HTTP/WebSocket
JupyterLab (compute node, via sbatch)
        ↕
IPython Kernel (GPU access)
```

Login and compute nodes share a filesystem. The MCP server submits a SLURM job that starts JupyterLab on a compute node, then communicates with it over HTTP/WebSocket. Connection info (hostname, port, token) is exchanged via a file on the shared filesystem.
## Setup

```bash
# Clone and install
git clone https://github.com/kdkyum/jlab-mcp.git
cd jlab-mcp
uv sync

# Install PyTorch separately (GPU support, not in pyproject.toml)
uv pip install torch --index-url https://download.pytorch.org/whl/cu126
```

## Configuration
All settings are configurable via environment variables. No values are hardcoded for a specific cluster.
| Environment Variable | Default | Description |
|---|---|---|
| — | — | Base working directory |
| — | — | Notebook storage |
| — | — | SLURM job logs |
| — | — | Connection info files |
| `JLAB_MCP_SLURM_PARTITION` | — | SLURM partition |
| `JLAB_MCP_SLURM_GRES` | — | SLURM generic resource |
| `JLAB_MCP_SLURM_CPUS` | — | CPUs per task |
| `JLAB_MCP_SLURM_MEM` | — | Memory in MB |
| `JLAB_MCP_SLURM_TIME` | — | Wall clock time limit |
| `JLAB_MCP_SLURM_MODULES` | (empty) | Space-separated modules to load (e.g. `cuda/12.6`) |
| — | — | Port range lower bound |
| — | — | Port range upper bound |
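These variables feed the batch script the server submits. As a rough sketch of that translation, with placeholder defaults and a simplified option set (the project's real template will differ):

```python
import os

def build_sbatch_header() -> str:
    """Assemble #SBATCH directives from JLAB_MCP_SLURM_* variables.

    Fallback values here are placeholders, not the project's real defaults.
    """
    env = os.environ.get
    lines = [
        "#!/bin/bash",
        f"#SBATCH --partition={env('JLAB_MCP_SLURM_PARTITION', 'gpu')}",
        f"#SBATCH --gres={env('JLAB_MCP_SLURM_GRES', 'gpu:1')}",
        f"#SBATCH --cpus-per-task={env('JLAB_MCP_SLURM_CPUS', '4')}",
        f"#SBATCH --mem={env('JLAB_MCP_SLURM_MEM', '16000')}",
        f"#SBATCH --time={env('JLAB_MCP_SLURM_TIME', '04:00:00')}",
    ]
    # Optional environment modules, loaded before JupyterLab starts
    for mod in env("JLAB_MCP_SLURM_MODULES", "").split():
        lines.append(f"module load {mod}")
    return "\n".join(lines)
```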
### Example: Cluster with A100 GPUs and CUDA module

```bash
export JLAB_MCP_SLURM_PARTITION=gpu1
export JLAB_MCP_SLURM_GRES=gpu:a100:1
export JLAB_MCP_SLURM_CPUS=18
export JLAB_MCP_SLURM_MEM=125000
export JLAB_MCP_SLURM_TIME=1-00:00:00
export JLAB_MCP_SLURM_MODULES="cuda/12.6"
```

## Claude Code Integration

Add to `~/.claude.json` or project `.mcp.json`:
```json
{
  "mcpServers": {
    "jlab-mcp": {
      "command": "uv",
      "args": ["run", "--directory", "/path/to/jlab-mcp", "python", "-m", "jlab_mcp"],
      "env": {
        "JLAB_MCP_SLURM_PARTITION": "gpu1",
        "JLAB_MCP_SLURM_GRES": "gpu:a100:1",
        "JLAB_MCP_SLURM_MODULES": "cuda/12.6"
      }
    }
  }
}
```

## MCP Tools
| Tool | Description |
|---|---|
| — | Submit SLURM job, start kernel, create empty notebook |
| — | Resume existing notebook (re-executes all cells) |
| — | Fork notebook with fresh kernel |
| — | Run Python code, append cell to notebook |
| — | Edit and re-execute a cell (supports negative indexing) |
| — | Add markdown cell to notebook |
| — | Stop kernel, cancel SLURM job, clean up |
Resource: `jlab-mcp://server/status` returns active sessions and job states.
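The cell tools ultimately edit the notebook's JSON on the shared filesystem. A minimal sketch of appending and editing cells by working directly with the nbformat v4 JSON layout (the project itself may use dedicated notebook tooling; function names here are illustrative):

```python
import json
from pathlib import Path

def append_code_cell(nb_path: Path, source: str) -> None:
    """Append an unexecuted code cell to an nbformat-v4 notebook file."""
    nb = json.loads(nb_path.read_text())
    nb["cells"].append({
        "cell_type": "code",
        "execution_count": None,
        "metadata": {},
        "outputs": [],
        "source": source.splitlines(keepends=True),
    })
    nb_path.write_text(json.dumps(nb, indent=1))

def edit_cell(nb_path: Path, index: int, source: str) -> None:
    """Replace a cell's source; negative indices count from the end."""
    nb = json.loads(nb_path.read_text())
    nb["cells"][index]["source"] = source.splitlines(keepends=True)
    nb_path.write_text(json.dumps(nb, indent=1))
```

Negative indexing falls out of Python list semantics: `edit_cell(path, -1, ...)` rewrites the most recently appended cell.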
## Testing

```bash
# Unit tests (no SLURM needed)
uv run python -m pytest tests/test_slurm.py tests/test_notebook.py tests/test_image_utils.py -v

# Integration tests (requires SLURM cluster)
uv run python -m pytest tests/test_tools.py -v -s --timeout=300
```

## Acknowledgments
This project is inspired by goodfire-ai/scribe, which provides MCP-based notebook code execution for Claude. The tool interface design, image resizing approach, and notebook management patterns are adapted from scribe for use on HPC/SLURM clusters.
## License
MIT