The jlab-mcp server enables Claude Code to execute Python code with GPU access on SLURM HPC clusters or local machines by orchestrating JupyterLab instances and managing notebook-based sessions.
Session Management
- start_new_session: Submit a SLURM job (or local subprocess), start a new IPython kernel, and create a fresh notebook
- shutdown_session: Stop the kernel and cancel the associated SLURM job
- SLURM jobs persist across Claude Code restarts, allowing long-running computations without interruption
Notebook Workflows
- start_session_resume_notebook: Re-attach to an existing notebook and re-execute all cells to restore kernel state
- start_session_continue_notebook: Fork an existing notebook into a new file with a fresh kernel, without re-executing cells
Code Execution & Editing
- execute_code: Run Python code in an active kernel and append it as a new cell, capturing outputs
- edit_cell: Modify an existing cell's source, re-execute it, and update its outputs (supports negative indexing)
- add_markdown: Insert markdown documentation cells into the notebook
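Since notebooks are plain JSON documents, the cell-append step behind a tool like execute_code can be sketched with the stdlib alone. This is an illustrative approximation following the nbformat 4 schema, not jlab-mcp's actual implementation; the real tool also runs the cell on the kernel and records its outputs:

```python
import json

def append_code_cell(nb: dict, source: str) -> dict:
    # Minimal nbformat-4-style code cell; output capture and execution
    # (which the real tool performs) are omitted here.
    nb.setdefault("cells", []).append({
        "cell_type": "code",
        "execution_count": None,
        "metadata": {},
        "outputs": [],
        "source": source,
    })
    return nb

nb = {"nbformat": 4, "nbformat_minor": 5, "metadata": {}, "cells": []}
append_code_cell(nb, "print('hello')")
serialized = json.dumps(nb)  # an .ipynb file is exactly this JSON on disk
```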
Other Features
- Works in both SLURM cluster mode (auto-detected via `sbatch`) and local mode for laptops/workstations
- Resource monitoring for CPU, memory, and GPU usage on compute nodes
- Highly configurable via environment variables (partitions, GPU resources, time limits, modules)
- Uses project-specific `.venv` directories for dependency management
- Enables execution of Python code on GPU compute nodes by managing JupyterLab instances and IPython kernels within a SLURM-managed environment.
- Allows Markdown cells to be added to notebooks, providing documentation and structure alongside executed code.
- Provides tools for executing Python code, managing session kernels, and manipulating notebook cells on high-performance compute clusters.
- Supports GPU-accelerated computing workloads by facilitating the use of PyTorch on compute nodes allocated via job schedulers.
jlab-mcp
A Model Context Protocol (MCP) server that enables Claude Code to execute Python code on GPU compute nodes via JupyterLab running on a SLURM cluster.
Inspired by and adapted from goodfire-ai/scribe, which provides notebook-based code execution for Claude. This project adapts that approach for HPC/SLURM environments where GPU resources are allocated via job schedulers.
Architecture
```
Claude Code
  ↕ stdio
MCP Server
  ↕ HTTP/WebSocket
JupyterLab (SLURM compute node or local subprocess)  ← one server, many kernels
  ↕
IPython Kernels (GPU access)
```

JupyterLab runs either on a SLURM compute node (HPC clusters) or as a local subprocess (laptops/workstations). The server is managed separately from the MCP server: you start it with `jlab-mcp start`, and it keeps running across Claude Code sessions. All sessions create separate kernels on this shared server.
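Over the HTTP layer, the MCP server can talk to JupyterLab through Jupyter Server's REST API; running kernels, for example, are listed at `/api/kernels`. A minimal sketch using the stdlib — the token-based auth header is an assumption, not jlab-mcp's documented scheme:

```python
import json
import urllib.request

def kernels_url(host: str, port: int) -> str:
    # Jupyter Server exposes the list of running kernels at /api/kernels.
    return f"http://{host}:{port}/api/kernels"

def list_kernels(host: str, port: int, token: str):
    # Jupyter supports token auth via the Authorization header; whether
    # jlab-mcp authenticates this way is an assumption.
    req = urllib.request.Request(
        kernels_url(host, port),
        headers={"Authorization": f"token {token}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```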
Local Mode
On machines without SLURM (laptops, workstations), jlab-mcp automatically runs JupyterLab as a local subprocess. The mode is auto-detected: if `sbatch` is on `PATH`, SLURM mode is used; otherwise, local mode.
Override with an environment variable:
```shell
export JLAB_MCP_RUN_MODE=local   # force local mode
export JLAB_MCP_RUN_MODE=slurm   # force SLURM mode
```

In local mode, `jlab-mcp start` runs in the foreground; press Ctrl+C to stop. The status file uses the same format as SLURM mode, so the MCP server works identically in both modes.
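The detection and override rules above amount to a few lines; this is an illustrative sketch of the documented behavior, not jlab-mcp's actual code:

```python
import os
import shutil

def detect_run_mode() -> str:
    # JLAB_MCP_RUN_MODE overrides auto-detection; otherwise SLURM mode
    # is chosen exactly when sbatch is found on PATH.
    override = os.environ.get("JLAB_MCP_RUN_MODE")
    if override in ("slurm", "local"):
        return override
    return "slurm" if shutil.which("sbatch") else "local"
```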
Setup
```shell
# Install (no git clone needed)
uv tool install git+https://github.com/kdkyum/jlab-mcp.git
```

The SLURM job activates `.venv` in the current working directory. Set up your project's venv on the shared filesystem with the compute dependencies:
```shell
cd /shared/fs/my-project
uv venv
uv pip install jupyterlab ipykernel matplotlib numpy
uv pip install torch --index-url https://download.pytorch.org/whl/cu126  # GPU support
```

Usage
1. Start the compute node
In a separate terminal, start the SLURM job:
```shell
jlab-mcp start            # uses default time limit (4h)
jlab-mcp start 24:00:00   # 24 hour time limit
jlab-mcp start 1-00:00:00 # 1 day
```

This submits the job and waits until JupyterLab is ready:

```
SLURM job 24215408 submitted, waiting in queue...
Job running on ravg1011, JupyterLab starting...
JupyterLab ready at http://ravg1011:18432
```

2. Use Claude Code
In another terminal, start Claude Code. The MCP server connects to the running JupyterLab automatically.
3. Stop when done
```shell
jlab-mcp stop
```

CLI Commands
| Command | Description |
| --- | --- |
| `jlab-mcp start [TIME]` | Submit SLURM job and wait until ready. Optional `TIME` overrides the default time limit (4h) |
| `jlab-mcp stop` | Cancel the SLURM job |
| | Poll status (check from another terminal) |
| | Print server state, active kernels, and GPU memory |
| | Run MCP server (used by Claude Code, not run manually) |
The SLURM job survives Claude Code restarts. You only need to run jlab-mcp start once per work session.
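The `TIME` argument uses SLURM's duration syntax, `[D-]HH:MM:SS` in the examples above. A small parser sketch for just those forms — illustrative only, and ignoring the other variants SLURM accepts (such as bare minutes):

```python
def slurm_time_to_seconds(spec: str) -> int:
    # Handles [D-]HH:MM:SS; "1-00:00:00" and "24:00:00" both mean one day.
    days = 0
    if "-" in spec:
        day_part, spec = spec.split("-", 1)
        days = int(day_part)
    h, m, s = (int(x) for x in spec.split(":"))
    return ((days * 24 + h) * 3600) + m * 60 + s
```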
Configuration
All settings are configurable via environment variables. No values are hardcoded for a specific cluster.
| Environment Variable | Default | Description |
| --- | --- | --- |
| | | Base working directory |
| | | Notebook storage (relative to cwd) |
| | | SLURM job logs |
| | | Connection info files |
| `JLAB_MCP_SLURM_PARTITION` | | SLURM partition |
| `JLAB_MCP_SLURM_GRES` | | SLURM generic resource |
| `JLAB_MCP_SLURM_CPUS` | | CPUs per task |
| `JLAB_MCP_SLURM_MEM` | | Memory in MB |
| `JLAB_MCP_SLURM_TIME` | | Wall clock time limit |
| `JLAB_MCP_SLURM_MODULES` | (empty) | Space-separated modules to load (e.g. `cuda/12.6`) |
| | | Port range lower bound |
| | | Port range upper bound |
| `JLAB_MCP_RUN_MODE` | (auto) | Run mode (`slurm` or `local`) |
| | | Bind address for local mode |
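Reading this kind of configuration typically reduces to `os.environ.get` with fallbacks. A hedged sketch using only the variable names confirmed in this README — the defaults below are placeholders (except the 4h time limit, which matches the docs), not jlab-mcp's real defaults:

```python
import os

def slurm_config() -> dict:
    # Illustrative defaults only; jlab-mcp's actual defaults may differ.
    return {
        "partition": os.environ.get("JLAB_MCP_SLURM_PARTITION", "gpu"),
        "gres": os.environ.get("JLAB_MCP_SLURM_GRES", "gpu:1"),
        "cpus": int(os.environ.get("JLAB_MCP_SLURM_CPUS", "4")),
        "mem_mb": int(os.environ.get("JLAB_MCP_SLURM_MEM", "16000")),
        "time": os.environ.get("JLAB_MCP_SLURM_TIME", "04:00:00"),
        "modules": os.environ.get("JLAB_MCP_SLURM_MODULES", "").split(),
    }
```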
Example: Cluster with A100 GPUs and CUDA module
```shell
export JLAB_MCP_SLURM_PARTITION=gpu1
export JLAB_MCP_SLURM_GRES=gpu:a100:1
export JLAB_MCP_SLURM_CPUS=18
export JLAB_MCP_SLURM_MEM=125000
export JLAB_MCP_SLURM_TIME=1-00:00:00
export JLAB_MCP_SLURM_MODULES="cuda/12.6"
```

Claude Code Integration
Add to `~/.claude.json` or project `.mcp.json`:
```json
{
  "mcpServers": {
    "jlab-mcp": {
      "command": "jlab-mcp",
      "env": {
        "JLAB_MCP_SLURM_PARTITION": "gpu1",
        "JLAB_MCP_SLURM_GRES": "gpu:a100:1",
        "JLAB_MCP_SLURM_MODULES": "cuda/12.6"
      }
    }
  }
}
```

The MCP server uses the working directory to find `.venv` for the compute node. Claude Code launches from your project directory, so it picks up the right venv automatically.
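The venv lookup described above amounts to checking for `.venv` under the working directory. An illustrative approximation (POSIX `bin/python` layout assumed; not jlab-mcp's actual code):

```python
from pathlib import Path

def find_venv_python(project_dir: str):
    # Returns the project's venv interpreter if it exists, else None.
    candidate = Path(project_dir) / ".venv" / "bin" / "python"
    return candidate if candidate.exists() else None
```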
MCP Tools
| Tool | Description |
| --- | --- |
| `start_new_notebook` | Start kernel on shared server, create empty notebook |
| `start_notebook` | Attach fresh kernel to existing notebook, returns cell contents |
| `execute_code` | Insert new code cell and execute it (supports positional insertion) |
| | Edit cell source only, no execution (clears stale outputs) |
| | Run existing cell without modifying its source |
| `add_markdown` | Add markdown cell to notebook (supports positional insertion) |
| | Run code on a utility kernel (no notebook save, no session state) |
| | Interrupt running execution without shutting down the session |
| `shutdown_session` | Stop kernel (SLURM job stays alive for other sessions) |
| | Lightweight health check — verify JupyterLab is reachable (no kernel needed) |
| | Check CPU, memory, and GPU usage on the compute node (no session needed) |
Resource: jlab-mcp://server/status — returns shared server info and active sessions.
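Under the hood, Claude Code invokes these tools through MCP's JSON-RPC `tools/call` method. A request for `execute_code` might look like the following — the argument names are assumptions for illustration, not jlab-mcp's documented schema:

```json
{
  "jsonrpc": "2.0",
  "id": 7,
  "method": "tools/call",
  "params": {
    "name": "execute_code",
    "arguments": { "code": "print('hello from the GPU node')" }
  }
}
```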
Session Lifecycle
- `start_new_notebook`: Creates a new kernel and a new notebook
- `start_notebook`: Attaches a fresh kernel to an existing notebook
- Restart kernel: `shutdown_session` + `start_notebook` (same path) = fresh kernel on the same notebook
- `shutdown_session`: Kills the kernel only. The SLURM job keeps running.
- SLURM job dies: The next tool call returns an error. Run `jlab-mcp start` to restart.
Testing
```shell
# Unit tests (no SLURM needed)
uv run python -m pytest tests/test_slurm.py tests/test_notebook.py tests/test_image_utils.py -v

# Integration tests (requires running `jlab-mcp start` first)
uv run python -m pytest tests/test_tools.py -v -s --timeout=600
```

Acknowledgments
This project is inspired by goodfire-ai/scribe, which provides MCP-based notebook code execution for Claude. The tool interface design, image resizing approach, and notebook management patterns are adapted from scribe for use on HPC/SLURM clusters.
License
MIT