- Executes Python code on GPU compute nodes by managing JupyterLab instances and IPython kernels in a SLURM-managed environment.
- Adds Markdown cells to notebooks so documentation and structure live alongside executed code.
- Provides tools for executing Python code, managing session kernels, and manipulating notebook cells on high-performance compute clusters.
- Supports GPU-accelerated workloads such as PyTorch on compute nodes allocated via job schedulers.
# jlab-mcp
A Model Context Protocol (MCP) server that enables Claude Code to execute Python code on GPU compute nodes via JupyterLab running on a SLURM cluster.
Inspired by and adapted from goodfire-ai/scribe, which provides notebook-based code execution for Claude. This project adapts that approach for HPC/SLURM environments where GPU resources are allocated via job schedulers.
## Architecture
Login and compute nodes share a filesystem. The MCP server submits a SLURM job that starts JupyterLab on a compute node, then communicates with it over HTTP/WebSocket. Connection info (hostname, port, token) is exchanged via a file on the shared filesystem.
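The login-node side of this handshake can be sketched roughly as follows. This is an illustrative sketch only: the function name, file format, and field names (`hostname`, `port`, `token`) are assumptions, not the project's actual API.

```python
import json
import pathlib
import time

def wait_for_connection_info(path, timeout=300.0, poll=2.0):
    """Poll a file on the shared filesystem until the SLURM job
    running JupyterLab writes its connection info, then parse it.

    Assumed file shape: {"hostname": ..., "port": ..., "token": ...}
    """
    deadline = time.monotonic() + timeout
    p = pathlib.Path(path)
    while time.monotonic() < deadline:
        if p.exists():
            return json.loads(p.read_text())
        time.sleep(poll)
    raise TimeoutError(f"no connection info at {path} after {timeout}s")
```

Once the file appears, the MCP server can open HTTP/WebSocket connections to `hostname:port` using the token for authentication.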
## Setup

### Configuration
All settings are configurable via environment variables. No values are hardcoded for a specific cluster.
| Environment Variable | Default | Description |
|---|---|---|
| | | Base working directory |
| | | Notebook storage |
| | | SLURM job logs |
| | | Connection info files |
| | | SLURM partition |
| | | SLURM generic resource |
| | | CPUs per task |
| | | Memory in MB |
| | | Wall clock time limit |
| | (empty) | Space-separated modules to load |
| | | Port range lower bound |
| | | Port range upper bound |
### Example: Cluster with A100 GPUs and CUDA module
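A configuration for such a cluster might look like the following. The variable names below are hypothetical placeholders; consult the project's documentation for the actual names, since they are not reproduced here.

```shell
# Hypothetical variable names -- substitute the project's actual ones.
export JLAB_MCP_PARTITION=gpu          # SLURM partition with A100 nodes
export JLAB_MCP_GRES=gpu:a100:1        # SLURM generic resource request
export JLAB_MCP_MODULES="cuda/12.1"    # modules loaded before starting JupyterLab
```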
## Claude Code Integration
Add to `~/.claude.json` or a project-level `.mcp.json`:
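An entry might look like the following sketch. The `command`, `args`, and module name are placeholders; the actual launch command depends on how the package is installed.

```json
{
  "mcpServers": {
    "jlab-mcp": {
      "command": "/path/to/venv/bin/python",
      "args": ["-m", "jlab_mcp"]
    }
  }
}
```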
## MCP Tools
| Tool | Description |
|---|---|
| | Submit SLURM job, start kernel, create empty notebook |
| | Resume an existing notebook (re-executes all cells) |
| | Fork a notebook with a fresh kernel |
| | Run Python code, append cell to notebook |
| | Edit and re-execute a cell (supports negative indexing) |
| | Add a markdown cell to the notebook |
| | Stop kernel, cancel SLURM job, clean up |
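The cell-editing tool's negative indexing is described only as "supported"; assuming it follows Python list semantics, the resolution logic might look like this sketch (function name hypothetical):

```python
def resolve_cell_index(index: int, num_cells: int) -> int:
    """Map a possibly negative cell index to a 0-based position.

    Python list semantics: -1 is the last cell, -2 the second-to-last,
    and so on. Raises IndexError when the index is out of range.
    """
    if index < 0:
        index += num_cells
    if not 0 <= index < num_cells:
        raise IndexError(f"cell index out of range for {num_cells} cells")
    return index
```

Under this convention, editing cell `-1` in a 5-cell notebook targets cell 4.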
Resource: `jlab-mcp://server/status` returns active sessions and job states.
## Testing
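The repository's actual test setup is not described here; assuming a standard Python project layout, a plausible invocation would be:

```shell
# Assumes pytest; integration tests that submit SLURM jobs likely
# require a cluster and may be skipped in local runs.
pytest
```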
## Acknowledgments
This project is inspired by goodfire-ai/scribe, which provides MCP-based notebook code execution for Claude. The tool interface design, image resizing approach, and notebook management patterns are adapted from scribe for use on HPC/SLURM clusters.
## License
MIT