jlab-mcp

A Model Context Protocol (MCP) server that enables Claude Code to execute Python code on GPU compute nodes via JupyterLab running on a SLURM cluster.

Inspired by and adapted from goodfire-ai/scribe, which provides notebook-based code execution for Claude. This project carries that approach over to HPC/SLURM environments, where GPU resources are allocated through a job scheduler.

Architecture

```
Claude Code (login node)
      ↕ stdio
MCP Server (login node)
      ↕ HTTP/WebSocket
JupyterLab (compute node, via sbatch)
      ↕
IPython Kernel (GPU access)
```

Login and compute nodes share a filesystem. The MCP server submits a SLURM job that starts JupyterLab on a compute node, then communicates with it over HTTP/WebSocket. Connection info (hostname, port, token) is exchanged via a file on the shared filesystem.
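For illustration, the connection-info exchange could look like the sketch below: the sbatch job writes a small JSON file to the shared connection directory, and the MCP server polls for it. The filename, fields, and polling logic here are assumptions made for the sketch, not necessarily what jlab-mcp writes.

```python
import json
import time
from pathlib import Path

# JLAB_MCP_CONNECTION_DIR (see Configuration below)
CONNECTION_DIR = Path("~/.jlab-mcp/connections").expanduser()


def wait_for_connection_info(job_id: str, timeout: float = 600.0) -> dict:
    """Poll the shared filesystem until the compute node reports its connection info.

    Hypothetical layout: one JSON file per SLURM job, e.g.
    ~/.jlab-mcp/connections/<job_id>.json containing
    {"hostname": "gpu-node-07", "port": 18042, "token": "..."}.
    """
    info_file = CONNECTION_DIR / f"{job_id}.json"
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if info_file.exists():
            return json.loads(info_file.read_text())
        time.sleep(2.0)  # the job may wait in the SLURM queue before starting
    raise TimeoutError(f"JupyterLab job {job_id} never reported connection info")
```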

Setup

```bash
# Clone and install
git clone https://github.com/kdkyum/jlab-mcp.git
cd jlab-mcp
uv sync

# Install PyTorch separately (GPU support, not in pyproject.toml)
uv pip install torch --index-url https://download.pytorch.org/whl/cu126
```

Configuration

All settings are configurable via environment variables. No values are hardcoded for a specific cluster.

| Environment Variable | Default | Description |
| --- | --- | --- |
| `JLAB_MCP_DIR` | `~/.jlab-mcp` | Base working directory |
| `JLAB_MCP_NOTEBOOK_DIR` | `~/.jlab-mcp/notebooks` | Notebook storage |
| `JLAB_MCP_LOG_DIR` | `~/.jlab-mcp/logs` | SLURM job logs |
| `JLAB_MCP_CONNECTION_DIR` | `~/.jlab-mcp/connections` | Connection info files |
| `JLAB_MCP_SLURM_PARTITION` | `gpu` | SLURM partition |
| `JLAB_MCP_SLURM_GRES` | `gpu:1` | SLURM generic resource |
| `JLAB_MCP_SLURM_CPUS` | `4` | CPUs per task |
| `JLAB_MCP_SLURM_MEM` | `32000` | Memory in MB |
| `JLAB_MCP_SLURM_TIME` | `4:00:00` | Wall-clock time limit |
| `JLAB_MCP_SLURM_MODULES` | (empty) | Space-separated modules to load (e.g. `cuda/12.6`) |
| `JLAB_MCP_PORT_MIN` | `18000` | Port range lower bound |
| `JLAB_MCP_PORT_MAX` | `19000` | Port range upper bound |

Example: Cluster with A100 GPUs and CUDA module

```bash
export JLAB_MCP_SLURM_PARTITION=gpu1
export JLAB_MCP_SLURM_GRES=gpu:a100:1
export JLAB_MCP_SLURM_CPUS=18
export JLAB_MCP_SLURM_MEM=125000
export JLAB_MCP_SLURM_TIME=1-00:00:00
export JLAB_MCP_SLURM_MODULES="cuda/12.6"
```
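
For intuition, settings like these translate into an sbatch script along the lines of the sketch below. The function name, defaults, and exact `jupyter lab` invocation are illustrative assumptions, not the script jlab-mcp actually generates.

```python
import os


def build_sbatch_script(port: int, token: str) -> str:
    """Illustrative batch script assembled from the JLAB_MCP_SLURM_* settings."""
    cfg = os.environ.get
    modules = cfg('JLAB_MCP_SLURM_MODULES', '')
    module_lines = "\n".join(f"module load {m}" for m in modules.split())
    return f"""#!/bin/bash
#SBATCH --partition={cfg('JLAB_MCP_SLURM_PARTITION', 'gpu')}
#SBATCH --gres={cfg('JLAB_MCP_SLURM_GRES', 'gpu:1')}
#SBATCH --cpus-per-task={cfg('JLAB_MCP_SLURM_CPUS', '4')}
#SBATCH --mem={cfg('JLAB_MCP_SLURM_MEM', '32000')}
#SBATCH --time={cfg('JLAB_MCP_SLURM_TIME', '4:00:00')}

{module_lines}
jupyter lab --no-browser --ip=0.0.0.0 --port={port} --IdentityProvider.token={token}
"""
```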

Claude Code Integration

Add to ~/.claude.json or project .mcp.json:

{ "mcpServers": { "jlab-mcp": { "command": "uv", "args": ["run", "--directory", "/path/to/jlab-mcp", "python", "-m", "jlab_mcp"], "env": { "JLAB_MCP_SLURM_PARTITION": "gpu1", "JLAB_MCP_SLURM_GRES": "gpu:a100:1", "JLAB_MCP_SLURM_MODULES": "cuda/12.6" } } } }

MCP Tools

| Tool | Description |
| --- | --- |
| `start_new_session` | Submit SLURM job, start kernel, create empty notebook |
| `start_session_resume_notebook` | Resume existing notebook (re-executes all cells) |
| `start_session_continue_notebook` | Fork notebook with fresh kernel |
| `execute_code` | Run Python code, append cell to notebook |
| `edit_cell` | Edit and re-execute a cell (supports negative indexing) |
| `add_markdown` | Add markdown cell to notebook |
| `shutdown_session` | Stop kernel, cancel SLURM job, clean up |

Resource: `jlab-mcp://server/status` — returns active sessions and job states.
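
Claude Code calls these tools automatically, but any MCP client can drive them. Below is a minimal sketch using the official MCP Python SDK; the tool argument names (`code`, etc.) are assumptions — check the schemas returned by `list_tools()` for the real parameter names.

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

# Launch the server the same way Claude Code would (see the config above).
server = StdioServerParameters(
    command="uv",
    args=["run", "--directory", "/path/to/jlab-mcp", "python", "-m", "jlab_mcp"],
    env={"JLAB_MCP_SLURM_PARTITION": "gpu1", "JLAB_MCP_SLURM_GRES": "gpu:a100:1"},
)


async def main() -> None:
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # Inspect the advertised tools and their input schemas.
            tools = await session.list_tools()
            print([t.name for t in tools.tools])

            # NOTE: argument names below are illustrative guesses; use the
            # schemas from list_tools() for the real parameter names.
            await session.call_tool("start_new_session", {})
            result = await session.call_tool(
                "execute_code",
                {"code": "import torch; print(torch.cuda.is_available())"},
            )
            print(result.content)

            await session.call_tool("shutdown_session", {})


asyncio.run(main())
```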

Testing

```bash
# Unit tests (no SLURM needed)
uv run python -m pytest tests/test_slurm.py tests/test_notebook.py tests/test_image_utils.py -v

# Integration tests (requires SLURM cluster)
uv run python -m pytest tests/test_tools.py -v -s --timeout=300
```

Acknowledgments

This project is inspired by goodfire-ai/scribe, which provides MCP-based notebook code execution for Claude. The tool interface design, image resizing approach, and notebook management patterns are adapted from scribe for use on HPC/SLURM clusters.

License

MIT
