ns-hpc — Namespaced HPC MCP Server

A Python-based Model Context Protocol (MCP) server that provides a secure, sandboxed interface for LLM agents to interact with an HPC cluster. The core isolation mechanism is bubblewrap (bwrap), which provides unprivileged user-namespace isolation.

Architecture

LLM Agent (Claude, etc.)
    │  MCP over STDIO (SSH)
    ▼
┌─────────────────────────────┐
│      ns-hpc MCP Server      │
│  ┌───────────────────────┐  │
│  │   Managed MCP Proxy   │──┼──► child MCP servers (filesystem, git, …)
│  └───────────────────────┘  │       inside bwrap container
│  ┌───────────────────────┐  │
│  │    Instance Manager   │──┼──► ~/mcp_instances/{id}/workspace/
│  └───────────────────────┘  │       + metadata.json + audit.log
│  ┌───────────────────────┐  │
│  │      Task Manager     │──┼──► local (Popen + bwrap)
│  │                       │  │    or Slurm (sbatch + bwrap)
│  └───────────────────────┘  │
└─────────────────────────────┘

Requirements

  • Python ≥ 3.11

  • bubblewrap (bwrap) — install: apt install bubblewrap or dnf install bubblewrap

  • User namespaces enabled — sysctl kernel.unprivileged_userns_clone=1

  • Slurm (optional) — for submitting jobs to the cluster
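The prerequisite checks that ns-hpc doctor performs can be sketched roughly as follows. This is a hypothetical illustration only — the real implementation lives in src/ns_hpc/cli.py and the doctor() name here is an assumption:

```python
# Hypothetical sketch of the checks `ns-hpc doctor` performs.
# The function name and report format are illustrative, not the real API.
import shutil
import sys

def doctor() -> dict[str, bool]:
    """Report availability of each prerequisite."""
    return {
        "python>=3.11": sys.version_info >= (3, 11),
        "bwrap": shutil.which("bwrap") is not None,       # bubblewrap binary
        "slurm (optional)": shutil.which("sbatch") is not None,
    }

if __name__ == "__main__":
    for name, ok in doctor().items():
        print(f"{name}: {'ok' if ok else 'MISSING'}")
```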

Quick Start

# Install ns-hpc
cd ns-hpc
uv sync

# Run diagnostics
uv run ns-hpc doctor

# Start the MCP server (over STDIO — connect via SSH)
uv run ns-hpc run
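Because the server speaks MCP over STDIO, an MCP client can launch it remotely via SSH. A hypothetical client configuration sketch — the exact config-file shape depends on your MCP client, and the hostname and paths here are placeholders:

```json
{
  "mcpServers": {
    "ns-hpc": {
      "command": "ssh",
      "args": ["user@cluster", "uv", "--directory", "ns-hpc", "run", "ns-hpc", "run"]
    }
  }
}
```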

CLI

ns-hpc run       # Start the MCP server over STDIO
ns-hpc doctor    # Check bwrap, namespaces, and Slurm availability
ns-hpc --version # Show version

Configuration

Edit config.toml to customize:

  • namespace_defaults — bwrap flags (read-only dirs, dev/proc/tmpfs, environment)

  • proxied_mcps — child MCP servers to spawn inside bwrap containers

  • resource_defaults — Slurm walltime, CPUs, memory defaults

  • data_dir — where instance workspaces are stored (default: ~/mcp_instances)

  • context_dir — directory with Markdown documentation files
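An illustrative config.toml sketch covering the sections above. Only the top-level option names come from this document; every key inside the tables, and the proxied server command, are assumptions for illustration:

```toml
# Illustrative shape only — nested key names are guesses, not the real schema.
data_dir = "~/mcp_instances"
context_dir = "context"

[namespace_defaults]
read_only_dirs = ["/usr", "/lib"]
mount_proc = true
mount_dev = true
tmpfs = ["/tmp"]

[resource_defaults]
walltime = "01:00:00"
cpus = 4
memory = "8G"

[[proxied_mcps]]
name = "filesystem"
command = ["npx", "@modelcontextprotocol/server-filesystem", "/workspace"]
```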

MCP Tools

Tool                  Description
create_instance       Create a bwrap-sandboxed workspace
destroy_instance      Remove a workspace and all its data
list_instances        List all active instances
read_audit_log        Read the audit trail for an instance
run_command           Execute a command inside bwrap (local or Slurm)
get_task              Query task status and output
list_tasks            List tasks for an instance
cancel_task           Cancel a running task
list_context_files    List available HPC documentation
read_context_file     Read a documentation file
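Tools are invoked through the standard MCP tools/call request over STDIO. A sketch of what a run_command call might look like — the argument names (instance_id, command, backend) are assumptions, not the server's documented schema:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "run_command",
    "arguments": {
      "instance_id": "abc123",
      "command": "make -j4",
      "backend": "slurm"
    }
  }
}
```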

Security

  • All commands run inside a bwrap user namespace — no root required.

  • The audit log is written by the host process, never from inside the sandbox.

  • Network is disabled by default (--unshare-net).

  • Instance isolation via per-instance directories with workspace bind mounts.
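The security properties above map naturally onto a bwrap argument list. A hypothetical sketch of how such a list might be assembled — the real logic lives in core/bwrap_builder.py, and the function name, bind-mount choices, and /workspace path here are assumptions:

```python
# Hypothetical sketch of bwrap argument construction enforcing the
# defaults above; the real builder is core/bwrap_builder.py.
def build_bwrap_args(workspace: str, command: list[str],
                     allow_net: bool = False) -> list[str]:
    args = [
        "bwrap",
        "--unshare-user",                   # unprivileged user namespace
        "--unshare-pid",                    # fresh PID namespace
        "--die-with-parent",                # tear down sandbox with the host process
        "--ro-bind", "/usr", "/usr",        # read-only system directories
        "--ro-bind", "/lib", "/lib",
        "--proc", "/proc",
        "--dev", "/dev",
        "--tmpfs", "/tmp",
        "--bind", workspace, "/workspace",  # writable per-instance workspace
        "--chdir", "/workspace",
    ]
    if not allow_net:
        args.append("--unshare-net")        # network disabled by default
    return args + ["--"] + command
```

The returned list can be handed directly to subprocess.Popen on the host, which keeps audit logging outside the sandbox.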

Project Structure

ns-hpc/
├── config.toml              # Main configuration
├── context/                 # HPC documentation (exposed as MCP resources)
├── pyproject.toml           # Project metadata & dependencies
└── src/ns_hpc/
    ├── __init__.py
    ├── __main__.py
    ├── cli.py               # CLI entry point (run, doctor)
    ├── config.py            # Pydantic config models + TOML loader
    ├── server.py            # MCP server with all tool handlers + proxy
    └── core/
        ├── bwrap_builder.py  # bwrap argument list construction
        ├── instance_manager.py # Workspace CRUD + audit log
        └── task_manager.py  # Local & Slurm task execution