ns-hpc — Namespaced HPC MCP Server
A Python-based Model Context Protocol (MCP) server that provides a secure, sandboxed interface for LLM agents to interact with an HPC cluster. The core isolation mechanism is bubblewrap (bwrap), used for unprivileged user-namespace isolation.
Architecture
LLM Agent (Claude, etc.)
│ MCP over STDIO (SSH)
▼
┌─────────────────────────────┐
│ ns-hpc MCP Server │
│ ┌───────────────────────┐ │
│ │ Managed MCP Proxy │──┼──► child MCP servers (filesystem, git, …)
│ └───────────────────────┘ │ inside bwrap container
│ ┌───────────────────────┐ │
│ │ Instance Manager │──┼──► ~/mcp_instances/{id}/workspace/
│ └───────────────────────┘ │ + metadata.json + audit.log
│ ┌───────────────────────┐ │
│ │ Task Manager │──┼──► local (Popen + bwrap)
│ │ │ │ or Slurm (sbatch + bwrap)
│ └───────────────────────┘ │
└─────────────────────────────┘

Requirements
- Python ≥ 3.11
- bubblewrap (bwrap) — install with apt install bubblewrap or dnf install bubblewrap
- User namespaces enabled — sysctl kernel.unprivileged_userns_clone=1
- Slurm (optional) — for submitting jobs to the cluster
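To check the first three requirements by hand (uv run ns-hpc doctor below performs the full diagnostic), a quick smoke test might look like this:

# Confirm bwrap is present, then run a trivial command in an unprivileged sandbox
bwrap --version
bwrap --ro-bind / / --unshare-all -- true && echo "user namespaces OK"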
Quick Start
# Install ns-hpc
cd ns-hpc
uv sync
# Run diagnostics
uv run ns-hpc doctor
# Start the MCP server (over STDIO — connect via SSH)
uv run ns-hpc run
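The server speaks MCP over STDIO, so an MCP client on your workstation can launch it through SSH as its transport command. A minimal sketch, assuming a login node reachable as hpc-login and a checkout at ~/ns-hpc (both hypothetical):

# Configure this as the command an MCP client runs for its STDIO transport
ssh hpc-login "cd ~/ns-hpc && uv run ns-hpc run"

CLI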
ns-hpc run # Start the MCP server over STDIO
ns-hpc doctor # Check bwrap, namespaces, and Slurm availability
ns-hpc --version # Show version

Configuration
Edit config.toml to customize the following; a sketch follows the list.

- namespace_defaults — bwrap flags (read-only dirs, dev/proc/tmpfs, environment)
- proxied_mcps — child MCP servers to spawn inside bwrap containers
- resource_defaults — Slurm walltime, CPUs, memory defaults
- data_dir — where instance workspaces are stored (default: ~/mcp_instances)
- context_dir — directory with Markdown documentation files
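A minimal sketch of what config.toml might look like. Only the five top-level names above come from the documentation; every field inside them is an assumption and may differ from the real schema (see the Pydantic models in config.py):

# Hypothetical field names throughout
data_dir = "~/mcp_instances"
context_dir = "context"

[namespace_defaults]
read_only_dirs = ["/usr", "/lib", "/etc"]   # assumed name for the ro-bind list
mount_dev_proc_tmpfs = true                 # assumed toggle for dev/proc/tmpfs
env = { PATH = "/usr/bin:/bin" }

[resource_defaults]
walltime = "01:00:00"
cpus = 4
mem = "8G"

[[proxied_mcps]]
name = "filesystem"
command = "npx"
args = ["-y", "@modelcontextprotocol/server-filesystem", "/workspace"]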
MCP Tools
The server exposes tools to do the following; a client-side sketch follows the list.

- Create a bwrap-sandboxed workspace
- Remove a workspace and all its data
- List all active instances
- Read the audit trail for an instance
- Execute a command inside bwrap (local or Slurm)
- Query task status and output
- List tasks for an instance
- Cancel a running task
- List available HPC documentation
- Read a documentation file
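Since the tool names are defined by the server, a client can discover them at runtime rather than hard-coding them. A minimal sketch using the official mcp Python SDK, assuming the server is launched locally from the repository root (nothing here depends on ns-hpc's tool schema):

import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main() -> None:
    # Launch ns-hpc as a child process and talk MCP over its stdio
    params = StdioServerParameters(command="uv", args=["run", "ns-hpc", "run"])
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # Discover the tool names instead of assuming them
            tools = await session.list_tools()
            for tool in tools.tools:
                print(tool.name, "-", tool.description)

asyncio.run(main())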
Security
- All commands run inside a bwrap user namespace — no root required (an illustrative invocation follows this list).
- The audit log is written by the host process, never from inside the sandbox.
- Network is disabled by default (--unshare-net).
- Instance isolation via per-instance directories with workspace bind mounts.
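For illustration only, the argument list that bwrap_builder.py assembles is roughly of this shape; the paths, flag set, and the instance id abc123 are all hypothetical:

# A hand-written approximation of a sandboxed run, not the actual flag set
bwrap \
  --unshare-all \
  --die-with-parent \
  --ro-bind /usr /usr --ro-bind /lib /lib --ro-bind /etc /etc \
  --bind ~/mcp_instances/abc123/workspace /workspace \
  --proc /proc --dev /dev --tmpfs /tmp \
  --chdir /workspace \
  -- bash -c 'hostname && ls /workspace'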
Project Structure
ns-hpc/
├── config.toml          # Main configuration
├── context/             # HPC documentation (exposed as MCP resources)
├── pyproject.toml       # Project metadata & dependencies
└── src/ns_hpc/
    ├── __init__.py
    ├── __main__.py
    ├── cli.py           # CLI entry point (run, doctor)
    ├── config.py        # Pydantic config models + TOML loader
    ├── server.py        # MCP server with all tool handlers + proxy
    └── core/
        ├── bwrap_builder.py     # bwrap argument list construction
        ├── instance_manager.py  # Workspace CRUD + audit log
        └── task_manager.py      # Local & Slurm task execution