
Synapse MCP

MCP (Model Context Protocol) server providing Flux (Docker management) and Scout (SSH operations) tools for homelab infrastructure. The neural connection point for your distributed systems.

Designed for use with Claude Code and other MCP-compatible clients.

Installation

```bash
# Add the synapse marketplace
/plugin marketplace add jmagar/synapse-mcp

# Install the synapse-mcp plugin
/plugin install synapse-mcp@synapse
```

What you get:

  • ✅ /flux and /scout commands

  • ✅ Auto-configured MCP server

  • ✅ Complete documentation and examples

  • ✅ SSH host auto-discovery

Usage

```bash
# List Docker containers
/flux list containers

# Check SSH hosts
/scout list hosts

# Monitor system resources
/flux show resources
```

Direct MCP Server Setup

For non-Claude Code MCP clients, see Transport Quick Start below.

Transport Quick Start

Choose one:

  1. Local use: stdio (default)

  2. Secure remote with minimal setup: stdio over SSH

  3. Remote HTTP: API key auth and/or Tailscale Serve auth

See docs/TRANSPORTS.md for exact setup and configs for all transport modes.

Features

Flux Tool (Docker Infrastructure Management)

  • Container lifecycle: Start, stop, restart, pause/resume, pull, recreate, exec

  • Docker Compose: Full project management (up, down, restart, logs, build, pull, recreate)

  • Image operations: List, pull, build, remove Docker images

  • Host operations: Status checks, resource monitoring, systemd services, network info

  • Log retrieval: Advanced filtering with time ranges, grep (safe patterns only), stream selection

  • Resource monitoring: Real-time CPU, memory, network, I/O statistics

  • Smart search: Find containers by name, image, or labels across all hosts

  • Pagination & filtering: All list operations support limits, offsets, and filtering

Scout Tool (SSH Remote Operations)

  • File operations: Read files, directory trees, file transfer (beam), diff comparison

  • Remote execution: Execute commands with allowlist security

  • Process monitoring: List and filter processes by user, CPU, memory

  • ZFS management: Pools, datasets, snapshots with health monitoring

  • System logs: Access syslog, journald, dmesg, auth logs with filtering (safe grep patterns only)

  • Disk monitoring: Filesystem usage across all mounts

  • Multi-host operations: Execute commands or read files across multiple hosts (emit)

Infrastructure

  • Multi-host support: Manage Docker and SSH across Unraid, Proxmox, bare metal

  • Auto-detect local Docker: Automatically adds local Docker socket if available

  • Dual transport: stdio for Claude Code, HTTP for remote access

  • O(1) validation: Discriminated union pattern for instant schema validation

  • SSH connection pooling: 50× faster repeated operations

Tools

The server provides two powerful tools with discriminated union schemas for O(1) validation:
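The discriminated-union idea can be sketched as follows. This is a hypothetical illustration (the type and field names are simplified stand-ins, not the server's actual schema definitions): dispatch on the `action` tag is a single property lookup, so validation cost stays O(1) no matter how many variants exist.

```typescript
// Sketch of discriminated-union validation (hypothetical shapes for
// illustration; not the server's actual schema definitions).
type FluxRequest =
  | { action: "container"; subaction: string; container_id?: string }
  | { action: "compose"; subaction: string; project: string }
  | { action: "host"; subaction: string; host: string };

// One property lookup on the `action` tag selects the right sub-validator,
// instead of trying every variant in turn.
const validators: Record<string, (r: Record<string, unknown>) => boolean> = {
  container: (r) => typeof r.subaction === "string",
  compose: (r) => typeof r.subaction === "string" && typeof r.project === "string",
  host: (r) => typeof r.subaction === "string" && typeof r.host === "string",
};

function validateFlux(req: Record<string, unknown>): boolean {
  const check = validators[String(req.action)];
  return check !== undefined && check(req);
}
```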

Available Tools

flux

Docker infrastructure management - container, compose, docker, and host operations

scout

SSH remote operations - file, process, and system inspection

Getting Help

Both tools include auto-generated help:

```json
{ "action": "help" }
{ "action": "help", "topic": "container:resume" }
{ "action": "help", "format": "json" }
```

Breaking change from V2: The unified tool has been completely removed and replaced with flux and scout.


Tool 1: flux - Docker Infrastructure Management

43 operations across 5 actions - Container lifecycle, compose orchestration, system management

FLUX OPERATIONS:

Container (14 operations)

  ● exec - Execute command inside a container
  ● inspect - Get detailed container information
  ● list - List containers with optional filtering
  ● logs - Get container logs with optional filtering
  ● pause - Pause a running container
  ● pull - Pull latest image for a container
  ⚠️ recreate - Recreate a container with optional image pull
  ● restart - Restart a container
  ● resume - Resume a paused container
  ● search - Search containers by query string
  ● start - Start a stopped container
  ● stats - Get resource usage statistics
  ● stop - Stop a running container
  ● top - Show running processes in a container

Compose (10 operations)

  ● build - Build Docker Compose project images
  ⚠️ down - Stop a Docker Compose project
  ● list - List all Docker Compose projects
  ● logs - Get Docker Compose project logs
  ● pull - Pull Docker Compose project images
  ⚠️ recreate - Recreate Docker Compose project containers
  ● refresh - Refresh compose project cache by scanning filesystem
  ● restart - Restart a Docker Compose project
  ● status - Get Docker Compose project status
  ● up - Start a Docker Compose project

Docker (9 operations)

  ● build - Build a Docker image
  ● df - Get Docker disk usage information
  ● images - List Docker images
  ● info - Get Docker daemon information
  ● networks - List Docker networks
  ⚠️ prune - Remove unused Docker resources
  ● pull - Pull a Docker image
  ⚠️ rmi - Remove a Docker image
  ● volumes - List Docker volumes

Host (9 operations)

  ✓ doctor - Run diagnostic checks on host Docker configuration
  ● info - Get OS, kernel, architecture, and hostname information
  ● mounts - Get mounted filesystems
  ● network - Get network interfaces and IP addresses
  ● ports - List all port mappings for containers on a host
  ● resources - Get CPU, memory, and disk usage via SSH
  ● services - Get systemd service status
  ✓ status - Check Docker connectivity to host
  ● uptime - Get system uptime


Tool 2: scout - SSH Remote Operations

16 operations across 11 actions - File operations, process inspection, system logs

SCOUT OPERATIONS:

Simple Actions (9 operations)

  ● beam - File transfer between local and remote hosts
  ● delta - Compare files or content between locations
  ● df - Disk usage information for a remote host
  ● emit - Multi-host operations
  ● exec - Execute command on a remote host
  ● find - Find files by glob pattern on a remote host
  ● nodes - List all configured SSH hosts
  ● peek - Read file or directory contents on a remote host
  ● ps - List and search processes on a remote host

ZFS (3 operations)

  ● pools - List ZFS storage pools
  ● datasets - List ZFS datasets
  ● snapshots - List ZFS snapshots

Logs (4 operations)

  ● syslog - Access system log files (/var/log)
  ● journal - Access systemd journal logs
  ● dmesg - Access kernel ring buffer logs
  ● auth - Access authentication logs

Legend:

  • ● State-changing operation

  • ⚠️ Destructive operation (requires force: true)

  • ✓ Diagnostic/health check

  • → Port mapping notation (host→container/protocol)

Simple Actions (9)

| Action | Description |
|--------|-------------|
| nodes | List all configured SSH hosts |
| peek | Read file or directory contents (with tree mode) |
| exec | Execute command on remote host (allowlist validated) |
| find | Find files by glob pattern |
| delta | Compare files or content between locations |
| emit | Multi-host operations (read files or execute commands) |
| beam | File transfer between local/remote or remote/remote |
| ps | List and search processes with filtering |
| df | Disk usage information |

ZFS Operations (action: "zfs") - 3 subactions

| Subaction | Description |
|-----------|-------------|
| pools | List ZFS storage pools with health status |
| datasets | List ZFS datasets (filesystems and volumes) |
| snapshots | List ZFS snapshots |

Log Operations (action: "logs") - 4 subactions

| Subaction | Description |
|-----------|-------------|
| syslog | Access system log files (/var/log) |
| journal | Access systemd journal logs with unit filtering |
| dmesg | Access kernel ring buffer logs |
| auth | Access authentication logs |


Compose Auto-Discovery

The MCP server automatically discovers and caches Docker Compose project locations, eliminating the need to specify file paths for every operation.

How It Works

The discovery system uses a multi-layer approach:

  1. Cache Check: Looks up project in local cache (.cache/compose-projects/)

  2. Docker List: Queries docker compose ls for running projects

  3. Filesystem Scan: Scans configured search paths for compose files

  4. Error: Returns error if project not found in any layer

Discovery results are cached for 24 hours (configurable via COMPOSE_CACHE_TTL_HOURS environment variable).
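The layered lookup above can be sketched roughly as follows. This is an illustrative sketch with hypothetical signatures (`dockerLs` and `scanFs` stand in for the real layers; the actual orchestration lives in ComposeDiscovery):

```typescript
// Sketch of the cache -> docker-ls -> filesystem-scan lookup (hypothetical
// signatures; the real implementation lives in ComposeDiscovery).
interface CacheEntry { path: string; cachedAt: number }

const TTL_MS = Number(process.env.COMPOSE_CACHE_TTL_HOURS ?? "24") * 3_600_000;

async function discoverProject(
  project: string,
  cache: Map<string, CacheEntry>,
  dockerLs: () => Promise<Map<string, string>>, // running project -> path
  scanFs: () => Promise<Map<string, string>>,   // project on disk -> path
): Promise<string> {
  // Layer 1: cached entry, trusted only while the TTL holds.
  const hit = cache.get(project);
  if (hit && Date.now() - hit.cachedAt < TTL_MS) return hit.path;

  // Layers 2 and 3: `docker compose ls`, then filesystem scan.
  for (const layer of [dockerLs, scanFs]) {
    const found = (await layer()).get(project);
    if (found) {
      cache.set(project, { path: found, cachedAt: Date.now() });
      return found;
    }
  }

  // Layer 4: not found anywhere.
  throw new Error(`Compose project "${project}" not found`);
}
```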

Configuration

Add optional composeSearchPaths to your host configuration:

```json
{
  "hosts": [
    {
      "name": "my-host",
      "host": "192.168.1.100",
      "protocol": "ssh",
      "composeSearchPaths": ["/opt/stacks", "/srv/docker"]
    }
  ]
}
```

Default search paths: ["/compose", "/mnt/cache/compose", "/mnt/cache/code"] if not specified.

Optional Host Parameter

Most compose operations accept an optional host parameter. When omitted, the system automatically searches all configured hosts in parallel to find the project:

```json
// Explicit host (faster - no search needed)
{ "action": "compose", "subaction": "up", "project": "plex", "host": "server1" }

// Auto-discover (searches all hosts in parallel)
{ "action": "compose", "subaction": "up", "project": "plex" }
```

Auto-discovery times out after 30 seconds if the project cannot be found on any host. If a project exists on multiple hosts, you'll receive an error asking you to specify the host parameter explicitly.
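The parallel search with a timeout and an ambiguity check might look like this sketch (a hypothetical `findOnHost` helper is assumed; it answers whether the project exists on a given host):

```typescript
// Sketch of parallel host resolution with a timeout (hypothetical helper;
// not the server's actual implementation).
async function resolveHost(
  project: string,
  hosts: string[],
  findOnHost: (host: string, project: string) => Promise<boolean>,
  timeoutMs = 30_000,
): Promise<string> {
  // Search every host concurrently.
  const searches = Promise.all(
    hosts.map(async (h) => ((await findOnHost(h, project)) ? h : null)),
  );
  const timer = new Promise<never>((_, reject) => {
    const t = setTimeout(() => reject(new Error("discovery timed out")), timeoutMs);
    (t as { unref?: () => void }).unref?.(); // don't keep the process alive
  });
  const found = (await Promise.race([searches, timer])).filter(
    (h): h is string => h !== null,
  );
  if (found.length === 0) throw new Error(`"${project}" not found on any host`);
  if (found.length > 1)
    throw new Error(`"${project}" exists on multiple hosts; specify host explicitly`);
  return found[0];
}
```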

Cache Management

  • TTL: 24 hours (default, configurable)

  • Storage: .cache/compose-projects/ directory (gitignored)

  • Invalidation: Automatic when operations fail due to stale paths

  • Manual Refresh: Use compose:refresh subaction

Manual Cache Refresh

Force a cache refresh by scanning the filesystem:

```json
// Refresh all hosts
{ "action": "compose", "subaction": "refresh" }

// Refresh specific host
{ "action": "compose", "subaction": "refresh", "host": "server1" }
```

Returns a list of discovered projects with their paths and discovery source (docker-ls or filesystem scan).

Architecture

```
┌─────────────┐
│   Handler   │
└──────┬──────┘
       │
       v
┌──────────────┐      ┌──────────────┐
│ HostResolver │─────>│  Discovery   │
└──────────────┘      └──────┬───────┘
                             │
                    ┌────────┴────────┐
                    v                 v
              ┌──────────┐      ┌──────────┐
              │  Cache   │      │ Scanner  │
              └──────────┘      └──────────┘
```

Components:

  • HostResolver: Finds which host contains the project (parallel search)

  • ComposeDiscovery: Orchestrates cache, docker-ls, and filesystem scanning

  • ComposeProjectCache: File-based cache with TTL validation

  • ComposeScanner: Filesystem scanning for compose files (respects max depth of 3)


Example Usage

Flux Tool Examples

```json
// List running containers
{ "tool": "flux", "action": "container", "subaction": "list", "state": "running" }

// Restart a container
{ "tool": "flux", "action": "container", "subaction": "restart", "container_id": "plex", "host": "tootie" }

// Start a compose project (auto-discovers location and host)
{ "tool": "flux", "action": "compose", "subaction": "up", "project": "media-stack" }

// Start a compose project on specific host
{ "tool": "flux", "action": "compose", "subaction": "up", "host": "tootie", "project": "media-stack" }

// Refresh compose project cache
{ "tool": "flux", "action": "compose", "subaction": "refresh" }

// Get host resources
{ "tool": "flux", "action": "host", "subaction": "resources", "host": "tootie" }

// Pull an image
{ "tool": "flux", "action": "docker", "subaction": "pull", "host": "tootie", "image": "nginx:latest" }

// Execute command in container
{ "tool": "flux", "action": "container", "subaction": "exec", "container_id": "nginx", "command": "nginx -t" }
```

Scout Tool Examples

```json
// List configured SSH hosts
{ "tool": "scout", "action": "nodes" }

// Read a remote file
{ "tool": "scout", "action": "peek", "target": "tootie:/etc/nginx/nginx.conf" }

// Show directory tree
{ "tool": "scout", "action": "peek", "target": "dookie:/var/log", "tree": true }

// Execute remote command
{ "tool": "scout", "action": "exec", "target": "tootie:/var/www", "command": "du -sh *" }

// Transfer file between hosts
{ "tool": "scout", "action": "beam", "source": "tootie:/tmp/backup.tar.gz", "destination": "dookie:/backup/" }

// Check ZFS pool health
{ "tool": "scout", "action": "zfs", "subaction": "pools", "host": "dookie" }

// View systemd journal
{ "tool": "scout", "action": "logs", "subaction": "journal", "host": "tootie", "unit": "docker.service" }

// Multi-host command execution
{ "tool": "scout", "action": "emit", "targets": ["tootie:/tmp", "dookie:/tmp"], "command": "df -h" }
```

Installation

```bash
# Clone or copy the server files
cd synapse-mcp

# Install dependencies
pnpm install

# Build
pnpm run build
```

The server will create a .cache/compose-projects/ directory for storing discovered project locations. This directory is automatically gitignored.

Configuration

SSH Config Auto-Loading

Zero configuration required! Synapse-MCP automatically discovers hosts from your ~/.ssh/config file.

All SSH hosts with a HostName directive are automatically available for Docker management via SSH tunneling to the remote Docker socket. Manual configuration is completely optional.

Priority order:

  1. Manual config file (highest) - synapse.config.json

  2. SYNAPSE_HOSTS_CONFIG environment variable

  3. SSH config auto-discovery - ~/.ssh/config

  4. Local Docker socket (fallback)

Example SSH config:

```
Host production
    HostName 192.168.1.100
    User admin
    Port 22
    IdentityFile ~/.ssh/id_ed25519

Host staging
    HostName 192.168.1.101
    User deploy
    Port 2222
    IdentityFile ~/.ssh/staging_key
```

Both hosts are immediately available as flux targets with SSH tunneling to /var/run/docker.sock. No additional configuration needed!
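Auto-discovery amounts to pairing each `Host` alias with its `HostName` directive. A rough sketch (illustrative only; the server's actual parser may handle more directives and edge cases):

```typescript
// Rough sketch of SSH-config host discovery (illustrative; not the server's
// actual parser). Only hosts with an explicit HostName become targets.
function discoverSshHosts(configText: string): Array<{ name: string; hostName: string }> {
  const hosts: Array<{ name: string; hostName: string }> = [];
  let current: string | null = null;
  for (const raw of configText.split("\n")) {
    const line = raw.trim();
    const host = /^Host\s+(\S+)\s*$/i.exec(line);
    if (host) {
      current = host[1];
      continue;
    }
    const hostName = /^HostName\s+(\S+)\s*$/i.exec(line);
    if (hostName && current) {
      hosts.push({ name: current, hostName: hostName[1] });
      current = null;
    }
  }
  return hosts;
}
```

Running this over the example config above would yield `production` and `staging` as targets.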

Manual override: If you create a synapse.config.json entry with the same name as an SSH host, the manual config completely replaces the SSH config (no merging).

Manual Configuration (Optional)

Create a config file at one of these locations (checked in order):

  1. Path in SYNAPSE_CONFIG_FILE env var

  2. ./synapse.config.json (current directory)

  3. ~/.config/synapse-mcp/config.json

  4. ~/.synapse-mcp.json

Example Config

```json
{
  "hosts": [
    {
      "name": "local",
      "host": "localhost",
      "protocol": "ssh",
      "dockerSocketPath": "/var/run/docker.sock",
      "tags": ["development"]
    },
    {
      "name": "production",
      "host": "192.168.1.100",
      "port": 22,
      "protocol": "ssh",
      "sshUser": "admin",
      "sshKeyPath": "~/.ssh/id_rsa",
      "tags": ["production"]
    },
    {
      "name": "unraid",
      "host": "unraid.local",
      "port": 2375,
      "protocol": "http",
      "tags": ["media", "storage"]
    }
  ]
}
```

Copy config/synapse.config.example.json as a starting point:

```bash
cp config/synapse.config.example.json ~/.config/synapse-mcp/config.json
# or
cp config/synapse.config.example.json ~/.synapse-mcp.json
```

Note: If /var/run/docker.sock exists and isn't already in your config, it will be automatically added as a host using your machine's hostname. This means the server works out-of-the-box for local Docker without any configuration.

Host Configuration Options

| Field | Type | Description |
|-------|------|-------------|
| name | string | Unique identifier for the host |
| host | string | Hostname or IP address |
| port | number | Docker API port (default: 2375) |
| protocol | "http" / "https" / "ssh" | Connection protocol |
| dockerSocketPath | string | Path to Docker socket (for local connections) |
| sshUser | string | SSH username for remote connections (protocol: "ssh") |
| sshKeyPath | string | Path to SSH private key for authentication |
| tags | string[] | Optional tags for filtering |

Environment Variables Reference

Complete reference for all environment variables that control server behavior.

Server Configuration

| Variable | Type | Default | Description |
|----------|------|---------|-------------|
| SYNAPSE_CONFIG_FILE | string | Auto-detect | Path to config file. Overrides default search paths. |
| SYNAPSE_HOSTS_CONFIG | JSON string | undefined | JSON config as environment variable. Fallback if no config file found. |
| SYNAPSE_PORT | number | 53000 | HTTP server port (only used with --http flag). |
| SYNAPSE_HOST | string | 127.0.0.1 | HTTP server bind address. Use 0.0.0.0 to expose to all interfaces (requires authentication). |
| NODE_ENV | string | production | Node environment. Affects stack traces and error verbosity. |

Performance Tuning

| Variable | Type | Default | Description |
|----------|------|---------|-------------|
| SSH_POOL_MAX_CONNECTIONS | number | 5 | Maximum SSH connections per host. Increase for high-concurrency workloads (10-20 for 100+ containers). |
| SSH_POOL_IDLE_TIMEOUT_MS | number | 60000 (60s) | Close idle connections after this duration. Reduce to save resources (30000 for low usage). |
| SSH_POOL_CONNECTION_TIMEOUT_MS | number | 5000 (5s) | SSH connection timeout. Increase for slow networks (10000-15000). |
| SSH_POOL_HEALTH_CHECK_INTERVAL_MS | number | 30000 (30s) | Health check interval. Set to 0 to disable health checks. |
| COMPOSE_CACHE_TTL_HOURS | number | 24 | Compose project cache lifetime in hours. Lower for frequently changing projects (6-12 hours). |

Security Options

| Variable | Type | Default | Description | ⚠️ Security Impact |
|----------|------|---------|-------------|--------------------|
| SYNAPSE_API_KEY | string | undefined | Enables HTTP API key authentication when set. | If unset, /mcp does not require API key auth. Set this in all non-local deployments. |
| SYNAPSE_ALLOWED_ORIGINS | string | undefined | Comma-separated allowlist of trusted CORS origins for browser clients. | If unset, cross-origin browser access is blocked (Access-Control-Allow-Origin: null). |
| SYNAPSE_ALLOW_ANY_COMMAND | boolean | false | DANGEROUS: Disables command allowlist for scout:exec. | CRITICAL: Allows arbitrary command execution. Only use in trusted development environments. Never set in production. |

Security Warning for SYNAPSE_ALLOW_ANY_COMMAND

When set to true, this variable completely bypasses the command allowlist, allowing execution of ANY command on managed hosts via scout:exec. This includes destructive commands like rm -rf /, privilege escalation, and backdoor installation.

Default allowed commands (when false):

  • Read operations: cat, head, tail, grep, rg, find, ls, tree

  • Info operations: stat, file, du, df, pwd, hostname, uptime, whoami

  • Text processing: wc, sort, uniq, diff
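In essence, the allowlist check matches the first token of the command against this set. A simplified sketch (illustrative only; the server's real validation may be stricter, e.g. also rejecting pipes and redirects):

```typescript
// Simplified sketch of allowlist enforcement for scout:exec (illustrative;
// not the server's actual implementation).
const ALLOWED_COMMANDS = new Set([
  "cat", "head", "tail", "grep", "rg", "find", "ls", "tree",
  "stat", "file", "du", "df", "pwd", "hostname", "uptime", "whoami",
  "wc", "sort", "uniq", "diff",
]);

function isCommandAllowed(command: string, allowAny = false): boolean {
  if (allowAny) return true; // SYNAPSE_ALLOW_ANY_COMMAND=true bypass (dangerous)
  const name = command.trim().split(/\s+/)[0] ?? "";
  return ALLOWED_COMMANDS.has(name);
}
```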

When to use

  • Local development only

  • Single-user environments

  • When you fully trust all MCP clients

  • When NODE_ENV=development

Detection:

```bash
# Check if variable is set
printenv | grep SYNAPSE_ALLOW_ANY_COMMAND

# Check systemd service
sudo grep SYNAPSE_ALLOW_ANY_COMMAND /etc/systemd/system/synapse-mcp.service

# Check Docker Compose
grep SYNAPSE_ALLOW_ANY_COMMAND docker-compose.yml
```

Debug and Logging

| Variable | Type | Default | Description |
|----------|------|---------|-------------|
| DEBUG | string | undefined | Enable debug logging. Set to synapse:* for all debug output or specific namespaces like synapse:ssh, synapse:docker. |
| LOG_LEVEL | string | info | Logging level: error, warn, info, debug, trace. |

Example Configurations

Development (Local):

```bash
export NODE_ENV=development
export SYNAPSE_CONFIG_FILE=~/.config/synapse-mcp/config.json
export DEBUG=synapse:*
export SSH_POOL_HEALTH_CHECK_INTERVAL_MS=0  # Disable health checks
export LOG_LEVEL=debug
node dist/index.js
```

Production (HTTP Mode with High Concurrency):

```bash
export NODE_ENV=production
export SYNAPSE_PORT=53000
export SYNAPSE_HOST=127.0.0.1       # Localhost only, behind reverse proxy
export SSH_POOL_MAX_CONNECTIONS=10  # Higher concurrency
export COMPOSE_CACHE_TTL_HOURS=12   # Refresh more frequently
export LOG_LEVEL=info
node dist/index.js --http
```

Production (Stdio Mode for Claude Code):

```bash
export NODE_ENV=production
export SYNAPSE_CONFIG_FILE=/etc/synapse-mcp/config.json
export SSH_POOL_MAX_CONNECTIONS=5
export COMPOSE_CACHE_TTL_HOURS=24
export LOG_LEVEL=warn
node dist/index.js
```

High-Latency Network:

```bash
export SSH_POOL_CONNECTION_TIMEOUT_MS=15000    # 15s timeout
export SSH_POOL_IDLE_TIMEOUT_MS=120000         # 2min idle timeout
export SSH_POOL_HEALTH_CHECK_INTERVAL_MS=60000 # 1min health checks
node dist/index.js
```

Local vs Remote Execution

The server automatically determines whether to use local execution or SSH based on your host configuration:

Local Execution (No SSH)

Commands run directly on localhost using Node.js for best performance:

```json
{
  "name": "local",
  "host": "localhost",
  "protocol": "ssh",
  "dockerSocketPath": "/var/run/docker.sock"
}
```

Requirements: Host must be localhost/127.x.x.x/::1 AND no sshUser specified.

Benefits:

  • ~10x faster than SSH for Compose and host operations

  • No SSH key management needed

  • Works out of the box

Remote Execution (SSH)

Commands run via SSH on remote hosts or when sshUser is specified:

```json
{
  "name": "production",
  "host": "192.168.1.100",
  "protocol": "ssh",
  "sshUser": "admin",
  "sshKeyPath": "~/.ssh/id_rsa"
}
```

When SSH is used:

  • Host is NOT localhost/127.x.x.x

  • sshUser is specified (even for localhost)

  • For all Scout operations (file operations always use SSH)
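For Docker operations, these rules condense into a small predicate. A hypothetical helper, assuming the host fields from the config examples above:

```typescript
// Sketch of the local-vs-SSH decision (illustrative helper). Local execution
// requires a loopback address AND no explicit sshUser in the host config.
interface ExecHost {
  host: string;
  sshUser?: string;
}

function usesLocalExecution(cfg: ExecHost): boolean {
  const loopback =
    cfg.host === "localhost" || cfg.host === "::1" || cfg.host.startsWith("127.");
  return loopback && cfg.sshUser === undefined;
}
```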

Docker API vs Command Execution

These are independent:

| Operation | Local Host | Remote Host |
|-----------|------------|-------------|
| Docker API (container list, stats) | Unix socket | HTTP |
| Commands (compose, systemctl) | Local execFile | SSH |

See .docs/local-vs-remote-execution.md for detailed architecture documentation.

Resource Limits & Defaults

| Setting | Value | Description |
|---------|-------|-------------|
| CHARACTER_LIMIT | 40,000 | Maximum response size (~12.5k tokens) |
| DEFAULT_LIMIT | 20 | Default pagination limit for list operations |
| MAX_LIMIT | 100 | Maximum pagination limit |
| DEFAULT_LOG_LINES | 50 | Default number of log lines to fetch |
| MAX_LOG_LINES | 500 | Maximum log lines allowed |
| API_TIMEOUT | 30s | Docker API operation timeout |
| STATS_TIMEOUT | 5s | Stats collection timeout |

Performance Characteristics

Understanding performance expectations helps optimize your usage and troubleshoot slow operations.

Response Time Expectations

| Operation Type | Expected Latency | Notes |
|----------------|------------------|-------|
| Single-host operations | 50-150ms | Container list, stats, logs, inspect |
| Multi-host container discovery | 100-500ms | Depends on host count and network latency |
| Compose auto-discovery | 1-500ms | Cache hit: 1ms, docker-ls: 50-100ms, filesystem scan: 200-500ms |
| SSH connection (warm) | <10ms | Connection pool hit |
| SSH connection (cold) | 200-300ms | New connection establishment |
| Container exec | 100ms-30s | Depends on command execution time |

Configuration Loading

  • Config files: Loaded at server startup (synchronous read)

  • Config changes: Require server restart (no hot reload)

  • SSH config: Changes detected automatically on next operation

  • Cache: Compose project cache has 24-hour TTL (configurable via COMPOSE_CACHE_TTL_HOURS)

Buffer and Output Limits

| Resource | Limit | Behavior on Exceed |
|----------|-------|--------------------|
| Response character limit | 40,000 chars (~12.5k tokens) | Truncated with warning |
| Container exec output | 10MB per stream (stdout/stderr) | Stream terminated with error |
| Log lines | 50 default, 500 maximum | Paginate with lines parameter |
| Find results | 100 default, 1000 maximum | Paginate with limit parameter |

Connection Pooling

| Setting | Default | Tuning |
|---------|---------|--------|
| SSH connections per host | 5 | SSH_POOL_MAX_CONNECTIONS |
| Idle timeout | 60 seconds | SSH_POOL_IDLE_TIMEOUT_MS |
| Connection timeout | 5 seconds | SSH_POOL_CONNECTION_TIMEOUT_MS |
| Health check interval | 30 seconds | SSH_POOL_HEALTH_CHECK_INTERVAL_MS |

Performance Impact:

  • Warm connections: 20-30× faster than establishing new connections

  • Pool exhaustion: Operations queue until connection available

  • Health checks: Detect and remove stale connections automatically
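The queue-on-exhaustion behavior can be sketched as a per-host slot counter (a minimal illustration; the real pool also tracks idle timeouts and health checks):

```typescript
// Minimal sketch of per-host connection limiting (illustrative only).
// Operations beyond the limit wait in FIFO order until a slot frees up.
class HostSlots {
  private inUse = 0;
  private waiters: Array<() => void> = [];
  constructor(private readonly max: number) {}

  async acquire(): Promise<void> {
    if (this.inUse < this.max) {
      this.inUse++;
      return;
    }
    // Pool exhausted: wait until release() wakes us.
    await new Promise<void>((resolve) => this.waiters.push(resolve));
    this.inUse++;
  }

  release(): void {
    this.inUse--;
    this.waiters.shift()?.(); // wake the oldest waiter, if any
  }
}
```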

Compose Discovery Cache

Three-tier strategy:

  1. Cache check (fastest, 0-1ms) - .cache/compose-projects/

  2. docker compose ls (medium, 50-100ms) - running projects only

  3. Filesystem scan (slowest, 200-500ms) - all projects

Cache behavior:

  • TTL: 24 hours (default, configurable via COMPOSE_CACHE_TTL_HOURS)

  • Invalidation: Automatic on stale path detection

  • Storage: Local filesystem (.cache/compose-projects/)

  • Refresh: Manual via compose:refresh or automatic on cache miss

Scaling Characteristics

Host Count Impact:

  • 1-5 hosts: Optimal performance, minimal latency

  • 6-10 hosts: Good performance, consider explicit host parameter for frequent operations

  • 11-15 hosts: Increased latency, recommend explicit host for all operations

  • 16+ hosts: Consider splitting into multiple MCP server instances

Container Count Impact:

  • 1-50 containers: No impact, all operations fast

  • 51-100 containers: Pagination recommended for list operations

  • 101-500 containers: Always paginate, avoid state: "all" without filters

  • 500+ containers: Use host-specific operations, increase SSH_POOL_MAX_CONNECTIONS

Network Latency Impact:

  • Low latency (<10ms): Minimal impact on multi-host operations

  • Medium latency (10-50ms): 2-3× slower for multi-host discovery

  • High latency (>50ms): Explicitly specify host parameter to avoid discovery overhead

Tuning for Large Deployments

If managing 15+ hosts with 100+ containers:

```bash
# Increase connection pool size
export SSH_POOL_MAX_CONNECTIONS=10

# Reduce cache TTL for frequently changing projects
export COMPOSE_CACHE_TTL_HOURS=12

# Disable health checks if connections are stable
export SSH_POOL_HEALTH_CHECK_INTERVAL_MS=0
```

Operational strategies:

  • Always specify host: Avoid auto-discovery overhead for known locations

  • Use pagination: Set limit: 20 for list operations

  • Batch operations: Group related operations to reuse warm connections

  • Split by environment: Run separate MCP instances for dev/staging/prod hosts

Performance Monitoring

Monitor response times:

```bash
# Watch logs for slow operations
journalctl -u synapse-mcp.service | grep -E "took [0-9]{3,}ms"

# Check connection pool utilization
# (Low availability = need more connections)
```

Health check:

```bash
# Monitor server health
curl http://localhost:53000/health
```

Enabling Docker API on Hosts

Unraid

On Unraid, the Docker API is typically already available on port 2375.

Standard Docker (systemd)

Edit /etc/docker/daemon.json:

```json
{
  "hosts": ["unix:///var/run/docker.sock", "tcp://0.0.0.0:2375"]
}
```

Or override the systemd service:

```bash
sudo systemctl edit docker.service
```

```ini
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H fd:// -H tcp://0.0.0.0:2375
```

⚠️ Security Note: Exposing Docker API without TLS is insecure. Use on trusted networks only, or set up TLS certificates.

Usage

With Claude Code

Add to ~/.claude/claude_code_config.json:

```json
{
  "mcpServers": {
    "synapse": {
      "command": "node",
      "args": ["/absolute/path/to/synapse-mcp/dist/index.js"],
      "env": {
        "SYNAPSE_CONFIG_FILE": "/home/youruser/.config/synapse-mcp/config.json"
      }
    }
  }
}
```

Or if your config is in one of the default locations, you can skip the env entirely:

```json
{
  "mcpServers": {
    "synapse": {
      "command": "node",
      "args": ["/absolute/path/to/synapse-mcp/dist/index.js"]
    }
  }
}
```

Then in Claude Code:

```
> List all running containers on tootie                      (uses flux tool)
> Restart the plex container                                 (uses flux tool)
> Show me the logs from sonarr with errors in the last hour  (uses flux tool)
> Which containers are using the most memory?                (uses flux tool)
> Read the nginx config on tootie                            (uses scout tool)
> Check ZFS pool health on dookie                            (uses scout tool)
> Show me systemd journal errors from the last hour          (uses scout tool)
```

HTTP Mode

For remote access or multi-client scenarios:

```bash
# Start HTTP server
node dist/index.js --http

# Server runs on http://127.0.0.1:53000/mcp
# Health check: http://127.0.0.1:53000/health
```

Environment variables for HTTP mode:

  • SYNAPSE_PORT: Server port (default: 53000)

  • SYNAPSE_HOST: Bind address (default: 127.0.0.1)

CLI Help

node dist/index.js --help

Example Interactions

Flux Tool - Container Management

```
User: What containers are running on tootie?

Claude: [calls flux with action="container", subaction="list", host="tootie", state="running"]

I found 23 running containers on tootie:

🟢 plex (tootie) - Image: linuxserver/plex | Up 3 days
🟢 sonarr (tootie) - Image: linuxserver/sonarr | Up 3 days
🟢 radarr (tootie) - Image: linuxserver/radarr | Up 3 days
...
```

Flux Tool - Log Analysis

```
User: Show me any errors from nginx in the last hour

Claude: [calls flux with action="container", subaction="logs", container_id="nginx", since="1h", grep="error"]

Found 3 error entries in nginx logs:

[14:23:15] 2024/12/15 14:23:15 [error] connect() failed...
```

Scout Tool - Remote File Access

```
User: Read the nginx config on tootie

Claude: [calls scout with action="peek", target="tootie:/etc/nginx/nginx.conf"]

Here's the nginx configuration from tootie:

user nginx;
worker_processes auto;
...
```

Scout Tool - ZFS Health Check

```
User: Check ZFS pool health on dookie

Claude: [calls scout with action="zfs", subaction="pools", host="dookie"]

ZFS Pools on dookie:

tank - ONLINE | Size: 24TB | Free: 8.2TB | Health: 100%
backup - ONLINE | Size: 12TB | Free: 5.1TB | Health: 100%
```

Scout Tool - System Logs

```
User: Show me Docker service errors from systemd journal

Claude: [calls scout with action="logs", subaction="journal", host="tootie", unit="docker.service", priority="err"]

Recent errors from docker.service:

[15:42:10] Failed to allocate directory watch: Too many open files
[15:42:15] containerd: connection error: desc = "transport: error while dialing"
```

Troubleshooting

Common issues and their solutions. For additional help, see the operational runbooks in docs/runbooks/.

Service Won't Start

Port Already in Use

Symptom:

```
Error: listen EADDRINUSE: address already in use :::53000
```

Cause: Another process is using port 53000 (HTTP mode) or stdout/stdin are not available (stdio mode).

Solution:

For HTTP mode:

```bash
# Find process using port 53000
lsof -i :53000
# or
ss -tulpn | grep :53000

# Kill the process or change port
SYNAPSE_PORT=53001 node dist/index.js --http

# Or set permanently
export SYNAPSE_PORT=53001
```

For stdio mode:

```bash
# Check if running in terminal (stdio requires parent process)
# Don't run stdio mode directly in terminal - use via MCP client only
```

Missing Dependencies

Symptom:

```
Error: Cannot find module '@modelcontextprotocol/sdk'
```

Cause: Dependencies not installed or node_modules corrupted.

Solution:

```bash
# Reinstall dependencies
rm -rf node_modules pnpm-lock.yaml
pnpm install

# Rebuild
pnpm run build

# Verify installation
pnpm list @modelcontextprotocol/sdk
```

Permission Denied on Startup

Symptom:

```
Error: EACCES: permission denied, open '/var/run/docker.sock'
```

Cause: User not in docker group.

Solution:

```bash
# Add user to docker group
sudo usermod -aG docker $USER

# Log out and back in for group change to take effect
# Or use newgrp to activate immediately
newgrp docker

# Verify docker access
docker ps
```

SSH Connection Failures

Host Key Verification Failed

Symptom:

```
[SSH] [Host: production] Permission denied (publickey)
# or
Host key verification failed
```

Cause: Host key not in ~/.ssh/known_hosts, a changed host key, or the SSH key not being accepted.

Solution:

Option 1: Pre-seed known_hosts (Recommended)

```bash
# Add host key to known_hosts
ssh-keyscan -H hostname >> ~/.ssh/known_hosts

# For all configured hosts
for host in production staging dev; do
  ssh-keyscan -H $host >> ~/.ssh/known_hosts
done
```

Option 2: Manual verification

```bash
# Connect manually first to accept key
ssh user@hostname

# Verify fingerprint matches (check console/IPMI)
ssh-keygen -l -f ~/.ssh/known_hosts | grep hostname
```

Option 3: Remove stale key (if host key changed)

```bash
# Remove old key
ssh-keygen -R hostname

# Re-add current key
ssh-keyscan -H hostname >> ~/.ssh/known_hosts
```

SSH Key Permission Errors

Symptom:

```
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@         WARNING: UNPROTECTED PRIVATE KEY FILE!          @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
Permissions 0644 for '/home/user/.ssh/id_rsa' are too open.
```

Cause: SSH private key has insecure permissions.

Solution:

```bash
# Fix key permissions (required: 600)
chmod 600 ~/.ssh/id_rsa
chmod 600 ~/.ssh/id_ed25519

# Fix directory permissions
chmod 700 ~/.ssh

# Verify
ls -la ~/.ssh/
# Should show: -rw------- for keys
```

Connection Timeout

Symptom:

```
[SSH] [Host: production] SSH command timeout after 5000ms
```

Cause: Network latency, firewall blocking, or host unreachable.

Solution:

Increase timeout:

```bash
# Set longer connection timeout (15 seconds)
export SSH_POOL_CONNECTION_TIMEOUT_MS=15000
node dist/index.js
```

Check network connectivity:

```bash
# Test SSH access manually
ssh -v user@hostname

# Check network latency
ping hostname

# Check firewall rules
sudo ufw status
# or
sudo iptables -L
```

Verify host is reachable:

```bash
# Test basic connectivity
nc -zv hostname 22

# Check if SSH daemon is running
ssh user@hostname 'systemctl status sshd'
```

SSH Agent Not Running

Symptom:

```
Could not open a connection to your authentication agent
```

Cause: SSH agent not started or key not added.

Solution:

```bash
# Start SSH agent
eval $(ssh-agent)

# Add key to agent
ssh-add ~/.ssh/id_rsa

# Verify key is loaded
ssh-add -l

# Add to shell startup (~/.bashrc or ~/.zshrc)
if [ -z "$SSH_AUTH_SOCK" ]; then
  eval $(ssh-agent)
  ssh-add ~/.ssh/id_rsa
fi
```

Docker API Connection Errors

Socket Permission Denied

Symptom:

Error: connect EACCES /var/run/docker.sock

Cause: User not in docker group or socket permissions incorrect.

Solution:

Add user to docker group:

# Add current user
sudo usermod -aG docker $USER
# Log out and back in

# Verify group membership
groups | grep docker

# Test docker access
docker ps

Check socket permissions:

# Socket should be owned by docker group
ls -la /var/run/docker.sock
# Should show: srw-rw---- root docker

# If permissions wrong, fix ownership
sudo chown root:docker /var/run/docker.sock
sudo chmod 660 /var/run/docker.sock

Connection Refused

Symptom:

Error: connect ECONNREFUSED 192.168.1.100:2375

Cause: Docker daemon not running, wrong port, or firewall blocking.

Solution:

Check Docker daemon status:

# On target host
systemctl status docker

# Start if not running
sudo systemctl start docker
sudo systemctl enable docker

Verify Docker API port:

# Check if Docker listening on expected port
ss -tulpn | grep 2375

# If not exposed, edit daemon config
sudo vi /etc/docker/daemon.json
# Add: {"hosts": ["unix:///var/run/docker.sock", "tcp://0.0.0.0:2375"]}
sudo systemctl restart docker

Check firewall:

# Allow Docker API port (if using HTTP protocol)
sudo ufw allow from 192.168.1.0/24 to any port 2375

# Or specific IP only (more secure)
sudo ufw allow from 192.168.1.10 to any port 2375

Docker Daemon Not Ready

Symptom:

Cannot connect to the Docker daemon. Is the docker daemon running?

Cause: Docker service not started or crashed.

Solution:

# Check status
systemctl status docker

# View logs
journalctl -u docker.service -n 50

# Restart daemon
sudo systemctl restart docker

# Check for errors
docker info

High Latency Issues

Slow Container Discovery

Symptom: Container operations taking 5-30 seconds across multiple hosts.

Cause: Sequential host scanning without explicit host parameter.

Solution:

Always specify host when known:

// Instead of:
{ "action": "container", "subaction": "start", "container_id": "plex" }

// Use:
{ "action": "container", "subaction": "start", "container_id": "plex", "host": "production" }

Reduce host count:

# Split large deployments into multiple MCP instances
# Production hosts: synapse-mcp-prod
# Development hosts: synapse-mcp-dev

Increase connection pool:

export SSH_POOL_MAX_CONNECTIONS=10
node dist/index.js

Slow Configuration Loading

Symptom: Every request takes 5-10ms longer than expected.

Cause: Config loaded synchronously on every request (PERF-C1).

Solution:

Optimize config file size:

# Keep config under 10KB
# Split large host lists into multiple files
# Use SSH config auto-discovery instead (parsed once at startup)

Use SSH config auto-discovery:

# ~/.ssh/config
Host production
    HostName 192.168.1.100
    User admin
    IdentityFile ~/.ssh/id_rsa

# No manual synapse.config.json needed

Network Latency

Symptom: Operations on remote hosts much slower than local.

Cause: High network latency (>50ms).

Solution:

Increase timeouts for slow networks:

export SSH_POOL_CONNECTION_TIMEOUT_MS=15000  # 15s
export SSH_POOL_IDLE_TIMEOUT_MS=120000       # 2min

Use local cache more aggressively:

export COMPOSE_CACHE_TTL_HOURS=48 # 2 days

Deploy MCP server closer to hosts:

# Run synapse-mcp on same network segment as managed hosts
# Or use VPN to reduce latency

Container Not Found Errors

Container ID Too Short

Symptom:

Container "abc" not found on any host

Cause: Multiple containers match short prefix, or ID doesn't exist.

Solution:

Use longer container ID:

// Instead of:
{ "container_id": "abc" }

// Use at least 8 characters:
{ "container_id": "abc12345" }

Use container name:

{ "container_id": "plex" }

List all containers to find correct ID:

{ "action": "container", "subaction": "list", "state": "all" }

Container on Unexpected Host

Symptom: Container exists but not found by auto-discovery.

Cause: Discovery timeout before reaching correct host.

Solution:

Specify host explicitly:

{ "action": "container", "subaction": "start", "container_id": "plex", "host": "media-server" }

Increase discovery timeout:

# Increase SSH connection timeout export SSH_POOL_CONNECTION_TIMEOUT_MS=10000

Check host is reachable:

ssh user@hostname docker ps

Compose Project Not Detected

Project Not in Cache

Symptom:

Project "media-stack" not found on any configured host

Cause: Cache miss, project in non-standard location, or project name mismatch.

Solution:

Refresh cache:

{ "action": "compose", "subaction": "refresh" }

Check actual project name:

# SSH to host
docker compose ls

# Or check compose.yaml
cat /path/to/compose.yaml | grep "^name:"

Add search path to host config:

{
  "name": "production",
  "host": "192.168.1.100",
  "protocol": "ssh",
  "sshUser": "admin",
  "composeSearchPaths": [
    "/compose",
    "/opt/stacks",  // Add custom path
    "/srv/docker"   // Add another path
  ]
}

Specify explicit path (bypass discovery):

{
  "action": "compose",
  "subaction": "up",
  "project": "media-stack",
  "host": "production",
  "path": "/opt/stacks/media"  // Explicit path
}

Stopped Project Not Found

Symptom: Project exists but not detected by docker compose ls.

Cause: docker compose ls only shows running projects.

Solution:

Force filesystem scan:

// Refresh cache triggers full scan
{ "action": "compose", "subaction": "refresh" }

Or use explicit path:

{ "action": "compose", "subaction": "up", "path": "/path/to/project", "host": "production" }

Search Depth Too Shallow

Symptom: Deeply nested compose projects not found.

Cause: Default max depth is 3 levels.

Solution:

Organize projects at shallower depth:

# Instead of:
/compose/apps/production/services/media/plex/

# Use:
/compose/media-plex/

Or manually add specific paths:

{ "composeSearchPaths": ["/compose/apps/production/services/media/plex"] }

Debug Logging

Enable detailed logging for troubleshooting:

Enable all debug output:

DEBUG=* node dist/index.js 2>debug.log

Enable specific namespaces:

# SSH operations only
DEBUG=synapse:ssh node dist/index.js

# Docker operations only
DEBUG=synapse:docker node dist/index.js

# Multiple namespaces
DEBUG=synapse:ssh,synapse:docker node dist/index.js

Increase log level:

export LOG_LEVEL=debug
node dist/index.js

Monitor logs in real-time:

# Systemd service
journalctl -u synapse-mcp.service -f

# Or write to file
node dist/index.js 2>&1 | tee -a synapse.log

Getting Help

If you can't resolve the issue:

  1. Check logs:

    journalctl -u synapse-mcp.service -n 100
  2. Review runbooks: See docs/runbooks/ for detailed procedures

  3. Check docs/SECURITY.md: For security-related issues

  4. Open GitHub issue: Include:

    • Error message and full stack trace

    • Steps to reproduce

    • Environment details (Node version, OS, host count)

    • Relevant config (redact sensitive info)

  5. Community support: Tag maintainers in issues for faster response

Security

HTTP Transport Authentication

HTTP POST /mcp always requires the X-Synapse-Client header for CSRF protection. API key authentication is enabled only when SYNAPSE_API_KEY is set.

# Enable API key authentication
export SYNAPSE_API_KEY="your-secret-key-here"  # Recommended: 32+ characters

# Start server with HTTP transport
node dist/index.js --transport http

If SYNAPSE_API_KEY is not set, requests are allowed without X-API-Key (local/dev behavior).

Required Headers:

  • X-Synapse-Client: Always required (for CSRF protection)

  • X-API-Key: Required when SYNAPSE_API_KEY is configured

Security Features:

  • Timing-safe comparison prevents timing attacks

  • CSRF protection blocks cross-origin requests without proper headers

  • 100KB body size limit prevents DoS attacks
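The timing-safe comparison can be sketched with Node's built-in `crypto.timingSafeEqual` (a minimal illustration; the helper name `apiKeyMatches` is hypothetical, not the server's actual function):

```typescript
// Hypothetical helper illustrating timing-safe API key comparison.
import { timingSafeEqual } from "node:crypto";

function apiKeyMatches(provided: string, expected: string): boolean {
  const a = Buffer.from(provided);
  const b = Buffer.from(expected);
  // timingSafeEqual throws on length mismatch, so reject unequal lengths first.
  if (a.length !== b.length) return false;
  return timingSafeEqual(a, b);
}
```

The early length check reveals only the key length, never its contents; the byte comparison itself runs in constant time.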

Example Request (API key enabled):

curl -X POST "http://127.0.0.1:53000/mcp" \
  -H "Content-Type: application/json" \
  -H "X-Synapse-Client: mcp" \
  -H "X-API-Key: your-secret-key-here" \
  -d '{"jsonrpc":"2.0","id":1,"method":"tools/list","params":{}}'

Example Request (local/dev, no API key configured):

unset SYNAPSE_API_KEY
node dist/index.js --transport http

curl -X POST "http://127.0.0.1:53000/mcp" \
  -H "Content-Type: application/json" \
  -H "X-Synapse-Client: mcp" \
  -d '{"jsonrpc":"2.0","id":1,"method":"tools/list","params":{}}'

Command Allowlist (CWE-78)

Scout exec operations are restricted to a curated allowlist of read-only commands:

Allowed commands: df, uptime, hostname, uname, ps, free, top, htop, netstat, ss, lsof, systemctl status, journalctl, dmesg, tail, cat, grep (and more - see src/config/command-allowlist.json)

Security guarantees:

  • No destructive operations allowed

  • Shell argument escaping prevents injection

  • No environment variable bypass available

  • All commands validated before execution
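A minimal sketch of how such an allowlist check might look (command names are taken from the list above; the function and data shape are assumptions, not the server's actual code):

```typescript
// Illustrative allowlist check; the real list lives in src/config/command-allowlist.json.
const ALLOWED_COMMANDS = new Set(["df", "uptime", "hostname", "uname", "ps", "free"]);

function isCommandAllowed(command: string): boolean {
  // Match only the base command; arguments are escaped separately before execution.
  const base = command.trim().split(/\s+/)[0];
  return ALLOWED_COMMANDS.has(base);
}
```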

Path Traversal Protection (CWE-22)

The image_build tool implements strict path validation to prevent directory traversal attacks:

  • Absolute paths required: All paths (context, dockerfile) must start with /

  • Traversal blocked: Paths containing .. or . components are rejected

  • Character validation: Only alphanumeric, dots (in filenames), hyphens, underscores, and forward slashes allowed

  • Pre-execution validation: Paths validated before SSH commands are executed

Example of rejected paths:

# Rejected: Directory traversal
../../../etc/passwd
/app/../../../etc/passwd

# Rejected: Relative paths
./build
relative/path

# Accepted: Absolute paths without traversal
/home/user/docker/build
/opt/myapp/Dockerfile.prod
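The rules above can be approximated in a few lines (a sketch under the stated rules, not the tool's actual implementation):

```typescript
// Hypothetical validator mirroring the documented path rules.
function isSafeBuildPath(path: string): boolean {
  if (!path.startsWith("/")) return false;                            // absolute paths required
  const components = path.split("/").filter((c) => c.length > 0);
  if (components.some((c) => c === "." || c === "..")) return false;  // traversal blocked
  return /^[A-Za-z0-9._\/-]+$/.test(path);                            // character validation
}
```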

General Security Notes

  • Docker API on port 2375 is insecure without TLS

  • Always use execFile for shell commands (prevents injection)

  • Validate host config fields with regex

  • Require force=true for destructive operations

Development

# Watch mode for development
pnpm run dev

# Build
pnpm run build

# Run tests
pnpm test

# Run tests with coverage
pnpm run test:coverage

# Run performance benchmarks (opt-in)
RUN_SSH_BENCHMARKS=true pnpm test src/services/ssh-pool.benchmark.test.ts
RUN_CACHE_BENCHMARKS=true pnpm test src/services/cache-layer.benchmark.test.ts

# Test with MCP Inspector
npx @modelcontextprotocol/inspector node dist/index.js

Architecture

Core Components

Event System (src/events/)

  • Type-safe EventEmitter with discriminated unions

  • Events: container_state_changed, cache_invalidated

  • Decouples cross-cutting concerns (cache invalidation, audit trail, metrics)
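The discriminated-union pattern can be sketched as follows (event names come from the list above; payload fields are assumptions):

```typescript
// Event payloads distinguished by a literal `type` field.
type SynapseEvent =
  | { type: "container_state_changed"; containerId: string; state: string }
  | { type: "cache_invalidated"; key: string };

class TypedEmitter {
  private handlers = new Map<SynapseEvent["type"], Array<(e: SynapseEvent) => void>>();

  on(type: SynapseEvent["type"], handler: (e: SynapseEvent) => void): void {
    const list = this.handlers.get(type) ?? [];
    list.push(handler);
    this.handlers.set(type, list);
  }

  emit(event: SynapseEvent): void {
    for (const handler of this.handlers.get(event.type) ?? []) handler(event);
  }
}
```

Narrowing on `type` inside a handler gives compile-time access to the matching payload fields only.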

Lifecycle Management (src/services/container.ts)

  • State machine: uninitialized → initializing → ready → shutting_down → shutdown

  • Hooks: initialize(), healthCheck(), shutdown()

  • Graceful cleanup on process termination
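A minimal sketch of the state machine (state names are from the docs; the transition table is an assumption):

```typescript
type LifecycleState = "uninitialized" | "initializing" | "ready" | "shutting_down" | "shutdown";

// Assumed legal transitions for each state.
const TRANSITIONS: Record<LifecycleState, LifecycleState[]> = {
  uninitialized: ["initializing"],
  initializing: ["ready", "shutdown"],
  ready: ["shutting_down"],
  shutting_down: ["shutdown"],
  shutdown: [],
};

class Lifecycle {
  state: LifecycleState = "uninitialized";

  transition(next: LifecycleState): void {
    if (!TRANSITIONS[this.state].includes(next)) {
      throw new Error(`Invalid transition: ${this.state} -> ${next}`);
    }
    this.state = next;
  }
}
```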

Tool Registry (src/tools/registry.ts)

  • Plugin-style tool registration

  • Zero modification required to add new tools

  • Declarative tool definitions in src/tools/definitions/
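The plugin-style registration might look roughly like this (the `ToolDefinition` shape is an assumption):

```typescript
interface ToolDefinition {
  name: string;
  handler: (args: Record<string, unknown>) => string;
}

class ToolRegistry {
  private tools = new Map<string, ToolDefinition>();

  register(tool: ToolDefinition): void {
    this.tools.set(tool.name, tool); // no central switch statement to modify
  }

  dispatch(name: string, args: Record<string, unknown>): string {
    const tool = this.tools.get(name);
    if (!tool) throw new Error(`Unknown tool: ${name}`);
    return tool.handler(args);
  }
}
```

Adding a tool is then a single `register()` call; existing dispatch code stays untouched.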

Formatter Strategy (src/formatters/strategy.ts)

  • IFormatter interface for output formats

  • Implementations: MarkdownFormatter, JSONFormatter

  • FormatterFactory for format selection

  • Open/Closed Principle: Add formats without modifying handlers
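In sketch form (class names follow the docs; the `format` signature and output shapes are assumptions):

```typescript
interface IFormatter {
  format(data: unknown): string;
}

class JSONFormatter implements IFormatter {
  format(data: unknown): string {
    return JSON.stringify(data, null, 2);
  }
}

class MarkdownFormatter implements IFormatter {
  format(data: unknown): string {
    // Toy rendering; the real formatter produces richer markdown.
    return `**Result**\n\n${JSON.stringify(data)}`;
  }
}

class FormatterFactory {
  static create(format: "json" | "markdown"): IFormatter {
    return format === "json" ? new JSONFormatter() : new MarkdownFormatter();
  }
}
```

A new output format means one new class and one factory branch; handlers never change.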

For detailed architecture documentation, see:

  • src/services/LIFECYCLE.md - Lifecycle management guide

  • src/tools/EXTENDING.md - Tool extension guide

  • src/formatters/EXTENDING.md - Formatter extension guide

  • docs/HANDLERS.md - Handler patterns and implementation guidance

  • docs/TRANSPORTS.md - Transport options (stdio, HTTP, SSH stdio, Tailscale Serve)

Directory Structure

synapse-mcp/
├── src/
│   ├── index.ts                    # Entry point, transport setup
│   ├── types.ts                    # TypeScript interfaces
│   ├── constants.ts                # Configuration constants
│   ├── config/
│   │   └── command-allowlist.json  # Allowed commands for scout:exec
│   ├── formatters/
│   │   ├── index.ts                # Response formatting utilities
│   │   └── formatters.test.ts      # Formatter tests
│   ├── tools/
│   │   ├── index.ts                # Tool registration router
│   │   ├── flux.ts                 # Flux tool handler + routing
│   │   ├── scout.ts                # Scout tool handler + routing
│   │   ├── container.ts            # handleContainerAction()
│   │   ├── compose.ts              # handleComposeAction()
│   │   ├── docker.ts               # handleDockerAction()
│   │   └── host.ts                 # handleHostAction()
│   ├── services/
│   │   ├── docker.ts               # DockerService
│   │   ├── compose.ts              # ComposeService
│   │   ├── ssh.ts                  # SSHService
│   │   └── scout/                  # Scout-specific services
│   │       ├── pool.ts             # SSH connection pool
│   │       ├── executors.ts        # Command execution
│   │       └── transfer.ts         # File transfer (beam)
│   ├── schemas/
│   │   ├── index.ts                # FluxSchema + ScoutSchema exports
│   │   ├── common.ts               # Shared schemas (pagination, response_format)
│   │   ├── container.ts            # Container subaction schemas
│   │   ├── compose.ts              # Compose subaction schemas
│   │   ├── docker.ts               # Docker subaction schemas
│   │   ├── host.ts                 # Host subaction schemas
│   │   └── scout.ts                # Scout action schemas
│   └── lint.test.ts                # Linting tests
├── dist/                           # Compiled JavaScript
├── package.json
├── tsconfig.json
└── README.md

Key Architectural Decisions

V3 Schema Refactor - Two Tools Pattern:

  • Flux: 5 actions (help, container, compose, docker, host) with 41 total subactions

  • Scout: 11 actions (9 simple + 2 with subactions) for 16 total operations

  • Clean separation: Flux = Docker/state changes, Scout = SSH/read operations

  • Total: 57 operations across both tools

Discriminated Union for O(1) Validation:

  • Flux: action + subaction fields with per-action nested discriminated unions

  • Scout: Primary action discriminator with nested discriminators for zfs and logs

  • Validation latency: <0.005ms average across all operations

  • Zero performance degradation regardless of which operation is called
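The O(1) property comes from dispatching on the discriminator field before validating the rest of the payload. Conceptually (a hand-rolled sketch for illustration — the project itself uses Zod discriminated unions):

```typescript
// Each action gets its own validator; dispatch is one hash lookup, not a linear scan.
type ActionValidator = (payload: Record<string, unknown>) => boolean;

const validators: Record<string, ActionValidator> = {
  container: (p) => typeof p.subaction === "string" && typeof p.container_id === "string",
  host: (p) => typeof p.subaction === "string" && typeof p.host === "string",
};

function validateRequest(payload: Record<string, unknown>): boolean {
  const validator = validators[String(payload.action)];
  return validator ? validator(payload) : false;
}
```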

Help System:

  • Auto-generated help handlers for both tools

  • Introspects Zod schemas using .describe() metadata

  • Supports topic-specific help (e.g., flux help container:logs)

  • Available in markdown or JSON format

SSH Connection Pooling:

  • 50× faster for repeated operations

  • Automatic idle timeout and health checks

  • Configurable pool size and connection reuse

  • Transparent integration (no code changes required)
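Conceptually, connection reuse works like this (a toy sketch; the real pool in src/services/scout/pool.ts also handles idle timeouts and health checks):

```typescript
// Stand-in for a live SSH connection.
class Connection {
  constructor(public readonly host: string) {}
}

class ConnectionPool {
  private idle = new Map<string, Connection>();

  acquire(host: string): Connection {
    const existing = this.idle.get(host);
    if (existing) return existing;      // reuse skips the per-operation handshake
    const fresh = new Connection(host); // would perform the real SSH connect here
    this.idle.set(host, fresh);
    return fresh;
  }
}
```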

Test Coverage:

  • Unit tests for all services, schemas, and tools

  • Integration tests for end-to-end workflows

  • Performance benchmarks for schema validation

  • TDD approach for all new features

Performance

Schema Validation

Both Flux and Scout tools use Zod discriminated unions for constant-time schema dispatch:

  • Validation latency: <0.005ms average across all operations

  • Flux optimization: action + subaction with nested subaction discriminators

  • Scout optimization: Primary action discriminator with nested discriminators for zfs/logs

  • Consistency: All operations perform identically fast (no worst-case scenarios)

SSH Connection Pooling

All SSH operations use connection pooling for optimal performance:

  • 50× faster for repeated operations

  • Connections reused across compose operations

  • Automatic idle timeout and health checks

  • Configurable via environment variables

See docs/ssh-connection-pooling.md for details.

Key Benefits:

  • Eliminate 250ms connection overhead per operation

  • Support high-concurrency scenarios (configurable pool size)

  • Automatic connection cleanup and health monitoring

  • Zero code changes required (transparent integration)

Benchmarks

Run performance benchmarks:

npm run test:bench

Expected results:

  • Worst-case validation: <0.005ms (0.003ms typical)

  • Average-case validation: <0.005ms (0.003ms typical)

  • Performance variance: <0.001ms (proves O(1) consistency)

Troubleshooting

Common Issues

"Cannot connect to Docker socket"

Symptoms:

  • Error: connect EACCES /var/run/docker.sock

  • Error: connect ENOENT /var/run/docker.sock

Solutions:

  1. Permissions - Add your user to the docker group:

    sudo usermod -aG docker $USER
    newgrp docker  # Apply without logout
  2. Socket path - Check if Docker socket exists:

    ls -la /var/run/docker.sock
    # If not found, Docker may not be installed or running
    sudo systemctl status docker
  3. Docker not running - Start Docker daemon:

    sudo systemctl start docker
    sudo systemctl enable docker  # Start on boot

"SSH connection failed" / "All configured authentication methods failed"

Symptoms:

  • Error: HostOperationError: SSH connection failed

  • Operations timeout on remote hosts

Solutions:

  1. Test SSH manually - Verify SSH access works:

    ssh -i ~/.ssh/id_rsa user@hostname
    # Should connect without password prompt
  2. Check SSH key permissions - Keys must not be world-readable:

    chmod 600 ~/.ssh/id_rsa
    chmod 644 ~/.ssh/id_rsa.pub
  3. Verify host config - Ensure sshUser and sshKeyPath are correct:

    {
      "name": "remote",
      "host": "192.168.1.100",
      "protocol": "ssh",
      "sshUser": "admin",
      "sshKeyPath": "~/.ssh/id_rsa"  // or absolute: "/home/user/.ssh/id_rsa"
    }
  4. SSH agent - If using SSH agent, ensure key is loaded:

    ssh-add -l             # List loaded keys
    ssh-add ~/.ssh/id_rsa  # Add if missing

"Docker Compose project not found"

Symptoms:

  • Error: No Docker Compose projects found on host

  • Expected projects don't appear in listings

Solutions:

  1. Check search paths - Add custom compose paths to config:

    {
      "name": "myhost",
      "composeSearchPaths": ["/opt/appdata", "/mnt/docker", "/home/user/compose"]
    }
  2. Verify compose files exist - SSH to host and check:

    find /path/to/search -name "docker-compose.y*ml" -o -name "compose.y*ml"
  3. Force cache refresh - Use the refresh subaction:

    { "action": "compose", "subaction": "refresh", "host": "myhost" }
  4. Check file permissions - Ensure compose files are readable:

    ls -la /path/to/docker-compose.yml
    # Should show -rw-r--r-- (at minimum)

"Operation timed out"

Symptoms:

  • Requests hang for 30+ seconds then fail

  • Stats collection fails intermittently

Solutions:

  1. Check host connectivity - Test network latency:

    ping -c 4 hostname
    # Should show <50ms latency for local network
  2. Docker daemon responsive - Check if Docker is overloaded:

    ssh user@host "docker ps"
    # Should respond in <1s
  3. Reduce parallelism - Query hosts sequentially instead of "all":

    { "action": "container", "subaction": "list", "host": "specific-host" }

"Command not in allowed list"

Symptoms:

  • Error: Command 'X' not in allowed list

  • Container exec or Scout commands fail

Solutions:

  1. Use allowed commands only - Check the allowlist:

    // Allowed commands (src/constants.ts):
    [
      "docker", "docker-compose", "systemctl",
      "cat", "head", "tail", "grep", "rg",
      "find", "ls", "tree", "wc", "sort", "uniq",
      "diff", "stat", "file", "du", "df",
      "pwd", "hostname", "uptime", "whoami"
    ]
  2. Development bypass - For testing only (NOT production):

    export NODE_ENV=development
    export SYNAPSE_ALLOW_ANY_COMMAND=true
    # Restart server
  3. Request addition - If command is needed, open an issue with:

    • Command name and purpose

    • Security justification

    • Example use cases

Diagnostic Steps

1. Check MCP Server Logs

# If running via stdio
tail -f ~/.mcp/logs/synapse-mcp.log

# If running via HTTP
curl http://localhost:53001/health

2. Test Host Configuration

# Test Docker API connection
docker -H ssh://user@host ps

# Test SSH command execution
ssh user@host "docker ps"

# Test Docker Compose
ssh user@host "docker compose -f /path/to/compose.yml ps"

3. Validate Configuration

# Check config file syntax
cat ~/.config/synapse-mcp/config.json | jq .

# Verify paths exist
ls -la ~/.ssh/id_rsa
ls -la /var/run/docker.sock

4. Enable Debug Logging

# Set environment variable
export DEBUG=synapse:*

# Or for specific modules
export DEBUG=synapse:ssh,synapse:docker

# Restart server to see detailed logs

Recovery Procedures

Service Down

  1. Check process - Ensure server is running:

    ps aux | grep synapse-mcp
  2. Restart server - Restart via your MCP client (Claude Code):

    • Restart Claude Code application

    • Or restart specific MCP server from settings

  3. Check config - Validate JSON syntax:

    jq . ~/.config/synapse-mcp/config.json
    # Should output formatted JSON (no errors)

High Error Rate

  1. Check Docker health - All hosts:

    for host in host1 host2 host3; do
      ssh user@$host "docker info"
    done
  2. Reduce load - Limit concurrent operations:

    • Use specific host instead of "all"

    • Add delays between operations

    • Reduce pagination limits

  3. Clear cache - Force SSH connection pool reset:

    # Restart server (closes all connections)
    # Connections will be recreated on demand

Rollback Procedure

If an update causes issues:

  1. Check version - Note current version:

    cd ~/path/to/synapse-mcp
    git log --oneline -1
  2. Rollback - Revert to previous working version:

    git log --oneline -10  # Find last working commit
    git checkout <commit-hash>
    pnpm install
    pnpm run build
  3. Restart - Restart MCP server in Claude Code

  4. Report issue - Open GitHub issue with:

    • Version that failed

    • Error messages

    • Steps to reproduce

Getting Help

  • GitHub Issues: https://github.com/anthropics/claude-code/issues

  • Documentation: See .docs/ directory for detailed architecture docs

  • Transport setup: See docs/TRANSPORTS.md for secure local/remote connection patterns

  • Logs: Check ~/.mcp/logs/ for detailed error traces

License

MIT
