This server provides unified Docker management and SSH remote operations for homelab infrastructure through two main tools: Flux for Docker operations and Scout for SSH operations.
Docker Management (Flux Tool)
Container lifecycle: Start, stop, restart, pause/resume, pull, recreate, and exec into containers
Docker Compose: Full project management (up, down, restart, logs, build, pull, recreate) with auto-discovery and caching
Image operations: List, pull, build, and remove Docker images
System operations: Docker daemon info, disk usage, prune unused resources (images, containers, volumes, cache)
Monitoring: Real-time resource statistics (CPU, memory, network, I/O), detailed container inspection
Logs: Advanced filtering with time ranges and safe grep patterns
Host operations: Check connectivity, monitor resources, view systemd services, network info, and mounted filesystems
Smart search: Find containers by name, image, or labels across all hosts
Remote Operations (Scout Tool)
File operations: Read files, list directory trees, find by glob patterns, compare files (delta), and transfer between hosts (beam)
Command execution: Execute allowlist-validated commands on single or multiple hosts
Process monitoring: List and filter processes by user, CPU, or memory usage
System logs: Access syslog, journald, dmesg, and auth logs with filtering
ZFS management: Monitor pools, datasets, and snapshots with health status
Disk monitoring: Filesystem usage across all mounts
Infrastructure Features
Multi-host support: Manage Unraid, Proxmox, and bare metal systems
Auto-discovery: Local Docker socket and SSH hosts from ~/.ssh/config
SSH connection pooling: 50× faster repeated operations
Dual transport: stdio for Claude Code and HTTP for remote access
Security: Path traversal protection, command allowlists, and safe execution patterns
Performance: O(1) schema validation (<0.005ms), pagination support, and configurable resource limits
Manages Docker containers across multiple homelab hosts, providing tools for container lifecycle management (start, stop, restart, pause/unpause), log retrieval, resource monitoring (CPU, memory, network, I/O), container search and inspection, and Docker system operations including disk usage analysis and resource pruning.
Synapse MCP
MCP (Model Context Protocol) server providing Flux (Docker management) and Scout (SSH operations) tools for homelab infrastructure. The neural connection point for your distributed systems.
Designed for use with Claude Code and other MCP-compatible clients.
Installation
Claude Code Plugin (Recommended)
# Add the synapse marketplace
/plugin marketplace add jmagar/synapse-mcp
# Install the synapse-mcp plugin
/plugin install synapse-mcp@synapse
What you get:
✅ /flux and /scout commands
✅ Auto-configured MCP server
✅ Complete documentation and examples
✅ SSH host auto-discovery
Usage
# List Docker containers
/flux list containers
# Check SSH hosts
/scout list hosts
# Monitor system resources
/flux show resources
Direct MCP Server Setup
For non-Claude Code MCP clients, see Transport Quick Start below.
Transport Quick Start
Choose one:
Local use: stdio (default)
Secure remote with minimal setup: stdio over SSH
Remote HTTP: API key auth and/or Tailscale Serve auth
See docs/TRANSPORTS.md for exact setup and configs for all transport modes.
Features
Flux Tool (Docker Infrastructure Management)
Container lifecycle: Start, stop, restart, pause/resume, pull, recreate, exec
Docker Compose: Full project management (up, down, restart, logs, build, pull, recreate)
Image operations: List, pull, build, remove Docker images
Host operations: Status checks, resource monitoring, systemd services, network info
Log retrieval: Advanced filtering with time ranges, grep (safe patterns only), stream selection
Resource monitoring: Real-time CPU, memory, network, I/O statistics
Smart search: Find containers by name, image, or labels across all hosts
Pagination & filtering: All list operations support limits, offsets, and filtering
Scout Tool (SSH Remote Operations)
File operations: Read files, directory trees, file transfer (beam), diff comparison
Remote execution: Execute commands with allowlist security
Process monitoring: List and filter processes by user, CPU, memory
ZFS management: Pools, datasets, snapshots with health monitoring
System logs: Access syslog, journald, dmesg, auth logs with filtering (safe grep patterns only)
Disk monitoring: Filesystem usage across all mounts
Multi-host operations: Execute commands or read files across multiple hosts (emit)
Infrastructure
Multi-host support: Manage Docker and SSH across Unraid, Proxmox, bare metal
Auto-detect local Docker: Automatically adds local Docker socket if available
Dual transport: stdio for Claude Code, HTTP for remote access
O(1) validation: Discriminated union pattern for instant schema validation
SSH connection pooling: 50× faster repeated operations
Tools
The server provides two powerful tools with discriminated union schemas for O(1) validation:
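The discriminated-union idea can be sketched as follows. This is an illustrative reduction (the request shapes and handler names are hypothetical, not the server's actual source): the `action` field acts as the discriminator, so picking the right schema/handler is one key lookup instead of trying every schema in sequence.

```typescript
// Sketch only: "action" is the discriminator, so dispatch is a single
// indexed lookup on a plain object -- cost stays O(1) no matter how many
// operations each tool grows. Handler bodies here are stand-ins.
type FluxRequest =
  | { action: "container"; subaction: string; container_id?: string }
  | { action: "compose"; subaction: string; project?: string }
  | { action: "docker"; subaction: string; image?: string }
  | { action: "host"; subaction: string };

const handlers: Record<FluxRequest["action"], (req: FluxRequest) => string> = {
  container: (req) => `container:${req.subaction}`,
  compose: (req) => `compose:${req.subaction}`,
  docker: (req) => `docker:${req.subaction}`,
  host: (req) => `host:${req.subaction}`,
};

function dispatch(req: FluxRequest): string {
  // O(1): no sequential trial of every schema, just a keyed lookup.
  return handlers[req.action](req);
}
```

The same pattern extends to validation: each `action` key maps to exactly one sub-schema, so an unknown action fails fast without scanning the alternatives.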
Available Tools
flux
Docker infrastructure management - container, compose, docker, and host operations
scout
SSH remote operations - file, process, and system inspection
Getting Help
Both tools include auto-generated help:
{ "action": "help" }
{ "action": "help", "topic": "container:resume" }
{ "action": "help", "format": "json" }
Breaking change from V2: The unified tool has been completely removed and replaced with flux and scout.
Tool 1: flux - Docker Infrastructure Management
43 operations across 5 actions - Container lifecycle, compose orchestration, system management
FLUX OPERATIONS:
Container (14 operations)
● exec - Execute command inside a container
● inspect - Get detailed container information
● list - List containers with optional filtering
● logs - Get container logs with optional filtering
● pause - Pause a running container
● pull - Pull latest image for a container
⚠️ recreate - Recreate a container with optional image pull
● restart - Restart a container
● resume - Resume a paused container
● search - Search containers by query string
● start - Start a stopped container
● stats - Get resource usage statistics
● stop - Stop a running container
● top - Show running processes in a container
Compose (10 operations)
● build - Build Docker Compose project images
⚠️ down - Stop a Docker Compose project
● list - List all Docker Compose projects
● logs - Get Docker Compose project logs
● pull - Pull Docker Compose project images
⚠️ recreate - Recreate Docker Compose project containers
● refresh - Refresh compose project cache by scanning filesystem
● restart - Restart a Docker Compose project
● status - Get Docker Compose project status
● up - Start a Docker Compose project
Docker (9 operations)
● build - Build a Docker image
● df - Get Docker disk usage information
● images - List Docker images
● info - Get Docker daemon information
● networks - List Docker networks
⚠️ prune - Remove unused Docker resources
● pull - Pull a Docker image
⚠️ rmi - Remove a Docker image
● volumes - List Docker volumes
Host (9 operations)
✓ doctor - Run diagnostic checks on host Docker configuration
● info - Get OS, kernel, architecture, and hostname information
● mounts - Get mounted filesystems
● network - Get network interfaces and IP addresses
● ports - List all port mappings for containers on a host
● resources - Get CPU, memory, and disk usage via SSH
● services - Get systemd service status
✓ status - Check Docker connectivity to host
● uptime - Get system uptime
Tool 2: scout - SSH Remote Operations
16 operations across 11 actions - File operations, process inspection, system logs
SCOUT OPERATIONS:
Simple Actions (9 operations)
● beam - File transfer between local and remote hosts
● delta - Compare files or content between locations
● df - Disk usage information for a remote host
● emit - Multi-host operations
● exec - Execute command on a remote host
● find - Find files by glob pattern on a remote host
● nodes - List all configured SSH hosts
● peek - Read file or directory contents on a remote host
● ps - List and search processes on a remote host
ZFS (3 operations)
● pools - List ZFS storage pools
● datasets - List ZFS datasets
● snapshots - List ZFS snapshots
Logs (4 operations)
● syslog - Access system log files (/var/log)
● journal - Access systemd journal logs
● dmesg - Access kernel ring buffer logs
● auth - Access authentication logs
Legend:
● Standard operation
⚠️ Destructive operation (requires force: true)
✓ Diagnostic/health check
→ Port mapping notation (host→container/protocol)
Simple Actions (9)

| Action | Description |
| --- | --- |
| nodes | List all configured SSH hosts |
| peek | Read file or directory contents (with tree mode) |
| exec | Execute command on remote host (allowlist validated) |
| find | Find files by glob pattern |
| delta | Compare files or content between locations |
| emit | Multi-host operations (read files or execute commands) |
| beam | File transfer between local/remote or remote/remote |
| ps | List and search processes with filtering |
| df | Disk usage information |
ZFS Operations (action: "zfs") - 3 subactions

| Subaction | Description |
| --- | --- |
| pools | List ZFS storage pools with health status |
| datasets | List ZFS datasets (filesystems and volumes) |
| snapshots | List ZFS snapshots |
Log Operations (action: "logs") - 4 subactions

| Subaction | Description |
| --- | --- |
| syslog | Access system log files (/var/log) |
| journal | Access systemd journal logs with unit filtering |
| dmesg | Access kernel ring buffer logs |
| auth | Access authentication logs |
Compose Auto-Discovery
The MCP server automatically discovers and caches Docker Compose project locations, eliminating the need to specify file paths for every operation.
How It Works
The discovery system uses a multi-layer approach:
Cache Check: Looks up project in local cache (.cache/compose-projects/)
Docker List: Queries docker compose ls for running projects
Filesystem Scan: Scans configured search paths for compose files
Error: Returns error if project not found in any layer
Discovery results are cached for 24 hours (configurable via COMPOSE_CACHE_TTL_HOURS environment variable).
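The layered lookup above can be sketched roughly like this. Each "layer" is a hypothetical stand-in for the real cache lookup, `docker compose ls` query, and filesystem scanner; layers are tried in order and the first hit wins, so the cheap cache path short-circuits the expensive scans.

```typescript
// Sketch of the four-layer compose-project lookup (names illustrative).
interface ProjectLocation {
  project: string;
  path: string;
  source: string; // e.g. "cache", "docker-ls", "filesystem"
}

type Layer = (project: string) => ProjectLocation | undefined;

function resolveProject(project: string, layers: Layer[]): ProjectLocation {
  for (const layer of layers) {
    const hit = layer(project); // cache -> docker ls -> filesystem scan
    if (hit) return hit;
  }
  // Mirrors the final "Error" layer: nothing found anywhere.
  throw new Error(`Compose project not found in any layer: ${project}`);
}
```

A cache hit returns immediately; a filesystem hit would then be written back to the cache so the next lookup takes the fast path.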
Configuration
Add optional composeSearchPaths to your host configuration:
{
"hosts": [
{
"name": "my-host",
"host": "192.168.1.100",
"protocol": "ssh",
"composeSearchPaths": ["/opt/stacks", "/srv/docker"]
}
]
}
Default search paths: ["/compose", "/mnt/cache/compose", "/mnt/cache/code"] if not specified.
Optional Host Parameter
Most compose operations accept an optional host parameter. When omitted, the system automatically searches all configured hosts in parallel to find the project:
// Explicit host (faster - no search needed)
{ "action": "compose", "subaction": "up", "project": "plex", "host": "server1" }
// Auto-discover (searches all hosts in parallel)
{ "action": "compose", "subaction": "up", "project": "plex" }
Auto-discovery times out after 30 seconds if the project cannot be found on any host. If a project exists on multiple hosts, you'll receive an error asking you to specify the host parameter explicitly.
Cache Management
TTL: 24 hours (default, configurable)
Storage:
.cache/compose-projects/directory (gitignored)Invalidation: Automatic when operations fail due to stale paths
Manual Refresh: Use
compose:refreshsubaction
Manual Cache Refresh
Force a cache refresh by scanning the filesystem:
// Refresh all hosts
{ "action": "compose", "subaction": "refresh" }
// Refresh specific host
{ "action": "compose", "subaction": "refresh", "host": "server1" }
Returns a list of discovered projects with their paths and discovery source (docker-ls or filesystem scan).
Architecture
┌─────────────┐
│ Handler │
└──────┬──────┘
│
v
┌──────────────┐ ┌──────────────┐
│ HostResolver │─────>│ Discovery │
└──────────────┘ └──────┬───────┘
│
┌────────┴────────┐
v v
┌──────────┐ ┌──────────┐
│ Cache │ │ Scanner │
└──────────┘      └──────────┘
Components:
HostResolver: Finds which host contains the project (parallel search)
ComposeDiscovery: Orchestrates cache, docker-ls, and filesystem scanning
ComposeProjectCache: File-based cache with TTL validation
ComposeScanner: Filesystem scanning for compose files (respects max depth of 3)
Example Usage
Flux Tool Examples
// List running containers
{ "tool": "flux", "action": "container", "subaction": "list", "state": "running" }
// Restart a container
{ "tool": "flux", "action": "container", "subaction": "restart", "container_id": "plex", "host": "tootie" }
// Start a compose project (auto-discovers location and host)
{ "tool": "flux", "action": "compose", "subaction": "up", "project": "media-stack" }
// Start a compose project on specific host
{ "tool": "flux", "action": "compose", "subaction": "up", "host": "tootie", "project": "media-stack" }
// Refresh compose project cache
{ "tool": "flux", "action": "compose", "subaction": "refresh" }
// Get host resources
{ "tool": "flux", "action": "host", "subaction": "resources", "host": "tootie" }
// Pull an image
{ "tool": "flux", "action": "docker", "subaction": "pull", "host": "tootie", "image": "nginx:latest" }
// Execute command in container
{ "tool": "flux", "action": "container", "subaction": "exec", "container_id": "nginx", "command": "nginx -t" }
Scout Tool Examples
// List configured SSH hosts
{ "tool": "scout", "action": "nodes" }
// Read a remote file
{ "tool": "scout", "action": "peek", "target": "tootie:/etc/nginx/nginx.conf" }
// Show directory tree
{ "tool": "scout", "action": "peek", "target": "dookie:/var/log", "tree": true }
// Execute remote command
{ "tool": "scout", "action": "exec", "target": "tootie:/var/www", "command": "du -sh *" }
// Transfer file between hosts
{ "tool": "scout", "action": "beam", "source": "tootie:/tmp/backup.tar.gz", "destination": "dookie:/backup/" }
// Check ZFS pool health
{ "tool": "scout", "action": "zfs", "subaction": "pools", "host": "dookie" }
// View systemd journal
{ "tool": "scout", "action": "logs", "subaction": "journal", "host": "tootie", "unit": "docker.service" }
// Multi-host command execution
{ "tool": "scout", "action": "emit", "targets": ["tootie:/tmp", "dookie:/tmp"], "command": "df -h" }
Installation
# Clone or copy the server files
cd synapse-mcp
# Install dependencies
pnpm install
# Build
pnpm run build
The server will create a .cache/compose-projects/ directory for storing discovered project locations. This directory is automatically gitignored.
Configuration
SSH Config Auto-Loading
Zero configuration required! Synapse-MCP automatically discovers hosts from your ~/.ssh/config file.
All SSH hosts with a HostName directive are automatically available for Docker management via SSH tunneling to the remote Docker socket. Manual configuration is completely optional.
Priority order:
Manual config file (highest) - synapse.config.json
SYNAPSE_HOSTS_CONFIG environment variable
SSH config auto-discovery - ~/.ssh/config
Local Docker socket (fallback)
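The priority chain above amounts to "first source that yields hosts wins". A minimal sketch, with each loader as a hypothetical stand-in for the real implementation:

```typescript
// Sketch of the configuration priority order. Each source returns a host
// list, or undefined when that source is unavailable; the first defined
// result is used and lower-priority sources are never consulted.
interface HostConfig { name: string; host: string }

type Source = () => HostConfig[] | undefined;

function loadHosts(sources: {
  configFile: Source;   // synapse.config.json (highest priority)
  envVar: Source;       // SYNAPSE_HOSTS_CONFIG
  sshConfig: Source;    // ~/.ssh/config auto-discovery
  localSocket: Source;  // local Docker socket fallback
}): HostConfig[] {
  return (
    sources.configFile() ??
    sources.envVar() ??
    sources.sshConfig() ??
    sources.localSocket() ??
    []
  );
}
```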
Example SSH config:
Host production
HostName 192.168.1.100
User admin
Port 22
IdentityFile ~/.ssh/id_ed25519
Host staging
HostName 192.168.1.101
User deploy
Port 2222
IdentityFile ~/.ssh/staging_key
Both hosts are immediately available as flux targets with SSH tunneling to /var/run/docker.sock. No additional configuration needed!
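Extracting hosts from an SSH config is conceptually simple; the sketch below handles only the Host/HostName pairs the auto-discovery rule above depends on (real ssh_config parsing also involves Match blocks, wildcards, and Include directives, which this toy version skips).

```typescript
// Minimal sketch: collect Host entries that carry an explicit HostName
// directive, skipping wildcard patterns like "Host *".
function parseSshHosts(
  config: string,
): Array<{ name: string; hostname: string }> {
  const hosts: Array<{ name: string; hostname: string }> = [];
  let current: string | undefined;
  for (const raw of config.split("\n")) {
    const line = raw.trim();
    if (!line || line.startsWith("#")) continue;
    const [key, ...rest] = line.split(/\s+/);
    if (key.toLowerCase() === "host") {
      current = rest[0];
    } else if (
      key.toLowerCase() === "hostname" &&
      current !== undefined &&
      !current.includes("*")
    ) {
      hosts.push({ name: current, hostname: rest[0] });
    }
  }
  return hosts;
}
```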
Manual override: If you create a synapse.config.json entry with the same name as an SSH host, the manual config completely replaces the SSH config (no merging).
Manual Configuration (Optional)
Create a config file at one of these locations (checked in order):
Path in SYNAPSE_CONFIG_FILE env var
./synapse.config.json (current directory)
~/.config/synapse-mcp/config.json
~/.synapse-mcp.json
Example Config
{
"hosts": [
{
"name": "local",
"host": "localhost",
"protocol": "ssh",
"dockerSocketPath": "/var/run/docker.sock",
"tags": ["development"]
},
{
"name": "production",
"host": "192.168.1.100",
"port": 22,
"protocol": "ssh",
"sshUser": "admin",
"sshKeyPath": "~/.ssh/id_rsa",
"tags": ["production"]
},
{
"name": "unraid",
"host": "unraid.local",
"port": 2375,
"protocol": "http",
"tags": ["media", "storage"]
}
]
}
Copy config/synapse.config.example.json as a starting point:
cp config/synapse.config.example.json ~/.config/synapse-mcp/config.json
# or
cp config/synapse.config.example.json ~/.synapse-mcp.json
Note: If /var/run/docker.sock exists and isn't already in your config, it will be automatically added as a host using your machine's hostname. This means the server works out-of-the-box for local Docker without any configuration.
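That fallback is a small piece of logic; sketched below with the socket check and hostname injected as parameters (the real server would check /var/run/docker.sock and os.hostname() directly, and this function's name and shape are illustrative only).

```typescript
// Sketch of the local-socket fallback: append a local host entry when the
// socket exists and no configured host already covers it.
interface HostEntry { name: string; host: string; dockerSocketPath?: string }

function withLocalDocker(
  hosts: HostEntry[],
  socketExists: boolean,
  hostname: string,
): HostEntry[] {
  const alreadyConfigured = hosts.some(
    (h) => h.dockerSocketPath === "/var/run/docker.sock",
  );
  if (!socketExists || alreadyConfigured) return hosts;
  return [
    ...hosts,
    { name: hostname, host: "localhost", dockerSocketPath: "/var/run/docker.sock" },
  ];
}
```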
Host Configuration Options
| Field | Type | Description |
| --- | --- | --- |
| name | string | Unique identifier for the host |
| host | string | Hostname or IP address |
| port | number | Docker API port (default: 2375) |
| protocol | "ssh" or "http" | Connection protocol |
| dockerSocketPath | string | Path to Docker socket (for local connections) |
| sshUser | string | SSH username for remote connections (protocol: "ssh") |
| sshKeyPath | string | Path to SSH private key for authentication |
| tags | string[] | Optional tags for filtering |
Environment Variables Reference
Complete reference for all environment variables that control server behavior.
Server Configuration
| Variable | Type | Default | Description |
| --- | --- | --- | --- |
| SYNAPSE_CONFIG_FILE | string | Auto-detect | Path to config file. Overrides default search paths. |
| SYNAPSE_HOSTS_CONFIG | string | (none) | JSON config as environment variable. Fallback if no config file found. |
| SYNAPSE_PORT | number | 53000 | HTTP server port (only used with --http). |
| SYNAPSE_HOST | string | 127.0.0.1 | HTTP server bind address. Use 0.0.0.0 to listen on all interfaces. |
| NODE_ENV | string | (none) | Node environment. Affects stack traces and error verbosity. |
Performance Tuning
| Variable | Type | Default | Description |
| --- | --- | --- | --- |
| SSH_POOL_MAX_CONNECTIONS | number | 5 | Maximum SSH connections per host. Increase for high-concurrency workloads (10-20 for 100+ containers). |
| SSH_POOL_IDLE_TIMEOUT_MS | number | 60000 | Close idle connections after this duration. Reduce to save resources (30000 for low-usage). |
| SSH_POOL_CONNECTION_TIMEOUT_MS | number | 5000 | SSH connection timeout. Increase for slow networks (10000-15000). |
| SSH_POOL_HEALTH_CHECK_INTERVAL_MS | number | 30000 | Health check interval. Set to 0 to disable. |
| COMPOSE_CACHE_TTL_HOURS | number | 24 | Compose project cache lifetime in hours. Lower for frequently changing projects (6-12 hours). |
Security Options
| Variable | Type | Default | Description | ⚠️ Security Impact |
| --- | --- | --- | --- | --- |
| (API key variable) | string | unset | Enables HTTP API key authentication when set. | If unset, HTTP requests are not authenticated. |
| (CORS origins variable) | string | unset | Comma-separated allowlist of trusted CORS origins for browser clients. | If unset, cross-origin browser access is blocked (the secure default). |
| SYNAPSE_ALLOW_ANY_COMMAND | boolean | false | DANGEROUS: Disables command allowlist for scout:exec. | CRITICAL: Allows arbitrary command execution. Only use in trusted development environments. Never set in production. |
Security Warning for SYNAPSE_ALLOW_ANY_COMMAND
When set to true, this variable completely bypasses the command allowlist, allowing execution of ANY command on managed hosts via scout:exec. This includes destructive commands like rm -rf /, privilege escalation, and backdoor installation.
Default allowed commands (when false):
Read operations: cat, head, tail, grep, rg, find, ls, tree
Info operations: stat, file, du, df, pwd, hostname, uptime, whoami
Text processing: wc, sort, uniq, diff
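Allowlist validation for the default command set above might look like the sketch below (the function name and the metacharacter check are illustrative; a production validator must be far stricter about quoting and compound commands than checking the first token).

```typescript
// Sketch: allow a command only if its binary is in the default allowlist
// and it contains no shell chaining/substitution characters.
const ALLOWED = new Set([
  "cat", "head", "tail", "grep", "rg", "find", "ls", "tree",
  "stat", "file", "du", "df", "pwd", "hostname", "uptime", "whoami",
  "wc", "sort", "uniq", "diff",
]);

function isCommandAllowed(command: string, allowAny = false): boolean {
  if (allowAny) return true; // SYNAPSE_ALLOW_ANY_COMMAND=true bypasses checks
  if (/[;&|`$><]/.test(command)) return false; // no chaining or substitution
  const binary = command.trim().split(/\s+/)[0];
  return ALLOWED.has(binary);
}
```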
When to use SYNAPSE_ALLOW_ANY_COMMAND=true:
Local development only
Single-user environments
When you fully trust all MCP clients
When NODE_ENV=development
Detection:
# Check if variable is set
printenv | grep SYNAPSE_ALLOW_ANY_COMMAND
# Check systemd service
sudo grep SYNAPSE_ALLOW_ANY_COMMAND /etc/systemd/system/synapse-mcp.service
# Check Docker Compose
grep SYNAPSE_ALLOW_ANY_COMMAND docker-compose.yml
Debug and Logging
| Variable | Type | Default | Description |
| --- | --- | --- | --- |
| DEBUG | string | unset | Enable debug logging. Set to synapse:* for all server debug output. |
| LOG_LEVEL | string | info | Logging level: debug, info, warn, or error. |
Example Configurations
Development (Local):
export NODE_ENV=development
export SYNAPSE_CONFIG_FILE=~/.config/synapse-mcp/config.json
export DEBUG=synapse:*
export SSH_POOL_HEALTH_CHECK_INTERVAL_MS=0 # Disable health checks
export LOG_LEVEL=debug
node dist/index.js
Production (HTTP Mode with High Concurrency):
export NODE_ENV=production
export SYNAPSE_PORT=53000
export SYNAPSE_HOST=127.0.0.1 # Localhost only, behind reverse proxy
export SSH_POOL_MAX_CONNECTIONS=10 # Higher concurrency
export COMPOSE_CACHE_TTL_HOURS=12 # Refresh more frequently
export LOG_LEVEL=info
node dist/index.js --http
Production (Stdio Mode for Claude Code):
export NODE_ENV=production
export SYNAPSE_CONFIG_FILE=/etc/synapse-mcp/config.json
export SSH_POOL_MAX_CONNECTIONS=5
export COMPOSE_CACHE_TTL_HOURS=24
export LOG_LEVEL=warn
node dist/index.js
High-Latency Network:
export SSH_POOL_CONNECTION_TIMEOUT_MS=15000 # 15s timeout
export SSH_POOL_IDLE_TIMEOUT_MS=120000 # 2min idle timeout
export SSH_POOL_HEALTH_CHECK_INTERVAL_MS=60000 # 1min health checks
node dist/index.js
Local vs Remote Execution
The server automatically determines whether to use local execution or SSH based on your host configuration:
Local Execution (No SSH)
Commands run directly on localhost using Node.js for best performance:
{
"name": "local",
"host": "localhost",
"protocol": "ssh",
"dockerSocketPath": "/var/run/docker.sock"
}
Requirements: Host must be localhost/127.x.x.x/::1 AND no sshUser specified.
Benefits:
~10x faster than SSH for Compose and host operations
No SSH key management needed
Works out of the box
Remote Execution (SSH)
Commands run via SSH on remote hosts or when sshUser is specified:
{
"name": "production",
"host": "192.168.1.100",
"protocol": "ssh",
"sshUser": "admin",
"sshKeyPath": "~/.ssh/id_rsa"
}When SSH is used:
Host is NOT localhost/127.x.x.x
sshUser is specified (even for localhost)
For all Scout operations (file operations always use SSH)
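The decision rule above reduces to a small predicate. A sketch (function and field names are illustrative, not the server's actual source):

```typescript
// Local execution only when the host resolves to the local machine AND no
// sshUser is configured; anything else goes over SSH.
interface ExecHost { host: string; sshUser?: string }

function usesLocalExecution(h: ExecHost): boolean {
  const isLoopback =
    h.host === "localhost" || h.host === "::1" || /^127\./.test(h.host);
  return isLoopback && h.sshUser === undefined;
}
```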
Docker API vs Command Execution
These are independent:
| Operation | Local Host | Remote Host |
| --- | --- | --- |
| Docker API (container list, stats) | Unix socket | HTTP |
| Commands (compose, systemctl) | Local | SSH |
See .docs/local-vs-remote-execution.md for detailed architecture documentation.
Resource Limits & Defaults
| Limit | Value |
| --- | --- |
| Maximum response size (~12.5k tokens) | 40,000 characters |
| Default pagination limit for list operations | 20 |
| Maximum pagination limit | 100 |
| Default number of log lines to fetch | 50 |
| Maximum log lines allowed | 500 |
| Docker API operation timeout | 30s |
| Stats collection timeout | 5s |
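One plausible way such defaults and maximums get applied to a request (a sketch, not the server's actual code): a missing value falls back to the default, and an oversized value is clamped to the maximum rather than rejected.

```typescript
// Sketch: normalize a requested limit against a default and a hard maximum.
function clampLimit(
  requested: number | undefined,
  def: number,
  max: number,
): number {
  if (requested === undefined) return def;
  return Math.min(Math.max(1, Math.floor(requested)), max);
}
```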
Performance Characteristics
Understanding performance expectations helps optimize your usage and troubleshoot slow operations.
Response Time Expectations
| Operation Type | Expected Latency | Notes |
| --- | --- | --- |
| Single-host operations | 50-150ms | Container list, stats, logs, inspect |
| Multi-host container discovery | 100-500ms | Depends on host count and network latency |
| Compose auto-discovery | 1-500ms | Cache hit: 1ms, docker-ls: 50-100ms, filesystem scan: 200-500ms |
| SSH connection (warm) | <10ms | Connection pool hit |
| SSH connection (cold) | 200-300ms | New connection establishment |
| Container exec | 100ms-30s | Depends on command execution time |
Configuration Loading
Config files: Loaded at server startup (synchronous read)
Config changes: Require server restart (no hot reload)
SSH config: Changes detected automatically on next operation
Cache: Compose project cache has 24-hour TTL (configurable via COMPOSE_CACHE_TTL_HOURS)
Buffer and Output Limits
| Resource | Limit | Behavior on Exceed |
| --- | --- | --- |
| Response character limit | 40,000 chars (~12.5k tokens) | Truncated with warning |
| Container exec output | 10MB per stream (stdout/stderr) | Stream terminated with error |
| Log lines | 50 default, 500 maximum | Paginate with a smaller limit |
| Find results | 100 default, 1000 maximum | Paginate with a smaller limit |
Connection Pooling
| Setting | Default | Tuning |
| --- | --- | --- |
| SSH connections per host | 5 | SSH_POOL_MAX_CONNECTIONS |
| Idle timeout | 60 seconds | SSH_POOL_IDLE_TIMEOUT_MS |
| Connection timeout | 5 seconds | SSH_POOL_CONNECTION_TIMEOUT_MS |
| Health check interval | 30 seconds | SSH_POOL_HEALTH_CHECK_INTERVAL_MS |
Performance Impact:
Warm connections: 20-30× faster than establishing new connections
Pool exhaustion: Operations queue until connection available
Health checks: Detect and remove stale connections automatically
Compose Discovery Cache
Three-tier strategy:
Cache check (fastest, 0-1ms) - .cache/compose-projects/
docker compose ls (medium, 50-100ms) - running projects only
Filesystem scan (slowest, 200-500ms) - all projects
Cache behavior:
TTL: 24 hours (default, configurable via COMPOSE_CACHE_TTL_HOURS)
Invalidation: Automatic on stale path detection
Storage: Local filesystem (.cache/compose-projects/)
Refresh: Manual via compose:refresh or automatic on cache miss
Scaling Characteristics
Host Count Impact:
1-5 hosts: Optimal performance, minimal latency
6-10 hosts: Good performance, consider explicit host parameter for frequent operations
11-15 hosts: Increased latency, recommend explicit host for all operations
16+ hosts: Consider splitting into multiple MCP server instances
Container Count Impact:
1-50 containers: No impact, all operations fast
51-100 containers: Pagination recommended for list operations
101-500 containers: Always paginate, avoid state: "all" without filters
500+ containers: Use host-specific operations, increase SSH_POOL_MAX_CONNECTIONS
Network Latency Impact:
Low latency (<10ms): Minimal impact on multi-host operations
Medium latency (10-50ms): 2-3× slower for multi-host discovery
High latency (>50ms): Explicitly specify host parameter to avoid discovery overhead
Tuning for Large Deployments
If managing 15+ hosts with 100+ containers:
# Increase connection pool size
export SSH_POOL_MAX_CONNECTIONS=10
# Reduce cache TTL for frequently changing projects
export COMPOSE_CACHE_TTL_HOURS=12
# Disable health checks if connections are stable
export SSH_POOL_HEALTH_CHECK_INTERVAL_MS=0
Operational strategies:
Always specify host: Avoids auto-discovery overhead for known locations
Use pagination: Set limit: 20 for list operations
Batch operations: Group related operations to reuse warm connections
Split by environment: Run separate MCP instances for dev/staging/prod hosts
Performance Monitoring
Monitor response times:
# Watch logs for slow operations
journalctl -u synapse-mcp.service | grep -E "took [0-9]{3,}ms"
# Check connection pool utilization
# (Low availability = need more connections)
Health check:
# Monitor server health
curl http://localhost:53000/health
Enabling Docker API on Hosts
Unraid
Docker API is typically available at port 2375 by default.
Standard Docker (systemd)
Edit /etc/docker/daemon.json:
{
"hosts": ["unix:///var/run/docker.sock", "tcp://0.0.0.0:2375"]
}
Or override the systemd service:
sudo systemctl edit docker.service
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H fd:// -H tcp://0.0.0.0:2375
⚠️ Security Note: Exposing Docker API without TLS is insecure. Use on trusted networks only, or set up TLS certificates.
Usage
With Claude Code
Add to ~/.claude/claude_code_config.json:
{
"mcpServers": {
"synapse": {
"command": "node",
"args": ["/absolute/path/to/synapse-mcp/dist/index.js"],
"env": {
"SYNAPSE_CONFIG_FILE": "/home/youruser/.config/synapse-mcp/config.json"
}
}
}
}
Or if your config is in one of the default locations, you can skip the env entirely:
{
"mcpServers": {
"synapse": {
"command": "node",
"args": ["/absolute/path/to/synapse-mcp/dist/index.js"]
}
}
}
Then in Claude Code:
> List all running containers on tootie (uses flux tool)
> Restart the plex container (uses flux tool)
> Show me the logs from sonarr with errors in the last hour (uses flux tool)
> Which containers are using the most memory? (uses flux tool)
> Read the nginx config on tootie (uses scout tool)
> Check ZFS pool health on dookie (uses scout tool)
> Show me systemd journal errors from the last hour (uses scout tool)
HTTP Mode
For remote access or multi-client scenarios:
# Start HTTP server
node dist/index.js --http
# Server runs on http://127.0.0.1:53000/mcp
# Health check: http://127.0.0.1:53000/health
Environment variables for HTTP mode:
SYNAPSE_PORT: Server port (default: 53000)
SYNAPSE_HOST: Bind address (default: 127.0.0.1)
CLI Help
node dist/index.js --help
Example Interactions
Flux Tool - Container Management
User: What containers are running on tootie?
Claude: [calls flux with action="container", subaction="list", host="tootie", state="running"]
I found 23 running containers on tootie:
🟢 plex (tootie) - Image: linuxserver/plex | Up 3 days
🟢 sonarr (tootie) - Image: linuxserver/sonarr | Up 3 days
🟢 radarr (tootie) - Image: linuxserver/radarr | Up 3 days
...
Flux Tool - Log Analysis
User: Show me any errors from nginx in the last hour
Claude: [calls flux with action="container", subaction="logs",
container_id="nginx", since="1h", grep="error"]
Found 3 error entries in nginx logs:
[14:23:15] 2024/12/15 14:23:15 [error] connect() failed...
Scout Tool - Remote File Access
User: Read the nginx config on tootie
Claude: [calls scout with action="peek", target="tootie:/etc/nginx/nginx.conf"]
Here's the nginx configuration from tootie:
user nginx;
worker_processes auto;
...
Scout Tool - ZFS Health Check
User: Check ZFS pool health on dookie
Claude: [calls scout with action="zfs", subaction="pools", host="dookie"]
ZFS Pools on dookie:
tank - ONLINE | Size: 24TB | Free: 8.2TB | Health: 100%
backup - ONLINE | Size: 12TB | Free: 5.1TB | Health: 100%
Scout Tool - System Logs
User: Show me Docker service errors from systemd journal
Claude: [calls scout with action="logs", subaction="journal",
host="tootie", unit="docker.service", priority="err"]
Recent errors from docker.service:
[15:42:10] Failed to allocate directory watch: Too many open files
[15:42:15] containerd: connection error: desc = "transport: error while dialing"
Troubleshooting
Common issues and their solutions. For additional help, see the operational runbooks in docs/runbooks/.
Service Won't Start
Port Already in Use
Symptom:
Error: listen EADDRINUSE: address already in use :::53000
Cause: Another process is using port 53000 (HTTP mode) or stdout/stdin are not available (stdio mode).
Solution:
For HTTP mode:
# Find process using port 53000
lsof -i :53000
# or
ss -tulpn | grep :53000
# Kill the process or change port
SYNAPSE_PORT=53001 node dist/index.js --http
# Or set permanently
export SYNAPSE_PORT=53001
For stdio mode:
# Check if running in terminal (stdio requires parent process)
# Don't run stdio mode directly in terminal - use via MCP client only
Missing Dependencies
Symptom:
Error: Cannot find module '@modelcontextprotocol/sdk'
Cause: Dependencies not installed or node_modules corrupted.
Solution:
# Reinstall dependencies
rm -rf node_modules pnpm-lock.yaml
pnpm install
# Rebuild
pnpm run build
# Verify installation
pnpm list @modelcontextprotocol/sdk
Permission Denied on Startup
Symptom:
Error: EACCES: permission denied, open '/var/run/docker.sock'
Cause: User not in docker group.
Solution:
# Add user to docker group
sudo usermod -aG docker $USER
# Log out and back in for group change to take effect
# Or use newgrp to activate immediately
newgrp docker
# Verify docker access
docker ps
SSH Connection Failures
Host Key Verification Failed
Symptom:
[SSH] [Host: production] Permission denied (publickey)
# or
Host key verification failed
Cause: SSH host key not in ~/.ssh/known_hosts or key mismatch.
Solution:
Option 1: Pre-seed known_hosts (Recommended)
# Add host key to known_hosts
ssh-keyscan -H hostname >> ~/.ssh/known_hosts
# For all configured hosts
for host in production staging dev; do
ssh-keyscan -H $host >> ~/.ssh/known_hosts
done
Option 2: Manual verification
# Connect manually first to accept key
ssh user@hostname
# Verify fingerprint matches (check console/IPMI)
ssh-keygen -l -f ~/.ssh/known_hosts | grep hostname
Option 3: Remove stale key (if host key changed)
# Remove old key
ssh-keygen -R hostname
# Re-add current key
ssh-keyscan -H hostname >> ~/.ssh/known_hosts
SSH Key Permission Errors
Symptom:
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@ WARNING: UNPROTECTED PRIVATE KEY FILE! @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
Permissions 0644 for '/home/user/.ssh/id_rsa' are too open.
Cause: SSH private key has insecure permissions.
Solution:
# Fix key permissions (required: 600)
chmod 600 ~/.ssh/id_rsa
chmod 600 ~/.ssh/id_ed25519
# Fix directory permissions
chmod 700 ~/.ssh
# Verify
ls -la ~/.ssh/
# Should show: -rw------- for keys
Connection Timeout
Symptom:
[SSH] [Host: production] SSH command timeout after 5000ms
Cause: Network latency, firewall blocking, or host unreachable.
Solution:
Increase timeout:
# Set longer connection timeout (15 seconds)
export SSH_POOL_CONNECTION_TIMEOUT_MS=15000
node dist/index.jsCheck network connectivity:
# Test SSH access manually
ssh -v user@hostname
# Check network latency
ping hostname
# Check firewall rules
sudo ufw status
# or
sudo iptables -LVerify host is reachable:
# Test basic connectivity
nc -zv hostname 22
# Check if SSH daemon is running
ssh user@hostname 'systemctl status sshd'SSH Agent Not Running
Symptom:

```
Could not open a connection to your authentication agent
```

Cause: SSH agent not started or key not added.

Solution:

```bash
# Start SSH agent
eval $(ssh-agent)

# Add key to agent
ssh-add ~/.ssh/id_rsa

# Verify key is loaded
ssh-add -l

# Add to shell startup (~/.bashrc or ~/.zshrc)
if [ -z "$SSH_AUTH_SOCK" ]; then
  eval $(ssh-agent)
  ssh-add ~/.ssh/id_rsa
fi
```

Docker API Connection Errors
Socket Permission Denied
Symptom:

```
Error: connect EACCES /var/run/docker.sock
```

Cause: User not in the docker group, or socket permissions incorrect.

Solution:

Add the user to the docker group:

```bash
# Add current user
sudo usermod -aG docker $USER

# Log out and back in, then verify group membership
groups | grep docker

# Test docker access
docker ps
```

Check socket permissions:

```bash
# Socket should be owned by the docker group
ls -la /var/run/docker.sock
# Should show: srw-rw---- root docker

# If permissions are wrong, fix ownership
sudo chown root:docker /var/run/docker.sock
sudo chmod 660 /var/run/docker.sock
```

Connection Refused
Symptom:

```
Error: connect ECONNREFUSED 192.168.1.100:2375
```

Cause: Docker daemon not running, wrong port, or firewall blocking.

Solution:

Check Docker daemon status:

```bash
# On the target host
systemctl status docker

# Start if not running
sudo systemctl start docker
sudo systemctl enable docker
```

Verify the Docker API port:

```bash
# Check if Docker is listening on the expected port
ss -tulpn | grep 2375

# If not exposed, edit the daemon config
sudo vi /etc/docker/daemon.json
# Add: {"hosts": ["unix:///var/run/docker.sock", "tcp://0.0.0.0:2375"]}
sudo systemctl restart docker
```

Check the firewall:

```bash
# Allow the Docker API port (if using the HTTP protocol)
sudo ufw allow from 192.168.1.0/24 to any port 2375

# Or a specific IP only (more secure)
sudo ufw allow from 192.168.1.10 to any port 2375
```

Docker Daemon Not Ready
Symptom:

```
Cannot connect to the Docker daemon. Is the docker daemon running?
```

Cause: Docker service not started, or crashed.

Solution:

```bash
# Check status
systemctl status docker

# View logs
journalctl -u docker.service -n 50

# Restart daemon
sudo systemctl restart docker

# Check for errors
docker info
```

High Latency Issues
Slow Container Discovery
Symptom: Container operations taking 5-30 seconds across multiple hosts.
Cause: Sequential host scanning without explicit host parameter.
Solution:
Always specify the host when known:

```json
// Instead of:
{ "action": "container", "subaction": "start", "container_id": "plex" }

// Use:
{ "action": "container", "subaction": "start", "container_id": "plex", "host": "production" }
```

Reduce the host count:

```bash
# Split large deployments into multiple MCP instances
# Production hosts:  synapse-mcp-prod
# Development hosts: synapse-mcp-dev
```

Increase the connection pool:

```bash
export SSH_POOL_MAX_CONNECTIONS=10
node dist/index.js
```

Slow Configuration Loading
Symptom: Every request takes 5-10ms longer than expected.
Cause: Config loaded synchronously on every request (PERF-C1).
Solution:
Optimize config file size:

- Keep the config under 10KB
- Split large host lists into multiple files
- Use SSH config auto-discovery instead (parsed once at startup)

Use SSH config auto-discovery:

```
# ~/.ssh/config
Host production
    HostName 192.168.1.100
    User admin
    IdentityFile ~/.ssh/id_rsa

# No manual synapse.config.json needed
```

Network Latency
Symptom: Operations on remote hosts much slower than local.
Cause: High network latency (>50ms).
Solution:
Increase timeouts for slow networks:

```bash
export SSH_POOL_CONNECTION_TIMEOUT_MS=15000  # 15s
export SSH_POOL_IDLE_TIMEOUT_MS=120000       # 2min
```

Use the local cache more aggressively:

```bash
export COMPOSE_CACHE_TTL_HOURS=48  # 2 days
```

Deploy the MCP server closer to the hosts:

```bash
# Run synapse-mcp on the same network segment as the managed hosts
# Or use a VPN to reduce latency
```

Container Not Found Errors
Container ID Too Short
Symptom:
```
Container "abc" not found on any host
```

Cause: Multiple containers match the short prefix, or the ID doesn't exist.

Solution:

Use a longer container ID:

```json
// Instead of:
{ "container_id": "abc" }

// Use at least 8 characters:
{ "container_id": "abc12345" }
```

Use the container name:

```json
{ "container_id": "plex" }
```

List all containers to find the correct ID:

```json
{ "action": "container", "subaction": "list", "state": "all" }
```

Container on Unexpected Host
Symptom: Container exists but not found by auto-discovery.
Cause: Discovery timeout before reaching correct host.
Solution:
Specify the host explicitly:

```json
{
  "action": "container",
  "subaction": "start",
  "container_id": "plex",
  "host": "media-server"
}
```

Increase the discovery timeout:

```bash
# Increase the SSH connection timeout
export SSH_POOL_CONNECTION_TIMEOUT_MS=10000
```

Check the host is reachable:

```bash
ssh user@hostname docker ps
```

Compose Project Not Detected
Project Not in Cache
Symptom:
```
Project "media-stack" not found on any configured host
```

Cause: Cache miss, project in a non-standard location, or project name mismatch.

Solution:

Refresh the cache:

```json
{ "action": "compose", "subaction": "refresh" }
```

Check the actual project name:

```bash
# SSH to the host
docker compose ls

# Or check compose.yaml
cat /path/to/compose.yaml | grep "^name:"
```

Add a search path to the host config:

```json
{
  "name": "production",
  "host": "192.168.1.100",
  "protocol": "ssh",
  "sshUser": "admin",
  "composeSearchPaths": [
    "/compose",
    "/opt/stacks",   // Add custom path
    "/srv/docker"    // Add another path
  ]
}
```

Specify an explicit path (bypasses discovery):

```json
{
  "action": "compose",
  "subaction": "up",
  "project": "media-stack",
  "host": "production",
  "path": "/opt/stacks/media"  // Explicit path
}
```

Stopped Project Not Found
Symptom: Project exists but not detected by docker compose ls.
Cause: docker compose ls only shows running projects.
Solution:
Force a filesystem scan:

```json
// Refreshing the cache triggers a full scan
{ "action": "compose", "subaction": "refresh" }
```

Or use an explicit path:

```json
{
  "action": "compose",
  "subaction": "up",
  "path": "/path/to/project",
  "host": "production"
}
```

Search Depth Too Shallow
Symptom: Deeply nested compose projects not found.
Cause: Default max depth is 3 levels.
Solution:
Organize projects at a shallower depth:

```bash
# Instead of:
/compose/apps/production/services/media/plex/

# Use:
/compose/media-plex/
```

Or manually add specific paths:

```json
{
  "composeSearchPaths": ["/compose/apps/production/services/media/plex"]
}
```

Debug Logging
Enable detailed logging for troubleshooting:
Enable all debug output:

```bash
DEBUG=* node dist/index.js 2>debug.log
```

Enable specific namespaces:

```bash
# SSH operations only
DEBUG=synapse:ssh node dist/index.js

# Docker operations only
DEBUG=synapse:docker node dist/index.js

# Multiple namespaces
DEBUG=synapse:ssh,synapse:docker node dist/index.js
```

Increase the log level:

```bash
export LOG_LEVEL=debug
node dist/index.js
```

Monitor logs in real time:

```bash
# Systemd service
journalctl -u synapse-mcp.service -f

# Or write to a file
node dist/index.js 2>&1 | tee -a synapse.log
```

Getting Help
If you can't resolve the issue:
Check the logs:

```bash
journalctl -u synapse-mcp.service -n 100
```

Review the runbooks: see docs/runbooks/ for detailed procedures.

Check docs/SECURITY.md for security-related issues.

Open a GitHub issue, including:

- Error message and full stack trace
- Steps to reproduce
- Environment details (Node version, OS, host count)
- Relevant config (redact sensitive info)

Community support: tag maintainers in issues for a faster response.
Security
HTTP Transport Authentication
HTTP POST /mcp always requires the X-Synapse-Client header for CSRF protection.
API key authentication is enabled only when SYNAPSE_API_KEY is set.
```bash
# Enable API key authentication
export SYNAPSE_API_KEY="your-secret-key-here"  # Recommended: 32+ characters

# Start the server with HTTP transport
node dist/index.js --transport http
```

If SYNAPSE_API_KEY is not set, requests are allowed without X-API-Key (local/dev behavior).

Required headers:

- X-Synapse-Client: always required (CSRF protection)
- X-API-Key: required when SYNAPSE_API_KEY is configured
Security Features:
Timing-safe comparison prevents timing attacks
CSRF protection blocks cross-origin requests without proper headers
100KB body size limit prevents DoS attacks
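The timing-safe comparison mentioned above can be sketched as follows. This is an illustrative stand-in, not the server's actual code: the function name is ours, and hashing both values to fixed-length digests first is one common way to satisfy `timingSafeEqual`'s equal-length requirement without leaking the key length.

```typescript
import { createHash, timingSafeEqual } from "node:crypto";

// Hash both values to fixed-length digests so timingSafeEqual always
// compares equal-length buffers; the comparison then takes the same time
// whether the first byte differs or the last one does.
function safeKeyCompare(presented: string, expected: string): boolean {
  const a = createHash("sha256").update(presented).digest();
  const b = createHash("sha256").update(expected).digest();
  return timingSafeEqual(a, b);
}
```

A naive `presented === expected` check can return early at the first mismatched character, which is what a timing attack measures.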
Example Request (API key enabled):
curl -X POST "http://127.0.0.1:53000/mcp" \
-H "Content-Type: application/json" \
-H "X-Synapse-Client: mcp" \
-H "X-API-Key: your-secret-key-here" \
-d '{"jsonrpc":"2.0","id":1,"method":"tools/list","params":{}}'Example Request (local/dev, no API key configured):
unset SYNAPSE_API_KEY
node dist/index.js --transport http
curl -X POST "http://127.0.0.1:53000/mcp" \
-H "Content-Type: application/json" \
-H "X-Synapse-Client: mcp" \
-d '{"jsonrpc":"2.0","id":1,"method":"tools/list","params":{}}'Command Allowlist (CWE-78)
Scout exec operations are restricted to a curated allowlist of read-only commands:
Allowed commands: df, uptime, hostname, uname, ps, free, top, htop, netstat, ss, lsof, systemctl status, journalctl, dmesg, tail, cat, grep (and more - see src/config/command-allowlist.json)
Security guarantees:
No destructive operations allowed
Shell argument escaping prevents injection
No environment variable bypass available
All commands validated before execution
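The guarantees above combine an allowlist check with shell-free execution. A minimal sketch, assuming a small inlined allowlist (the real list lives in src/config/command-allowlist.json, and the function names here are ours):

```typescript
import { execFile } from "node:child_process";

// Illustrative subset of the allowlist.
const ALLOWED = new Set(["df", "uptime", "hostname", "uname", "ps", "free"]);

// The first whitespace-separated token is the binary; it must be allowlisted.
function isAllowed(command: string): boolean {
  const [binary] = command.trim().split(/\s+/);
  return ALLOWED.has(binary);
}

// execFile takes an argv array and spawns without a shell, so metacharacters
// in arguments ($(), ;, |) are passed literally and cannot inject commands.
function runAllowed(command: string, done: (out: string) => void): void {
  if (!isAllowed(command)) {
    throw new Error(`Command '${command.trim().split(/\s+/)[0]}' not in allowed list`);
  }
  const [binary, ...args] = command.trim().split(/\s+/);
  execFile(binary, args, (err, stdout) => done(err ? "" : stdout));
}
```

Because validation happens before any process is spawned, a rejected command never reaches the remote host.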
Path Traversal Protection (CWE-22)
The image_build tool implements strict path validation to prevent directory traversal attacks:
- Absolute paths required: all paths (context, dockerfile) must start with /
- Traversal blocked: paths containing .. or . components are rejected
- Character validation: only alphanumerics, dots (in filenames), hyphens, underscores, and forward slashes are allowed
- Pre-execution validation: paths are validated before SSH commands are executed
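The rules above can be expressed as a small validator. This is a sketch under the stated rules, not the tool's actual implementation (the function and regex names are ours):

```typescript
// Restricted character set: alphanumerics, dots, hyphens, underscores, slashes.
const SAFE_PATH = /^\/[A-Za-z0-9._\/-]*$/;

function isSafeBuildPath(p: string): boolean {
  if (!p.startsWith("/")) return false;   // absolute paths only
  if (!SAFE_PATH.test(p)) return false;   // restricted characters
  // Reject any "." or ".." path component (directory traversal).
  return !p.split("/").some((c) => c === "." || c === "..");
}
```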
Example of rejected and accepted paths:

```bash
# Rejected: directory traversal
../../../etc/passwd
/app/../../../etc/passwd

# Rejected: relative paths
./build
relative/path

# Accepted: absolute paths without traversal
/home/user/docker/build
/opt/myapp/Dockerfile.prod
```

General Security Notes
Docker API on port 2375 is insecure without TLS
Always use execFile for shell commands (prevents injection)
Validate host config fields with regex
Require force=true for destructive operations
Development
```bash
# Watch mode for development
pnpm run dev

# Build
pnpm run build

# Run tests
pnpm test

# Run tests with coverage
pnpm run test:coverage

# Run performance benchmarks (opt-in)
RUN_SSH_BENCHMARKS=true pnpm test src/services/ssh-pool.benchmark.test.ts
RUN_CACHE_BENCHMARKS=true pnpm test src/services/cache-layer.benchmark.test.ts

# Test with MCP Inspector
npx @modelcontextprotocol/inspector node dist/index.js
```

Architecture
Core Components
Event System (src/events/)
Type-safe EventEmitter with discriminated unions
Events: container_state_changed, cache_invalidated
Decouples cross-cutting concerns (cache invalidation, audit trail, metrics)
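A type-safe event map with a discriminated union might look like the following sketch. The event names match those listed above; the payload fields and class name are assumptions for illustration:

```typescript
// Discriminated union on `type`: each event carries its own payload shape.
type SynapseEvent =
  | { type: "container_state_changed"; containerId: string; state: string }
  | { type: "cache_invalidated"; cacheKey: string };

type Handler = (event: SynapseEvent) => void;

class TypedEmitter {
  private handlers = new Map<SynapseEvent["type"], Handler[]>();

  on(type: SynapseEvent["type"], handler: Handler): void {
    const list = this.handlers.get(type) ?? [];
    list.push(handler);
    this.handlers.set(type, list);
  }

  emit(event: SynapseEvent): void {
    // Only handlers registered for this event's type are invoked.
    for (const h of this.handlers.get(event.type) ?? []) h(event);
  }
}
```

Subscribers (cache invalidation, audit trail, metrics) register independently, so emitters never need to know who is listening.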
Lifecycle Management (src/services/container.ts)
State machine: uninitialized → initializing → ready → shutting_down → shutdown
Hooks: initialize(), healthCheck(), shutdown()
Graceful cleanup on process termination
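The state machine above admits only forward transitions, which can be captured in a small table (a sketch; the transition table and function name are ours, and hook bodies are omitted):

```typescript
type LifecycleState = "uninitialized" | "initializing" | "ready" | "shutting_down" | "shutdown";

// Each state lists the states it may move to; anything else is rejected.
const TRANSITIONS: Record<LifecycleState, LifecycleState[]> = {
  uninitialized: ["initializing"],
  initializing: ["ready"],
  ready: ["shutting_down"],
  shutting_down: ["shutdown"],
  shutdown: [],
};

function canTransition(from: LifecycleState, to: LifecycleState): boolean {
  return TRANSITIONS[from].includes(to);
}
```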
Tool Registry (src/tools/registry.ts)
Plugin-style tool registration
Zero modification required to add new tools
Declarative tool definitions in src/tools/definitions/
Formatter Strategy (src/formatters/strategy.ts)
IFormatter interface for output formats
Implementations: MarkdownFormatter, JSONFormatter
FormatterFactory for format selection
Open/Closed Principle: add formats without modifying handlers
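The strategy's shape can be sketched as below. The interface and class names mirror those listed above, but the method signature and the factory's form are assumptions, and the markdown output shown is purely illustrative:

```typescript
// Shared interface: every formatter turns data into a string.
interface IFormatter {
  format(data: unknown): string;
}

class JSONFormatter implements IFormatter {
  format(data: unknown): string {
    return JSON.stringify(data, null, 2);
  }
}

class MarkdownFormatter implements IFormatter {
  format(data: unknown): string {
    // Render each key/value pair as a bullet item.
    return Object.entries(data as Record<string, unknown>)
      .map(([k, v]) => `- **${k}**: ${String(v)}`)
      .join("\n");
  }
}

// Factory keyed on the requested format; adding a format means adding a
// class and a branch here, with no changes to the handlers that call it.
function formatterFor(format: "markdown" | "json"): IFormatter {
  return format === "json" ? new JSONFormatter() : new MarkdownFormatter();
}
```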
For detailed architecture documentation, see:
src/services/LIFECYCLE.md - lifecycle management guide
src/tools/EXTENDING.md - tool extension guide
src/formatters/EXTENDING.md - formatter extension guide
docs/HANDLERS.md - handler patterns and implementation guidance
docs/TRANSPORTS.md - transport options (stdio, HTTP, SSH stdio, Tailscale Serve)
Directory Structure
```
synapse-mcp/
├── src/
│   ├── index.ts                     # Entry point, transport setup
│   ├── types.ts                     # TypeScript interfaces
│   ├── constants.ts                 # Configuration constants
│   ├── config/
│   │   └── command-allowlist.json   # Allowed commands for scout:exec
│   ├── formatters/
│   │   ├── index.ts                 # Response formatting utilities
│   │   └── formatters.test.ts       # Formatter tests
│   ├── tools/
│   │   ├── index.ts                 # Tool registration router
│   │   ├── flux.ts                  # Flux tool handler + routing
│   │   ├── scout.ts                 # Scout tool handler + routing
│   │   ├── container.ts             # handleContainerAction()
│   │   ├── compose.ts               # handleComposeAction()
│   │   ├── docker.ts                # handleDockerAction()
│   │   └── host.ts                  # handleHostAction()
│   ├── services/
│   │   ├── docker.ts                # DockerService
│   │   ├── compose.ts               # ComposeService
│   │   ├── ssh.ts                   # SSHService
│   │   └── scout/                   # Scout-specific services
│   │       ├── pool.ts              # SSH connection pool
│   │       ├── executors.ts         # Command execution
│   │       └── transfer.ts          # File transfer (beam)
│   ├── schemas/
│   │   ├── index.ts                 # FluxSchema + ScoutSchema exports
│   │   ├── common.ts                # Shared schemas (pagination, response_format)
│   │   ├── container.ts             # Container subaction schemas
│   │   ├── compose.ts               # Compose subaction schemas
│   │   ├── docker.ts                # Docker subaction schemas
│   │   ├── host.ts                  # Host subaction schemas
│   │   └── scout.ts                 # Scout action schemas
│   └── lint.test.ts                 # Linting tests
├── dist/                            # Compiled JavaScript
├── package.json
├── tsconfig.json
└── README.md
```

Key Architectural Decisions
V3 Schema Refactor - Two Tools Pattern:
Flux: 5 actions (help, container, compose, docker, host) with 41 total subactions
Scout: 11 actions (9 simple + 2 with subactions) for 16 total operations
Clean separation: Flux = Docker/state changes, Scout = SSH/read operations
Total: 57 operations across both tools
Discriminated Union for O(1) Validation:
Flux: action + subaction fields with per-action nested discriminated unions
Scout: primary action discriminator with nested discriminators for zfs and logs
Validation latency: <0.005ms average across all operations
Zero performance degradation regardless of which operation is called
Help System:
Auto-generated help handlers for both tools
Introspects Zod schemas using .describe() metadata
Supports topic-specific help (e.g., flux help container:logs)
Available in markdown or JSON format
SSH Connection Pooling:
50× faster for repeated operations
Automatic idle timeout and health checks
Configurable pool size and connection reuse
Transparent integration (no code changes required)
Test Coverage:
Unit tests for all services, schemas, and tools
Integration tests for end-to-end workflows
Performance benchmarks for schema validation
TDD approach for all new features
Performance
Schema Validation
Both Flux and Scout tools use Zod discriminated unions for constant-time schema dispatch:
Validation latency: <0.005ms average across all operations
Flux optimization: action + subaction with nested subaction discriminators
Scout optimization: primary action discriminator with nested discriminators for zfs/logs
Consistency: all operations perform identically fast (no worst-case scenarios)
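Why discriminated-union dispatch is constant-time: the `action` value indexes directly into a map of per-action validators, so validation cost does not grow with the number of operations. The sketch below is a dependency-free stand-in for what Zod's `z.discriminatedUnion` does internally; the validator bodies are simplified placeholders, not the real schemas:

```typescript
type Validator = (input: Record<string, unknown>) => boolean;

// One entry per action; Zod builds an equivalent map from the union members.
const validators = new Map<string, Validator>([
  ["container", (i) => typeof i.subaction === "string"],
  ["compose", (i) => typeof i.subaction === "string"],
  ["help", () => true],
]);

function validate(input: Record<string, unknown>): boolean {
  // Single hash lookup, then one validator run: O(1) regardless of how
  // many actions exist, with no sequential trial of alternatives.
  const v = validators.get(String(input.action));
  return v ? v(input) : false;
}
```

A plain (non-discriminated) union would instead try each member schema in turn, making the worst case scale with the number of operations.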
SSH Connection Pooling
All SSH operations use connection pooling for optimal performance:
50× faster for repeated operations
Connections reused across compose operations
Automatic idle timeout and health checks
Configurable via environment variables
See docs/ssh-connection-pooling.md for details.
Key Benefits:
Eliminate 250ms connection overhead per operation
Support high-concurrency scenarios (configurable pool size)
Automatic connection cleanup and health monitoring
Zero code changes required (transparent integration)
Benchmarks
Run performance benchmarks:
```bash
npm run test:bench
```

Expected results:
Worst-case validation: <0.005ms (0.003ms typical)
Average-case validation: <0.005ms (0.003ms typical)
Performance variance: <0.001ms (proves O(1) consistency)
Troubleshooting
Common Issues
"Cannot connect to Docker socket"
Symptoms:

```
Error: connect EACCES /var/run/docker.sock
Error: connect ENOENT /var/run/docker.sock
```

Solutions:

Permissions - add your user to the docker group:

```bash
sudo usermod -aG docker $USER
newgrp docker  # Apply without logout
```

Socket path - check that the Docker socket exists:

```bash
ls -la /var/run/docker.sock
# If not found, Docker may not be installed or running
sudo systemctl status docker
```

Docker not running - start the Docker daemon:

```bash
sudo systemctl start docker
sudo systemctl enable docker  # Start on boot
```
"SSH connection failed" / "All configured authentication methods failed"
Symptoms:

```
Error: HostOperationError: SSH connection failed
```

Operations time out on remote hosts.

Solutions:

Test SSH manually - verify SSH access works:

```bash
ssh -i ~/.ssh/id_rsa user@hostname
# Should connect without a password prompt
```

Check SSH key permissions - keys must not be world-readable:

```bash
chmod 600 ~/.ssh/id_rsa
chmod 644 ~/.ssh/id_rsa.pub
```

Verify the host config - ensure sshUser and sshKeyPath are correct:

```json
{
  "name": "remote",
  "host": "192.168.1.100",
  "protocol": "ssh",
  "sshUser": "admin",
  "sshKeyPath": "~/.ssh/id_rsa"  // or absolute: "/home/user/.ssh/id_rsa"
}
```

SSH agent - if using an SSH agent, ensure the key is loaded:

```bash
ssh-add -l             # List loaded keys
ssh-add ~/.ssh/id_rsa  # Add if missing
```
"Docker Compose project not found"
Symptoms:

```
Error: No Docker Compose projects found on host
```

Expected projects don't appear in listings.

Solutions:

Check search paths - add custom compose paths to the config:

```json
{
  "name": "myhost",
  "composeSearchPaths": ["/opt/appdata", "/mnt/docker", "/home/user/compose"]
}
```

Verify compose files exist - SSH to the host and check:

```bash
find /path/to/search -name "docker-compose.y*ml" -o -name "compose.y*ml"
```

Force a cache refresh - use the refresh subaction:

```json
{ "action": "compose", "subaction": "refresh", "host": "myhost" }
```

Check file permissions - ensure compose files are readable:

```bash
ls -la /path/to/docker-compose.yml
# Should show -rw-r--r-- (at minimum)
```
"Operation timed out"
Symptoms:
Requests hang for 30+ seconds then fail
Stats collection fails intermittently
Solutions:

Check host connectivity - test network latency:

```bash
ping -c 4 hostname
# Should show <50ms latency on a local network
```

Docker daemon responsiveness - check whether Docker is overloaded:

```bash
ssh user@host "docker ps"
# Should respond in <1s
```

Reduce parallelism - query hosts sequentially instead of "all":

```json
{ "action": "container", "subaction": "list", "host": "specific-host" }
```
"Command not in allowed list"
Symptoms:

```
Error: Command 'X' not in allowed list
```

Container exec or Scout commands fail.

Solutions:

Use allowed commands only - check the allowlist:

```js
// Allowed commands (src/constants.ts):
[
  "docker", "docker-compose", "systemctl",
  "cat", "head", "tail", "grep", "rg", "find", "ls", "tree",
  "wc", "sort", "uniq", "diff", "stat", "file", "du", "df",
  "pwd", "hostname", "uptime", "whoami"
]
```

Development bypass - for testing only (NOT production):

```bash
export NODE_ENV=development
export SYNAPSE_ALLOW_ANY_COMMAND=true
# Restart the server
```

Request an addition - if a command is needed, open an issue with:
Command name and purpose
Security justification
Example use cases
Diagnostic Steps
1. Check MCP Server Logs
```bash
# If running via stdio
tail -f ~/.mcp/logs/synapse-mcp.log

# If running via HTTP
curl http://localhost:53001/health
```

2. Test Host Configuration

```bash
# Test Docker API connection
docker -H ssh://user@host ps

# Test SSH command execution
ssh user@host "docker ps"

# Test Docker Compose
ssh user@host "docker compose -f /path/to/compose.yml ps"
```

3. Validate Configuration

```bash
# Check config file syntax
cat ~/.config/synapse-mcp/config.json | jq .

# Verify paths exist
ls -la ~/.ssh/id_rsa
ls -la /var/run/docker.sock
```

4. Enable Debug Logging

```bash
# Set the environment variable
export DEBUG=synapse:*

# Or for specific modules
export DEBUG=synapse:ssh,synapse:docker

# Restart the server to see detailed logs
```

Recovery Procedures
Service Down
Check the process - ensure the server is running:

```bash
ps aux | grep synapse-mcp
```

Restart the server - restart via your MCP client (Claude Code):

- Restart the Claude Code application
- Or restart the specific MCP server from settings

Check the config - validate the JSON syntax:

```bash
jq . ~/.config/synapse-mcp/config.json
# Should output formatted JSON (no errors)
```
High Error Rate
Check Docker health - on all hosts:

```bash
for host in host1 host2 host3; do
  ssh user@$host "docker info"
done
```

Reduce load - limit concurrent operations:

- Use a specific host instead of "all"
- Add delays between operations
- Reduce pagination limits

Clear the cache - force an SSH connection pool reset:

```bash
# Restart the server (closes all connections)
# Connections will be recreated on demand
```
Rollback Procedure
If an update causes issues:
Check the version - note the current version:

```bash
cd ~/path/to/synapse-mcp
git log --oneline -1
```

Rollback - revert to the previous working version:

```bash
git log --oneline -10  # Find the last working commit
git checkout <commit-hash>
pnpm install
pnpm run build
```

Restart - restart the MCP server in Claude Code.
Report issue - Open GitHub issue with:
Version that failed
Error messages
Steps to reproduce
Getting Help
GitHub Issues: https://github.com/anthropics/claude-code/issues
Documentation: see the .docs/ directory for detailed architecture docs
Transport setup: see docs/TRANSPORTS.md for secure local/remote connection patterns
Logs: check ~/.mcp/logs/ for detailed error traces
License
MIT