Cluster Execution MCP Server

by marc-shade

Server Configuration

Describes the environment variables required to run the server.

| Name | Required | Description | Default |
|------|----------|-------------|---------|
| CLUSTER_DNS | No | DNS server for IP detection | 8.8.8.8 |
| CLUSTER_GATEWAY | No | Gateway IP for route detection | 192.168.1.1 |
| CLUSTER_SSH_USER | No | SSH username for remote execution | marc |
| AGENTIC_SYSTEM_PATH | No | Base path for databases | /mnt/agentic-system |
| CLUSTER_CMD_TIMEOUT | No | Command execution timeout (seconds) | 300 |
| CLUSTER_MACPRO51_IP | No | Mac Pro fallback IP | 192.168.1.183 |
| CLUSTER_SSH_RETRIES | No | Number of SSH retry attempts | 2 |
| CLUSTER_SSH_TIMEOUT | No | SSH connection timeout (seconds) | 5 |
| CLUSTER_INFERENCE_IP | No | Inference node fallback IP | 192.168.1.186 |
| CLUSTER_IP_CACHE_TTL | No | IP resolution cache TTL (seconds) | 300 |
| CLUSTER_MACSTUDIO_IP | No | Mac Studio fallback IP | 192.168.1.16 |
| CLUSTER_CPU_THRESHOLD | No | CPU usage % threshold for offloading | 40 |
| CLUSTER_MACBOOKAIR_IP | No | MacBook Air fallback IP | 192.168.1.172 |
| CLUSTER_MACPRO51_HOST | No | Mac Pro hostname | macpro51.local |
| CLUSTER_INFERENCE_HOST | No | Inference node hostname | completeu-server.local |
| CLUSTER_LOAD_THRESHOLD | No | Load average threshold for offloading | 4 |
| CLUSTER_MACSTUDIO_HOST | No | Mac Studio hostname | Marcs-Mac-Studio.local |
| CLUSTER_STATUS_TIMEOUT | No | Status check timeout (seconds) | 5 |
| CLUSTER_MACBOOKAIR_HOST | No | MacBook Air hostname | Marcs-MacBook-Air.local |
| CLUSTER_MEMORY_THRESHOLD | No | Memory usage % threshold for offloading | 80 |
| CLUSTER_SSH_CONNECT_TIMEOUT | No | Initial SSH connect timeout (seconds) | 2 |
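Since every variable is optional, configuration can be resolved as "environment value, else documented default". A minimal sketch of that lookup, using a handful of the variables above (the `cluster_setting` helper is illustrative, not part of the server):

```python
import os

# Defaults taken from the configuration table above; every variable is optional.
DEFAULTS = {
    "CLUSTER_SSH_USER": "marc",
    "CLUSTER_CMD_TIMEOUT": "300",
    "CLUSTER_SSH_TIMEOUT": "5",
    "CLUSTER_CPU_THRESHOLD": "40",
    "CLUSTER_MEMORY_THRESHOLD": "80",
    "CLUSTER_LOAD_THRESHOLD": "4",
}

def cluster_setting(name: str) -> str:
    """Return the environment value if set, otherwise the documented default."""
    return os.environ.get(name, DEFAULTS[name])

# Override one timeout; everything else falls back to its default.
os.environ["CLUSTER_CMD_TIMEOUT"] = "600"
print(cluster_setting("CLUSTER_CMD_TIMEOUT"))  # 600
print(cluster_setting("CLUSTER_SSH_USER"))     # marc
```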

Capabilities

Features and capabilities supported by this server

| Capability | Details |
|------------|---------|
| tools | `{"listChanged": false}` |
| prompts | `{"listChanged": false}` |
| resources | `{"subscribe": false, "listChanged": false}` |
| experimental | `{}` |

Tools

Functions exposed to the LLM to take actions

cluster_bash

Execute a bash command with automatic cluster routing.

Commands are automatically routed to optimal nodes based on:

  • Current cluster load (CPU, memory, load average)

  • Command characteristics (build/test/compile patterns)

  • Node capabilities (OS, architecture)

Heavy commands (make, cargo, pytest, docker, etc.) are automatically offloaded. Simple commands (ls, cat, echo) run locally for speed.

Parameters:

  • command (required): Bash command to execute

  • requires_os (optional): Force specific OS (linux/darwin)

  • requires_arch (optional): Force specific architecture (x86_64/arm64)

  • auto_route (optional): Enable auto-routing (default: true)

Returns execution result with node info and output.
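The heavy/simple split described above can be sketched as a pattern check on the command's first word. The exact pattern list the server uses is not documented, so the sets below only cover the examples it names:

```python
import shlex

# Patterns named in the tool description; the server's full list is an assumption.
HEAVY = {"make", "cargo", "pytest", "docker"}
SIMPLE = {"ls", "cat", "echo"}

def should_offload(command: str) -> bool:
    """True if the command's executable matches a heavy build/test pattern."""
    executable = shlex.split(command)[0]
    return executable in HEAVY

print(should_offload("cargo build --release"))  # True  -> offloaded
print(should_offload("ls -la"))                 # False -> runs locally
```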

cluster_status

Get current cluster status and load distribution.

Shows real-time metrics for all cluster nodes:

  • CPU usage percentage

  • Memory usage percentage

  • 1-minute load average

  • Active task count

  • Health status (healthy/overloaded)

  • Reachability

Use this to:

  • Check cluster health before heavy operations

  • Determine optimal node for manual routing

  • Debug cluster connectivity issues

  • Monitor distributed execution

Returns JSON with status for each node.
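One plausible way the healthy/overloaded status follows from the threshold settings is "overloaded if any threshold is exceeded". That combination rule is an assumption; the threshold defaults come from the configuration table:

```python
def node_health(cpu_pct: float, mem_pct: float, load_1m: float,
                cpu_max: float = 40, mem_max: float = 80,
                load_max: float = 4) -> str:
    """Classify a node using the CLUSTER_*_THRESHOLD defaults.
    Rule (assumed): any single threshold exceeded => overloaded."""
    if cpu_pct > cpu_max or mem_pct > mem_max or load_1m > load_max:
        return "overloaded"
    return "healthy"

print(node_health(25, 60, 1.2))  # healthy
print(node_health(55, 60, 1.2))  # overloaded (CPU above 40%)
```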

offload_to

Explicitly route a command to a specific cluster node.

Use when you need to:

  • Run Linux-specific commands -> offload to macpro51

  • Test on specific architecture

  • Balance load manually

  • Debug node-specific issues

Available nodes:

  • macpro51: Linux x86_64 builder (docker, podman, compilation)

  • mac-studio: macOS ARM64 orchestrator

  • macbook-air: macOS ARM64 researcher

Parameters:

  • command (required): Bash command to execute

  • node_id (required): Target node ID

Returns execution result from specified node.
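Over MCP, this tool is invoked with a standard `tools/call` request carrying the two parameters above. A hedged example payload (the command and request id are made up; routing a Linux-only docker build to macpro51):

```python
import json

# JSON-RPC request for the offload_to tool, per the MCP tools/call shape.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "offload_to",
        "arguments": {
            "command": "docker build -t myapp .",  # illustrative command
            "node_id": "macpro51",                 # Linux x86_64 builder
        },
    },
}
print(json.dumps(request, indent=2))
```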

parallel_execute

Execute multiple commands in parallel across the cluster.

Distributes commands across available nodes for maximum parallelism. Use for:

  • Running test suites across multiple files

  • Parallel builds

  • Batch processing

  • Load testing

Commands are automatically distributed based on node availability and load.

Parameters:

  • commands (required): List of bash commands to execute in parallel

Returns list of results, one per command, with execution details.
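As a simplified sketch, distributing a batch of commands across the three nodes can be pictured as round-robin assignment. The real scheduler also weighs current load, which this sketch deliberately omits:

```python
from itertools import cycle

NODES = ["macpro51", "mac-studio", "macbook-air"]

def distribute(commands: list[str], nodes: list[str] = NODES) -> list[tuple[str, str]]:
    """Pair each command with a node round-robin (load-awareness omitted)."""
    return list(zip(commands, cycle(nodes)))

plan = distribute([
    "pytest tests/a.py",
    "pytest tests/b.py",
    "pytest tests/c.py",
    "pytest tests/d.py",
])
for cmd, node in plan:
    print(f"{node} -> {cmd}")
```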

Prompts

Interactive templates invoked by user choice


No prompts

Resources

Contextual data attached and managed by the client


No resources

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/marc-shade/cluster-execution-mcp'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.