Server Configuration

Environment variables used to configure the server. All are optional.

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| NEXUS_CLAUDE_MODEL | No | Default model for the Claude runner | (none) |
| NEXUS_CLAUDE_MODELS | No | Comma-separated model list for the Claude runner | (none) |
| NEXUS_CODEX_MODEL | No | Default model for the Codex runner | (none) |
| NEXUS_CODEX_MODELS | No | Comma-separated model list for the Codex runner | (none) |
| NEXUS_GEMINI_MODEL | No | Default model for the Gemini runner | (none) |
| NEXUS_GEMINI_MODELS | No | Comma-separated model list for the Gemini runner | (none) |
| NEXUS_OPENCODE_MODEL | No | Default model for the OpenCode runner | (none) |
| NEXUS_OPENCODE_MODELS | No | Comma-separated model list for the OpenCode runner | (none) |
| NEXUS_EXECUTION_MODE | No | Global execution mode (`default` or `yolo`) | `default` |
| NEXUS_TIMEOUT_SECONDS | No | Subprocess timeout in seconds (10 minutes) | `600` |
| NEXUS_TOOL_TIMEOUT_SECONDS | No | Tool-level timeout in seconds (15 minutes); set to `0` to disable | `900` |
| NEXUS_CLI_DETECTION_TIMEOUT | No | Timeout in seconds for CLI binary version detection at startup | `30` |
| NEXUS_RETRY_MAX_ATTEMPTS | No | Max attempts including the first (set to `1` to disable retries) | `3` |
| NEXUS_RETRY_BASE_DELAY | No | Base seconds for exponential backoff | `2.0` |
| NEXUS_RETRY_MAX_DELAY | No | Maximum seconds to wait between retries | `60.0` |
| NEXUS_OUTPUT_LIMIT_BYTES | No | Max output size in bytes before temp-file spillover | `50000` |
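The numeric retry, timeout, and output settings can be read straight from the environment with the documented defaults. A minimal sketch (the helper name and dict shape are illustrative, not the server's actual config loader):

```python
import os

def load_retry_config(env=os.environ):
    # Documented defaults: 3 attempts, 2.0s base delay, 60.0s cap,
    # 600s subprocess timeout, 50000-byte output limit.
    return {
        "max_attempts": int(env.get("NEXUS_RETRY_MAX_ATTEMPTS", "3")),
        "base_delay": float(env.get("NEXUS_RETRY_BASE_DELAY", "2.0")),
        "max_delay": float(env.get("NEXUS_RETRY_MAX_DELAY", "60.0")),
        "timeout": int(env.get("NEXUS_TIMEOUT_SECONDS", "600")),
        "output_limit": int(env.get("NEXUS_OUTPUT_LIMIT_BYTES", "50000")),
    }

load_retry_config({})  # empty env -> all documented defaults
```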

Capabilities

Features and capabilities supported by this server

tasks

```json
{
  "list": {},
  "cancel": {},
  "requests": {
    "tools": {
      "call": {}
    },
    "prompts": {
      "get": {}
    },
    "resources": {
      "read": {}
    }
  }
}
```

tools

```json
{
  "listChanged": true
}
```

prompts

```json
{
  "listChanged": false
}
```

resources

```json
{
  "subscribe": false,
  "listChanged": false
}
```

experimental

```json
{}
```

Tools

Functions exposed to the LLM to take actions

batch_prompt

Send multiple prompts to CLI runners in parallel (primary tool).

Fans out tasks server-side with asyncio.gather and a semaphore, enabling true parallel runner execution within a single MCP call. Single-task usage is also valid; use prompt for convenience when sending one task.

Args:
- tasks: List of AgentTask objects, each with cli, prompt, and optional fields.
- max_concurrency: Max parallel runner invocations (default: 3).
- ctx: MCP context (auto-injected by FastMCP); None when called directly in tests.

Returns: MultiPromptResponse with results for each task.
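The fan-out described above (asyncio.gather bounded by a semaphore) can be sketched as follows; run_one is a stand-in for the real runner invocation, and the result shape here is illustrative, not the actual MultiPromptResponse:

```python
import asyncio

async def fan_out(tasks, max_concurrency=3):
    # Bound concurrent runner invocations with a semaphore, then gather
    # all results in task order within a single call.
    sem = asyncio.Semaphore(max_concurrency)

    async def run_one(task):
        async with sem:
            await asyncio.sleep(0)  # stand-in for spawning the CLI runner
            return {"cli": task["cli"], "status": "ok"}

    return await asyncio.gather(*(run_one(t) for t in tasks))

results = asyncio.run(fan_out([
    {"cli": "gemini", "prompt": "hi"},
    {"cli": "claude", "prompt": "hi"},
]))
```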

prompt

Send a prompt to a CLI runner as a background task.

Returns immediately with a task ID. Client polls for results. This prevents timeouts for long operations (YOLO mode: 2-5 minutes).

Args:
- cli: CLI runner name (e.g., "gemini")
- prompt: Prompt text to send to the runner
- context: Optional context metadata
- execution_mode: 'default' (safe) or 'yolo'; None inherits session preference.
- model: Optional model name; None inherits session preference or uses the CLI default.
- max_retries: Max retry attempts for transient errors; None inherits session preference.
- output_limit: Max output bytes; None inherits session preference or uses the env default.
- timeout: Subprocess timeout in seconds; None inherits session preference or uses the env default.
- retry_base_delay: Base delay in seconds for exponential backoff; None inherits session preference or config.
- retry_max_delay: Backoff ceiling in seconds; None inherits session preference or config.
- ctx: MCP context (auto-injected by FastMCP); None when called directly in tests.

Returns: Runner's response text
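The retry_base_delay, retry_max_delay, and max-attempts parameters suggest the usual capped exponential backoff; a minimal sketch under that assumption (the server's exact schedule may differ):

```python
def backoff_delays(max_attempts=3, base_delay=2.0, max_delay=60.0):
    # Delay before retry n is base_delay * 2**n, capped at max_delay.
    # max_attempts includes the first try, so there are max_attempts - 1 retries.
    return [min(base_delay * (2 ** n), max_delay) for n in range(max_attempts - 1)]

backoff_delays()              # [2.0, 4.0]
backoff_delays(6, 2.0, 10.0)  # [2.0, 4.0, 8.0, 10.0, 10.0]
backoff_delays(1)             # []  (retries disabled)
```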

set_preferences

Set session-scoped preferences that apply to subsequent prompt/batch_prompt calls.

Preferences persist for the duration of the MCP session. Call again to update, or use clear_preferences to reset all fields at once.

To clear a single field while keeping others, pass the corresponding clear_* flag:

```python
set_preferences(clear_model=True)  # clears model, keeps execution_mode
```

Args:
- execution_mode: Default execution mode for this session ('default' or 'yolo'). None retains the current session value; use clear_execution_mode=True to reset.
- model: Default model name for this session (e.g. 'gemini-2.5-flash'). None retains the current session value; use clear_model=True to reset.
- max_retries: Default max retry attempts for transient errors. None retains the current session value; use clear_max_retries=True to reset.
- output_limit: Default max output bytes per response. None retains the current session value; use clear_output_limit=True to reset.
- timeout: Default subprocess timeout in seconds. None retains the current session value; use clear_timeout=True to reset.
- retry_base_delay: Default base delay in seconds for exponential backoff. None retains the current session value; use clear_retry_base_delay=True to reset.
- retry_max_delay: Default max delay cap in seconds for exponential backoff. None retains the current session value; use clear_retry_max_delay=True to reset.
- clear_execution_mode: If True, resets execution_mode to None regardless of the execution_mode argument.
- clear_model: If True, resets model to None regardless of the model argument.
- clear_max_retries: If True, resets max_retries to None regardless of the argument.
- clear_output_limit: If True, resets output_limit to None regardless of the argument.
- clear_timeout: If True, resets timeout to None regardless of the argument.
- clear_retry_base_delay: If True, resets retry_base_delay to None.
- clear_retry_max_delay: If True, resets retry_max_delay to None.
- ctx: MCP context (auto-injected by FastMCP).

Returns: Confirmation string with the active preferences as JSON.
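The inheritance rule repeated throughout these tools (explicit per-call argument, then session preference, then configured default) can be sketched as a small resolver; this is illustrative, not the server's actual code:

```python
def resolve(per_call, session_pref, default):
    # A per-call argument always wins; None falls back to the
    # session preference set via set_preferences, then to the
    # configured (env or built-in) default.
    if per_call is not None:
        return per_call
    if session_pref is not None:
        return session_pref
    return default

resolve(None, "yolo", "default")       # "yolo"   (session preference applies)
resolve("default", "yolo", "default")  # "default" (per-call wins)
resolve(None, None, 600)               # 600      (falls through to default)
```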

get_preferences

Return the current session preferences.

Returns: Dict with 'execution_mode', 'model', 'max_retries', 'output_limit', and 'timeout' keys (None when unset).

clear_preferences

Clear all session preferences, reverting to per-call defaults.

Returns: Confirmation string.

Prompts

Interactive templates invoked by user choice

No prompts

Resources

Contextual data attached and managed by the client

No resources

MCP directory API

We provide all the information about MCP servers via our MCP API.

```shell
curl -X GET 'https://glama.ai/api/mcp/v1/servers/j7an/nexus-mcp'
```

If you have feedback or need assistance with the MCP directory API, please join our Discord server.