
Server Configuration

Describes the environment variables required to run the server.

Name              | Required | Description                                                      | Default
CODEX_CLI_NAME    | No       | Overrides the command name or absolute path of the Codex CLI    | -
FORGE_CLI_NAME    | No       | Overrides the command name or absolute path of the Forge CLI    | -
CLAUDE_CLI_NAME   | No       | Overrides the command name or absolute path of the Claude CLI   | -
GEMINI_CLI_NAME   | No       | Overrides the command name or absolute path of the Gemini CLI   | -
OPENCODE_CLI_NAME | No       | Overrides the command name or absolute path of the OpenCode CLI | -
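
All five overrides are optional; when unset, the server presumably resolves each CLI by its default command name on PATH. A minimal sketch of passing overrides at launch time with the MCP TypeScript SDK; the launch command (npx agent-bridge-mcp) and the override values are assumptions, not part of this listing:

  import { Client } from "@modelcontextprotocol/sdk/client/index.js";
  import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

  // Spawn the server with CLI overrides in its environment.
  // The command/args used to launch the server are assumptions.
  const transport = new StdioClientTransport({
    command: "npx",
    args: ["agent-bridge-mcp"],
    env: {
      PATH: process.env.PATH ?? "",            // the spawned server still needs PATH
      CODEX_CLI_NAME: "/usr/local/bin/codex",  // absolute-path override (example value)
      CLAUDE_CLI_NAME: "claude-nightly",       // command-name override (example value)
    },
  });

  const client = new Client({ name: "example-client", version: "1.0.0" });
  await client.connect(transport);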

Capabilities

Features and capabilities supported by this server

Capability | Details
tools      | {}

Tools

Functions exposed to the LLM to take actions

run

AI Agent Runner: Starts a Claude, Codex, Gemini, Forge, or OpenCode CLI process in the background and returns a PID immediately. Use list_processes and get_result to monitor progress.

• File ops: create, read, (fuzzy) edit, move, copy, delete, and list files; analyze/OCR images; file content analysis
• Code: generate / analyze / refactor / fix
• Git: stage ▸ commit ▸ push ▸ tag (any workflow)
• Terminal: run any CLI command or open URLs
• Web: search and summarize content on the fly
• Multi-step workflows & GitHub integration

IMPORTANT: This tool now returns immediately with a PID. Use the other tools to check status and get results; a usage sketch follows the prompt tips below.

Supported models: "claude-ultra", "codex-ultra", "gemini-ultra", "sonnet", "sonnet[1m]", "opus", "opusplan", "haiku", "gpt-5.4", "gpt-5.5", "gpt-5.4-mini", "gpt-5.3-codex", "gpt-5.3-codex-spark", "gpt-5.2", "gemini-2.5-pro", "gemini-2.5-flash", "gemini-3.1-pro-preview", "gemini-3-pro-preview", "gemini-3-flash-preview", "forge", "opencode", "oc-<provider/model>"

Prompt input: You must provide EITHER prompt (string) OR prompt_file (file path), but not both.

Prompt tips

  1. Be concise, explicit, and step-by-step for complex tasks.

  2. Check process status with list_processes.

  3. Get results with get_result using the returned PID.

  4. Kill long-running processes with kill_process if needed.
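
Putting the run ▸ monitor ▸ collect loop together, a hedged sketch continuing with the connected client from the configuration example above. The argument names model, prompt, and pid are inferred from the descriptions on this page and may differ from the server's actual input schema:

  // `client` is the connected MCP Client from the configuration sketch above.

  // 1. Start a background agent; `run` returns a PID immediately.
  //    Provide either `prompt` or `prompt_file`, never both.
  const started = await client.callTool({
    name: "run",
    arguments: { model: "sonnet", prompt: "Summarize the repository README." },
  });
  console.log(started.content); // the PID is reported in the response content

  // 2. Check what is running or already finished.
  const procs = await client.callTool({ name: "list_processes", arguments: {} });
  console.log(procs.content);

  // 3. Fetch output and status for a PID (the value 12345 is illustrative).
  const result = await client.callTool({
    name: "get_result",
    arguments: { pid: 12345 },
  });
  console.log(result.content);

Because run is non-blocking, several agents can be started back to back and collected later with wait.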

list_processes

List all running and completed AI agent processes. Returns a simple list with PID, agent type, and status for each process.

get_result

Get the current output and status of an AI agent process by PID. Defaults to a compact result shape; set verbose to true for full metadata and detailed parsed output.

wait

Wait for multiple AI agent processes to complete and return their results. Defaults to compact result items; set verbose to true for full metadata and detailed parsed output.
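
For fan-out workloads, a sketch of collecting several agents at once. The pids argument name is an assumption; verbose follows the description above:

  // Block until both agents finish, then return full metadata for each.
  // (PID values are illustrative.)
  const results = await client.callTool({
    name: "wait",
    arguments: { pids: [12345, 12346], verbose: true }, // verbose: full parsed output
  });
  console.log(results.content);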

peek

One-shot, short observation window for running child agents. Returns only the natural-language message events (and, optionally, normalized tool_call events) observed during this call; it is not a history API, not gapless streaming, and not stdout/stderr tailing. In v1, message extraction is supported for Codex, Claude, OpenCode, and Gemini, with best-effort extraction of Forge "Summary"/"Completed successfully" lines. Forge tool calls are low-precision Execute/Finished markers and never include command output; tool calls in general exclude raw tool output.
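
A sketch of sampling a running agent's messages without blocking on completion; the pid and include_tool_calls argument names are assumptions:

  // One-shot observation window: returns only message events (and, optionally,
  // normalized tool_call events) seen while this call is open. Not a history
  // API, and never raw stdout/stderr.
  const events = await client.callTool({
    name: "peek",
    arguments: { pid: 12345, include_tool_calls: true }, // PID is illustrative
  });
  console.log(events.content);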

kill_process

Terminate a running AI agent process by PID.

cleanup_processes

Remove all completed and failed processes from the process list to free up memory.

doctor

Check supported AI CLI binary availability and path resolution. Does not verify login state or terms acceptance.

models

List supported model names, model aliases, and dynamic backend discovery hints.
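
A preflight sketch combining the two diagnostics before dispatching work, again assuming the connected client from above; no arguments are documented for either tool:

  // Confirm which CLI binaries resolve (doctor does not check login state
  // or terms acceptance).
  const health = await client.callTool({ name: "doctor", arguments: {} });
  console.log(health.content);

  // Enumerate supported model names and aliases before picking one for `run`.
  const catalog = await client.callTool({ name: "models", arguments: {} });
  console.log(catalog.content);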

Prompts

Interactive templates invoked by user choice


No prompts

Resources

Contextual data attached and managed by the client


No resources


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/lailai258/agent-bridge-mcp'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.