Server Configuration

Describes the environment variables used to configure the server.

| Name | Required | Description | Default |
| ---- | -------- | ----------- | ------- |
| OXIDE_AUTO_START_WEB | No | Setting OXIDE_AUTO_START_WEB=true automatically starts the Web UI when the MCP server launches | false |
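An MCP client can set this variable when it spawns the server. The sketch below is a minimal example using the official MCP Python SDK over stdio; the launch command (`oxide-mcp`) is a placeholder, so substitute whatever command the server's own documentation specifies.

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

server_params = StdioServerParameters(
    command="oxide-mcp",                   # hypothetical launch command; check the server README
    env={"OXIDE_AUTO_START_WEB": "true"},  # auto-start the Web UI (defaults to false)
)

async def main() -> None:
    # Spawn the server process and open an MCP session over its stdio pipes.
    async with stdio_client(server_params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            print([t.name for t in tools.tools])

asyncio.run(main())
```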

Tools

Functions exposed to the LLM to take actions. A usage sketch follows the tool list below.

route_task
Intelligently route a task to the best LLM. Analyzes the task characteristics and automatically selects the most appropriate LLM service (Gemini for large codebases, Qwen for code review, etc.).
Args:
- prompt: Task description or query
- files: Optional list of file paths to include as context
- preferences: Optional routing preferences

analyze_parallel
Analyze a large codebase in parallel across multiple LLMs. Distributes files across multiple LLM services for faster analysis. Ideal for analyzing large codebases with 20+ files.
Args:
- directory: Directory path to analyze
- prompt: Analysis prompt/query
- num_workers: Number of parallel workers (default: 3)

list_services
Check the health and availability of all configured LLM services. Returns status information for all services, including:
- Service health (available/unavailable)
- Service type (CLI/HTTP)
- Routing rules configuration
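Continuing the connection sketch above, each tool can be invoked with `session.call_tool`. The argument names follow the Args lists in the tool descriptions; the prompts, file paths, and directory are purely illustrative.

```python
from mcp import ClientSession

async def run_tools(session: ClientSession) -> None:
    # Check which backend LLM services are healthy before routing any work.
    status = await session.call_tool("list_services", {})

    # Route a single task; `files` and `preferences` are optional.
    review = await session.call_tool(
        "route_task",
        {"prompt": "Review this module for error handling", "files": ["src/main.py"]},
    )

    # Fan a large codebase out across parallel workers for faster analysis.
    report = await session.call_tool(
        "analyze_parallel",
        {"directory": "./src", "prompt": "Summarize the architecture", "num_workers": 3},
    )
    print(status, review, report, sep="\n")
```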

Prompts

Interactive templates invoked by user choice


No prompts

Resources

Contextual data attached and managed by the client


No resources

MCP directory API

We provide all the information about MCP servers via our MCP API.

```
curl -X GET 'https://glama.ai/api/mcp/v1/servers/yayoboy/oxide'
```
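The same request in Python, as a minimal standard-library sketch; the shape of the returned JSON is not documented here, so it is simply pretty-printed.

```python
import json
import urllib.request

url = "https://glama.ai/api/mcp/v1/servers/yayoboy/oxide"

# Fetch the server's directory entry and pretty-print whatever JSON comes back.
with urllib.request.urlopen(url) as resp:
    print(json.dumps(json.load(resp), indent=2))
```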

If you have feedback or need assistance with the MCP directory API, please join our Discord server.