Glama

Server Configuration

Describes the environment variables required to run the server.

No arguments

Tools

Functions exposed to the LLM to take actions

spawn_colony
Spawn a new bug colony. Types: standard, fast, heavy, hybrid.
- standard: scout, worker, worker, memory
- fast: scout, scout, worker, worker (speed)
- heavy: memory, memory, learner, guardian (power)
- hybrid: all roles balanced
deploy_swarm
Deploy swarm for parallel task execution.
tasks: JSON array of {prompt, context?} objects
Example: [{"prompt": "analyze X"}, {"prompt": "summarize Y", "context": "..."}]
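The tasks argument travels as a JSON string. A minimal sketch of assembling it in Python, based on the field shapes in the listing above (the surrounding MCP client call is omitted):

```python
import json

# Tasks for deploy_swarm: each object needs a "prompt";
# "context" is optional background for that task.
tasks = [
    {"prompt": "analyze X"},
    {"prompt": "summarize Y", "context": "prior findings"},
]

# The tool expects the array serialized as a JSON string.
tasks_json = json.dumps(tasks)
print(tasks_json)
```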
quick_swarm
One-shot: spawn colony + deploy swarm + return results.
tasks: JSON array or single prompt string
list_colonies
List all active colonies with their bugs

colony_status
Get detailed status of a specific colony

quick_colony
Quick status check - ONE CALL for colony health.
Returns: verdict, issues, active colonies, stats
farm_stats
Get comprehensive farm statistics

dissolve_colony
Dissolve a colony and free resources

cleanup_idle
Remove all colonies with 0 tasks completed

code_review_swarm
Run parallel code review with 4 specialized perspectives. Pass code directly OR a filepath to read from disk.
Returns: security, performance, style, and refactoring analysis.
code_gen_swarm
Generate code from a spec with 4 parallel perspectives.
Returns: main code, tests, docstring, usage examples.
file_swarm
Execute parallel file operations.
operations: JSON array of {action, path, content?} objects
Actions: read, write, append, exists, delete
Example: [{"action": "write", "path": "/tmp/test.txt", "content": "hello"}]
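A sketch of validating an operations array client-side before calling file_swarm. The allowed-action set comes from the listing above; the validation helper itself is illustrative, not part of the server:

```python
import json

# Actions supported by file_swarm, per its description.
ALLOWED_ACTIONS = {"read", "write", "append", "exists", "delete"}

def validate(ops):
    # Every operation needs a known action and a path;
    # write/append additionally need content.
    for op in ops:
        if op["action"] not in ALLOWED_ACTIONS:
            raise ValueError(f"unknown action: {op['action']}")
        if "path" not in op:
            raise ValueError("missing path")
        if op["action"] in {"write", "append"} and "content" not in op:
            raise ValueError("write/append need content")

operations = [
    {"action": "write", "path": "/tmp/test.txt", "content": "hello"},
    {"action": "exists", "path": "/tmp/test.txt"},
]
validate(operations)
payload = json.dumps(operations)  # the tool takes a JSON string
```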
exec_swarm
Execute shell commands in parallel (with safety checks).
commands: JSON array of command strings
Example: ["ls -la", "pwd", "whoami"]
BLOCKED: rm -rf, sudo, dd, mkfs, chmod 777, etc.
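An illustrative client-side pre-check mirroring the blocked patterns named above. The server enforces its own safety list, which may be longer or stricter:

```python
# Substring patterns taken from the tool description; illustrative only.
BLOCKED_PATTERNS = ("rm -rf", "sudo", "dd ", "mkfs", "chmod 777")

def is_safe(command: str) -> bool:
    # Reject any command containing a blocked substring.
    return not any(pattern in command for pattern in BLOCKED_PATTERNS)

commands = ["ls -la", "pwd", "whoami"]
safe_commands = [c for c in commands if is_safe(c)]
print(safe_commands)  # all three pass the check
```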
api_swarm
Execute parallel HTTP API requests.
requests: JSON array of {url, method?, headers?, body?} objects
Example: [{"url": "https://api.example.com/data"}, {"url": "...", "method": "POST", "body": "{}"}]
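Only url is required in each request object; the other fields are optional. A sketch of filling in defaults before serializing — note that defaulting the method to GET is an assumption based on the "method?" marker, not something the listing states:

```python
import json

def normalize(request):
    # Assumed defaults for the optional fields; GET-by-default
    # is an inference from "method?", not documented behavior.
    return {"method": "GET", "headers": {}, "body": None, **request}

requests_spec = [
    {"url": "https://api.example.com/data"},
    {"url": "https://api.example.com/items", "method": "POST", "body": "{}"},
]
normalized = [normalize(r) for r in requests_spec]
payload = json.dumps(requests_spec)  # the tool takes the raw array as JSON
```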
kmkb_swarm
Query KMKB from multiple angles in parallel, optionally synthesize.
queries: JSON array of query strings OR single topic to auto-expand
Example: ["what is X?", "how does X work?", "examples of X"]
Or just: "Agent Farm" (auto-expands to multiple queries)
tool_swarm
Deploy tool-enabled agents that can use real system tools. Each bug role has different tool permissions:
- scout: read_file, list_dir, file_exists, system_status, process_list
- worker: read_file, write_file, exec_cmd, http_get, http_post
- memory: read_file, kmkb_search, kmkb_ask, list_dir
- guardian: system_status, process_list, disk_usage, check_service
- learner: read_file, analyze_code, list_dir, kmkb_search
tasks: JSON array of task objects:
- Standard: {"prompt": "Do something"}
- Write: {"path": "/file.txt", "content": "data..."} <-- DIRECT EXECUTE for long content
Example: [{"prompt": "Check system health"}, {"path": "/tmp/out.txt", "content": "results"}]
DIRECT EXECUTE: Write tasks with content >300 chars bypass the LLM entirely. Bugs can't reliably echo long content - we write directly instead.
deep: Enable deep work mode - bugs chain multiple tool calls for complex tasks
synthesize: If True, uses qwen2.5:14b to synthesize results into unified summary
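The direct-execute rule above can be sketched as a client-side predicate. The helper name is hypothetical; the 300-character threshold is from the tool's own description:

```python
# Hypothetical predicate: per the docs, a {"path", "content"} task
# whose content exceeds 300 chars is written directly to disk,
# bypassing the LLM entirely.
DIRECT_EXECUTE_THRESHOLD = 300

def is_direct_execute(task: dict) -> bool:
    return "path" in task and len(task.get("content", "")) > DIRECT_EXECUTE_THRESHOLD

tasks = [
    {"prompt": "Check system health"},            # standard task -> goes to a bug
    {"path": "/tmp/out.txt", "content": "x" * 500},  # long write -> direct execute
]
```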
system_health_swarm
Quick system health check using tool-enabled bugs. Deploys guardian bugs to check CPU, memory, disk, and services.
synthesize: If True (default), uses qwen2.5:14b to synthesize results into unified summary
recon_swarm
Reconnaissance swarm - scouts explore a directory/codebase. Uses scout bugs with read-only tools to map out a target.
target_path: Directory to explore
deep: Enable deep work mode (default False) - multi-iteration for complex analysis
synthesize: If True (default), uses qwen2.5:14b to synthesize findings into unified report
worker_task
Single worker bug with tools to complete a task. Worker has: read_file, write_file, exec_cmd, http_get, http_post
task: What to do
context: Optional context/background
heavy_write
Direct file write - NO LLM involved. Use for large content. Bugs can't reliably echo content >300 chars through the LLM; this tool writes directly to disk, bypassing the model entirely.
path: File path to write to
content: Content to write (any length)
Returns: success/error status and bytes written
deep_analysis_swarm
Deep analysis swarm using WORKERS with exec_cmd for thorough system analysis. Workers can run shell commands (find, du, grep, etc.) for real analysis. Use this for finding redundant files, cache sizes, disk usage, log files, etc.
target_path: Directory/path to analyze
analysis_type: 'full', 'redundant', 'sizes', 'cleanup'
synthesize: If True (default), uses qwen2.5:14b to synthesize findings
synthesize
Standalone synthesis tool - synthesize any JSON results into unified summary. Uses qwen2.5:14b for accuracy.
results: JSON array of result objects (with 'answer' or 'response' keys)
context: Optional context about what these results are from
Example: synthesize('[{"answer": "CPU at 5%"}, {"answer": "Memory at 20%"}]', "health check")
chunked_write
Generate large documents by having bugs write sections in parallel. Bypasses the long-content limitation by chunking work.
HOW IT WORKS:
1. Planner bug creates outline (structured JSON)
2. Worker bugs generate sections in PARALLEL
3. Python concatenates directly (NO LLM involved)
4. heavy_write saves result
output_path: Where to save the document
spec: What the document should be about
num_sections: How many sections (default 5, max 10)
doc_type: 'markdown', 'text', or 'code'
EXAMPLE: chunked_write("/tmp/report.md", "Analysis of Python best practices", 5)
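Steps 3 and 4 of the pipeline above (Python-side concatenation with no model in the loop, then a direct disk write) can be sketched as follows; the section strings are placeholders for what worker bugs would return:

```python
import os
import tempfile

# Placeholder sections, standing in for worker-bug output.
sections = ["# Introduction\n...", "# Findings\n...", "# Conclusion\n..."]

# Step 3: plain string concatenation -- no LLM involved.
document = "\n\n".join(sections)

# Step 4: stand-in for heavy_write -- a direct write to disk.
out_path = os.path.join(tempfile.gettempdir(), "report.md")
with open(out_path, "w") as f:
    bytes_written = f.write(document)
```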
chunked_code_gen
Generate code files by having bugs write functions in parallel. Each bug writes one function, Python assembles the file.
output_path: Where to save the code file
spec: What the code should do
language: 'python', 'javascript', 'bash', etc.
num_functions: How many functions to generate (default 4, max 8)
EXAMPLE: chunked_code_gen("/tmp/utils.py", "File utilities: read, write, copy, delete", "python", 4)
chunked_analysis
Analyze something from multiple perspectives in parallel. Each bug analyzes from a different angle, and results are synthesized.
target: What to analyze (file path, concept, code, etc.)
question: The analysis question
num_perspectives: How many different angles (default 4)
EXAMPLE: chunked_analysis("/home/kyle/repos/project", "What are the main architectural patterns?", 4)

Prompts

Interactive templates invoked by user choice

No prompts

Resources

Contextual data attached and managed by the client

No resources

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/BossX429/agent-farm'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.