Change the active LLM backend used for AI task routing. Specify a backend ID to switch between backends such as locally hosted models served through Ollama or llama.cpp, or hosted APIs like Gemini.
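As a sketch of how a client might trigger the switch, the snippet below uses the official MCP Python SDK. The launch command (`task-server`), the tool name (`set_llm_backend`), and its `backend_id` argument are assumptions for illustration, not the server's confirmed interface.

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

# Hypothetical launch command; substitute the server's real entry point.
SERVER = StdioServerParameters(command="task-server", args=["--stdio"])

async def switch_backend(backend_id: str) -> None:
    async with stdio_client(SERVER) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # Hypothetical tool and argument names for the backend switch.
            result = await session.call_tool(
                "set_llm_backend", arguments={"backend_id": backend_id}
            )
            print(result.content)

asyncio.run(switch_backend("ollama"))
```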
Execute multiple AI tasks in parallel across all available GPUs to process files, review code, summarize documents, and analyze content with maximum throughput.
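One plausible way to fan tasks out from a client is to issue several tool calls concurrently over a single session and let the server schedule them across its GPUs. The tool name `run_ai_task` and its arguments are illustrative assumptions; `session` is an initialized `mcp.ClientSession` as in the previous sketch.

```python
import asyncio

from mcp import ClientSession

async def run_batch(session: ClientSession, paths: list[str]):
    # Hypothetical tool and argument names; the server is assumed to
    # distribute the queued tasks across its available GPUs.
    calls = [
        session.call_tool(
            "run_ai_task", arguments={"action": "summarize", "path": p}
        )
        for p in paths
    ]
    # Issue all calls concurrently and collect the results in order.
    return await asyncio.gather(*calls)
```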
A Model Context Protocol (MCP) server for task management that works with Claude, Cursor, and other MCP clients, providing search, filtering, and organization capabilities across multiple file formats.
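Any MCP client can discover what the server exposes through the standard handshake; the sketch below lists the available tools and then invokes a search. The `list_tools` call is part of the MCP SDK, while the `search_tasks` tool name and its `query`/`format` arguments are assumptions about this server's interface.

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

# Hypothetical launch command for the task-management server.
SERVER = StdioServerParameters(command="task-server", args=["--stdio"])

async def explore() -> None:
    async with stdio_client(SERVER) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # Standard MCP discovery: list the tools the server exposes.
            tools = await session.list_tools()
            print([t.name for t in tools.tools])
            # Hypothetical search tool with a format filter.
            hits = await session.call_tool(
                "search_tasks",
                arguments={"query": "deadline", "format": "markdown"},
            )
            print(hits.content)

asyncio.run(explore())
```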