rlm_setup_ollama
Install and configure Ollama on macOS via Homebrew to enable local AI inference for processing large datasets with the Massive Context MCP server.
Instructions
Install Ollama via Homebrew (macOS).
Requires Homebrew to be installed beforehand. Uses `brew install` and `brew services`. Pros: automatic updates, pre-built binaries, and a managed background service. Cons: depends on Homebrew, and the initial Homebrew installation itself may prompt for sudo.

Args:
- `install`: Install Ollama via Homebrew (requires Homebrew).
- `start_service`: Start Ollama as a background service via `brew services`.
- `pull_model`: Pull the default model (`gemma3:12b`).
- `model`: Model to pull (default: `gemma3:12b`). Use `gemma3:4b` or `gemma3:1b` on systems with less RAM.

The shell commands these steps correspond to are sketched below.
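For reference, a minimal sketch of the equivalent manual setup, assuming Homebrew is already installed (the tool's actual internals are not shown in this document; these are the standard Homebrew and Ollama commands):

```bash
# Install Ollama as a pre-built Homebrew formula.
brew install ollama

# Start Ollama as a managed background service (restarts at login).
brew services start ollama

# Pull the default model; use gemma3:4b or gemma3:1b on lower-RAM systems.
ollama pull gemma3:12b
```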
Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| install | No | Install Ollama via Homebrew (requires Homebrew) | |
| start_service | No | Start Ollama as a background service via `brew services` | |
| pull_model | No | Pull the default model (`gemma3:12b`) | |
| model | No | Model to pull; use `gemma3:4b` or `gemma3:1b` for lower-RAM systems | `gemma3:12b` |
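After the service starts, setup can be verified from the shell. This is a hypothetical smoke test, not part of the tool itself; the port and endpoint are Ollama's defaults (`11434`, `/api/generate`), not anything documented here:

```bash
# Confirm the Homebrew-managed service is running.
brew services list | grep ollama

# List locally pulled models; gemma3:12b should appear after the pull.
ollama list

# Smoke-test local inference against Ollama's default HTTP API.
curl -s http://localhost:11434/api/generate \
  -d '{"model": "gemma3:12b", "prompt": "Hello", "stream": false}'
```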