rlm_setup_ollama_direct
Install and configure Ollama locally on macOS without Homebrew or sudo. The tool downloads Ollama directly to ~/Applications, works on locked-down systems, and pulls the default model for local inference.
Instructions
Install Ollama via direct download (macOS).
Downloads from ollama.com to ~/Applications.
PROS: No Homebrew needed, no sudo required, fully headless, works on locked-down machines.
CONS: Manual PATH setup, no auto-updates, and the server runs as a foreground process.
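For concreteness, here is a minimal Python sketch of what the install step amounts to. The download URL, the archive name, and the path to the CLI binary inside the app bundle are assumptions for illustration, not confirmed by this tool; verify them against ollama.com before relying on them.

```python
import os
import subprocess
import urllib.request

# Assumed download URL for the macOS build; verify against ollama.com.
OLLAMA_ZIP_URL = "https://ollama.com/download/Ollama-darwin.zip"
APPS_DIR = os.path.expanduser("~/Applications")

def install_ollama() -> str:
    """Download the Ollama app bundle and unpack it into ~/Applications."""
    os.makedirs(APPS_DIR, exist_ok=True)
    archive = os.path.join(APPS_DIR, "Ollama-darwin.zip")

    # Plain HTTPS download: no Homebrew, no sudo, writes only under $HOME.
    urllib.request.urlretrieve(OLLAMA_ZIP_URL, archive)

    # unzip (ships with macOS) preserves the executable bit, unlike
    # Python's zipfile.extractall.
    subprocess.run(["unzip", "-o", "-q", archive, "-d", APPS_DIR], check=True)
    os.remove(archive)

    # Assumed location of the CLI inside the app bundle; adding it to PATH
    # is left to the caller (the "manual PATH setup" con noted above).
    return os.path.join(APPS_DIR, "Ollama.app", "Contents", "Resources", "ollama")
```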
Args:
- install: Download and install Ollama to ~/Applications (no sudo needed).
- start_service: Start the Ollama server (ollama serve) in the background.
- pull_model: Pull the default model (gemma3:12b).
- model: Model to pull (default: gemma3:12b). Use gemma3:4b or gemma3:1b for lower-RAM systems.
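A sketch of the start_service and pull_model steps follows, assuming the ollama binary is already on PATH. The ollama serve and ollama pull commands and the default port 11434 come from Ollama itself; the 30-second readiness timeout is an arbitrary choice for illustration.

```python
import subprocess
import time
import urllib.request

OLLAMA_HOST = "http://127.0.0.1:11434"  # Ollama's default listen address

def start_service() -> subprocess.Popen:
    """Launch ollama serve as a background child process and wait for readiness."""
    proc = subprocess.Popen(
        ["ollama", "serve"],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    # Poll the root endpoint, which responds once the server is up.
    for _ in range(30):
        try:
            urllib.request.urlopen(OLLAMA_HOST, timeout=1)
            return proc
        except OSError:
            time.sleep(1)
    proc.kill()
    raise RuntimeError("ollama serve did not become ready within ~30s")

def pull_model(model: str = "gemma3:12b") -> None:
    """Fetch model weights; re-running the pull is safe."""
    subprocess.run(["ollama", "pull", model], check=True)
```

The server here runs in the background only relative to the caller; as noted in the cons above, it is still a foreground process rather than a managed system service.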
Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| install | No | Download and install Ollama to ~/Applications (no sudo needed) | |
| start_service | No | Start the Ollama server (ollama serve) in the background | |
| pull_model | No | Pull the default model | |
| model | No | Model to pull; use gemma3:4b or gemma3:1b for lower-RAM systems | gemma3:12b |
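To make the lower-RAM guidance for the model argument concrete, a caller could pick a variant automatically. The pick_model helper and the 16 GB / 8 GB thresholds below are illustrative assumptions, not part of this tool; only the sysctl invocation is a standard macOS facility.

```python
import subprocess

def pick_model() -> str:
    """Pick a gemma3 variant from physical RAM; thresholds are illustrative."""
    # On macOS, sysctl -n hw.memsize prints total physical memory in bytes.
    out = subprocess.run(
        ["sysctl", "-n", "hw.memsize"],
        capture_output=True, text=True, check=True,
    )
    ram_gb = int(out.stdout.strip()) / 1024**3
    if ram_gb >= 16:
        return "gemma3:12b"  # the tool's default
    if ram_gb >= 8:
        return "gemma3:4b"
    return "gemma3:1b"
```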