rlm_setup_ollama_direct
by egoughnour

Install and configure Ollama locally on macOS without Homebrew or sudo. Downloads directly to ~/Applications, works on locked-down systems, and sets up the default model for local inference.

Instructions

Install Ollama via direct download (macOS).

Downloads from ollama.com to ~/Applications. Pros: no Homebrew needed, no sudo required, fully headless, works on locked-down machines. Cons: manual PATH setup, no auto-updates, and the server runs as a foreground process.
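
A minimal shell sketch of what the install step amounts to, assuming the macOS build is served at https://ollama.com/download/Ollama-darwin.zip and that the ollama CLI ships inside the app bundle under Contents/Resources (both are assumptions about the current release layout, not guarantees of this tool):

# Download and unpack into ~/Applications; no sudo or Homebrew involved.
mkdir -p ~/Applications
curl -fsSL -o /tmp/Ollama-darwin.zip https://ollama.com/download/Ollama-darwin.zip
unzip -oq /tmp/Ollama-darwin.zip -d ~/Applications

# Manual PATH setup (the con noted above): expose the bundled CLI in this shell.
export PATH="$HOME/Applications/Ollama.app/Contents/Resources:$PATH"
ollama --version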

Args:
install: Download and install Ollama to ~/Applications (no sudo needed).
start_service: Start the Ollama server (ollama serve) in the background.
pull_model: Pull the default model (gemma3:12b).
model: Model to pull (default: gemma3:12b). Use gemma3:4b or gemma3:1b on lower-RAM systems.
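
For reference, a hand-run equivalent of the start_service and pull_model steps, assuming Ollama's default port 11434 and its /api/tags endpoint for a readiness check (the log file path is only an illustrative choice):

# Start the server detached; it otherwise runs as a foreground process.
nohup ollama serve > ~/ollama-serve.log 2>&1 &

# Wait until the API answers on the default port before pulling.
until curl -fsS http://localhost:11434/api/tags > /dev/null 2>&1; do sleep 1; done

# Pull the default model; use gemma3:4b or gemma3:1b on lower-RAM machines.
ollama pull gemma3:12b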

Input Schema

Name            Required  Description                                     Default
install         No        Download and install Ollama to ~/Applications
start_service   No        Start the Ollama server in the background
pull_model      No        Pull the default model
model           No        Model to pull                                   gemma3:12b
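
Once install, start_service, and pull_model have all run, local inference can be smoke-tested against Ollama's REST API (11434 is Ollama's default port; the prompt is only illustrative):

curl -s http://localhost:11434/api/generate -d '{
  "model": "gemma3:12b",
  "prompt": "Reply with one short sentence.",
  "stream": false
}'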

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/egoughnour/massive-context-mcp'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.