by egoughnour

rlm_setup_ollama

Install and configure Ollama on macOS via Homebrew to enable local AI inference for processing large datasets with the Massive Context MCP server.

Instructions

Install Ollama via Homebrew (macOS).

Requires Homebrew to be pre-installed. Uses 'brew install' and 'brew services'. Pros: automatic updates, pre-built binaries, and a managed background service. Cons: depends on Homebrew, and the initial Homebrew installation may prompt for sudo.
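
Under the hood this maps onto standard Homebrew commands. The sketch below shows the rough equivalents of the install and start_service steps; the tool wraps these, so the exact invocation may differ:

brew install ollama           # install the Ollama formula (pre-built binary)
brew services start ollama    # run Ollama as a managed background service
brew services list            # confirm the ollama service shows as "started"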

Args:
  install: Install Ollama via Homebrew (requires Homebrew)
  start_service: Start Ollama as a background service via brew services
  pull_model: Pull the default model (gemma3:12b)
  model: Model to pull (default: gemma3:12b). Use gemma3:4b or gemma3:1b for lower-RAM systems.
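
For the pull_model and model arguments, the equivalent Ollama CLI calls look roughly like the following (the model tags are the ones listed above):

ollama pull gemma3:12b    # default model
ollama pull gemma3:4b     # smaller alternative for lower-RAM systems
ollama pull gemma3:1b     # smallest alternative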

Input Schema

Name            Required  Description                                              Default
install         No        Install Ollama via Homebrew                              -
start_service   No        Start Ollama as a background service via brew services   -
pull_model      No        Pull the default model                                   -
model           No        Model to pull (gemma3:4b or gemma3:1b for lower RAM)     gemma3:12b
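
After install, start_service, and pull_model have run, you can verify the local endpoint before using it with the Massive Context MCP server. This check is not part of the tool itself and assumes Ollama's default port of 11434:

ollama list                             # list locally available models via the CLI
curl http://localhost:11434/api/tags    # same check via the local HTTP API; the pulled model should appear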

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/egoughnour/massive-context-mcp'
