
rlm_ollama_status

Check Ollama server status and available models to determine if free local inference is ready for processing large datasets.

Instructions

Check Ollama server status and available models.

Returns whether Ollama is running, a list of available models, and whether the default model (gemma3:12b) is available. Use this to determine whether free local inference is available.

Args:
    force_refresh: Force refresh the cached status (default: false)
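
This page doesn't show the tool's implementation, but the check it describes maps onto Ollama's standard local HTTP API. The sketch below is a minimal Python version of that check, assuming Ollama's default endpoint (http://localhost:11434) and its /api/tags route for listing installed models; the function name ollama_status and the keys in the returned dict are illustrative, not the tool's actual interface.

    import json
    import urllib.error
    import urllib.request

    OLLAMA_URL = "http://localhost:11434"  # Ollama's default local endpoint
    DEFAULT_MODEL = "gemma3:12b"           # default model named in this tool's description

    def ollama_status(timeout: float = 2.0) -> dict:
        """Report whether Ollama is running, which models are installed,
        and whether the default model is among them."""
        try:
            # /api/tags lists the models installed on a running Ollama server
            with urllib.request.urlopen(f"{OLLAMA_URL}/api/tags", timeout=timeout) as resp:
                data = json.load(resp)
        except (urllib.error.URLError, OSError):
            # Server unreachable: no free local inference available
            return {"running": False, "models": [], "default_model_available": False}
        models = [m["name"] for m in data.get("models", [])]
        return {
            "running": True,
            "models": models,
            "default_model_available": DEFAULT_MODEL in models,
        }

    if __name__ == "__main__":
        print(json.dumps(ollama_status(), indent=2))

The real tool caches this status (hence the force_refresh argument); the sketch omits caching for brevity.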

Input Schema

Name            Required  Description                        Default
force_refresh   No        Force refresh the cached status    false
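
The page's "JSON Schema" view of this input did not survive extraction. A schema consistent with the table above would look roughly like the following; the boolean type is inferred from the true/false default and is an assumption:

    {
      "type": "object",
      "properties": {
        "force_refresh": {
          "type": "boolean",
          "description": "Force refresh the cached status",
          "default": false
        }
      },
      "required": []
    }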

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/egoughnour/massive-context-mcp'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.