rlm_ollama_status
Check Ollama server status and available models to determine if free local inference is ready for processing large datasets.
Instructions
Check Ollama server status and available models.
Returns whether Ollama is running, the list of available models, and whether the default model (gemma3:12b) is available. Use this to determine if free local inference is available.
Args: force_refresh: Force refresh the cached status (default: false)
Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| force_refresh | No | Force refresh the cached status | false |
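Outside of the tool, the same check can be approximated directly against Ollama's HTTP API. The sketch below is a minimal illustration, assuming the standard Ollama endpoint `/api/tags` on the default port 11434; the `check_ollama_status` function name and the returned dict shape are illustrative, not the tool's actual implementation.

```python
import json
import urllib.request
import urllib.error

OLLAMA_URL = "http://localhost:11434"  # Ollama's default listen address (assumption)
DEFAULT_MODEL = "gemma3:12b"           # default model named in the tool description

def check_ollama_status(base_url: str = OLLAMA_URL) -> dict:
    """Report whether Ollama is running, which models are pulled,
    and whether the default model is among them."""
    try:
        # /api/tags lists locally available models
        with urllib.request.urlopen(f"{base_url}/api/tags", timeout=5) as resp:
            data = json.load(resp)
    except (urllib.error.URLError, OSError):
        # Server unreachable: report not running, no models
        return {"running": False, "models": [], "default_model_available": False}
    models = [m["name"] for m in data.get("models", [])]
    return {
        "running": True,
        "models": models,
        "default_model_available": DEFAULT_MODEL in models,
    }
```

If the server is down, the sketch degrades gracefully rather than raising, mirroring the tool's "is free local inference available?" yes/no framing.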