# health_check

Check whether the local Ollama service is reachable, to ensure your AI workflows remain operational.

## Instructions

Check whether the local Ollama service is reachable.
## Input Schema

| Name | Required | Description | Default |
|---|---|---|---|
| _No arguments_ | | | |
## Output Schema

| Name | Required | Description | Default |
|---|---|---|---|
| `ollama_reachable` | Yes | `true` if the Ollama API root answered `GET /` with HTTP 200, `false` otherwise | |
| `ollama_base_url` | Yes | Base URL of the Ollama service that was probed | |
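As a concrete illustration, a successful call returns a payload shaped like the following (values are illustrative; `http://localhost:11434` is Ollama's default base URL, and the actual value depends on your configuration):

```python
# Illustrative return value of the health_check tool: the boolean reflects
# whether GET / on the Ollama base URL returned HTTP 200.
result = {
    "ollama_reachable": True,
    "ollama_base_url": "http://localhost:11434",
}
print(result["ollama_reachable"])  # True
```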
## Implementation Reference

- **src/foundry_reverse/server.py:43-45 (handler)** — the MCP tool handler for `health_check`. Calls the underlying `ollama_client.health_check()` and returns a dict with the reachability status and base URL.

  ```python
  async def health_check() -> dict[str, Any]:
      ok = await oc.health_check()
      return {"ollama_reachable": ok, "ollama_base_url": oc.OLLAMA_BASE_URL}
  ```

- **Client implementation** — the actual implementation that pings the Ollama API root (`GET /`) to verify reachability. Returns `True` on HTTP 200, `False` otherwise.

  ```python
  async def health_check() -> bool:
      try:
          async with _client(timeout=5) as c:
              r = await c.get("/")
              return r.status_code == 200
      except Exception:
          _log.debug("Ollama health check failed", exc_info=True)
          return False
  ```

- **src/foundry_reverse/server.py:39-41 (registration)** — registration of `health_check` as an MCP tool via the `@mcp.tool` decorator.

  ```python
  @mcp.tool(
      name="health_check",
      description="Check whether the local Ollama service is reachable.",
  ```
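The handler and client above reduce to a simple HTTP probe: request the service root and treat anything other than HTTP 200 as "unreachable". A minimal, synchronous standard-library sketch of the same pattern follows; the real code uses an async HTTP client, and the default base URL shown (Ollama's conventional `http://localhost:11434`) is an assumption here. The demo exercises the probe against a throwaway local server rather than a real Ollama instance:

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Assumed default base URL for a local Ollama instance (conventional port 11434).
def health_check(base_url: str = "http://localhost:11434", timeout: float = 2.0) -> bool:
    """Return True if GET <base_url>/ answers with HTTP 200, False otherwise."""
    try:
        with urllib.request.urlopen(base_url + "/", timeout=timeout) as resp:
            return resp.status == 200
    except Exception:
        # Connection refused, timeout, non-2xx status: all count as "unreachable".
        return False

# Demo: probe a throwaway local HTTP server that always answers 200.
class _OK(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
    def log_message(self, *args):  # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), _OK)
threading.Thread(target=server.serve_forever, daemon=True).start()
reachable = health_check(f"http://127.0.0.1:{server.server_port}")
server.shutdown()
print(reachable)  # True
```

Swallowing every exception is deliberate: for a health probe, the caller only needs a boolean, so any failure mode collapses to `False` (the real client logs the details at debug level instead of raising).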