# codebrain_status
Check locally available Ollama models to verify backend reachability and discover pulled models.
## Instructions
Report which Ollama models are available locally.
Call this to verify the local backend is reachable and discover which models the user has pulled.
## Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| *(no arguments)* | | This tool takes no arguments. | |
## Output Schema
| Name | Required | Description | Default |
|---|---|---|---|
| result | Yes | Status string: an error message, a pull hint when no models are installed, or a list of installed model names | |
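For illustration, the two non-error shapes of the result string (taken from the handler under Implementation Reference) can be sketched with a plain helper. `format_status` is a hypothetical name used only for this sketch, not part of the codebase:

```python
def format_status(models: list[str]) -> str:
    # Hypothetical helper mirroring the handler's two branches:
    # an empty model list yields a pull hint, otherwise a bullet list.
    if not models:
        return "No models installed. Run `ollama pull qwen2.5-coder:14b` to add the default."
    return "Installed models:\n" + "\n".join(f" - {m}" for m in models)

print(format_status([]))
print(format_status(["qwen2.5-coder:14b"]))
```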
## Implementation Reference
- codebrain/server.py:330-343 (handler): The `codebrain_status` tool handler (registered via `@mcp.tool()`). Calls `list_models()` and reports which Ollama models are available locally.

```python
@mcp.tool()
async def codebrain_status() -> str:
    """Report which Ollama models are available locally.

    Call this to verify the local backend is reachable and discover
    which models the user has pulled.
    """
    try:
        models = await list_models()
    except BackendError as exc:
        return f"[codebrain error] {exc}"
    if not models:
        return "No models installed. Run `ollama pull qwen2.5-coder:14b` to add the default."
    return "Installed models:\n" + "\n".join(f" - {m}" for m in models)
```

- codebrain/server.py:330 (registration): The `@mcp.tool()` decorator registers `codebrain_status` as an MCP tool on the FastMCP server instance.

```python
@mcp.tool()
```

- codebrain/backend.py:58-68 (helper): The `list_models()` helper queries Ollama's `/api/tags` endpoint and returns a list of installed model names.

```python
async def list_models() -> list[str]:
    """List models currently installed in the local Ollama."""
    try:
        async with httpx.AsyncClient(timeout=10.0) as client:
            response = await client.get(f"{OLLAMA_URL}/api/tags")
            response.raise_for_status()
            return [m["name"] for m in response.json().get("models", [])]
    except httpx.ConnectError as exc:
        raise BackendError(
            f"Cannot reach Ollama at {OLLAMA_URL} — is `ollama serve` running?"
        ) from exc
```

- codebrain/server.py:331-336 (schema): The docstring serves as the tool's schema/description; the function takes no arguments and returns a string.

```python
async def codebrain_status() -> str:
    """Report which Ollama models are available locally.

    Call this to verify the local backend is reachable and discover
    which models the user has pulled.
    """
```
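For reference, `list_models()` only reads the `name` field of each entry in the `/api/tags` response. A minimal sketch of that parsing step, using an assumed sample payload (the model names below are illustrative, and Ollama's real response carries additional fields per model):

```python
# Assumed sample shape of Ollama's /api/tags JSON; only the "name" keys matter here.
payload = {"models": [{"name": "qwen2.5-coder:14b"}, {"name": "nomic-embed-text:latest"}]}

# Same comprehension list_models() applies to response.json(), minus the HTTP call.
names = [m["name"] for m in payload.get("models", [])]
print(names)  # ['qwen2.5-coder:14b', 'nomic-embed-text:latest']
```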