pull_model
Download or update an Ollama model from the registry, with streaming status updates on download progress.
Instructions
Download / update an Ollama model from the Ollama registry. Returns streaming status lines summarising the download progress.
Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| model_name | Yes | The name of the model to pull (e.g. 'llama3', 'phi3'). | |
Output Schema
| Name | Required | Description | Default |
|---|---|---|---|
| model | Yes | The model name that was requested. | |
| status_lines | Yes | The last 10 streaming status lines from the download. | |
| total_lines | Yes | The total number of status lines received. | |
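Ollama's `/api/pull` endpoint streams newline-delimited JSON, so each entry in `status_lines` is typically a JSON string with a `status` field and, during layer downloads, `digest`/`total`/`completed` byte counts. A minimal decoding sketch; the sample lines below are illustrative values, not captured output:

```python
import json

# Hypothetical status lines as they might appear in the "status_lines" field.
status_lines = [
    '{"status": "pulling manifest"}',
    '{"status": "pulling 6a0746a1ec1a", "digest": "sha256:6a07...", '
    '"total": 4661211808, "completed": 2330605904}',
    '{"status": "success"}',
]

for raw in status_lines:
    event = json.loads(raw)
    if "total" in event and "completed" in event:
        # Layer-download events carry byte counts, so progress can be derived.
        pct = 100 * event["completed"] / event["total"]
        print(f'{event["status"]}: {pct:.0f}%')
    else:
        print(event["status"])
```

Raw lines are passed through unparsed by the tool, so clients that want percentages must decode them as above.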
Implementation Reference
- src/foundry_reverse/server.py:86-99 (registration): registration of the `pull_model` tool via the `@mcp.tool` decorator and the async handler that calls `oc.pull_model` and returns status lines.

```python
@mcp.tool(
    name="pull_model",
    description=(
        "Download / update an Ollama model from the Ollama registry. "
        "Returns streaming status lines summarising the download progress."
    ),
)
async def pull_model(model_name: str) -> dict[str, Any]:
    """
    Args:
        model_name: The name of the model to pull (e.g. 'llama3', 'phi3').
    """
    lines = await oc.pull_model(model_name)
    return {"model": model_name, "status_lines": lines[-10:], "total_lines": len(lines)}
```

- The underlying Ollama API client function, which streams `POST /api/pull` to download or update a model and returns the collected status lines.

```python
async def pull_model(name: str) -> list[str]:
    """Pull a model, streaming status lines."""
    lines: list[str] = []
    async with _client(timeout=600) as c:
        async with c.stream("POST", "/api/pull", json={"name": name, "stream": True}) as resp:
            resp.raise_for_status()
            async for raw in resp.aiter_lines():
                if raw.strip():
                    lines.append(raw)
    return lines
```

- src/foundry_reverse/server.py:93-99 (schema): input schema: a single `model_name` string parameter. Output schema: a dict with `model`, `status_lines` (the last 10 lines), and `total_lines`, as shown in the handler above.
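The handler's output contract (tail of the stream plus a total count) can be exercised without a live Ollama server by stubbing out the client call. A hypothetical harness, not part of the source; `fake_pull_model` stands in for `oc.pull_model`:

```python
import asyncio
from typing import Any

async def fake_pull_model(name: str) -> list[str]:
    # Stand-in for oc.pull_model: pretend the stream produced 25 status lines.
    return [f'{{"status": "line {i}"}}' for i in range(25)]

async def pull_model(model_name: str) -> dict[str, Any]:
    # Mirrors the handler above, with the HTTP client call stubbed out.
    lines = await fake_pull_model(model_name)
    return {"model": model_name, "status_lines": lines[-10:], "total_lines": len(lines)}

result = asyncio.run(pull_model("llama3"))
print(result["total_lines"])        # 25
print(len(result["status_lines"]))  # 10: only the tail is returned
```

Note that `status_lines` is deliberately truncated to the last 10 entries, so callers needing the full log should rely on `total_lines` to detect truncation.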