ollama_pull_model
Download models from the Ollama library to your local machine. Use this tool to install models needed for AI tasks when they aren't already available locally.
Instructions
Download a model from the Ollama library to the local machine. Use this tool when a model is needed but not yet installed locally.

- Do not use this if the model is already available; call ollama_list_models first to check.
- Do not use this to run inference; use ollama_chat or ollama_generate after pulling.

Behavior:

- WRITE operation: downloads large files (1–100+ GB) and stores them on disk.
- Idempotent: re-pulling an already-installed model is safe and verifies integrity.
- No authentication required; no rate limits.
- Execution time ranges from seconds to hours depending on model size and network bandwidth.
- Not destructive: does not delete existing data.
- On network failure, returns an error object without throwing.
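The check-then-pull flow described above can be sketched as follows. This is illustrative only: the `installed` list stands in for the model names returned by ollama_list_models, and both helper names are hypothetical.

```python
def normalize(model: str) -> str:
    """Append ':latest' when no tag is given (hypothetical helper)."""
    return model if ":" in model else model + ":latest"

def should_pull(model: str, installed: list[str]) -> bool:
    """Return True when the model is not yet present locally.

    `installed` stands in for the output of ollama_list_models.
    """
    return normalize(model) not in {normalize(m) for m in installed}

# Example: 'mistral' resolves to 'mistral:latest', which is already installed.
installed = ["llama3.1:8b", "mistral:latest"]
print(should_pull("mistral", installed))                 # False: no pull needed
print(should_pull("codellama:13b-instruct", installed))  # True: pull first
```

Only when `should_pull` returns True would the caller invoke ollama_pull_model, then follow up with ollama_chat or ollama_generate.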
Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| model | Yes | Model identifier to download from the Ollama library. Use the format 'name:tag' (e.g., 'llama3.1:8b', 'mistral:latest', 'codellama:13b-instruct'). The tag selects a specific size or quantization variant. Omitting the tag defaults to ':latest'. | |
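A caller may want to sanity-check the `model` value before invoking the tool. The sketch below uses a loose, illustrative regex; it is an approximation for demonstration, not Ollama's actual identifier grammar.

```python
import re

# Loose approximation of 'name:tag'; not Ollama's real grammar.
MODEL_RE = re.compile(r"^[\w.\-]+(:[\w.\-]+)?$")

def validate_model(model: str) -> str:
    """Reject obviously malformed identifiers; default the tag to ':latest'."""
    if not MODEL_RE.match(model):
        raise ValueError(f"invalid model identifier: {model!r}")
    return model if ":" in model else f"{model}:latest"

print(validate_model("llama3.1:8b"))  # llama3.1:8b
print(validate_model("mistral"))      # mistral:latest
```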
Output Schema
| Name | Required | Description | Default |
|---|---|---|---|
| status | No | Download result status (e.g., 'success'). Indicates the model is now available for inference. | |
| error | No | Error message if the download failed (e.g., network error, model not found in library). Only present on failure. | |
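Since `error` is only present on failure and the tool returns an error object rather than throwing, a caller can branch on the two fields above. This sketch assumes the result arrives as a plain dict with exactly those field names.

```python
def pull_succeeded(result: dict) -> bool:
    """Treat any result carrying an 'error' field as a failure.

    Assumes the output schema above: 'status' on success, 'error' on failure.
    """
    if "error" in result:
        return False
    return result.get("status") == "success"

print(pull_succeeded({"status": "success"}))                  # True
print(pull_succeeded({"error": "model not found in library"}))  # False
```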