
pull

Download AI models from a registry to run locally with the Ollama MCP server, enabling local LLM management and integration.

Instructions

Pull a model from a registry

Input Schema

Name    Required    Description                    Default
name    Yes         Name of the model to pull      (none)
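
A call to this tool supplies the single required name argument. Below is a minimal sketch of invoking it from the TypeScript MCP SDK over stdio; the client name, server launch path, and model name are assumptions for illustration, not values taken from this project.

    import { Client } from '@modelcontextprotocol/sdk/client/index.js';
    import { StdioClientTransport } from '@modelcontextprotocol/sdk/client/stdio.js';

    // Hypothetical wiring: launch the server over stdio. The build path is
    // an assumption; point it at the compiled server entry point.
    const transport = new StdioClientTransport({
      command: 'node',
      args: ['build/index.js'],
    });
    const client = new Client({ name: 'example-client', version: '1.0.0' });
    await client.connect(transport);

    // Invoke the 'pull' tool; 'name' is its only required argument.
    // 'llama3.2' is an example model name, not one the schema mandates.
    const result = await client.callTool({
      name: 'pull',
      arguments: { name: 'llama3.2' },
    });
    console.log(result.content);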

Implementation Reference

  • The handler function that executes the 'ollama pull' command with the provided model name and returns the command's output as text content. Note that args.name is interpolated directly into a shell string; a hardened sketch follows this list.
    private async handlePull(args: any) {
      try {
        const { stdout, stderr } = await execAsync(`ollama pull ${args.name}`);
        return {
          content: [
            {
              type: 'text',
              text: stdout || stderr,
            },
          ],
        };
      } catch (error) {
        throw new McpError(
          ErrorCode.InternalError,
          `Failed to pull model: ${formatError(error)}`
        );
      }
    }
  • The input schema defining the 'name' parameter required for the 'pull' tool.
    inputSchema: {
      type: 'object',
      properties: {
        name: {
          type: 'string',
          description: 'Name of the model to pull',
        },
      },
      required: ['name'],
      additionalProperties: false,
    },
  • src/index.ts:264-265 (dispatch)
    The switch case that dispatches 'pull' tool calls to the handlePull method.
    case 'pull':
      return await this.handlePull(request.params.arguments);
  • src/index.ts:134-148 (registration)
    The tool definition registered in the ListTools response, including name, description, and schema.
    {
      name: 'pull',
      description: 'Pull a model from a registry',
      inputSchema: {
        type: 'object',
        properties: {
          name: {
            type: 'string',
            description: 'Name of the model to pull',
          },
        },
        required: ['name'],
        additionalProperties: false,
      },
    },
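
Because handlePull builds a shell string from args.name, a crafted model name could inject additional shell commands. Below is a minimal hardened sketch of the same operation using Node's execFile; it is an illustration under stated assumptions, not the project's code, and pullModel and the validation pattern are hypothetical.

    import { execFile } from 'node:child_process';
    import { promisify } from 'node:util';

    const execFileAsync = promisify(execFile);

    // Hypothetical hardened variant: execFile passes the model name as a
    // discrete argv entry, so no shell ever parses it, and a conservative
    // allowlist rejects names containing shell metacharacters outright.
    async function pullModel(name: string): Promise<string> {
      if (!/^[A-Za-z0-9._:\/-]+$/.test(name)) {
        throw new Error(`Invalid model name: ${name}`);
      }
      const { stdout, stderr } = await execFileAsync('ollama', ['pull', name]);
      return stdout || stderr;
    }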

MCP directory API

We provide information about every MCP server in the directory via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/NightTrek/Ollama-mcp'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.