# serve
Start the Ollama server to enable local AI model management and integration with MCP applications.
## Instructions
Start Ollama server
## Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| *No arguments* | | | |
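Because the schema is an empty object, a call to this tool passes no arguments. As a rough sketch, invoking it from a TypeScript MCP client might look like the following (the client setup, server command, and file path are illustrative assumptions, not part of this project):

```typescript
import { Client } from '@modelcontextprotocol/sdk/client/index.js';
import { StdioClientTransport } from '@modelcontextprotocol/sdk/client/stdio.js';

// Hypothetical transport; adjust command/args to wherever the server is built.
const transport = new StdioClientTransport({
  command: 'node',
  args: ['build/index.js'],
});

const client = new Client({ name: 'example-client', version: '1.0.0' });
await client.connect(transport);

// 'serve' takes no arguments, so the arguments object stays empty.
const result = await client.callTool({ name: 'serve', arguments: {} });
console.log(result.content);
```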
## Implementation Reference
- src/index.ts:292-306 (handler): The main handler for the `serve` tool. It runs the `ollama serve` command via `execAsync`, captures stdout and stderr, and returns the output as text content, or throws an `McpError` on failure. (See the note on `execAsync` after this list.)

  ```typescript
  private async handleServe() {
    try {
      const { stdout, stderr } = await execAsync('ollama serve');
      return {
        content: [
          {
            type: 'text',
            text: stdout || stderr,
          },
        ],
      };
    } catch (error) {
      throw new McpError(
        ErrorCode.InternalError,
        `Failed to start Ollama server: ${formatError(error)}`
      );
    }
  }
  ```
- src/index.ts:70-74 (schema): Input schema for the `serve` tool, which takes no parameters (an empty object).

  ```typescript
  inputSchema: {
    type: 'object',
    properties: {},
    additionalProperties: false,
  },
  ```
- src/index.ts:67-75 (registration): Registration of the `serve` tool in the ListToolsRequestSchema handler, including its name, description, and schema. (A sketch of the surrounding handler follows this list.)

  ```typescript
  {
    name: 'serve',
    description: 'Start Ollama server',
    inputSchema: {
      type: 'object',
      properties: {},
      additionalProperties: false,
    },
  },
  ```
- src/index.ts:256-257 (dispatch): Dispatch case in the CallToolRequestSchema handler that routes `serve` tool calls to the `handleServe` method. (A sketch of the surrounding switch also follows below.)

  ```typescript
  case 'serve':
    return await this.handleServe();
  ```
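The handler above depends on an `execAsync` helper that is not shown in these excerpts. It is presumably the standard promisified `child_process.exec`; a minimal sketch under that assumption, with a behavioral caveat worth knowing:

```typescript
import { exec } from 'node:child_process';
import { promisify } from 'node:util';

// Presumed definition of the execAsync helper used by handleServe.
const execAsync = promisify(exec);

// Caveat: exec buffers output and its promise settles only when the child
// process exits. Since `ollama serve` is a long-running daemon, the await in
// handleServe will not resolve while the server keeps running; output comes
// back only if the process exits (e.g. it fails to start, or another server
// already holds the port).
```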
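For context on the registration entry, MCP servers built on the TypeScript SDK typically expose their tool list through a ListToolsRequestSchema handler. A sketch of how that entry might plug in (the surrounding method is an assumption, not quoted from the repo):

```typescript
import { ListToolsRequestSchema } from '@modelcontextprotocol/sdk/types.js';

// Hypothetical surrounding method; only the 'serve' entry is from the repo.
private setupToolHandlers() {
  this.server.setRequestHandler(ListToolsRequestSchema, async () => ({
    tools: [
      {
        name: 'serve',
        description: 'Start Ollama server',
        inputSchema: {
          type: 'object',
          properties: {},
          additionalProperties: false,
        },
      },
      // ...the server's other tools are registered alongside 'serve'
    ],
  }));
}
```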
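Similarly, the dispatch case lives inside a CallToolRequestSchema handler. A sketch of the surrounding switch, assuming a conventional unknown-tool fallback (only the 'serve' case is confirmed by the excerpt):

```typescript
import {
  CallToolRequestSchema,
  ErrorCode,
  McpError,
} from '@modelcontextprotocol/sdk/types.js';

// Hypothetical surrounding handler; only the 'serve' case is from the repo.
this.server.setRequestHandler(CallToolRequestSchema, async (request) => {
  switch (request.params.name) {
    case 'serve':
      return await this.handleServe();
    // ...cases for the server's other tools
    default:
      throw new McpError(
        ErrorCode.MethodNotFound,
        `Unknown tool: ${request.params.name}`
      );
  }
});
```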