
serve

Start the Ollama server to enable local AI model management and integration with MCP applications.

Instructions

Start Ollama server

Input Schema


No arguments

Implementation Reference

  • The main handler function for the 'serve' tool. It executes the 'ollama serve' command via execAsync, captures stdout/stderr, and returns the output as text content, or throws an McpError if the command fails.
    private async handleServe() {
      try {
        const { stdout, stderr } = await execAsync('ollama serve');
        return {
          content: [
            {
              type: 'text',
              text: stdout || stderr,
            },
          ],
        };
      } catch (error) {
        throw new McpError(
          ErrorCode.InternalError,
          `Failed to start Ollama server: ${formatError(error)}`
        );
      }
    }
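    One caveat: execAsync wraps child_process.exec, whose promise resolves only when the child exits, and 'ollama serve' runs until killed, so this handler may not return while the server stays up. A minimal alternative sketch (an assumption for illustration, not the repository's code) that launches the server detached and returns immediately:

    import { spawn } from 'child_process';

    // Start `ollama serve` as a detached background process so the tool
    // call can return without waiting for the server process to exit.
    function startOllamaDetached(): number | undefined {
      const child = spawn('ollama', ['serve'], {
        detached: true,  // run in its own process group
        stdio: 'ignore', // do not tie the server's stdio to this process
      });
      child.unref();     // let the parent event loop run independently
      return child.pid;  // pid is handy for later health checks or shutdown
    }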
  • Input schema for the 'serve' tool, which requires no parameters (empty object).
    inputSchema: {
      type: 'object',
      properties: {},
      additionalProperties: false,
    },
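    Because the schema declares no properties and forbids extras, clients invoke the tool with an empty arguments object. A sketch using the MCP TypeScript SDK client (the transport command and server path are assumptions for illustration):

    import { Client } from '@modelcontextprotocol/sdk/client/index.js';
    import { StdioClientTransport } from '@modelcontextprotocol/sdk/client/stdio.js';

    // Spawn the Ollama MCP server over stdio and call the 'serve' tool
    // with an empty arguments object, as the schema requires.
    const client = new Client({ name: 'example-client', version: '1.0.0' });
    await client.connect(
      new StdioClientTransport({ command: 'node', args: ['build/index.js'] })
    );

    const result = await client.callTool({ name: 'serve', arguments: {} });
    console.log(result.content);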
  • src/index.ts:67-75 (registration)
    Registration of the 'serve' tool in the ListToolsRequestSchema handler, including name, description, and schema.
    {
      name: 'serve',
      description: 'Start Ollama server',
      inputSchema: {
        type: 'object',
        properties: {},
        additionalProperties: false,
      },
    },
  • Dispatch case in the CallToolRequestSchema handler that routes 'serve' tool calls to the handleServe method.
    case 'serve':
      return await this.handleServe();
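
For context, the dispatch case above sits inside the server's CallToolRequestSchema handler. A hedged sketch of that surrounding shape (the default branch and exact wiring are assumptions, not taken from the repository):

    this.server.setRequestHandler(CallToolRequestSchema, async (request) => {
      switch (request.params.name) {
        case 'serve':
          return await this.handleServe();
        default:
          throw new McpError(
            ErrorCode.MethodNotFound,
            `Unknown tool: ${request.params.name}`
          );
      }
    });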


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/NightTrek/Ollama-mcp'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.