query_local_ai
Leverage local AI models via Ollama to assist with reasoning tasks. Submit prompts, adjust model parameters, and receive contextual insights for architecture-focused problem-solving.
Instructions
Query local AI model via Ollama for reasoning assistance
Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| model | No | Model name (default: architecture-reasoning:latest) | architecture-reasoning:latest |
| prompt | Yes | The reasoning prompt to send to local AI | |
| temperature | No | Temperature for response (0.1-1.0) | 0.6 |
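For illustration, a tool call that satisfies this schema could pass arguments like the following. The prompt text and temperature value are example choices only; `model` and `temperature` may be omitted, in which case the defaults above apply.

```javascript
// Example arguments for a query_local_ai tool call (illustrative values only).
const args = {
  prompt: 'Compare event sourcing and CRUD persistence for an order-management service.',
  model: 'architecture-reasoning:latest', // optional; this is the default
  temperature: 0.4                        // optional; defaults to 0.6, valid range 0.1-1.0
};
```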
Implementation Reference
- local-ai-server.js:177-212 (handler): The core implementation of the `query_local_ai` tool. This async function sends a POST request to the local Ollama server with the prompt, model, and temperature, then formats and returns the AI response in MCP content format (a standalone test sketch for this endpoint follows the list below).

  ```javascript
  async queryLocalAI(prompt, model = 'architecture-reasoning:latest', temperature = 0.6) {
    try {
      const response = await fetch(`${this.ollamaUrl}/api/generate`, {
        method: 'POST',
        headers: {
          'Content-Type': 'application/json',
        },
        body: JSON.stringify({
          model: model,
          prompt: prompt,
          stream: false,
          options: {
            temperature: temperature,
            num_predict: 2048
          }
        }),
      });

      if (!response.ok) {
        throw new Error(`Ollama API error: ${response.status}`);
      }

      const data = await response.json();

      return {
        content: [
          {
            type: 'text',
            text: `Local AI Response (${model}):\n\n${data.response}\n\nTokens: ${data.eval_count || 'N/A'}`
          }
        ]
      };
    } catch (error) {
      throw new Error(`Failed to query local AI: ${error.message}`);
    }
  }
  ```
- local-ai-server.js:36-55 (schema): Input schema for the `query_local_ai` tool, defining the expected parameters: `prompt` (required string), `model` (optional string), and `temperature` (optional number).

  ```javascript
  inputSchema: {
    type: 'object',
    properties: {
      prompt: {
        type: 'string',
        description: 'The reasoning prompt to send to local AI'
      },
      model: {
        type: 'string',
        description: 'Model name (default: architecture-reasoning:latest)',
        default: 'architecture-reasoning:latest'
      },
      temperature: {
        type: 'number',
        description: 'Temperature for response (0.1-1.0)',
        default: 0.6
      }
    },
    required: ['prompt']
  }
  ```
- local-ai-server.js:33-56 (registration): Registration of the `query_local_ai` tool in the ListToolsRequest handler, including name, description, and full input schema.

  ```javascript
  {
    name: 'query_local_ai',
    description: 'Query local AI model via Ollama for reasoning assistance',
    inputSchema: {
      type: 'object',
      properties: {
        prompt: {
          type: 'string',
          description: 'The reasoning prompt to send to local AI'
        },
        model: {
          type: 'string',
          description: 'Model name (default: architecture-reasoning:latest)',
          default: 'architecture-reasoning:latest'
        },
        temperature: {
          type: 'number',
          description: 'Temperature for response (0.1-1.0)',
          default: 0.6
        }
      },
      required: ['prompt']
    }
  },
  ```
- local-ai-server.js:146-147 (dispatch): Dispatch logic in the CallToolRequest handler that calls the `queryLocalAI` method when `query_local_ai` is invoked (an end-to-end client invocation sketch follows below).

  ```javascript
  case 'query_local_ai':
    return await this.queryLocalAI(args.prompt, args.model, args.temperature);
  ```
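To sanity-check the Ollama side of the handler in isolation, a small standalone script can hit the same `/api/generate` endpoint. This is a minimal sketch, assuming Node 18+ (for global `fetch`), Ollama listening on its default port 11434, and the `architecture-reasoning:latest` model already pulled; it is not part of local-ai-server.js.

```javascript
// Standalone sketch: exercise the Ollama /api/generate endpoint directly.
// Assumes Node 18+, Ollama on localhost:11434, and architecture-reasoning:latest available.
const OLLAMA_URL = 'http://localhost:11434';

async function main() {
  const response = await fetch(`${OLLAMA_URL}/api/generate`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      model: 'architecture-reasoning:latest',
      prompt: 'List three trade-offs of adding a message queue between two services.',
      stream: false,
      options: { temperature: 0.6, num_predict: 2048 }
    })
  });

  if (!response.ok) {
    throw new Error(`Ollama API error: ${response.status}`);
  }

  // Ollama returns the generated text in `response` and the token count in `eval_count`,
  // the same fields the handler reads.
  const data = await response.json();
  console.log(data.response);
  console.log('Tokens:', data.eval_count ?? 'N/A');
}

main().catch((err) => console.error(err.message));
```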
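For context on how this dispatch branch is reached end to end, an MCP client could invoke the tool roughly as follows. This is a hedged sketch using the @modelcontextprotocol/sdk client over stdio; the client name/version and prompt are illustrative, and exact import paths and constructor signatures may differ by SDK release.

```javascript
// Hedged sketch: invoking query_local_ai from an MCP client over stdio.
// Import paths and signatures follow recent @modelcontextprotocol/sdk releases;
// adjust for the SDK version actually in use.
import { Client } from '@modelcontextprotocol/sdk/client/index.js';
import { StdioClientTransport } from '@modelcontextprotocol/sdk/client/stdio.js';

const transport = new StdioClientTransport({
  command: 'node',
  args: ['local-ai-server.js']
});

const client = new Client({ name: 'example-client', version: '1.0.0' });
await client.connect(transport);

// Arguments must satisfy the input schema above; only `prompt` is required.
const result = await client.callTool({
  name: 'query_local_ai',
  arguments: {
    prompt: 'Outline the trade-offs of a monorepo for a five-team organization.',
    temperature: 0.3
  }
});

// The handler returns MCP content of the form [{ type: 'text', text: 'Local AI Response (...)' }].
console.log(result.content[0].text);
```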