query_local_ai
Queries a local AI model through Ollama for reasoning assistance on architecture-focused prompts, with configurable model and temperature settings.
Instructions
Query local AI model via Ollama for reasoning assistance
Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| prompt | Yes | The reasoning prompt to send to local AI | |
| model | No | Model name (default: architecture-reasoning:latest) | architecture-reasoning:latest |
| temperature | No | Temperature for response (0.1-1.0) | 0.6 |
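
A hypothetical set of call arguments for this tool (only `prompt` is required; omitted fields fall back to the defaults above):

```javascript
// Hypothetical arguments for a query_local_ai tool call.
// Only `prompt` is required; `model` and `temperature` are optional.
const args = {
  prompt: 'Compare event sourcing and CRUD for an audit-heavy billing service.',
  model: 'architecture-reasoning:latest', // optional; this is the default
  temperature: 0.4                        // optional; defaults to 0.6
};
```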
Implementation Reference
- local-ai-server.js:177-212 (handler): The core handler that implements the tool logic. It makes an HTTP POST request to the local Ollama server's `/api/generate` endpoint with the provided prompt, model, and temperature, then formats and returns the AI response.

  ```javascript
  async queryLocalAI(prompt, model = 'architecture-reasoning:latest', temperature = 0.6) {
    try {
      const response = await fetch(`${this.ollamaUrl}/api/generate`, {
        method: 'POST',
        headers: {
          'Content-Type': 'application/json',
        },
        body: JSON.stringify({
          model: model,
          prompt: prompt,
          stream: false,
          options: {
            temperature: temperature,
            num_predict: 2048
          }
        }),
      });

      if (!response.ok) {
        throw new Error(`Ollama API error: ${response.status}`);
      }

      const data = await response.json();
      return {
        content: [
          {
            type: 'text',
            text: `Local AI Response (${model}):\n\n${data.response}\n\nTokens: ${data.eval_count || 'N/A'}`
          }
        ]
      };
    } catch (error) {
      throw new Error(`Failed to query local AI: ${error.message}`);
    }
  }
  ```
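  The handler requests a non-streaming completion (`stream: false`) and caps generation at 2048 tokens via `num_predict`; both HTTP errors and network failures surface as a thrown `Error` for the MCP layer to report. As a rough smoke test, assuming the class is instantiable as `LocalAIServer` (a hypothetical name, not confirmed by this excerpt) and Ollama is listening on its standard default port 11434:

  ```javascript
  // Hypothetical smoke test for queryLocalAI. `LocalAIServer` and its
  // zero-argument constructor are assumptions; ollamaUrl is set elsewhere
  // in the file and would typically point at http://localhost:11434.
  const server = new LocalAIServer();
  const result = await server.queryLocalAI(
    'List the trade-offs of a shared database between microservices.',
    'architecture-reasoning:latest',
    0.3
  );
  console.log(result.content[0].text); // "Local AI Response (...): ..."
  ```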
- local-ai-server.js:33-56 (registration): Registers the query_local_ai tool in the ListToolsRequestSchema handler, providing its name, description, and input schema.

  ```javascript
  {
    name: 'query_local_ai',
    description: 'Query local AI model via Ollama for reasoning assistance',
    inputSchema: {
      type: 'object',
      properties: {
        prompt: {
          type: 'string',
          description: 'The reasoning prompt to send to local AI'
        },
        model: {
          type: 'string',
          description: 'Model name (default: architecture-reasoning:latest)',
          default: 'architecture-reasoning:latest'
        },
        temperature: {
          type: 'number',
          description: 'Temperature for response (0.1-1.0)',
          default: 0.6
        }
      },
      required: ['prompt']
    }
  },
  ```
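  In servers built on the official `@modelcontextprotocol/sdk`, an object like this is returned from the ListTools handler. The surrounding wiring is not part of this excerpt; a minimal sketch under that assumption:

  ```javascript
  // Sketch of typical MCP SDK wiring; the class and field names here are
  // assumptions, not code from local-ai-server.js.
  import { Server } from '@modelcontextprotocol/sdk/server/index.js';
  import { ListToolsRequestSchema } from '@modelcontextprotocol/sdk/types.js';

  class LocalAIServer {
    constructor() {
      this.server = new Server(
        { name: 'local-ai-server', version: '1.0.0' },
        { capabilities: { tools: {} } }
      );
      // Advertise the tool definition shown above.
      this.server.setRequestHandler(ListToolsRequestSchema, async () => ({
        tools: [
          /* the query_local_ai registration object from lines 33-56 */
        ],
      }));
    }
  }
  ```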
- local-ai-server.js:36-55 (schema): Input schema definition for the query_local_ai tool, specifying `prompt` (required), `model`, and `temperature` parameters with types and defaults. These lines are the `inputSchema` object shown inline in the registration snippet above.
- local-ai-server.js:146-147 (handler): Dispatch case in the CallToolRequestSchema switch statement that routes query_local_ai calls to the queryLocalAI method.

  ```javascript
  case 'query_local_ai':
    return await this.queryLocalAI(args.prompt, args.model, args.temperature);
  ```
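  This case sits inside the CallTool handler's switch over tool names. A minimal sketch of that surrounding dispatcher, again assuming the standard MCP SDK pattern (only the query_local_ai case comes from the excerpt):

  ```javascript
  import { CallToolRequestSchema } from '@modelcontextprotocol/sdk/types.js';

  // Sketch of the dispatcher; the class shape and method name are
  // assumptions, and only the query_local_ai case is from the excerpt.
  class LocalAIServer {
    setupToolHandlers() {
      this.server.setRequestHandler(CallToolRequestSchema, async (request) => {
        const { name, arguments: args } = request.params;
        switch (name) {
          case 'query_local_ai':
            return await this.queryLocalAI(args.prompt, args.model, args.temperature);
          default:
            throw new Error(`Unknown tool: ${name}`);
        }
      });
    }
  }
  ```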