consult_ollama
Consult Ollama AI models for architectural decisions, code reviews, and design discussions. Supports sequential chaining for complex multi-step reasoning.
Instructions
Consult with Ollama AI models for architectural decisions, code reviews, and design discussions. Supports sequential chaining of consultations for complex multi-step reasoning.
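The sequential chaining mentioned above can be sketched as two tool calls where the second prompt references the first result. This is a hypothetical illustration: the `build_consultation` helper and the shape of the previous-result context are assumptions, not part of the documented API; only the parameter names (`prompt`, `consultation_type`, `context`) come from the schema below.

```python
def build_consultation(prompt, consultation_type=None, context=None):
    """Assemble an arguments dict for a consult_ollama tool call."""
    args = {"prompt": prompt}
    if consultation_type is not None:
        args["consultation_type"] = consultation_type
    if context is not None:
        args["context"] = context
    return args

# Step 1: ask a reasoning-oriented model for an architecture recommendation.
first = build_consultation(
    "Compare event sourcing vs CRUD for an audit-heavy service.",
    consultation_type="thinking",
)

# Step 2: pass the (placeholder) first answer back as context so the
# follow-up prompt can reference the previous consultation result.
previous_answer = "Event sourcing fits audit-heavy domains because..."  # placeholder
second = build_consultation(
    "Given the previous recommendation, outline the migration steps.",
    consultation_type="instruction",
    context={"previous_result": previous_answer},
)
```

How the previous result is threaded through `context` is up to the caller; the tool only documents that the prompt "can reference previous consultation results."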
Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| consultation_type | No | Type of consultation: "thinking" (uses kimi-k2-thinking:cloud for reasoning tasks), "instruction" (uses qwen3-vl:235b-instruct-cloud for instruction-following), or "general" (uses specified model or default). If specified, overrides the model parameter. | |
| model | No | Model to use (e.g., "qwen2.5-coder:7b-cloud"). If not specified and no consultation_type, uses the first available model. Must be a cloud model (ends with :cloud or -cloud) or locally installed. | |
| prompt | Yes | Your question or prompt for the AI model. Can reference previous consultation results. | |
| system_prompt | No | Optional system prompt to guide model behavior | |
| context | No | Optional context including code, previous results, and metadata | |
| temperature | No | Sampling temperature (0.0-2.0) | 0.7 |
| timeout_ms | No | Request timeout in milliseconds. Increase for complex prompts with system prompts (e.g., 120000-300000 for complex reasoning) | 60000 |
| auto_settings | No | If true, auto-suggest temperature/timeout based on model name + prompt heuristics (can also be enabled via MCP_AUTO_MODEL_SETTINGS=1). Does not override explicitly provided temperature/timeout_ms. | |
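A minimal sketch of how a caller might apply the documented defaults and the cloud-model naming rule before invoking the tool. The helper names (`looks_like_cloud_model`, `prepare_arguments`) are illustrative, not part of the tool; the default values (temperature 0.7, timeout 60000 ms) and the `:cloud`/`-cloud` suffix convention come from the schema above.

```python
DEFAULT_TEMPERATURE = 0.7   # schema default for temperature
DEFAULT_TIMEOUT_MS = 60000  # schema default for timeout_ms

def looks_like_cloud_model(model):
    """Check the documented cloud-model naming convention.

    A False result does not mean the model is invalid; it may still be
    locally installed, which this sketch cannot verify.
    """
    return model.endswith(":cloud") or model.endswith("-cloud")

def prepare_arguments(prompt, model=None, temperature=None, timeout_ms=None):
    """Fill in schema defaults for a consult_ollama call."""
    args = {
        "prompt": prompt,
        "temperature": DEFAULT_TEMPERATURE if temperature is None else temperature,
        "timeout_ms": DEFAULT_TIMEOUT_MS if timeout_ms is None else timeout_ms,
    }
    if model is not None:
        args["model"] = model
    return args

args = prepare_arguments(
    "Review this function for race conditions.",
    model="qwen2.5-coder:7b-cloud",
)
```

For long reasoning tasks with a system prompt, the caller would override `timeout_ms` explicitly (e.g., 120000-300000) as the schema recommends.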