
Enhanced Architecture MCP

token_efficient_reasoning

Optimize token usage by delegating complex reasoning tasks to local AI, conserving cloud resources while maintaining task accuracy and context awareness.

Instructions

Delegate heavy reasoning to local AI to conserve cloud tokens

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| `context` | No | Additional context for reasoning | |
| `model` | No | Local model for reasoning | `architecture-reasoning:latest` |
| `reasoning_task` | Yes | Complex reasoning task to delegate | |
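Per the schema, only `reasoning_task` is required. Illustrative argument payloads (the task and context strings below are made-up examples, not from the server):

```javascript
// Minimal arguments: only the required field.
const minimalArgs = {
  reasoning_task: 'Evaluate caching strategies for a read-heavy API'
};

// Full arguments: optional context and model override included.
const fullArgs = {
  reasoning_task: 'Evaluate caching strategies for a read-heavy API',
  context: 'PostgreSQL backend, roughly 95% reads',
  model: 'architecture-reasoning:latest' // this is also the default when omitted
};
```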

Implementation Reference

  • The main handler function for 'token_efficient_reasoning'. Constructs a detailed structured prompt for comprehensive reasoning analysis and delegates execution to the shared queryLocalAI method using the specified local model.
```javascript
async tokenEfficientReasoning(reasoningTask, context = '', model = 'architecture-reasoning:latest') {
  const efficientPrompt = `REASONING DELEGATION TASK:

Task: ${reasoningTask}
${context ? `Context: ${context}` : ''}

Please provide comprehensive reasoning analysis including:

1. PROBLEM DECOMPOSITION
   - Break down the task into components
   - Identify key variables and relationships

2. REASONING CHAIN
   - Step-by-step logical progression
   - Evidence and justification for each step

3. ALTERNATIVE APPROACHES
   - Consider different methodologies
   - Compare pros/cons of approaches

4. SYNTHESIS
   - Integrate findings into coherent solution
   - Address potential counterarguments

5. CONCLUSION
   - Clear final reasoning result
   - Confidence level and limitations

Optimize for thorough reasoning while being concise in presentation.`;

  return await this.queryLocalAI(efficientPrompt, model, 0.6);
}
```
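The shared `queryLocalAI` method is referenced but not shown on this page. As an illustrative sketch only, assuming the local model is served by an Ollama-compatible endpoint (the `http://localhost:11434/api/generate` URL, payload shape, and injectable `fetchImpl` parameter are assumptions, not the server's actual code), it might look like:

```javascript
// Hypothetical sketch of queryLocalAI, assuming an Ollama-compatible local
// endpoint. The real implementation may differ; this only shows the shape
// of the call the handler above depends on.
async function queryLocalAI(prompt, model, temperature, fetchImpl = fetch) {
  const res = await fetchImpl('http://localhost:11434/api/generate', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      model,                       // e.g. 'architecture-reasoning:latest'
      prompt,                      // the structured reasoning prompt
      stream: false,               // return one complete response
      options: { temperature }     // 0.6 in the handler above
    })
  });
  const data = await res.json();
  return data.response; // Ollama returns the generated text in `response`
}
```

Accepting `fetchImpl` as a parameter is purely for illustration: it lets the sketch be exercised without a running model server.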
  • Input schema definition for the 'token_efficient_reasoning' tool, specifying required 'reasoning_task' parameter and optional 'context' and 'model'.
```javascript
inputSchema: {
  type: 'object',
  properties: {
    reasoning_task: { type: 'string', description: 'Complex reasoning task to delegate' },
    context: { type: 'string', description: 'Additional context for reasoning' },
    model: {
      type: 'string',
      description: 'Local model for reasoning',
      default: 'architecture-reasoning:latest'
    }
  },
  required: ['reasoning_task']
}
```
  • Tool registration in the ListToolsRequestHandler response, including name, description, and full input schema.
```javascript
{
  name: 'token_efficient_reasoning',
  description: 'Delegate heavy reasoning to local AI to conserve cloud tokens',
  inputSchema: {
    type: 'object',
    properties: {
      reasoning_task: { type: 'string', description: 'Complex reasoning task to delegate' },
      context: { type: 'string', description: 'Additional context for reasoning' },
      model: {
        type: 'string',
        description: 'Local model for reasoning',
        default: 'architecture-reasoning:latest'
      }
    },
    required: ['reasoning_task']
  }
}
```
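Once registered, an MCP client invokes the tool with a standard `tools/call` JSON-RPC request. A sketch of that request (the `id` and argument values are illustrative; any MCP-compatible client constructs this for you):

```javascript
// Example JSON-RPC 2.0 message an MCP client sends to invoke this tool.
const callRequest = {
  jsonrpc: '2.0',
  id: 1,
  method: 'tools/call',
  params: {
    name: 'token_efficient_reasoning',
    arguments: {
      reasoning_task: 'Compare event sourcing vs CRUD for an audit-heavy service',
      context: 'Node.js backend with strict audit requirements'
      // `model` omitted: defaults to 'architecture-reasoning:latest'
    }
  }
};
```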
  • Dispatch case in the CallToolRequestHandler switch statement that routes calls to the tokenEfficientReasoning handler method.
```javascript
case 'token_efficient_reasoning':
  return await this.tokenEfficientReasoning(args.reasoning_task, args.context, args.model);
```


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/autoexecbatman/arch-mcp'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.