Glama / autoexecbatman
Enhanced Architecture MCP

token_efficient_reasoning

Delegate complex reasoning tasks to local AI models to reduce cloud token usage while maintaining processing capabilities.

Instructions

Delegate heavy reasoning to local AI to conserve cloud tokens

Input Schema

Name            Required  Description                          Default
reasoning_task  Yes       Complex reasoning task to delegate
context         No        Additional context for reasoning
model           No        Local model for reasoning            architecture-reasoning:latest

Implementation Reference

  • The main handler function that constructs a structured, token-efficient reasoning prompt and delegates execution to the local AI via queryLocalAI.
      async tokenEfficientReasoning(reasoningTask, context = '', model = 'architecture-reasoning:latest') {
        const efficientPrompt = `REASONING DELEGATION TASK:
    
    Task: ${reasoningTask}
    ${context ? `Context: ${context}` : ''}
    
    Please provide comprehensive reasoning analysis including:
    
    1. PROBLEM DECOMPOSITION
       - Break down the task into components
       - Identify key variables and relationships
    
    2. REASONING CHAIN
       - Step-by-step logical progression
       - Evidence and justification for each step
    
    3. ALTERNATIVE APPROACHES
       - Consider different methodologies
       - Compare pros/cons of approaches
    
    4. SYNTHESIS
       - Integrate findings into coherent solution
       - Address potential counterarguments
    
    5. CONCLUSION
       - Clear final reasoning result
       - Confidence level and limitations
    
    Optimize for thorough reasoning while being concise in presentation.`;
    
        return await this.queryLocalAI(efficientPrompt, model, 0.6);
      }
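The implementation of queryLocalAI is not shown on this page. A minimal standalone sketch of the same prompt construction, with the local-AI call stubbed out (the buildReasoningPrompt name and the stub body are illustrative assumptions, not part of the server):

```javascript
// Hypothetical sketch: builds the same delegation prompt as
// tokenEfficientReasoning, with section bodies abbreviated.
function buildReasoningPrompt(reasoningTask, context = '') {
  return `REASONING DELEGATION TASK:

Task: ${reasoningTask}
${context ? `Context: ${context}` : ''}

Please provide comprehensive reasoning analysis including:

1. PROBLEM DECOMPOSITION
2. REASONING CHAIN
3. ALTERNATIVE APPROACHES
4. SYNTHESIS
5. CONCLUSION

Optimize for thorough reasoning while being concise in presentation.`;
}

// Stub standing in for the real queryLocalAI, assumed here to take
// (prompt, model, temperature) based on the call site above.
async function queryLocalAI(prompt, model, temperature) {
  return `[${model} @ t=${temperature}] ${prompt.slice(0, 40)}...`;
}

async function tokenEfficientReasoning(task, context = '', model = 'architecture-reasoning:latest') {
  return queryLocalAI(buildReasoningPrompt(task, context), model, 0.6);
}
```

Note that when context is empty, the ternary drops the `Context:` line entirely rather than emitting an empty label.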
  • Input schema definition and tool metadata for token_efficient_reasoning, including properties for reasoning_task, context, and model.
    {
      name: 'token_efficient_reasoning',
      description: 'Delegate heavy reasoning to local AI to conserve cloud tokens',
      inputSchema: {
        type: 'object',
        properties: {
          reasoning_task: {
            type: 'string',
            description: 'Complex reasoning task to delegate'
          },
          context: {
            type: 'string',
            description: 'Additional context for reasoning'
          },
          model: {
            type: 'string',
            description: 'Local model for reasoning',
            default: 'architecture-reasoning:latest'
          }
        },
        required: ['reasoning_task']
      }
    }
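A caller-side sketch of how these schema rules play out in practice (the validateArgs helper is hypothetical and not part of the server; it mirrors the required field and the model default from the schema above):

```javascript
// Hypothetical helper: enforces the single required field and applies the
// documented default for model, matching the inputSchema.
function validateArgs(args) {
  if (typeof args.reasoning_task !== 'string' || args.reasoning_task.length === 0) {
    throw new Error('reasoning_task is required and must be a non-empty string');
  }
  return {
    reasoning_task: args.reasoning_task,
    context: typeof args.context === 'string' ? args.context : '',
    model: typeof args.model === 'string' ? args.model : 'architecture-reasoning:latest',
  };
}
```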
  • Tool dispatch registration in the CallToolRequestSchema handler switch statement.
    case 'token_efficient_reasoning':
      return await this.tokenEfficientReasoning(args.reasoning_task, args.context, args.model);
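Wired up this way, the tool is reached through a standard MCP tools/call request. A hypothetical example (the id and argument values are illustrative; only reasoning_task is required, and model falls back to architecture-reasoning:latest when omitted):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "token_efficient_reasoning",
    "arguments": {
      "reasoning_task": "Compare event sourcing and CRUD for an audit-heavy service",
      "context": "Small team; strong consistency is required"
    }
  }
}
```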
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions the tool delegates to 'local AI' and conserves 'cloud tokens,' which hints at cost-saving and local processing, but lacks details on performance, error handling, or output format. For a tool with no annotations, this leaves significant gaps in understanding its behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the key purpose without any wasted words. It directly communicates the tool's value proposition and is appropriately sized for its complexity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (delegating reasoning tasks) and the lack of annotations and output schema, the description is incomplete. It doesn't explain what the tool returns, how errors are handled, or any limitations, which are crucial for effective use. The description alone isn't sufficient for a full understanding.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all three parameters thoroughly. The description doesn't add any meaning beyond what the schema provides—it doesn't explain parameter interactions or usage nuances. This meets the baseline score of 3 since the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Delegate heavy reasoning to local AI to conserve cloud tokens.' It specifies the verb ('delegate') and resource ('heavy reasoning'), and distinguishes it from potential siblings by emphasizing token conservation. However, it doesn't explicitly differentiate from tools like 'reasoning_assist' or 'query_local_ai' by name, which keeps it from a perfect score.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies when to use this tool ('heavy reasoning' scenarios where conserving cloud tokens is important) but doesn't provide explicit guidance on when not to use it or name alternatives. Given sibling tools like 'hybrid_analysis' and 'reasoning_assist', more specific differentiation would be helpful, but the context is clear enough for basic usage.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
