mentalmodel

Apply structured mental models like First Principles Thinking and Pareto Principle to systematically break down and solve complex problems.

Instructions

A tool for applying structured mental models to problem-solving. Supports various mental models including:

  • First Principles Thinking

  • Opportunity Cost Analysis

  • Error Propagation Understanding

  • Rubber Duck Debugging

  • Pareto Principle

  • Occam's Razor

Each model provides a systematic approach to breaking down and solving problems.

Input Schema

Name         Required   Description   Default
modelName    Yes
problem      Yes
steps        No
reasoning    No
conclusion   No
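Based on this schema, a conforming input might look like the following. The model name and all field values are illustrative only; the exact accepted `modelName` strings are not documented on this page.

```typescript
// Illustrative mentalmodel input; values are examples, not canonical.
const exampleInput = {
  modelName: "first_principles", // required; one of the supported models
  problem: "Why is our deploy pipeline slow?", // required
  steps: [
    "List every pipeline stage",
    "Time each stage in isolation",
  ], // optional
  reasoning: "Strip assumptions down to measured facts.", // optional
  conclusion: "Parallelize the two slowest stages.", // optional
};

console.log(JSON.stringify(exampleInput, null, 2));
```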

Implementation Reference

  • The `processModel` method acts as the primary handler for the `mentalmodel` tool, coordinating validation, formatting, and result execution.
    public processModel(input: unknown): any {
        try {
            const validatedInput = this.validateModelData(input);
            const formattedOutput = this.formatModelOutput(validatedInput);
            console.error(formattedOutput);
    
            return {
                modelName: validatedInput.modelName,
                status: "success",
                hasSteps: validatedInput.steps.length > 0,
                hasConclusion: !!validatedInput.conclusion,
            };
        } catch (error) {
            return {
                error: error instanceof Error ? error.message : String(error),
                status: "failed",
            };
        }
    }
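The success/failure contract above can be exercised with a minimal, self-contained sketch. Validation is inlined and simplified here; `runModel` and the sample inputs are illustrative, not the server's actual API.

```typescript
// Sketch of processModel's result contract (assumed shape, per the snippet above).
type ModelResult =
  | { modelName: string; status: "success"; hasSteps: boolean; hasConclusion: boolean }
  | { error: string; status: "failed" };

function runModel(input: unknown): ModelResult {
  try {
    const data = input as Record<string, unknown>;
    if (!data || typeof data.modelName !== "string") {
      throw new Error("Invalid modelName: must be a string");
    }
    if (typeof data.problem !== "string") {
      throw new Error("Invalid problem: must be a string");
    }
    // Optional fields default to empty values, mirroring the validation snippet.
    const steps = Array.isArray(data.steps) ? data.steps.map(String) : [];
    const conclusion = typeof data.conclusion === "string" ? data.conclusion : "";
    return {
      modelName: data.modelName,
      status: "success",
      hasSteps: steps.length > 0,
      hasConclusion: conclusion.length > 0,
    };
  } catch (error) {
    return {
      error: error instanceof Error ? error.message : String(error),
      status: "failed",
    };
  }
}

console.log(runModel({ modelName: "pareto_principle", problem: "Triage bug backlog" }));
console.log(runModel({ problem: "missing modelName" }));
```

Note that invalid input does not throw out of the handler: errors are caught and folded into a `status: "failed"` result object.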
  • Input validation logic ensuring the required fields for the mental model are present and correctly typed.
    private validateModelData(input: unknown): MentalModelData {
        const data = input as Record<string, unknown>;
    
        if (!data.modelName || typeof data.modelName !== "string") {
            throw new Error("Invalid modelName: must be a string");
        }
        if (!data.problem || typeof data.problem !== "string") {
            throw new Error("Invalid problem: must be a string");
        }
    
        return {
            modelName: data.modelName as string,
            problem: data.problem as string,
            steps: Array.isArray(data.steps) ? data.steps.map(String) : [],
            reasoning:
                typeof data.reasoning === "string"
                    ? (data.reasoning as string)
                    : "",
            conclusion:
                typeof data.conclusion === "string"
                    ? (data.conclusion as string)
                    : "",
        };
    }
  • src/index.ts:1046-1056 (registration)
    The `mentalmodel` tool is registered and invoked within the main MCP tool execution switch case in `src/index.ts`.
    case "mentalmodel": {
        const result = modelServer.processModel(request.params.arguments);
        return {
            content: [
                {
                    type: "text",
                    text: JSON.stringify(result, null, 2),
                },
            ],
        };
    }
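The wrapping step can be seen in isolation: whatever `processModel` returns is serialized into a single MCP text-content item. A brief sketch (the result object is a sample, not real server output):

```typescript
// Sketch: serializing a handler result into MCP text content, per the switch case above.
const result = {
  modelName: "occams_razor",
  status: "success",
  hasSteps: true,
  hasConclusion: false,
};

const response = {
  content: [
    {
      type: "text",
      text: JSON.stringify(result, null, 2),
    },
  ],
};

console.log(response.content[0].text);
```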
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full disclosure burden. It states that the models provide a 'systematic approach' but doesn't explain what the tool actually returns (generated analysis? validation? structured output?), whether state is modified, or what execution constraints apply.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Appropriately sized, with a clear bullet enumeration of models and a front-loaded purpose statement followed by specific examples. No significant waste, though the final sentence ('systematic approach') is somewhat redundant with earlier text.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 5 undocumented parameters (0% coverage), no annotations, no output schema, and high sibling ambiguity, the description is insufficient. Needs to clarify parameter roles (especially steps/reasoning/conclusion), distinguish from similar tools, and disclose behavioral outcomes.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 0%, so the description must compensate heavily. While it implicitly documents 'modelName' via the enumerated model list, it fails to explain the other four parameters. Particularly unclear: 'steps', 'reasoning', and 'conclusion' appear to be outputs of applying a mental model, yet they are listed as optional inputs with no explanation of their purpose.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clear verb ('applying') and resource ('mental models'), with a specific enumeration of supported models (First Principles, Pareto, etc.). However, it fails to differentiate this tool from siblings like 'decisionframework', 'sequentialthinking', or 'structuredargumentation', despite the server exposing 10+ overlapping reasoning tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to select this tool versus the 10 sibling reasoning/collaboration tools. No prerequisites, exclusions, or alternative recommendations are mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
