

Cascaded Reasoning with Adaptive Step Handling

a.k.a. CRASH

"Use crash to ..."

An advanced MCP (Model Context Protocol) server that facilitates structured, iterative reasoning for complex problem-solving and analysis. CRASH v2.0 introduces flexible validation, confidence tracking, revision mechanisms, and branching support while maintaining backward compatibility.

Heavily inspired by MCP Sequential Thinking Server


Why I made this

Frankly, I created this because typing "use sequential_thinking" was cumbersome. Now I can simply say "use crash" instead!

That was just the starting point, though. CRASH is a modified version of sequential thinking with token efficiency as its core design principle. Its prompting approach is more streamlined: it doesn't include code in thoughts, doesn't require the agent to list all available tools, and so on.

I've had great experience with CRASH so far and have completely replaced sequential thinking in my workflow. It's now my go-to solution whenever an agent can't solve an issue in one shot or even when plan mode falls short.

Claude Code's Experience

Question:

> Did the CRASH-MCP actually help you, or would you have been better off using your internal planning mechanisms?

CRASH MCP helped significantly for this specific task. Here's my honest assessment:

Where CRASH helped:
  • Systematic analysis: It forced me to break down the GitHub issue methodically, understanding the exact problem before jumping to solutions
  • Solution exploration: I explored multiple approaches (preprocessing, decorators, direct modification) before settling on the best one
  • Planning validation: Each step built on the previous one logically, ensuring I didn't miss critical aspects

Where my internal planning would have been sufficient:
  • Implementation execution: Once I had the plan, the actual coding was straightforward - I could have done that without CRASH
  • Code pattern recognition: I could have immediately recognized this as a parameter serialization issue

The key difference: CRASH forced me to be more thorough in the analysis phase. Without it, I might have rushed to implement the first solution that came to mind (like using Union[str, dict] parameters) rather than exploring the cleaner parse_tool_input() helper approach.

Verdict: CRASH added value here because this was a complex problem requiring systematic analysis of multiple solution paths. For simpler coding tasks, my internal planning would be sufficient and faster.

🚀 Key Features

Core Capabilities

  • 🎯 Flexible Purpose Types: Extended set including validation, exploration, hypothesis, correction, planning, plus custom purposes
  • 📝 Natural Language Flow: No forced prefixes or rigid formatting (configurable)
  • 🔄 Revision Mechanism: Correct and improve previous reasoning steps
  • 🌿 Branching Support: Explore multiple solution paths in parallel
  • 📊 Confidence Tracking: Express uncertainty with confidence scores (0-1 scale)
  • 🔧 Structured Actions: Enhanced tool integration with parameters and expected outputs
  • 💾 Session Management: Multiple concurrent reasoning chains with unique IDs
  • 📋 Multiple Output Formats: Console, JSON, and Markdown formatting

Configuration Options

  • Strict Mode: Legacy compatibility with original rigid validation
  • Flexible Mode: Full access to enhanced features (default)
  • Customizable Validation: Toggle prefix requirements independently
  • Environment Variables: Easy configuration without code changes

📦 Installation

npm install crash-mcp

Or use directly with npx:

npx crash-mcp

Requirements

  • Node.js >= v18.0.0
  • Cursor, Claude Code, VSCode, Windsurf or another MCP Client

Go to: Settings -> Cursor Settings -> MCP -> Add new global MCP server

Pasting the following configuration into your Cursor ~/.cursor/mcp.json file is the recommended approach. You may also install in a specific project by creating .cursor/mcp.json in your project folder. See Cursor MCP docs for more info.

Cursor Local Server Connection
{ "mcpServers": { "crash": { "command": "npx", "args": ["-y", "crash-mcp"] } } }
Cursor with Environment Variables
{ "mcpServers": { "crash": { "command": "npx", "args": ["-y", "crash-mcp"], "env": { "MAX_HISTORY_SIZE": "100", "CRASH_STRICT_MODE": "false", "CRASH_OUTPUT_FORMAT": "console", "CRASH_NO_COLOR": "false" } } } }

Run this command. See Claude Code MCP docs for more info.

Claude Code Local Server Connection
claude mcp add crash -- npx -y crash-mcp

or, if you are using PowerShell:

claude mcp add crash '--' npx -y crash-mcp

Add this to your Windsurf MCP config file. See Windsurf MCP docs for more info.

Windsurf Local Server Connection
{ "mcpServers": { "crash": { "command": "npx", "args": ["-y", "crash-mcp"], "env": { "MAX_HISTORY_SIZE": "100", "CRASH_STRICT_MODE": "false", "CRASH_OUTPUT_FORMAT": "console" } } } }

Add this to your VS Code MCP config file. See VS Code MCP docs for more info.

VS Code Local Server Connection
"mcp": { "servers": { "crash": { "type": "stdio", "command": "npx", "args": ["-y", "crash-mcp"] } } }

Add this to your Cline MCP configuration:

{ "mcpServers": { "crash": { "command": "npx", "args": ["-y", "crash-mcp"], "env": { "CRASH_STRICT_MODE": "false", "MAX_HISTORY_SIZE": "100" } } } }

Add this to your Zed settings.json. See Zed Context Server docs for more info.

{ "context_servers": { "CRASH": { "command": { "path": "npx", "args": ["-y", "crash-mcp"] }, "settings": { "env": { "CRASH_STRICT_MODE": "false", "MAX_HISTORY_SIZE": "100" } } } } }

To configure CRASH MCP in Augment Code, you can use either the graphical interface or manual configuration.

A. Using the Augment Code UI

  1. Click the hamburger menu.
  2. Select Settings.
  3. Navigate to the Tools section.
  4. Click the + Add MCP button.
  5. Enter the following command:
    npx -y crash-mcp
  6. Name the MCP: CRASH.
  7. Click the Add button.

B. Manual Configuration

  1. Press Cmd/Ctrl+Shift+P or go to the hamburger menu in the Augment panel
  2. Select Edit Settings
  3. Under Advanced, click Edit in settings.json
  4. Add the server configuration to the mcpServers array in the augment.advanced object
"augment.advanced": { "mcpServers": [ { "name": "crash", "command": "npx", "args": ["-y", "crash-mcp"] } ] }

Add this to your Roo Code MCP configuration file. See Roo Code MCP docs for more info.

Roo Code Local Server Connection
{ "mcpServers": { "crash": { "command": "npx", "args": ["-y", "crash-mcp"] } } }

See Gemini CLI Configuration for details.

  1. Open the Gemini CLI settings file. The location is ~/.gemini/settings.json (where ~ is your home directory).
  2. Add the following to the mcpServers object in your settings.json file:
{ "mcpServers": { "crash": { "command": "npx", "args": ["-y", "crash-mcp"] } } }

If the mcpServers object does not exist, create it.

Claude Desktop Local Server Connection

Open Claude Desktop developer settings and edit your claude_desktop_config.json file to add the following configuration. See Claude Desktop MCP docs for more info.

{ "mcpServers": { "crash": { "command": "npx", "args": ["-y", "crash-mcp"], "env": { "MAX_HISTORY_SIZE": "100", "CRASH_STRICT_MODE": "false", "CRASH_OUTPUT_FORMAT": "console", "CRASH_NO_COLOR": "false" } } } }

Add this to your Opencode configuration file. See Opencode MCP docs for more info.

Opencode Local Server Connection
{ "mcp": { "crash": { "type": "local", "command": ["npx", "-y", "crash-mcp"], "enabled": true } } }

See OpenAI Codex for more information.

Add the following configuration to your OpenAI Codex MCP server settings:

[mcp_servers.crash]
args = ["-y", "crash-mcp"]
command = "npx"

See JetBrains AI Assistant Documentation for more details.

  1. In JetBrains IDEs go to Settings -> Tools -> AI Assistant -> Model Context Protocol (MCP)
  2. Click + Add.
  3. Click on Command in the top-left corner of the dialog and select the As JSON option from the list
  4. Add this configuration and click OK
{ "mcpServers": { "crash": { "command": "npx", "args": ["-y", "crash-mcp"] } } }
  5. Click Apply to save changes.
  6. In the same way, CRASH can be added for JetBrains Junie in Settings -> Tools -> Junie -> MCP Settings.

See Kiro Model Context Protocol Documentation for details.

  1. Navigate to Kiro > MCP Servers
  2. Add a new MCP server by clicking the + Add button.
  3. Paste the configuration given below:
{ "mcpServers": { "CRASH": { "command": "npx", "args": ["-y", "crash-mcp"], "env": {}, "disabled": false, "autoApprove": [] } } }
  4. Click Save to apply the changes.

Use the Add manually feature and fill in the JSON configuration information for that MCP server. For more details, visit the Trae documentation.

Trae Local Server Connection
{ "mcpServers": { "crash": { "command": "npx", "args": ["-y", "crash-mcp"] } } }

Use these alternatives to run the local CRASH MCP server with other runtimes. These examples work for any client that supports launching a local MCP server via command + args.

Bun
{ "mcpServers": { "crash": { "command": "bunx", "args": ["-y", "crash-mcp"] } } }
Deno
{ "mcpServers": { "crash": { "command": "deno", "args": [ "run", "--allow-env=NO_DEPRECATION,TRACE_DEPRECATION,MAX_HISTORY_SIZE,CRASH_STRICT_MODE,CRASH_OUTPUT_FORMAT,CRASH_NO_COLOR", "--allow-net", "npm:crash-mcp" ] } } }

If you prefer to run the MCP server in a Docker container:

  1. Build the Docker Image: First, create a Dockerfile in the project root (or anywhere you prefer):

    FROM node:18-alpine
    WORKDIR /app

    # Install the latest version globally
    RUN npm install -g crash-mcp

    # Set environment variables
    ENV MAX_HISTORY_SIZE=100
    ENV CRASH_STRICT_MODE=false
    ENV CRASH_OUTPUT_FORMAT=console
    ENV CRASH_NO_COLOR=false

    # Default command to run the server
    CMD ["crash-mcp"]

    Then, build the image using a tag (e.g., crash-mcp). Make sure Docker Desktop (or the Docker daemon) is running. Run the following command in the same directory where you saved the Dockerfile:

    docker build -t crash-mcp .

  2. Configure Your MCP Client: Update your MCP client's configuration to use the Docker command. Example for a cline_mcp_settings.json:

    {
      "mcpServers": {
        "CRASH": {
          "autoApprove": [],
          "disabled": false,
          "timeout": 60,
          "command": "docker",
          "args": ["run", "-i", "--rm", "crash-mcp"],
          "transportType": "stdio"
        }
      }
    }

The configuration on Windows is slightly different compared to Linux or macOS (Cline is used in the example). The same principle applies to other editors; refer to the configuration of command and args.

{ "mcpServers": { "crash": { "command": "cmd", "args": ["/c", "npx", "-y", "crash-mcp"], "disabled": false, "autoApprove": [] } } }

Add this to your Amazon Q Developer CLI configuration file. See Amazon Q Developer CLI docs for more details.

{ "mcpServers": { "crash": { "command": "npx", "args": ["-y", "crash-mcp"] } } }

See Warp Model Context Protocol Documentation for details.

  1. Navigate to Settings > AI > Manage MCP servers.
  2. Add a new MCP server by clicking the + Add button.
  3. Paste the configuration given below:
{ "CRASH": { "command": "npx", "args": ["-y", "crash-mcp"], "env": { "CRASH_STRICT_MODE": "false", "MAX_HISTORY_SIZE": "100" }, "working_directory": null, "start_on_launch": true } }
  4. Click Save to apply the changes.

See LM Studio MCP Support for more information.

Manual set-up:
  1. Navigate to Program (right side) > Install > Edit mcp.json.
  2. Paste the configuration given below:
{ "mcpServers": { "CRASH": { "command": "npx", "args": ["-y", "crash-mcp"] } } }
  3. Click Save to apply the changes.
  4. Toggle the MCP server on/off from the right-hand side, under Program, or by clicking the plug icon at the bottom of the chat box.

You can configure CRASH MCP in Visual Studio 2022 by following the Visual Studio MCP Servers documentation.

Add this to your Visual Studio MCP config file (see the Visual Studio docs for details):

{ "mcp": { "servers": { "crash": { "type": "stdio", "command": "npx", "args": ["-y", "crash-mcp"] } } } }

For more information and troubleshooting, refer to the Visual Studio MCP Servers documentation.

Add this to your Crush configuration file. See Crush MCP docs for more info.

Crush Local Server Connection
{ "$schema": "https://charm.land/crush.json", "mcp": { "crash": { "type": "stdio", "command": "npx", "args": ["-y", "crash-mcp"] } } }

Open the "Settings" page of the app, navigate to "Plugins," and enter the following JSON:

{ "mcpServers": { "crash": { "command": "npx", "args": ["-y", "crash-mcp"] } } }

Once saved, you can use the crash tool for structured reasoning in your chats. More information is available on BoltAI's Documentation site. For BoltAI on iOS, see this guide.

Edit your Rovo Dev CLI MCP config by running the command below:

acli rovodev mcp

Example config:

Local Server Connection
{ "mcpServers": { "crash": { "command": "npx", "args": ["-y", "crash-mcp"] } } }

To configure CRASH MCP in Zencoder, follow these steps:

  1. Go to the Zencoder menu (...)
  2. From the dropdown menu, select Agent tools
  3. Click on the Add custom MCP
  4. Add the name and server configuration from below, and make sure to hit the Install button
{ "command": "npx", "args": ["-y", "crash-mcp"] }

Once the MCP server is added, you can start using it right away.

See Qodo Gen docs for more details.

  1. Open Qodo Gen chat panel in VSCode or IntelliJ.
  2. Click Connect more tools.
  3. Click + Add new MCP.
  4. Add the following configuration:
Qodo Gen Local Server Connection
{ "mcpServers": { "crash": { "command": "npx", "args": ["-y", "crash-mcp"] } } }

See Local and Remote MCPs for Perplexity for more information.

  1. Navigate to Perplexity > Settings
  2. Select Connectors.
  3. Click Add Connector.
  4. Select Advanced.
  5. Enter Server Name: CRASH
  6. Paste the following JSON in the text area:
{ "args": ["-y", "crash-mcp"], "command": "npx", "env": { "CRASH_STRICT_MODE": "false", "MAX_HISTORY_SIZE": "100" } }
  7. Click Save.

⚙️ Configuration

Environment Variables

| Variable | Description | Default | Options |
| --- | --- | --- | --- |
| CRASH_STRICT_MODE | Enable legacy validation rules | false | true, false |
| MAX_HISTORY_SIZE | Maximum steps to retain | 100 | Any positive integer |
| CRASH_OUTPUT_FORMAT | Output display format | console | console, json, markdown |
| CRASH_NO_COLOR | Disable colored output | false | true, false |
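
As an illustrative example (the values are arbitrary, not recommendations), a client entry that keeps only the last 50 steps and emits uncolored Markdown would set the variables like this:

{
  "mcpServers": {
    "crash": {
      "command": "npx",
      "args": ["-y", "crash-mcp"],
      "env": {
        "MAX_HISTORY_SIZE": "50",
        "CRASH_OUTPUT_FORMAT": "markdown",
        "CRASH_NO_COLOR": "true"
      }
    }
  }
}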

🛠️ Tool Usage

Basic Parameters (Required)

  • step_number: Sequential step number
  • estimated_total: Current estimate of total steps (adjustable)
  • purpose: Step purpose (see Purpose Types below)
  • context: What is already known to avoid redundancy
  • thought: Current reasoning (natural language)
  • outcome: Expected/actual result
  • next_action: Next tool or action (string or structured object)
  • rationale: Why this action is chosen

Enhanced Parameters (Optional)

Confidence & Uncertainty
  • confidence: 0-1 scale confidence level
  • uncertainty_notes: Describe doubts or concerns
Revision Support
  • revises_step: Step number to revise
  • revision_reason: Why revision is needed
Branching
  • branch_from: Step to branch from
  • branch_id: Unique branch identifier
  • branch_name: Descriptive branch name
Tool Integration
  • tools_used: Array of tools used
  • external_context: External data/outputs
  • dependencies: Step numbers this depends on
Session Management
  • session_id: Group related reasoning chains

📝 Purpose Types

Standard Purposes

  • analysis - Analyzing information
  • action - Taking an action
  • reflection - Reflecting on progress
  • decision - Making a decision
  • summary - Summarizing findings
  • validation - Validating results
  • exploration - Exploring options
  • hypothesis - Forming hypotheses
  • correction - Correcting errors
  • planning - Planning approach

Custom Purposes

When not in strict mode, any string can be used as a purpose.
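
For example, in flexible mode a step could use a domain-specific purpose string of its own; the purpose below (risk-assessment) is purely illustrative, not a predefined type:

{
  "step_number": 2,
  "estimated_total": 4,
  "purpose": "risk-assessment",
  "context": "Migration plan drafted, rollback path not yet evaluated",
  "thought": "Rolling back a partially applied schema change is the riskiest part of this plan",
  "outcome": "Identified rollback as the highest-risk area",
  "next_action": "draft rollback procedure",
  "rationale": "Mitigating the highest risk first keeps later steps safe"
}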

💡 Examples

Basic Usage

{ "step_number": 1, "estimated_total": 3, "purpose": "analysis", "context": "User requested optimization of database queries", "thought": "I need to first understand the current query patterns", "outcome": "Identified slow queries for optimization", "next_action": "analyze query execution plans", "rationale": "Understanding execution plans will reveal bottlenecks" }

With Confidence Tracking

{ "step_number": 2, "estimated_total": 5, "purpose": "hypothesis", "context": "Slow queries identified, need optimization strategy", "thought": "The main issue appears to be missing indexes", "outcome": "Hypothesis about missing indexes formed", "next_action": "validate hypothesis with EXPLAIN", "rationale": "Need to confirm before making changes", "confidence": 0.7, "uncertainty_notes": "Could also be due to table statistics" }

Revision Example

{ "step_number": 4, "estimated_total": 5, "purpose": "correction", "context": "Previous analysis was incomplete", "thought": "I missed an important join condition", "outcome": "Corrected analysis with complete information", "next_action": "re-evaluate optimization strategy", "rationale": "New information changes the approach", "revises_step": 2, "revision_reason": "Overlooked critical join in initial analysis" }

Branching Example

{ "step_number": 3, "estimated_total": 6, "purpose": "exploration", "context": "Two possible optimization approaches identified", "thought": "Let me explore the indexing approach first", "outcome": "Branch created for index optimization", "next_action": "test index performance", "rationale": "This approach has lower risk", "branch_from": 2, "branch_id": "index-optimization", "branch_name": "Index-based optimization" }

Structured Action Example

{ "step_number": 5, "estimated_total": 7, "purpose": "action", "context": "Ready to implement optimization", "thought": "Implementing the index creation", "outcome": "Index created successfully", "next_action": { "tool": "sql_executor", "action": "CREATE INDEX", "parameters": { "table": "users", "columns": ["email", "created_at"] }, "expectedOutput": "Index created on users table" }, "rationale": "This index will optimize the most common query pattern", "tools_used": ["sql_executor"], "confidence": 0.9 }

🔄 Backward Compatibility

Strict Mode

Enable strict mode for legacy behavior:

export CRASH_STRICT_MODE=true

In strict mode:

  • Thoughts must start with required prefixes
  • Rationale must start with "To "
  • Only predefined purpose types allowed
  • Original validation rules enforced

Migration Guide

  1. From v1.x to v2.0: No changes required - fully backward compatible
  2. To use new features: Set CRASH_STRICT_MODE=false (default)
  3. Gradual adoption: Enable features individually through configuration

🏗️ Development

# Install dependencies
npm install

# Build TypeScript
npm run build

# Run in development mode
npm run dev

# Start built server
npm start

🚨 Troubleshooting

If you encounter ERR_MODULE_NOT_FOUND, try using bunx instead of npx:

{ "mcpServers": { "crash": { "command": "bunx", "args": ["-y", "crash-mcp"] } } }

This often resolves module resolution issues in environments where npx doesn't properly install or resolve packages.

For errors like Error: Cannot find module, try the --experimental-vm-modules flag:

{ "mcpServers": { "crash": { "command": "npx", "args": ["-y", "--node-options=--experimental-vm-modules", "crash-mcp"] } } }
  1. Try adding @latest to the package name (see the example below)
  2. Use bunx as an alternative to npx
  3. Consider using deno as another alternative
  4. Ensure you're using Node.js v18 or higher for native support
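
For example, pinning to the latest published version keeps the same config shape shown above, with @latest appended to the package name:

{
  "mcpServers": {
    "crash": {
      "command": "npx",
      "args": ["-y", "crash-mcp@latest"]
    }
  }
}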

🎯 Use Cases

When to Use CRASH

  1. Complex Problem Solving: Multi-step tasks requiring systematic approach
  2. Code Analysis & Optimization: Understanding and improving codebases
  3. System Design: Planning architecture with multiple considerations
  4. Debugging: Systematic error investigation with hypothesis testing
  5. Research & Exploration: Investigating multiple solution paths
  6. Decision Making: Evaluating options with confidence tracking

When NOT to Use CRASH

  1. Simple, single-step tasks: Direct action is more efficient
  2. Pure information retrieval: No reasoning required
  3. Time-critical operations: Overhead of structured reasoning
  4. Deterministic procedures: No uncertainty or exploration needed

🔍 Comparison with Sequential Thinking

| Feature | CRASH v2.0 | Sequential Thinking |
| --- | --- | --- |
| Structure | Flexible, configurable | May be more rigid |
| Validation | Optional prefixes | Depends on implementation |
| Revisions | Built-in support | Varies |
| Branching | Native branching | Varies |
| Confidence | Explicit tracking | May not have |
| Tool Integration | Structured actions | Varies |
| Token Efficiency | Optimized, no code in thoughts | Depends on usage |
| Output Formats | Multiple (console, JSON, MD) | Varies |

📊 Performance

  • Memory: Configurable history size prevents unbounded growth
  • Processing: Minimal overhead (~1-2ms per step)
  • Token Usage: Optimized prompts, no code generation in thoughts
  • Scalability: Session management for concurrent chains

🎯 Credits & Inspiration

CRASH is an adaptation and enhancement of the sequential thinking tools from the Model Context Protocol ecosystem, most notably the MCP Sequential Thinking Server.

CRASH builds upon these foundations by adding flexible validation, confidence tracking, revision mechanisms, branching support, and enhanced tool integration while maintaining the core structured reasoning approach.

👨‍💻 Author

Nikko Gonzales

🤝 Contributing

Contributions welcome! Areas for enhancement:

  1. Visualization: Graph/tree view for branches
  2. Persistence: Save/load reasoning sessions
  3. Analytics: Pattern recognition in reasoning
  4. Integration: More MCP tool integrations
  5. Templates: Pre-built reasoning templates

📄 License

MIT

📈 Version History

v2.0.0 (Current)

  • Flexible validation system
  • Confidence tracking
  • Revision mechanism
  • Branching support
  • Structured actions
  • Multiple output formats
  • Session management
  • Backward compatibility

v1.0.0

  • Initial release
  • Basic structured reasoning
  • Required prefixes
  • Five purpose types
  • Console output only
