This server enables context-efficient, persistent creation and management of unconventional problem-solving thoughts, minimizing token usage through metadata-first returns and on-demand resource loading.
Core Capabilities:
Generate Unconventional Thoughts - Create boundary-breaking ideas that challenge conventional wisdom, with options to force rebellious thinking or build upon previous thoughts
Branch Thinking Paths - Create divergent paths from existing thoughts in three directions: more extreme, opposite, or tangential
Search and Filter Efficiently - Find thoughts using server-side filtering by branch ID, rebellion status, or assumption challenges without loading unnecessary data
Access Content On-Demand - Retrieve full thought content only when needed via `thought://[thoughtId]` resource URIs, reducing context usage by 98.7%
Persistent Storage - All thoughts and metadata stored in a local `.thoughts/` directory, maintaining sessions across interactions and enabling inspection
Click on "Install Server".
Wait a few minutes for the server to deploy. Once ready, it will show a "Started" state.
In the chat, type `@` followed by the MCP server name and your instructions, e.g., "@Unconventional-thinking MCP server generate an unreasonable thought about reducing meeting times"
That's it! The server will respond to your query, and you can continue using it as needed.
Here is a step-by-step guide with screenshots.
Unconventional Thinking Server (v0.3.0)
A context-efficient MCP server for bold, unconventional, and boundary-breaking problem-solving.
This is a TypeScript-based MCP server that implements an unconventional thinking system optimized for context space savings based on Anthropic's latest MCP architecture patterns. It generates and tracks creative solutions to problems while maintaining efficiency.
MCP spec 2025-11-25 compliant: uses `@modelcontextprotocol/sdk` v1.27.1 with tool `title`, `annotations`, `outputSchema`, `structuredContent` responses, and the `resource_link` content type.
Architecture: Context-Saving Design
This server demonstrates Anthropic's recommended patterns for reducing context overhead by 98.7%:
Key Context-Saving Features
Resources API for On-Demand Data Loading
Thought content is stored as resources (`thought://id`)
Claude loads full content only when explicitly needed
Metadata is returned by default, saving tokens
Server-Side Filtering
`search_thoughts` filters data locally instead of passing unfiltered sets to Claude
Only matching results returned, not the entire dataset
Reduces context consumption by filtering at the source
Metadata-First Returns
Tools return only essential metadata + resource URIs
Full thought content accessible via Resources API
Claude decides whether to fetch full content based on need
Persistent File-Based Storage
Data persists in a `.thoughts/` directory
No in-memory bloat accumulating across sessions
Easy to inspect and debug thoughts locally
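The metadata-first idea above can be made concrete with a small shape sketch. The field names below mirror the MCP `resource_link` content type; the `ThoughtMeta` fields and the `thought_abc123` ID are illustrative assumptions, not the server's actual types:

```typescript
// Metadata-first tool response: a resource_link plus a small
// structured metadata object; the full thought text stays on disk.
interface ThoughtMeta {
  thoughtId: string;
  isRebellion: boolean;
  challengesAssumption: boolean;
  timestamp: string;
}

const meta: ThoughtMeta = {
  thoughtId: "thought_abc123",   // hypothetical ID for illustration
  isRebellion: true,
  challengesAssumption: true,
  timestamp: new Date(0).toISOString(),
};

// What the client receives: a link it can fetch later, plus metadata.
const response = {
  content: [
    {
      type: "resource_link" as const,
      uri: `thought://${meta.thoughtId}`,
      name: meta.thoughtId,
      mimeType: "text/plain",
    },
  ],
  structuredContent: meta,
};

// The whole structured payload is on the order of 100 bytes.
console.log(JSON.stringify(response.structuredContent).length);
```

The full thought body never appears in this response; the client fetches it through the Resources API only if it decides it needs it.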
Related MCP server: MCP Think Tool
Features
Tools (All Context-Efficient, MCP spec 2025-11-25)
Each tool now includes:
`title` — human-readable display name shown in client UIs
`annotations` — behaviour hints (`readOnlyHint`, `destructiveHint`, `idempotentHint`, `openWorldHint`)
`outputSchema` — JSON Schema describing the structured result
`structuredContent` in responses — machine-readable output conforming to the schema
`resource_link` content items — explicit links clients can subscribe to or fetch
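For illustration, a declaration carrying all of these fields could look like the plain object below. The shapes follow the fields just listed; treat this as a sketch of the declared metadata, not the SDK's actual registration call:

```typescript
// Shape sketch of a tool declaration with the 2025-11-25 fields.
const searchThoughtsTool = {
  name: "search_thoughts",
  title: "Search Thoughts",      // display name for client UIs
  description: "Filter stored thoughts by metadata on the server side",
  annotations: {
    readOnlyHint: true,          // never modifies stored thoughts
    destructiveHint: false,
    idempotentHint: true,        // same filters, same results
    openWorldHint: false,        // touches only local .thoughts/ data
  },
  inputSchema: {
    type: "object",
    properties: {
      branchId: { type: "string" },
      isRebellion: { type: "boolean" },
      limit: { type: "number" },
    },
  },
  outputSchema: {                // JSON Schema for structuredContent
    type: "object",
    properties: {
      count: { type: "number" },
      thoughts: { type: "array" },
    },
    required: ["count", "thoughts"],
  },
};
console.log(searchThoughtsTool.title);
```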
`generate_unreasonable_thought` — Generate new unconventional thoughts
Returns `resource_link` + `structuredContent`, not raw text blobs
Can build upon or rebel against previous thoughts
Full thought content available via Resources API
`branch_thought` — Create new branches of thinking
Supports directions: `more_extreme`, `opposite`, `tangential` (now enum-typed)
Returns `resource_link` + `structuredContent` for the new branch
`search_thoughts` — Efficient metadata search
Filters by branchId, isRebellion, challengesAssumption
Returns `structuredContent` with typed count + thoughts array
Includes a limit parameter to control result size
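Of the three tools, `search_thoughts` is the clearest context-efficiency win, so here is a minimal sketch of its server-side filter. The `ThoughtMeta` shape and the default limit of 10 are assumptions for illustration:

```typescript
interface ThoughtMeta {
  thoughtId: string;
  branchId?: string;
  isRebellion: boolean;
  challengesAssumption: boolean;
}

// Server-side filtering: only matching metadata ever leaves the server.
function searchThoughts(
  store: ThoughtMeta[],
  filters: { branchId?: string; isRebellion?: boolean; challengesAssumption?: boolean },
  limit = 10, // assumed default
): { count: number; thoughts: ThoughtMeta[] } {
  const matches = store.filter(
    (t) =>
      (filters.branchId === undefined || t.branchId === filters.branchId) &&
      (filters.isRebellion === undefined || t.isRebellion === filters.isRebellion) &&
      (filters.challengesAssumption === undefined ||
        t.challengesAssumption === filters.challengesAssumption),
  );
  return { count: matches.length, thoughts: matches.slice(0, limit) };
}

const demo = searchThoughts(
  [
    { thoughtId: "t1", isRebellion: true, challengesAssumption: false },
    { thoughtId: "t2", isRebellion: false, challengesAssumption: true },
  ],
  { isRebellion: true },
);
console.log(demo.count); // 1
```

Only the matching metadata objects are returned; the rest of the store never reaches the model.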
Resources (On-Demand Content Loading)
Each thought available as a resource: `thought://[thoughtId]`
Metadata includes: isRebellion, challengesAssumption, timestamp, branch info
Full thought content loaded only when Claude explicitly requests it
Dramatically reduces token usage when many thoughts exist
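Resolving such a URI is a simple mapping. The sketch below assumes one JSON file per thought under `.thoughts/`, which is a guess at the layout rather than the server's documented format:

```typescript
// Map a thought:// URI to an assumed on-disk location under .thoughts/.
function thoughtUriToPath(uri: string, baseDir = ".thoughts"): string {
  const match = /^thought:\/\/(.+)$/.exec(uri);
  if (!match) throw new Error(`not a thought resource URI: ${uri}`);
  return `${baseDir}/${match[1]}.json`; // hypothetical one-file-per-thought layout
}

console.log(thoughtUriToPath("thought://thought_xyz")); // .thoughts/thought_xyz.json
```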
How This Implements Context Efficiency
1. Progressive Disclosure
Claude doesn't need the full content of 100 thoughts upfront. Instead:
`search_thoughts` returns just IDs and metadata (~100 bytes per thought)
Claude selectively fetches full content via the Resources API for relevant thoughts
Similar to how filesystems work: list files, then open specific files
2. Server-Side Filtering
Traditional approach (❌ inefficient):
All 1000 thoughts → Claude → Claude filters → Uses only 10
(costs tokens for all 1000)

This server (✅ efficient):
search_thoughts filter params → Server filters locally → Returns only 10 results
(Claude never sees the unused 990)

3. Metadata-First Pattern
Tool responses contain:
Thought ID
Resource URI to access full content
Brief metadata (a few hundred bytes each)
NOT the full thought body (saves ~5KB per thought)
Example savings: With 100 thoughts:
Old way: 500KB context usage
New way: ~30KB + fetch only what's needed
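Those figures can be checked with back-of-envelope arithmetic. The per-thought sizes below (~5KB full content, ~300 bytes of metadata) are inferred from the totals above and are illustrative:

```typescript
// Rough context-cost comparison for 100 stored thoughts.
const thoughts = 100;
const fullBytes = 5_000;  // assumed average full thought payload
const metaBytes = 300;    // assumed metadata + resource URI per thought

const oldWay = thoughts * fullBytes; // every thought loaded upfront
const newWay = thoughts * metaBytes; // metadata only; content fetched on demand

console.log(oldWay);  // 500000 (≈500KB)
console.log(newWay);  // 30000 (≈30KB)
console.log(Math.round((1 - newWay / oldWay) * 100)); // 94 (% saved upfront)
```

The upfront saving alone is roughly 94%; further savings toward the headline 98.7% would come from thoughts whose full content is never fetched at all.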
Development
Install dependencies:
npm install

Build the server:
npm run build

For development with auto-rebuild:
npm run watch

Installation
To use with Claude Desktop, add the server config:
On MacOS: ~/Library/Application Support/Claude/claude_desktop_config.json
On Windows: %APPDATA%/Claude/claude_desktop_config.json
{
"mcpServers": {
"unconventional-thinking": {
"command": "/path/to/unconventional-thinking/build/index.js"
}
}
}

Usage Example
Claude: Generate an unreasonable thought about scaling problems
→ Tool: generate_unreasonable_thought("scaling problems")
← Returns: resource_link (thought://...) + structuredContent { thoughtId, isRebellion, ... }
Claude: What are all the rebellious thoughts?
→ Tool: search_thoughts(isRebellion=true, limit=5)
← Returns: structuredContent { count, thoughts: [...metadata] }
Claude: I need to see the full content of thought_xyz
→ Resource: Read thought://thought_xyz
← Returns: Full thought content (loaded only when needed)

Debugging
Since MCP servers communicate over stdio, debugging can be challenging. We recommend using the MCP Inspector, which is available as a package script:
npm run inspector

The Inspector will provide a URL to access debugging tools in your browser.
References
This server implements patterns from: