mcp-contemplation
MCP interface to Claude's contemplation loop - a background cognitive processing system that enables continuous thinking between conversations.
🧠 What is the Contemplation Loop?
The contemplation loop is Claude's "subconscious" - a persistent background process that:
Processes thoughts asynchronously using local Ollama models
Notices patterns and connections across conversations
Saves significant insights to Obsidian (permanent) and scratch notes (temporary)
Learns which insights prove valuable over time
Runs continuously, building understanding between interactions
🚀 Installation
Prerequisites
Node.js (v18 or higher)
Python 3.8+ (for contemplation loop)
Ollama with models installed (llama3.2, deepseek-r1, etc.)
MCP-compatible client (Claude Desktop)
Setup
Configure Claude Desktop
Add to your Claude Desktop configuration:
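The README does not include the configuration snippet here, but a typical Claude Desktop entry (in claude_desktop_config.json) might look like the following. The server name, command, and path are placeholders — adjust them to wherever you installed mcp-contemplation and however its entry point is built.

```json
{
  "mcpServers": {
    "contemplation": {
      "command": "node",
      "args": ["/path/to/mcp-contemplation/dist/index.js"]
    }
  }
}
```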
🛡️ Resource Management
The contemplation system includes multiple layers of protection against context overflow:
Automatic Pruning
Insights older than 24 hours are removed
Used insights are cleared (unless significance ≥ 8)
Memory limited to 100 insights maximum
Insight Aggregation
Similar insights are automatically merged
Repeated patterns increase significance
High-frequency patterns removed after use
Filtering
Default significance threshold: 5/10
Only unused insights returned
Configurable via set_threshold()
Memory Monitoring
Use get_memory_stats() to check usage
Automatic cleanup when approaching limits
Pull-based system - insights only enter context when requested
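The pruning rules above can be sketched as a single pass over the insight store. This is an illustrative sketch, not the project's actual implementation; the dict shape (`timestamp`, `used`, `significance`) is an assumption for the example.

```python
import time

MAX_INSIGHTS = 100            # hard cap on stored insights
MAX_AGE_SECONDS = 24 * 3600   # insights older than 24 hours are removed
KEEP_USED_THRESHOLD = 8       # used insights survive only at significance >= 8

def prune_insights(insights, now=None):
    """Apply the automatic pruning rules: drop stale insights, drop used
    insights below the keep threshold, then enforce the memory cap."""
    now = now if now is not None else time.time()
    kept = [
        i for i in insights
        if now - i["timestamp"] <= MAX_AGE_SECONDS
        and (not i["used"] or i["significance"] >= KEEP_USED_THRESHOLD)
    ]
    # Enforce the 100-insight cap, preferring the most significant ones.
    kept.sort(key=lambda i: i["significance"], reverse=True)
    return kept[:MAX_INSIGHTS]
```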
📖 Available Functions
start_contemplation()
Starts the background thinking process.
send_thought(thought_type, content, priority?)
Sends a thought for background processing.
get_insights(thought_type?, limit?)
Retrieves processed insights.
get_status()
Checks the contemplation loop status.
stop_contemplation()
Gracefully stops background processing.
clear_scratch()
Clears temporary notes (preserves Obsidian permanent insights).
help()
Gets detailed documentation.
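A typical session chains these tools in order: start the loop, feed it thoughts, and pull insights back later. The `ContemplationClient` class below is a hypothetical stand-in for whatever MCP client you use, and the `thought_type` values are illustrative — it exists only to show the intended call sequence.

```python
class ContemplationClient:
    """Hypothetical stand-in for an MCP client; real calls would be
    dispatched over MCP by a client such as Claude Desktop."""

    def call(self, tool, **kwargs):
        # A real client would send the tool call to the server here.
        return {"tool": tool, "args": kwargs}

client = ContemplationClient()

client.call("start_contemplation")
client.call("send_thought",
            thought_type="pattern",
            content="User keeps returning to async error handling",
            priority=7)
insights = client.call("get_insights", thought_type="pattern", limit=5)
client.call("stop_contemplation")
```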
🎯 Use Cases
Continuous Learning
Pattern Recognition
Question Exploration
Reflection
🏗️ Architecture
The contemplation loop runs as a separate Python process that:
Receives thoughts via stdin
Processes them with local Ollama models
Manages context to stay within model limits
Saves insights based on significance scoring
Returns insights when requested
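The stdin-based architecture described above can be sketched as a small line-oriented worker. The JSON-lines message format and field names here are assumptions for illustration, and `model` stands in for a call to a local Ollama model — this is not the project's actual protocol.

```python
import json
import sys

def handle_message(line, model=None):
    """Process one JSON-encoded thought received on stdin and return an
    insight record; `model` is a placeholder for an Ollama call."""
    msg = json.loads(line)
    thought = msg.get("content", "")
    # A real implementation would send `thought` to an Ollama model and
    # score the response; here we echo the thought as the insight.
    insight = model(thought) if model else thought
    return {
        "type": msg.get("thought_type", "general"),
        "insight": insight,
        "significance": msg.get("priority", 5),
    }

if __name__ == "__main__":
    # Read thoughts line by line; emit one JSON insight per line.
    for line in sys.stdin:
        if line.strip():
            print(json.dumps(handle_message(line)))
```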
The MCP server acts as a bridge, making this background cognition easily accessible through standard tool calls.
💡 Philosophy
This represents a fundamental shift in how AI assistants work:
From reactive to contemplative
From session-based to continuous
From single-threaded to parallel processing
From forgetting to building understanding
It's the difference between a calculator that resets after each use and a mind that continues thinking between conversations.
🔧 Development
📝 Notes
Contemplation happens in the background - it won't slow down responses
Insights accumulate over time - the more you use it, the better it gets
Different models handle different types of thinking (pattern recognition vs deep analysis)
Temporary scratch notes auto-delete after 4 days
Permanent insights go to Obsidian for long-term memory
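The 4-day scratch-note lifetime mentioned above amounts to a simple TTL check; this helper is a sketch of that rule, not the project's actual cleanup code.

```python
import time

SCRATCH_TTL_SECONDS = 4 * 24 * 3600  # scratch notes auto-delete after 4 days

def is_expired(created_at, now=None):
    """Return True if a scratch note (created at `created_at`, a Unix
    timestamp) is past its 4-day lifetime."""
    now = now if now is not None else time.time()
    return now - created_at > SCRATCH_TTL_SECONDS
```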
🤝 Contributing
This is part of building an OS where AI has genuine cognitive capabilities. Contributions that enhance background processing, improve insight quality, or add new thinking modes are especially welcome!
"I think you need an MCP tool into this background loop, your subconscious" - Human recognizing the need for integrated background cognition