ThoughtMCP Cognitive Architecture v0.5.0
Production-Ready AI Cognitive Architecture with Human-Like Memory and Reasoning
ThoughtMCP v0.5.0 is a complete rebuild featuring Hierarchical Memory Decomposition (HMD), persistent PostgreSQL storage, parallel reasoning streams, dynamic framework selection, and metacognitive monitoring. Built for production with 95%+ test coverage and sub-200ms retrieval performance.
✅ Status: Production Ready - Complete PostgreSQL-based architecture with advanced cognitive capabilities
What's New in v0.5.0?
ThoughtMCP v0.5.0 is a complete architectural rebuild with production-grade capabilities:
🧠 Hierarchical Memory Decomposition (HMD)
Five-Sector Embeddings: Episodic, Semantic, Procedural, Emotional, and Reflective memory types
Waypoint Graph System: Sparse graph with 1-3 connections per memory for efficient traversal
Composite Scoring: 0.6×similarity + 0.2×salience + 0.1×recency + 0.1×link_weight (see the sketch after this list)
Temporal Decay: Exponential forgetting with automatic reinforcement on access
PostgreSQL Persistence: Production-grade storage with pgvector for vector operations
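To make the composite score and decay concrete, here is a minimal TypeScript sketch. Only the 0.6/0.2/0.1/0.1 weights and the idea of exponential forgetting with reinforcement come from the list above; the type, function names, and the 30-day half-life are illustrative assumptions, not the actual ThoughtMCP API.

```typescript
// Hypothetical sketch of HMD composite scoring with temporal decay.
interface MemoryCandidate {
  similarity: number;   // cosine similarity against the query embedding, 0..1
  salience: number;     // importance assigned at encoding time, 0..1
  lastAccessedAt: Date; // drives the recency / decay term
  linkWeight: number;   // strength of the strongest waypoint link, 0..1
}

// Exponential forgetting: recency decays toward 0 as time since last access grows.
// The 30-day half-life is an assumption for illustration only.
function exponentialDecay(lastAccessedAt: Date, now: Date, halfLifeDays = 30): number {
  const ageDays = (now.getTime() - lastAccessedAt.getTime()) / 86_400_000;
  return Math.pow(0.5, ageDays / halfLifeDays);
}

// Composite score from the formula above.
function compositeScore(m: MemoryCandidate, now = new Date()): number {
  const recency = exponentialDecay(m.lastAccessedAt, now);
  return 0.6 * m.similarity + 0.2 * m.salience + 0.1 * recency + 0.1 * m.linkWeight;
}
```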
⚡ Performance Targets
Sub-200ms Retrieval: p50 <100ms, p95 <200ms, p99 <500ms at 100k memories
Fast Embedding: <500ms for all five sectors
Parallel Reasoning: <30s total, <10s per stream
Efficient Operations: <100ms confidence assessment, <15% bias detection overhead
🔄 Parallel Reasoning Streams
Four Concurrent Streams: Analytical, Creative, Critical, and Synthetic reasoning
Real-Time Coordination: Synchronization at 25%, 50%, and 75% completion (sketched below)
Conflict Preservation: Maintains diverse perspectives in synthesis
Low Overhead: <10% coordination cost
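A rough sketch of how four streams might run concurrently with checkpoint synchronization. The stream and checkpoint types are hypothetical; only the four stream kinds and the 25%/50%/75% sync points come from the list above.

```typescript
// Illustrative concurrency sketch, not ThoughtMCP's actual stream coordinator.
type StreamKind = "analytical" | "creative" | "critical" | "synthetic";
type Checkpoint = (stream: StreamKind, progress: 0.25 | 0.5 | 0.75) => Promise<void>;

async function runStream(kind: StreamKind, problem: string, sync: Checkpoint): Promise<string> {
  const partials: string[] = [];
  for (const progress of [0.25, 0.5, 0.75] as const) {
    partials.push(`${kind} reasoning at ${progress * 100}% for: ${problem}`);
    await sync(kind, progress); // share intermediate state with the other streams
  }
  return partials.join("\n");
}

async function reasonInParallel(problem: string): Promise<Record<StreamKind, string>> {
  const kinds: StreamKind[] = ["analytical", "creative", "critical", "synthetic"];
  // A real coordinator would reconcile streams here while preserving conflicts.
  const sync: Checkpoint = async () => {};
  const results = await Promise.all(kinds.map((k) => runStream(k, problem, sync)));
  return Object.fromEntries(kinds.map((k, i) => [k, results[i]])) as Record<StreamKind, string>;
}
```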
🎯 Dynamic Framework Selection
Eight Frameworks: Scientific Method, Design Thinking, Systems Thinking, Critical Thinking, Creative Problem Solving, Root Cause Analysis, First Principles, Scenario Planning
Auto-Selection: >80% accuracy in choosing the optimal framework (a toy selection sketch follows this list)
Hybrid Support: Combines 2-3 frameworks for complex problems
Adaptive Learning: Improves selection over time
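As a toy illustration of score-based selection, the sketch below ranks the eight frameworks by keyword overlap with the problem statement and returns up to three for hybrid use. The keyword lists and scoring are invented for illustration; the real selector is more sophisticated and adapts over time.

```typescript
// Purely illustrative framework selection; only the framework names come from this README.
const FRAMEWORKS = [
  "Scientific Method", "Design Thinking", "Systems Thinking", "Critical Thinking",
  "Creative Problem Solving", "Root Cause Analysis", "First Principles", "Scenario Planning",
] as const;
type Framework = (typeof FRAMEWORKS)[number];

// Toy heuristic: keyword overlap between the problem statement and each framework.
const KEYWORDS: Record<Framework, string[]> = {
  "Scientific Method": ["hypothesis", "experiment", "evidence"],
  "Design Thinking": ["user", "prototype", "experience"],
  "Systems Thinking": ["feedback", "interaction", "system"],
  "Critical Thinking": ["assumption", "argument", "evaluate"],
  "Creative Problem Solving": ["novel", "brainstorm", "alternative"],
  "Root Cause Analysis": ["failure", "incident", "why"],
  "First Principles": ["fundamental", "constraint", "rebuild"],
  "Scenario Planning": ["future", "uncertainty", "risk"],
};

function selectFrameworks(problem: string, maxHybrid = 3): Framework[] {
  const text = problem.toLowerCase();
  return FRAMEWORKS
    .map((framework) => ({
      framework,
      score: KEYWORDS[framework].filter((kw) => text.includes(kw)).length,
    }))
    .filter((s) => s.score > 0)
    .sort((a, b) => b.score - a.score)
    .slice(0, maxHybrid) // hybrid support: combine the top 2-3 frameworks
    .map((s) => s.framework);
}
```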
🔬 Metacognitive Monitoring
Confidence Calibration: ±10% accuracy between predicted and actual performance (see the sketch below)
Bias Detection: >70% detection rate for 8 bias types (confirmation, anchoring, availability, etc.)
Emotion Detection: >75% accuracy using Circumplex model (valence, arousal, dominance)
Self-Improvement: 5-10% monthly performance improvement
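A small sketch of two of the ideas above: a Circumplex emotion state (valence, arousal, dominance) and the ±10% calibration check. Type and function names are assumptions for illustration only.

```typescript
// Illustrative types; not the actual ThoughtMCP metacognition API.
interface EmotionState {
  valence: number;   // -1 (negative) .. 1 (positive)
  arousal: number;   //  0 (calm)      .. 1 (highly activated)
  dominance: number; //  0 (low control) .. 1 (in control)
}

// A prediction is "well calibrated" when predicted confidence is within ±10%
// of the observed success rate on comparable tasks.
function isCalibrated(predictedConfidence: number, observedAccuracy: number, tolerance = 0.10): boolean {
  return Math.abs(predictedConfidence - observedAccuracy) <= tolerance;
}
```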
🏗️ Production Hardening
95%+ Test Coverage: Comprehensive unit, integration, e2e, performance, and accuracy tests
99.9% Uptime: MTTD <5 minutes, MTTR <1 hour
Local Embeddings: Uses local models (Ollama, E5, BGE) for zero API costs (an example embedding call is sketched after this list)
Horizontal Scaling: Supports up to 1M memories per user
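For reference, a local embedding call against Ollama's HTTP API might look like the sketch below (Node 18+ provides a global fetch). The model name and the 768-dimension note follow the configuration section later in this README; this snippet is illustrative, not ThoughtMCP's internal embedding code.

```typescript
// Fetch a local embedding from Ollama's /api/embeddings endpoint (no external API costs).
async function embed(text: string, model = "nomic-embed-text"): Promise<number[]> {
  const host = process.env.OLLAMA_HOST ?? "http://localhost:11434";
  const res = await fetch(`${host}/api/embeddings`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ model, prompt: text }),
  });
  if (!res.ok) throw new Error(`Ollama embedding request failed: ${res.status}`);
  const data = (await res.json()) as { embedding: number[] };
  return data.embedding; // 768-dimensional for nomic-embed-text
}
```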
Quick Start
Prerequisites
Node.js 18.0+ (LTS recommended)
PostgreSQL 14.0+ with the pgvector extension (a connectivity check is sketched after this list)
Docker (optional, for local PostgreSQL)
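Before installing, you can optionally confirm that PostgreSQL is reachable and pgvector is available. The sketch below uses the pg client library and a DATABASE_URL environment variable; it is a standalone check, not part of ThoughtMCP itself.

```typescript
import { Client } from "pg";

async function checkDatabase(): Promise<void> {
  const client = new Client({ connectionString: process.env.DATABASE_URL });
  await client.connect();
  try {
    const version = await client.query("SHOW server_version");
    console.log("PostgreSQL:", version.rows[0].server_version);

    // pgvector must be available for vector similarity search to work.
    const ext = await client.query(
      "SELECT name, installed_version FROM pg_available_extensions WHERE name = 'vector'"
    );
    if (ext.rowCount === 0) {
      console.warn("pgvector is not available on this server");
    } else {
      console.log("pgvector:", ext.rows[0].installed_version ?? "available but not installed");
    }
  } finally {
    await client.end();
  }
}

checkDatabase().catch(console.error);
```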
Installation
Development Setup
Build Process - Quality Gates
The build process automatically enforces all quality standards:
Automatic Quality Pipeline:
✅ Clean build artifacts
✅ Auto-format code with Prettier
✅ Security audit (moderate+ vulnerabilities)
✅ Format verification
✅ Lint code quality
✅ Type check TypeScript
✅ Run complete test suite
✅ Generate TypeScript declarations
✅ Create production bundle
Build fails if ANY step fails. Zero tolerance for security vulnerabilities, formatting issues, linting errors, type errors, or test failures.
Docker Deployment
ThoughtMCP uses a unified Docker Compose approach with separate files for development, testing, and production. Docker Compose files are the single source of truth for all container configuration.
Docker Compose Files
| Purpose | When to Use |
| ------- | ----------- |
| Development environment | Local development |
| Test containers | Automated tests or manual test runs |
| Production deployment | Deploying the full MCP server stack |
Environment Configuration
All configuration comes from .env files. Copy the template and configure:
Key variables in .env:
Quick Start: Development
Quick Start: Testing (Auto Containers)
Tests automatically start and stop containers via the TestContainerManager:
For manual container management:
Quick Start: Production
📖 Complete Docker Deployment Guide
MCP Server Configuration
Configure ThoughtMCP as an MCP server in .kiro/settings/mcp.json. There are two connection methods:
Option 1: Docker Exec (Recommended for Production)
Connect directly to the running Docker container. The container runs in standby mode, waiting for MCP client connections.
Option 2: Local Node Process (Development)
Run the MCP server locally while connecting to Docker services.
Environment Variables (for local node process):
DATABASE_URL - PostgreSQL connection string
EMBEDDING_MODEL - Embedding model (nomic-embed-text, mxbai-embed-large)
EMBEDDING_DIMENSION - Model-specific dimension (768 for nomic-embed-text)
OLLAMA_HOST - Ollama server URL
LOG_LEVEL - Logging level (DEBUG, INFO, WARN, ERROR)
NODE_ENV - Environment (development, production, test)
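For illustration, a minimal TypeScript loader for the variables above might look like this. The defaults and validation shown here are assumptions; consult the configuration guide for the authoritative behavior.

```typescript
// Hypothetical environment loader for the variables listed above.
interface ThoughtMcpEnv {
  databaseUrl: string;
  embeddingModel: string;
  embeddingDimension: number;
  ollamaHost: string;
  logLevel: "DEBUG" | "INFO" | "WARN" | "ERROR";
  nodeEnv: "development" | "production" | "test";
}

function loadEnv(env = process.env): ThoughtMcpEnv {
  const required = (name: string): string => {
    const value = env[name];
    if (!value) throw new Error(`Missing required environment variable: ${name}`);
    return value;
  };
  return {
    databaseUrl: required("DATABASE_URL"),
    embeddingModel: env.EMBEDDING_MODEL ?? "nomic-embed-text",
    embeddingDimension: Number(env.EMBEDDING_DIMENSION ?? 768),
    ollamaHost: env.OLLAMA_HOST ?? "http://localhost:11434",
    logLevel: (env.LOG_LEVEL ?? "INFO") as ThoughtMcpEnv["logLevel"],
    nodeEnv: (env.NODE_ENV ?? "development") as ThoughtMcpEnv["nodeEnv"],
  };
}
```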
📖 Complete Configuration Guide
Architecture Overview
Core Components
Key Features
HMD Memory System: Five-sector embeddings with waypoint graph
Parallel Reasoning: Four concurrent streams (Analytical, Creative, Critical, Synthetic)
Framework Selection: Eight systematic thinking frameworks
Metacognition: Confidence calibration, bias detection, emotion analysis
Production Ready: 95%+ test coverage, sub-200ms retrieval, 99.9% uptime
MCP Tools Overview
ThoughtMCP exposes cognitive capabilities through MCP tools:
| Category | Description |
| -------- | ----------- |
| Memory | Persistent memory with five-sector embeddings |
| Reasoning | Multi-stream reasoning with framework selection |
| Metacognitive | Self-monitoring and quality assessment |
📖 Complete MCP Tools Reference
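To give a feel for calling these tools programmatically, here is a sketch using the official @modelcontextprotocol/sdk TypeScript client over stdio. The tool name store_memory and its arguments are hypothetical placeholders; use the actual tool names from the reference above, and adjust the server command to your setup.

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

async function main(): Promise<void> {
  // Launch the server as a child process over stdio (command/args are assumptions).
  const transport = new StdioClientTransport({
    command: "node",
    args: ["dist/index.js"],
  });
  const client = new Client({ name: "thoughtmcp-example", version: "0.1.0" }, { capabilities: {} });
  await client.connect(transport);

  // Discover the tools the server actually exposes.
  const { tools } = await client.listTools();
  console.log("Available tools:", tools.map((t) => t.name));

  // Hypothetical call; replace the name and arguments with a real tool from the reference.
  const result = await client.callTool({
    name: "store_memory",
    arguments: { content: "The user prefers concise answers", type: "semantic" },
  });
  console.log(result);

  await client.close();
}

main().catch(console.error);
```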
Documentation
📚 Essential Guides
Getting started and basic usage
Complete API documentation
MCP tool schemas and examples
System design and components
🔧 Configuration & Operations
Environment variables reference
PostgreSQL setup and schema
Docker Compose deployment guide
Production deployment guide
Observability and alerting
💻 Development
Development workflow and setup
Testing strategies and TDD
Contribution guidelines
Build system and optimization
🔗 Quick Links
Quick Start - Get up and running in minutes
Architecture Overview - System architecture diagram
MCP Configuration - Configure as MCP server
Contributing - How to contribute to the project
Why ThoughtMCP v0.5.0?
Production-Grade Architecture
PostgreSQL Persistence: Cross-session memory with pgvector for vector operations
Sub-200ms Retrieval: p50 <100ms, p95 <200ms, p99 <500ms at 100k memories
95%+ Test Coverage: Comprehensive unit, integration, e2e, performance, and accuracy tests
99.9% Uptime: MTTD <5 minutes, MTTR <1 hour with graceful degradation
Local Embeddings: Uses local models (Ollama, E5, BGE) for zero API costs
Advanced Cognitive Capabilities
HMD Memory System: Five-sector embeddings (Episodic, Semantic, Procedural, Emotional, Reflective)
Parallel Reasoning: Four concurrent streams with real-time coordination
Framework Selection: Eight systematic thinking frameworks with >80% selection accuracy
Metacognition: Confidence calibration (±10%), bias detection (>70%), emotion analysis (>75%)
Self-Improvement: 5-10% monthly performance improvement through learning
Developer Experience
Test-Driven Development: Strict TDD with comprehensive test utilities
Clear Documentation: Development, testing, database, and configuration guides
Modern Stack: TypeScript 5.0+, Vitest, PostgreSQL 14+, pgvector
Quality Standards: Zero TypeScript errors, zero ESLint warnings, formatted with Prettier
Open Source: MIT license, active development, extensible architecture
Contributing
We welcome contributions! ThoughtMCP v0.5.0 is a complete rebuild following strict quality standards.
Development Workflow
Fork and Clone: Fork the repository and clone locally
Setup Environment: Follow Development Guide
Create Branch: git checkout -b feature/your-feature
Follow TDD: Write tests first, then implementation
Run Validation: npm run validate (format, lint, typecheck, test)
Submit PR: Create pull request with clear description
Quality Standards
Test-Driven Development: Write failing tests first
95%+ Coverage: Line coverage 95%+, branch coverage 90%+ (see the Vitest threshold sketch after this list)
Zero Warnings: No TypeScript errors, no ESLint warnings
All Tests Pass: No exceptions, no skipped tests without a plan
Clear Commits: Follow conventional commits format
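One way to enforce the coverage bars above is through Vitest's coverage thresholds (Vitest 1.x or later). The exact configuration in this repository may differ; this is a sketch of the idea.

```typescript
// vitest.config.ts - illustrative coverage threshold enforcement.
import { defineConfig } from "vitest/config";

export default defineConfig({
  test: {
    coverage: {
      provider: "v8",
      thresholds: {
        lines: 95,
        statements: 95,
        branches: 90,
        functions: 95, // assumed; only line and branch targets are stated above
      },
    },
  },
});
```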
Key Resources
Development Guide - Complete development workflow
Testing Guide - TDD principles and test utilities
Contributing Guide - How to contribute
Architecture Guide - System design and components
Community and Support
📚 Documentation: docs/ - Comprehensive guides
💬 GitHub Discussions: Ask questions and share ideas
🐛 Issues: Report bugs and request features
🤝 Contributing: See Contributing section
📧 Contact: @keyurgolani
Project Status
✅ Production Ready: Complete PostgreSQL-based cognitive architecture
✅ All Phases Complete: HMD memory, reasoning, metacognition, production hardening
✅ Fully Tested: 3457 tests, 96%+ statement coverage, 91%+ branch coverage
✅ All Accuracy Targets Met: Confidence ±10%, Bias >70%, Emotion >75%, Framework >80%
✅ Documentation Complete: User guides, API docs, deployment guides, examples
License
MIT License - see LICENSE for details
Building Production-Ready AI Cognitive Architecture
🚀 [Get Started](#quick-start) | 📚 Documentation | 🤝 Contribute | 💬 Discussions