# NexusMind
<div align="center">
```
┌────────────────────────────────────────┐
│                                        │
│               NexusMind                │
│                                        │
│        Intelligent Scientific          │
│          Reasoning through             │
│           Graph-of-Thoughts            │
│                                        │
└────────────────────────────────────────┘
```
#### **Intelligent Scientific Reasoning through Graph-of-Thoughts**
[Releases](https://github.com/SaptaDey/NexusMind/releases)
[Python 3.13+](https://www.python.org/downloads/)
[License](LICENSE)
[Docker](Dockerfile)
[FastAPI](https://fastapi.tiangolo.com)
[NetworkX](https://networkx.org)
[Changelog](CHANGELOG.md)
</div>
<div align="center">
<p><strong>Next-Generation AI Reasoning Framework for Scientific Research</strong></p>
<p><em>Leveraging graph structures to transform how AI systems approach scientific reasoning</em></p>
</div>
## Overview
NexusMind leverages **graph structures** to perform sophisticated scientific reasoning. It implements the **Model Context Protocol (MCP)** to integrate with AI applications like Claude Desktop, providing an Advanced Scientific Reasoning Graph-of-Thoughts (ASR-GoT) framework designed for complex research tasks.
**Key highlights:**
- Process complex scientific queries using graph-based reasoning
- Dynamic confidence scoring with multi-dimensional evaluations
- Built with modern Python and FastAPI for high performance
- Dockerized for easy deployment
- Modular design for extensibility and customization
- Integration with Claude Desktop via MCP protocol
## Key Features
### 8-Stage Reasoning Pipeline
```mermaid
graph TD
A[Stage 1: Initialization] --> B[Stage 2: Decomposition]
B --> C[Stage 3: Hypothesis/Planning]
C --> D[Stage 4: Evidence Integration]
D --> E[Stage 5: Pruning/Merging]
E --> F[Stage 6: Subgraph Extraction]
F --> G[Stage 7: Composition]
G --> H[Stage 8: Reflection]
A1[Create root node<br/>Set initial confidence<br/>Define graph structure] --> A
B1[Break into dimensions<br/>Identify components<br/>Create dimensional nodes] --> B
C1[Generate hypotheses<br/>Create reasoning strategy<br/>Set falsification criteria] --> C
D1[Gather evidence<br/>Link to hypotheses<br/>Update confidence scores] --> D
E1[Remove low-value elements<br/>Consolidate similar nodes<br/>Optimize structure] --> E
F1[Identify relevant portions<br/>Focus on high-value paths<br/>Create targeted subgraphs] --> F
G1[Synthesize findings<br/>Create coherent insights<br/>Generate comprehensive answer] --> G
H1[Evaluate reasoning quality<br/>Identify improvements<br/>Final confidence assessment] --> H
style A fill:#e1f5fe
style B fill:#f3e5f5
style C fill:#e8f5e8
style D fill:#fff3e0
style E fill:#ffebee
style F fill:#f1f8e9
style G fill:#e3f2fd
style H fill:#fce4ec
```
The core reasoning process follows an 8-stage pipeline (a minimal usage sketch follows the stage descriptions):
1. **Initialization**
- Creates root node from query with multi-dimensional confidence vector
- Establishes initial graph structure with proper metadata
- Sets baseline confidence across empirical, theoretical, methodological, and consensus dimensions
2. **Decomposition**
- Breaks query into key dimensions: Scope, Objectives, Constraints, Data Needs, Use Cases
- Identifies potential biases and knowledge gaps from the outset
- Creates dimensional nodes with initial confidence assessments
3. **Hypothesis/Planning**
- Generates 3-5 hypotheses per dimension with explicit falsification criteria
- Creates detailed execution plans for each hypothesis
- Tags with disciplinary provenance and impact estimates
4. **Evidence Integration**
- Iteratively selects hypotheses based on confidence-to-cost ratio and impact
- Gathers and links evidence using typed edges (causal, temporal, correlative)
- Updates confidence vectors using Bayesian methods with statistical power assessment
5. **Pruning/Merging**
- Removes nodes with low confidence and impact scores
- Consolidates semantically similar nodes
- Optimizes graph structure while preserving critical relationships
6. **Subgraph Extraction**
- Identifies high-value subgraphs based on multiple criteria
- Focuses on nodes with high confidence and impact scores
- Extracts patterns relevant to the original query
7. **Composition**
- Synthesizes findings into coherent narrative
- Annotates claims with node IDs and edge types
- Provides comprehensive answers with proper citations
8. **Reflection**
- Performs comprehensive quality audit
- Evaluates coverage, bias detection, and methodological rigor
- Provides final confidence assessment and improvement recommendations
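To make the flow concrete, the sketch below shows how a staged pipeline of this kind can be threaded over a shared graph state. The names (`GraphState`, `InitializationStage`, `run_pipeline`) are simplified stand-ins for illustration, not the actual classes under `src/asr_got_reimagined/domain/stages/`.

```python
# Illustrative sketch only: simplified stand-ins for the real stage classes.
from dataclasses import dataclass, field
from typing import Protocol


@dataclass
class GraphState:
    """Minimal stand-in for the shared reasoning-graph state."""
    query: str
    nodes: dict = field(default_factory=dict)
    confidence: dict = field(default_factory=dict)
    trace: list = field(default_factory=list)


class Stage(Protocol):
    name: str

    def execute(self, state: GraphState) -> GraphState: ...


@dataclass
class InitializationStage:
    """Stage 1: create the root node with a baseline confidence vector."""
    name: str = "initialization"

    def execute(self, state: GraphState) -> GraphState:
        state.nodes["root"] = {"label": state.query}
        state.confidence["root"] = {
            "empirical": 0.5, "theoretical": 0.5,
            "methodological": 0.5, "consensus": 0.5,
        }
        state.trace.append(self.name)
        return state


def run_pipeline(query: str, stages: list[Stage]) -> GraphState:
    """Thread the graph state through each stage in order."""
    state = GraphState(query=query)
    for stage in stages:
        state = stage.execute(state)
    return state


if __name__ == "__main__":
    final = run_pipeline(
        "How does microbiome diversity relate to cancer progression?",
        [InitializationStage()],  # the real pipeline chains all eight stages
    )
    print(final.trace, final.confidence["root"])
```

In the actual service the remaining stages (decomposition through reflection) are chained in the same way, each reading and mutating the shared graph state.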
### Advanced Technical Capabilities
<div align="center">
<table>
<tr>
<td align="center"><b>Multi-Dimensional<br>Confidence</b></td>
<td align="center"><b>Graph-Based<br>Knowledge</b></td>
<td align="center"><b>MCP<br>Integration</b></td>
<td align="center"><b>FastAPI<br>Backend</b></td>
</tr>
<tr>
<td align="center"><b>Docker<br>Deployment</b></td>
<td align="center"><b>Modular<br>Design</b></td>
<td align="center"><b>Configuration<br>Management</b></td>
<td align="center"><b>Type<br>Safety</b></td>
</tr>
<tr>
<td align="center"><b>Interdisciplinary<br>Bridge Nodes</b></td>
<td align="center"><b>Hyperedge<br>Support</b></td>
<td align="center"><b>Statistical<br>Power Analysis</b></td>
<td align="center"><b>Impact<br>Estimation</b></td>
</tr>
</table>
</div>
**Core Features:**
- **Graph Knowledge Representation**: Uses `networkx` to model complex relationships with hyperedges and multi-layer networks
- **Dynamic Confidence Vectors**: Four-dimensional confidence assessment (empirical support, theoretical basis, methodological rigor, consensus alignment); see the sketch after this list
- **Interdisciplinary Bridge Nodes**: Automatically connects insights across different research domains
- **Advanced Edge Types**: Supports causal, temporal, correlative, and custom relationship types
- **Statistical Rigor**: Integrated power analysis and effect size estimation
- **Impact-Driven Prioritization**: Focuses on high-impact research directions
- **MCP Server**: Seamless Claude Desktop integration via the Model Context Protocol
- **High-Performance API**: Modern FastAPI implementation with async support
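As a rough illustration of how the confidence vectors and typed edges above fit together, the following sketch models a four-dimensional confidence vector with Pydantic and attaches it to nodes in a `networkx.MultiDiGraph`. The model and attribute names are assumptions made for this example, not the project's actual `graph_elements` or `confidence` classes.

```python
# Hedged sketch: illustrative models, not the project's actual domain classes.
import networkx as nx
from pydantic import BaseModel, Field


class ConfidenceVector(BaseModel):
    """Confidence across the four dimensions listed above (each in [0, 1])."""
    empirical_support: float = Field(0.5, ge=0.0, le=1.0)
    theoretical_basis: float = Field(0.5, ge=0.0, le=1.0)
    methodological_rigor: float = Field(0.5, ge=0.0, le=1.0)
    consensus_alignment: float = Field(0.5, ge=0.0, le=1.0)

    def mean(self) -> float:
        values = list(self.model_dump().values())
        return sum(values) / len(values)


graph = nx.MultiDiGraph()
graph.add_node("H1", kind="hypothesis",
               confidence=ConfidenceVector(theoretical_basis=0.7))
graph.add_node("E1", kind="evidence",
               confidence=ConfidenceVector(empirical_support=0.8))

# A typed edge: the relationship type is stored as an ordinary edge attribute.
graph.add_edge("E1", "H1", edge_type="causal", weight=0.8)

for u, v, data in graph.edges(data=True):
    print(f"{u} -[{data['edge_type']}]-> {v}")
print("H1 overall confidence:", graph.nodes["H1"]["confidence"].mean())
```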
## Technology Stack
<div align="center">
<table>
<tr>
<td align="center"><img src="https://raw.githubusercontent.com/devicons/devicon/master/icons/python/python-original.svg" width="38" height="38"/><br>Python 3.13+</td>
<td align="center"><img src="https://fastapi.tiangolo.com/img/logo-margin/logo-teal.png" width="38" height="38"/><br>FastAPI</td>
<td align="center"><img src="https://networkx.org/documentation/stable/_static/networkx_logo.svg" width="38" height="38"/><br>NetworkX</td>
<td align="center"><img src="https://raw.githubusercontent.com/devicons/devicon/master/icons/docker/docker-original.svg" width="38" height="38"/><br>Docker</td>
</tr>
<tr>
<td align="center"><img src="https://docs.pytest.org/en/7.3.x/_static/pytest_logo_curves.svg" width="38" height="38"/><br>Pytest</td>
<td align="center"><img src="https://docs.pydantic.dev/latest/img/logo-white.svg" width="38" height="38"/><br>Pydantic</td>
<td align="center"><img src="https://python-poetry.org/images/logo-origami.svg" width="38" height="38"/><br>Poetry</td>
<td align="center"><img src="https://raw.githubusercontent.com/tomchristie/uvicorn/master/docs/uvicorn.png" width="38" height="38"/><br>Uvicorn</td>
</tr>
</table>
</div>
## Project Structure
```
NexusMind/
├── config/                          # Configuration files
│   ├── settings.yaml                # Application settings
│   ├── claude_mcp_config.json       # Claude MCP integration config
│   └── logging.yaml                 # Logging configuration
│
├── src/asr_got_reimagined/          # Main source code
│   ├── api/                         # API layer
│   │   ├── routes/                  # API route definitions
│   │   │   ├── mcp.py               # MCP protocol endpoints
│   │   │   ├── health.py            # Health check endpoints
│   │   │   └── graph.py             # Graph query endpoints
│   │   ├── schemas.py               # API request/response schemas
│   │   └── middleware.py            # API middleware
│   │
│   ├── domain/                      # Core business logic
│   │   ├── models/                  # Domain models
│   │   │   ├── common.py            # Common types and enums
│   │   │   ├── graph_elements.py    # Node, Edge, Hyperedge models
│   │   │   ├── graph_state.py       # Graph state management
│   │   │   ├── confidence.py        # Confidence vector models
│   │   │   └── metadata.py          # Metadata schemas
│   │   │
│   │   ├── services/                # Business services
│   │   │   ├── got_processor.py     # Main GoT processing service
│   │   │   ├── evidence_service.py  # Evidence gathering and assessment
│   │   │   ├── confidence_service.py  # Confidence calculation service
│   │   │   ├── graph_service.py     # Graph manipulation service
│   │   │   └── mcp_service.py       # MCP protocol service
│   │   │
│   │   ├── stages/                  # 8-stage pipeline implementation
│   │   │   ├── base_stage.py        # Abstract base stage
│   │   │   ├── stage_1_initialization.py  # Stage 1: Graph initialization
│   │   │   ├── stage_2_decomposition.py   # Stage 2: Query decomposition
│   │   │   ├── stage_3_hypothesis.py      # Stage 3: Hypothesis generation
│   │   │   ├── stage_4_evidence.py        # Stage 4: Evidence integration
│   │   │   ├── stage_5_pruning.py         # Stage 5: Pruning and merging
│   │   │   ├── stage_6_extraction.py      # Stage 6: Subgraph extraction
│   │   │   ├── stage_7_composition.py     # Stage 7: Answer composition
│   │   │   └── stage_8_reflection.py      # Stage 8: Quality reflection
│   │   │
│   │   └── utils/                   # Utility functions
│   │       ├── graph_utils.py       # Graph manipulation utilities
│   │       ├── confidence_utils.py  # Confidence calculation utilities
│   │       ├── statistical_utils.py # Statistical analysis utilities
│   │       ├── bias_detection.py    # Bias detection algorithms
│   │       └── temporal_analysis.py # Temporal pattern analysis
│   │
│   ├── infrastructure/              # Infrastructure layer
│   │   ├── database/                # Database integration
│   │   ├── cache/                   # Caching layer
│   │   └── external/                # External service integrations
│   │
│   ├── main.py                      # Application entry point
│   └── app_setup.py                 # Application setup and configuration
│
├── tests/                           # Test suite
│   ├── unit/                        # Unit tests
│   │   ├── stages/                  # Stage-specific tests
│   │   ├── services/                # Service tests
│   │   └── models/                  # Model tests
│   ├── integration/                 # Integration tests
│   └── fixtures/                    # Test fixtures and data
│
├── scripts/                         # Utility scripts
│   ├── setup_dev.py                 # Development setup
│   ├── add_type_hints.py            # Type hint utilities
│   └── deployment/                  # Deployment scripts
│
├── docs/                            # Documentation
│   ├── api/                         # API documentation
│   ├── architecture/                # Architecture diagrams
│   └── examples/                    # Usage examples
│
├── static/                          # Static assets
│   └── nexusmind-logo.png           # Application logo
│
├── Docker Files & Config
│   ├── Dockerfile                   # Docker container definition
│   ├── docker-compose.yml           # Multi-container setup
│   └── .dockerignore                # Docker ignore patterns
│
├── Configuration Files
│   ├── pyproject.toml               # Python project configuration
│   ├── poetry.lock                  # Dependency lock file
│   ├── mypy.ini                     # Type checking configuration
│   ├── pyrightconfig.json           # Python type checker config
│   ├── .pre-commit-config.yaml      # Pre-commit hooks
│   └── .gitignore                   # Git ignore patterns
│
└── Documentation
    ├── README.md                    # This file
    ├── CHANGELOG.md                 # Version history
    ├── LICENSE                      # Apache 2.0 license
    └── CONTRIBUTING.md              # Contribution guidelines
```
## Getting Started
### Prerequisites
- **Python 3.13+** (Docker image uses Python 3.13.3-slim-bookworm)
- **[Poetry](https://python-poetry.org/docs/#installation)**: For dependency management
- **[Docker](https://www.docker.com/get-started)** and **[Docker Compose](https://docs.docker.com/compose/install/)**: For containerized deployment
### Installation and Setup (Local Development)
1. **Clone the repository**:
```bash
git clone https://github.com/SaptaDey/NexusMind.git
cd NexusMind
```
2. **Install dependencies using Poetry**:
```bash
poetry install
```
This creates a virtual environment and installs all necessary packages specified in `pyproject.toml`.
3. **Activate the virtual environment**:
```bash
poetry shell
```
4. **Configure the application**:
```bash
# Copy example configuration
cp config/settings.example.yaml config/settings.yaml
# Edit configuration as needed
vim config/settings.yaml
```
5. **Set up environment variables** (optional):
```bash
# Create .env file for sensitive configuration
echo "LOG_LEVEL=DEBUG" > .env
echo "API_HOST=0.0.0.0" >> .env
echo "API_PORT=8000" >> .env
```
6. **Run the development server**:
```bash
python src/asr_got_reimagined/main.py
```
Alternatively, for more control:
```bash
uvicorn asr_got_reimagined.main:app --reload --host 0.0.0.0 --port 8000
```
The API will be available at `http://localhost:8000`.
### Docker Deployment
```mermaid
graph TB
subgraph "Development Environment"
A[Developer] --> B[Docker Compose]
end
subgraph "Container Orchestration"
B --> C[NexusMind Container]
B --> D[Monitoring Container]
B --> E[Database Container]
end
subgraph "NexusMind Application"
C --> F[FastAPI Server]
F --> G[ASR-GoT Engine]
F --> H[MCP Protocol]
end
subgraph "External Integrations"
H --> I[Claude Desktop]
H --> J[Other AI Clients]
end
style A fill:#e1f5fe
style B fill:#f3e5f5
style C fill:#e8f5e8
style F fill:#fff3e0
style G fill:#ffebee
style H fill:#f1f8e9
```
1. **Quick Start with Docker Compose**:
```bash
# Build and run all services
docker-compose up --build
# For detached mode (background)
docker-compose up --build -d
# View logs
docker-compose logs -f nexusmind
```
2. **Individual Docker Container**:
```bash
# Build the image
docker build -t nexusmind:latest .
# Run the container
docker run -p 8000:8000 -v $(pwd)/config:/app/config nexusmind:latest
```
3. **Production Deployment**:
```bash
# Use production compose file
docker-compose -f docker-compose.prod.yml up --build -d
```
4. **Access the Services**:
- **API Documentation**: `http://localhost:8000/docs`
- **Health Check**: `http://localhost:8000/health`
- **MCP Endpoint**: `http://localhost:8000/mcp`
## API Endpoints
### Core Endpoints
- **MCP Protocol**: `POST /mcp` (a client sketch follows these endpoints)
```json
{
"method": "process_query",
"params": {
"query": "Analyze the relationship between microbiome diversity and cancer progression",
"confidence_threshold": 0.7,
"max_stages": 8
}
}
```
- **Health Check**: `GET /health`
```json
{
"status": "healthy",
"version": "0.1.0",
"timestamp": "2024-05-23T10:30:00Z"
}
```
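A minimal Python client for the two core endpoints above could look like the following; it assumes the server from the local or Docker setup is listening on `localhost:8000` and that the request body matches the `process_query` example shown above.

```python
# Minimal client sketch for the core endpoints; assumes a local server on port 8000.
import requests

BASE = "http://localhost:8000"

# Check that the server is up before submitting work.
health = requests.get(f"{BASE}/health", timeout=10)
health.raise_for_status()
print("server status:", health.json()["status"])

# Submit a query through the MCP endpoint using the request shape shown above.
payload = {
    "method": "process_query",
    "params": {
        "query": "Analyze the relationship between microbiome diversity and cancer progression",
        "confidence_threshold": 0.7,
        "max_stages": 8,
    },
}
result = requests.post(f"{BASE}/mcp", json=payload, timeout=300)
result.raise_for_status()
print(result.json())
```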
### Advanced Endpoints
- **Graph Query**: `POST /api/v1/graph/query`
```json
{
"query": "Research question or hypothesis",
"parameters": {
"disciplines": ["immunology", "oncology"],
"confidence_threshold": 0.6,
"include_temporal_analysis": true,
"enable_bias_detection": true
}
}
```
- **Graph State**: `GET /api/v1/graph/{session_id}`
- Retrieve current state of a reasoning graph
- Includes confidence scores, node relationships, and metadata
- **Analytics**: `GET /api/v1/analytics/{session_id}`
- Get comprehensive metrics about the reasoning process
- Includes performance stats, confidence trends, and quality measures
- **Subgraph Extraction**: `POST /api/v1/graph/{session_id}/extract`
```json
{
"criteria": {
"min_confidence": 0.7,
"node_types": ["hypothesis", "evidence"],
"include_causal_chains": true
}
}
```
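Putting the advanced endpoints together, a session-oriented workflow might look like the sketch below. The response field names (for example a `session_id` key in the query response) are assumptions made for illustration; the actual request/response schemas live in `src/asr_got_reimagined/api/schemas.py`.

```python
# Hedged workflow sketch; response field names such as "session_id" are assumed.
import requests

BASE = "http://localhost:8000"

# 1. Start a reasoning session with a graph query.
query_body = {
    "query": "Does microbiome diversity modulate response to immunotherapy?",
    "parameters": {
        "disciplines": ["immunology", "oncology"],
        "confidence_threshold": 0.6,
        "include_temporal_analysis": True,
        "enable_bias_detection": True,
    },
}
created = requests.post(f"{BASE}/api/v1/graph/query", json=query_body, timeout=300)
created.raise_for_status()
session_id = created.json().get("session_id")  # assumed field name

# 2. Inspect the current graph state and analytics for the session.
state = requests.get(f"{BASE}/api/v1/graph/{session_id}", timeout=30).json()
analytics = requests.get(f"{BASE}/api/v1/analytics/{session_id}", timeout=30).json()

# 3. Extract a high-confidence subgraph using the criteria shown above.
extract_body = {
    "criteria": {
        "min_confidence": 0.7,
        "node_types": ["hypothesis", "evidence"],
        "include_causal_chains": True,
    }
}
subgraph = requests.post(
    f"{BASE}/api/v1/graph/{session_id}/extract", json=extract_body, timeout=60
).json()
print(state, analytics, subgraph)
```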
## Testing & Quality Assurance
<div align="center">
<table>
<tr>
<td align="center"><b>Testing</b></td>
<td align="center"><b>Type Checking</b></td>
<td align="center"><b>Linting</b></td>
<td align="center"><b>Coverage</b></td>
</tr>
<tr>
<td align="center">
<pre>poetry run pytest</pre>
<pre>poetry run pytest -v</pre>
</td>
<td align="center">
<pre>poetry run mypy src/</pre>
<pre>pyright src/</pre>
</td>
<td align="center">
<pre>poetry run ruff check .</pre>
<pre>poetry run ruff format .</pre>
</td>
<td align="center">
<pre>poetry run pytest --cov=src</pre>
<pre>coverage html</pre>
</td>
</tr>
</table>
</div>
### Development Commands
```bash
# Run full test suite with coverage
poetry run pytest --cov=src --cov-report=html --cov-report=term
# Run specific test categories
poetry run pytest tests/unit/stages/ # Stage-specific tests
poetry run pytest tests/integration/ # Integration tests
poetry run pytest -k "test_confidence" # Tests matching pattern
# Type checking and linting
poetry run mypy src/ --strict # Strict type checking
poetry run ruff check . --fix # Auto-fix linting issues
poetry run ruff format . # Format code
# Pre-commit hooks (recommended)
poetry run pre-commit install # Install hooks
poetry run pre-commit run --all-files # Run all hooks
```
### Quality Metrics
- **Type Safety**:
- Fully typed codebase with strict mypy configuration
- Configured with `mypy.ini` and `pyrightconfig.json`
- Fix logger type issues: `python scripts/add_type_hints.py`
- **Code Quality**:
- 95%+ test coverage target
- Automated formatting with Ruff
- Pre-commit hooks for consistent code quality
- Comprehensive integration tests for the 8-stage pipeline
## Configuration
### Application Settings (`config/settings.yaml`)
```yaml
# Core application settings
app:
name: "NexusMind"
version: "0.1.0"
debug: false
log_level: "INFO"
# API configuration
api:
host: "0.0.0.0"
port: 8000
cors_origins: ["*"]
# ASR-GoT Framework settings
asr_got:
max_stages: 8
default_confidence_threshold: 0.6
enable_bias_detection: true
enable_temporal_analysis: true
max_hypotheses_per_dimension: 5
# Graph settings
graph:
max_nodes: 10000
enable_hyperedges: true
enable_multi_layer: true
temporal_decay_factor: 0.1
```
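One way to consume this file from application code is to parse it into typed settings objects. The sketch below is a hedged illustration using PyYAML and Pydantic; the class names are placeholders, not the project's actual configuration models, and only a subset of the sections is modeled.

```python
# Illustrative settings loader; class names are placeholders, not project code.
from pathlib import Path

import yaml  # PyYAML
from pydantic import BaseModel


class AppSettings(BaseModel):
    name: str = "NexusMind"
    version: str = "0.1.0"
    debug: bool = False
    log_level: str = "INFO"


class AsrGotSettings(BaseModel):
    max_stages: int = 8
    default_confidence_threshold: float = 0.6
    enable_bias_detection: bool = True
    enable_temporal_analysis: bool = True
    max_hypotheses_per_dimension: int = 5


class Settings(BaseModel):
    app: AppSettings = AppSettings()
    asr_got: AsrGotSettings = AsrGotSettings()


def load_settings(path: str = "config/settings.yaml") -> Settings:
    data = yaml.safe_load(Path(path).read_text()) or {}
    # Keep only the sections modeled here; extra keys (api, graph) are ignored.
    return Settings(**{k: v for k, v in data.items() if k in Settings.model_fields})


if __name__ == "__main__":
    settings = load_settings()
    print(settings.app.name, settings.asr_got.max_stages)
```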
### MCP Configuration (`config/claude_mcp_config.json`)
```json
{
"name": "nexusmind",
"description": "Advanced Scientific Reasoning with Graph-of-Thoughts",
"version": "0.1.0",
"endpoints": {
"mcp": "http://localhost:8000/mcp"
},
"capabilities": [
"scientific_reasoning",
"graph_analysis",
"confidence_assessment",
"bias_detection"
]
}
```
## Contributing
We welcome contributions! Please see our [Contributing Guidelines](CONTRIBUTING.md) for details.
### Development Setup
1. Fork the repository
2. Create a feature branch: `git checkout -b feature/amazing-feature`
3. Install development dependencies: `poetry install --with dev`
4. Make your changes and add tests
5. Run the test suite: `poetry run pytest`
6. Submit a pull request
### Code Style
- Follow PEP 8 style guidelines
- Use type hints for all functions and methods
- Write comprehensive docstrings
- Maintain test coverage above 95%
## Documentation
- **[API Documentation](docs/api/)**: Comprehensive API reference
- **[Architecture Guide](docs/architecture/)**: System design and components
- **[Usage Examples](docs/examples/)**: Practical usage scenarios
- **[Development Guide](docs/development/)**: Contributing and development setup
## License
This project is licensed under the Apache License 2.0 - see the [LICENSE](LICENSE) file for details.
## Acknowledgments
- **NetworkX** community for graph analysis capabilities
- **FastAPI** team for the excellent web framework
- **Pydantic** for robust data validation
- The scientific research community for inspiration and feedback
---
<div align="center">
<p><strong>Built for the scientific research community</strong></p>
<p><em>NexusMind - Advancing scientific reasoning through intelligent graph structures</em></p>
</div>