🧠 Adaptive Graph of Thoughts
Intelligent Scientific Reasoning through Graph-of-Thoughts
📚 Documentation
For comprehensive information on Adaptive Graph of Thoughts, including detailed installation instructions, usage guides, configuration options, API references, contribution guidelines, and the project roadmap, please visit our full documentation site:
➡️ Adaptive Graph of Thoughts Documentation Site (Note: This link will be active once the GitHub Pages site is deployed via the new workflow.)
🔍 Overview
Adaptive Graph of Thoughts leverages a Neo4j graph database to perform sophisticated scientific reasoning, with graph operations managed within its pipeline stages. It implements the Model Context Protocol (MCP) to integrate with AI applications like Claude Desktop, providing an Advanced Scientific Reasoning Graph-of-Thoughts (ASR-GoT) framework designed for complex research tasks.
Key highlights:
- Process complex scientific queries using graph-based reasoning
- Dynamic confidence scoring with multi-dimensional evaluations
- Built with modern Python and FastAPI for high performance
- Dockerized for easy deployment
- Modular design for extensibility and customization
- Integration with Claude Desktop via MCP protocol
📂 Project Structure
The project's layout is described in detail on the documentation site.
🚀 Getting Started
Deployment Prerequisites
Before running Adaptive Graph of Thoughts (either locally, or via Docker if not using the provided `docker-compose.prod.yml`, which includes Neo4j), ensure you have:
- A running Neo4j instance: Adaptive Graph of Thoughts requires a connection to a Neo4j graph database.
- APOC library: Crucially, the Neo4j instance must have the APOC (Awesome Procedures On Cypher) library installed. Several Cypher queries within the application's reasoning stages use APOC procedures (e.g., `apoc.create.addLabels`, `apoc.merge.node`). Without APOC, the application will not function correctly. You can find installation instructions on the official APOC website.
- Configuration: Ensure that your `config/settings.yaml` (or the corresponding environment variables) correctly points to your Neo4j instance URI, username, and password.
- Indexing: For optimal performance, ensure appropriate Neo4j indexes are created. See Neo4j Indexing Strategy for details.

Note: The provided `docker-compose.yml` (for development) and `docker-compose.prod.yml` (for production) already include a Neo4j service with the APOC library pre-configured, satisfying this requirement when using Docker Compose.
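To quickly confirm that APOC is available on your instance, a check along these lines works (the connection details are placeholders for your own setup):

```bash
# Verify the APOC library is installed (connection details are placeholders).
cypher-shell -a bolt://localhost:7687 -u neo4j -p <your-password> \
  "RETURN apoc.version() AS apoc_version;"
```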
Prerequisites
- Python 3.11+ (as specified in `pyproject.toml`; the Docker image uses Python 3.11.x, 3.12.x, or 3.13.x)
- Poetry: for dependency management
- Docker and Docker Compose: for containerized deployment
Installation and Setup (Local Development)
- Clone the repository.
- Install dependencies using Poetry. This creates a virtual environment and installs all necessary packages specified in `pyproject.toml`.
- Activate the virtual environment.
- Configure the application.
- Set up environment variables (optional).
- Run the development server (or invoke uvicorn directly for more control; see the sketch below). The API will be available at `http://localhost:8000`.
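A consolidated sketch of these steps. The repository URL, environment variable names, and application module path are assumptions rather than documented values; adjust them to your checkout and configuration:

```bash
# Sketch only: URL, variable names, and module path are assumptions.
git clone https://github.com/<owner>/<repo>.git
cd <repo>

poetry install   # creates a virtual environment and installs dependencies
poetry shell     # activates the virtual environment

# Point the application at your Neo4j instance.
export NEO4J_URI=bolt://localhost:7687
export NEO4J_USER=neo4j
export NEO4J_PASSWORD=<your-password>

# Run the development server (module path is an assumption).
poetry run uvicorn adaptive_graph_of_thoughts.main:app --reload --port 8000
```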
Docker Deployment
The options below are sketched in the command block that follows:
- Quick start with Docker Compose
- Individual Docker container
- Production deployment
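A sketch of the corresponding commands; the image name and environment variable are assumptions:

```bash
# Quick start (development): brings up the app and a Neo4j service with APOC.
docker-compose up -d

# Individual container (image name and env var are assumptions).
docker build -t adaptive-graph-of-thoughts .
docker run -p 8000:8000 -e NEO4J_URI=bolt://host.docker.internal:7687 adaptive-graph-of-thoughts

# Production deployment.
docker-compose -f docker-compose.prod.yml up -d
```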
Notes on Specific Deployment Platforms
- Smithery.ai: Deployment to the Smithery.ai platform typically involves using the provided Docker image directly. Consult Smithery.ai's documentation for instructions on deploying custom Docker images.
- Port configuration: Ensure that the platform is configured to expose port 8000 (or the port configured via `APP_PORT`, if overridden) for the Adaptive Graph of Thoughts container, as this is the default port used by the FastAPI application.
- Health checks: Smithery.ai may use health checks to monitor container status. The Adaptive Graph of Thoughts Docker image includes a `HEALTHCHECK` instruction that verifies the `/health` endpoint (e.g., `http://localhost:8000/health`). Ensure Smithery.ai is configured to use this endpoint if it requires a specific health check path.
- The provided `Dockerfile` and `docker-compose.prod.yml` serve as a baseline for understanding the container setup; adapt them to Smithery.ai's requirements.
- Access the services:
  - API documentation: `http://localhost:8000/docs`
  - Health check: `http://localhost:8000/health`
  - MCP endpoint: `http://localhost:8000/mcp`
🔌 API Endpoints
The primary API endpoints exposed by Adaptive Graph of Thoughts are:
- MCP protocol endpoint: `POST /mcp`
  - This endpoint is used for communication with MCP clients like Claude Desktop.
  - An example request for the `asr_got.query` method is sketched below.
  - Other supported MCP methods include `initialize` and `shutdown`.
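A plausible request shape for `asr_got.query`, following JSON-RPC conventions; the exact parameter schema is an assumption:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "asr_got.query",
  "params": {
    "query": "What is the role of microglia in neurodegeneration?",
    "session_id": "optional-session-identifier"
  }
}
```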
- Health check endpoint: `GET /health`
  - Provides a simple health status of the application.
  - An example response is sketched below. (Note: the timestamp field shown previously is not part of the current health check response.)
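A minimal response shape, assuming a single status field:

```json
{
  "status": "healthy"
}
```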
The advanced API endpoints previously listed (e.g., `/api/v1/graph/query`) are not implemented in the current version and are reserved for potential future development.
Session Handling (`session_id`)
Currently, the `session_id` parameter available in API requests (e.g., for `asr_got.query`) and present in responses serves primarily to identify and track a single, complete query-response cycle. It is also used for correlating progress notifications (like `got/queryProgress`) with the originating query.
While the system generates and utilizes `session_id`s, Adaptive Graph of Thoughts does not currently support true multi-turn conversational continuity, in which the detailed graph state or reasoning context from a previous query is automatically loaded and reused for a follow-up query with the same `session_id`. Each query is processed independently at this time.
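An illustrative `got/queryProgress` notification correlated by `session_id`; apart from the method name, the field names are assumptions:

```json
{
  "jsonrpc": "2.0",
  "method": "got/queryProgress",
  "params": {
    "session_id": "sess-1234",
    "stage": "EvidenceStage",
    "message": "Integrating evidence for selected hypotheses"
  }
}
```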
Future Enhancement: Persistent Sessions
A potential future enhancement for Adaptive Graph of Thoughts is the implementation of persistent sessions. This would enable more interactive and evolving reasoning processes by allowing users to:
- Persist state: Store the generated graph state and relevant reasoning context from a query, associated with its `session_id`, likely within the Neo4j database.
- Reload state: When a new query is submitted with an existing `session_id`, the system could reload this saved state as the starting point for further processing.
- Refine and extend: Allow the new query to interact with the loaded graph, for example by refining previous hypotheses, adding new evidence to existing structures, or exploring alternative reasoning paths based on the established context.
Implementing persistent sessions would involve developing robust strategies for:
- Efficiently storing and retrieving session-specific graph data in Neo4j (one possible shape is sketched after this list).
- Managing the lifecycle (e.g., creation, update, expiration) of session data.
- Designing sophisticated logic for how new queries merge with, modify, or extend pre-existing session contexts and graphs.
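As referenced above, one possible Cypher shape for persisting and reloading session-scoped graph data; the labels, relationship types, and properties here are all assumptions, not an implemented schema:

```cypher
// Persist: anchor a session node and attach the nodes produced for it.
MERGE (s:Session {session_id: $session_id})
SET s.updated_at = timestamp()
WITH s
UNWIND $node_ids AS node_id
MATCH (n {id: node_id})
MERGE (s)-[:CONTAINS]->(n);

// Reload: fetch the stored graph state for a follow-up query.
MATCH (s:Session {session_id: $session_id})-[:CONTAINS]->(n)
RETURN n;
```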
This is a significant feature that could greatly enhance the interactive capabilities of Adaptive Graph of Thoughts. Contributions from the community in designing and implementing persistent session functionality are welcome.
Future Enhancement: Asynchronous and Parallel Stage Execution
Currently, the 8 stages of the Adaptive Graph of Thoughts reasoning pipeline are executed sequentially. For complex queries or to further optimize performance, exploring asynchronous or parallel execution for certain parts of the pipeline is a potential future enhancement.
Potential Areas for Parallelism:
- Hypothesis generation: The `HypothesisStage` generates hypotheses for each dimension identified by the `DecompositionStage`. Generating hypotheses for different, independent dimensions could potentially be parallelized; for instance, if three dimensions are decomposed, three parallel tasks could each generate hypotheses for one dimension (see the sketch after this list).
- Evidence integration (partial): Within the `EvidenceStage`, if multiple hypotheses are selected for evaluation, the "plan execution" phase (simulated evidence gathering) for these different hypotheses might be performed concurrently.
Challenges and Considerations:
Implementing parallel stage execution would introduce complexities that need careful management:
- Data Consistency: Concurrent operations, especially writes to the Neo4j database (e.g., creating multiple hypothesis nodes or evidence nodes simultaneously), must be handled carefully to ensure data integrity and avoid race conditions. Unique ID generation schemes would need to be robust for parallel execution.
- Transaction Management: Neo4j transactions for concurrent writes would need to be managed appropriately.
- Dependency Management: Ensuring that stages (or parts of stages) that truly depend on the output of others are correctly sequenced would be critical.
- Resource Utilization: Parallel execution could increase resource demands (CPU, memory, database connections).
- Complexity: The overall control flow of the `GoTProcessor` would become more complex.
While the current sequential execution ensures a clear and manageable data flow, targeted parallelism in areas like hypothesis generation for independent dimensions could offer performance benefits for future versions of Adaptive Graph of Thoughts. This remains an open area for research and development.
🧪 Testing & Quality Assurance
Development Commands
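Typical commands for a Poetry-managed project like this one; the tool choices and paths are assumptions rather than documented project scripts:

```bash
poetry run pytest              # run the test suite
poetry run ruff check .        # lint (tool choice is an assumption)
poetry run mypy src            # static type checking (path is an assumption)
```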
🗺️ Roadmap and Future Directions
We have an exciting vision for the future of Adaptive Graph of Thoughts! Our roadmap includes plans for enhanced graph visualization, integration with more data sources like arXiv, and further refinements to the core reasoning engine.
For more details on our planned features and long-term goals, please see our Roadmap (also available on the documentation site).
🤝 Contributing
We welcome contributions! Please see our Contributing Guidelines (also available on the documentation site) for details on how to get started, our branching strategy, code style, and more.
📄 License
This project is licensed under the Apache License 2.0. See the LICENSE file for details.
🙏 Acknowledgments
- NetworkX community for graph analysis capabilities
- FastAPI team for the excellent web framework
- Pydantic for robust data validation
- The scientific research community for inspiration and feedback