Integrations
- A collaboration between GitHub users @jasonkneen and @ExpressionsBot
- Uses TensorFlow.js for efficient tensor operations in the neural memory model, with operations wrapped in tf.tidy() for proper memory management
- Type-safe TypeScript implementation, including type-safe MCP tool definitions
Titan Memory Server
A Model Context Protocol (MCP) server implementation with an enhanced Titan Memory model.
Overview
This project implements a memory model designed to enhance the memory capabilities of large language models (LLMs) and other generative AI systems. It's built with TensorFlow.js and exposed as an MCP server, making it easy to integrate with any MCP-compatible client.
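As noted under Integrations, tensor operations are wrapped in tf.tidy() so that intermediate tensors are disposed automatically. The sketch below illustrates that scope-based disposal pattern with a hand-rolled stand-in; FakeTensor and this tidy function are illustrative, not the actual TensorFlow.js API.

```typescript
// Minimal stand-in for the scope-based disposal pattern that
// tf.tidy() provides in TensorFlow.js. Not the real tfjs API.
class FakeTensor {
  disposed = false;
  constructor(public data: number[]) {}
  dispose(): void { this.disposed = true; }
}

function tidy(fn: (track: (t: FakeTensor) => FakeTensor) => FakeTensor): FakeTensor {
  const allocated: FakeTensor[] = [];
  const track = (t: FakeTensor) => { allocated.push(t); return t; };
  const result = fn(track);
  // Everything allocated inside the scope, except the returned
  // tensor, is freed when the scope ends.
  for (const t of allocated) {
    if (t !== result) t.dispose();
  }
  return result;
}

// Usage: intermediates a and b are disposed automatically; the
// returned sum survives the scope.
const out = tidy((track) => {
  const a = track(new FakeTensor([1, 2]));
  const b = track(new FakeTensor([3, 4]));
  return track(new FakeTensor(a.data.map((v, i) => v + b.data[i])));
});
```

The real tf.tidy() works the same way: tensors created inside the callback are tracked and disposed when it returns, which prevents WebGL memory leaks during repeated forward passes.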
Features
Currently implemented:
- Multi-head attention mechanism
- Hierarchical memory structure
- Memory state persistence
- Integration with Model Context Protocol (MCP)
- Memory replay for enhanced learning
- LLM Cache integration
- Dynamic memory allocation
- Long-term memory storage
- Advanced memory compression
- Persistent task-specific memory
- Momentum-based memory updates
- Configurable memory integration variants (MAC/MAG)
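A configuration covering several of the features above might look like the following sketch. Every field name here is illustrative, not the server's actual schema; MAC and MAG are commonly expanded as memory-as-context and memory-as-gate in the Titans literature.

```typescript
// Hypothetical configuration shape for the memory model.
// Field names are illustrative, not the server's actual schema.
type MemoryVariant = "MAC" | "MAG"; // memory-as-context vs. memory-as-gate

interface TitanMemoryConfig {
  inputDim: number;        // dimensionality of input vectors
  memorySlots: number;     // number of memory slots to allocate
  attentionHeads: number;  // heads in the multi-head attention mechanism
  momentum: number;        // momentum factor for memory updates
  variant: MemoryVariant;  // how retrieved memory is integrated
}

const defaultConfig: TitanMemoryConfig = {
  inputDim: 64,
  memorySlots: 128,
  attentionHeads: 4,
  momentum: 0.9,
  variant: "MAC",
};
```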
Usage
The server exposes several tools via the Model Context Protocol (MCP):
- init_model: Initialize the memory model with custom configurations
- forward: Perform a forward pass through the model
- train_step: Perform a single training step
- train_sequence: Train on a sequence of vectors
- save_model: Save the current model weights
- load_model: Load model weights from a saved file
- get_status: Get the current status of the model
- store_memory_state: Store the current memory state with a key
- retrieve_memory_state: Retrieve a stored memory state
- compress_memory: Compress the current memory state to save space
- memory_replay: Perform memory replay training to enhance learning
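Tool definitions like those above can be made type-safe by pairing each tool name with typed input and output shapes. The ToolDef interface and the placeholder handler logic below are a sketch, not the server's actual code or the MCP SDK's types.

```typescript
// Sketch of a type-safe definition for one of the tools listed above.
// ToolDef and the handler body are illustrative, not the real server code.
interface ToolDef<In, Out> {
  name: string;
  description: string;
  handler: (input: In) => Out;
}

interface ForwardInput { vector: number[]; }
interface ForwardOutput { predicted: number[]; surprise: number; }

const forwardTool: ToolDef<ForwardInput, ForwardOutput> = {
  name: "forward",
  description: "Perform a forward pass through the model",
  handler: ({ vector }) => ({
    // Placeholder logic standing in for the real model's forward pass.
    predicted: vector.map((v) => v * 0.5),
    surprise: vector.reduce((s, v) => s + Math.abs(v), 0) / vector.length,
  }),
};

const result = forwardTool.handler({ vector: [2, 4] });
```

With this shape, the compiler rejects a call whose arguments don't match the tool's declared input type, which is what "type-safe MCP tool definitions" buys in practice.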
Installation
Running the Server
This will start the MCP server on port 3000.
Development
Testing
Advanced Features
Memory Replay
The memory replay mechanism stores past input-output pairs and periodically retrains on them to reinforce learning. This helps prevent catastrophic forgetting and improves overall model performance.
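A replay buffer of the kind described can be sketched as a bounded store of past input/target pairs with random batch sampling. The class and method names below are illustrative, not the server's implementation.

```typescript
// Minimal replay buffer sketch: stores past input/target pairs and
// samples a batch for periodic retraining. Names are illustrative.
interface Sample { input: number[]; target: number[]; }

class ReplayBuffer {
  private buffer: Sample[] = [];
  constructor(private capacity: number) {}

  add(sample: Sample): void {
    if (this.buffer.length >= this.capacity) {
      this.buffer.shift(); // evict the oldest sample when full
    }
    this.buffer.push(sample);
  }

  sample(batchSize: number): Sample[] {
    const out: Sample[] = [];
    for (let i = 0; i < Math.min(batchSize, this.buffer.length); i++) {
      out.push(this.buffer[Math.floor(Math.random() * this.buffer.length)]);
    }
    return out;
  }

  get size(): number { return this.buffer.length; }
}

const replay = new ReplayBuffer(2);
replay.add({ input: [1], target: [2] });
replay.add({ input: [3], target: [4] });
replay.add({ input: [5], target: [6] }); // capacity 2: evicts the first pair
```

Periodically training on `replay.sample(batchSize)` interleaves old pairs with new ones, which is the mechanism that counteracts catastrophic forgetting.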
Dynamic Memory Allocation
The model can dynamically adjust memory allocation based on the complexity of the input and the surprise level (prediction error). This allows it to allocate more resources to complex patterns and compress simpler ones.
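The surprise-driven allocation described above can be sketched as: measure prediction error, then scale the number of slots granted to a pattern by that error, up to a cap. The error measure and scaling rule here are illustrative choices, not the model's actual values.

```typescript
// Sketch of surprise-driven memory allocation. The mean-absolute-error
// surprise measure and the scaling rule are illustrative assumptions.
function surprise(predicted: number[], actual: number[]): number {
  // Mean absolute prediction error as a simple surprise measure.
  let sum = 0;
  for (let i = 0; i < actual.length; i++) {
    sum += Math.abs(actual[i] - (predicted[i] ?? 0));
  }
  return sum / actual.length;
}

function slotsFor(surpriseLevel: number, baseSlots: number, maxSlots: number): number {
  // Allocate more slots as surprise rises, capped at maxSlots.
  const scaled = Math.round(baseSlots * (1 + surpriseLevel));
  return Math.min(scaled, maxSlots);
}

const s = surprise([1, 1], [1, 3]); // error = (0 + 2) / 2 = 1
const slots = slotsFor(s, 8, 32);   // 8 * (1 + 1) = 16
```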
Long-term Memory Storage
The system maintains a persistent long-term memory that survives across sessions. This memory is stored on disk and loaded when the server starts, allowing for continuity in learning.
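Disk-backed persistence of this kind can be sketched as save/load of a serialized state file. The MemoryState shape and file name below are illustrative, not the server's actual storage format.

```typescript
import * as fs from "fs";
import * as os from "os";
import * as path from "path";

// Sketch of persisting memory state across sessions as JSON on disk.
// MemoryState and the file layout are illustrative assumptions.
interface MemoryState { weights: number[]; updatedAt: string; }

function saveState(file: string, state: MemoryState): void {
  fs.writeFileSync(file, JSON.stringify(state));
}

function loadState(file: string): MemoryState | null {
  // Return null when no prior session has written state yet.
  if (!fs.existsSync(file)) return null;
  return JSON.parse(fs.readFileSync(file, "utf8")) as MemoryState;
}

const stateFile = path.join(os.tmpdir(), "titan-memory-demo.json");
saveState(stateFile, { weights: [0.1, 0.2], updatedAt: new Date().toISOString() });
const restored = loadState(stateFile);
```

On startup the server would call the equivalent of loadState, falling back to a fresh state when the file is absent, which is what gives learning continuity across sessions.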
Memory Compression
Advanced compression techniques reduce the memory footprint while preserving important information. This is particularly useful for deployment in resource-constrained environments.
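One simple compression technique in this spirit is magnitude pruning: keep only the k largest-magnitude entries of the memory vector as sparse (index, value) pairs, treating the rest as zero. This is an illustrative scheme, not the server's actual codec.

```typescript
// Sketch of lossy memory compression via top-k magnitude pruning.
// Illustrative technique, not the server's actual codec.
type Compressed = [number, number][]; // (index, value) pairs

function compress(memory: number[], k: number): Compressed {
  return memory
    .map((value, index): [number, number] => [index, value])
    .sort((a, b) => Math.abs(b[1]) - Math.abs(a[1])) // largest magnitude first
    .slice(0, k)
    .sort((a, b) => a[0] - b[0]); // restore index order
}

function decompress(entries: Compressed, length: number): number[] {
  const out = new Array<number>(length).fill(0); // pruned entries become 0
  for (const [index, value] of entries) out[index] = value;
  return out;
}

const packed = compress([0.9, 0.01, -0.7, 0.02], 2);
const unpacked = decompress(packed, 4);
```

Storing 2 of 4 entries halves the footprint here while keeping the two dominant components, which is the trade-off resource-constrained deployments care about.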
LLM Cache Integration
The system maintains a cache of frequently accessed memory states, improving performance for repeated queries and reducing computational overhead.
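A cache of frequently accessed states is typically built with a least-recently-used eviction policy; the sketch below uses a Map's insertion order for that. Class and method names are illustrative, not the server's implementation.

```typescript
// Sketch of an LRU cache for memory states, built on Map's insertion
// order. Names and capacity are illustrative assumptions.
class LruCache<V> {
  private map = new Map<string, V>();
  constructor(private capacity: number) {}

  get(key: string): V | undefined {
    const value = this.map.get(key);
    if (value !== undefined) {
      // Re-insert to mark this entry as most recently used.
      this.map.delete(key);
      this.map.set(key, value);
    }
    return value;
  }

  set(key: string, value: V): void {
    if (this.map.has(key)) {
      this.map.delete(key);
    } else if (this.map.size >= this.capacity) {
      // Evict the least recently used entry (first in insertion order).
      this.map.delete(this.map.keys().next().value as string);
    }
    this.map.set(key, value);
  }

  has(key: string): boolean { return this.map.has(key); }
}

const cache = new LruCache<number[]>(2);
cache.set("a", [1]);
cache.set("b", [2]);
cache.get("a");      // touch "a" so "b" becomes least recently used
cache.set("c", [3]); // evicts "b"
```

Serving a repeated query from this cache skips the forward pass entirely, which is where the reduced computational overhead comes from.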
Citation
If you use this implementation in your research, please cite:
License
MIT