MCP - Titan Memory Server

by synthience


🧠 MCP - Titan Memory Server implementation

A collaboration between @jasonkneen and @ExpressionsBot

Follow us on X

An implementation inspired by Google Research's paper "Generative AI for Programming: A Common Task Framework". This server provides a neural memory system that can learn and predict sequences while maintaining state through a memory vector, following principles outlined in the research for improved code generation and understanding.

📚 Research Background

This implementation draws from the concepts presented in the Google Research paper (Muennighoff et al., 2024), which introduces a framework for evaluating and improving code generation models. The Titan Memory Server implements key concepts from the paper:

  • Memory-augmented sequence learning
  • Surprise metric for novelty detection
  • Manifold optimization for stable learning
  • State maintenance through memory vectors

These features align with the paper's goals of improving code understanding and generation through better memory and state management.
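
The exact surprise formulation is not spelled out in this README; as a rough TensorFlow.js sketch (illustrative only, not the server's actual code), surprise can be treated as the prediction error between the model's predicted next state and the observed one:

```typescript
import * as tf from '@tensorflow/tfjs';

// Illustrative only: "surprise" as the L2 norm of the prediction error
// between the predicted next state and the observed next state.
function surprise(predicted: number[], actual: number[]): number {
  return tf.tidy(() => {
    const p = tf.tensor1d(predicted);
    const a = tf.tensor1d(actual);
    // A larger error means the observation was poorly predicted, i.e. more novel.
    return tf.norm(a.sub(p)).dataSync()[0];
  });
}
```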

🚀 Features

  • Neural memory model with configurable dimensions
  • Sequence learning and prediction
  • Surprise metric calculation
  • Model persistence (save/load)
  • Memory state management
  • Full MCP tool integration

📦 Installation

```bash
# Install dependencies
npm install

# Build the project
npm run build

# Run tests
npm test
```

🛠️ Available MCP Tools

1. 🎯 init_model

Initialize the Titan Memory model with a custom configuration.

```typescript
{
  inputDim?: number;  // Input dimension (default: 64)
  outputDim?: number; // Output/Memory dimension (default: 64)
}
```

2. 📚 train_step

Perform a single training step with current and next state vectors.

```typescript
{
  x_t: number[];    // Current state vector
  x_next: number[]; // Next state vector
}
```
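
For example, a single training step with the default 64-dimensional vectors might be invoked like this (`callTool` stands in for whatever MCP client helper you use; the vectors are placeholders):

```typescript
// Hypothetical client-side call: x_t is the current state, x_next the observed next state.
const x_t = new Array(64).fill(0);
x_t[0] = 1;
const x_next = new Array(64).fill(0);
x_next[1] = 1;

await callTool('train_step', { x_t, x_next });
```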

3. 🔄 forward_pass

Run a forward pass through the model with an input vector.

```typescript
{
  x: number[]; // Input vector
}
```

4. 💾 save_model

Save the model to a specified path.

```typescript
{
  path: string; // Path to save the model
}
```

5. 📂 load_model

Load the model from a specified path.

```typescript
{
  path: string; // Path to load the model from
}
```
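
A save-and-restore round trip could look like the following (the path is only an example):

```typescript
// Persist the current weights and configuration, then restore them later.
await callTool('save_model', { path: './titan-memory-model.json' });
await callTool('load_model', { path: './titan-memory-model.json' });
```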

6. ℹ️ get_status

Get current model status and configuration.

```typescript
{} // No parameters required
```
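
For example (the exact fields in the response depend on the server's implementation):

```typescript
const status = await callTool('get_status', {});
console.log(status); // reports the current model configuration and state
```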

7. 🔄 train_sequence

Train the model on a sequence of vectors.

```typescript
{
  sequence: number[][]; // Array of vectors to train on
}
```

🌟 Example Usage

```typescript
// Initialize the model
await callTool('init_model', { inputDim: 64, outputDim: 64 });

// Train on a sequence
const sequence = [
  [1, 0, 0, /* ... */],
  [0, 1, 0, /* ... */],
  [0, 0, 1, /* ... */]
];
await callTool('train_sequence', { sequence });

// Run a forward pass
const result = await callTool('forward_pass', { x: [1, 0, 0, /* ... */] });
```

🔧 Technical Details

  • Built with TensorFlow.js for efficient tensor operations
  • Uses manifold optimization for stable learning
  • Implements surprise metric for novelty detection
  • Memory management with proper tensor cleanup
  • Type-safe implementation with TypeScript
  • Comprehensive error handling

🧪 Testing

The project includes comprehensive tests covering:

  • Model initialization and configuration
  • Training and forward pass operations
  • Memory state management
  • Model persistence
  • Edge cases and error handling
  • Tensor cleanup and memory management

Run tests with:

```bash
npm test
```

🔍 Implementation Notes

  • All tensor operations are wrapped in tf.tidy() for proper memory management
  • Implements proper error handling with detailed error messages
  • Uses type-safe MCP tool definitions
  • Maintains memory state between operations
  • Handles floating-point precision issues with epsilon tolerance
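
As a rough sketch of the first and last points above (assumed code for illustration, not the server's implementation), wrapping work in tf.tidy() and comparing values with an epsilon tolerance looks like this:

```typescript
import * as tf from '@tensorflow/tfjs';

const EPSILON = 1e-6; // assumed tolerance; the server's actual value may differ

// tf.tidy() disposes every intermediate tensor created inside the callback
// once the result has been read back as plain JavaScript values.
function forward(weights: tf.Tensor2D, x: number[]): number[] {
  return tf.tidy(() => {
    const input = tf.tensor2d([x]);           // shape [1, inputDim]
    const output = input.matMul(weights).tanh();
    return Array.from(output.dataSync());
  });
}

// Epsilon tolerance avoids spurious mismatches caused by floating-point rounding.
function nearlyEqual(a: number, b: number): boolean {
  return Math.abs(a - b) < EPSILON;
}
```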

📝 License

MIT License - feel free to use and modify as needed!

