# CLAUDE.md

This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.

## Project Overview

This is **mcp-titan**, a sophisticated neural memory system for LLMs that provides persistent memory capabilities through an MCP (Model Context Protocol) server. The system uses TensorFlow.js for neural computations and implements transformer-based memory mechanisms.

## Development Commands

### Core Commands

- `npm run build` - Compile TypeScript to JavaScript
- `npm run typecheck` - Type check without emitting files
- `npm run lint` - Run ESLint with a zero-warnings policy
- `npm run lint:fix` - Auto-fix ESLint issues
- `npm run test` - Run Jest tests with experimental VM modules
- `npm run clean` - Remove build artifacts

### Model Training Commands

- `npm run train-model` - Train the model with default settings
- `npm run train-quick` - Quick training for development
- `npm run train-production` - Full production training
- `npm run download-data` - Download training datasets
- `npm run test-model` - Test model functionality

### Server Commands

- `npm start` - Start the MCP server (production)
- `npm run dev` - Development mode with auto-restart

## Architecture Overview

### Core Components

**TitanMemoryServer** (`src/index.ts`) - Main MCP server that registers 16 tools for memory operations, including:

- Model initialization and configuration
- Forward passes and training steps
- Memory state management and pruning
- Checkpoint saving/loading
- Bootstrapping memory from URLs/text
- Online learning service

**TitanMemoryModel** (`src/model.ts`) - Neural memory implementation with:

- Transformer-XL inspired architecture
- Hierarchical memory (short-term, long-term, meta)
- Information-gain based pruning
- Surprise-based learning
- Tensor memory management

**Key Services:**

- **LearnerService** (`src/learner.ts`) - Online learning with a replay buffer
- **AdvancedTokenizer** (`src/tokenizer/`) - BPE tokenization and embeddings
- **MemoryPruner** (`src/pruning.ts`) - Intelligent memory pruning
- **VectorProcessor** (`src/utils.ts`) - Text encoding and vector operations

### Memory Architecture

The system implements a three-tier memory hierarchy:

1. **Short-term memory** - Recent activations and working memory
2. **Long-term memory** - Consolidated patterns and knowledge
3. **Meta memory** - Statistics and metadata about memory usage

Memory is automatically pruned using information-gain scoring to maintain performance and prevent memory bloat.

## MCP Integration

The server is designed to work with Cursor and other MCP-compatible clients. Configuration example:

```json
{
  "titan-memory": {
    "command": "node",
    "args": ["index.js"],
    "cwd": "/path/to/mcp-titan"
  }
}
```

## Dependencies

- **@tensorflow/tfjs-node** - Neural network computations
- **@modelcontextprotocol/sdk** - MCP protocol implementation
- **zod** - Runtime type validation for tool parameters
- **ws** - WebSocket transport support

## Testing

Tests are in `src/__tests__/` and cover:

- Model functionality and training
- Memory operations and pruning
- Server tool integration
- Tokenizer capabilities

Run `npm test` to execute the full test suite.

## Important Notes

- The system requires Node.js >=22.0.0 for optimal performance
- Memory state is automatically saved every minute and on shutdown
- Use the `reset_gradients` tool if training becomes unstable
- Pruning is triggered automatically when memory capacity exceeds limits
- The server includes comprehensive error handling and telemetry
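## Illustrative Sketches

The three-tier hierarchy described under Memory Architecture can be illustrated with a minimal, framework-free sketch. Note that the class and method names below (`MemoryHierarchy`, `consolidate`, the access-count promotion rule) are assumptions for illustration only; the real implementation in `src/model.ts` operates on TensorFlow.js tensors and uses its own consolidation logic.

```typescript
// Simplified sketch of a three-tier memory hierarchy.
// All names here are illustrative, not the actual API of src/model.ts.

interface MemoryEntry {
  embedding: number[];  // vector representation of the memory
  accessCount: number;  // how often this entry has been recalled
  timestamp: number;    // when the entry was written
}

class MemoryHierarchy {
  shortTerm: MemoryEntry[] = [];              // recent activations / working memory
  longTerm: MemoryEntry[] = [];               // consolidated patterns and knowledge
  meta = { writes: 0, consolidations: 0 };    // usage statistics (meta memory)

  write(embedding: number[]): void {
    this.shortTerm.push({ embedding, accessCount: 0, timestamp: Date.now() });
    this.meta.writes++;
  }

  // Promote frequently accessed short-term entries into long-term memory.
  consolidate(minAccesses: number): void {
    const promoted = this.shortTerm.filter(e => e.accessCount >= minAccesses);
    this.longTerm.push(...promoted);
    this.shortTerm = this.shortTerm.filter(e => e.accessCount < minAccesses);
    this.meta.consolidations += promoted.length;
  }
}
```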
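Information-gain pruning can likewise be sketched as scoring every memory slot and keeping only the top-k. The entropy-based score below is a stand-in assumption; the actual `MemoryPruner` in `src/pruning.ts` has its own scoring function and thresholds.

```typescript
// Illustrative information-gain pruning: score each memory slot and keep
// only the `capacity` highest-scoring ones. The entropy of the normalized
// embedding magnitudes is a stand-in for the project's real scoring logic.

function entropyScore(embedding: number[]): number {
  const abs = embedding.map(Math.abs);
  const sum = abs.reduce((a, b) => a + b, 0);
  if (sum === 0) return 0;
  // Shannon entropy of the normalized magnitude distribution, in bits.
  return -abs
    .map(v => v / sum)
    .filter(p => p > 0)
    .reduce((acc, p) => acc + p * Math.log2(p), 0);
}

function pruneMemory(slots: number[][], capacity: number): number[][] {
  if (slots.length <= capacity) return slots;
  return slots
    .map(slot => ({ slot, score: entropyScore(slot) }))
    .sort((a, b) => b.score - a.score)  // highest information first
    .slice(0, capacity)
    .map(({ slot }) => slot);
}
```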
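Surprise-based learning can be pictured as gating memory writes by prediction error: the further an observed embedding is from what the model predicted, the more worth remembering it is. The Euclidean metric and the 0.5 threshold below are illustrative assumptions, not values used by `TitanMemoryModel`.

```typescript
// Illustrative surprise gate: measure the prediction error (L2 distance)
// and commit a memory write only when the input is "surprising" enough.
// The metric and threshold are assumptions for this sketch.

function l2Distance(a: number[], b: number[]): number {
  return Math.sqrt(a.reduce((acc, v, i) => acc + (v - b[i]) ** 2, 0));
}

function shouldWrite(predicted: number[], observed: number[], threshold = 0.5): boolean {
  return l2Distance(predicted, observed) > threshold;
}
```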
