In Memoria

Persistent memory and pattern learning for AI coding assistants via MCP.

AI coding tools suffer from complete session amnesia. Every conversation with Claude, Copilot, or Cursor starts from scratch, forcing you to re-explain your codebase architecture, patterns, and preferences repeatedly.

In Memoria solves this by building persistent intelligence about your code that AI assistants can access through the Model Context Protocol, giving them memory that persists across sessions.

The Problem

```
# What happens now
You: "Refactor this function using our established patterns"
AI:  "What patterns? I don't know your codebase."
You: *explains architecture for the 50th time*

# What should happen
You: "Refactor this function using our established patterns"
AI:  "Based on your preference for functional composition and your naming conventions..."
```

Current AI tools:

  • Re-analyze codebases every session (expensive)
  • Give generic suggestions that don't match your style
  • Have no memory of architectural decisions
  • Can't learn from corrections you've made

Quick Start

```bash
# Install and start the MCP server
npx in-memoria server
```

Connect from Claude Desktop (add to config):

```json
{
  "mcpServers": {
    "in-memoria": {
      "command": "npx",
      "args": ["in-memoria", "server"]
    }
  }
}
```

Connect from Claude Code CLI:

```bash
claude mcp add in-memoria -- npx in-memoria server
```

Learn from your codebase:

```bash
npx in-memoria learn ./my-project
```

Native AST parsing covers 12 programming languages: TypeScript, JavaScript, Python, Rust, Go, Java, C/C++, C#, Svelte, Vue, and SQL. Intelligent filtering excludes build artifacts and dependencies.

How It Works

In Memoria runs as an MCP server that AI tools connect to. It provides 17 tools for codebase analysis and pattern learning.

Architecture:

```
┌─────────────────────┐    MCP    ┌──────────────────────┐   napi-rs   ┌─────────────────────┐
│  AI Tool (Claude)   │◄─────────►│  TypeScript Server   │◄───────────►│      Rust Core      │
└─────────────────────┘           └──────────┬───────────┘             │ • AST Parser        │
                                             │                         │ • Pattern Learner   │
                                             │                         │ • Semantic Engine   │
                                             ▼                         └─────────────────────┘
                                  ┌──────────────────────┐
                                  │  SQLite + SurrealDB  │
                                  │   (Local Storage)    │
                                  └──────────────────────┘
```

Core engines:

  • AST Parser (Rust): Tree-sitter parsing for TypeScript, JavaScript, Python, Rust, Go, Java, C/C++, C#, Svelte, Vue, and SQL
  • Pattern Learner (Rust): Statistical analysis of naming conventions, function signatures, and architectural choices
  • Semantic Engine (Rust): Code relationship mapping and concept extraction with timeout protection
  • TypeScript Layer: MCP server, SQLite/SurrealDB operations, file watching
  • Storage: SQLite for structured patterns, SurrealDB for vector search and semantic queries

What It Learns

In Memoria builds statistical models from your actual code to understand your preferences:

Naming Patterns:

```typescript
// Detects patterns like: useXxxData for API hooks, handleXxx for events
const useUserData = () => { ... }
const handleSubmit = () => { ... }
const formatUserName = (name: string) => { ... }

// AI gets context: "This developer uses camelCase, 'use' prefix for hooks,
// 'handle' for events, 'format' for data transformation"
```

Architectural Choices:

```typescript
// Learns you consistently use Result types instead of throwing
type ApiResult<T> =
  | { success: true; data: T }
  | { success: false; error: string };

// AI suggests this pattern in new code instead of try/catch
```

Code Structure Preferences:

```typescript
// Detects your preference for functional composition
const processUser = pipe(validateUser, enrichUserData, saveUser);

// vs object-oriented approaches you avoid
class UserProcessor { ... } // Rarely used in your codebase
```

Project Organization:

```
src/
  components/   # UI components
  services/     # Business logic
  utils/        # Pure functions
  types/        # Type definitions

// AI learns your directory structure preferences
```

MCP Tools for AI Assistants

In Memoria provides 17 tools that AI assistants can use to understand your codebase:

Getting Started:

  • get_learning_status - Check what intelligence exists for a project
  • auto_learn_if_needed - Automatically learn from codebase if no intelligence exists
  • quick_setup - Initialize In Memoria for a new project

Code Analysis:

  • analyze_codebase - Get architectural overview, complexity metrics, and language breakdown
  • get_file_content - Retrieve files with rich metadata and analysis
  • get_project_structure - Understand directory hierarchy and organization patterns
  • search_codebase - Semantic search that finds code by meaning, not just keywords
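
Semantic search of this kind is typically backed by vector embeddings. As a rough illustration only (not In Memoria's actual implementation), here is how a query can be matched against code snippets by meaning rather than exact keywords, using cosine similarity over embedding vectors; the toy `embed` function stands in for a real embedding model:

```typescript
// Toy embedding: bag-of-words counts over a tiny fixed vocabulary.
// A real system would use a learned embedding model instead.
const VOCAB = ["fetch", "user", "data", "render", "button", "click"];

function embed(text: string): number[] {
  const words = text
    .replace(/([a-z])([A-Z])/g, "$1 $2") // split camelCase identifiers
    .toLowerCase()
    .split(/\W+/);
  return VOCAB.map((v) => words.filter((w) => w === v).length);
}

function cosine(a: number[], b: number[]): number {
  const dot = a.reduce((s, x, i) => s + x * b[i], 0);
  const norm = (v: number[]) => Math.sqrt(v.reduce((s, x) => s + x * x, 0));
  const d = norm(a) * norm(b);
  return d === 0 ? 0 : dot / d;
}

// Rank snippets by semantic similarity to the query (descending).
function search(query: string, snippets: string[]): string[] {
  const q = embed(query);
  return [...snippets].sort(
    (x, y) => cosine(embed(y), q) - cosine(embed(x), q)
  );
}

const results = search("load user data", [
  "function renderButton() { ... }",
  "async function fetchUserData() { ... }",
]);
// fetchUserData ranks first: it shares meaning with the query
// even though the identifier never appears verbatim in it.
```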

Pattern Intelligence:

  • get_pattern_recommendations - Get coding suggestions that match your established style
  • predict_coding_approach - Predict how you'd solve similar problems based on your patterns
  • get_developer_profile - Access your learned coding preferences and decision patterns
  • get_semantic_insights - Discover code relationships and architectural concepts

Learning & Memory:

  • learn_codebase_intelligence - Manually trigger analysis of a codebase
  • contribute_insights - Allow AI to add observations back to the knowledge base
  • generate_documentation - Create docs that understand your project's patterns

System Monitoring:

  • get_system_status - Health check and component status
  • get_intelligence_metrics - Quality and completeness of learned patterns
  • get_performance_status - System performance and benchmarking
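
Under MCP, an assistant invokes these tools via JSON-RPC `tools/call` requests. The sketch below shows what a call to `get_pattern_recommendations` might look like on the wire; the `arguments` payload is illustrative, not the tool's documented schema:

```typescript
// Shape of an MCP tools/call request (JSON-RPC 2.0).
interface ToolCallRequest {
  jsonrpc: "2.0";
  id: number;
  method: "tools/call";
  params: {
    name: string;
    arguments: Record<string, unknown>;
  };
}

const request: ToolCallRequest = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: {
    name: "get_pattern_recommendations",
    // Hypothetical argument field; check the tool's actual input schema.
    arguments: { problemDescription: "add a data-fetching hook" },
  },
};

console.log(JSON.stringify(request, null, 2));
```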

Implementation Details

Pattern Learning Algorithm:

  1. Parse code into ASTs using tree-sitter
  2. Extract structural patterns (function signatures, class hierarchies, naming)
  3. Build frequency maps of developer choices
  4. Train classifier on decision patterns
  5. Generate predictions for new code contexts
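
Steps 2–3 can be pictured as a frequency count over identifier prefixes. This is a deliberately simplified TypeScript approximation, not In Memoria's actual Rust implementation, which works on tree-sitter ASTs:

```typescript
// Build a frequency map of naming-convention prefixes (step 3).
// Identifiers are passed in directly for brevity; the real pipeline
// extracts them from parsed ASTs.
function prefixFrequencies(identifiers: string[]): Map<string, number> {
  const freq = new Map<string, number>();
  for (const id of identifiers) {
    // Take the leading lowercase run as the prefix: use, handle, format, ...
    const match = id.match(/^[a-z]+/);
    if (!match) continue;
    freq.set(match[0], (freq.get(match[0]) ?? 0) + 1);
  }
  return freq;
}

const freq = prefixFrequencies([
  "useUserData",
  "useOrderData",
  "handleSubmit",
  "handleClick",
  "formatUserName",
]);
// freq: use → 2, handle → 2, format → 1
```

A classifier trained on maps like this can then predict, for example, that a new event callback should be named `handleXxx`.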

Performance:

  • Smart file filtering - Automatically excludes build artifacts, dependencies, and generated files
  • Timeout protection - Prevents analysis from hanging on complex files
  • Fast analysis - Optimized processing that skips node_modules/, dist/, .next/, and other non-source files
  • File size limits - Skips very large files to prevent memory issues
  • Incremental analysis - Only processes changed files in subsequent runs
  • SQLite for structured data, SurrealDB embedded for semantic search and vectors
  • Cross-platform Rust binaries (Windows, macOS, Linux)
  • Built-in performance profiling and memory leak detection
  • Optimized for real-time file watching without blocking development workflow
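
The filtering rules above boil down to a predicate over paths and file sizes. The exclusion list and size threshold below are illustrative assumptions, not In Memoria's actual configuration:

```typescript
// Decide whether a file should be skipped during analysis.
// Directory names and the 1 MB cap are assumed values for illustration.
const EXCLUDED_DIRS = new Set(["node_modules", "dist", ".next", "build", "target"]);
const MAX_FILE_BYTES = 1_000_000; // assumed limit to avoid memory issues

function shouldSkip(path: string, sizeBytes: number): boolean {
  const parts = path.split("/");
  if (parts.some((p) => EXCLUDED_DIRS.has(p))) return true; // artifacts, deps
  if (sizeBytes > MAX_FILE_BYTES) return true; // very large files
  return false;
}

shouldSkip("node_modules/react/index.js", 500); // → true
shouldSkip("src/utils/format.ts", 2048); // → false
```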

Team Usage

In Memoria works for individual developers and teams:

Individual:

  • Learns your personal coding style
  • Remembers architectural decisions you've made
  • Provides context-aware suggestions

Team:

  • Share .in-memoria.db files containing learned patterns across team members
  • Onboard new developers by providing pre-learned codebase intelligence
  • Ensure consistent AI suggestions team-wide through shared pattern recognition

Technical Comparison

vs GitHub Copilot's memory:

  • Copilot: Basic fact storage, no pattern learning
  • In Memoria: Semantic analysis with prediction engine

vs Cursor's rules:

  • Cursor: Static rules, manually defined
  • In Memoria: Dynamic learning from actual code

vs Custom RAG:

  • RAG: Retrieves relevant code snippets
  • In Memoria: Understands coding patterns and predicts behavior

Build from Source

```bash
git clone https://github.com/pi22by7/in-memoria
cd in-memoria
npm install
npm run build
```

Requirements:

  • Node.js 18+
  • Rust 1.70+ (for building)
  • 2GB RAM minimum

Quality & Testing:

  • 98.3% unit test pass rate (118/120 tests)
  • 100% MCP integration test coverage (23/23 tests)
  • Comprehensive server lifecycle testing
  • All Rust clippy warnings resolved
  • Zero memory leaks verified

Development:

```bash
npm run dev          # Start in development mode
npm test             # Run test suite
npm run build:rust   # Build Rust components
```

Contributing

This is infrastructure for the AI development ecosystem. Contributions welcome:

  • Language support - Add tree-sitter parsers or extend file filtering
  • Pattern learning improvements - Enhance statistical analysis and concept extraction
  • MCP tool additions - New tools for AI assistant integration
  • Performance optimizations - Further speed improvements and memory usage reduction
  • Timeout and reliability - Additional safeguards for edge cases

See CONTRIBUTING.md for details.

FAQ

Q: Does this replace my AI coding assistant? A: No, it enhances them. In Memoria provides memory and context that AI tools can use.

Q: What data is collected? A: Everything stays local. No data is sent to external services.

Q: How accurate is pattern learning? A: Pattern recognition accuracy improves with codebase size and consistency. Projects with established patterns and consistent style will see better pattern detection than smaller or inconsistent codebases. The system learns from frequency and repetition in your actual code.

Q: Performance impact? A: Minimal. Runs in background with smart filtering that skips build artifacts and dependencies. Modern analysis engine with built-in safeguards for reliable operation.

Q: What file types are supported? A: TypeScript, JavaScript, Python, Rust, Go, Java, C/C++, C#, Svelte, Vue, and SQL with native AST parsing. Build artifacts and dependencies are automatically excluded.

Q: What if analysis encounters issues? A: Built-in reliability features handle edge cases gracefully. Large files and complex directories are processed efficiently with automatic fallbacks. Progress is shown during analysis.

License

MIT - see LICENSE


Try it: npx in-memoria server

Give your AI tools the memory they've been missing.
