In Memoria
Persistent memory and pattern learning for AI coding assistants via MCP.
AI coding tools suffer from complete session amnesia. Every conversation with Claude, Copilot, or Cursor starts from scratch, forcing you to re-explain your codebase architecture, patterns, and preferences repeatedly.
In Memoria solves this by building persistent intelligence about your code that AI assistants can access through the Model Context Protocol - giving them memory that persists across sessions.
The Problem
Current AI tools:
- Re-analyze codebases every session (expensive)
- Give generic suggestions that don't match your style
- Have no memory of architectural decisions
- Can't learn from corrections you've made
Quick Start
Run the server directly with npx (no install needed):
npx in-memoria server
Then point your MCP-compatible AI assistant (Claude, Copilot, Cursor) at it.
How It Works
In Memoria runs as an MCP server that AI tools connect to. It provides 17 tools for codebase analysis and pattern learning.
Architecture:
Core engines:
- AST Parser (Rust): Tree-sitter parsing for TypeScript, JavaScript, Python, Rust, Go, Java, C/C++, C#, Svelte, Vue, and SQL
- Pattern Learner (Rust): Statistical analysis of naming conventions, function signatures, and architectural choices
- Semantic Engine (Rust): Code relationship mapping and concept extraction with timeout protection
- TypeScript Layer: MCP server, SQLite/SurrealDB operations, file watching
- Storage: SQLite for structured patterns, SurrealDB for vector search and semantic queries
What It Learns
In Memoria builds statistical models from your actual code to understand your preferences:
- Naming patterns
- Architectural choices
- Code structure preferences
- Project organization
MCP Tools for AI Assistants
In Memoria provides 17 tools that AI assistants can use to understand your codebase:
Getting Started:
- get_learning_status - Check what intelligence exists for a project
- auto_learn_if_needed - Automatically learn from a codebase if no intelligence exists
- quick_setup - Initialize In Memoria for a new project
Code Analysis:
- analyze_codebase - Get architectural overview, complexity metrics, and language breakdown
- get_file_content - Retrieve files with rich metadata and analysis
- get_project_structure - Understand directory hierarchy and organization patterns
- search_codebase - Semantic search that finds code by meaning, not just keywords
Pattern Intelligence:
- get_pattern_recommendations - Get coding suggestions that match your established style
- predict_coding_approach - Predict how you'd solve similar problems based on your patterns
- get_developer_profile - Access your learned coding preferences and decision patterns
- get_semantic_insights - Discover code relationships and architectural concepts
Learning & Memory:
- learn_codebase_intelligence - Manually trigger analysis of a codebase
- contribute_insights - Allow AI to add observations back to the knowledge base
- generate_documentation - Create docs that understand your project's patterns
System Monitoring:
- get_system_status - Health check and component status
- get_intelligence_metrics - Quality and completeness of learned patterns
- get_performance_status - System performance and benchmarking
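AI assistants invoke these tools over MCP's JSON-RPC transport. As a hedged sketch (the tool names come from the list above; the argument shape, such as a `path` field, is an assumption for illustration, not In Memoria's documented schema), a tool call request looks like this:

```typescript
// Sketch of an MCP tools/call request an AI client would send to the
// In Memoria server. Argument names here are hypothetical.
interface ToolCallRequest {
  jsonrpc: "2.0";
  id: number;
  method: "tools/call";
  params: { name: string; arguments: Record<string, unknown> };
}

function buildToolCall(
  id: number,
  name: string,
  args: Record<string, unknown>
): ToolCallRequest {
  return { jsonrpc: "2.0", id, method: "tools/call", params: { name, arguments: args } };
}

const req = buildToolCall(1, "get_pattern_recommendations", { path: "./src" });
console.log(JSON.stringify(req));
```

In practice an MCP client library builds and dispatches these requests for you; the point is that each tool above is addressable by name with structured arguments.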
Implementation Details
Pattern Learning Algorithm:
- Parse code into ASTs using tree-sitter
- Extract structural patterns (function signatures, class hierarchies, naming)
- Build frequency maps of developer choices
- Train classifier on decision patterns
- Generate predictions for new code contexts
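The frequency-map step (steps 2-3 above) can be sketched in miniature. The real engine runs over tree-sitter ASTs in Rust; this TypeScript fragment is illustrative only, classifying identifier naming styles and tallying which convention dominates:

```typescript
// Minimal sketch of frequency-based pattern learning: count naming-style
// occurrences across identifiers and report the dominant convention.
type NamingStyle = "camelCase" | "snake_case" | "PascalCase" | "other";

function classify(name: string): NamingStyle {
  if (/^[a-z]+(?:[A-Z][a-z0-9]*)+$/.test(name)) return "camelCase";
  if (/^[a-z0-9]+(?:_[a-z0-9]+)+$/.test(name)) return "snake_case";
  if (/^[A-Z][a-z0-9]*(?:[A-Z][a-z0-9]*)*$/.test(name)) return "PascalCase";
  return "other";
}

function dominantStyle(identifiers: string[]): NamingStyle {
  // Build the frequency map of observed styles (step 3).
  const freq = new Map<NamingStyle, number>();
  for (const id of identifiers) {
    const style = classify(id);
    freq.set(style, (freq.get(style) ?? 0) + 1);
  }
  // The most frequent style becomes the learned preference.
  let best: NamingStyle = "other";
  let bestCount = -1;
  for (const [style, count] of freq) {
    if (count > bestCount) {
      best = style;
      bestCount = count;
    }
  }
  return best;
}

console.log(dominantStyle(["getUser", "fetchOrders", "parse_config", "buildIndex"]));
```

The same counting idea extends to function-signature shapes and class hierarchies; the classifier trained on these frequencies is what powers predict_coding_approach.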
Performance:
- Smart file filtering - Automatically excludes build artifacts, dependencies, and generated files
- Timeout protection - Prevents analysis from hanging on complex files
- Fast analysis - Optimized processing that skips node_modules/, dist/, .next/, and other non-source files
- File size limits - Skips very large files to prevent memory issues
- Incremental analysis - Only processes changed files in subsequent runs
- SQLite for structured data, SurrealDB embedded for semantic search and vectors
- Cross-platform Rust binaries (Windows, macOS, Linux)
- Built-in performance profiling and memory leak detection
- Optimized for real-time file watching without blocking development workflow
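The smart-filtering rule above amounts to a cheap path-and-size check before any parsing happens. As a sketch (the exclusion list and size cap here are assumptions, not In Memoria's actual configuration):

```typescript
// Hypothetical file filter: skip dependency, build, and generated
// directories, plus oversized files, before handing paths to the AST parser.
const EXCLUDED_DIRS = new Set(["node_modules", "dist", ".next", "target", "build", ".git"]);
const MAX_FILE_BYTES = 1_000_000; // assumed cap for illustration

function shouldAnalyze(path: string, sizeBytes: number): boolean {
  const parts = path.split("/");
  if (parts.some((p) => EXCLUDED_DIRS.has(p))) return false;
  return sizeBytes <= MAX_FILE_BYTES;
}

console.log(shouldAnalyze("src/index.ts", 2048)); // true
console.log(shouldAnalyze("node_modules/lodash/index.js", 2048)); // false
```

Filtering this early keeps the file watcher responsive, since excluded paths never reach the (comparatively expensive) tree-sitter stage.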
Team Usage
In Memoria works for individual developers and teams:
Individual:
- Learns your personal coding style
- Remembers architectural decisions you've made
- Provides context-aware suggestions
Team:
- Share .in-memoria.db files containing learned patterns across team members
- Onboard new developers by providing pre-learned codebase intelligence
- Ensure consistent AI suggestions team-wide through shared pattern recognition
Technical Comparison
vs GitHub Copilot's memory:
- Copilot: Basic fact storage, no pattern learning
- In Memoria: Semantic analysis with prediction engine
vs Cursor's rules:
- Cursor: Static rules, manually defined
- In Memoria: Dynamic learning from actual code
vs Custom RAG:
- RAG: Retrieves relevant code snippets
- In Memoria: Understands coding patterns and predicts behavior
Build from Source
Requirements:
- Node.js 18+
- Rust 1.70+ (for building)
- 2GB RAM minimum
Quality & Testing:
- 98.3% unit test pass rate (118/120 tests)
- 100% MCP integration test coverage (23/23 tests)
- Comprehensive server lifecycle testing
- All Rust clippy warnings resolved
- Zero memory leaks verified
Development:
Contributing
This is infrastructure for the AI development ecosystem. Contributions welcome:
- Language support - Add tree-sitter parsers or extend file filtering
- Pattern learning improvements - Enhance statistical analysis and concept extraction
- MCP tool additions - New tools for AI assistant integration
- Performance optimizations - Further speed improvements and memory usage reduction
- Timeout and reliability - Additional safeguards for edge cases
See CONTRIBUTING.md for details.
FAQ
Q: Does this replace my AI coding assistant? A: No, it enhances them. In Memoria provides memory and context that AI tools can use.
Q: What data is collected? A: Everything stays local. No data is sent to external services.
Q: How accurate is pattern learning? A: Pattern recognition accuracy improves with codebase size and consistency. Projects with established patterns and consistent style will see better pattern detection than smaller or inconsistent codebases. The system learns from frequency and repetition in your actual code.
Q: Performance impact? A: Minimal. Runs in background with smart filtering that skips build artifacts and dependencies. Modern analysis engine with built-in safeguards for reliable operation.
Q: What file types are supported? A: TypeScript, JavaScript, Python, Rust, Go, Java, C/C++, C#, Svelte, Vue, and SQL with native AST parsing. Build artifacts and dependencies are automatically excluded.
Q: What if analysis encounters issues? A: Timeouts and automatic fallbacks handle edge cases gracefully, so large files and complex directories won't hang a run. Progress is shown during analysis.
License
MIT - see LICENSE
Try it: npx in-memoria server
Give your AI tools the memory they've been missing.