Basic Memory
Basic Memory lets you build persistent knowledge through natural conversations with Large Language Models (LLMs) like Claude, while keeping everything in simple Markdown files on your computer. It uses the Model Context Protocol (MCP) to enable any compatible LLM to read and write to your local knowledge base.
- Website: http://basicmachines.co
- Documentation: http://memory.basicmachines.co
Basic Memory provides persistent contextual awareness across sessions through a structured knowledge graph. The system enables LLMs to access and reference prior conversations, track semantic relationships between concepts, and incorporate human edits made directly to knowledge files.
Quick Start
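Install the CLI first (a minimal sketch, assuming the package is published on PyPI as basic-memory; use whichever installer you prefer):

```bash
# Install with uv (recommended)
uv tool install basic-memory

# Or with pip
pip install basic-memory
```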
You can view shared context via files in `~/basic-memory` (the default directory location).
Alternative Installation via Smithery
You can use Smithery to automatically configure Basic Memory for Claude Desktop:
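The install command typically looks like the following (the package identifier shown here is an assumption; check the Basic Memory listing on Smithery for the exact name):

```bash
npx -y @smithery/cli install @basicmachines-co/basic-memory --client claude
```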
This installs and configures Basic Memory without requiring manual edits to the Claude Desktop configuration file. The Smithery server hosts the MCP server component, while your data remains stored locally as Markdown files.
CLI Tools
You can also install the CLI tools to sync files or manage projects. For example, to view available projects:
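A sketch, assuming the CLI exposes a project subcommand (see `basic-memory --help` for the exact command):

```bash
# List the projects Basic Memory knows about
basic-memory project list
```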
Basic Memory will write notes in Markdown format. Open your project directory in your text editor to view project files while you have conversations with an LLM.
Why Basic Memory?
Most LLM interactions are ephemeral - you ask a question, get an answer, and everything is forgotten. Each conversation starts fresh, without the context or knowledge from previous ones. Current workarounds have limitations:
- Chat histories capture conversations but aren't structured knowledge
- RAG systems can query documents but don't let LLMs write back
- Vector databases require complex setups and often live in the cloud
- Knowledge graphs typically need specialized tools to maintain
Basic Memory addresses these problems with a simple approach: structured Markdown files that both humans and LLMs can read and write to. The key advantages:
- Local-first: All knowledge stays in files you control
- Bi-directional: Both you and the LLM read and write to the same files
- Structured yet simple: Uses familiar Markdown with semantic patterns
- Traversable knowledge graph: LLMs can follow links between topics
- Standard formats: Works with existing editors like Obsidian
- Lightweight infrastructure: Just local files indexed in a local SQLite database
With Basic Memory, you can:
- Have conversations with LLMs that remember and build on what you've discussed before
- Create structured notes during natural conversations
- Navigate your knowledge graph semantically
- Keep everything local and under your control
- Use familiar tools like Obsidian to view and edit notes
- Build a personal knowledge base that grows over time
How It Works in Practice
Let's say you're exploring coffee brewing methods and want to capture your knowledge. Here's how it works:
- Start by chatting normally:
... continue conversation.
- Ask the LLM to help structure this knowledge:
The LLM creates a new Markdown file on your system, which you can see instantly in Obsidian or your editor. The note embeds semantic content and links to other topics via simple Markdown formatting.
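An illustrative example of such a note (the content is hypothetical; the structure follows the frontmatter, observation, and relation patterns described under Technical Implementation below):

```markdown
---
title: Coffee Brewing Methods
permalink: coffee-brewing-methods
tags:
- coffee
- brewing
---

# Coffee Brewing Methods

## Observations
- [method] Pour over gives a cleaner cup than French press #brewing
- [tip] Use a medium-fine grind for pour over (affects extraction rate)

## Relations
- relates_to [[Coffee Bean Origins]]
- requires [[Proper Grinding Technique]]
```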
- You see this file on your computer in real time in the `~/basic-memory` directory (the default location).
- In a new chat with the LLM, you can reference this knowledge:
The LLM can now build rich context from the knowledge graph. For example:
Each related document can lead to more context, building a rich semantic understanding of your knowledge base. All of this context comes from standard Markdown files that both humans and LLMs can read and write.
Every time the LLM writes notes, they are saved in local Markdown files that you can:
- Edit in any text editor
- Version via git
- Back up normally
- Share when you want to
Technical Implementation
Under the hood, Basic Memory:
- Stores everything in Markdown files
- Uses a SQLite database for searching and indexing
- Extracts semantic meaning from simple Markdown patterns:
  - Files become `Entity` objects
  - Each `Entity` can have `Observations`, or facts associated with it
  - `Relations` connect entities together to form the knowledge graph
- Maintains the local knowledge graph derived from the files
- Provides bidirectional synchronization between files and the knowledge graph
- Implements the Model Context Protocol (MCP) for AI integration
- Exposes tools that let AI assistants traverse and manipulate the knowledge graph
- Uses memory:// URLs to reference entities across tools and conversations
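A memory:// URL typically identifies an entity by its permalink, for example (illustrative; the exact URL patterns supported are an assumption):

```
memory://coffee-brewing-methods
```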
The file format is just Markdown with some simple markup:
Each Markdown file has:
Frontmatter
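Typical frontmatter fields include a title, a note type, a permalink, and optional tags (a sketch; the exact fields may vary):

```markdown
---
title: Coffee Brewing Methods
type: note
permalink: coffee-brewing-methods
tags:
- coffee
- brewing
---
```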
Observations
Observations are facts about a topic. They can be added by creating a Markdown list with a special format that can reference a `category`, `tags` using a "#" character, and an optional `context`.
Observation Markdown format:
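The general pattern is a list item with a bracketed category, the observation text, optional #tags, and an optional parenthesized context:

```markdown
- [category] content #tag (optional context)
```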
Examples of observations:
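Some illustrative observations in that format (hypothetical content):

```markdown
- [method] Pour over extracts more floral notes than French press #brewing
- [technique] Water just off the boil (about 95°C) works well for most methods #temperature
- [preference] Medium roast beans are my default for filter coffee (subject to change)
```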
Relations
Relations are links to other topics. They define how entities connect in the knowledge graph.
Markdown format:
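The pattern is a list item with a relation type followed by a WikiLink to the target entity (the optional parenthesized context shown here is an assumption; the core pattern is the relation type plus the link):

```markdown
- relation_type [[WikiLink]] (optional context)
```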
Examples of relations:
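Some illustrative relations (the relation type names are free-form examples):

```markdown
- relates_to [[Coffee Bean Origins]]
- requires [[Proper Grinding Technique]]
- pairs_well_with [[Breakfast Pastries]]
```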
Complete Example
Here's a complete example of a note with frontmatter, observations, and relations:
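An illustrative note combining all three elements (hypothetical content; the structure mirrors the patterns above):

```markdown
---
title: Coffee Brewing Methods
type: note
permalink: coffee-brewing-methods
tags:
- coffee
- brewing
---

# Coffee Brewing Methods

## Observations
- [method] Pour over provides more clarity in flavor than French press #brewing
- [technique] Water temperature around 95°C suits most methods #temperature
- [tip] Grind beans just before brewing (freshness matters)

## Relations
- relates_to [[Coffee Bean Origins]]
- requires [[Proper Grinding Technique]]
```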
Basic Memory will parse the Markdown and derive the semantic relationships in the content. When you run `basic-memory sync` (example commands follow this list):
- New and changed files are detected
- Markdown patterns become semantic knowledge:
  - `[tech]` becomes a categorized observation
  - `[[WikiLink]]` creates a relation in the knowledge graph
  - Tags and metadata are indexed for search
- A SQLite database maintains these relationships for fast querying
- MCP-compatible LLMs can access this knowledge via memory:// URLs
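To run a sync (the --watch flag for continuous syncing is an assumption; check `basic-memory sync --help`):

```bash
# One-time sync of the current project
basic-memory sync

# Keep watching for file changes and sync continuously
basic-memory sync --watch
```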
This creates a two-way flow where:
- Humans write and edit Markdown files
- LLMs read and write through the MCP protocol
- Sync keeps everything consistent
- All knowledge stays in local files.
Using with Claude Desktop
Basic Memory is built using the MCP (Model Context Protocol) and works with the Claude desktop app (https://claude.ai/):
- Configure Claude Desktop to use Basic Memory:
Edit your MCP configuration file (usually located at ~/Library/Application Support/Claude/claude_desktop_config.json on macOS):
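A typical configuration looks like the following (a sketch assuming basic-memory can be launched via uvx; adjust the command to match how you installed it):

```json
{
  "mcpServers": {
    "basic-memory": {
      "command": "uvx",
      "args": ["basic-memory", "mcp"]
    }
  }
}
```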
If you want to use a specific project (see Multiple Projects below), update your Claude Desktop config:
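One way to pin the server to a project is to set the BASIC_MEMORY_PROJECT environment variable described under Multiple Projects below (the project name here is a placeholder; passing a --project argument may also work):

```json
{
  "mcpServers": {
    "basic-memory": {
      "command": "uvx",
      "args": ["basic-memory", "mcp"],
      "env": {
        "BASIC_MEMORY_PROJECT": "your-project-name"
      }
    }
  }
}
```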
- Sync your knowledge:
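```bash
basic-memory sync
```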
- In Claude Desktop, the LLM can now use these tools:
- Example prompts to try:
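Some illustrative prompts (hypothetical, for flavor):

```
"Create a note about the coffee brewing methods we just discussed"
"What do my notes say about pour over technique?"
"Add an observation to my note on coffee brewing methods"
```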
Multiple Projects
Basic Memory supports managing multiple separate knowledge bases through projects. This feature allows you to maintain separate knowledge graphs for different purposes (e.g., personal notes, work projects, research topics).
Managing Projects
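The CLI exposes project management commands along these lines (a sketch; the subcommand names are assumptions, so run `basic-memory project --help` for the exact set):

```bash
# List configured projects
basic-memory project list

# Add a new project pointing at a directory
basic-memory project add work ~/work-basic-memory

# Set the default project
basic-memory project default work

# Remove a project from the configuration
basic-memory project remove work
```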
Using Projects in Commands
All commands support the `--project` flag to specify which project to use:
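For example (a sketch; the exact flag placement may differ, so consult the command's --help):

```bash
# Sync only the "work" project
basic-memory sync --project work
```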
You can also set the `BASIC_MEMORY_PROJECT` environment variable:
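```bash
export BASIC_MEMORY_PROJECT=work
basic-memory sync
```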
Project Isolation
Each project maintains:
- Its own collection of Markdown files in the specified directory
- A separate SQLite database for that project
- Complete knowledge graph isolation from other projects
Design Philosophy
Basic Memory is built on some key ideas:
- Your knowledge should stay in files you control
- Both humans and AI should use natural formats
- Simple text patterns can capture rich meaning
- Local-first doesn't mean feature-poor
- Knowledge should persist across conversations
- AI assistants should build on past context
- File formats should be human-readable and editable
- Semantic structure should emerge from natural patterns
- Knowledge graphs should be both AI and human navigable
- Systems should augment human memory, not replace it
Importing Existing Data
Basic Memory provides CLI commands to import data from various sources, converting them into the structured Markdown format:
Claude.ai
First, request an export of your data from your Claude account. The data will be emailed to you in several files, including `conversations.json` and `projects.json`.
Import Claude.ai conversation data
The conversations will be turned into Markdown files and placed in the "conversations" folder by default (this can be changed with the --folder arg).
Example:
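A sketch of the import command (the subcommand path is an assumption; you may need to point it at the exported conversations.json):

```bash
# Import conversations into the default "conversations" folder
basic-memory import claude conversations

# Or choose a different target folder
basic-memory import claude conversations --folder claude-chats
```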
Next, you can run the `sync` command to import the data into basic-memory:
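```bash
basic-memory sync
```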
You can also import project data from Claude.ai:
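A sketch (the subcommand path is an assumption):

```bash
basic-memory import claude projects
```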
OpenAI ChatGPT
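Conversations exported from ChatGPT can be imported the same way; request an export from your OpenAI account and run the importer (the subcommand name is an assumption, modeled on the Claude import above):

```bash
basic-memory import chatgpt
```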
Knowledge Graph Memory Server
You can import data from the MCP Knowledge Graph Memory Server: https://github.com/modelcontextprotocol/servers/tree/main/src/memory
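That server persists its graph to a JSON file, which can be imported (the subcommand name and the file path argument are assumptions):

```bash
basic-memory import memory-json ~/path/to/memory.json
```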
Working with Your Knowledge Base
Once you've built up a knowledge base, you can interact with it in several ways:
Command Line Interface
Basic Memory provides a powerful CLI for managing your knowledge:
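The main commands cover syncing, importing, and project management (a summary sketch; see `basic-memory --help` for the full list):

```bash
# Sync files with the knowledge graph, once or continuously
basic-memory sync
basic-memory sync --watch

# Import external data
basic-memory import claude conversations

# Manage projects
basic-memory project list
```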
Obsidian Integration
Basic Memory works seamlessly with Obsidian, a popular knowledge management app:
- Point Obsidian to your Basic Memory directory
- Use standard Obsidian features like backlinks and graph view
- See your knowledge graph visually
- Use the canvas visualization generated by Basic Memory
File Organization
Basic Memory is flexible about how you organize your files:
- Group by topic in folders
- Use a flat structure with descriptive filenames
- Add custom metadata in frontmatter
- Tag files for better searchability
The system will build the semantic knowledge graph regardless of your file organization preference.
Using stdin with Basic Memory's write_note Tool
The `write-note` tool supports reading content from standard input (stdin), allowing for more flexible workflows when creating or updating notes in your Basic Memory knowledge base.
Use Cases
This feature is particularly useful for:
- Piping output from other commands directly into Basic Memory notes
- Creating notes with multi-line content without having to escape quotes or special characters
- Integrating with AI assistants like Claude Code that can generate content and pipe it to Basic Memory
- Processing text data from files or other sources
Basic Usage
Method 1: Using a Pipe
You can pipe content from another command into `write_note`:
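For example (the `basic-memory tool write-note` invocation and its --title/--folder options are assumptions; verify with the tool's --help):

```bash
echo "This note was created from a pipe" | \
  basic-memory tool write-note --title "Piped Note" --folder "notes"
```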
Method 2: Using Heredoc Syntax
For multi-line content, you can use heredoc syntax:
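For example (same assumed options as above):

```bash
basic-memory tool write-note --title "Brewing Notes" --folder "notes" <<EOF
# Brewing Notes

- [tip] Rinse the paper filter before brewing #pourover
EOF
```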
Method 3: Input Redirection
You can redirect input from a file:
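For example (same assumed options as above):

```bash
basic-memory tool write-note --title "Imported Note" --folder "notes" < existing-note.md
```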
License
AGPL-3.0
Built with ♥️ by Basic Machines