The basic-memory server allows you to manage a local knowledge base via Markdown files and a SQLite database. Key capabilities include:
Read/Write notes: Create, update, and retrieve markdown notes with tags and folders
Delete notes: Remove notes by title or permalink
Search: Perform advanced searches with boolean operators and filters
Build context: Explore related topics using memory:// URIs with natural language timeframes
Recent activity: Track updates across your knowledge base
Generate visualizations: Create Obsidian canvas files to visualize connections
Project info: Access statistics about your current project
Read raw content: Access file content by path or permalink
Uses Markdown as the primary file format for storing knowledge, with specific patterns for semantic structure.
Works seamlessly with Obsidian for knowledge management, visualization, and editing of the Basic Memory knowledge base files.
Provides import capability for ChatGPT conversation history into the Basic Memory knowledge base.
Basic Memory
Basic Memory lets you build persistent knowledge through natural conversations with Large Language Models (LLMs) like Claude, while keeping everything in simple Markdown files on your computer. It uses the Model Context Protocol (MCP) to enable any compatible LLM to read and write to your local knowledge base.
Website: https://basicmachines.co
Documentation: https://memory.basicmachines.co
Pick up your conversation right where you left off
AI assistants can load context from local files in a new conversation
Notes are saved locally as Markdown files in real time
No project knowledge or special prompting required
https://github.com/user-attachments/assets/a55d8238-8dd0-454a-be4c-8860dbbd0ddc
Quick Start
# Install with uv (recommended)
uv tool install basic-memory
# Configure Claude Desktop (edit ~/Library/Application Support/Claude/claude_desktop_config.json)
# Add this to your config:
{
  "mcpServers": {
    "basic-memory": {
      "command": "uvx",
      "args": [
        "basic-memory",
        "mcp"
      ]
    }
  }
}
# Now in Claude Desktop, you can:
# - Write notes with "Create a note about coffee brewing methods"
# - Read notes with "What do I know about pour over coffee?"
# - Search with "Find information about Ethiopian beans"
You can view shared context via files in ~/basic-memory (default directory location).
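If you prefer scripting the configuration change, the JSON above can be built programmatically. A minimal sketch in Python (the helper name and merge shape are my own, not part of Basic Memory; the optional `--project` flag is described later in this README):

```python
import json

def basic_memory_server(project=None):
    """Build the "basic-memory" entry for the mcpServers config section.

    project: optional project name, passed through as the --project flag
    described in the Claude Desktop section of this README.
    """
    args = ["basic-memory", "mcp"]
    if project:
        args += ["--project", project]
    return {"command": "uvx", "args": args}

# Assemble the full config dict and print it as JSON.
config = {"mcpServers": {"basic-memory": basic_memory_server()}}
print(json.dumps(config, indent=2))
```

Writing this dict to `~/Library/Application Support/Claude/claude_desktop_config.json` (on macOS) produces the same configuration as the hand-edited version above.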
Alternative Installation via Smithery
You can use Smithery to automatically configure Basic Memory for Claude Desktop:
npx -y @smithery/cli install @basicmachines-co/basic-memory --client claude

This installs and configures Basic Memory without requiring manual edits to the Claude Desktop configuration file. The Smithery server hosts the MCP server component, while your data remains stored locally as Markdown files.
Why Basic Memory?
Most LLM interactions are ephemeral - you ask a question, get an answer, and everything is forgotten. Each conversation starts fresh, without the context or knowledge from previous ones. Current workarounds have limitations:
Chat histories capture conversations but aren't structured knowledge
RAG systems can query documents but don't let LLMs write back
Vector databases require complex setups and often live in the cloud
Knowledge graphs typically need specialized tools to maintain
Basic Memory addresses these problems with a simple approach: structured Markdown files that both humans and LLMs can read and write to. The key advantages:
Local-first: All knowledge stays in files you control
Bi-directional: Both you and the LLM read and write to the same files
Structured yet simple: Uses familiar Markdown with semantic patterns
Traversable knowledge graph: LLMs can follow links between topics
Standard formats: Works with existing editors like Obsidian
Lightweight infrastructure: Just local files indexed in a local SQLite database
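The "local files indexed in a local SQLite database" idea can be sketched with Python's built-in sqlite3 module and its FTS5 full-text index. This is illustrative only, assuming an FTS5-enabled SQLite build; it is not Basic Memory's actual schema:

```python
import sqlite3

# Index a couple of notes in an in-memory FTS5 table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE VIRTUAL TABLE notes USING fts5(permalink, body)")
conn.executemany(
    "INSERT INTO notes VALUES (?, ?)",
    [
        ("coffee-brewing-methods", "Pour over gives more clarity than French press"),
        ("coffee-bean-origins", "Ethiopian Yirgacheffe has bright fruity flavors"),
    ],
)

# Query the index the way a search tool might; FTS5's default
# tokenizer is case-insensitive for ASCII terms.
rows = conn.execute(
    "SELECT permalink FROM notes WHERE notes MATCH ?", ("ethiopian",)
).fetchall()
print(rows)  # [('coffee-bean-origins',)]
```

The point is the infrastructure weight: one file-backed (or in-memory) database, no server process, no embedding pipeline.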
With Basic Memory, you can:
Have conversations that build on previous knowledge
Create structured notes during natural conversations
Have conversations with LLMs that remember what you've discussed before
Navigate your knowledge graph semantically
Keep everything local and under your control
Use familiar tools like Obsidian to view and edit notes
Build a personal knowledge base that grows over time
Sync your knowledge to the cloud with bidirectional synchronization
Authenticate and manage cloud projects with subscription validation
Mount cloud storage for direct file access
How It Works in Practice
Let's say you're exploring coffee brewing methods and want to capture your knowledge. Here's how it works:
Start by chatting normally:
I've been experimenting with different coffee brewing methods. Key things I've learned:
- Pour over gives more clarity in flavor than French press
- Water temperature is critical - around 205°F seems best
- Freshly ground beans make a huge difference... continue conversation.
Ask the LLM to help structure this knowledge:
"Let's write a note about coffee brewing methods."

The LLM creates a new Markdown file on your system (which you can see instantly in Obsidian or your editor):
---
title: Coffee Brewing Methods
permalink: coffee-brewing-methods
tags:
- coffee
- brewing
---
# Coffee Brewing Methods
## Observations
- [method] Pour over provides more clarity and highlights subtle flavors
- [technique] Water temperature at 205°F (96°C) extracts optimal compounds
- [principle] Freshly ground beans preserve aromatics and flavor
## Relations
- relates_to [[Coffee Bean Origins]]
- requires [[Proper Grinding Technique]]
- affects [[Flavor Extraction]]

The note embeds semantic content and links to other topics via simple Markdown formatting.
You see this file on your computer in real time in the current project directory (default ~/basic-memory).
Real-time sync can be enabled by running:
basic-memory sync --watch
In a chat with the LLM, you can reference a topic:
Look at `coffee-brewing-methods` for context about pour over coffee

The LLM can now build rich context from the knowledge graph. For example:
Following relation 'relates_to [[Coffee Bean Origins]]':
- Found information about Ethiopian Yirgacheffe
- Notes on Colombian beans' nutty profile
- Altitude effects on bean characteristics
Following relation 'requires [[Proper Grinding Technique]]':
- Burr vs. blade grinder comparisons
- Grind size recommendations for different methods
- Impact of consistent particle size on extraction

Each related document can lead to more context, building a rich semantic understanding of your knowledge base.
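The context-building walk above — start at a note and follow its relations out to a depth limit — can be sketched as a breadth-first traversal. The relation data here is hypothetical, and the real build_context tool works over memory:// URLs rather than a plain dict:

```python
from collections import deque

# Hypothetical relation graph: permalink -> related permalinks.
relations = {
    "coffee-brewing-methods": ["coffee-bean-origins", "proper-grinding-technique"],
    "coffee-bean-origins": ["ethiopia"],
    "proper-grinding-technique": ["burr-grinder"],
}

def build_context(start, depth):
    """Collect notes reachable from `start` within `depth` relation hops."""
    seen, order = {start}, []
    queue = deque([(start, 0)])
    while queue:
        node, d = queue.popleft()
        order.append(node)
        if d < depth:
            for nxt in relations.get(node, []):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, d + 1))
    return order

print(build_context("coffee-brewing-methods", depth=1))
# ['coffee-brewing-methods', 'coffee-bean-origins', 'proper-grinding-technique']
```

Increasing the depth pulls in second-order notes (here, "ethiopia" and "burr-grinder"), which is how a short reference to one topic can fan out into rich context.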
This creates a two-way flow where:
Humans write and edit Markdown files
LLMs read and write through the MCP protocol
Sync keeps everything consistent
All knowledge stays in local files.
Technical Implementation
Under the hood, Basic Memory:
Stores everything in Markdown files
Uses a SQLite database for searching and indexing
Extracts semantic meaning from simple Markdown patterns
Files become Entity objects
Each Entity can have Observations, or facts associated with it
Relations connect entities together to form the knowledge graph
Maintains the local knowledge graph derived from the files
Provides bidirectional synchronization between files and the knowledge graph
Implements the Model Context Protocol (MCP) for AI integration
Exposes tools that let AI assistants traverse and manipulate the knowledge graph
Uses memory:// URLs to reference entities across tools and conversations
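A rough sketch of the "extracts semantic meaning from simple Markdown patterns" step, using the observation and relation line formats shown in the next section. The regexes and function are illustrative only, not Basic Memory's actual parser:

```python
import re

# - [category] content            -> observation
# - relation_type [[Target]]      -> relation
OBSERVATION = re.compile(r"^- \[(?P<category>\w+)\] (?P<content>.+)$")
RELATION = re.compile(r"^- (?P<type>\w+) \[\[(?P<target>[^\]]+)\]\]")

def parse_note(body):
    """Split a note body into observation and relation tuples."""
    observations, relations = [], []
    for line in body.splitlines():
        if m := RELATION.match(line):
            relations.append((m["type"], m["target"]))
        elif m := OBSERVATION.match(line):
            observations.append((m["category"], m["content"]))
    return observations, relations

body = """\
- [method] Pour over provides more clarity
- relates_to [[Coffee Bean Origins]]
- requires [[Proper Grinding Technique]]
"""
obs, rels = parse_note(body)
print(obs)   # [('method', 'Pour over provides more clarity')]
print(rels)  # [('relates_to', 'Coffee Bean Origins'), ('requires', 'Proper Grinding Technique')]
```

Because the patterns are ordinary Markdown list items, the same file stays readable in any editor while remaining machine-parseable.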
The file format is just Markdown with some simple markup:
Each Markdown file has:
Frontmatter
title: <Entity title>
type: <The type of Entity> (e.g. note)
permalink: <a uri slug>
- <optional metadata> (such as tags)

Observations
Observations are facts about a topic.
They are written as Markdown list items in a special format that can include a category, tags (marked with a "#" character), and an optional context.
Observation Markdown format:
- [category] content #tag (optional context)

Examples of observations:
- [method] Pour over extracts more floral notes than French press
- [tip] Grind size should be medium-fine for pour over #brewing
- [preference] Ethiopian beans have bright, fruity flavors (especially from Yirgacheffe)
- [fact] Lighter roasts generally contain more caffeine than dark roasts
- [experiment] Tried 1:15 coffee-to-water ratio with good results
- [resource] James Hoffman's V60 technique on YouTube is excellent
- [question] Does water temperature affect extraction of different compounds differently?
- [note] My favorite local shop uses a 30-second bloom time

Relations
Relations are links to other topics. They define how entities connect in the knowledge graph.
Markdown format:
- relation_type [[WikiLink]] (optional context)

Examples of relations:
- pairs_well_with [[Chocolate Desserts]]
- grown_in [[Ethiopia]]
- contrasts_with [[Tea Brewing Methods]]
- requires [[Burr Grinder]]
- improves_with [[Fresh Beans]]
- relates_to [[Morning Routine]]
- inspired_by [[Japanese Coffee Culture]]
- documented_in [[Coffee Journal]]

Using with VS Code
Add the following JSON block to your User Settings (JSON) file in VS Code. You can do this by pressing Ctrl + Shift + P and typing Preferences: Open User Settings (JSON).
{
  "mcp": {
    "servers": {
      "basic-memory": {
        "command": "uvx",
        "args": ["basic-memory", "mcp"]
      }
    }
  }
}

Optionally, you can add it to a file called .vscode/mcp.json in your workspace. This will allow you to share the configuration with others.
{
  "servers": {
    "basic-memory": {
      "command": "uvx",
      "args": ["basic-memory", "mcp"]
    }
  }
}

You can use Basic Memory with VS Code to easily retrieve and store information while coding.
Using with Claude Desktop
Basic Memory is built using the MCP (Model Context Protocol) and works with the Claude desktop app (https://claude.ai/):
Configure Claude Desktop to use Basic Memory:
Edit your MCP configuration file (usually located at ~/Library/Application Support/Claude/claude_desktop_config.json on macOS):
{
  "mcpServers": {
    "basic-memory": {
      "command": "uvx",
      "args": [
        "basic-memory",
        "mcp"
      ]
    }
  }
}

If you want to use a specific project (see Multiple Projects below), update your Claude Desktop config:
{
  "mcpServers": {
    "basic-memory": {
      "command": "uvx",
      "args": [
        "basic-memory",
        "mcp",
        "--project",
        "your-project-name"
      ]
    }
  }
}

Sync your knowledge:
# One-time sync of local knowledge updates
basic-memory sync
# Run realtime sync process (recommended)
basic-memory sync --watch

Cloud features (optional, requires subscription):
# Authenticate with cloud
basic-memory cloud login
# Bidirectional sync with cloud
basic-memory cloud sync
# Verify cloud integrity
basic-memory cloud check
# Mount cloud storage
basic-memory cloud mount

In Claude Desktop, the LLM can now use these tools:
Content Management:
write_note(title, content, folder, tags) - Create or update notes
read_note(identifier, page, page_size) - Read notes by title or permalink
read_content(path) - Read raw file content (text, images, binaries)
view_note(identifier) - View notes as formatted artifacts
edit_note(identifier, operation, content) - Edit notes incrementally
move_note(identifier, destination_path) - Move notes with database consistency
delete_note(identifier) - Delete notes from knowledge base

Knowledge Graph Navigation:
build_context(url, depth, timeframe) - Navigate knowledge graph via memory:// URLs
recent_activity(type, depth, timeframe) - Find recently updated information
list_directory(dir_name, depth) - Browse directory contents with filtering

Search & Discovery:
search(query, page, page_size) - Search across your knowledge base

Project Management:
list_memory_projects() - List all available projects
create_memory_project(project_name, project_path) - Create new projects
get_current_project() - Show current project stats
sync_status() - Check synchronization status

Visualization:
canvas(nodes, edges, title, folder) - Generate knowledge visualizations

Example prompts to try:
"Create a note about our project architecture decisions"
"Find information about JWT authentication in my notes"
"Create a canvas visualization of my project components"
"Read my notes on the authentication system"
"What have I been working on in the past week?"

Further info
See the Documentation for more info.
License
AGPL-3.0
Contributions are welcome. See the Contributing guide for info about setting up the project locally and submitting PRs.
Star History
Built with ♥️ by Basic Machines