SQLite Project Memory MCP
Supports rendering and exporting project memory into human-readable Markdown files, allowing structured data like roadmaps and tasks to be viewed as generated documents.
Utilizes a SQLite database as the authoritative relational store for project memory, providing tools to manage entities, relationships, and content through structured SQL-backed operations.
SQLite-backed MCP server for storing project memory as a graph-friendly relational core.
The server is designed around four rules:
Everything is an entity.
Everything can relate to everything.
State is authoritative.
Narrative is separate from structure.
Instead of generating and maintaining many parallel documents, the MCP server stores project state in SQLite and exposes tools for safe access. Files such as todo.md or roadmap.md can be generated later as views, not treated as the source of truth.
What It Stores
The schema supports project memory such as:
tasks
file metadata
dependencies
decisions
roadmap items
architecture elements
plans
notes
todos
reasoning records
snapshots and audit history
Everything is modeled through generic tables:
entities
attributes
relationships
content
events
snapshots
snapshot_entities
tags
The server also creates an FTS5 index for content.body when available.
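The FTS5 arrangement can be sketched with Python's built-in sqlite3 module. This is an illustration of an external-content index over content.body using the table names listed above, not the server's actual DDL:

```python
import sqlite3

# Minimal sketch: a content table plus an external-content FTS5 index over
# content.body. Created only when the SQLite build supports FTS5.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE content (id TEXT PRIMARY KEY, body TEXT)")
try:
    conn.execute(
        "CREATE VIRTUAL TABLE content_fts USING fts5("
        "body, content='content', content_rowid='rowid')"
    )
except sqlite3.OperationalError:
    # FTS5 unavailable in this SQLite build; the server would skip the index.
    content_fts_available = False
else:
    content_fts_available = True
    conn.execute("INSERT INTO content VALUES ('note.1', 'roadmap for auth flow')")
    # Mirror existing rows into the index, then run a full-text match.
    conn.execute("INSERT INTO content_fts(rowid, body) SELECT rowid, body FROM content")
    rows = conn.execute(
        "SELECT body FROM content_fts WHERE content_fts MATCH 'roadmap'"
    ).fetchall()
```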
Key MCP Tools
create_entity
upsert_entity
update_entity
get_entity
list_entities
find_similar_entities
resolve_entity_by_name
get_or_create_entity
upsert_attributes
set_tags
add_relationship
connect_entities
list_relationships
add_content
append_content
search_content
create_snapshot
get_snapshot
get_project_overview
get_recent_activity
get_database_health
prune_content_retention
get_entity_graph
bootstrap_project_memory
run_read_query
render_markdown_views
export_markdown_views
server_info
Resources And Prompt
memory://schema
memory://overview
memory://recent-activity
entity://{entity_id}
prompt: project_memory_policy
Run
Option 1: pip
python -m venv .venv
.\.venv\Scripts\Activate.ps1
python -m pip install -e .
python -m sqlite_mcp_server

Option 2: uv
uv venv
.\.venv\Scripts\Activate.ps1
uv pip install -e .
python -m sqlite_mcp_server

The default database path is data/project_memory.db under the repository root.
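As a quick post-install sanity check, the following sketch opens the database at the default path and lists its tables. This is an illustration, not part of the server; the path handling mirrors the SQLITE_MCP_DB_PATH override described in the configuration section:

```python
import os
import sqlite3

# Open the database at the documented default path (overridable via
# SQLITE_MCP_DB_PATH) and list the generic tables the schema defines.
db_path = os.environ.get("SQLITE_MCP_DB_PATH", "data/project_memory.db")
if os.path.exists(db_path):
    with sqlite3.connect(db_path) as conn:
        tables = [row[0] for row in conn.execute(
            "SELECT name FROM sqlite_master WHERE type = 'table' ORDER BY name"
        )]
    print(tables)  # expect entities, attributes, relationships, content, ...
else:
    tables = []  # server has not been run yet
```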
Configuration
Environment variables:
SQLITE_MCP_DB_PATH: override the SQLite database file path.
SQLITE_MCP_TRANSPORT: stdio or streamable-http.
SQLITE_MCP_EXPORT_DIR: default output directory for generated markdown views.
Example:
$env:SQLITE_MCP_DB_PATH = "D:\memory\project.db"
$env:SQLITE_MCP_TRANSPORT = "stdio"
python -m sqlite_mcp_server

Design Notes
Entity ids, relationship ids, tags, types, and attribute keys are validated.
Duplicate entities are prevented by primary key.
Duplicate edges are prevented by a unique constraint on (from_entity, to_entity, type).
Narrative content is stored separately from authoritative state.
Mutating operations record audit events.
Raw arbitrary SQL write access is intentionally not exposed through MCP tools.
A constrained read-only SQL tool is available for diagnostics and ad hoc retrieval.
Markdown files are treated as generated views, not storage.
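The duplicate-edge guarantee above can be sketched in plain SQLite. Table and column names mirror this README; the server's actual DDL may differ:

```python
import sqlite3

# A unique constraint on (from_entity, to_entity, type) makes repeated
# graph writes idempotent: the same edge inserted three times yields one row.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE relationships (
        id INTEGER PRIMARY KEY,
        from_entity TEXT NOT NULL,
        to_entity   TEXT NOT NULL,
        type        TEXT NOT NULL,
        UNIQUE (from_entity, to_entity, type)
    )
""")
edge = ("task.auth-flow", "file.src.server", "implements")
for _ in range(3):  # a connect_entities-style repeated write
    conn.execute(
        "INSERT OR IGNORE INTO relationships (from_entity, to_entity, type) "
        "VALUES (?, ?, ?)",
        edge,
    )
count = conn.execute("SELECT COUNT(*) FROM relationships").fetchone()[0]
```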
AI-First Tooling Guidance
If this server is going to be called frequently by an AI, the useful surface is not a single raw SQL tool. The practical surface is:
bootstrap_project_memory to initialize a project root and standard memory areas.
upsert_entity so the AI can write idempotently instead of guessing whether to create or update.
connect_entities so repeated graph writes do not produce duplicate edges.
append_content so narrative memory can be added without the AI having to mint content ids every time.
get_recent_activity so an AI can resume context quickly after a new session.
run_read_query for controlled read-only analytics when the built-in tools are not enough.
render_markdown_views and export_markdown_views when human-readable todo, roadmap, plan, architecture, decisions, or notes files are needed.
The intended pattern is:
Use explicit domain tools for writes.
Use run_read_query only for read-only inspection.
Generate markdown views only when a person or downstream tool needs a document.
Keep SQLite authoritative.
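A hypothetical session following this pattern might issue calls like the following. The tool names come from this README, but the argument shapes are illustrative assumptions, not the server's exact schemas:

```python
# Illustrative write-then-read tool-call sequence for one AI session.
# Argument names are assumptions for the sake of the example.
calls = [
    ("bootstrap_project_memory", {"project_name": "demo"}),
    ("upsert_entity", {"id": "task.auth-flow", "type": "task",
                       "attributes": {"status": "in_progress"}}),
    ("connect_entities", {"from": "task.auth-flow",
                          "to": "file.src.server",
                          "type": "implements"}),
    ("append_content", {"entity_id": "task.auth-flow",
                        "content_type": "note",
                        "body": "Switched auth to token rotation."}),
    ("get_recent_activity", {"limit": 20}),
]
```

Markdown export would come only at the end, and only if a human needs a document.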
For long-running AI usage, the hygiene tools matter as much as the write tools:
find_similar_entities helps avoid creating duplicate memory objects.
resolve_entity_by_name lets the AI reuse existing entities when a human-style name is all it has.
get_or_create_entity gives the AI a safer name-first workflow with stable id generation.
get_database_health reports duplicate candidates, invalid statuses, low-signal attributes, and retention pressure.
prune_content_retention provides a controlled cleanup path for high-volume reasoning and log content.
Suggested Modeling Conventions
Use stable ids such as task.auth-flow, file.src.server, decision.schema-graph-core.
Keep type broad and durable: task, file, module, decision, feature, plan, note.
Put volatile metadata in attributes, not in new tables.
Use content_type to distinguish note, spec, analysis, reasoning, log.
Use relationships deliberately: depends_on, implements, blocks, calls, owns.
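A small illustration of these conventions as plain data. The ids, types, and relationship types are examples from this README; the record shapes themselves are assumptions:

```python
# Stable dotted ids, broad durable types, volatile metadata in attributes.
entities = {
    "task.auth-flow": {
        "type": "task",
        "attributes": {"status": "in_progress", "priority": "high"},
    },
    "file.src.server": {
        "type": "file",
        "attributes": {"path": "src/server.py"},  # hypothetical path
    },
    "decision.schema-graph-core": {
        "type": "decision",
        "attributes": {"status": "accepted"},
    },
}

# Deliberate, typed edges between stable ids.
relationships = [
    ("task.auth-flow", "file.src.server", "implements"),
    ("file.src.server", "decision.schema-graph-core", "depends_on"),
]
```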