Project Tessera is a local, private AI knowledge layer that captures, stores, and serves knowledge across AI tools and sessions. All processing (embeddings, storage) runs locally via LanceDB and fastembed/ONNX — no external API calls.
**Search & Retrieval**

- `search_documents` — Hybrid semantic + keyword search across indexed documents (PRDs, decision logs, session logs, etc.), with optional filtering by project or type
- `list_sources` — List all indexed files in the workspace
- `read_file` — Read the full contents of any file
**Memory & Learning**

- `remember` — Save knowledge, decisions, or preferences for cross-session persistence
- `recall` — Search and retrieve memories from past conversations
- `learn` — Auto-save and immediately index new insights for future search
**File Organization**

- `organize_files` — Move, rename, or archive workspace files
- `suggest_cleanup` — Detect and suggest cleanup for clutter, backups, large files, and empty directories
**Project & Decision Tracking**

- `project_status` — Get recent changes and file statistics
- `extract_decisions` — Pull past decisions from session and decision logs
- `audit_prd` — Audit a PRD for quality and completeness across a 13-section structure
**Knowledge Graphs**

- `knowledge_graph` — Generate a Mermaid diagram of relationships between documents and concepts
- `explore_connections` — Show connections for a specific document or concept
**Indexing & Sync**

- `ingest_documents` — Full rebuild of the local vector index
- `sync_documents` — Incremental sync of only new, changed, or deleted files
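Incremental sync comes down to comparing each file's modification time against the timestamp of the last sync. A minimal sketch of that idea in Python (the function name, state handling, and extension filter here are illustrative, not Tessera's actual internals):

```python
import os

def find_changed_files(root, last_sync, exts=(".md", ".txt")):
    """Return files under root modified after the last_sync timestamp."""
    changed = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if not name.endswith(exts):
                continue
            path = os.path.join(dirpath, name)
            # Only files touched since the last sync need re-indexing.
            if os.path.getmtime(path) > last_sync:
                changed.append(path)
    return changed
```

A real implementation would also track deletions (files present at the last sync but gone now) and persist `last_sync` between runs.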
Integrates with Claude Desktop via MCP, with planned HTTP API support for other platforms (ChatGPT, Gemini).
# Tessera

**Personal Knowledge Layer for AI. Own your memory across every AI tool.**
You use Claude, ChatGPT, Gemini, Copilot. Each conversation generates knowledge that disappears when the session ends. Tessera captures that knowledge, stores it locally, and serves it back to any AI. Your memory, your machine, your data.
## What makes Tessera different
- **Auto-learning** -- Tessera records every interaction and extracts decisions, preferences, and facts automatically. No manual "remember this."
- **Interface-agnostic core** -- One knowledge engine, multiple interfaces. MCP today, HTTP API for ChatGPT/Gemini/extensions coming next.
- **Cross-session memory** -- The AI remembers your decisions and context between conversations.
- **100% local** -- No cloud, no API keys, no data leaving your machine. LanceDB + fastembed/ONNX.
- **Hybrid search** -- Semantic + keyword search with reranking, not just vector similarity.
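Hybrid ranking can be pictured as a weighted blend of a semantic similarity score and a keyword score per document. A toy sketch under that assumption (the 0.7 default mirrors the `reranker_weight` setting shown in the configuration section; the scoring inputs and function name are stand-ins, not Tessera's actual API):

```python
def hybrid_rank(docs, semantic_scores, keyword_scores, weight=0.7):
    """Blend semantic and keyword scores; higher blended score ranks first."""
    blended = {
        doc: weight * semantic_scores[doc] + (1 - weight) * keyword_scores[doc]
        for doc in docs
    }
    return sorted(docs, key=lambda d: blended[d], reverse=True)
```

With `weight=1.0` this degenerates to pure vector similarity; lowering it lets exact keyword matches pull documents up the ranking.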
Related MCP server: MCP VectorStore Server
## Architecture
```
+-----------------+
| src/core.py     |  Business logic (35 functions)
|                 |  Search, memory, knowledge graph,
|                 |  auto-extract, interaction log
+-----------------+
      /    |    \
+-------------+  +----------------+  +----------+
| mcp_server  |  | http_server.py |  | cli.py   |
| (stdio/MCP) |  | (REST API)     |  | (CLI)    |
| Claude      |  | ChatGPT,       |  |          |
| Desktop     |  | Gemini, etc.   |  |          |
+-------------+  +----------------+  +----------+
                     (planned)
```

Core engine:

```
+--------------------------------------------------+
| LanceDB (vectors) | SQLite (metadata, analytics) |
| fastembed/ONNX (local embeddings, no API keys)   |
| Auto-extract (pattern-based fact detection)      |
| Interaction log (every tool call recorded)       |
+--------------------------------------------------+
```

One core, multiple interfaces. The same knowledge base works regardless of which AI tool you use.
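Pattern-based fact detection usually means matching sentences against a small set of templates and tagging each hit with a category. A hedged sketch of what such a detector could look like (the regex patterns and category names here are illustrative, not Tessera's actual extraction rules):

```python
import re

# Illustrative sentence templates mapped to memory categories.
PATTERNS = [
    (re.compile(r"\bwe decided to\b", re.I), "decision"),
    (re.compile(r"\bi prefer\b", re.I), "preference"),
    (re.compile(r"\bnote that\b", re.I), "fact"),
]

def detect_facts(text):
    """Return (category, sentence) pairs for sentences matching a template."""
    results = []
    for sentence in re.split(r"(?<=[.!?])\s+", text):
        for pattern, category in PATTERNS:
            if pattern.search(sentence):
                results.append((category, sentence.strip()))
                break  # one category per sentence
    return results
```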
## Get started
### 1. Install
```shell
pip install project-tessera
```

Or with uv:

```shell
uvx --from project-tessera tessera setup
```

### 2. Setup

```shell
tessera setup
```

This does everything:
Creates a workspace config
Downloads the embedding model (~220MB, first time only)
Configures Claude Desktop automatically
### 3. Restart Claude Desktop
Ask Claude about your documents. It searches automatically.
## Supported file types (40+)
| Category | Extensions | Install |
| --- | --- | --- |
| Documents | | included |
| Office | | |
| Code | | included |
| Config | | included |
| Web | | included |
| Images | | |
## Tools (39)
### Search
| Tool | What it does |
| --- | --- |
| `search_documents` | Semantic + keyword hybrid search across all docs |
| | Search documents AND memories in one call |
| | Full file view (CSV as table, XLSX per sheet, etc.) |
| `read_file` | Read any file's full content |
| `list_sources` | See what's indexed |
### Memory
| Tool | What it does |
| --- | --- |
| `remember` | Save knowledge that persists across sessions |
| `recall` | Search past memories from previous conversations |
| `learn` | Save and immediately index new knowledge |
| | Auto-extract decisions/facts from the current session |
| | Browse saved memories |
| | Delete a specific memory |
| | Batch export all memories as JSON |
| | Batch import memories from JSON |
| | List all unique tags with counts |
| | Filter memories by specific tag |
| | List auto-detected categories (decision/preference/fact) |
| | Filter memories by category |
### Knowledge graph
| Tool | What it does |
| --- | --- |
| | Find documents similar to a given file |
| `knowledge_graph` | Build a Mermaid diagram of document relationships |
| `explore_connections` | Show connections around a specific topic |
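Turning document relationships into a Mermaid diagram is essentially string assembly over an edge list. A small sketch of that idea (the edge data and function name are illustrative, not Tessera's actual code):

```python
def to_mermaid(edges):
    """Render (source, target) document pairs as a Mermaid flowchart."""
    lines = ["graph TD"]
    for src, dst in edges:
        lines.append(f"    {src} --> {dst}")
    return "\n".join(lines)
```

The resulting text can be pasted into any Mermaid renderer (or a fenced `mermaid` block) to visualize the graph.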
### Auto-learn
| Tool | What it does |
| --- | --- |
| | Extract and save knowledge from the current session |
| | Turn auto-learning on/off or check status |
| | Review recently auto-learned memories |
| | View tool calls from current/past sessions |
| | Session history with interaction counts |
### Workspace
| Tool | What it does |
| --- | --- |
| `ingest_documents` | Index documents (first-time or full rebuild) |
| `sync_documents` | Incremental sync (only changed files) |
| `project_status` | Recent changes per project |
| `extract_decisions` | Find past decisions from logs |
| `audit_prd` | Check PRD quality (13-section structure) |
| `organize_files` | Move, rename, archive files |
| `suggest_cleanup` | Detect backup files, empty dirs, misplaced files |
| | Server health: tracked files, sync history, cache |
| | Comprehensive workspace diagnostics |
| | Search usage patterns, top queries, response times |
| | Detect stale documents older than N days |
## CLI
```shell
tessera setup        # One-command setup
tessera init         # Interactive setup
tessera ingest       # Index all sources
tessera sync         # Re-index changed files
tessera check        # Workspace health
tessera status       # Project status
tessera install-mcp  # Configure Claude Desktop
tessera version      # Show version
```

## How it works
```
Documents (Markdown, CSV, XLSX, DOCX, PDF)
        |
        v
Parse & chunk --> Embed locally (fastembed/ONNX) --> LanceDB (local vector DB)
        |
        v
src/core.py (search, memory, knowledge graph, auto-extract)
        |
        v
MCP server (Claude Desktop) / HTTP API (ChatGPT, Gemini, extensions)
```

Everything runs on your machine. No external API calls for search or embedding.
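The parse-and-chunk step above can be sketched as a sliding window with overlap, matching the `chunk_size`/`chunk_overlap` semantics from the configuration section (the function itself is a hypothetical illustration, not Tessera's implementation):

```python
def chunk_text(text, chunk_size=1024, chunk_overlap=100):
    """Split text into fixed-size chunks; consecutive chunks share an overlap."""
    step = chunk_size - chunk_overlap
    chunks = []
    for start in range(0, len(text), step):
        chunk = text[start:start + chunk_size]
        if chunk:
            chunks.append(chunk)
        # Stop once the window has reached the end of the text.
        if start + chunk_size >= len(text):
            break
    return chunks
```

Each chunk would then be embedded locally and stored as one vector row, with the overlap preserving context across chunk boundaries.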
## Claude Desktop config
With uvx (recommended):
```json
{
  "mcpServers": {
    "tessera": {
      "command": "uvx",
      "args": ["--from", "project-tessera", "tessera-mcp"]
    }
  }
}
```

With pip:

```json
{
  "mcpServers": {
    "tessera": {
      "command": "tessera-mcp"
    }
  }
}
```

Config location:

- macOS: `~/Library/Application Support/Claude/claude_desktop_config.json`
- Windows: `%APPDATA%\Claude\claude_desktop_config.json`
## Configuration
`tessera setup` creates `workspace.yaml`. All parameters are tunable:
```yaml
workspace:
  root: /Users/you/Documents
  name: my-workspace
  sources:
    - path: .
      type: document
search:
  reranker_weight: 0.7   # Semantic vs keyword balance
  max_top_k: 50          # Max results per search
ingestion:
  chunk_size: 1024       # Text chunk size
  chunk_overlap: 100     # Overlap between chunks
watcher:
  poll_interval: 30.0    # Seconds between scans
  debounce: 5.0          # Wait before syncing
```

Or skip config entirely -- Tessera auto-detects your workspace. Set `TESSERA_WORKSPACE=/path/to/docs` to specify a folder.
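One way to model that auto-detection is a simple precedence chain: an explicit `TESSERA_WORKSPACE` environment variable wins, then a configured root, then the current directory. The ordering below is an assumption for illustration, not a description of Tessera's actual resolution logic:

```python
import os

def resolve_workspace(config_root=None):
    """Pick the workspace root: env var > configured root > current directory."""
    env = os.environ.get("TESSERA_WORKSPACE")
    if env:
        return env
    if config_root:
        return config_root
    return os.getcwd()
```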
## Roadmap
See ROADMAP.md for the full plan from v0.6 to v1.0.
| Phase | Version | What changes |
| --- | --- | --- |
| Sponge | v0.7 | Manual memory becomes automatic learning |
| Radar | v0.8 | Reactive search becomes proactive intelligence |
| Gateway | v0.9 | MCP-only becomes multi-interface (HTTP API) |
| Cortex | v1.0 | Search tool becomes Claude's persistent brain |
## License
AGPL-3.0 -- see LICENSE.
Commercial licensing: bessl.framework@gmail.com