Uses Hugging Face models to generate local embeddings for documents, enabling semantic search and token-efficient retrieval within the notebook library.
Enables the server to chunk, index, and perform semantic searches on Markdown files stored in notebook collections.
Integrates with Ollama as a local fallback for generating text embeddings used in document indexing and search.
Click on "Install Server".
Wait a few minutes for the server to deploy. Once ready, it will show a "Started" state.
In the chat, type
@followed by the MCP server name and your instructions, e.g., "@Notebook Library MCP Serversearch the Research notebook for any mentions of transformer models"
That's it! The server will respond to your query, and you can continue using it as needed.
Notebook Library MCP Server
Token-efficient document retrieval for substrate AI agents. Drop PDFs, text files, and markdown into notebook folders — they get chunked, embedded, and indexed for semantic search. Queries return only the most relevant passages (~2,500 tokens) instead of loading entire documents (50,000+).
What It Does
Your AI agent gets a notebook_library tool with these actions:
| Action | Description |
| --- | --- |
|  | See all available notebooks |
|  | Create a new notebook collection |
|  | Semantic search within a notebook (the main one!) |
|  | List documents in a notebook |
|  | Deep-read a specific document chunk by chunk |
|  | Get statistics about a notebook |
| `sync_notebook` | Re-sync after adding/changing files |
|  | Remove a document from the search index |
Supported file formats: .pdf, .txt, .md, .text, .markdown
Architecture
Embedding strategy (multi-tier fallback):
1. Hugging Face (`jinaai/jina-embeddings-v2-base-de`) — local, free, multilingual
2. Ollama (`nomic-embed-text`) — local fallback if HF fails
No external API keys needed. Everything runs locally.
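In code, the fallback chain amounts to "try the local Hugging Face model; only call Ollama if that raises." A minimal sketch (the function structure is illustrative, not the server's actual code):

```python
from typing import List

def embed_texts(texts: List[str]) -> List[List[float]]:
    """Try the local Hugging Face model first; fall back to Ollama on any failure."""
    try:
        # Primary: jina-embeddings-v2 ships an encode() helper via trust_remote_code.
        from transformers import AutoModel
        model = AutoModel.from_pretrained(
            "jinaai/jina-embeddings-v2-base-de", trust_remote_code=True
        )
        return model.encode(texts).tolist()
    except Exception:
        # Fallback: local Ollama server running nomic-embed-text.
        import ollama
        return [
            ollama.embeddings(model="nomic-embed-text", prompt=t)["embedding"]
            for t in texts
        ]
```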
Setup Guide
1. Install Dependencies
From your substrate root:
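Assuming the substrate ships a standard requirements.txt, installation looks like:

```bash
pip install -r requirements.txt

# Or install the key packages directly:
pip install chromadb==0.4.18 transformers torch ollama PyMuPDF watchdog
```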
Key dependencies:
- `chromadb==0.4.18` — vector database
- `transformers` + `torch` — Hugging Face embeddings (primary)
- `ollama` — embedding fallback
- `PyMuPDF` — PDF text extraction
- `watchdog` — file system monitoring
Note: First run will download the Hugging Face embedding model (~270MB). This is a one-time download.
2. Create Data Directories
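data/notebooks is the default notebook location; the vector database path below is an assumption, so match it to whatever you configure for ChromaDB storage:

```bash
mkdir -p data/notebooks
mkdir -p data/chroma_db   # assumed path; match your ChromaDB storage setting
```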
3. Copy the MCP Server Files
Copy the entire mcp_servers/notebook_library/ directory into your substrate:
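For example, with the source path as a placeholder for wherever you downloaded the server files:

```bash
cp -r /path/to/download/mcp_servers/notebook_library mcp_servers/
```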
4. Copy the Tool Wrapper
Copy these two files into your backend/tools/ directory:
- backend/tools/notebook_library_tool.py — The tool function your consciousness loop calls. This imports NotebookManager directly (no subprocess).
- backend/tools/notebook_library_tool_schema.json — The tool schema so your agent knows how to call it.
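For example (the source path is a placeholder):

```bash
cp /path/to/download/backend/tools/notebook_library_tool.py backend/tools/
cp /path/to/download/backend/tools/notebook_library_tool_schema.json backend/tools/
```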
5. Register the Tool in Your Consciousness Loop
Three integration points:
a) Import in integration_tools.py
Add to your imports:
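Assuming the wrapper file exports a function named notebook_library_tool:

```python
# Adjust the package path to your project layout; the function name is an assumption.
from tools.notebook_library_tool import notebook_library_tool
```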
Add the wrapper method to your IntegrationTools class:
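A minimal sketch using the import above, assuming the wrapper accepts keyword arguments and returns a dict:

```python
class IntegrationTools:
    # ... existing tool methods ...

    def notebook_library(self, **kwargs) -> dict:
        # Thin passthrough; adapt the signature to match your wrapper.
        return notebook_library_tool(**kwargs)
```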
Add 'notebook_library_tool' to your tool schema loading list so the JSON schema gets picked up.
b) Add tool call handler in consciousness_loop.py
In your tool execution block (where you handle elif tool_name == "..." cases), add:
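A sketch that mirrors that pattern (the argument-unpacking convention is an assumption; copy whatever your other handlers do):

```python
elif tool_name == "notebook_library":
    result = self.integration_tools.notebook_library(**tool_args)
```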
c) Verify schema loading
The tool schema file (notebook_library_tool_schema.json) must be in backend/tools/ alongside your other tool schemas. The schema loader should pick it up automatically if it follows the same pattern as your other tools.
6. Add Documents
Create notebook folders and drop files in:
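For example, using the Research notebook mentioned earlier (the file name is a placeholder):

```bash
mkdir -p data/notebooks/Research
cp ~/Downloads/attention-paper.pdf data/notebooks/Research/
```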
Documents are auto-ingested when your agent first queries the notebook, or you can trigger a manual sync via the sync_notebook action.
Environment Variables (Optional)
All have sensible defaults. Override only if needed:
| Variable | Default | Description |
| --- | --- | --- |
|  | `data/notebooks` | Where notebook folders live |
|  |  | Vector database storage |
| `OLLAMA_BASE_URL` |  | Ollama server (fallback embeddings) |
|  | `nomic-embed-text` | Ollama model name |
|  | `2000` | Characters per chunk |
|  | `200` | Overlap between chunks |
Important: Update OLLAMA_BASE_URL to point to your own Ollama instance if you're using the Ollama fallback. The default points to the original developer's local network.
How It Works
Ingestion: Documents are split into chunks (~2000 chars each with 200 char overlap), embedded using Hugging Face or Ollama, and stored in ChromaDB collections (one per notebook).
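Overlapping character chunks can be produced with a simple sliding window; a sketch (names and exact boundaries are illustrative):

```python
def chunk_text(text: str, chunk_size: int = 2000, overlap: int = 200) -> list[str]:
    """Split text into ~chunk_size-char pieces, each sharing `overlap` chars with its predecessor."""
    step = chunk_size - overlap
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]
```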
Querying: Your agent's query gets embedded with the same model, then ChromaDB finds the most similar chunks via cosine similarity. Only the top N passages are returned (default 5).
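With ChromaDB this lookup is a single query call; a sketch reusing the embed_texts helper sketched under Architecture (paths and collection setup are assumptions):

```python
import chromadb

client = chromadb.PersistentClient(path="data/chroma_db")   # assumed storage path
collection = client.get_or_create_collection(
    "Research", metadata={"hnsw:space": "cosine"}           # cosine similarity, as described
)

hits = collection.query(
    query_embeddings=embed_texts(["transformer model architectures"]),
    n_results=5,                                            # top N passages (default 5)
)
for passage in hits["documents"][0]:
    print(passage)
```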
File tracking: A manifest system (MD5 hashes) tracks which files have been ingested. Changed files get re-processed; unchanged files are skipped.
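The manifest check reduces to hashing file bytes and comparing against the stored value; a sketch (the real manifest format isn't shown here):

```python
import hashlib
from pathlib import Path

def file_md5(path: Path) -> str:
    return hashlib.md5(path.read_bytes()).hexdigest()

def needs_ingest(path: Path, manifest: dict[str, str]) -> bool:
    """True if the file is new or its content changed since it was last ingested."""
    return manifest.get(str(path)) != file_md5(path)
```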
File watching: A watchdog-based file watcher monitors notebook folders and auto-ingests new/modified files with a 2-second debounce.
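Debouncing means restarting a 2-second timer on every event and only ingesting when it expires; a sketch using watchdog (class name and wiring are illustrative):

```python
import threading
from watchdog.events import FileSystemEventHandler
from watchdog.observers import Observer

class DebouncedIngest(FileSystemEventHandler):
    """Call ingest_fn once, 2 s after the last file event in a burst."""
    def __init__(self, ingest_fn, delay: float = 2.0):
        self.ingest_fn, self.delay = ingest_fn, delay
        self._timer = None

    def on_any_event(self, event):
        if event.is_directory:
            return
        if self._timer is not None:
            self._timer.cancel()  # reset the debounce window
        self._timer = threading.Timer(self.delay, self.ingest_fn)
        self._timer.start()

observer = Observer()
observer.schedule(DebouncedIngest(lambda: print("re-ingest")), "data/notebooks", recursive=True)
observer.start()
```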
Example Agent Usage
Once integrated, your agent can use it like:
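The exact action names aren't preserved in this document, so the call below is hypothetical; the shape follows the wrapper sketched in step 5:

```python
# Hypothetical call; substitute the real action name from the tool schema.
result = integration_tools.notebook_library(
    action="search",                         # assumed name of the semantic-search action
    notebook="Research",
    query="mentions of transformer models",
)
print(result)                                # returns only the most relevant passages
```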
Troubleshooting
"No notebooks found" — Make sure data/notebooks/ exists and has at least one subfolder with files in it.
Slow first query — The first query to a notebook triggers ingestion (chunking + embedding all documents). Subsequent queries are fast. For large collections, run sync_notebook first.
Embedding model download — First run downloads the Jina embeddings model (~270MB). If this fails behind a firewall, the system falls back to Ollama. Make sure either HF model access or an Ollama instance is available.
ChromaDB version mismatch — Pin to chromadb==0.4.18. Newer versions may have breaking API changes.
OLLAMA_BASE_URL — If you see Ollama connection errors and you're not using Ollama, that's fine — it's just the fallback failing after HF already succeeded. If HF also fails, update this URL to your Ollama instance.