NotebookLM MCP Server
Chat directly with NotebookLM for zero-hallucination answers based on your own notebooks
MCP Installation • HTTP REST API • Why NotebookLM • Examples • Documentation
🚀 Two Ways to Use This Server
1️⃣ HTTP REST API (New! Recommended for n8n, Zapier, Make.com)
Use NotebookLM from any tool via HTTP REST API:
Option A: Install from npm
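For example (the package name below is a placeholder; use the name actually published on npm):

```bash
# Placeholder package name: replace with the package published on npm
npm install -g <notebooklm-mcp-package>
```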
Option B: Install from source (Required for HTTP mode)
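A typical source install looks like the sketch below; the repository URL and script names are assumptions, so follow the install guide for the exact commands:

```bash
# Placeholder URL: clone this repository
git clone <repository-url>
cd <repo-directory>

# Install dependencies and build (build script name is an assumption)
npm install
npm run build
```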
Query the API:
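A hedged sketch, assuming the server listens on localhost:3000 and exposes a POST endpoint that accepts a JSON question (port and path are assumptions; the API reference has the real ones):

```bash
# Port and endpoint path are assumptions: check deployment/docs/03-API.md
curl -X POST http://localhost:3000/ask \
  -H "Content-Type: application/json" \
  -d '{"question": "How do I configure the Gmail trigger node?"}'
```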
Perfect for:
✅ n8n workflows and automation
✅ Zapier, Make.com integrations
✅ Custom web applications
✅ Backend APIs
👉 Full HTTP setup guide: deployment/docs/01-INSTALL.md
2️⃣ MCP stdio (For Claude Code, Cursor, Codex)
Use NotebookLM directly from your AI coding assistant:
Perfect for:
✅ Claude Code, Cursor, Codex
✅ Any MCP-compatible AI assistant
✅ Direct CLI integration
The Problem
When you tell Claude Code or Cursor to "search through my local documentation", here's what happens:
Massive token consumption: Searching through documentation means reading multiple files repeatedly
Inaccurate retrieval: Searches for keywords, misses context and connections between docs
Hallucinations: When it can't find something, it invents plausible-sounding APIs
Expensive & slow: Each question requires re-reading multiple files
The Solution
Let your tools chat directly with NotebookLM — Google's zero-hallucination knowledge base powered by Gemini 2.5 that provides intelligent, synthesized answers from your docs.
The real advantage: No more manual copy-paste. Your agent/workflow asks NotebookLM directly and gets answers back. Build deep understanding through automatic follow-ups.
Why NotebookLM, Not Local RAG?
| Approach | Token Cost | Setup Time | Hallucinations | Answer Quality |
| --- | --- | --- | --- | --- |
| Feed docs to Claude | 🔴 Very high (multiple file reads) | Instant | Yes - fills gaps | Variable retrieval |
| Web search | 🟡 Medium | Instant | High - unreliable sources | Hit or miss |
| Local RAG | 🟡 Medium-High | Hours (embeddings, chunking) | Medium - retrieval gaps | Depends on setup |
| NotebookLM MCP | 🟢 Minimal | 5 minutes | Zero - refuses if unknown | Expert synthesis |
What Makes NotebookLM Superior?
Pre-processed by Gemini: Upload docs once, get instant expert knowledge
Natural language Q&A: Not just retrieval — actual understanding and synthesis
Multi-source correlation: Connects information across 50+ documents
Citation-backed: Every answer includes source references
No infrastructure: No vector DBs, embeddings, or chunking strategies needed
HTTP REST API
Quick Start
👉 See deployment/docs/01-INSTALL.md for the quick start steps.
API Endpoints
The server exposes REST endpoints to:
Check server health
Ask a question to NotebookLM
List all notebooks
Add a notebook (with live validation)
Remove a notebook
Set the active notebook
List active sessions
Close a session
👉 Full API documentation: deployment/docs/03-API.md
n8n Integration
Perfect for n8n workflows:
👉 n8n guide: deployment/docs/04-N8N-INTEGRATION.md
Background Daemon Mode
Run the HTTP server as a background process without keeping a terminal window open:
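A minimal sketch, assuming PM2 is installed and using the npm script and PM2 config referenced elsewhere in this README:

```bash
# Start the HTTP server as a PM2-managed background process
npm run daemon:start

# Or drive PM2 directly with the bundled config
pm2 start ecosystem.config.cjs
pm2 logs      # tail logs (written to logs/pm2-*.log)
pm2 stop all  # stop the daemon
```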
Features:
✅ Runs in background without terminal window
✅ Auto-restart on crash
✅ Centralized log management (logs/pm2-*.log)
✅ Memory limit protection (1GB max)
✅ Production-ready process management
Configuration: Edit ecosystem.config.cjs to customize PM2 behavior (env vars, restart policy, etc.)
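For orientation, a PM2 ecosystem file for this kind of setup generally looks like the sketch below; the app name, script path, and env values here are assumptions, and the ecosystem.config.cjs shipped with the repo is authoritative.

```js
// Illustrative sketch only: keep the values from the repo's ecosystem.config.cjs
module.exports = {
  apps: [
    {
      name: 'notebooklm-http',          // assumed process name
      script: './dist/http-server.js',  // assumed entry point
      autorestart: true,                // auto-restart on crash
      max_memory_restart: '1G',         // matches the 1GB memory limit above
      out_file: './logs/pm2-out.log',   // centralized logs under logs/
      error_file: './logs/pm2-error.log',
      env: {
        PORT: 3000,                     // assumed default port
        HEADLESS: 'true',               // set to 'false' to watch the browser
      },
    },
  ],
};
```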
Installation & Documentation
📖 Installation Guide — Step-by-step setup
🔧 Configuration — Environment variables, security
📡 API Reference — Complete endpoint documentation
📚 Notebook Library — Multi-notebook management
✅ Testing Suite — Automated validation scripts
🐛 Troubleshooting — Common issues
MCP Installation
Add to ~/.cursor/mcp.json:
Generic MCP config:
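Both entries use the same shape. A minimal sketch with placeholder command and package values (use whatever the installation step above actually provides):

```json
{
  "mcpServers": {
    "notebooklm": {
      "command": "npx",
      "args": ["-y", "<notebooklm-mcp-package>"]
    }
  }
}
```

Replace the placeholder with the real package name, or point the command at a local build of the server.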
MCP Quick Start
Install the MCP server (see above)
Authenticate (one-time)
Say in your chat (Claude/Codex): "Open NotebookLM auth setup"
A Chrome window opens → log in with Google
Create your knowledge base
Go to notebooklm.google.com → Create notebook → Upload your docs:
📄 PDFs, Google Docs, markdown files
🔗 Websites, GitHub repos
🎥 YouTube videos
📚 Multiple sources per notebook
Share: ⚙️ Share → Anyone with link → Copy
Let Claude use it
That's it. Claude now asks NotebookLM whatever it needs, building expertise before writing code.
Related Project: Claude Code Skill (by original author)
The original author PleasePrompto also created a Python-based Claude Code Skill as an alternative approach:
🔗 NotebookLM Claude Code Skill - Python skill for Claude Code
When to use which approach?
| Feature | This Project (MCP + HTTP) | Original Skill (Python) |
| --- | --- | --- |
| Protocol | MCP (Model Context Protocol) | Claude Skills |
| Installation | npm install (or install from source) | Clone into the Claude Code skills folder |
| Sessions | ✅ Persistent browser sessions | Fresh browser per query |
| Compatibility | ✅ Claude Code, Cursor, Codex, any MCP client | Claude Code only |
| HTTP API | ✅ Works with n8n, Zapier, Make.com | ❌ Not available |
| Language | TypeScript | Python |
| Use case | Long conversations, automation workflows | Quick one-off queries |
Which one should you choose?
Use this MCP project if you want:
Persistent sessions (faster repeated queries)
Compatibility with multiple tools (Cursor, Codex, etc.)
HTTP REST API for n8n/Zapier automation
TypeScript-based development
Use the original Skill if you prefer:
Python-based workflow
Simpler clone-and-use installation
Stateless queries (no session management)
Only using Claude Code locally
Both use the same Patchright browser automation technology and provide zero-hallucination answers from NotebookLM.
Real-World Example
Building an n8n Workflow Without Hallucinations
Challenge: n8n's API is new — Claude hallucinates node names and functions.
Solution:
Downloaded complete n8n documentation → merged into manageable chunks
Uploaded to NotebookLM
Told Claude: "Build me a Gmail spam filter workflow. Use this NotebookLM: [link]"
Watch the AI-to-AI conversation:
Result: Perfect workflow on first try. No debugging hallucinated APIs.
Core Features
Zero Hallucinations
NotebookLM refuses to answer if information isn't in your docs. No invented APIs.
Multi-Notebook Library
Manage multiple NotebookLM notebooks with automatic validation, duplicate detection, and smart selection.
Autonomous Research
Claude asks follow-up questions automatically, building complete understanding before coding.
Deep, Iterative Research
Claude automatically asks follow-up questions to build complete understanding
Each answer triggers deeper questions until Claude has all the details
Example: For n8n workflow, Claude asked multiple sequential questions about Gmail integration, error handling, and data transformation
HTTP REST API
Use NotebookLM from n8n, Zapier, Make.com, or any HTTP client. No MCP required.
Cross-Tool Sharing
Set up once, use everywhere. Claude Code, Codex, Cursor, n8n — all can access the same library.
Architecture
Common Commands (MCP Mode)
| Intent | Say | Result |
| --- | --- | --- |
| Authenticate | "Open NotebookLM auth setup" or "Log me in to NotebookLM" | Chrome opens for login |
| Add notebook | "Add [link] to library" | Saves notebook with metadata |
| List notebooks | "Show our notebooks" | Lists all saved notebooks |
| Research first | "Research this in NotebookLM before coding" | Multi-question session |
| Select notebook | "Use the React notebook" | Sets active notebook |
| Update notebook | "Update notebook tags" | Modify metadata |
| Remove notebook | "Remove [notebook] from library" | Deletes from library |
| View browser | "Show me the browser" | Watch live NotebookLM chat |
| Fix auth | "Repair NotebookLM authentication" | Clears and re-authenticates |
| Switch account | "Re-authenticate with different Google account" | Changes account |
| Clean restart | "Run NotebookLM cleanup" | Removes all data for fresh start |
Comparison to Alternatives
vs. Downloading docs locally
You: Download docs → Claude: "search through these files"
Problem: Claude reads thousands of files → massive token usage, often misses connections
NotebookLM: Pre-indexed by Gemini, semantic understanding across all docs
vs. Web search
You: "Research X online"
Problem: Outdated info, hallucinated examples, unreliable sources
NotebookLM: Only your trusted docs, always current, with citations
vs. Local RAG setup
You: Set up embeddings, vector DB, chunking strategy, retrieval pipeline
Problem: Hours of setup, tuning retrieval, still gets "creative" with gaps
NotebookLM: Upload docs → done. Google handles everything.
FAQ
Is it really zero hallucinations? Yes. NotebookLM is specifically designed to only answer from uploaded sources. If it doesn't know, it says so.
What about rate limits? Free tier has daily query limits per Google account. Quick account switching supported for continued research.
How secure is this? Chrome runs locally. Your credentials never leave your machine. Use a dedicated Google account if concerned.
Can I see what's happening?
Yes! Say "Show me the browser" (MCP mode) or set HEADLESS=false (HTTP mode) to watch the live NotebookLM conversation.
What makes this better than Claude's built-in knowledge? Your docs are always current. No training cutoff. No hallucinations. Perfect for new libraries, internal APIs, or fast-moving projects.
The Bottom Line
Without NotebookLM: Write code → Find it's wrong → Debug hallucinated APIs → Repeat
With NotebookLM: Research first → Write correct code → Ship faster
Stop debugging hallucinations. Start shipping accurate code.
Disclaimer
This tool automates browser interactions with NotebookLM to make your workflow more efficient. However, a few friendly reminders:
About browser automation: While I've built in humanization features (realistic typing speeds, natural delays, mouse movements) to make the automation behave more naturally, I can't guarantee Google won't detect or flag automated usage. I recommend using a dedicated Google account for automation rather than your primary account—think of it like web scraping: probably fine, but better safe than sorry!
About CLI tools and AI agents: CLI tools like Claude Code, Codex, and similar AI-powered assistants are incredibly powerful, but they can make mistakes. Please use them with care and awareness:
Always review changes before committing or deploying
Test in safe environments first
Keep backups of important work
Remember: AI agents are assistants, not infallible oracles
I built this tool for myself because I was tired of the copy-paste dance between NotebookLM and my editor. I'm sharing it in the hope it helps others too, but I can't take responsibility for any issues, data loss, or account problems that might occur. Use at your own discretion and judgment.
That said, if you run into problems or have questions, feel free to open an issue on GitHub. I'm happy to help troubleshoot!
Roadmap
Future features planned for upcoming releases:
🚀 Stateless Server Mode
Goal: Run the HTTP server without keeping a terminal window open permanently.
Planned options:
PM2 integration (Recommended): Cross-platform process manager with auto-restart, monitoring, and logs
Simple setup:
Simple setup: npm run daemon:start (uses PM2 with optimized config)
Automatic startup on system boot
Built-in log rotation and monitoring
Windows Service: Native Windows service installation via
nssmornode-windowsServerless-ready mode: Lambda/cold-start compatible with lazy browser session initialization
Status: Planned for v1.2.0
🤖 Auto-fill Notebook Metadata
Goal: Automatically generate notebook name, description, and keywords when adding a notebook.
How it works:
When adding a notebook with empty metadata, automatically ask NotebookLM:
"If you had to name this notebook in one word, what would it be?"
"Give me 10 relevant keywords for this content"
"Describe this notebook in one sentence"
Parse the response and auto-populate metadata fields
Available as both automatic mode (on add) and manual command (/notebooks/:id/auto-fill)
Benefits:
No more manual metadata entry
Consistent, AI-generated descriptions
Better notebook organization and searchability
Status: Planned for v1.2.0 or v1.3.0
💡 Have an idea?
Open a discussion to suggest new features!
Contributing
Found a bug? Have a feature idea? Open an issue or submit a PR!
See CONTRIBUTING.md for contribution guidelines.
License
MIT — Use freely in your projects.
See LICENSE for details.
Built out of frustration with hallucinated APIs, powered by Google's NotebookLM
⭐ Star on GitHub if this saves you debugging time!