Glama

add_notebook

Manually add a NotebookLM notebook to your library by specifying its URL, content description, topics, and use cases when auto-discovery is unavailable or you need precise control over metadata.

Instructions

📝 MANUAL ENTRY: Add a notebook with manually specified metadata (prefer auto_discover_notebook when possible)

When to Use

  • Auto-discovery failed or unavailable

  • User has specific metadata requirements

  • User prefers manual control

Conversation Workflow (Mandatory)

When the user says: "I have a NotebookLM with X"

FIRST: Try auto_discover_notebook for faster setup. ONLY IF the user refuses auto-discovery or it fails:

  1. Ask URL: "What is the NotebookLM URL?"

  2. Ask content: "What knowledge is inside?" (1–2 sentences)

  3. Ask topics: "Which topics does it cover?" (3–5)

  4. Ask use cases: "When should we consult it?"

  5. Propose metadata and confirm:

    • Name: [suggested]

    • Description: [from user]

    • Topics: [list]

    • Use cases: [list]

  Then ask: "Add it to your library now?"

  6. Only after explicit "Yes" → call this tool
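
The confirmation gate in steps 5–6 can be sketched as a small client-side check. This is a hedged illustration: the helper names `confirm_and_add` and `call_tool` are hypothetical and not part of the server; the real invocation goes through your MCP client.

```python
def confirm_and_add(metadata, user_confirmed, call_tool):
    """Invoke add_notebook only after the user has explicitly said "Yes"."""
    if not user_confirmed:
        # Rule: never add a notebook without explicit user permission
        return None
    # Required fields per the input schema: url, name, description, topics
    missing = [k for k in ("url", "name", "description", "topics") if not metadata.get(k)]
    if missing:
        # Rule: do not guess metadata; ask the user concisely instead
        raise ValueError(f"Ask the user for: {', '.join(missing)}")
    return call_tool("add_notebook", metadata)
```

The point of the guard is ordering: the permission check comes before any validation or tool call, mirroring rule 1 below.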

Rules

  • Do not add without user permission

  • Prefer auto_discover_notebook when possible

  • Do not guess metadata — ask concisely

  • Confirm summary before calling the tool

Example

User: "I have a notebook with n8n docs"
You: "Want me to auto-generate the metadata?" (offer auto_discover_notebook first)
User: "No, I'll specify it myself"
You: Ask URL → content → topics → use cases; propose summary
User: "Yes"
You: Call add_notebook

How to Get a NotebookLM Share Link

Visit https://notebooklm.google/ → Log in (free tier: 100 notebooks, 50 sources each, 500k words, 50 daily queries)

  1. Click "+ New" (top right) → Upload sources (docs, knowledge)

  2. Click "Share" (top right) → Select "Anyone with the link"

  3. Click "Copy link" (bottom left) → Give this link to Claude

(Upgrading to Google AI Pro/Ultra gives 5x higher limits)

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| url | Yes | The NotebookLM notebook URL | |
| name | Yes | Display name for the notebook (e.g., 'n8n Documentation') | |
| description | Yes | What knowledge/content is in this notebook | |
| topics | Yes | Topics covered in this notebook | |
| content_types | No | Types of content (e.g., ['documentation', 'examples', 'best practices']) | |
| use_cases | No | When Claude should use this notebook (e.g., ['Implementing n8n workflows']) | |
| tags | No | Optional tags for organization | |
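
For reference, a complete set of arguments matching the n8n example above might look like the following. All values are illustrative, including the placeholder URL; only `url`, `name`, `description`, and `topics` are required by the schema.

```python
# Illustrative add_notebook arguments (values are examples, not real data)
arguments = {
    "url": "https://notebooklm.google.com/notebook/...",  # placeholder: use the real share link
    "name": "n8n Documentation",
    "description": "Official n8n docs covering nodes, workflows, and expressions.",
    "topics": ["n8n", "workflow automation", "integrations"],
    # Optional fields:
    "content_types": ["documentation", "examples", "best practices"],
    "use_cases": ["Implementing n8n workflows"],
    "tags": ["automation"],
}
```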
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively communicates that this is a write operation ('Add notebook'), requires explicit user permission ('Only after explicit "Yes"'), and has specific workflow constraints (the mandatory conversation steps). However, it doesn't mention potential side effects like rate limits, authentication requirements, or error conditions that might be relevant for a tool that adds data to a library.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

While well-structured with clear sections, the description is excessively long (over 400 words). Much of the content (like the detailed 'How to Get a NotebookLM Share Link' section and extensive example) could be streamlined or moved elsewhere. The core information about the tool's purpose and usage is front-loaded, but the overall length reduces efficiency.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of a 7-parameter write operation with no annotations or output schema, the description does a good job of providing context. It covers the tool's purpose, when to use it, a detailed workflow, rules, and an example. However, it lacks information about what happens after successful execution (return values or confirmation) and doesn't address potential error scenarios or system limitations.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, so the schema already documents all 7 parameters thoroughly. The description doesn't add any additional semantic context about the parameters beyond what's in the schema descriptions. It references some parameters indirectly in the workflow (URL, description, topics, use cases) but doesn't provide new information about their meaning or usage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Add notebook with manually specified metadata' and explicitly distinguishes it from its sibling 'auto_discover_notebook' in the very first line. It specifies the verb ('Add'), resource ('notebook'), and scope ('manually specified metadata'), making it immediately distinguishable from alternatives.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides extensive and explicit guidance on when to use this tool versus alternatives. It includes a dedicated 'When to Use' section with three specific scenarios, a 'Conversation Workflow' that mandates trying auto_discover_notebook first, and explicit rules like 'Prefer auto_discover_notebook when possible' and 'Do not add without user permission.' This gives the agent clear decision criteria.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/roomi-fields/notebooklm-mcp'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.