notebooklm

📁 teng-lin/notebooklm-py 📅 Jan 18, 2026

Total installs: 70 · Weekly installs: 67 · Site-wide rank: #3138

Install command:

npx skills add https://github.com/teng-lin/notebooklm-py --skill notebooklm

Agent install distribution:

claude-code 55 · opencode 51 · codex 44 · gemini-cli 41 · cursor 36 · antigravity 34

Skill Documentation

NotebookLM Automation

Complete programmatic access to Google NotebookLM—including capabilities not exposed in the web UI. Create notebooks, add sources (URLs, YouTube, PDFs, audio, video, images), chat with content, generate all artifact types, and download results in multiple formats.

Installation

From PyPI (Recommended):

pip install notebooklm-py

From GitHub (use latest release tag, NOT main branch):

# Get the latest release tag (using curl)
LATEST_TAG=$(curl -s https://api.github.com/repos/teng-lin/notebooklm-py/releases/latest | grep '"tag_name"' | cut -d'"' -f4)
pip install "git+https://github.com/teng-lin/notebooklm-py@${LATEST_TAG}"

⚠️ DO NOT install from main branch (pip install git+https://github.com/teng-lin/notebooklm-py). The main branch may contain unreleased/unstable changes. Always use PyPI or a specific release tag, unless you are testing unreleased features.

After installation, install the Claude Code skill:

notebooklm skill install

Prerequisites

IMPORTANT: Before using any command, you MUST authenticate:

notebooklm login          # Opens browser for Google OAuth
notebooklm list           # Verify authentication works

If commands fail with authentication errors, re-run notebooklm login.

CI/CD, Multiple Accounts, and Parallel Agents

For automated environments, multiple accounts, or parallel agent workflows:

Variable Purpose
NOTEBOOKLM_HOME Custom config directory (default: ~/.notebooklm)
NOTEBOOKLM_AUTH_JSON Inline auth JSON – no file writes needed

CI/CD setup: Set NOTEBOOKLM_AUTH_JSON from a secret containing your storage_state.json contents.

Multiple accounts: Use different NOTEBOOKLM_HOME directories per account.

Parallel agents: The CLI stores notebook context in a shared file (~/.notebooklm/context.json). Multiple concurrent agents using notebooklm use can overwrite each other’s context.

Solutions for parallel workflows (a combined sketch follows this list):

  1. Always use explicit notebook ID (recommended): Pass -n <notebook_id> (for wait/download commands) or --notebook <notebook_id> (for others) instead of relying on use
  2. Per-agent isolation: Set unique NOTEBOOKLM_HOME per agent: export NOTEBOOKLM_HOME=/tmp/agent-$ID
  3. Use full UUIDs: Avoid partial IDs in automation (they can become ambiguous)
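
A minimal sketch of a per-agent or CI setup combining these pieces (AGENT_ID, NOTEBOOKLM_AUTH_SECRET, NOTEBOOK_ID, and ARTIFACT_ID are illustrative placeholders; the environment variables and flags are the documented ones):

# Isolate state per agent/job and pass auth inline from a secret
export NOTEBOOKLM_HOME="/tmp/agent-$AGENT_ID"
export NOTEBOOKLM_AUTH_JSON="$NOTEBOOKLM_AUTH_SECRET"   # contents of storage_state.json
notebooklm list --json                                  # verify auth before doing work

# Address the notebook explicitly instead of relying on `use`
notebooklm source add "https://example.com/article" --notebook "$NOTEBOOK_ID" --json
notebooklm artifact wait "$ARTIFACT_ID" -n "$NOTEBOOK_ID" --timeout 600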

Agent Setup Verification

Before starting workflows, verify the CLI is ready (a scripted check follows this list):

  1. notebooklm status → Should show “Authenticated as: email@…”
  2. notebooklm list --json → Should return valid JSON (even if empty notebooks list)
  3. If either fails → Run notebooklm login
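
A scripted version of this check might look like the following (a sketch; it keys off the “Authenticated as:” text shown by status):

notebooklm status | grep -q "Authenticated as:" || { echo "Run: notebooklm login" >&2; exit 1; }
notebooklm list --json > /dev/null || { echo "Run: notebooklm login" >&2; exit 1; }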

When This Skill Activates

Explicit: User says “/notebooklm”, “use notebooklm”, or mentions the tool by name

Intent detection: Recognize requests like:

  • “Create a podcast about [topic]”
  • “Summarize these URLs/documents”
  • “Generate a quiz from my research”
  • “Turn this into an audio overview”
  • “Create flashcards for studying”
  • “Generate a video explainer”
  • “Make an infographic”
  • “Create a mind map of the concepts”
  • “Download the quiz as markdown”
  • “Add these sources to NotebookLM”

Autonomy Rules

Run automatically (no confirmation):

  • notebooklm status – check context
  • notebooklm auth check – diagnose auth issues
  • notebooklm list – list notebooks
  • notebooklm source list – list sources
  • notebooklm artifact list – list artifacts
  • notebooklm language list – list supported languages
  • notebooklm language get – get current language
  • notebooklm language set – set language (global setting)
  • notebooklm artifact wait – wait for artifact completion (in subagent context)
  • notebooklm source wait – wait for source processing (in subagent context)
  • notebooklm research status – check research status
  • notebooklm research wait – wait for research (in subagent context)
  • notebooklm use <id> – set context (⚠️ SINGLE-AGENT ONLY – use -n flag in parallel workflows)
  • notebooklm create – create notebook
  • notebooklm ask "..." – chat queries
  • notebooklm source add – add sources

Ask before running:

  • notebooklm delete – destructive
  • notebooklm generate * – long-running, may fail
  • notebooklm download * – writes to filesystem
  • notebooklm artifact wait – long-running (when in main conversation)
  • notebooklm source wait – long-running (when in main conversation)
  • notebooklm research wait – long-running (when in main conversation)

Quick Reference

Task Command
Authenticate notebooklm login
Diagnose auth issues notebooklm auth check
Diagnose auth (full) notebooklm auth check --test
List notebooks notebooklm list
Create notebook notebooklm create "Title"
Set context notebooklm use <notebook_id>
Show context notebooklm status
Add URL source notebooklm source add "https://..."
Add file notebooklm source add ./file.pdf
Add YouTube notebooklm source add "https://youtube.com/..."
List sources notebooklm source list
Wait for source processing notebooklm source wait <source_id>
Web research (fast) notebooklm source add-research "query"
Web research (deep) notebooklm source add-research "query" --mode deep --no-wait
Check research status notebooklm research status
Wait for research notebooklm research wait --import-all
Chat notebooklm ask "question"
Chat (new conversation) notebooklm ask "question" --new
Chat (specific sources) notebooklm ask "question" -s src_id1 -s src_id2
Chat (with references) notebooklm ask "question" --json
Get source fulltext notebooklm source fulltext <source_id>
Get source guide notebooklm source guide <source_id>
Generate podcast notebooklm generate audio "instructions"
Generate podcast (JSON) notebooklm generate audio --json
Generate podcast (specific sources) notebooklm generate audio -s src_id1 -s src_id2
Generate video notebooklm generate video "instructions"
Generate quiz notebooklm generate quiz
Check artifact status notebooklm artifact list
Wait for completion notebooklm artifact wait <artifact_id>
Download audio notebooklm download audio ./output.mp3
Download video notebooklm download video ./output.mp4
Download report notebooklm download report ./report.md
Download mind map notebooklm download mind-map ./map.json
Download data table notebooklm download data-table ./data.csv
Download quiz notebooklm download quiz quiz.json
Download quiz (markdown) notebooklm download quiz --format markdown quiz.md
Download flashcards notebooklm download flashcards cards.json
Download flashcards (markdown) notebooklm download flashcards --format markdown cards.md
Delete notebook notebooklm notebook delete <id>
List languages notebooklm language list
Get language notebooklm language get
Set language notebooklm language set zh_Hans

Parallel safety: Use explicit notebook IDs in parallel workflows. Commands supporting -n shorthand: artifact wait, source wait, research wait/status, download *. Download commands also support -a/--artifact. Other commands use --notebook. For chat, use --new to start fresh conversations (avoids conversation ID conflicts).

Partial IDs: Use first 6+ characters of UUIDs. Must be unique prefix (fails if ambiguous). Works for: use, delete, wait commands. For automation, prefer full UUIDs to avoid ambiguity.
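
To obtain a full UUID for automation, look it up from the JSON listing (a sketch assuming jq is available; the notebook title is illustrative):

NOTEBOOK_ID=$(notebooklm list --json | jq -r '.notebooks[] | select(.title == "Research: AI") | .id')
notebooklm artifact list --notebook "$NOTEBOOK_ID" --json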

Command Output Formats

Commands with --json return structured data for parsing:

Create notebook:

$ notebooklm create "Research" --json
{"id": "abc123de-...", "title": "Research"}

Add source:

$ notebooklm source add "https://example.com" --json
{"source_id": "def456...", "title": "Example", "status": "processing"}

Generate artifact:

$ notebooklm generate audio "Focus on key points" --json
{"task_id": "xyz789...", "status": "pending"}

Chat with references:

$ notebooklm ask "What is X?" --json
{"answer": "X is... [1] [2]", "conversation_id": "...", "turn_number": 1, "is_follow_up": false, "references": [{"source_id": "abc123...", "citation_number": 1, "cited_text": "Relevant passage from source..."}, {"source_id": "def456...", "citation_number": 2, "cited_text": "Another passage..."}]}

Source fulltext (get indexed content):

$ notebooklm source fulltext <source_id> --json
{"source_id": "...", "title": "...", "char_count": 12345, "content": "Full indexed text..."}

Understanding citations: The cited_text in references is often a snippet or section header, not the full quoted passage. The start_char/end_char positions reference NotebookLM’s internal chunked index, not the raw fulltext. Use SourceFulltext.find_citation_context() to locate citations:

# 'client' is an authenticated notebooklm-py client; 'ref' is one entry from the
# references list of a --json chat response (see above).
fulltext = await client.sources.get_fulltext(notebook_id, ref.source_id)
matches = fulltext.find_citation_context(ref.cited_text)  # Returns list[(context, position)]
if matches:
    context, pos = matches[0]  # First match; check len(matches) > 1 for duplicates

Extract IDs: Parse the id, source_id, or task_id field from JSON output.
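
For example, with jq available (an assumption, not a CLI dependency), the IDs can be captured directly in a script:

NOTEBOOK_ID=$(notebooklm create "Research" --json | jq -r '.id')
SOURCE_ID=$(notebooklm source add "https://example.com" --json | jq -r '.source_id')
TASK_ID=$(notebooklm generate audio "Focus on key points" --json | jq -r '.task_id')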

Generation Types

All generate commands support:

  • -s, --source to use specific source(s) instead of all sources
  • --language to set output language (defaults to configured language or ‘en’)
  • --json for machine-readable output (returns task_id and status)
  • --retry N to automatically retry on rate limits with exponential backoff

Generation types:

  • Podcast: generate audio. Options: --format [deep-dive|brief|critique|debate], --length [short|default|long]. Download: .mp3
  • Video: generate video. Options: --format [explainer|brief], --style [auto|classic|whiteboard|kawaii|anime|watercolor|retro-print|heritage|paper-craft]. Download: .mp4
  • Slide Deck: generate slide-deck. Options: --format [detailed|presenter], --length [default|short]. Download: .pdf
  • Infographic: generate infographic. Options: --orientation [landscape|portrait|square], --detail [concise|standard|detailed]. Download: .png
  • Report: generate report. Options: --format [briefing-doc|study-guide|blog-post|custom]. Download: .md
  • Mind Map: generate mind-map. Runs synchronously (instant). Download: .json
  • Data Table: generate data-table. Requires a description argument. Download: .csv
  • Quiz: generate quiz. Options: --difficulty [easy|medium|hard], --quantity [fewer|standard|more]. Download: .json/.md/.html
  • Flashcards: generate flashcards. Options: --difficulty [easy|medium|hard], --quantity [fewer|standard|more]. Download: .json/.md/.html
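
For example, combining the options above in one invocation (the instruction text, language, and retry count are illustrative):

notebooklm generate audio "Focus on the key findings" --format deep-dive --length short --json --retry 3
notebooklm generate video "Summarize for beginners" --format explainer --style whiteboard --language ja --json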

Features Beyond the Web UI

These capabilities are available via CLI but not in NotebookLM’s web interface:

  • Batch downloads: download <type> --all (download all artifacts of a type at once)
  • Quiz/flashcard export: download quiz --format json (export as JSON, Markdown, or HTML; the web UI only shows the interactive view)
  • Mind map extraction: download mind-map (export hierarchical JSON for visualization tools)
  • Data table export: download data-table (download structured tables as CSV)
  • Source fulltext: source fulltext <id> (retrieve the indexed text content of any source)
  • Programmatic sharing: share commands (manage sharing permissions without the UI)

Common Workflows

Research to Podcast (Interactive)

Time: 5-10 minutes total. A scripted version of these steps follows the list.

  1. notebooklm create "Research: [topic]" — if fails: check auth with notebooklm login
  2. notebooklm source add for each URL/document — if one fails: log warning, continue with others
  3. Wait for sources: notebooklm source list --json until all status=READY — required before generation
  4. notebooklm generate audio "Focus on [specific angle]" (confirm when asked) — if rate limited: wait 5 min, retry once
  5. Note the artifact ID returned
  6. Check notebooklm artifact list later for status
  7. notebooklm download audio ./podcast.mp3 when complete (confirm when asked)
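
A minimal scripted version of the steps above (assumes jq; the notebook title, URL, and focus instructions are placeholders):

NB_ID=$(notebooklm create "Research: solar storage" --json | jq -r '.id')
notebooklm source add "https://example.com/report" --notebook "$NB_ID" --json
notebooklm source list --notebook "$NB_ID" --json   # repeat until every source shows status "ready"
notebooklm generate audio "Focus on cost trends" --notebook "$NB_ID" --json

# Later, once `notebooklm artifact list --notebook "$NB_ID"` shows the audio as completed:
notebooklm download audio ./podcast.mp3 -n "$NB_ID"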

Research to Podcast (Automated with Subagent)

Time: 5-10 minutes, but continues in background

When user wants full automation (generate and download when ready):

  1. Create notebook and add sources as usual
  2. Wait for sources to be ready (use source wait or check source list --json)
  3. Run notebooklm generate audio "..." --json → parse artifact_id from output
  4. Spawn a background agent using Task tool:
    Task(
      prompt="Wait for artifact {artifact_id} in notebook {notebook_id} to complete, then download.
              Use: notebooklm artifact wait {artifact_id} -n {notebook_id} --timeout 600
              Then: notebooklm download audio ./podcast.mp3 -a {artifact_id} -n {notebook_id}",
      subagent_type="general-purpose"
    )
    
  5. Main conversation continues while agent waits

Error handling in subagent:

  • If artifact wait returns exit code 2 (timeout): Report timeout, suggest checking artifact list
  • If download fails: Check if artifact status is COMPLETED first

Benefits: Non-blocking, user can do other work, automatic download on completion

Document Analysis

Time: 1-2 minutes

  1. notebooklm create "Analysis: [project]"
  2. notebooklm source add ./doc.pdf (or URLs)
  3. notebooklm ask "Summarize the key points"
  4. notebooklm ask "What are the main arguments?"
  5. Continue chatting as needed

Bulk Import

Time: Varies by source count

  1. notebooklm create "Collection: [name]"
  2. Add multiple sources:
    notebooklm source add "https://url1.com"
    notebooklm source add "https://url2.com"
    notebooklm source add ./local-file.pdf
    
  3. notebooklm source list to verify

Source limits: Limits vary by plan (Standard: 50, Plus: 100, Pro: 300, Ultra: 600 sources per notebook); see NotebookLM plans for details. The CLI does not enforce these limits; they are applied by your NotebookLM account. Supported source types: PDFs, YouTube URLs, web URLs, Google Docs, text files, Markdown, Word docs, audio files, video files, and images.
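
For larger imports, the add step can be scripted (a sketch; urls.txt is an assumed file with one URL per line):

while read -r url; do
  notebooklm source add "$url" --json || echo "warning: failed to add $url" >&2
done < urls.txt
notebooklm source list --json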

Bulk Import with Source Waiting (Subagent Pattern)

Time: Varies by source count

When adding multiple sources and needing to wait for processing before chat/generation:

  1. Add sources with --json to capture IDs:
    notebooklm source add "https://url1.com" --json  # → {"source_id": "abc..."}
    notebooklm source add "https://url2.com" --json  # → {"source_id": "def..."}
    
  2. Spawn a background agent to wait for all sources:
    Task(
      prompt="Wait for sources {source_ids} in notebook {notebook_id} to be ready.
              For each: notebooklm source wait {id} -n {notebook_id} --timeout 120
              Report when all ready or if any fail.",
      subagent_type="general-purpose"
    )
    
  3. Main conversation continues while agent waits
  4. Once sources are ready, proceed with chat or generation

Why wait for sources? Sources must be indexed before chat or generation; indexing typically takes 10-60 seconds per source.

Deep Web Research (Subagent Pattern)

Time: 2-5 minutes, runs in background

Deep research finds and analyzes web sources on a topic:

  1. Create notebook: notebooklm create "Research: [topic]"
  2. Start deep research (non-blocking):
    notebooklm source add-research "topic query" --mode deep --no-wait
    
  3. Spawn a background agent to wait and import:
    Task(
      prompt="Wait for research in notebook {notebook_id} to complete and import sources.
              Use: notebooklm research wait -n {notebook_id} --import-all --timeout 300
              Report how many sources were imported.",
      subagent_type="general-purpose"
    )
    
  4. Main conversation continues while agent waits
  5. When agent completes, sources are imported automatically

Alternative (blocking): For simple cases, omit --no-wait:

notebooklm source add-research "topic" --mode deep --import-all
# Blocks for up to 5 minutes

When to use each mode:

  • --mode fast: Specific topic, quick overview needed (5-10 sources, seconds)
  • --mode deep: Broad topic, comprehensive analysis needed (20+ sources, 2-5 min)

Research sources:

  • --from web: Search the web (default)
  • --from drive: Search Google Drive
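
For example, a quick pass over Drive material versus a deep, non-blocking web pass (queries are illustrative):

notebooklm source add-research "internal design docs on caching" --mode fast --from drive
notebooklm source add-research "state of the art in battery recycling" --mode deep --no-wait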

Output Style

Progress updates: Brief status for each step

  • “Creating notebook ‘Research: AI’…”
  • “Adding source: https://example.com…”
  • “Starting audio generation… (task ID: abc123)”

Fire-and-forget for long operations:

  • Start generation, return artifact ID immediately
  • Do NOT poll or wait in main conversation – generation takes 5-45 minutes (see timing table)
  • User checks status manually, OR use subagent with artifact wait

JSON output: Use --json flag for machine-readable output:

notebooklm list --json
notebooklm auth check --json
notebooklm source list --json
notebooklm artifact list --json

JSON schemas (key fields):

notebooklm list --json:

{"notebooks": [{"id": "...", "title": "...", "created_at": "..."}]}

notebooklm auth check --json:

{"checks": {"storage_exists": true, "json_valid": true, "cookies_present": true, "sid_cookie": true, "token_fetch": true}, "details": {"storage_path": "...", "auth_source": "file", "cookies_found": ["SID", "HSID", "..."], "cookie_domains": [".google.com"]}}

notebooklm source list --json:

{"sources": [{"id": "...", "title": "...", "status": "ready|processing|error"}]}

notebooklm artifact list --json:

{"artifacts": [{"id": "...", "title": "...", "type": "Audio Overview", "status": "in_progress|pending|completed|unknown"}]}

Status values:

  • Sources: processing → ready (or error)
  • Artifacts: pending or in_progress → completed (or unknown)

Error Handling

On failure, offer the user a choice:

  1. Retry the operation
  2. Skip and continue with something else
  3. Investigate the error

Error decision tree:

  • Auth/cookie error (session expired): run notebooklm auth check, then notebooklm login
  • “No notebook context” (context not set): pass -n <id> or --notebook <id> (parallel workflows), or run notebooklm use <id> (single-agent)
  • “No result found for RPC ID” (rate limiting): wait 5-10 minutes and retry
  • GENERATION_FAILED (Google rate limit): wait and retry later
  • Download fails (generation incomplete): check notebooklm artifact list for status
  • Invalid notebook/source ID (wrong ID): run notebooklm list to verify
  • RPC protocol error (Google changed its APIs): may require a CLI update

Exit Codes

All commands use consistent exit codes:

Code Meaning Action
0 Success Continue
1 Error (not found, processing failed) Check stderr, see Error Handling
2 Timeout (wait commands only) Extend timeout or check status manually

Examples (a handling sketch follows the list):

  • source wait returns 1 if source not found or processing failed
  • artifact wait returns 2 if timeout reached before completion
  • generate returns 1 if rate limited (check stderr for details)
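
A sketch of handling these codes around a wait command (ARTIFACT_ID and NOTEBOOK_ID are placeholders; the timeout is illustrative):

notebooklm artifact wait "$ARTIFACT_ID" -n "$NOTEBOOK_ID" --timeout 900
case $? in
  0) notebooklm download audio ./podcast.mp3 -a "$ARTIFACT_ID" -n "$NOTEBOOK_ID" ;;
  2) echo "Timed out; check: notebooklm artifact list" >&2 ;;
  *) echo "Wait failed; see stderr and the Error Handling section" >&2 ;;
esac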

Known Limitations

Rate limiting: Audio, video, quiz, flashcards, infographic, and slide deck generation may fail due to Google’s rate limits. This is an API limitation, not a bug.

Reliable operations: These always work:

  • Notebooks (list, create, delete, rename)
  • Sources (add, list, delete)
  • Chat/queries
  • Mind-map, study-guide, report, data-table generation

Unreliable operations: These may fail with rate limiting:

  • Audio (podcast) generation
  • Video generation
  • Quiz and flashcard generation
  • Infographic and slide deck generation

Workaround: If generation fails:

  1. Check status: notebooklm artifact list
  2. Retry after 5-10 minutes
  3. Use the NotebookLM web UI as fallback

Processing times vary significantly. Use the subagent pattern for long operations:

  • Source processing: 30 s to 10 min (suggested timeout: 600s)
  • Research (fast): 30 s to 2 min (timeout: 180s)
  • Research (deep): 15 to 30+ min (timeout: 1800s)
  • Notes: instant (no timeout needed)
  • Mind-map: instant, synchronous (no timeout needed)
  • Quiz, flashcards: 5 to 15 min (timeout: 900s)
  • Report, data-table: 5 to 15 min (timeout: 900s)
  • Audio generation: 10 to 20 min (timeout: 1200s)
  • Video generation: 15 to 45 min (timeout: 2700s)

Polling intervals: When checking status manually, poll every 15-30 seconds to avoid excessive API calls.
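
If you do poll manually, a sketch with jq (an assumption) that respects that interval:

# Loop while the artifact is still pending or in_progress, checking every 20 seconds
while notebooklm artifact list --notebook "$NOTEBOOK_ID" --json \
      | jq -e --arg id "$ARTIFACT_ID" '.artifacts[] | select(.id == $id and (.status == "pending" or .status == "in_progress"))' > /dev/null; do
  sleep 20
done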

Language Configuration

Language setting controls the output language for generated artifacts (audio, video, etc.).

Important: Language is a GLOBAL setting that affects all notebooks in your account.

# List all 80+ supported languages with native names
notebooklm language list

# Show current language setting
notebooklm language get

# Set language for artifact generation
notebooklm language set zh_Hans  # Simplified Chinese
notebooklm language set ja       # Japanese
notebooklm language set en       # English (default)

Common language codes:

Code Language
en English
zh_Hans 中文(简体) – Simplified Chinese
zh_Hant 中文(繁體) – Traditional Chinese
ja 日本語 – Japanese
ko 한국어 – Korean
es Español – Spanish
fr Français – French
de Deutsch – German
pt_BR Português (Brasil)

Override per command: Use --language flag on generate commands:

notebooklm generate audio --language ja   # Japanese podcast
notebooklm generate video --language zh_Hans  # Chinese video

Offline mode: Use --local flag to skip server sync:

notebooklm language set zh_Hans --local  # Save locally only
notebooklm language get --local  # Read local config only

Troubleshooting

notebooklm --help              # Main commands
notebooklm auth check          # Diagnose auth issues
notebooklm auth check --test   # Full auth validation with network test
notebooklm notebook --help     # Notebook management
notebooklm source --help       # Source management
notebooklm research --help     # Research status/wait
notebooklm generate --help     # Content generation
notebooklm artifact --help     # Artifact management
notebooklm download --help     # Download content
notebooklm language --help     # Language settings

Diagnose auth: notebooklm auth check (shows cookie domains, storage path, validation status)
Re-authenticate: notebooklm login
Check version: notebooklm --version
Update skill: notebooklm skill install