<div align="center">

# notebooklm-mcp-pro

**Production-grade Model Context Protocol server for Google NotebookLM**

Connect any MCP-capable client to Google NotebookLM. Works with Claude Desktop, Claude.ai, ChatGPT, Cursor, VS Code Continue, and any client that speaks MCP or OpenAPI.

Documentation · Quick Start · Tools · Integrations

</div>
## ✨ Features
- One Python package.
- One CLI.
- One server factory.
- Local stdio transport.
- Remote Streamable HTTP transport.
- Bearer token authentication.
- GitHub OAuth authentication.
- ChatGPT Custom Actions through OpenAPI 3.1.
- Plugin manifest at `/.well-known/ai-plugin.json`.
- OAuth metadata endpoints.
- Notebook tools.
- Source ingestion tools.
- Chat tools.
- Research tools.
- Artifact generation tools.
- Artifact lifecycle tools.
- Language tools.
- Admin tools.
- ChatGPT-compatible `search`.
- ChatGPT-compatible `fetch`.
- Typed Pydantic inputs.
- Typed output models where the MCP surface needs stable shapes.
- Tool safety annotations.
- Confirmation checks for destructive operations.
- SQLite task tracking.
- SQLite OAuth sessions.
- Structured logging through structlog.
- Offline unit tests.
- Subprocess stdio integration test.
- HTTP auth tests.
- OpenAPI tests.
- Coverage gate.
- Ruff formatting.
- Strict mypy.
- MkDocs Material documentation.
- Docker image.
- Docker Compose template.
- Railway template.
- Fly.io template.
- Kubernetes manifests.
- Release workflow with wheel, sdist, SBOM, Sigstore bundles, PyPI, and GHCR publishing.
- OpenSSF Scorecard workflow with SARIF upload.
- ClusterFuzzLite scheduled fuzzing for configuration-boundary validation.
- Digest-pinned Docker base images.
- SHA-pinned GitHub Actions.
## Why this exists

Google NotebookLM is useful for research notebooks, source-grounded chat, study material, and artifact generation, but MCP clients need a stable programmatic bridge to reach it. notebooklm-mcp-pro provides that bridge: it exposes NotebookLM actions as MCP tools, NotebookLM records as MCP resources, and workflow starters as MCP prompts. It also exposes an OpenAPI action surface for clients that integrate through HTTP schemas.
## 📦 Installation

### uv

```bash
uv tool install notebooklm-mcp-pro
nlm-mcp --version
```

### pip

```bash
python -m pip install --upgrade notebooklm-mcp-pro
nlm-mcp --version
```

### pipx

```bash
pipx install notebooklm-mcp-pro
nlm-mcp --version
```

### Full optional install

```bash
python -m pip install "notebooklm-mcp-pro[all]"
```

### From source

```bash
git clone https://github.com/oaslananka/notebooklm-mcp-pro
cd notebooklm-mcp-pro
make bootstrap
make test
```

### NotebookLM login

Run the NotebookLM browser login once:

```bash
notebooklm-py login
```

The default auth file is `~/.config/nlm-mcp/notebooklm_auth.json`. Override it with:

```bash
export NLM_MCP_NOTEBOOKLM_AUTH_FILE=/secure/path/notebooklm_auth.json
```

For containers, pass the auth state inline:

```bash
export NLM_MCP_NOTEBOOKLM_AUTH_JSON='{"cookies":[],"origins":[]}'
```

Treat this JSON as a secret.
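For container deployments it can help to sanity-check the inline auth JSON before exporting it. A minimal sketch, assuming only the two top-level keys shown in the example above (the full schema is whatever `notebooklm-py login` writes):

```python
import json
import os


def validate_auth_json(raw: str) -> dict:
    """Parse inline auth JSON and check the expected top-level list fields."""
    state = json.loads(raw)
    for key in ("cookies", "origins"):
        if not isinstance(state.get(key), list):
            raise ValueError(f"auth JSON missing list field: {key!r}")
    return state


raw = os.environ.get("NLM_MCP_NOTEBOOKLM_AUTH_JSON", '{"cookies": [], "origins": []}')
print(len(validate_auth_json(raw)["cookies"]), "cookies loaded")
```

A check like this fails fast at container start instead of surfacing as an opaque NotebookLM auth error later.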
## 🚀 Quick Start

### Local stdio

```bash
pip install notebooklm-mcp-pro
notebooklm-py login
nlm-mcp stdio
```

Use this mode for local desktop clients. It does not add an HTTP auth layer; the caller process controls access.

### Remote HTTP with bearer token

```bash
export NLM_MCP_TRANSPORT=http
export NLM_MCP_AUTH_MODE=token
export NLM_MCP_BEARER_TOKEN="$(python -c 'import secrets; print(secrets.token_urlsafe(32))')"
export NLM_MCP_BASE_URL=https://your-server.example.com
nlm-mcp serve --host 0.0.0.0 --port 8080
```

Test:

```bash
curl https://your-server.example.com/healthz
curl -H "Authorization: Bearer $NLM_MCP_BEARER_TOKEN" \
  https://your-server.example.com/mcp
```

### Remote HTTP with GitHub OAuth

```bash
export NLM_MCP_TRANSPORT=http
export NLM_MCP_AUTH_MODE=github-oauth
export NLM_MCP_BASE_URL=https://your-server.example.com
export NLM_MCP_GITHUB_CLIENT_ID=your-client-id
export NLM_MCP_GITHUB_CLIENT_SECRET=your-client-secret
export NLM_MCP_OAUTH_ALLOWED_USERS=oaslananka
nlm-mcp serve --host 0.0.0.0 --port 8080
```

Users start at `https://your-server.example.com/auth/login`.
## 🔌 Integrations

### Claude Desktop

Add this to the desktop config file.

- macOS: `~/Library/Application Support/Claude/claude_desktop_config.json`
- Windows: `%APPDATA%\Claude\claude_desktop_config.json`
- Linux: `~/.config/Claude/claude_desktop_config.json`

Config:

```json
{
  "mcpServers": {
    "notebooklm": {
      "command": "nlm-mcp",
      "args": ["stdio"],
      "env": {
        "NLM_MCP_LOG_LEVEL": "WARNING"
      }
    }
  }
}
```

With uvx:

```json
{
  "mcpServers": {
    "notebooklm": {
      "command": "uvx",
      "args": ["notebooklm-mcp-pro", "stdio"]
    }
  }
}
```

### Claude.ai Web

Deploy the HTTP server with a public HTTPS URL and connect to:

```
https://your-server.example.com/mcp
```

Choose bearer token or OAuth based on server configuration. Run `admin.health` to verify.
### ChatGPT Custom Actions

Deploy the HTTP server, then import the schema:

```
https://your-server.example.com/openapi.json
```

Set authentication to bearer token when `NLM_MCP_AUTH_MODE=token`. The action endpoints are `POST /tools/{tool_name}`, and the manifest is served at `GET /.well-known/ai-plugin.json`.
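To exercise an action endpoint directly, the request can be sketched with the standard library. The `POST /tools/{tool_name}` path and bearer header come from this README; the JSON-object body for tool arguments is an assumption, and `notebook.list` is one of the tools cataloged below:

```python
import json
import urllib.request

BASE_URL = "https://your-server.example.com"  # your deployment
TOKEN = "replace-with-generated-token"        # NLM_MCP_BEARER_TOKEN


def build_tool_request(tool_name: str, arguments: dict) -> urllib.request.Request:
    """Build a POST /tools/{tool_name} call; arguments-as-JSON-body is assumed."""
    return urllib.request.Request(
        url=f"{BASE_URL}/tools/{tool_name}",
        data=json.dumps(arguments).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {TOKEN}",
            "Content-Type": "application/json",
        },
        method="POST",
    )


req = build_tool_request("notebook.list", {})
print(req.full_url)  # https://your-server.example.com/tools/notebook.list
```

Passing `req` to `urllib.request.urlopen` would send the call against a live deployment.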
### Cursor

Use the same local stdio config shape:

```json
{
  "mcpServers": {
    "notebooklm": {
      "command": "nlm-mcp",
      "args": ["stdio"]
    }
  }
}
```

### VS Code Continue

Use local stdio or remote HTTP depending on your Continue configuration.

- Local command: `nlm-mcp stdio`
- Remote endpoint: `https://your-server.example.com/mcp`
## 🛠️ Tools

### Notebook tools

| Tool | Purpose | Safety |
|---|---|---|
| `notebook.list` | List notebooks | read-only |
| `notebook.create` | Create a notebook | mutating |
| `notebook.get` | Get notebook metadata | read-only |
| `notebook.rename` | Rename a notebook | idempotent |
| `notebook.delete` | Delete a notebook | destructive, confirmation required |
| `notebook.share_public` | Toggle public sharing | destructive, confirmation required when enabling |
| `notebook.share_invite` | Invite collaborator | mutating, confirmation required |
| `notebook.share_status` | Read sharing settings | read-only |
### Source tools

| Tool | Purpose | Safety |
|---|---|---|
| `source.add_url` | Add a web URL | mutating |
| `source.add_youtube` | Add a YouTube video | mutating |
| `source.add_file` | Upload a local file | mutating |
| `source.add_gdrive` | Add a Google Drive document | mutating |
| `source.add_text` | Add pasted text | mutating |
| `source.list` | List sources | read-only |
| `source.get` | Get source metadata | read-only |
| `source.get_fulltext` | Get indexed text | read-only |
| `source.refresh` | Re-index a source | idempotent |
| `source.wait` | Wait for indexing | read-only, blocking |
| `source.remove` | Remove a source | destructive, confirmation required |
### Chat tools

| Tool | Purpose |
|---|---|
| `chat.ask` | Ask a one-shot question |
| `chat.query` | OpenAPI alias for asking |
| `chat.stream_query` | Stream-oriented alias returning a completed result |
| `chat.conversation_start` | Start or identify a conversation |
| `chat.continue` | Continue a conversation |
| `chat.history` | Read conversation history |
| `chat.save_to_notes` | Save content as a note |
| `chat.save_note` | Alias for note save |
| `chat.list_notes` | List notes |
### Research tools

| Tool | Purpose |
|---|---|
| `research.web_start` | Start web research |
| `research.drive_start` | Start Drive research |
| `research.status` | Poll research status |
| `research.wait` | Wait for research and optionally import sources |
### Generation tools

| Tool | Output |
|---|---|
| `generate.audio_overview` | Audio overview |
| `generate.video_overview` | Video overview |
| `generate.cinematic_video` | Cinematic video |
| `generate.slide_deck` | Slide deck |
| `generate.infographic` | Infographic |
| `generate.quiz` | Quiz |
| `generate.flashcards` | Flashcards |
| `generate.report` | Report |
| `generate.data_table` | Data table |
| `generate.mind_map` | Mind map |
### Artifact lifecycle tools

| Tool | Purpose |
|---|---|
| `artifact.list` | List artifacts and tracked tasks |
| `artifact.status` | Poll task status |
| `artifact.wait` | Wait for task completion |
| `artifact.download` | Download an artifact |
| `artifact.delete` | Delete an artifact when supported |
| `artifact.cancel` | Cancel a task when supported |
| `artifact.revise_slide` | Revise one slide |
### Language tools

| Tool | Purpose |
|---|---|
| `language.list` | List supported languages |
| `language.get` | Read current output language |
| `language.set` | Set account-global output language |
### Compatibility tools

| Tool | Purpose |
|---|---|
| `search` | Return matching record IDs |
| `fetch` | Return full record by ID |
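`search` and `fetch` follow the two-step ChatGPT connector contract: `search` narrows to IDs, `fetch` hydrates one record. An in-memory illustration of that contract (the record data here is invented; the server's real records are notebooks, sources, and notes):

```python
# Invented sample records keyed by ID.
RECORDS = {
    "nb-1": {"id": "nb-1", "title": "Thesis notes", "text": "Sources on MCP transports."},
    "nb-2": {"id": "nb-2", "title": "Reading list", "text": "Papers to skim."},
}


def search(query: str) -> list[str]:
    """Return IDs of records whose title or text mentions the query."""
    q = query.lower()
    return [
        rid
        for rid, rec in RECORDS.items()
        if q in rec["title"].lower() or q in rec["text"].lower()
    ]


def fetch(record_id: str) -> dict:
    """Return the full record for one ID obtained from a prior search."""
    return RECORDS[record_id]


ids = search("mcp")
print(ids)                     # ['nb-1']
print(fetch(ids[0])["title"])  # Thesis notes
```

The split keeps `search` responses small and defers full payloads until the client asks for a specific ID.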
### Admin tools

| Tool | Purpose |
|---|---|
| `admin.health` | Server health |
| `admin.version` | Package and runtime version |
## ⚙️ Configuration

| Variable | Default | Description |
|---|---|---|
| `NLM_MCP_TRANSPORT` | `stdio` | `stdio` or `http` |
| `NLM_MCP_HTTP_HOST` | `0.0.0.0` | HTTP bind host |
| `NLM_MCP_HTTP_PORT` | `8080` | HTTP bind port |
| `NLM_MCP_HTTP_PATH` | `/mcp` | MCP endpoint path |
| `NLM_MCP_BASE_URL` | unset | Public URL |
| `NLM_MCP_AUTH_MODE` | `none` | `none`, `token`, or `github-oauth` |
| `NLM_MCP_BEARER_TOKEN` | unset | Token auth secret |
| `NLM_MCP_GITHUB_CLIENT_ID` | unset | OAuth client ID |
| `NLM_MCP_GITHUB_CLIENT_SECRET` | unset | OAuth client secret |
| `NLM_MCP_OAUTH_ALLOWED_USERS` | unset | GitHub username allowlist |
| `NLM_MCP_NOTEBOOKLM_AUTH_FILE` | `~/.config/nlm-mcp/notebooklm_auth.json` | NotebookLM auth file |
| `NLM_MCP_NOTEBOOKLM_AUTH_JSON` | unset | Inline NotebookLM auth JSON |
| `NLM_MCP_DATA_DIR` | `~/.local/share/nlm-mcp` | Runtime data directory |
| `NLM_MCP_LOG_LEVEL` | `INFO` | Log level |
| `NLM_MCP_LOG_FORMAT` | `json` | `json` or `console` |

See Configuration for the full table.
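The `NLM_MCP_*` variables above map naturally onto a typed settings object. The package's actual settings loader is internal; a stdlib sketch of the env-prefix convention, covering a subset of the table:

```python
import os
from dataclasses import dataclass


@dataclass
class NlmMcpSettings:
    """Illustrative subset of the configuration table above."""

    transport: str = "stdio"   # stdio or http
    http_host: str = "0.0.0.0"
    http_port: int = 8080
    auth_mode: str = "none"    # none, token, or github-oauth

    @classmethod
    def from_env(cls, prefix: str = "NLM_MCP_") -> "NlmMcpSettings":
        """Read NLM_MCP_* variables, falling back to the defaults above."""
        casters = {"transport": str, "http_host": str, "http_port": int, "auth_mode": str}
        kwargs = {}
        for field_name, caster in casters.items():
            raw = os.environ.get(prefix + field_name.upper())
            if raw is not None:
                kwargs[field_name] = caster(raw)
        return cls(**kwargs)


os.environ["NLM_MCP_HTTP_PORT"] = "9090"
print(NlmMcpSettings.from_env().http_port)  # 9090
```

Unset variables fall back to defaults, so a bare `NlmMcpSettings.from_env()` yields the stdio configuration used by desktop clients.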
## 🐳 Docker

### Build

```bash
docker build -f deploy/Dockerfile -t notebooklm-mcp-pro:dev .
```

### Run

```bash
docker run --rm -p 8080:8080 \
  -e NLM_MCP_TRANSPORT=http \
  -e NLM_MCP_AUTH_MODE=token \
  -e NLM_MCP_BEARER_TOKEN=replace-with-generated-token \
  -e NLM_MCP_BASE_URL=http://localhost:8080 \
  notebooklm-mcp-pro:dev
```

### Compose

```bash
docker compose -f deploy/docker-compose.yml up --build
```

### Pull

```bash
docker pull ghcr.io/oaslananka/notebooklm-mcp-pro:latest
```
## HTTP Endpoints

| Endpoint | Purpose | Auth |
|---|---|---|
| `GET /healthz` | Health check | exempt |
| `GET /openapi.json` | OpenAPI schema | exempt |
| `GET /.well-known/ai-plugin.json` | Plugin manifest | exempt |
| `GET /.well-known/oauth-protected-resource` | OAuth resource metadata | exempt |
| `GET /.well-known/oauth-authorization-server` | OAuth server metadata | exempt |
| `GET /auth/login` | GitHub OAuth login | exempt |
| `GET /auth/callback` | GitHub OAuth callback | exempt |
| `POST /tools/{tool_name}` | OpenAPI tool action | authenticated |
| `/mcp` | Streamable HTTP MCP endpoint | authenticated |
## Architecture

```mermaid
flowchart TB
    Desktop["Desktop MCP client"] --> Stdio["stdio transport"]
    Remote["Remote MCP/OpenAPI client"] --> HTTP["Streamable HTTP"]
    HTTP --> Auth["Auth middleware"]
    Stdio --> Server["FastMCP server"]
    Auth --> Server
    Server --> NotebookTools["Notebook tools"]
    Server --> SourceTools["Source tools"]
    Server --> ArtifactTools["Artifact tools"]
    Server --> Resources["MCP resources"]
    Server --> Prompts["MCP prompts"]
    NotebookTools --> Backend["NotebookLMBackend"]
    SourceTools --> Backend
    ArtifactTools --> Backend
    Backend --> NLM["notebooklm-py"]
    NLM --> Google["Google NotebookLM"]
    Server --> Store["SQLite task and OAuth store"]
```
## 🔒 Security

- Do not expose HTTP mode publicly without auth.
- Use bearer tokens for personal deployments.
- Use GitHub OAuth for multi-user deployments.
- Store NotebookLM auth JSON in a secret manager.
- Mount auth files read-only in containers.
- Keep `NLM_MCP_BASE_URL` on HTTPS for OAuth.
- Artifact downloads are constrained to the artifacts directory.
- Destructive tools require explicit confirmation.
- CI runs lint, typecheck, tests, dependency audit, static analysis, and secret scanning.
- OpenSSF Scorecard runs on `main` and publishes SARIF.
- ClusterFuzzLite runs scheduled Atheris fuzzing for settings validation.
- GitHub Actions use top-level read-only permissions and job-level write scopes.
- Release assets are signed with Sigstore bundles.
- Docker build images are pinned by digest.

See Security.
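The confirmation requirement for destructive tools follows a common MCP pattern: the tool takes an explicit flag and refuses to act without it. The server's actual guard is internal, so treat this as an illustrative sketch with a hypothetical `delete_notebook` helper:

```python
class ConfirmationRequired(Exception):
    """Raised when a destructive tool is called without explicit confirmation."""


def delete_notebook(notebook_id: str, confirm: bool = False) -> str:
    """Hypothetical destructive tool body demonstrating the confirm-flag guard."""
    if not confirm:
        # Force the caller (or the LLM driving it) to opt in explicitly.
        raise ConfirmationRequired(
            f"Refusing to delete {notebook_id!r}: pass confirm=True to proceed."
        )
    # ... actual deletion would happen here ...
    return f"deleted {notebook_id}"
```

The guard makes accidental one-shot deletions from an LLM client impossible: the first call fails with an actionable message, and only a deliberate second call with `confirm=True` mutates anything.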
## Development

```bash
make bootstrap
make lint
make typecheck
make test
make docs
```

Generate the catalog:

```bash
make catalog
```

Run the HTTP server:

```bash
make run-http
```

Run the stdio server:

```bash
make run-stdio
```
## Release

Releases are cut from tags:

```bash
git tag v1.0.0
git push origin v1.0.0
```

The release workflow validates the tag, builds distributions, generates an SBOM, signs release assets with Sigstore, publishes to PyPI, pushes GHCR images, and creates a GitHub release.
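The tag-validation step can be pictured as a shape check on the pushed ref. The workflow's actual rule lives in CI, so this is an illustrative sketch that accepts plain `v<major>.<minor>.<patch>` tags only:

```python
import re

# e.g. v1.0.0; pre-release suffixes are deliberately not accepted here.
TAG_RE = re.compile(r"^v\d+\.\d+\.\d+$")


def is_release_tag(tag: str) -> bool:
    """True for tags shaped like v<major>.<minor>.<patch>."""
    return TAG_RE.fullmatch(tag) is not None


print(is_release_tag("v1.0.0"))  # True
print(is_release_tag("1.0.0"))   # False
```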
## Roadmap
Planned follow-up work:
- Additional OAuth providers.
- Shared Redis-backed rate limiting.
- Hosted UI widgets for richer artifact previews.
- More recorded NotebookLM fixtures.
- More deployment blueprints.
See docs/ROADMAP.md.
## 🤝 Contributing

Contributions are welcome when they are scoped, tested, and documented. Read CONTRIBUTING.md.

Before opening a PR:

```bash
make lint
make typecheck
make test
make docs
```

Use Conventional Commits.
## 📄 License

MIT License. See LICENSE.