🧠 Digital Brain MCP
A Second Brain powered by Model Context Protocol (MCP), Google Gemini Embedding 2, and Supabase pgvector, deployed on Vercel.
Connect any MCP-compatible AI client (Claude, Cursor, OpenCode, Copilot, etc.) and give it persistent long-term memory. Store notes, code, research, decisions, and any other knowledge, then recall it instantly with semantic search.
Architecture
AI Client (Claude / Cursor / OpenCode / Copilot)
        │
        ▼  MCP Protocol (Streamable HTTP + SSE)
        │  Authorization: Bearer <api-key>
┌──────────────────────────────┐
│       Vercel (Next.js)       │
│     /api/mcp/[transport]     │
│                              │
│  ┌── Auth Middleware ─────┐  │
│  │  Bearer token check    │  │
│  └────────────────────────┘  │
│                              │
│  Tools:                      │
│   • store_memory             │
│   • search_memory            │
│   • list_memories            │
│   • update_memory            │
│   • delete_memory            │
│   • get_stats                │
└──────────┬───────────────────┘
           │
     ┌─────┴──────┐
     ▼            ▼
┌──────────┐  ┌──────────────┐
│  Gemini  │  │   Supabase   │
│  Embed 2 │  │  PostgreSQL  │
│   API    │  │  + pgvector  │
└──────────┘  │ vector(768)  │
              └──────────────┘
How It Works
1. You say (in Claude/Cursor/etc.): "Remember that the EBR system uses Azure Functions for the API layer"
2. The MCP client calls your Digital Brain's `store_memory` tool
3. Gemini Embedding 2 converts the text into a 768-dimension vector
4. Supabase stores the text + vector in PostgreSQL with pgvector
5. Later, you ask: "What tech does the EBR system use?"
6. `search_memory` embeds your query, runs a cosine similarity search, and returns the matching memory
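The recall step above boils down to comparing vectors by cosine similarity. A minimal sketch of that math (the real server delegates this to pgvector's cosine-distance operator; the function name here is illustrative, not the project's actual code):

```typescript
// Cosine similarity between two equal-length vectors:
// dot(a, b) / (|a| * |b|). Scores near 1 mean "very similar".
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// A query vector close to a stored memory's vector scores near 1;
// an unrelated vector scores near 0.
console.log(cosineSimilarity([1, 0, 0], [1, 0, 0])); // 1
console.log(cosineSimilarity([1, 0, 0], [0, 1, 0])); // 0
```

This is why the `threshold` parameter on `search_memory` is a 0–1 value: it is a floor on this similarity score.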
Security Model
The server uses Bearer token authentication on every request:
- Fail-closed: if no API keys are configured, ALL requests are rejected
- Multi-key support: set multiple comma-separated keys in `DIGITAL_BRAIN_API_KEYS` so each client gets its own key (and you can rotate them independently)
- Row Level Security (RLS): enabled on the Supabase `memories` table, so only `service_role` can access data; the anon key has zero access
- Service Role Key: only stored server-side in Vercel env vars, never exposed to clients
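The fail-closed, multi-key check can be sketched as a small pure function. This is illustrative, not the project's actual middleware (which lives in `src/lib/auth.ts`):

```typescript
// Validate an incoming Authorization header against the comma-separated
// key list from the DIGITAL_BRAIN_API_KEYS environment variable.
function isAuthorized(authHeader: string | null, configuredKeys: string): boolean {
  const keys = configuredKeys
    .split(",")
    .map((k) => k.trim())
    .filter((k) => k.length > 0);
  // Fail closed: no configured keys means every request is rejected.
  if (keys.length === 0) return false;
  // Require the "Bearer <token>" scheme.
  if (!authHeader || !authHeader.startsWith("Bearer ")) return false;
  const token = authHeader.slice("Bearer ".length).trim();
  return keys.includes(token);
}

console.log(isAuthorized("Bearer abc123", "abc123,def456")); // true
console.log(isAuthorized("Bearer abc123", ""));              // false (fail closed)
console.log(isAuthorized(null, "abc123"));                   // false (no header)
```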
Generating API Keys
# Generate a strong 256-bit key
openssl rand -hex 32
Tech Stack
| Component | Technology | Purpose |
|---|---|---|
| Embeddings | Gemini Embedding 2 (`gemini-embedding-2-preview`) | Multimodal embeddings: text, images, audio, video, and PDF all in one vector space |
| Vector DB | Supabase + pgvector | PostgreSQL with vector similarity search (HNSW index, cosine distance) |
| MCP Server | Next.js + `mcp-handler` | Exposes tools via MCP protocol with SSE transport |
| Hosting | Vercel | Serverless deployment, auto-scaling, scale-to-zero |
| Session Store | Upstash Redis (via Vercel KV) | Redis-backed SSE session management |
| Auth | Bearer token middleware | API key validation on every request |
Why 768 dimensions?
Gemini Embedding 2 outputs 3072 dimensions by default but supports Matryoshka Representation Learning (MRL): you can truncate to 768 dimensions with minimal quality loss. This saves ~75% storage and makes queries significantly faster, which matters a lot more for a personal knowledge base than that last fraction of accuracy.
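MRL truncation amounts to keeping the leading dimensions and re-normalizing, so cosine similarity still behaves sensibly. A sketch of that operation (the function name is an assumption for illustration, not the project's API):

```typescript
// Keep the first `dims` components of a larger embedding, then
// re-normalize to unit length so cosine similarity stays meaningful.
function truncateEmbedding(vec: number[], dims: number): number[] {
  const head = vec.slice(0, dims);
  const norm = Math.sqrt(head.reduce((sum, x) => sum + x * x, 0));
  return norm === 0 ? head : head.map((x) => x / norm);
}

// e.g. a 3072-d Gemini embedding reduced to the 768-d stored in pgvector
const full = Array.from({ length: 3072 }, (_, i) => Math.sin(i + 1));
const small = truncateEmbedding(full, 768);
console.log(small.length); // 768
```

In practice the server can simply request 768-dimension output from the API, but the effect is the same: three quarters of the storage disappears while the leading dimensions carry most of the signal.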
MCP Tools Reference
store_memory
Save a new piece of knowledge to the Digital Brain.
| Parameter | Type | Required | Description |
|---|---|---|---|
| `content` | string | ✓ | The text content to store |
| `source` | string | | Where it came from (e.g. "conversation", "web-research", a URL) |
| `tags` | string[] | | Tags for categorization (e.g. `["work", "azure", "ebr"]`) |
| `content_type` | enum | | One of `text`, `note`, `code`, `conversation`, `research`, `decision`, `reference` |
| `metadata` | object | | Arbitrary structured metadata |
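To make the parameter shapes concrete, here is a hedged example of the arguments an MCP client might send to `store_memory`, with a small client-side validity check. The TypeScript types and validator are illustrative; only the field names and the `content_type` enum come from the table above:

```typescript
// The content_type enum from the store_memory parameter table.
const CONTENT_TYPES = [
  "text", "note", "code", "conversation", "research", "decision", "reference",
] as const;
type ContentType = (typeof CONTENT_TYPES)[number];

// Shape of the tool arguments, per the parameter table.
interface StoreMemoryArgs {
  content: string;                    // required
  source?: string;
  tags?: string[];
  content_type?: ContentType;
  metadata?: Record<string, unknown>;
}

// Minimal sanity check: content must be non-empty, and content_type
// (if given) must be one of the allowed enum values.
function isValidStoreArgs(args: StoreMemoryArgs): boolean {
  if (!args.content || args.content.trim() === "") return false;
  if (args.content_type !== undefined && !CONTENT_TYPES.includes(args.content_type)) return false;
  return true;
}

const example: StoreMemoryArgs = {
  content: "The EBR system uses Azure Functions for the API layer",
  source: "conversation",
  tags: ["work", "azure", "ebr"],
  content_type: "note",
};
console.log(isValidStoreArgs(example)); // true
```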
search_memory
Semantic search across everything stored. Your query is embedded and matched by cosine similarity.
| Parameter | Type | Required | Description |
|---|---|---|---|
| `query` | string | ✓ | Natural language search query |
| `limit` | number | | Max results (default 10, max 50) |
| `threshold` | number | | Minimum similarity 0–1 (default 0.4) |
| `filter_tags` | string[] | | Only return memories with at least one matching tag |
list_memories
Browse memories with optional filters (no embedding needed).
| Parameter | Type | Required | Description |
|---|---|---|---|
| `content_type` | string | | Filter by type |
| `tags` | string[] | | Filter by tags |
| `limit` | number | | Max results (default 20, max 100) |
| `offset` | number | | Pagination offset |
update_memory
Modify an existing memory. If content changes, a new embedding is generated automatically.
| Parameter | Type | Required | Description |
|---|---|---|---|
| `id` | number | ✓ | Memory ID (from search/list results) |
| `content` | string | | New content (re-embeds automatically) |
| `tags` | string[] | | Replace tags |
| `source` | string | | Update source |
| `metadata` | object | | Replace metadata |
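The "re-embeds automatically" behavior reduces to one decision: only regenerate the embedding when the `content` field actually changes. A sketch of that logic under assumed names (the real handler lives in the MCP route, not in this helper):

```typescript
// Partial update payload, mirroring the update_memory parameter table.
interface UpdatePatch {
  content?: string;
  tags?: string[];
  source?: string;
  metadata?: Record<string, unknown>;
}

// Embeddings are only recomputed when content is present in the patch
// AND differs from what is already stored; tag/source/metadata-only
// updates skip the Gemini API call entirely.
function needsReembedding(existingContent: string, patch: UpdatePatch): boolean {
  return patch.content !== undefined && patch.content !== existingContent;
}

console.log(needsReembedding("old text", { content: "new text" })); // true
console.log(needsReembedding("old text", { tags: ["azure"] }));     // false
```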
delete_memory
Permanently remove a memory by ID.
| Parameter | Type | Required | Description |
|---|---|---|---|
| `id` | number | ✓ | Memory ID to delete |
get_stats
Get brain statistics: total count, breakdown by type, and top tags.
No parameters.
Setup Guide
Prerequisites
- Node.js 18+
- A Supabase account (free tier works)
- A Google AI Studio API key (free tier)
- A Vercel account (free Hobby plan works)
Step 1: Clone the Repo
git clone https://github.com/YOUR_USERNAME/digital-brain-mcp.git
cd digital-brain-mcp
npm install
Step 2: Set Up Supabase
- Create a new Supabase project (or use an existing one)
- Go to SQL Editor in the Supabase dashboard
- Copy the contents of `supabase/migrations/001_create_memories.sql`
- Paste and run the entire SQL script
- This creates: the `memories` table, the pgvector extension, the HNSW index, search functions, RLS policies, and stat helpers

Get your credentials from Supabase → Settings → API:
- `SUPABASE_URL`: the Project URL
- `SUPABASE_SERVICE_ROLE_KEY`: the `service_role` secret (NOT the anon key)
Step 3: Get a Gemini API Key
- Go to Google AI Studio
- Create a new API key
- Save it as `GEMINI_API_KEY`
Step 4: Generate Your MCP API Key
openssl rand -hex 32
Save the output as `DIGITAL_BRAIN_API_KEYS`.
Step 5: Local Development
# Create .env.local with your keys
cp .env.example .env.local
# Edit .env.local with your actual values
# Start the dev server
npm run dev
The MCP endpoint will be at http://localhost:3000/api/mcp/sse.
Step 6: Deploy to Vercel
- Push the repo to GitHub
- Import the project in Vercel
- Set environment variables in the Vercel dashboard:
  - `DIGITAL_BRAIN_API_KEYS`: your generated key(s)
  - `GEMINI_API_KEY`: your Google AI key
  - `SUPABASE_URL`: your Supabase project URL
  - `SUPABASE_SERVICE_ROLE_KEY`: your Supabase service role key
- Create a KV (Redis) store: Vercel dashboard → Storage → Create KV Database (this auto-sets `REDIS_URL`)
- Set a firewall bypass for MCP: Settings → Security → Firewall → Add rule:
  - Condition: "Request path contains `/api/mcp`"
  - Action: "Bypass"
- Deploy!
Your production MCP endpoint: https://digital-brain-mcp.vercel.app/api/mcp/sse
Connecting AI Clients
Claude Desktop / Claude Code
Add to your Claude MCP config (~/.claude/claude_desktop_config.json or project .mcp.json):
{
"mcpServers": {
"digital-brain": {
"type": "stdio",
"command": "npx",
"args": [
"-y",
"mcp-remote",
"https://digital-brain-mcp.vercel.app/api/mcp/sse",
"--header",
"Authorization:Bearer YOUR_API_KEY_HERE"
]
}
}
}
Cursor
Go to Settings → Cursor Settings → Tools & MCP → Add Server:
- Type: SSE
- URL: `https://digital-brain-mcp.vercel.app/api/mcp/sse`
- Headers: `Authorization: Bearer YOUR_API_KEY_HERE`
OpenCode / Any MCP Client
Use the SSE endpoint https://digital-brain-mcp.vercel.app/api/mcp/sse with an Authorization: Bearer <key> header.
Perplexity / Computer
Connect via the MCP config pattern above, or access the Supabase database directly through an existing connector.
Project Structure
digital-brain-mcp/
├── src/
│   ├── app/
│   │   ├── api/
│   │   │   └── mcp/
│   │   │       └── [transport]/
│   │   │           └── route.ts     ← MCP endpoint (tools + auth)
│   │   ├── layout.tsx               ← Root layout
│   │   └── page.tsx                 ← Landing page
│   └── lib/
│       ├── auth.ts                  ← Bearer token authentication
│       ├── embeddings.ts            ← Gemini Embedding 2 client
│       └── supabase.ts              ← Supabase client + data helpers
├── supabase/
│   └── migrations/
│       └── 001_create_memories.sql  ← Full database schema
├── .env.example                     ← Template for environment variables
├── .mcp.json                        ← MCP client connection config
├── package.json
├── tsconfig.json
├── next.config.js
└── README.md                        ← This file
Example Usage
Once connected, you can say things like:
- "Remember that the Revvity Signals API uses OAuth 2.0 client credentials flow" → calls `store_memory` with appropriate tags
- "What do I know about authentication patterns?" → calls `search_memory`, finds semantically related memories
- "Show me all my code snippets" → calls `list_memories` with `content_type: "code"`
- "How many memories do I have?" → calls `get_stats`
Cost Estimate
| Service | Free Tier | Paid Threshold |
|---|---|---|
| Supabase | 500 MB database, 1 GB storage | ~650K memories at 768d before hitting limit |
| Vercel | Hobby plan (100 GB bandwidth) | Heavy team usage |
| Gemini API | Generous free quota | Thousands of embeddings/day |
| Upstash Redis | 10K commands/day | Heavy concurrent sessions |
For personal second-brain use, everything stays well within free tiers.
Future Enhancements
- [ ] Multimodal storage: Store images/PDFs directly (Gemini Embedding 2 supports them natively)
- [ ] Auto-tagging: Use an LLM to suggest tags for new memories
- [ ] Bulk import: CLI tool to import from Obsidian, Notion, or markdown files
- [ ] Scheduled embedding refresh: Re-embed old memories when the model improves
- [ ] Multi-user support: Add user_id column and JWT auth for shared deployments
License
MIT