MCP Memory Server


A persistent local vector memory server that allows users to store and search project-specific context using LanceDB and local embeddings. It enables MCP-compliant editors to maintain long-term memory across different projects without requiring external API keys.


Fremem (formerly MCP Memory Server)


A persistent vector memory server for Windsurf, VS Code, and other MCP-compliant editors.

🌟 Philosophy

  • Privacy-first, local-first AI memory: Your data stays on your machine.
  • No vendor lock-in: Uses open standards and local files.
  • Built for MCP: Designed specifically to enhance Windsurf, Cursor, and other MCP-compatible IDEs.

ℹ️ Status (v0.2.0)

Stable:

  • ✅ Local MCP memory with Windsurf/Cursor
  • ✅ Multi-project isolation
  • ✅ Ingestion of Markdown docs

Not stable yet:

  • 🚧 Auto-ingest (file watching)
  • 🚧 Memory pruning
  • 🚧 Remote sync

Note: There are two ways to run this server:

  1. Local IDE (stdio): Used by Windsurf/Cursor (default).
  2. Docker/Server (HTTP): Used for remote or containerized deployments (exposes port 8000).

πŸ₯ Health Check

To verify the server binary runs correctly:

# From within the virtual environment
python -m fremem.server --help

✅ Quickstart (5-Minute Setup)

There are two ways to set this up: Global Install (recommended for ease of use) or Local Dev.

Option A: Global Install (Like npm -g)

This method allows you to run fremem from anywhere without managing virtual environments manually.

1. Install pipx (if not already installed):

macOS (via Homebrew):

brew install pipx
pipx ensurepath
# Restart your terminal after this!

Linux/Windows: See pipx installation instructions.

2. Install fremem:

# Install from PyPI
pipx install fremem

# Verify installation
fremem --help

3. Configure Windsurf / VS Code:

Since pipx puts the executable in your PATH, the config is simpler:

{
  "mcpServers": {
    "memory": {
      "command": "fremem",
      "args": [],
      "env": {
        "MCP_MEMORY_PATH": "/Users/YOUR_USERNAME/mcp-memory-data"
      }
    }
  }
}

Note on MCP_MEMORY_PATH: This is where fremem will store its persistent database. You can point this to any directory you like (it will be created if it doesn't exist). We recommend something like ~/mcp-memory-data or ~/.fremem-data. It must be an absolute path.
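
For reference, here is a minimal sketch of how a data directory resolved from MCP_MEMORY_PATH typically behaves; it is illustrative only and may not match fremem's internals exactly:

import os
from pathlib import Path

# Illustrative only: read the data directory from the environment.
data_dir = Path(os.environ["MCP_MEMORY_PATH"])
if not data_dir.is_absolute():
    raise ValueError("MCP_MEMORY_PATH must be an absolute path")
data_dir.mkdir(parents=True, exist_ok=True)  # created on first use if missing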

Option B: Local Dev Setup

1. Clone and Setup

git clone https://github.com/iamjpsharma/fremem.git
cd fremem

# Create virtual environment
python3 -m venv .venv
source .venv/bin/activate

# Install dependencies AND the package in editable mode
pip install -e .

2. Configure Windsurf / VS Code (Local Dev)

Add this to your mcpServers configuration (e.g., ~/.codeium/windsurf/mcp_config.json):

Note: Replace /ABSOLUTE/PATH/TO/fremem with the actual full path to the cloned directory.

{
  "mcpServers": {
    "memory": {
      "command": "/ABSOLUTE/PATH/TO/fremem/.venv/bin/python",
      "args": ["-m", "fremem.server"],
      "env": {
        "MCP_MEMORY_PATH": "/ABSOLUTE/PATH/TO/fremem/mcp_memory_data"
      }
    }
  }
}

In local dev mode, it's common to store the data inside the repo (ignored by git), but you can use any absolute path.

🚀 Usage

0. HTTP Server (New)

You can run the server via HTTP (SSE) if you prefer:

# Run on port 8000
python -m fremem.server_http

Access the SSE endpoint at http://localhost:8000/sse and send messages to http://localhost:8000/messages.
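
If you want to sanity-check the HTTP transport from a script, a minimal sketch using the official MCP Python SDK (pip install mcp) looks roughly like this; the URL matches the SSE endpoint above, the rest is generic client boilerplate:

import asyncio
from mcp import ClientSession
from mcp.client.sse import sse_client

async def main():
    # Connect to fremem's SSE endpoint and list the exposed memory tools.
    async with sse_client("http://localhost:8000/sse") as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            print([tool.name for tool in tools.tools])

asyncio.run(main())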

🐳 Run with Docker

To run the server in a container:

# Build the image
docker build -t fremem .

# Run the container
# Mount your local data directory to /data inside the container
docker run -p 8000:8000 -v $(pwd)/mcp_memory_data:/data fremem

The server will be available at http://localhost:8000/sse.

1. Ingestion (Adding Context)

Use the included helper script ingest.sh to add files to a specific project.

# ingest.sh <project_name> <file1> <file2> ...

# Example: Project "Thaama"
./ingest.sh project-thaama \
  docs/architecture.md \
  src/main.py

# Example: Project "OpenClaw"
./ingest.sh project-openclaw \
  README.md \
  CONTRIBUTING.md

💡 Project ID Naming Convention

It is recommended to use a consistent prefix for your project IDs to avoid collisions:

  • project-thaama
  • project-openclaw
  • project-myapp

2. Connect in Editor

Once configured, the following tools will be available to the AI Assistant:

  • memory_search(project_id, q, filter=None): Semantic search. Supports metadata filtering (e.g., filter={"type": "code"}). Returns distance scores.
  • memory_add(project_id, id, text): Manual addition.
  • memory_list_sources(project_id): List the specific files that have been ingested.
  • memory_delete_source(project_id, source): Remove a specific file.
  • memory_stats(project_id): Get chunk count.
  • memory_reset(project_id): Clear all memories for a project.

The AI will effectively have "long-term memory" of the files you ingested.
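
Normally your editor calls these tools on your behalf, but you can also drive them from a script for testing. Below is a rough sketch using the official MCP Python SDK over stdio; the tool names and arguments come from the list above, while the data path and project ID are placeholders you should replace:

import asyncio
import os
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

# Launch the globally installed fremem binary as a stdio MCP server.
server = StdioServerParameters(
    command="fremem",
    env={**os.environ, "MCP_MEMORY_PATH": "/Users/YOUR_USERNAME/mcp-memory-data"},
)

async def main():
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # Store a memory, then retrieve it with a semantic query.
            await session.call_tool("memory_add", {
                "project_id": "project-thaama",
                "id": "decision-001",
                "text": "We chose LanceDB for local vector storage.",
            })
            result = await session.call_tool("memory_search", {
                "project_id": "project-thaama",
                "q": "which vector database do we use?",
            })
            print(result.content)

asyncio.run(main())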

🛠 Troubleshooting

  • "fremem: command not found" after installing:

    • This means pipx installed the binary to a location not in your system's PATH (e.g., ~/.local/bin).
    • Fix: Run pipx ensurepath and restart your terminal.
    • Manual Fix: Add export PATH="$PATH:$HOME/.local/bin" to your shell config (e.g., ~/.zshrc).
  • "No MCP server found" or Connection errors:

    • Check the output of pwd to ensure your absolute paths in mcp_config.json are 100% correct.
    • Ensure the virtual environment (.venv) is created and dependencies are installed.
  • "Wrong project_id used":

    • The AI sometimes guesses the project ID. You can explicitly tell it: "Use project_id 'project-thaama'".
  • Embedding Model Downloads:

    • On the first run, the server downloads the all-MiniLM-L6-v2 model (approx 100MB). This may cause a slight delay on the first request.
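
If you want to avoid that first-request delay, you can warm the model cache ahead of time. This assumes the model is loaded via sentence-transformers, which the model name suggests but fremem's internals may differ:

# One-off warm-up: download all-MiniLM-L6-v2 into the local cache.
from sentence_transformers import SentenceTransformer

SentenceTransformer("all-MiniLM-L6-v2")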

πŸ—‘οΈ Uninstalling

To remove fremem from your system:

If installed via pipx (Global):

pipx uninstall fremem

If installed locally (Dev): Just delete the directory.

πŸ“ Repo Structure

/
├── src/fremem/
│   ├── server.py       # Main MCP server entry point
│   ├── ingest.py       # Ingestion logic
│   └── db.py           # LanceDB wrapper
├── ingest.sh           # Helper script
├── requirements.txt    # Top-level dependencies
├── pyproject.toml      # Package config
├── mcp_memory_data/    # Persistent vector storage (gitignored)
└── README.md

πŸ—ΊοΈ Roadmap

✅ Completed (v0.1.x)

  • [x] Local vector storage (LanceDB)
  • [x] Multi-project isolation
  • [x] Markdown ingestion
  • [x] PDF ingestion
  • [x] Semantic chunking strategies
  • [x] Windows support + editable install fixes
  • [x] HTTP transport wrapper (SSE)
  • [x] Fix resource listing errors (clean MCP UX)
  • [x] Robust docs + 5-minute setup
  • [x] Multi-IDE support (Windsurf, Cursor-compatible MCP)

🚀 Near-Term (v0.2.x – Production Readiness)

🧠 Memory Governance

  • [x] List memory sources per project
  • [x] Delete memory by source (file-level deletion)
  • [x] Reset memory per project
  • [x] Replace / reindex mode (prevent stale chunks)
  • [x] Memory stats (chunk count, last updated, size)

🎯 Retrieval Quality

  • [x] Metadata filtering (e.g., type=decision | rules | context)
  • [x] Similarity scoring in results
  • [ ] Hybrid search (semantic + keyword)
  • [ ] Return evidence + similarity scores with search results
  • [ ] Configurable top_k defaults per project

βš™οΈ Dev Workflow

  • [ ] Auto-ingest on git commit / file change
  • [ ] mcp-memory init <project-id> bootstrap command
  • [ ] Project templates (PROJECT_CONTEXT.md, DECISIONS.md, AI_RULES.md)

🧠 Advanced RAG (v0.3.x – Differentiators)

  • [ ] Hierarchical retrieval (summary-first, detail fallback)
  • [ ] Memory compression (old chunks β†’ summaries)
  • [ ] Temporal ranking (prefer newer decisions)
  • [ ] Scoped retrieval (planner vs coder vs reviewer agents)
  • [ ] Query rewrite / expansion for better recall

🏒 Team / SaaS Mode (Optional)

Philosophy: Local-first remains the default. SaaS is an optional deployment mode.

πŸ” Auth & Multi-Tenancy

  • [ ] Project-level auth (API keys or JWT)
  • [ ] Org / team separation
  • [ ] Audit logs for memory changes

☁️ Remote Storage Backends (Pluggable)

  • [ ] S3-compatible vector store backend
  • [ ] Postgres / pgvector backend
  • [ ] Sync & Federation (Local ↔ Remote)

🚫 Non-Goals

  • ❌ No mandatory cloud dependency
  • ❌ No vendor lock-in
  • ❌ No chat history as "memory" by default (signal > noise)
  • ❌ No model fine-tuning
