Memory MCP Server

SQLite-backed memory storage for MCP agents with optional semantic search via OpenAI embeddings, enabling agents to remember, recall, and manage contextual information across sessions.

README

@ideadesignmedia/memory-mcp

SQLite-backed memory for MCP agents. Ships a CLI and programmatic API.

Highlights

  • Uses sqlite3 (async) for broad prebuilt support; no brittle native build steps.
  • Optional FTS5 indexing for better search; falls back to LIKE when unavailable.
  • Input validation and sane limits to guard against oversized payloads.
  • Auto-generates semantic embeddings via OpenAI when a key is provided; otherwise falls back to text-only scoring.
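The FTS5-with-LIKE-fallback behavior from the highlights can be sketched as follows. The table and column names (`memories`, `memories_fts`, `content`) are illustrative assumptions, not the server's actual schema:

```javascript
// Sketch: prefer an FTS5 full-text query, fall back to LIKE when
// the FTS5 extension is unavailable. Table/column names are assumed.
function buildSearchQuery(ftsAvailable, term, limit = 6) {
  if (ftsAvailable) {
    // FTS5 virtual table gives ranked full-text matching.
    return {
      sql: "SELECT id FROM memories_fts WHERE memories_fts MATCH ? LIMIT ?",
      params: [term, limit],
    };
  }
  // Fallback: plain substring matching via LIKE.
  return {
    sql: "SELECT id FROM memories WHERE content LIKE ? LIMIT ?",
    params: [`%${term}%`, limit],
  };
}
```

Either branch returns a parameterized statement, so user-supplied search terms are never interpolated into the SQL string.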

Install / Run

Quick run (no install):

npx -y @ideadesignmedia/memory-mcp --db=/abs/path/memory.db --topk=6

Install locally (dev dependency) and run:

npm i -D @ideadesignmedia/memory-mcp
npx memory-mcp --db=/abs/path/memory.db --topk=6

Other ecosystem equivalents:

  • pnpm: pnpm dlx @ideadesignmedia/memory-mcp --db=... --topk=6
  • yarn (classic): yarn dlx @ideadesignmedia/memory-mcp --db=... --topk=6

CLI usage

You can invoke it directly (if globally installed) or via npx as shown above.

Optional flags:

  • --embed-key=sk-... supplies the embedding API key (equivalent to MEMORY_EMBEDDING_KEY).
  • --embed-model=text-embedding-3-small overrides the embedding model (equivalent to MEMORY_EMBED_MODEL).

Codex config example

Using npx so no global install is required. Add to ~/.codex/config.toml:

[mcp_servers.memory]
command = "npx"
args = ["-y", "@ideadesignmedia/memory-mcp", "--db=/abs/path/memory.db", "--topk=6"]

Programmatic API

import { MemoryStore, runStdioServer } from "@ideadesignmedia/memory-mcp";

const store = new MemoryStore("./memory.db");
// All store methods are async
const id = await store.insert({
  ownerId: "user-123",
  type: "preference",
  subject: "favorite color",
  content: "blue",
});

// Run as an MCP server over stdio
await runStdioServer({
  dbPath: "./memory.db",
  defaultTopK: 6,
  embeddingApiKey: process.env.MEMORY_EMBEDDING_KEY, // optional
});

Tools

All tools are safe for STDIO. The server writes logs to stderr only.

  • memory-remember

    • Create a concise memory for an owner. Provide ownerId, type (slot), short subject, and content. Optionally set importance (0–1), ttlDays, pinned, consent, sensitivity (tags), and embedding.
    • Response is minimal for LLMs (no embeddings or extra metadata):
      {
        "id": "mem_...",
        "item": { "id": "mem_...", "type": "preference", "subject": "favorite color", "content": "blue" },
        "content": [ { "type": "text", "text": "{\"id\":\"mem_...\",\"type\":\"preference\",\"subject\":\"favorite color\",\"content\":\"blue\"}" } ]
      }
      
  • memory-recall

    • Retrieve up to k relevant memories for an owner via text/semantic search. Accepts optional natural-language query, optional embedding, and optional slot (type).
    • Response is minimal per item: { id, type, subject, content }.
    • Tip: If you need to delete, use recall to find the id, then call memory-forget.
  • memory-list

    • List recent memories for an owner, optionally filtered by slot (type).
    • Response is minimal per item: { id, type, subject, content }.
  • memory-forget

    • Delete a memory by id. Consider recalling/listing first if you need to verify the item.
    • Tip: Do not create a new memory to indicate "forgotten"—delete the original instead.
  • memory-export

    • Export all memories for an owner as a JSON array. Useful for backup/migration.
    • Response items are minimal: { id, type, subject, content }.
  • memory-import

    • Bulk import memories for an owner. Each item mirrors the memory schema (type, subject, content, metadata, optional embedding). Max 1000 items per call.
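The minimal response shape shown for memory-remember above can be reproduced with a small helper. This is a sketch of the shape only, not the server's internal code:

```javascript
// Sketch: build the minimal memory-remember response shown above.
// Only id/type/subject/content are echoed back; embeddings and any
// other metadata are omitted to keep payloads small for LLMs.
function minimalRememberResponse(item) {
  const slim = {
    id: item.id,
    type: item.type,
    subject: item.subject,
    content: item.content,
  };
  return {
    id: slim.id,
    item: slim,
    content: [{ type: "text", text: JSON.stringify(slim) }],
  };
}
```

Note that `content[0].text` carries the same slim object serialized as JSON, matching the MCP convention of returning results as text content blocks.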

Embeddings

Embeddings are optional—without a key the server relies on text search and recency heuristics.

Set MEMORY_EMBEDDING_KEY (or pass --embed-key=... to the CLI) to automatically create embeddings when remembering/importing memories and to embed recall queries. The default model is text-embedding-3-small; override it with MEMORY_EMBED_MODEL or --embed-model. To disable the built-in generator when using the programmatic API, pass embeddingProvider: null to createMemoryMcpServer. To specify a key programmatically, pass embeddingApiKey: "sk-...".
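When embeddings are available, recall can rank items by vector similarity; cosine similarity is the usual choice for OpenAI embeddings. This is a generic sketch of that scoring step, not the server's exact ranking formula:

```javascript
// Cosine similarity between two equal-length embedding vectors.
// Returns a value in [-1, 1]; higher means more semantically similar.
function cosineSimilarity(a, b) {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}
```

A recall pipeline would embed the query once, score each stored embedding with this function, and take the top k, optionally blending in text-match and recency signals when no key is configured.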

Limits and validation

  • memory-remember: subject max 160 chars, content max 1000, sensitivity up to 32 tags.
  • memory-recall: optional query max 1000 chars; if omitted, listing is capped internally.
  • memory-import: up to 1000 items per call; each item has the same field limits as remember.
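The limits above can be enforced with a simple pre-check before insertion. Field names mirror the tool schema, but the function and error messages are illustrative:

```javascript
// Sketch: enforce memory-remember field limits before inserting.
const LIMITS = { subject: 160, content: 1000, sensitivityTags: 32 };

function validateRemember({ subject, content, sensitivity = [] }) {
  if (typeof subject !== "string" || subject.length > LIMITS.subject) {
    throw new Error(`subject must be a string of at most ${LIMITS.subject} chars`);
  }
  if (typeof content !== "string" || content.length > LIMITS.content) {
    throw new Error(`content must be a string of at most ${LIMITS.content} chars`);
  }
  if (!Array.isArray(sensitivity) || sensitivity.length > LIMITS.sensitivityTags) {
    throw new Error(`sensitivity allows at most ${LIMITS.sensitivityTags} tags`);
  }
}
```

Rejecting oversized payloads up front keeps the SQLite database compact and gives the calling agent an actionable error instead of silently truncated data.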
