<p align="center"> <img src="https://remembra.dev/logo.svg" alt="Remembra Logo" width="120"> </p>
<h1 align="center">Remembra</h1>
<p align="center"> <strong>The memory layer for AI that actually works.</strong><br> Persistent memory with entity resolution, temporal decay, and graph-aware recall.<br> Self-host in minutes. No vendor lock-in. </p>
<p align="center"> <a href="https://pypi.org/project/remembra/"><img src="https://img.shields.io/pypi/v/remembra?color=blue&label=PyPI" alt="PyPI"></a> <a href="https://www.npmjs.com/package/remembra"><img src="https://img.shields.io/npm/v/remembra?color=green&label=npm" alt="npm"></a> <a href="https://github.com/remembra-ai/remembra/stargazers"><img src="https://img.shields.io/github/stars/remembra-ai/remembra?style=social" alt="GitHub Stars"></a> <a href="https://opensource.org/licenses/MIT"><img src="https://img.shields.io/badge/License-MIT-yellow.svg" alt="License: MIT"></a> <a href="https://docs.remembra.dev"><img src="https://img.shields.io/badge/docs-remembra.dev-blue" alt="Documentation"></a> </p>
<p align="center"> <a href="https://docs.remembra.dev">Documentation</a> • <a href="https://remembra.dev">Website</a> • <a href="#quick-start">Quick Start</a> • <a href="#why-remembra">Why Remembra?</a> • <a href="https://twitter.com/remembradev">Twitter</a> • <a href="https://discord.gg/Bzv3JshRa3">Discord</a> </p>
<!-- mcp-name: io.github.remembra-ai/remembra -->
## 🚀 What's New in v0.8.2

- 🔐 AES-256-GCM Field Encryption — Encrypt memory content at rest with OWASP-compliant key derivation (see the sketch after this list)
- 🛡️ Enterprise Security Suite — PII detection, anomaly monitoring, audit logging
- 📦 MCP Registry Published — Discoverable as `io.github.remembra-ai/remembra` in Claude Desktop
- ⚡ One-Command Quick Start — `curl | bash` zero-config setup with Ollama embeddings
- 🔌 Multi-Provider Support — OpenAI, Anthropic Claude, Ollama for embeddings & entity extraction
- 📊 Usage Warning Banners — API responses include warning banners at the 60/80/95% usage thresholds
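The field encryption feature pairs AES-256-GCM with a password-derived key. Remembra's own key-management code isn't shown in this README; the following is a minimal sketch of the general pattern, using the `cryptography` package, where the salt handling and KDF parameters are assumptions rather than Remembra's actual settings:

```python
# Illustrative only; not Remembra's implementation. Shows the general
# AES-256-GCM-with-derived-key pattern the feature list refers to.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

def derive_key(passphrase: bytes, salt: bytes) -> bytes:
    # OWASP-recommended PBKDF2 parameters (SHA-256, 600k iterations)
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32, salt=salt, iterations=600_000)
    return kdf.derive(passphrase)

def encrypt_field(plaintext: str, key: bytes) -> bytes:
    nonce = os.urandom(12)                      # unique 96-bit nonce per encryption
    ct = AESGCM(key).encrypt(nonce, plaintext.encode(), None)
    return nonce + ct                           # store the nonce alongside the ciphertext

def decrypt_field(blob: bytes, key: bytes) -> str:
    nonce, ct = blob[:12], blob[12:]
    return AESGCM(key).decrypt(nonce, ct, None).decode()
```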
## The Problem
Every AI app needs memory. Your chatbot forgets users between sessions. Your agent can't recall decisions from yesterday. Your assistant asks the same questions over and over.
The current solutions suck:
- Mem0: $249/mo for graph features, self-hosting docs are trash
- Zep: Academic, complex to deploy
- Letta: Research-grade, not production-ready
- LangChain Memory: Too basic, no persistence
## The Solution

```python
from remembra import Memory

memory = Memory(user_id="user_123")

# Store — entities and facts extracted automatically
memory.store("Had a meeting with Sarah from Acme Corp. She prefers email over Slack.")

# Recall — semantic search finds relevant memories
result = memory.recall("How should I contact Sarah?")
print(result.context)
# → "Sarah from Acme Corp prefers email over Slack."

# It knows "Sarah" and "Acme Corp" are entities. It builds relationships.
# It persists across sessions, reboots, context windows. Forever.
```
## ⚡ Quick Start (2 Minutes)

### One-Command Install

```bash
curl -sSL https://raw.githubusercontent.com/remembra-ai/remembra/main/quickstart.sh | bash
```

That's it. Remembra + Qdrant + Ollama start locally. No API keys needed.

Or with Docker Compose directly:

```bash
git clone https://github.com/remembra-ai/remembra && cd remembra
docker compose -f docker-compose.quickstart.yml up -d
```
Try it:

```bash
# Store a memory
curl -X POST http://localhost:8787/api/v1/memories/store \
  -H "Content-Type: application/json" \
  -d '{"content": "Alice is CEO of Acme Corp", "user_id": "demo"}'

# Recall it
curl -X POST http://localhost:8787/api/v1/memories/recall \
  -H "Content-Type: application/json" \
  -d '{"query": "Who runs Acme?", "user_id": "demo"}'
```
### Connect to Claude (MCP)

Claude Desktop — add to `~/Library/Application Support/Claude/claude_desktop_config.json`:

```json
{
  "mcpServers": {
    "remembra": {
      "command": "remembra-mcp",
      "env": {
        "REMEMBRA_URL": "http://localhost:8787",
        "REMEMBRA_USER_ID": "default"
      }
    }
  }
}
```

Claude Code:

```bash
claude mcp add remembra -e REMEMBRA_URL=http://localhost:8787 -- remembra-mcp
```

Cursor — add to `.cursor/mcp.json`:

```json
{
  "mcpServers": {
    "remembra": {
      "command": "remembra-mcp",
      "env": {
        "REMEMBRA_URL": "http://localhost:8787"
      }
    }
  }
}
```

Now ask Claude: "Remember that Alice is CEO of Acme Corp" — then later: "Who runs Acme?"
### Python SDK

```bash
pip install remembra
```

```python
from remembra import Memory

memory = Memory(user_id="user_123")
memory.store("Had a meeting with Sarah from Acme Corp. She prefers email over Slack.")

result = memory.recall("How should I contact Sarah?")
print(result.context)  # "Sarah from Acme Corp prefers email over Slack."
```
### TypeScript SDK

```bash
npm install remembra
```

```typescript
import { Remembra } from 'remembra';

const memory = new Remembra({ url: 'http://localhost:8787' });
await memory.store('User prefers dark mode');
const result = await memory.recall('preferences');
```
## 🔥 Why Remembra?

### Feature Comparison

| Feature | Remembra | Mem0 | Zep/Graphiti | Letta | Engram |
|---|---|---|---|---|---|
| One-Command Install | ✅ curl \| bash | ✅ pip | ✅ pip | ⚠️ Complex | ✅ brew |
| Entity Resolution | ✅ Free | 💰 $249/mo | ✅ | ❌ | ❌ |
| Conflict Detection | ✅ Unique | ❌ | ❌ | ❌ | ❌ |
| PII Detection | ✅ Built-in | ❌ | ❌ | ❌ | ❌ |
| Hybrid Search | ✅ BM25+Vector | ❌ | ✅ | ❌ | ❌ |
| 6 Embedding Providers | ✅ Hot-swap | ❌ (1-2) | ❌ (1) | ❌ | ❌ |
| Plugin System | ✅ | ❌ | ❌ | ✅ | ❌ |
| Sleep-Time Compute | ✅ | ❌ | ❌ | ✅ | ❌ |
| Self-Host + Billing | ✅ Stripe | ❌ | ❌ | ❌ | ❌ |
| Memory Spaces | ✅ Multi-tenant | ❌ | ❌ | ❌ | ❌ |
| MCP Server | ✅ Native | ✅ | ❌ | ❌ | ✅ |
| Pricing | Free / $49 / $99 | $19 → $249 | $25+ | Free | Free |
| License | MIT | Apache 2.0 | Apache 2.0 | Apache 2.0 | MIT |
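The Hybrid Search row above refers to combining BM25 keyword scores with vector similarity. Remembra's exact fusion strategy isn't documented in this README; a common approach is reciprocal rank fusion, sketched below with made-up memory IDs:

```python
# Reciprocal rank fusion (RRF): a standard way to merge a BM25 ranking with a
# vector-similarity ranking. Illustrative only; not Remembra's internals.
from collections import defaultdict

def reciprocal_rank_fusion(rankings: list[list[str]], k: int = 60) -> list[str]:
    scores: dict[str, float] = defaultdict(float)
    for ranked_ids in rankings:
        for rank, doc_id in enumerate(ranked_ids, start=1):
            scores[doc_id] += 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

bm25_hits = ["mem_7", "mem_2", "mem_9"]      # keyword ranking (hypothetical IDs)
vector_hits = ["mem_2", "mem_5", "mem_7"]    # semantic ranking (hypothetical IDs)
print(reciprocal_rank_fusion([bm25_hits, vector_hits]))  # mem_2 and mem_7 rise to the top
```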
### Core Features

- 🧠 Smart Extraction — LLM-powered fact extraction from raw text
- 👥 Entity Resolution — "Adam", "Mr. Smith", "my husband" → same person
- ⏱️ Temporal Memory — TTL, decay curves, historical queries (see the decay sketch after this list)
- 🔍 Hybrid Search — Semantic + keyword for accurate recall
- 🔒 Security — PII detection, anomaly monitoring, audit logs
- 📊 Dashboard — Visual memory browser, entity graphs, analytics
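Decay curves here mean down-weighting stale memories at recall time. The exact curve Remembra uses isn't specified in this README, but an exponential half-life model, the usual choice, looks roughly like this (half-life value is an assumed tuning knob):

```python
# Exponential decay applied to a retrieval score; a generic illustration of
# the "decay curves" idea, not Remembra's actual formula or parameters.
from datetime import datetime, timezone

HALF_LIFE_DAYS = 30.0  # assumed tuning knob

def decayed_score(similarity: float, stored_at: datetime) -> float:
    age_days = (datetime.now(timezone.utc) - stored_at).total_seconds() / 86_400
    decay = 0.5 ** (age_days / HALF_LIFE_DAYS)   # halves every HALF_LIFE_DAYS
    return similarity * decay

fresh = decayed_score(0.9, datetime.now(timezone.utc))
stale = decayed_score(0.9, datetime(2024, 1, 1, tzinfo=timezone.utc))
print(f"{fresh:.3f} vs {stale:.3f}")  # the older memory ranks lower
```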
## 📊 Benchmark Results

Tested on the LoCoMo benchmark (Snap Research, ACL 2024) — the standard academic benchmark for AI memory systems.
| Category | Accuracy | Questions |
|---|---|---|
| Single-hop (direct recall) | 100% | 37 |
| Multi-hop (cross-session reasoning) | 100% | 32 |
| Temporal (time-based queries) | 100% | 13 |
| Open-domain (world knowledge + memory) | 100% | 70 |
| Overall (memory categories) | 100% | 152 |
Scored with an LLM judge (GPT-4o-mini). Adversarial detection is not yet implemented. Run it yourself:

```bash
python benchmarks/locomo_runner.py --data /tmp/locomo/data/locomo10.json
```
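LLM-judge scoring follows the usual pattern of asking a small model whether the recalled answer matches the gold answer. The runner's actual prompt and schema aren't reproduced here; a hedged sketch of the idea, assuming the official `openai` client:

```python
# Minimal LLM-judge sketch: grade a predicted answer against the gold answer.
# Prompt wording and model choice are assumptions, not the benchmark runner's code.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def judge(question: str, gold: str, predicted: str) -> bool:
    prompt = (
        f"Question: {question}\nGold answer: {gold}\nPredicted answer: {predicted}\n"
        "Does the predicted answer convey the same fact as the gold answer? Reply YES or NO."
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return resp.choices[0].message.content.strip().upper().startswith("YES")
```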
## 📖 Documentation
| Resource | Description |
|---|---|
| Quick Start | Get running in minutes |
| Python SDK | Full Python reference |
| TypeScript SDK | JavaScript/TypeScript guide |
| MCP Server | Tool reference + setup guides for 9 tools |
| REST API | API reference |
| Self-Hosting | Docker deployment guide |
## 🛠️ MCP Server

Give any AI coding tool persistent memory with one command. Works with Claude Code, Cursor, VS Code + Copilot, Windsurf, JetBrains, Zed, OpenAI Codex, and any MCP-compatible client.

```bash
pip install remembra[mcp]
claude mcp add remembra -e REMEMBRA_URL=http://localhost:8787 -- remembra-mcp
```

Available Tools:

| Tool | Description |
|---|---|
| `store_memory` | Save facts, decisions, context |
| `recall_memories` | Semantic search across memories |
| `forget_memories` | GDPR-compliant deletion |
| `ingest_conversation` | Auto-extract from chat history |
| `health_check` | Verify connection |
## 🏗️ Architecture

```
┌───────────────────────────────────────────────────────────────┐
│                       Your Application                        │
├──────────┬──────────────┬─────────────────────────────────────┤
│ Python   │  TypeScript  │     MCP Server (Claude/Cursor)      │
│   SDK    │     SDK      │            remembra-mcp             │
├──────────┴──────────────┴─────────────────────────────────────┤
│                       Remembra REST API                       │
├───────────────┬───────────────┬───────────────┬───────────────┤
│  Extraction   │   Entities    │   Retrieval   │   Security    │
│     (LLM)     │    (Graph)    │   (Hybrid)    │  (PII/Audit)  │
├───────────────┴───────────────┴───────────────┴───────────────┤
│                         Storage Layer                         │
│          Qdrant (vectors) + SQLite (metadata/graph)           │
└───────────────────────────────────────────────────────────────┘
```
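The storage layer pairs a vector index with a relational store for metadata and the entity graph. How Remembra wires the two together isn't shown in this README; the sketch below illustrates the dual-write pattern with `qdrant-client` and the standard-library `sqlite3` module, where the collection name, table schema, and default Qdrant port are illustrative assumptions:

```python
# Dual-write pattern: the embedding goes to Qdrant, the metadata row to SQLite.
# The "memories" collection/table names and schema are made up for illustration,
# and the Qdrant collection is assumed to already exist.
import sqlite3
import uuid
from qdrant_client import QdrantClient
from qdrant_client.models import PointStruct

qdrant = QdrantClient(url="http://localhost:6333")
db = sqlite3.connect("remembra.db")
db.execute("CREATE TABLE IF NOT EXISTS memories (id TEXT PRIMARY KEY, user_id TEXT, content TEXT)")

def store_memory(user_id: str, content: str, embedding: list[float]) -> str:
    memory_id = str(uuid.uuid4())
    qdrant.upsert(
        collection_name="memories",
        points=[PointStruct(id=memory_id, vector=embedding, payload={"user_id": user_id})],
    )
    db.execute("INSERT INTO memories VALUES (?, ?, ?)", (memory_id, user_id, content))
    db.commit()
    return memory_id
```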
## 🤝 Contributing

We welcome contributions! See CONTRIBUTING.md for guidelines.

```bash
# Clone
git clone https://github.com/remembra-ai/remembra
cd remembra

# Install dev dependencies
pip install -e ".[dev]"

# Run tests
pytest

# Start dev server
remembra-server --reload
```
## 📄 License

MIT License — Use it however you want.
## ⭐ Star History

If Remembra helps you, please star the repo! It helps others discover the project.
<p align="center"> Built with ❤️ by <a href="https://dolphytech.com">DolphyTech</a><br> <a href="https://remembra.dev">remembra.dev</a> • <a href="https://docs.remembra.dev">docs</a> • <a href="https://twitter.com/remembradev">twitter</a> • <a href="https://discord.gg/Bzv3JshRa3">discord</a> </p>