<img src="media/memora.gif" width="60" align="left" alt="Memora Demo">

# Memora

<br clear="left">

A lightweight Model Context Protocol (MCP) server that persists shared memories in SQLite, with knowledge graph visualization, semantic/hybrid search, importance scoring, and cloud sync (S3/R2) for cross-session context management. Compatible with Claude Code, Codex CLI, and other MCP-aware clients.

<img src="media/viz_graph_exp.png" alt="Knowledge Graph Visualization" width="600">

## Features

- **Persistent Storage** - SQLite-backed database with optional cloud sync (S3, GCS, Azure)
- **Semantic Search** - Vector embeddings (TF-IDF, sentence-transformers, or OpenAI)
- **Event Notifications** - Poll-based system for inter-agent communication
- **Advanced Queries** - Full-text search, date ranges, tag filters (AND/OR/NOT)
- **Cross-references** - Auto-linked related memories based on similarity
- **Hierarchical Organization** - Explore memories by section/subsection
- **Export/Import** - Backup and restore with merge strategies
- **Knowledge Graph** - Interactive HTML visualization with filtering
- **Live Graph Server** - Auto-starts an HTTP server for remote access via SSH
- **Statistics & Analytics** - Tag usage, trends, and connection insights
- **Zero Dependencies** - Works out of the box with the Python stdlib (optional backends available)

## Install

```bash
# From GitHub
pip install git+https://github.com/agentic-mcp-tools/memora.git

# With extras (editable installs require a local clone)
pip install -e ".[cloud]"       # S3/R2/GCS cloud storage (boto3)
pip install -e ".[embeddings]"  # semantic search (sentence-transformers)
pip install -e ".[all]"         # cloud + embeddings + dev tools
```

## Usage

The server runs automatically when configured in Claude Code. Manual invocation:

```bash
# Default (stdio mode for MCP)
memora-server

# With graph visualization server
memora-server --graph-port 8765

# HTTP transport (alternative to stdio)
memora-server --transport streamable-http --host 127.0.0.1 --port 8080
```

## Claude Code Config

Add to `.mcp.json` in your project root:

### Local DB

```json
{
  "mcpServers": {
    "memory": {
      "command": "memora-server",
      "args": [],
      "env": {
        "MEMORA_DB_PATH": "~/.local/share/memora/memories.db",
        "MEMORA_ALLOW_ANY_TAG": "1",
        "MEMORA_GRAPH_PORT": "8765"
      }
    }
  }
}
```

### Cloud DB (S3/R2)

```json
{
  "mcpServers": {
    "memory": {
      "command": "memora-server",
      "args": [],
      "env": {
        "AWS_PROFILE": "memora",
        "AWS_ENDPOINT_URL": "https://<account-id>.r2.cloudflarestorage.com",
        "MEMORA_STORAGE_URI": "s3://memories/memories.db",
        "MEMORA_CLOUD_ENCRYPT": "true",
        "MEMORA_ALLOW_ANY_TAG": "1",
        "MEMORA_GRAPH_PORT": "8765"
      }
    }
  }
}
```

## Codex CLI Config

Add to `~/.codex/config.toml`:

```toml
[mcp_servers.memory]
command = "memora-server"  # or full path: /path/to/bin/memora-server
args = ["--no-graph"]

# TOML inline tables must fit on one line, so env goes in a sub-table
[mcp_servers.memory.env]
AWS_PROFILE = "memora"
AWS_ENDPOINT_URL = "https://<account-id>.r2.cloudflarestorage.com"
MEMORA_STORAGE_URI = "s3://memories/memories.db"
MEMORA_CLOUD_ENCRYPT = "true"
MEMORA_ALLOW_ANY_TAG = "1"
```

## Environment Variables

| Variable | Description |
| --- | --- |
| `MEMORA_DB_PATH` | Local SQLite database path (default: `~/.local/share/memora/memories.db`) |
| `MEMORA_STORAGE_URI` | Cloud storage URI for S3/R2 (e.g., `s3://bucket/memories.db`) |
| `MEMORA_CLOUD_ENCRYPT` | Encrypt database before uploading to cloud (`true`/`false`) |
| `MEMORA_CLOUD_COMPRESS` | Compress database before uploading to cloud (`true`/`false`) |
| `MEMORA_CACHE_DIR` | Local cache directory for the cloud-synced database |
| `MEMORA_ALLOW_ANY_TAG` | Allow any tag without validation against the allowlist (`1` to enable) |
| `MEMORA_TAG_FILE` | Path to a file containing allowed tags (one per line) |
| `MEMORA_TAGS` | Comma-separated list of allowed tags |
| `MEMORA_GRAPH_PORT` | Port for the knowledge graph visualization server (default: `8765`) |
| `MEMORA_EMBEDDING_MODEL` | Embedding backend: `tfidf` (default), `sentence-transformers`, or `openai` |
| `SENTENCE_TRANSFORMERS_MODEL` | Model for sentence-transformers (default: `all-MiniLM-L6-v2`) |
| `OPENAI_API_KEY` | API key for OpenAI embeddings (required when using the `openai` backend) |
| `OPENAI_EMBEDDING_MODEL` | OpenAI embedding model (default: `text-embedding-3-small`) |
| `AWS_PROFILE` | AWS credentials profile from `~/.aws/credentials` (useful for R2) |
| `AWS_ENDPOINT_URL` | S3-compatible endpoint for R2/MinIO |
| `R2_PUBLIC_DOMAIN` | Public domain for R2 image URLs |
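
The same variables work outside an MCP client config, e.g. exported in a shell before a manual launch. A minimal sketch for an R2-backed setup (the account ID and bucket are placeholders):

```bash
# Placeholder endpoint and bucket; substitute your own values
export AWS_PROFILE=memora
export AWS_ENDPOINT_URL="https://<account-id>.r2.cloudflarestorage.com"
export MEMORA_STORAGE_URI="s3://memories/memories.db"
export MEMORA_CLOUD_ENCRYPT=true
export MEMORA_CLOUD_COMPRESS=true
memora-server
```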

## Semantic Search & Embeddings

Memora supports three embedding backends for semantic search:

| Backend | Install | Quality | Speed |
| --- | --- | --- | --- |
| `tfidf` (default) | None | Basic keyword matching | Fast |
| `sentence-transformers` | `pip install sentence-transformers` | True semantic understanding | Medium |
| `openai` | `pip install openai` | High quality | API latency |
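
To switch backends, set `MEMORA_EMBEDDING_MODEL` before starting the server. A minimal sketch, assuming the sentence-transformers extra is installed:

```bash
# Assumes: pip install sentence-transformers (or the [embeddings] extra)
export MEMORA_EMBEDDING_MODEL=sentence-transformers
export SENTENCE_TRANSFORMERS_MODEL=all-MiniLM-L6-v2  # optional; this is the documented default
memora-server
```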

**Automatic:** Embeddings and cross-references are computed automatically when you call `memory_create`, `memory_update`, or `memory_create_batch`.

**Manual rebuild required when:**

- Changing `MEMORA_EMBEDDING_MODEL` after memories exist
- Switching to a different sentence-transformers model

```
# After changing the embedding model, rebuild all embeddings
memory_rebuild_embeddings

# Then rebuild cross-references to update the knowledge graph
memory_rebuild_crossrefs
```

## Neovim Integration

Browse memories directly in Neovim with Telescope. Copy the plugin to your config:

```bash
# For kickstart.nvim / lazy.nvim
cp nvim/memora.lua ~/.config/nvim/lua/kickstart/plugins/
```

**Usage:** Press `<leader>sm` to open the memory browser with fuzzy search and preview.

**Requires:** telescope.nvim, plenary.nvim, and memora installed in your Python environment.

## Knowledge Graph Export

Export memories as an interactive HTML knowledge graph visualization:

```
# Via MCP tool
memory_export_graph(output_path="~/memories_graph.html", min_score=0.25)
```

The export produces an interactive vis.js graph with tag/section filtering, memory tooltips, Mermaid diagram rendering, and auto-resized image thumbnails. Click nodes to view content; drag to explore.

## Live Graph Server

A built-in HTTP server starts automatically with the MCP server, serving the graph visualization on-demand.

**Access locally:**

```
http://localhost:8765/graph
```

**Remote access via SSH:**

```bash
ssh -L 8765:localhost:8765 user@remote
# Then open http://localhost:8765/graph in your browser
```

**Configuration:**

```json
{
  "env": {
    "MEMORA_GRAPH_PORT": "8765"
  }
}
```

Use different ports on different machines to avoid conflicts when forwarding multiple servers.
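
For example, to browse the graphs of two remote machines side by side (host names and the second port are illustrative):

```bash
# hostA keeps the default port
ssh -L 8765:localhost:8765 user@hostA

# hostB was started with MEMORA_GRAPH_PORT=8766
ssh -L 8766:localhost:8766 user@hostB
```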

To disable: add `"--no-graph"` to `args` in your MCP config.
