Mengram

AI memory layer with 3 types — semantic (facts), episodic (events), procedural (workflows that evolve from failures). 21 MCP tools.

<div align="center">

<picture> <source media="(prefers-color-scheme: dark)" srcset="https://img.shields.io/badge/Mengram-a855f7?style=for-the-badge&logo=data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHZpZXdCb3g9IjAgMCAxMjAgMTIwIj48cGF0aCBkPSJNNjAgMTYgUTkyIDE2IDk2IDQ4IFExMDAgNzggNzIgODggUTUwIDk2IDM4IDc2IFEyNiA1OCA0NiA0NiBRNjIgMzggNzAgNTIgUTc2IDY0IDYyIDY4IiBmaWxsPSJub25lIiBzdHJva2U9IiNmZmYiIHN0cm9rZS13aWR0aD0iOCIgc3Ryb2tlLWxpbmVjYXA9InJvdW5kIi8+PGNpcmNsZSBjeD0iNjIiIGN5PSI2OCIgcj0iOCIgZmlsbD0iI2ZmZiIvPjwvc3ZnPg=="> <img alt="Mengram" src="https://img.shields.io/badge/Mengram-a855f7?style=for-the-badge&logo=data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHZpZXdCb3g9IjAgMCAxMjAgMTIwIj48cGF0aCBkPSJNNjAgMTYgUTkyIDE2IDk2IDQ4IFExMDAgNzggNzIgODggUTUwIDk2IDM4IDc2IFEyNiA1OCA0NiA0NiBRNjIgMzggNzAgNTIgUTc2IDY0IDYyIDY4IiBmaWxsPSJub25lIiBzdHJva2U9IiNmZmYiIHN0cm9rZS13aWR0aD0iOCIgc3Ryb2tlLWxpbmVjYXA9InJvdW5kIi8+PGNpcmNsZSBjeD0iNjIiIGN5PSI2OCIgcj0iOCIgZmlsbD0iI2ZmZiIvPjwvc3ZnPg=="> </picture>

Give your AI agents memory that actually learns

PyPI npm License: Apache 2.0 PyPI Downloads

Website · Get API Key · Docs · Console · Examples

</div>

```bash
pip install mengram-ai   # or: npm install mengram-ai
```

```python
from cloud.client import CloudMemory

m = CloudMemory(api_key="om-...")       # Free key → mengram.io

m.add([{"role": "user", "content": "I use Python and deploy to Railway"}])
m.search("tech stack")                  # → facts
m.episodes(query="deployment")          # → events
m.procedures(query="deploy")            # → workflows that evolve from failures
```

## Why Mengram?

Every AI memory tool stores facts. Mengram stores 3 types of memory — and procedures evolve when they fail.

| | Mengram | Mem0 | Zep | Letta |
|---|---|---|---|---|
| Semantic memory (facts, preferences) | Yes | Yes | Yes | Yes |
| Episodic memory (events, decisions) | Yes | No | No | Partial |
| Procedural memory (workflows) | Yes | No | No | No |
| Procedures evolve from failures | Yes | No | No | No |
| Cognitive Profile | Yes | No | No | No |
| Multi-user isolation | Yes | Yes | Yes | No |
| Knowledge graph | Yes | Yes | Yes | Yes |
| LangChain + CrewAI + MCP | Yes | Partial | Partial | Partial |
| Import ChatGPT / Obsidian | Yes | No | No | No |
| Pricing | Free tier | $19-249/mo | Enterprise | Self-host |

## Get Started in 30 Seconds

1. Get a free API key at mengram.io (email or GitHub)

2. Install

```bash
pip install mengram-ai
```

3. Use

```python
from cloud.client import CloudMemory

m = CloudMemory(api_key="om-...")

# Add a conversation — auto-extracts facts, events, and workflows
m.add([
    {"role": "user", "content": "Deployed to Railway today. Build passed but forgot migrations — DB crashed. Fixed by adding a pre-deploy check."},
])

# Search across all 3 memory types at once
results = m.search_all("deployment issues")
# → {semantic: [...], episodic: [...], procedural: [...]}
```
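Grouped results like this can be folded straight into a prompt. A minimal sketch, assuming `search_all` returns the `{semantic, episodic, procedural}` dict shown above with string-renderable items (the real response fields may be richer):

```python
# Sketch: render a search_all-style result as a plain-text context block.
# Assumes the {"semantic": [...], "episodic": [...], "procedural": [...]}
# shape shown above; field names in the real response may differ.

def format_context(results: dict) -> str:
    """Render grouped memories as labeled bullet lists, skipping empty groups."""
    labels = {"semantic": "Facts", "episodic": "Events", "procedural": "Workflows"}
    lines = []
    for key, label in labels.items():
        items = results.get(key) or []
        if items:
            lines.append(f"{label}:")
            lines.extend(f"- {item}" for item in items)
    return "\n".join(lines)

example = {
    "semantic": ["Deploys to Railway"],
    "episodic": ["DB crashed due to missing migrations"],
    "procedural": [],
}
print(format_context(example))
```

The resulting block can be appended to a system prompt before each LLM call.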

<details> <summary><b>JavaScript / TypeScript</b></summary>

```bash
npm install mengram-ai
```

```javascript
// ESM import — top-level await requires an ES module, not require()
import { MengramClient } from 'mengram-ai';
const m = new MengramClient('om-...');

await m.add([{ role: 'user', content: 'Fixed OOM by adding Redis cache layer' }]);
const results = await m.searchAll('database issues');
// → { semantic: [...], episodic: [...], procedural: [...] }
```

</details>

<details> <summary><b>REST API (curl)</b></summary>

```bash
# Add memory
curl -X POST https://mengram.io/v1/add \
  -H "Authorization: Bearer om-..." \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "I prefer dark mode and vim keybindings"}]}'

# Search all 3 types
curl -X POST https://mengram.io/v1/search/all \
  -H "Authorization: Bearer om-..." \
  -d '{"query": "user preferences"}'
```

</details>

## 3 Memory Types

### Semantic — facts, preferences, knowledge

```python
m.search("tech stack")
# → ["Uses Python 3.12", "Deploys to Railway", "PostgreSQL with pgvector"]
```

### Episodic — events, decisions, outcomes

```python
m.episodes(query="deployment")
# → [{summary: "DB crashed due to missing migrations", outcome: "resolved", date: "2025-05-12"}]
```

### Procedural — workflows that evolve

```
Week 1:  "Deploy" → build → push → deploy
                                         ↓ FAILURE: forgot migrations
Week 2:  "Deploy" v2 → build → run migrations → push → deploy
                                                          ↓ FAILURE: OOM
Week 3:  "Deploy" v3 → build → run migrations → check memory → push → deploy ✅
```

This happens automatically when you report failures:

```python
m.procedure_feedback(proc_id, success=False,
                     context="OOM error on step 3", failed_at_step=3)
# → Procedure evolves to v3 with new step added
```

Or fully automatic — just add conversations and Mengram detects failures and evolves procedures:

```python
m.add([{"role": "user", "content": "Deploy failed again — OOM on the build step"}])
# → Episode created → linked to "Deploy" procedure → failure detected → v3 created
```
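Conceptually, each version bump splices a corrective step into the workflow. The sketch below is illustrative only: `evolve`, the step-list shape, and the version dict are invented here, and Mengram's actual evolution runs server-side.

```python
# Illustrative only: a procedure version bump modeled as a pure transformation.
# The `evolve` function and this data shape are NOT the Mengram API;
# they just show the before/after pictured in the diagram above.

def evolve(procedure: dict, failed_at_step: int, new_step: str) -> dict:
    """Return the next version with a corrective step inserted before the failure point."""
    steps = list(procedure["steps"])            # copy: old versions stay intact
    steps.insert(failed_at_step - 1, new_step)  # step numbers are 1-indexed
    return {
        "name": procedure["name"],
        "version": procedure["version"] + 1,
        "steps": steps,
    }

v1 = {"name": "Deploy", "version": 1, "steps": ["build", "push", "deploy"]}
v2 = evolve(v1, failed_at_step=2, new_step="run migrations")
print(v2["steps"])   # ['build', 'run migrations', 'push', 'deploy']
```

Keeping every version immutable is also what makes the `/history` evolution log possible.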

## Cognitive Profile

One API call generates a system prompt from all memories:

```python
profile = m.get_profile()
# → "You are talking to Ali, a developer in Almaty. Uses Python, PostgreSQL,
#    and Railway. Recently debugged pgvector deployment. Prefers direct
#    communication and practical next steps."
```

Insert into any LLM's system prompt for instant personalization.
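Wiring that in is one line of message plumbing. A sketch assuming `get_profile()` returns a plain string; the chat-completions call in the comment is a placeholder for whatever LLM client you use:

```python
# Sketch: prepend the cognitive profile as the system message.
# Assumes get_profile() returns a plain string, as shown above.

def with_profile(profile: str, user_message: str) -> list[dict]:
    """Build a chat message list with the profile as the system prompt."""
    return [
        {"role": "system", "content": profile},
        {"role": "user", "content": user_message},
    ]

messages = with_profile(
    "You are talking to Ali, a developer in Almaty.",
    "How should I deploy this service?",
)
# e.g. client.chat.completions.create(model="gpt-4o", messages=messages)
```

Refreshing the profile once per session (rather than per turn) keeps latency and cost down.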

## Import Existing Data

Kill the cold-start problem:

```bash
mengram import chatgpt ~/Downloads/chatgpt-export.zip --cloud   # ChatGPT history
mengram import obsidian ~/Documents/MyVault --cloud              # Obsidian vault
mengram import files notes/*.md --cloud                          # Any text/markdown
```

## Integrations

<table> <tr> <td width="50%">

**MCP Server** — Claude Desktop, Cursor, Windsurf

```json
{
  "mcpServers": {
    "mengram": {
      "command": "mengram",
      "args": ["server", "--cloud"],
      "env": { "MENGRAM_API_KEY": "om-..." }
    }
  }
}
```

21 tools for memory management.

</td> <td width="50%">

**LangChain**

```python
from integrations.langchain import (
    MengramChatMessageHistory,
    MengramRetriever,
)

history = MengramChatMessageHistory(
    api_key="om-...", user_id="user-1"
)
retriever = MengramRetriever(api_key="om-...")
```

</td> </tr> <tr> <td>

**CrewAI**

```python
from integrations.crewai import create_mengram_tools

tools = create_mengram_tools(api_key="om-...")
# → 5 tools: search, remember, profile,
#   save_workflow, workflow_feedback

agent = Agent(role="Support", tools=tools)
```

</td> <td>

**OpenClaw**

```bash
openclaw plugins install openclaw-mengram
```

Auto-recall before every turn, auto-capture after. 12 tools, slash commands, Graph RAG.

GitHub · npm

</td> </tr> </table>

## Multi-User Isolation

One API key, many users — each sees only their own data:

```python
m.add([...], user_id="alice")
m.add([...], user_id="bob")

m.search_all("preferences", user_id="alice")  # Only Alice's memories
m.get_profile(user_id="alice")                # Alice's cognitive profile
```
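In a multi-tenant app it can help to bind `user_id` once instead of threading it through every call. A hypothetical convenience wrapper (not part of the Mengram SDK; the method names mirror the calls shown above):

```python
# Hypothetical convenience wrapper: pin user_id once per user/request.
# Not part of the Mengram SDK; it simply forwards to the client calls
# shown above with user_id pre-bound.

class ScopedMemory:
    def __init__(self, client, user_id: str):
        self._client = client
        self._user_id = user_id

    def add(self, messages):
        return self._client.add(messages, user_id=self._user_id)

    def search_all(self, query: str):
        return self._client.search_all(query, user_id=self._user_id)

# Usage: alice = ScopedMemory(m, "alice"); alice.search_all("preferences")
```

Constructing one `ScopedMemory` per authenticated request makes it harder to accidentally query the wrong tenant.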

## Agent Templates

Clone, set API key, run in 5 minutes:

| Template | Stack | What it shows |
|---|---|---|
| DevOps Agent | Python SDK | Procedures that evolve from deployment failures |
| Customer Support | CrewAI | Agent with 5 memory tools, remembers returning customers |
| Personal Assistant | LangChain | Cognitive profile + auto-saving chat history |

```bash
cd examples/devops-agent && pip install -r requirements.txt
export MENGRAM_API_KEY=om-...
python main.py
```

## API Reference

| Endpoint | Description |
|---|---|
| `POST /v1/add` | Add memories (auto-extracts all 3 types) |
| `POST /v1/search` | Semantic search |
| `POST /v1/search/all` | Unified search (semantic + episodic + procedural) |
| `GET /v1/episodes/search` | Search events and decisions |
| `GET /v1/procedures/search` | Search workflows |
| `PATCH /v1/procedures/{id}/feedback` | Report outcome — triggers evolution |
| `GET /v1/procedures/{id}/history` | Version history + evolution log |
| `GET /v1/profile` | Cognitive Profile |
| `GET /v1/triggers` | Smart Triggers (reminders, contradictions, patterns) |
| `POST /v1/agents/run` | Memory agents (Curator, Connector, Digest) |
| `GET /v1/me` | Account info |

Full interactive docs: mengram.io/docs

## License

Apache 2.0 — free for commercial use.


<div align="center">

Get your free API key · Built by Ali Baizhanov · mengram.io

</div>
