# 🧠 Engram

The intelligence layer for AI agents


Every AI agent is born smart but amnesiac. Engram fixes that. It doesn't just store memories -- it learns, consolidates patterns, detects contradictions, and surfaces context you didn't ask for.

<a href="https://glama.ai/mcp/servers/@tstockham96/engram"> <img width="380" height="200" src="https://glama.ai/mcp/servers/@tstockham96/engram/badge" alt="Engram MCP server" /> </a>


## Install

```bash
npm install -g engram-sdk
engram init
```

That's it. Works with Claude Code, Cursor, or any MCP client. Also available as a REST API and TypeScript SDK.


## Why Engram

Existing memory solutions are storage layers -- they save facts and retrieve them. Engram is an intelligence layer with three tiers:

| Tier | What it does | Who has it |
|---|---|---|
| Explicit Memory | Stores facts, preferences, conversation turns | Everyone |
| Implicit Memory | Detects behavioral patterns from how users work | Engram only |
| Synthesized Memory | Consolidation produces insights nobody asked for | Engram only |

Key insight: Engram invests intelligence at read time (when the query is known), not write time (when you don't know what'll matter). This is the fundamental architectural difference from Mem0, Zep, and LangMem.
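
As a toy illustration of the difference (a hypothetical sketch, not Engram's internals): a write-time system must decide what matters when it ingests a memory, before any query exists; a read-time system keeps memories raw and ranks them against the concrete query, including entity-aware boosts like the one below.

```ts
// Hypothetical sketch -- not Engram's actual implementation. The point:
// a read-time system spends its intelligence scoring raw memories
// against the query it actually receives.

interface Memory {
  content: string;
  entities: string[]; // e.g. ["Sarah", "TypeScript"]
}

function readTimeRecall(query: string, memories: Memory[], limit = 5): Memory[] {
  const terms = query.toLowerCase().split(/\s+/);
  return memories
    .map((memory) => {
      // Entity-aware boost: memories about entities named in the query rank higher.
      const entityHits = memory.entities.filter((e) =>
        terms.includes(e.toLowerCase())
      ).length;
      const termHits = terms.filter((t) =>
        memory.content.toLowerCase().includes(t)
      ).length;
      return { memory, score: termHits + 2 * entityHits };
    })
    .filter((scored) => scored.score > 0)
    .sort((a, b) => b.score - a.score)
    .slice(0, limit)
    .map((scored) => scored.memory);
}
```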


## Benchmarks

Evaluated on LOCOMO -- the standard benchmark for agent memory systems. Same benchmark Mem0 used to claim state of the art.

| System | Accuracy | Tokens/Query |
|---|---|---|
| **Engram** | 80.0% | 1,504 |
| Full Context | 88.4% | 23,423 |
| Mem0 (published) | 66.9% | -- |
| MEMORY.md | 28.8% | -- |

10 conversations, 1,540 questions, 4 categories. A 19.6% relative improvement over Mem0 with 93.6% fewer tokens than full context (1,504 vs. 23,423 tokens per query).

Full context (dumping the entire conversation history) scores highest, but it uses roughly 15x more tokens and can't scale past context-window limits. Engram closes most of the gap at a fraction of the cost.

Full benchmark methodology and per-category breakdown: see `EVAL.md`.


## Features

- **MCP Server** -- 10 memory tools for Claude Code, Cursor, and any MCP client
- **REST API** -- full HTTP API for any language or framework
- **TypeScript SDK** -- embedded use for Node.js agents
- **CLI** -- interactive REPL, bulk operations, eval tools
- **Model-agnostic** -- works with Gemini, OpenAI, Ollama, Groq, Cerebras (any OpenAI-compatible provider)
- **Zero infrastructure** -- SQLite; no Docker, no Neo4j, no Redis
- **Consolidation** -- LLM-powered memory merging, contradiction detection, pattern discovery
- **Entity-aware recall** -- knows "Sarah" in the query should boost memories about Sarah
- **Bi-temporal model** -- tracks when facts were true, not just when they were stored (sketched below)
- **Spreading activation** -- graph-based context surfacing
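
To make the bi-temporal idea concrete, here is a minimal sketch (hypothetical, not Engram's actual schema): each fact carries a validity interval in addition to a record timestamp, so "User worked at Acme until March" stays queryable as history instead of being overwritten.

```ts
// Hypothetical sketch of a bi-temporal memory record -- not Engram's schema.
interface BiTemporalMemory {
  content: string;
  validFrom: Date;      // when the fact became true in the world
  validTo: Date | null; // when it stopped being true (null = still true)
  recordedAt: Date;     // when the system learned it
}

// "What was true at time t?" -- a valid-time query.
function trueAt(memories: BiTemporalMemory[], t: Date): BiTemporalMemory[] {
  return memories.filter(
    (m) => m.validFrom <= t && (m.validTo === null || t < m.validTo)
  );
}
```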

## Quick Start

### MCP Setup (Claude Code / Cursor)

```bash
npm install -g engram-sdk
engram init
```

### REST API

```bash
npm install -g engram-sdk
export GEMINI_API_KEY=your-key-here
npx engram-serve
```

The server starts on `http://127.0.0.1:3800`.

### Remember and Recall

```bash
# Store a memory
curl -X POST http://localhost:3800/v1/memories \
  -H "Content-Type: application/json" \
  -d '{"content": "User prefers TypeScript over JavaScript", "type": "semantic"}'

# Recall relevant memories
curl "http://localhost:3800/v1/memories/recall?context=language+preferences&limit=5"
```

### TypeScript SDK

```ts
import { Vault } from 'engram-sdk';

const vault = new Vault({ owner: 'my-agent' });

await vault.remember('User prefers TypeScript');
const memories = await vault.recall('language preferences');
await vault.consolidate();
```
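
A slightly fuller sketch of how this fits into an agent loop, using only the three calls shown above. The shape of the recall result is an assumption -- check the API reference below for the real one:

```ts
import { Vault } from 'engram-sdk';

const vault = new Vault({ owner: 'support-bot' });

// Stand-in for your actual model call.
const callLlm = async (prompt: string): Promise<string> =>
  `(model reply to: ${prompt.slice(0, 40)}...)`;

async function answer(userMessage: string): Promise<string> {
  // Read-time: pull relevant context once the query is known.
  // Assumption: recalled memories expose a `content` field.
  const memories = await vault.recall(userMessage);
  const context = memories.map((m: any) => `- ${m.content}`).join('\n');

  const reply = await callLlm(
    `Relevant memories:\n${context}\n\nUser: ${userMessage}`
  );

  // Store the turn so future recalls can use it.
  await vault.remember(`User said: ${userMessage}`);
  return reply;
}

// Periodically merge duplicates and surface patterns, e.g. at session end.
await vault.consolidate();
```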

## API Reference

Full REST API and MCP tool documentation: [engram.fyi/docs](https://engram.fyi/docs)


## Configuration

| Variable | Description | Default |
|---|---|---|
| `GEMINI_API_KEY` | Gemini API key for embeddings and consolidation | -- |
| `ENGRAM_LLM_BASE_URL` | Custom API base URL (Groq, Cerebras, Ollama, etc.) | provider default |
| `ENGRAM_LLM_MODEL` | LLM model name | provider default |
| `ENGRAM_DB_PATH` | SQLite database path | `~/.engram/default.db` |
| `PORT` | Server port | `3800` |
| `ENGRAM_AUTH_TOKEN` | Bearer token for API auth | -- |
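
As an example, pointing Engram at a local Ollama instance could look like the sketch below. `http://localhost:11434/v1` is Ollama's standard OpenAI-compatible endpoint; the model name is illustrative, and whether embeddings still need `GEMINI_API_KEY` alongside a custom provider is worth verifying in the docs.

```bash
# Example: serve Engram against a local Ollama instance.
# llama3.1 is an illustrative model name -- use whatever you have pulled.
export ENGRAM_LLM_BASE_URL=http://localhost:11434/v1
export ENGRAM_LLM_MODEL=llama3.1
export ENGRAM_DB_PATH=~/.engram/local.db
npx engram-serve
```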

## Benchmarks & Eval Scripts

This repo contains the evaluation scripts used to benchmark Engram:

- `eval-locomo.ts` -- LOCOMO benchmark (the main result)
- `eval-letta.ts` -- Letta Context-Bench evaluation
- `eval-codebase-v2.ts` -- enterprise codebase navigation benchmark
- `eval-enron.ts` -- email corpus evaluation

See `EVAL.md` for methodology and `paper/engram-paper.md` for the full research paper.


## Pricing

| Tier | Price | Memories | Agents |
|---|---|---|---|
| Free | $0 | 1,000 | 1 |
| Developer | $29/mo | 10,000 | 1 |
| Team | $99/mo | 50,000 | 5 |
| Business | $499/mo | Unlimited | Unlimited |
| Enterprise | Custom | Custom | Custom |

Hosted API coming soon. Self-hosting is free.


## License

Engram is proprietary software. You may install and use it freely for internal purposes. See LICENSE for full terms.

For commercial licensing, contact tstockham96@gmail.com.

