# agent-memory-mcp
MCP server for agent memory with provenance tracking, decay-weighted recall, and feedback loops.
Most agent memory systems treat memories as free-floating facts. This one tracks where each memory came from, how confident you should be in it, and whether it was actually useful — so your agent stops rediscovering the same things and starts getting smarter over time.
## Why this exists
Agents waste tokens. A lot of them. Across sessions they rediscover information they already learned, burning thousands of tokens per conversation. Flat files are auditable but unsearchable. Vector DBs have great recall but no staleness signals. Structured state is brittle.
This is a memory layer that fixes the actual problems:
- Provenance chains — every memory records its source, extraction method, and confidence. You know why you believe something, not just what you believe.
- Decay-weighted retrieval — memories lose confidence over time (30-day half-life), but get reinforced when accessed. Recently-used memories bubble up naturally.
- Feedback flywheel — mark recalled memories as useful or not. Over time, the memories that actually help you rise to the top. The ones that don't, fade.
## Install

```bash
npm install @kiraautonoma/agent-memory-mcp
```

Or run directly with npx:

```bash
npx @kiraautonoma/agent-memory-mcp
```
## MCP Configuration
Add to your Claude Desktop / MCP client config:
```json
{
  "mcpServers": {
    "memory": {
      "command": "npx",
      "args": ["-y", "@kiraautonoma/agent-memory-mcp"],
      "env": {
        "MEMORY_DB_PATH": "/path/to/your/memory.db"
      }
    }
  }
}
```
## Environment Variables
| Variable | Default | Description |
|---|---|---|
| `MEMORY_DB_PATH` | `~/.agent-memory/memory.db` | Path to SQLite database |
| `MEMORY_DEBUG` | (unset) | Set to `"1"` for info logs, `"verbose"` for debug |
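When `MEMORY_DB_PATH` is unset, the default resolves under the home directory. A minimal sketch of that resolution logic (the fallback location is taken from the table above; the function name is illustrative, not part of the package):

```typescript
import * as os from "node:os";
import * as path from "node:path";

// Resolve the database path: explicit env var wins, otherwise fall back
// to ~/.agent-memory/memory.db as documented above.
function resolveDbPath(env: Record<string, string | undefined> = process.env): string {
  return env.MEMORY_DB_PATH ?? path.join(os.homedir(), ".agent-memory", "memory.db");
}
```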
## Tools

### memory_store
Store a memory with provenance metadata.
```json
{
  "content": "npm install without --include=dev drops devDependencies on this VPS",
  "category": "lesson",
  "tags": ["npm", "build"],
  "confidence": 0.95,
  "source_type": "observation"
}
```
Categories: `lesson`, `strategy`, `operational`, `identity`, `preference`, `fact`
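For illustration, the payload above can be described and sanity-checked with a small TypeScript shape. The type and validator are not part of the published API, just a sketch of the documented fields:

```typescript
// Categories documented by the server.
const CATEGORIES = ["lesson", "strategy", "operational", "identity", "preference", "fact"];

// Illustrative shape of a memory_store payload (field names mirror the example above).
interface StoreInput {
  content: string;
  category: string;
  tags?: string[];
  confidence?: number;  // expected in [0, 1]
  source_type?: string; // e.g. "observation"
}

// Return a list of validation errors; empty means the payload looks well-formed.
function validateStoreInput(input: StoreInput): string[] {
  const errors: string[] = [];
  if (!input.content.trim()) errors.push("content must be non-empty");
  if (!CATEGORIES.includes(input.category)) errors.push(`unknown category: ${input.category}`);
  if (input.confidence !== undefined && (input.confidence < 0 || input.confidence > 1))
    errors.push("confidence must be in [0, 1]");
  return errors;
}
```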
### memory_recall
Retrieve memories by keyword query and/or category, ranked by decay-weighted relevance.
```json
{
  "query": "npm build errors",
  "category": "lesson",
  "limit": 5
}
```
Returns memories sorted by `confidence × source_trust × decay_factor × usefulness_factor`.

An empty query returns the top N memories by relevance score (useful for loading context at session start).
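The ranking product can be sketched in TypeScript. The 30-day half-life decay matches the description above; how `source_trust` and `usefulness_factor` are derived internally is not documented here, so they are plain inputs in this sketch:

```typescript
// Documented 30-day half-life for memory confidence.
const HALF_LIFE_DAYS = 30;

// Exponential decay: 1.0 at zero days, 0.5 after one half-life, 0.25 after two.
function decayFactor(daysSinceLastAccess: number): number {
  return Math.pow(0.5, daysSinceLastAccess / HALF_LIFE_DAYS);
}

// The documented ranking product. sourceTrust and usefulnessFactor are
// treated as opaque inputs; their derivation is internal to the server.
function relevanceScore(m: {
  confidence: number;
  sourceTrust: number;
  daysSinceLastAccess: number;
  usefulnessFactor: number;
}): number {
  return m.confidence * m.sourceTrust * decayFactor(m.daysSinceLastAccess) * m.usefulnessFactor;
}
```

Because accessing a memory resets its decay clock, recently used memories keep a decay factor near 1.0 and naturally outrank stale ones.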
### memory_feedback
Record whether a recalled memory was useful. This is the flywheel.
```json
{
  "memory_id": "mem_abc123_xyz",
  "useful": true,
  "context": "Reminded me to run npm install --include=dev"
}
```
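The exact math behind the flywheel is not documented, but one plausible sketch is a Laplace-smoothed ratio of useful votes: it starts neutral at 0.5 with no feedback and moves toward 1 or 0 as votes accumulate. This specific formula is an assumption, not the server's implementation:

```typescript
// Hypothetical usefulness factor: Laplace-smoothed fraction of "useful" votes.
// No feedback yields a neutral 0.5; consistent positive feedback approaches 1.
function usefulnessFactor(usefulCount: number, totalFeedback: number): number {
  return (usefulCount + 1) / (totalFeedback + 2);
}
```

Smoothing matters here: a memory with one lucky positive vote (2/3 ≈ 0.67) does not immediately dominate a memory with ten positive votes (11/12 ≈ 0.92).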
### memory_stats
Get counts and averages for the memory store.
```json
{
  "total": 40,
  "active": 38,
  "by_category": { "lesson": 14, "strategy": 7, "operational": 6 },
  "avg_confidence": 0.93,
  "feedback_count": 12
}
```
## Usage Pattern
The intended pattern for autonomous agents:
```text
Session start:
  → memory_recall("", { limit: 10 })    # load top memories into context

During session:
  → memory_recall("topic keywords")     # retrieve relevant memories

After session:
  → memory_store(...)                   # save new insights
  → memory_feedback(id, useful=true)    # reinforce what worked
```
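The loop above can be sketched against a generic client handle. `callTool` here is a stand-in for whatever invocation method your MCP SDK exposes; the tool names and arguments match the sections above:

```typescript
// Minimal stand-in for an MCP client; real SDKs expose an equivalent call.
interface McpClient {
  callTool(name: string, args: Record<string, unknown>): Promise<unknown>;
}

async function runSession(client: McpClient): Promise<void> {
  // Session start: load top memories into context.
  await client.callTool("memory_recall", { query: "", limit: 10 });

  // ... agent does its work, recalling by topic as needed ...

  // After session: persist a new insight, then reinforce what helped.
  await client.callTool("memory_store", {
    content: "npm install without --include=dev drops devDependencies on this VPS",
    category: "lesson",
  });
  await client.callTool("memory_feedback", { memory_id: "mem_abc123_xyz", useful: true });
}
```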
## Storage

SQLite database with WAL mode. Schema:

- `memories` table: content, category, tags, provenance fields, decay tracking, feedback counts
- `feedback_log` table: full feedback history for the flywheel
The database is portable — copy it to move your agent's memory to a new machine.
## What's different from Mem0 / Letta / Zep
| Feature | This | Mem0 | Letta | Zep |
|---|---|---|---|---|
| Provenance tracking | ✅ | ❌ | ❌ | ❌ |
| Decay-weighted retrieval | ✅ | ❌ | ❌ | Partial |
| Feedback flywheel | ✅ | ❌ | ❌ | ❌ |
| Local SQLite (no API key) | ✅ | ❌ | ❌ | ❌ |
| MCP native | ✅ | ❌ | ❌ | ❌ |
## License
MIT