Agent Memory Bridge

MCP-native, local-first memory for coding agents that turns real sessions into reusable decisions, gotchas, and domain knowledge.

Two-channel MCP memory for coding agents.

Built for Codex-first workflows.

Most memory tools put everything into one bucket. Agent Memory Bridge keeps two different kinds of state separate:

  • memory for durable knowledge worth reusing later
  • signal for short-lived coordination events such as handoffs, review requests, and workflow state

The bridge then promotes raw session output through a small ladder:

session -> summary -> learn -> gotcha -> domain-note

The Problem

Coding agents lose too much between sessions. Teams either keep rediscovering the same fixes, or they end up storing raw transcripts that are expensive to search and noisy to reuse.

Agent Memory Bridge takes a narrower path:

  • MCP-native from day one
  • local-first runtime
  • SQLite + FTS5 instead of heavier infrastructure
  • session capture that turns real coding work into reusable memory

What Makes It Different

  1. It separates durable knowledge from coordination state.
  2. It stays small and inspectable instead of hiding behind a larger platform.
  3. It gives signals a minimal lifecycle: pending, claimed, acked, and expired.
  4. It promotes session output into compact machine-readable memory instead of treating summaries as the final artifact.
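The signal lifecycle in point 3 can be sketched as a small state machine. This is an illustrative model, not the bridge's actual implementation; in particular, the claimed -> pending transition (a lease expiring and releasing the signal back to the pool) is an assumption:

```python
from enum import Enum

class SignalState(Enum):
    PENDING = "pending"
    CLAIMED = "claimed"
    ACKED = "acked"
    EXPIRED = "expired"

# Allowed transitions. ACKED and EXPIRED are terminal;
# CLAIMED -> PENDING models a lease timing out (an assumption here,
# not confirmed behavior of the bridge).
TRANSITIONS = {
    SignalState.PENDING: {SignalState.CLAIMED, SignalState.EXPIRED},
    SignalState.CLAIMED: {SignalState.ACKED, SignalState.PENDING, SignalState.EXPIRED},
    SignalState.ACKED: set(),
    SignalState.EXPIRED: set(),
}

def can_transition(src: SignalState, dst: SignalState) -> bool:
    """True if the lifecycle allows moving from src to dst."""
    return dst in TRANSITIONS[src]
```

The point of keeping the lifecycle this small is that a consumer only ever needs to answer one question: has this signal been handled yet?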

If you want a broader memory platform with SDKs, dashboards, connectors, or hosted-first deployment, projects like OpenMemory or Mem0 are closer to that shape.

For a longer positioning note, see docs/COMPARISON.md.

5-Minute Quickstart

Once the MCP server is registered in Codex, the shortest useful path is:

  1. write one durable memory
  2. write one coordination signal
  3. inspect the namespace
  4. claim and acknowledge the signal

store(
  namespace="project:demo",
  kind="memory",
  content="claim: Use WAL mode for concurrent readers."
)

store(
  namespace="project:demo",
  kind="signal",
  content="release note review ready",
  tags=["handoff:review"],
  ttl_seconds=600
)

stats(namespace="project:demo")
browse(namespace="project:demo", limit=10)

claim_signal(
  namespace="project:demo",
  consumer="reviewer-a",
  lease_seconds=300,
  tags_any=["handoff:review"]
)

ack_signal(id="<signal_id>", consumer="reviewer-a")

That shows the core split:

  • memory keeps what the agent learned
  • signal carries what another workflow needs to act on right now
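To close the loop, you would later pull the stored memory back with recall. Its exact parameters are not shown in this quickstart, so treat this as a hedged sketch (assuming a free-text query argument backed by FTS5):

```
recall(
  namespace="project:demo",
  query="WAL mode"
)
```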

Setup

Requirements:

  • Python 3.11+
  • Codex with MCP enabled
  • SQLite with FTS5 support
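The FTS5 requirement is worth checking up front, since some distro builds of SQLite omit it. A quick probe using only the standard library (the function name is just for illustration):

```python
import sqlite3

def has_fts5() -> bool:
    """Return True if this Python's SQLite library was compiled with FTS5."""
    con = sqlite3.connect(":memory:")
    try:
        # Creating an FTS5 virtual table fails cleanly when the module is absent.
        con.execute("CREATE VIRTUAL TABLE fts_probe USING fts5(content)")
        return True
    except sqlite3.OperationalError:
        return False
    finally:
        con.close()

if __name__ == "__main__":
    print("FTS5 available:", has_fts5())
```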

1. Install

PowerShell:

python -m venv .venv
.\.venv\Scripts\Activate.ps1
pip install -e .[dev]

macOS / Linux:

python -m venv .venv
source .venv/bin/activate
pip install -e .[dev]

2. Create bridge config

Copy config.example.toml to:

$CODEX_HOME/mem-bridge/config.toml

The important defaults are:

  • [profile] controls runtime defaults such as the namespace, actor names, and title prefixes
  • [bridge] controls the live local database
  • [watcher], [reflex], and [consolidation] control the background pipeline

Recommended setup:

  • keep the live SQLite database local on each machine
  • keep shared profile or source vaults on NAS or shared storage if needed
  • move to a hosted backend later if you want true multi-machine live writes

Important: shared SQLite is fine as a transition or backup path, but it is not a strong multi-writer live backend.
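Putting the sections above together, a minimal config.toml might look like this. The section names come from the list above, but the individual keys are illustrative assumptions, not the project's actual schema — copy config.example.toml for the real defaults:

```toml
# Illustrative sketch only; key names are assumptions.
[profile]
namespace = "project:demo"

[bridge]
# Keep the live SQLite database local to this machine.
db_path = "~/.codex/mem-bridge/bridge.db"

[watcher]
enabled = true

[reflex]
enabled = true

[consolidation]
enabled = true
```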

3. Register the MCP server in Codex

Add this to $CODEX_HOME/config.toml:

[mcp_servers.agentMemoryBridge]
command = "D:\\path\\to\\agent-memory-bridge\\.venv\\Scripts\\python.exe"
args = ["-m", "agent_mem_bridge"]
cwd = "D:\\path\\to\\agent-memory-bridge"

[mcp_servers.agentMemoryBridge.env]
CODEX_HOME = "%USERPROFILE%\\.codex"
AGENT_MEMORY_BRIDGE_HOME = "%USERPROFILE%\\.codex\\mem-bridge"
AGENT_MEMORY_BRIDGE_CONFIG = "%USERPROFILE%\\.codex\\mem-bridge\\config.toml"

4. Start the service

Start the MCP server:

.\.venv\Scripts\python.exe -m agent_mem_bridge

Run the background bridge service:

.\.venv\Scripts\python.exe .\scripts\run_mem_bridge_service.py

Run one cycle only:

$env:AGENT_MEMORY_BRIDGE_RUN_ONCE = "1"
.\.venv\Scripts\python.exe .\scripts\run_mem_bridge_service.py

Optional startup install:

.\scripts\install_startup_watcher.ps1

Optional local Docker image:

docker build -t agent-memory-bridge:local .
docker --context desktop-linux run --rm -i agent-memory-bridge:local

MCP Tools

The public MCP surface stays small on purpose:

  • store and recall
  • browse and stats
  • forget and promote
  • claim_signal and ack_signal
  • export

The complexity stays behind the bridge:

  • watcher capture from Codex rollout files
  • checkpoint and closeout sync
  • reflex promotion
  • domain consolidation

Namespaces

Start simple:

  • global for a default shared bucket
  • project:<workspace> for project-local memory
  • domain:<name> for reusable domain knowledge

The framework is profile-agnostic. A specific operator profile can sit on top, but the bridge itself does not need to look or sound like that profile.

Trust and Health Checks

The bridge is meant to be inspectable, not magical:

  • browse, stats, forget, and export let you inspect and correct bridge state without opening SQLite
  • signal status is visible and queryable through pending, claimed, acked, and expired
  • watcher health checks verify that Codex rollout files still parse into usable summaries
  • the full test suite currently passes (57 tests)

Useful commands:

.\.venv\Scripts\python.exe -m pytest
.\.venv\Scripts\python.exe .\scripts\verify_stdio.py
.\.venv\Scripts\python.exe .\scripts\run_healthcheck.py --report-path .\examples\healthcheck-report.json
.\.venv\Scripts\python.exe .\scripts\run_watcher_healthcheck.py --report-path .\examples\watcher-health-report.json

License

MIT. See LICENSE.
