# Agent Memory Bridge
Two-channel MCP memory for coding agents: durable knowledge + coordination signals.
MCP-native, currently optimized for Codex-first workflows.
v0.6.5 adds:

- claim-selection fairness inside the oldest eligible signal window
- stale same-consumer claims no longer outrank other pending work by accident
- deterministic proof now checks the fairness contract along with claim / extend / ack / reclaim
- slice-aware classifier calibration and benchmarked retrieval stay in place

Most memory tools put everything into one bucket. Agent Memory Bridge keeps two different kinds of state separate:
- **memory** for durable knowledge worth reusing later
- **signal** for short-lived coordination events such as handoffs, review requests, and workflow state
The bridge then promotes raw session output through a small ladder:
```
session -> summary -> learn -> gotcha -> domain-note
```
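As a sketch, the ladder is just an ordering over memory kinds; the `promote_kind` helper below is hypothetical (not the bridge's actual `promote` tool) and only illustrates moving a record up one rung:

```python
# Hypothetical sketch of the promotion ladder; the real promote tool
# may use different names and promotion rules.
LADDER = ["session", "summary", "learn", "gotcha", "domain-note"]

def promote_kind(kind: str) -> str:
    """Return the rung above `kind`, or `kind` itself at the top."""
    i = LADDER.index(kind)
    return LADDER[min(i + 1, len(LADDER) - 1)]

print(promote_kind("session"))      # -> summary
print(promote_kind("domain-note"))  # already at the top rung
```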
## The Problem
Coding agents lose too much between sessions. Teams either keep rediscovering the same fixes, or they end up storing raw transcripts that are expensive to search and noisy to reuse.
Agent Memory Bridge takes a narrower path:
- MCP-native from day one
- local-first runtime
- SQLite + FTS5 instead of heavier infrastructure
- session capture that turns real coding work into reusable memory
## What Makes It Different
- It separates durable knowledge from coordination state.
- It stays small and inspectable instead of hiding behind a larger platform.
- It gives signals a clean lifecycle: `claim -> extend -> ack / expire / reclaim`, with fairer generic claim selection when several signals are pending.
- It promotes session output into compact machine-readable memory instead of treating summaries as the final artifact.
- It can add classifier-assisted enrichment without making the bridge depend on that path to stay useful.
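The claim / extend / ack / expire / reclaim lifecycle can be pictured as a small state machine. This is an illustrative model with invented field and method names, not the bridge's actual schema:

```python
# Illustrative sketch of the signal lifecycle; field names, states, and
# rules are assumptions, not the bridge's real implementation.
class Signal:
    def __init__(self, ttl_seconds: float, now: float):
        self.expires = now + ttl_seconds
        self.consumer = None
        self.lease_until = 0.0
        self.acked = False

    def status(self, now: float) -> str:
        if self.acked:
            return "acked"
        if now >= self.expires:
            return "expired"
        if self.consumer and now < self.lease_until:
            return "claimed"
        return "pending"  # unclaimed, or the lease went stale (reclaimable)

    def claim(self, consumer: str, lease_seconds: float, now: float) -> bool:
        if self.status(now) != "pending":
            return False
        self.consumer, self.lease_until = consumer, now + lease_seconds
        return True

    def extend(self, consumer: str, lease_seconds: float, now: float) -> bool:
        # Only the active claimant may extend; a stale lease must be reclaimed.
        if self.consumer != consumer or now >= self.lease_until:
            return False
        self.lease_until = now + lease_seconds
        return True

    def ack(self, consumer: str, now: float) -> bool:
        if self.consumer != consumer or self.status(now) != "claimed":
            return False
        self.acked = True
        return True
```

Note how a stale lease simply drops the signal back to `pending`, which is what lets another worker reclaim it rather than extend it.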
If you want a broader memory platform with SDKs, dashboards, connectors, or hosted-first deployment, projects like OpenMemory or Mem0 are closer to that shape.
For a longer positioning note, see `docs/COMPARISON.md`.
## 5-Minute Quickstart
Once the MCP server is registered in Codex, the shortest useful path is:
1. write one durable memory
2. write one coordination signal
3. inspect the namespace
4. claim, extend if needed, and acknowledge the signal
```
store(
  namespace="project:demo",
  kind="memory",
  content="claim: Use WAL mode for concurrent readers."
)

store(
  namespace="project:demo",
  kind="signal",
  content="release note review ready",
  tags=["handoff:review"],
  ttl_seconds=600
)

stats(namespace="project:demo")
browse(namespace="project:demo", limit=10)

claim_signal(
  namespace="project:demo",
  consumer="reviewer-a",
  lease_seconds=300,
  tags_any=["handoff:review"]
)

extend_signal_lease(
  id="<signal_id>",
  consumer="reviewer-a",
  lease_seconds=300
)

ack_signal(id="<signal_id>", consumer="reviewer-a")
```

That shows the core split:

- **memory** keeps what the agent learned
- **signal** carries what another workflow needs to act on right now
Lease renewal is not reclaim. If a lease is still active, the current claimant can extend it. If it has gone stale, another worker should reclaim it instead.
When `signal_id` is omitted, `claim_signal(...)` now picks from the oldest eligible window with a small fairness bias so one polling consumer does not keep winning by accident.
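That fairness bias could look something like the following sketch; `pick_signal`, the `stale_consumer` field, and the window size are all illustrative assumptions, not the bridge's real selection code:

```python
# Hypothetical sketch of fairness-biased claim selection; the bridge's
# actual logic, window size, and tie-breaking may differ.
def pick_signal(eligible, consumer, window=5):
    """eligible: pending signals oldest-first; each dict carries 'id' and
    'stale_consumer' (who held a now-stale lease, or None)."""
    window_items = eligible[:window]
    # Prefer signals never claimed, or stale-claimed by someone else, so a
    # polling consumer does not keep re-winning its own stale claims.
    for item in window_items:
        if item.get("stale_consumer") != consumer:
            return item
    # Fall back to the oldest signal even if it is our own stale claim.
    return window_items[0] if window_items else None
```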
## Demo

There is now a short terminal demo in the repo.
## Setup
Requirements:
- Python 3.11+
- Codex with MCP enabled
- SQLite with FTS5 support
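To confirm the last requirement, here is a generic snippet (independent of the bridge) that checks whether the local SQLite build ships with FTS5:

```python
import sqlite3

# Generic FTS5 availability probe: creating an fts5 virtual table fails
# with OperationalError when the extension is missing.
def has_fts5() -> bool:
    con = sqlite3.connect(":memory:")
    try:
        con.execute("CREATE VIRTUAL TABLE t USING fts5(content)")
        return True
    except sqlite3.OperationalError:
        return False
    finally:
        con.close()

print("FTS5 available:", has_fts5())
```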
### 1. Install

PowerShell:

```powershell
python -m venv .venv
.\.venv\Scripts\Activate.ps1
pip install -e .[dev]
```

macOS / Linux:

```bash
python -m venv .venv
source .venv/bin/activate
pip install -e .[dev]
```

### 2. Create bridge config
Copy `config.example.toml` to:

```
$CODEX_HOME/mem-bridge/config.toml
```

The important defaults are:

- `[profile]` controls the neutral runtime shape for namespace, actors, title prefixes, and an optional profile source root
- `[bridge]` controls the live local database
- `[watcher]`, `[reflex]`, and `[service]` control the background pipeline
- `[classifier]` controls the optional enrichment gateway used by reflex
The example config uses `~/.codex/mem-bridge/profile-source` as a neutral local sample path so a fresh install does not inherit a personal profile name.
The classifier is optional:
- `mode = "off"` keeps the current deterministic rule path
- `mode = "shadow"` runs classification and records divergence without changing stored tags
- `mode = "assist"` lets classifier tags enrich reflex output while keyword/rule logic remains the fallback
- `minimum_confidence = 0.6` keeps assist-mode enrichment from merging low-confidence classifier tags
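Put together, a classifier section might look like this sketch; `config.example.toml` in the repo remains the authoritative reference for keys and defaults:

```toml
# Sketch only; see config.example.toml for the authoritative shape.
[classifier]
mode = "shadow"           # "off", "shadow", or "assist"
minimum_confidence = 0.6  # assist mode drops tags scored below this
```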
Recommended setup:
- keep the live SQLite database local on each machine
- keep shared profile or source vaults on NAS or shared storage if needed
- move to a hosted backend later if you want true multi-machine live writes
**Important:** shared SQLite is fine as a transition or backup path, but it is not a strong multi-writer live backend.
### 3. Register the MCP server in Codex
Add this to `$CODEX_HOME/config.toml`:

```toml
[mcp_servers.agentMemoryBridge]
command = "D:\\path\\to\\agent-memory-bridge\\.venv\\Scripts\\python.exe"
args = ["-m", "agent_mem_bridge"]
cwd = "D:\\path\\to\\agent-memory-bridge"

[mcp_servers.agentMemoryBridge.env]
CODEX_HOME = "%USERPROFILE%\\.codex"
AGENT_MEMORY_BRIDGE_HOME = "%USERPROFILE%\\.codex\\mem-bridge"
AGENT_MEMORY_BRIDGE_CONFIG = "%USERPROFILE%\\.codex\\mem-bridge\\config.toml"
```

### 4. Start the service
Start the MCP server:

```powershell
.\.venv\Scripts\python.exe -m agent_mem_bridge
```

Run the background bridge service:

```powershell
.\.venv\Scripts\python.exe .\scripts\run_mem_bridge_service.py
```

Run one cycle only:

```powershell
$env:AGENT_MEMORY_BRIDGE_RUN_ONCE = "1"
.\.venv\Scripts\python.exe .\scripts\run_mem_bridge_service.py
```

Optional startup install:

```powershell
.\scripts\install_startup_watcher.ps1
```

Optional local Docker image:

```powershell
docker build -t agent-memory-bridge:local .
docker --context desktop-linux run --rm -i agent-memory-bridge:local
```

## MCP Tools
The public MCP surface stays small on purpose:
- `store` and `recall`
- `browse` and `stats`
- `forget` and `promote`
- `claim_signal`, `extend_signal_lease`, and `ack_signal`
- `export`
The complexity stays behind the bridge:
- watcher capture from Codex rollout files
- checkpoint and closeout sync
- reflex promotion
- domain consolidation
## Namespaces
Start simple:
- `global` for a default shared bucket
- `project:<workspace>` for project-local memory
- `domain:<name>` for reusable domain knowledge
The framework is profile-agnostic. A specific operator profile can sit on top, but the bridge itself does not need to look or sound like that profile.
## Trust and Health Checks
The bridge is meant to be inspectable, not magical:
- `browse`, `stats`, `forget`, and `export` let you inspect and correct bridge state without opening SQLite
- signal status is visible and queryable through `pending`, `claimed`, `acked`, and `expired`
- watcher health checks verify that Codex rollout files still parse into usable summaries
- classifier shadow/assist behavior is covered by fixture-based regression tests
- the current test suite passes with `80 passed`
Useful commands:
```powershell
.\.venv\Scripts\python.exe -m pytest
.\.venv\Scripts\python.exe .\scripts\verify_stdio.py
.\.venv\Scripts\python.exe .\scripts\run_healthcheck.py --report-path .\examples\healthcheck-report.json
.\.venv\Scripts\python.exe .\scripts\run_watcher_healthcheck.py --report-path .\examples\watcher-health-report.json
```

## Proof and Benchmark
Retrieval quality is benchmarked instead of guessed: the bridge ships a small canonical proof and benchmark harness.
- deterministic proof checks signal correctness, duplicate suppression, and recall timing
- signal correctness now includes a fairness check for stale same-consumer reclaim bias
- retrieval benchmark tracks `precision@1`, `precision@3`, and `expected_top1_accuracy`
- the retrieval report compares bridge recall against a simple file-scan baseline
- learning-quality upgrades now ship with classifier-vs-fallback regression coverage
- classifier calibration now runs on a larger reviewed sample set and reports exact matches, average score, missing tags, extra tags, and low-confidence filtering
- the canonical retrieval fixture now includes more overlap-heavy memory and signal cases
On the current canonical fixture:
- `memory_expected_top1_accuracy = 1.0`
- `file_scan_expected_top1_accuracy = 0.636`
- `duplicate_suppression_rate = 1.0`
On the current reviewed calibration set (using the bundled deterministic fixture gateway):

- `reviewed_sample_count = 16`
- `classifier_exact_match_rate = 0.875`
- `fallback_exact_match_rate = 0.062`
- `classifier_better_count = 13`
- `fallback_better_count = 2`
- `classifier_filtered_low_confidence_count = 2`
- `retrieval` is currently the loosest slice with `classifier_exact_match_rate = 0.6`
For a deterministic local replay of the published calibration snapshot:
```powershell
.\.venv\Scripts\python.exe .\scripts\run_classifier_calibration.py --fixture-gateway
```

This is not a leaderboard. It is a regression harness that keeps retrieval quality and coordination semantics honest as the bridge evolves.
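For intuition about the reported metrics, here is a generic sketch of `precision@k` and `expected_top1_accuracy`; the harness's actual definitions may differ:

```python
# Generic metric sketch; the bridge's benchmark harness may compute
# these differently (e.g. handling queries with fewer than k results).
def precision_at_k(results: list[list[str]], relevant: list[set[str]], k: int) -> float:
    """Fraction of the top-k results that are relevant, averaged over queries."""
    scores = []
    for hits, rel in zip(results, relevant):
        scores.append(sum(1 for h in hits[:k] if h in rel) / k)
    return sum(scores) / len(scores)

def expected_top1_accuracy(results: list[list[str]], expected: list[str]) -> float:
    """How often the single expected item is ranked first."""
    wins = sum(1 for hits, e in zip(results, expected) if hits and hits[0] == e)
    return wins / len(results)
```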
## More Docs
## License
MIT. See LICENSE.