mcp-memory-gateway
The MCP Memory Gateway is a context engineering server that captures agent feedback, enforces pre-action gates to block known mistakes, and injects relevant past context into AI coding agent sessions for improved reliability and continuity.
Feedback & Memory
- Capture feedback (`capture_feedback`, `capture_memory_feedback`): Record up/down signals with context, reasoning, and rubric scores; vague feedback is rejected with a clarification prompt
- Recall past context (`recall`, `commerce_recall`): Vector-search relevant past feedback, memories, and prevention rules for the current task
- View summaries & analytics (`feedback_summary`, `feedback_stats`, `dashboard`): Approval rate trends, gate enforcement stats, and prevention impact overviews
- Generate prevention rules (`prevention_rules`, `get_reliability_rules`): Auto-generate blocking rules from repeated failure patterns
Pre-Action Gates & Safety
- Satisfy gates (`satisfy_gate`): Record evidence that a gate condition is met (e.g., PR threads checked) with a 5-minute TTL
- Gate statistics (`gate_stats`): See blocked/warned counts and top triggered gates
Session Continuity
- Session handoff (`session_handoff`): Write a primer capturing git state, last task, next step, and blockers
- Session primer (`session_primer`): Restore context at session start from the most recent handoff
Workflow Planning & Diagnosis
- List & plan intents (`list_intents`, `plan_intent`): View available workflows and generate checkpointed execution plans with policy gates
- Diagnose failures (`diagnose_failure`): Root-cause analysis for failed or suspect workflow steps
- Bootstrap agents (`bootstrap_internal_agent`): Normalize GitHub/Slack/Linear triggers into startup context with recall packs and worktree sandboxes
- Delegation handoffs (`start_handoff`, `complete_handoff`): Manage sequential agent delegation with verification outcomes
Context Engineering
- Context packs (`construct_context_pack`, `evaluate_context_pack`): Build and evaluate bounded context packs for large projects, closing the retrieval learning loop
- Context provenance (`context_provenance`): Audit trail of recent context and retrieval decisions
- Estimate uncertainty (`estimate_uncertainty`): Bayesian uncertainty estimates for risky tags before acting
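The exact model behind `estimate_uncertainty` isn't documented in this README. As a hedged sketch, a Beta-Bernoulli posterior is the standard Bayesian way to express how much evidence sits behind a risk estimate; the function name and the uniform prior below are illustrative assumptions, not ThumbGate's implementation:

```python
from math import sqrt

def beta_uncertainty(successes: int, failures: int) -> dict:
    """Beta-Bernoulli posterior over a tag's success rate.

    Starts from a uniform Beta(1, 1) prior; each observed outcome
    updates the posterior. The posterior standard deviation serves
    as the uncertainty estimate: a wide posterior means "not enough
    evidence yet to act confidently on this tag".
    """
    a = 1 + successes   # posterior alpha
    b = 1 + failures    # posterior beta
    mean = a / (a + b)
    var = (a * b) / ((a + b) ** 2 * (a + b + 1))
    return {"mean": mean, "std": sqrt(var)}

# Two tags with the same observed success rate but different sample sizes:
low_evidence = beta_uncertainty(successes=1, failures=1)
high_evidence = beta_uncertainty(successes=50, failures=50)
```

Both tags have a posterior mean of 0.5, but the low-evidence tag has a much wider posterior, which is exactly the signal an agent needs before attempting a risky action.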
Business Metrics
- Business metrics (`get_business_metrics`): Retrieve Revenue, Conversion, and Customer metrics from the Semantic Layer
- Semantic entity descriptions (`describe_semantic_entity`, `describe_reliability_entity`): Canonical definitions and state of Customer, Revenue, or Funnel entities
Export & Fine-Tuning
- Export DPO pairs (`export_dpo_pairs`): Build preference pairs from promoted memories for model fine-tuning
- Export Databricks bundle (`export_databricks_bundle`): Export RLHF logs and proof artifacts as a Databricks-ready analytics bundle
- Generate skills (`generate_skill`): Auto-generate Claude skill files (SKILL.md) from clustered failure patterns
ThumbGate
Thumbs up or thumbs down — and your AI coding agent never makes the same mistake twice.
Workflow Hardening Sprint · Open ThumbGate GPT · ChatGPT Actions setup · Install Claude Desktop Extension · Claude Plugin Guide · Install Codex Plugin · ThumbGate Bench · Perplexity Command Center · Live Dashboard · Pro Page
Popular buyer questions: Stop repeated AI agent mistakes · Cursor guardrails · Codex CLI guardrails · Gemini CLI memory + enforcement
Running Claude Desktop? Download Claude bundle · Install + submission guide · Review packet zip
Running Codex? Download the standalone Codex plugin bundle · Codex install guide
ThumbGate GPT: start here
Use ThumbGate in ChatGPT now: Open the live ThumbGate GPT, paste the action your AI agent wants to run, and ask whether to allow, block, or checkpoint it.
Try this first prompt:
Check this agent action before it runs: `git push --force --tags`

No, users do not have to keep chatting inside the ThumbGate GPT to use ThumbGate. The GPT is the fast demo, guided setup path, and thumbs-up/down memory surface for ChatGPT users. The hard enforcement layer still runs where the work happens: your local coding agent, CI workflow, or MCP-compatible runtime after `npx thumbgate init`.
Developers can import the prepared GPT Actions OpenAPI spec with the ChatGPT Actions setup guide. Regular ChatGPT users should just open the GPT and type what happened.
Official directory pending review? Claude Code users can install today with /plugin marketplace add IgorGanapolsky/ThumbGate then /plugin install thumbgate@thumbgate-marketplace.
Using Perplexity Max? ThumbGate ships a Perplexity Command Center that runs AI-search visibility checks, Search API lead discovery, Agent API strategy briefs, and official Perplexity MCP config generation. It is scheduled in GitHub Actions and uploads artifacts without committing runtime .thumbgate state.
Need proof that gates improve safety without killing capability? Run ThumbGate Bench:
npm run thumbgate:bench

It scores deterministic GitHub, npm, database, Railway, shell, and filesystem scenarios with `unsafeActionRate`, `capabilityRate`, `positivePromotionRate`, and `replayStability` so teams can inspect the Reliability Gateway before a Workflow Hardening Sprint.
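The bench's exact scoring formulas aren't published in this README. The sketch below shows one plausible reading in which each metric is a simple rate over scenario outcomes; the field names and definitions are assumptions, not the official implementation:

```python
def bench_rates(outcomes: list[dict]) -> dict:
    """Score a list of scenario outcomes with the four bench rates.

    Assumed (not official) definitions:
      unsafeActionRate       share of unsafe scenarios where the action still ran
      capabilityRate         share of safe scenarios the agent completed
      positivePromotionRate  share of thumbs-up outcomes promoted to lessons
      replayStability        share of replayed scenarios with an unchanged verdict
    """
    def rate(num, den):
        return num / den if den else 0.0

    unsafe = [o for o in outcomes if o["unsafe"]]
    safe = [o for o in outcomes if not o["unsafe"]]
    ups = [o for o in outcomes if o.get("thumb") == "up"]
    replays = [o for o in outcomes if "replay_verdict" in o]
    return {
        "unsafeActionRate": rate(sum(o["executed"] for o in unsafe), len(unsafe)),
        "capabilityRate": rate(sum(o["completed"] for o in safe), len(safe)),
        "positivePromotionRate": rate(sum(o.get("promoted", False) for o in ups), len(ups)),
        "replayStability": rate(sum(o["replay_verdict"] == o["verdict"] for o in replays), len(replays)),
    }

# A blocked unsafe scenario and a completed safe one, both replay-stable:
scores = bench_rates([
    {"unsafe": True, "executed": False, "verdict": "block", "replay_verdict": "block"},
    {"unsafe": False, "executed": True, "completed": True, "verdict": "allow", "replay_verdict": "allow"},
])
```

The ideal profile is a low `unsafeActionRate` with a high `capabilityRate`: gates that block dangerous actions without also blocking useful ones.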
What problem does this solve?
AI agents repeat mistakes. You fix the same problem in session after session — force-push to main, broken migrations, unauthorized file edits — because the agent has no memory of your feedback.
┌─────────────────────────────────────────────────────────────┐
│ THE PROBLEM │
│ │
│ Session 1: Agent breaks something. You fix it. │
│ Session 2: Agent breaks it again. You fix it again. │
│ Session 3: Same thing. Again. │
│ │
│ THE SOLUTION │
│ │
│ Session 1: Agent breaks something. You 👎 it. │
│ Session 2: ⛔ Gate blocks the mistake before it happens. │
│ Session 3+: Never see it again. │
└─────────────────────────────────────────────────────────────┘

ThumbGate is the control plane for AI coding agents — turning your feedback into enforced rules, not suggestions.
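ThumbGate's real gate engine is configuration-driven and not shown in this README, but the core idea can be sketched as a predicate evaluated against a proposed action before it executes. The rules and field names below are illustrative assumptions, not ThumbGate's actual schema:

```python
import re

# Hypothetical rules for illustration; real gates live in ThumbGate's config.
GATES = [
    {"id": "force-push", "pattern": r"git push\b.*(--force|-f)\b", "action": "block"},
    {"id": "env-file-edit", "pattern": r"\.env\b", "action": "block"},
]

def check_action(command: str) -> dict:
    """Return the first gate verdict matching the proposed command."""
    for gate in GATES:
        if re.search(gate["pattern"], command):
            return {"verdict": gate["action"], "gate": gate["id"]}
    return {"verdict": "allow", "gate": None}

blocked = check_action("git push --force origin main")   # force-push gate fires
allowed = check_action("git push origin feature")        # no gate matches
```

The crucial property is that the check runs *before* execution: a matching rule stops the action outright instead of appending advice the agent may ignore.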
How It Works in 3 Steps
STEP 1                 STEP 2                  STEP 3
────────               ────────                ────────
You react              ThumbGate learns        The gate holds

👎 on a bad      ──►   Feedback becomes   ──►  Next time the
agent action           a saved lesson          agent tries the
                       and a block rule        same thing:

👍 on a good     ──►   Good pattern gets       ⛔ BLOCKED
agent action           reinforced              (or ✅ allowed)

That's it. No manual rule-writing. No config files to maintain. Your reactions teach the agent what your team actually wants.
Before / After
WITHOUT THUMBGATE │ WITH THUMBGATE
───────────────────────────────┼───────────────────────────────
Session 1: │ Session 1:
Agent force-pushes to main. │ Agent force-pushes to main.
You correct it manually. │ You 👎 it.
│
Session 2: │ Session 2:
Agent force-pushes again. │ ⛔ Gate blocks force-push.
It learned nothing. │ Agent uses safe push instead.
│
Session 3: │ Session 3+:
Same mistake. Again. │ Permanently fixed.
  And again.                    │

The Feedback Loop
┌──────────┐ ┌──────────┐ ┌──────────┐ ┌──────────┐ ┌──────────┐
│ Capture │───►│ Learn │───►│ Remember │───►│ Rule │───►│ Gate │
│ │ │ │ │ │ │ │ │ │
│ 👍 / 👎 │ │ Feedback │ │ Stored │ │ Auto- │ │ Blocks │
│ │ │ becomes │ │ lessons │ │ generated│ │ bad │
│ │ │ a lesson │ │ & search │ │ from │ │ actions │
│ │ │ │ │ │ │ feedback │ │ live │
└──────────┘    └──────────┘    └──────────┘    └──────────┘    └──────────┘

Get Started
Best first paid motion for teams: the Workflow Hardening Sprint — qualify one repeated failure before committing to a full rollout. Start intake →
Best first technical motion: install the CLI first and let `npx thumbgate init` wire hooks for the agent you already use.
Paid path for individual operators: ThumbGate Pro is the self-serve side lane for a personal dashboard and export-ready evidence.
Quick Start
npx thumbgate init # detects your agent and wires everything up
npx thumbgate doctor # health check
npx thumbgate lessons # see what's been learned
npx thumbgate explore # terminal explorer for lessons, gates, and stats
npx thumbgate dashboard   # open local dashboard

Or wire MCP directly: `claude mcp add thumbgate -- npx --yes --package thumbgate thumbgate serve`
Works with Claude Code, Cursor, Codex, Gemini CLI, Amp, OpenCode, and any MCP-compatible agent.
Install for Your Agent
Claude Code
npx thumbgate init --agent claude-code

Wires hooks automatically. Works immediately.
Cursor
npx thumbgate init --agent cursor

Installs as a Cursor extension with 4 skills: capture feedback, manage rules, search lessons, recall context.
Codex
npx thumbgate init --agent codex

Bridges to Codex CLI with 6 skills including adversarial review and second-pass analysis.
Gemini CLI
npx thumbgate init --agent gemini

Amp

npx thumbgate init --agent amp

Any MCP-Compatible Agent
npx thumbgate serve

Starts the MCP server on stdio. Connect from any MCP-compatible client.
Claude Desktop
Add to your claude_desktop_config.json:
{
"mcpServers": {
"thumbgate": {
"command": "npx",
"args": ["--yes", "--package", "thumbgate", "thumbgate", "serve"]
}
}
}

Or download the packaged extension bundle and install directly.
Use Cases
- Stop force-push to main — A gate blocks `git push --force` on protected branches before it runs
- Prevent repeated migration failures — Each mistake becomes a searchable lesson that fires before the next attempt
- Block unauthorized file edits — Control which files agents can touch with path-based rules
- Memory across sessions — The agent remembers your feedback from yesterday without any manual rule-writing
- Shared team safety — One developer's thumbs-down protects the whole team from the same mistake
- Auto-improving without feedback — Self-improvement mode evaluates outcomes and generates rules automatically
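Path- and command-based rules like these can also be defined by hand as custom gates in config/gates/custom.json. The exact schema isn't documented in this README, so the field names below are assumptions for illustration; check the gates config docs before relying on them:

```json
{
  "gates": [
    {
      "id": "block-prod-migrations",
      "description": "Block destructive migrations against production",
      "match": "drop table|migrate:reset",
      "paths": ["migrations/**"],
      "action": "block"
    }
  ]
}
```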
Feedback Sessions
Give the agent more context when a thumbs-down isn't enough:
👎 thumbs down
└─► open_feedback_session
└─► "you lied about deployment" (append_feedback_context)
└─► "tests were actually failing" (append_feedback_context)
└─► finalize_feedback_session
              └─► lesson inferred from full conversation

ThumbGate uses up to 8 prior conversation entries to turn vague, history-aware negative signals into specific, actionable lessons. A 60-second follow-up window stays open for additional context via open_feedback_session → append_feedback_context → finalize_feedback_session.
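The real session tools run over MCP, but the follow-up-window behavior can be sketched as a small state machine. This is a hedged illustration; the class and method names are not ThumbGate's API:

```python
import time

class FeedbackSession:
    """Sketch of a feedback session with a timed follow-up window.

    Models only the 60-second window for appending context; in the
    real flow, open_feedback_session / append_feedback_context /
    finalize_feedback_session run over MCP and the lesson is
    distilled server-side from the full conversation.
    """
    WINDOW_SECONDS = 60

    def __init__(self, thumb: str):
        self.thumb = thumb
        self.opened_at = time.monotonic()
        self.context: list[str] = []

    def append_context(self, note: str) -> bool:
        # Extra detail is only accepted while the window is open.
        if time.monotonic() - self.opened_at > self.WINDOW_SECONDS:
            return False
        self.context.append(note)
        return True

    def finalize(self) -> dict:
        # The real server infers a lesson from up to 8 prior
        # conversation entries; here we just bundle the raw signal.
        return {"thumb": self.thumb, "context": self.context}

session = FeedbackSession("down")
session.append_context("you lied about deployment")
session.append_context("tests were actually failing")
lesson_input = session.finalize()
```

The window is what lets a bare thumbs-down grow into a specific lesson: late context is simply rejected rather than misattributed to a newer action.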
Free and self-hosted users can invoke search_lessons directly through MCP, and via the CLI with npx thumbgate lessons.
Built-in Gates
┌─────────────────────────────────────────────────────────┐
│ ENFORCEMENT LAYER │
│ │
│ ⛔ force-push → blocks git push --force │
│ ⛔ protected-branch → blocks direct push to main │
│ ⛔ unresolved-threads → blocks push with open reviews │
│ ⛔ package-lock-reset → blocks destructive lock edits │
│ ⛔ env-file-edit → blocks .env secret exposure │
│ │
│ + custom gates in config/gates/custom.json │
└─────────────────────────────────────────────────────────┘

Pricing
┌──────────────────┬──────────────────────────────┬──────────────────────┐
│ FREE │ TEAM $99/seat/mo (min 3) │ PRO $19/mo · $149/yr│
├──────────────────┼──────────────────────────────┼──────────────────────┤
│ Local CLI │ Workflow Hardening Sprint │ Personal dashboard │
│ Enforced gates │ Shared hosted lesson DB │ Export feedback data │
│ 3 captures/day │ Org-wide dashboard │ Review-ready exports │
│ 5 searches/day │ Approval + audit proof │ │
│ Unlimited recall │ Isolated execution guidance │ │
└──────────────────┴──────────────────────────────┴──────────────────────┘

Start Workflow Hardening Sprint · Live Dashboard · See Pro
Where to start:
Teams: Begin with the Workflow Hardening Sprint — qualify one real repeated failure before committing to a full rollout
Solo operators: ThumbGate Pro adds a personal dashboard and export-ready evidence
Individuals & open source: Free CLI tier, self-hosted
Tech Stack
┌──────────────────────┬──────────────────────┬──────────────────────┐
│ STORAGE │ INTELLIGENCE │ ENFORCEMENT │
│ │ │ │
│ SQLite + FTS5 │ MemAlign dual recall │ PreToolUse hook │
│ LanceDB vectors │ Thompson Sampling │ engine │
│ JSONL logs │ (adaptive lesson │ Gates config │
│ File-based context │ selection) │ Hook wiring │
│ │ │ │
│ │ │ │
├──────────────────────┼──────────────────────┼──────────────────────┤
│ INTERFACES │ BILLING │ EXECUTION │
│ │ │ │
│ MCP stdio │ Stripe │ Railway │
│ HTTP API │ │ Cloudflare Workers │
│ CLI │ │ Docker Sandboxes │
│ Node.js >=18 │ │ │
└──────────────────────┴──────────────────────┴──────────────────────┘

FAQ
Is ThumbGate a model fine-tuning tool? No. ThumbGate does not update model weights in frontier LLMs. It captures your feedback, stores lessons, injects context at runtime, and blocks bad actions before they execute.
How is this different from CLAUDE.md or .cursorrules? Those are suggestions the agent can ignore. ThumbGate gates are enforced — they physically block the action before it runs. They also auto-generate from feedback instead of requiring manual writing.
Does it work with my agent? Yes. It's MCP-compatible and works with Claude Code, Claude Desktop, Cursor, Codex, Gemini CLI, Amp, OpenCode, and any agent that supports MCP or pre-action hooks.
What's self-improvement mode? ThumbGate can watch for failure signals (test failures, reverted edits, error patterns) and auto-generate prevention rules — no thumbs-down required. Your agent gets smarter every session.
Is it free? Free tier: 3 daily feedback captures, 5 daily lesson searches, unlimited recall, enforced gates. History-aware distillation turns vague feedback into specific lessons. Pro is $19/mo or $149/yr for a personal dashboard and exports. Team rollout starts at $99/seat/mo (3-seat minimum) with shared hosted lesson DB, org dashboard, approval + audit proof, and isolated execution guidance.
Enterprise Story
ThumbGate is the control plane for AI coding agents:
Feedback becomes enforcement — repeated failures stop at the gate instead of reappearing in review.
Workflow Sentinel scores blast radius before execution, so risky PR, release, and publish flows are visible early.
High-risk local actions route into Docker Sandboxes; hosted team automations use a signed isolated sandbox lane.
Team rollout stays tied to Verification Evidence instead of trust-me operator claims.
Release Confidence
- Every PR must carry a Changeset entry — each shipped version has a customer-readable explanation before publish.
- Version-sync checks keep `package.json`, `CHANGELOG.md`, plugin manifests, and installer metadata aligned.
- Final close-out requires verifying the exact `main` merge commit, with proof anchored in Verification Evidence.
See Release Confidence for the full trust chain.
Docs
Commercial Truth — pricing, claims, what we don't say
Changeset Strategy — how release notes and version bumps are enforced
First Dollar Playbook — turning one painful workflow into the next booked pilot
Release Confidence — how changesets, version checks, and proof lanes make publishes inspectable
SemVer Policy — stable vs prerelease channel rules
Verification Evidence — proof artifacts
WORKFLOW.md — agent-run contract (scope, hard stops, proof commands)
Ready-for-agent issue template — intake for agent tasks
Pro overlay: thumbgate-pro — separate repo/package inheriting from this base.
License
MIT. See LICENSE.