Glama
22,816 servers.

"author:GhadiSaab" matching MCP servers:

  • License: A · Quality: B · Maintenance: -
    A fully local persistent memory layer for LLM coding agents (Claude Code, Codex, Gemini CLI, OpenCode). A shell wrapper intercepts tool invocations, fires hooks on every tool call, then runs a 3-layer pipeline (extract → compress to ≤500-token digest → merge into project memory doc) at session end. The next session gets prior context injected automatically.
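
The session-end pipeline in the listing (extract → compress to a ≤500-token digest → merge into a project memory doc) can be sketched as follows. This is a hypothetical illustration of the described architecture, not the project's actual code; all function names, the tool-call record shape, and the word-count token approximation are assumptions.

```python
# Hypothetical sketch of the 3-layer session-end pipeline described above.
# All names and data shapes are illustrative, not the project's real API.

def extract(tool_calls):
    """Layer 1: pull salient facts out of the session's tool-call log."""
    facts = []
    for call in tool_calls:
        if call.get("result"):
            facts.append(f"{call['tool']}: {call['result']}")
    return facts

def compress(facts, max_tokens=500):
    """Layer 2: keep facts until a ~max_tokens budget is hit.
    Tokens are crudely approximated as whitespace-separated words."""
    digest, used = [], 0
    for fact in facts:
        n = len(fact.split())
        if used + n > max_tokens:
            break
        digest.append(fact)
        used += n
    return "\n".join(digest)

def merge(digest, memory_doc):
    """Layer 3: fold the digest into the project memory document that
    gets injected at the start of the next session."""
    return memory_doc.rstrip() + "\n\n## Session digest\n" + digest

# Example run over a single recorded tool call:
calls = [{"tool": "read_file", "result": "config lives in settings.toml"}]
memory = merge(compress(extract(calls)), "# Project memory")
print(memory)
```

In the real system the listing describes, a shell wrapper would populate the tool-call log via hooks fired on every tool invocation, and the merged memory doc would be written to disk so the next session can load it.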