
Benchmarks

85.8% on LoCoMo (non-adversarial, end-to-end answer accuracy) — validated on 1,986 questions across 10 conversations with dual grading.

| Result | Score |
|---|---|
| Conversational learning vs raw ingestion | +23 points (76.6% vs 53.0%, p<0.0001) |
| Architecture vs model effect | Architecture ~10x larger contributor |
| Poison resilience (1,135 adversarial memories) | -2.6 to -4.2 points only |
| TagCascade retrieval (tags-first + CE rerank) | +1.9 Hit@1 vs pure CE (p<0.0001) |

The benchmark pipeline runs on a single GPU with no cloud dependencies; Roampal itself runs on CPU — no GPU required. Full methodology, data, and evaluation scripts: roampal-labs
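
The TagCascade result above describes a two-stage retriever: a cheap tags-first filter narrows the candidate set, then a cross-encoder reranks the survivors. A minimal sketch of that cascade shape, where the tag sets and the stand-in overlap score are purely illustrative (Roampal's actual reranker is a cross-encoder model, not tag overlap):

```python
# Illustrative two-stage retrieval: tags-first filter, then rerank.
# The memories, tags, and stand-in scoring function are all hypothetical.

def tag_filter(query_tags, memories):
    """Stage 1: keep only memories sharing at least one tag with the query."""
    return [m for m in memories if query_tags & m["tags"]]

def rerank(query_tags, candidates):
    """Stage 2: stand-in for a cross-encoder; here, rank by tag overlap size."""
    return sorted(candidates, key=lambda m: len(query_tags & m["tags"]), reverse=True)

memories = [
    {"text": "JWT refresh pattern fixed auth loop", "tags": {"auth", "jwt"}},
    {"text": "User prefers tabs over spaces", "tags": {"style"}},
    {"text": "Auth bug traced to expired token", "tags": {"auth", "bug"}},
]
query = {"auth", "bug"}
hits = rerank(query, tag_filter(query, memories))
print(hits[0]["text"])  # → Auth bug traced to expired token
```

The point of the cascade is cost: the cheap filter shrinks the set the expensive reranker has to score.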

Paper: "Beyond Ingestion: What Conversational Memory Learning Reveals on a Corrected LoCoMo Benchmark" (Logan Teague, April 2026)


Quick Start

pip install roampal
roampal init

Auto-detects installed tools. Restart your editor and start chatting.

Target a specific tool: roampal init --claude-code or roampal init --opencode

The core loop is identical — both platforms inject context, capture exchanges, and score outcomes. The delivery mechanism differs:

| | Claude Code | OpenCode |
|---|---|---|
| Context injection | Hooks (stdout) | Plugin (system prompt) |
| Exchange capture | Stop hook | Plugin session.idle event |
| Scoring | Main LLM via score_memories tool | Independent sidecar (your chosen model, disabled by default until configured) |
| Self-healing | Hooks auto-restart server on failure | Plugin auto-restarts server on failure |

Claude Code prompts the main LLM to score each exchange via the score_memories tool. OpenCode never self-scores — an independent sidecar (a separate API call) reviews each exchange as a third party, removing self-assessment bias. The score_memories tool is not registered on OpenCode.

Scoring is disabled by default until you explicitly configure it via roampal sidecar setup. During setup, Roampal detects local models (Ollama, LM Studio, etc.) and lets you choose a scoring model. Zen free models are available as an explicit opt-in for users without a local model or API key — they route through OpenCode's proxy, which may log data. A cheap or local model works fine; scoring doesn't need a powerful model.

v0.5.6: Hardening release — closes remaining coverage gaps from the v0.5.5.x verification audit. Phantom sweep after archived cleanup, auto-cleanup under capacity pressure, dedup observability, hardened delete permissions, archive-then-add cycle tests, sidecar prompt alignment with benchmark, async scoring queue (per-session deferred retry), MCP tool definition quality rewrite (TDQS), OpenCode Go auto-detect in sidecar setup wizard, and user name extraction fix.

v0.5.5.2: Hotfix — Windows plugin install now verifies copy succeeded (post-copy size check + manual read/write fallback for OneDrive/antivirus interference). Also installs to %APPDATA%\opencode\plugins as fallback since some Electron apps resolve config paths differently on Windows. Fixes remaining cases of issue #11 where roampal init --force reported success but the plugin was empty or in the wrong directory.

v0.5.5.1: Hotfix — OpenCode Desktop now correctly switches profiles when you switch projects in the UI (issue #10). Plugin reads the active session's directory via client.session.get() instead of caching the profile at module load, so a singleton plugin across a multi-project workspace still hits the right profile per message. Also: roampal init --force actually overwrites the OpenCode plugin file now (issue #11), with clearer errors when Desktop holds a file lock.

v0.5.5: Soft-delete for memory_bank — ChromaDB hard delete doesn't actually remove vectors from HNSW, causing phantom dedup matches that block new memories after GUI deletion. Replaced with status=archived metadata update plus status filter on all query/dedup paths. Also: scoring mutex → async queue (eliminates dropped requests), sidecar summary contamination fix (delimiter fencing).

v0.5.4: Profile binding is now per-request, not per-process. Every client (MCP server, OpenCode plugin, Python hooks for Claude Code / Cursor) sends an X-Roampal-Profile header so a single FastAPI server can cleanly serve multiple profiles simultaneously. Fixes issue #7 where OpenCode Desktop's per-project ROAMPAL_PROFILE in opencode.json was ignored because the singleton FastAPI bound the profile once at startup.
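
The per-request binding above comes down to each client attaching one header. A sketch using Python's stdlib that only constructs the request object (the /api/health path and port are from this README; no running server is assumed):

```python
import urllib.request

# Build a request carrying the per-request profile header (v0.5.4+).
# This only constructs the object; sending it would require the server.
req = urllib.request.Request(
    "http://127.0.0.1:27182/api/health",
    headers={"X-Roampal-Profile": "work"},
)

# Note: urllib normalizes header keys via str.capitalize(), so the
# stored key is "X-roampal-profile".
print(req.get_header("X-roampal-profile"))  # → work
```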

v0.5.3: Sidecar scoring now requires explicit configuration (no automatic fallback to Zen or localhost). Small local models (qwen2.5:3b, etc.) that return bare JSON arrays instead of OpenAI-shaped responses are handled transparently via server-side shape tolerance.
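
The "shape tolerance" above can be sketched as a parser that accepts either an OpenAI-shaped chat completion or a bare JSON array. The function name and the exact score fields are illustrative, not Roampal's actual code:

```python
import json

def extract_scores(raw: str):
    """Accept either an OpenAI-shaped chat completion whose message
    content is a JSON array, or a bare JSON array (small local models)."""
    data = json.loads(raw)
    if isinstance(data, list):  # bare array, e.g. from qwen2.5:3b
        return data
    # OpenAI shape: the array is a JSON string inside the message content
    content = data["choices"][0]["message"]["content"]
    return json.loads(content)

bare = '[{"id": "patterns_a1b2", "score": 0.9}]'
openai_shaped = json.dumps({"choices": [{"message": {"content": bare}}]})
assert extract_scores(bare) == extract_scores(openai_shaped)
```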

How It Works

When you type a message, Roampal automatically injects relevant context before your AI sees it:

You type:

fix the auth bug

Your AI sees:

═══ KNOWN CONTEXT ═══
• JWT refresh pattern fixed auth loop [id:patterns_a1b2] (3d, 90% proven, patterns)
• User prefers: never stage git changes [id:mb_c3d4] (memory_bank)
═══ END CONTEXT ═══

fix the auth bug

No manual calls. No workflow changes. It just works.
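
The injection step is essentially a pure formatting function that wraps retrieved memories around the user's message. A sketch matching the block format shown above (the function itself is illustrative, not Roampal's code):

```python
def inject_context(memories, user_message):
    """Prefix the user's message with a KNOWN CONTEXT block.
    Illustrative only; matches the format shown in the example above."""
    if not memories:
        return user_message  # nothing relevant: pass the message through
    lines = ["═══ KNOWN CONTEXT ═══"]
    lines += [f"• {m}" for m in memories]
    lines += ["═══ END CONTEXT ═══", "", user_message]
    return "\n".join(lines)

prompt = inject_context(
    ["JWT refresh pattern fixed auth loop [id:patterns_a1b2] (3d, 90% proven, patterns)"],
    "fix the auth bug",
)
print(prompt.splitlines()[0])   # → ═══ KNOWN CONTEXT ═══
print(prompt.splitlines()[-1])  # → fix the auth bug
```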

The Loop

  1. You type a message

  2. Roampal injects relevant context automatically (hooks in Claude Code, plugin in OpenCode)

  3. AI responds with full awareness of your history, preferences, and what worked before

  4. Outcome scored — good advice gets promoted, bad advice gets demoted

  5. Repeat — the system gets smarter every exchange
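
Step 4 of the loop, promotion and demotion, can be sketched as a running score per memory with thresholds. All numbers, field names, and the update rule here are hypothetical stand-ins for whatever Roampal actually uses:

```python
def apply_outcome(memory, outcome_score, promote_at=0.8, demote_at=0.2):
    """Nudge a memory's proven score toward the exchange outcome, then
    promote/demote on thresholds. Thresholds and weights are hypothetical."""
    memory["proven"] = 0.7 * memory["proven"] + 0.3 * outcome_score
    if memory["proven"] >= promote_at and memory["collection"] == "history":
        memory["collection"] = "patterns"   # good advice gets promoted
    elif memory["proven"] <= demote_at:
        memory["collection"] = "working"    # bad advice heads for deletion
    return memory

m = {"proven": 0.75, "collection": "history"}
apply_outcome(m, 1.0)  # a good outcome: 0.7*0.75 + 0.3*1.0 = 0.825
print(m["collection"])  # → patterns
```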

Five Memory Collections

| Collection | Purpose | Lifetime |
|---|---|---|
| working | Current session context | 24h — promotes if useful, deleted otherwise |
| history | Past conversations | 30 days, outcome-scored |
| patterns | Proven solutions | Persistent while useful, promoted from history |
| memory_bank | Identity, preferences, goals | Permanent |
| books | Uploaded reference docs | Permanent |
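
The lifetimes in the table map directly to a retention check. A simplified sketch (the real system also weighs usefulness, not just age):

```python
from datetime import timedelta

# Retention windows from the table above; None means permanent.
LIFETIMES = {
    "working": timedelta(hours=24),
    "history": timedelta(days=30),
    "patterns": None,      # persistent while useful
    "memory_bank": None,   # permanent
    "books": None,         # permanent
}

def is_expired(collection: str, age: timedelta) -> bool:
    """True when a memory has outlived its collection's window."""
    limit = LIFETIMES[collection]
    return limit is not None and age > limit

print(is_expired("working", timedelta(hours=30)))       # → True
print(is_expired("memory_bank", timedelta(days=365)))   # → False
```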

Commands

roampal init                # Auto-detect and configure installed tools
roampal init --claude-code  # Configure Claude Code explicitly
roampal init --opencode     # Configure OpenCode explicitly
roampal init --no-input     # Non-interactive setup (CI/scripts)
roampal start               # Start the HTTP server manually
roampal stop                # Stop the HTTP server
roampal status              # Check if server is running
roampal status --json       # Machine-readable status (for scripting)
roampal stats               # View memory statistics
roampal stats --json        # Machine-readable statistics (for scripting)
roampal doctor              # Diagnose installation issues
roampal summarize           # Summarize long memories (retroactive cleanup)
roampal score               # Score the last exchange (manual/testing)
roampal context             # Output recent exchange context
roampal ingest <file>       # Add documents to books collection
roampal books               # List all ingested books
roampal remove <title>      # Remove a book by title
roampal sidecar status      # Check scoring model configuration (OpenCode)
roampal sidecar setup       # Configure scoring model (OpenCode)
roampal sidecar test        # Test scoring model response format (OpenCode)
roampal sidecar disable     # Disable scoring (removes config, retrieval still works)
roampal retag               # Re-extract tags on memories using the sidecar LLM

# Sidecar scope flags (v0.5.3+) — OpenCode merges project-local over user-global config:
roampal sidecar setup --scope user       # Write only to user-global config (~/.config/opencode/)
roampal sidecar setup --scope project    # Write only to project-local opencode.json in cwd ancestry
roampal sidecar setup                    # Auto-detects: uses project-local if shadow exists, otherwise user-global

# Sidecar scope flags for disable (v0.5.3+):
roampal sidecar disable --scope user       # Clear only from user-global config
roampal sidecar disable --scope project    # Clear only from project-local opencode.json
roampal sidecar disable                    # Auto-detects scope same as setup

# Named memory profiles (v0.5.1) — isolate memory per project, per client, etc.
roampal profile list                         # List registered profiles
roampal profile show                         # Show active profile and its path
roampal profile create <name>                # Create auto-located profile
roampal profile register <name> --path <dir> # Register an existing directory
roampal profile use <name>                   # Persist as user-global default
roampal profile unuse                        # Clear persistence
roampal profile switch <name>                # Persist + kill running server
roampal profile delete <name>                # Remove from registry
roampal start --profile <name>               # One-off launch on a profile

Named Memory Profiles (v0.5.1)

Run separate memory stores for different contexts — per project, per client (Claude Code vs OpenCode), work vs home. Profiles are managed entirely through the CLI; no config files to hand-edit.

roampal profile create work          # auto-located at <appdata>/Roampal/data/work/
roampal profile switch work          # persist + kill running server
# next MCP tool call spawns a fresh server on 'work'

Register an existing directory as a profile (no data migration):

roampal profile register project-a --path /existing/custom/path

Precedence (highest wins):

  1. --profile <name> flag

  2. ROAMPAL_PROFILE=<name> env var (set per-project in opencode.json or .claude.json env: {})

  3. roampal profile use <name> persisted default

  4. "default" fallback
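
The precedence list above maps directly to a resolver. A sketch (Roampal's actual resolution code may differ, but the documented order is flag > env var > persisted default > "default"):

```python
import os

def resolve_profile(cli_flag=None, persisted_default=None):
    """Resolve the active profile using the documented precedence:
    --profile flag > ROAMPAL_PROFILE env var > persisted default > "default"."""
    return (
        cli_flag
        or os.environ.get("ROAMPAL_PROFILE")
        or persisted_default
        or "default"
    )

os.environ.pop("ROAMPAL_PROFILE", None)
print(resolve_profile())                                    # → default
print(resolve_profile(persisted_default="work"))            # → work
os.environ["ROAMPAL_PROFILE"] = "project-a"
print(resolve_profile(persisted_default="work"))            # → project-a
print(resolve_profile(cli_flag="ci", persisted_default="work"))  # → ci
```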

MCP Tools

Your AI gets these memory tools:

| Tool | Description | Platforms |
|---|---|---|
| search_memory | Deep search across all collections | Both |
| add_to_memory_bank | Store permanent facts (identity, preferences, goals) | Both |
| update_memory | Correct or update existing memories | Both |
| delete_memory | Remove outdated info | Both |
| score_memories | Score previous exchange outcomes | Claude Code |
| record_response | Store key takeaways from significant exchanges | Both |

How scoring works: Claude Code's hooks prompt the main LLM to call score_memories every turn. OpenCode uses an independent sidecar that scores silently in the background — the model never sees a scoring prompt and score_memories is not registered as a tool. If the sidecar is unavailable, a warning prompts the user to run roampal sidecar setup. Choose your scoring model during roampal init or via roampal sidecar setup.

How Roampal Compares

| Feature | Roampal Core | Claude Code built-in (CLAUDE.md / auto memory) | OpenCode built-in |
|---|---|---|---|
| Learns from outcomes | Yes — bad advice demoted, good advice promoted | No | No |
| Semantic retrieval | Yes — TagCascade + cross-encoder reranking | No — files loaded in full, no search | No memory system |
| Context injection | Automatic — relevant memories per query | Full CLAUDE.md every session, auto memory on demand | None |
| Atomic fact extraction | Yes — summaries + facts, two-lane retrieval | No — saves what Claude decides is useful | No |
| Works across projects | Yes — shared memory across all projects | Per-project only (per git repo) | No memory |
| Scales with history | Yes — 5 collections, promotion/demotion/decay | CLAUDE.md unbounded, auto memory first 200 lines | No memory |
| Fully local / private | Yes — ChromaDB on your machine | Yes | Yes |

┌─────────────────────────────────────────────────────────┐
│  pip install roampal && roampal init                    │
│    Claude Code: hooks + MCP → ~/.claude/                │
│    OpenCode:    plugin + MCP → ~/.config/opencode/      │
└─────────────────────────────────────────────────────────┘
                         │
                         ▼
┌─────────────────────────────────────────────────────────┐
│  HTTP Hook Server (port 27182)                          │
│    Auto-started on first use, self-heals on failure     │
│    Manual control: roampal start / roampal stop         │
└─────────────────────────────────────────────────────────┘
                         │
                         ▼
┌─────────────────────────────────────────────────────────┐
│  User types message                                     │
│    → Hook/plugin calls HTTP server for context          │
│    → AI sees relevant memories, responds                │
│    → Exchange stored, scored (hooks or sidecar)         │
└─────────────────────────────────────────────────────────┘
                         │
                         ▼
┌─────────────────────────────────────────────────────────┐
│  Single-Writer Backend                                  │
│    FastAPI → UnifiedMemorySystem → ChromaDB             │
│    All clients share one server, isolated by session    │
└─────────────────────────────────────────────────────────┘

See dev/docs/ for full technical details.

Requirements

  • Python 3.10+

  • One of: Claude Code or OpenCode

  • Platforms: Windows, macOS, Linux (primarily developed and tested on Windows)

  • RAM: ~800MB available (cross-encoder reranker + embeddings + ChromaDB)

  • Disk: ~500MB for models (multilingual embedding + reranker, downloaded automatically on first use)

  • CPU: Any modern x86-64 processor with AVX2 (Intel Haswell 2013+ / AMD Excavator 2015+)

  • GPU: Not required — all inference runs on CPU via ONNX Runtime

Troubleshooting

Claude Code not injecting context?

  • Restart Claude Code (hooks load on startup)

  • Check the HTTP server: curl http://127.0.0.1:27182/api/health

  • Verify ~/.claude.json has the roampal-core MCP entry with the correct Python path

  • Check the Claude Code output panel for MCP errors

OpenCode not injecting context?

  • Make sure you ran roampal init --opencode

  • Check that the server auto-started: curl http://127.0.0.1:27182/api/health

  • If not, start it manually: roampal start

Server stopped responding? This is expected occasionally. Roampal self-heals: if the HTTP server stops responding, it is automatically restarted and the request retried.

Still stuck? Ask your AI for help — it can read logs and debug Roampal issues directly.

Support

Roampal Core is completely free and open source.

License

Apache 2.0
