
🧠 extremis

Memory that gets smarter the more your agent uses it


Deploy to Render

One click · auto-provisions Postgres · memory persists across restarts


The problem

Every team building an AI agent hits the same wall.

Your agent forgets everything the moment a conversation ends. So you add memory. You set up a vector database, write chunking logic, figure out retrieval ranking, handle stale entries, add multi-user isolation. Three weeks later you've built a half-working RAG pipeline and still haven't shipped the actual feature.

And even when you ship it, it doesn't learn. Every memory is treated identically. A fact your agent has recalled a hundred times, to the user's delight, sits next to one it got wrong once. Nothing improves. There's no feedback loop. You're running the same dumb cosine search forever.

The other problem is lock-in. Your vectors are in Pinecone. Moving them means re-embedding everything, rewriting your retrieval logic, and hoping nothing breaks.

extremis solves all three.


What makes extremis different

1. Memory that forgets intelligently

Every competitor focuses on storing memory. Nobody talks about forgetting.

Human memory doesn't keep everything forever: unimportant things fade, important things strengthen. Agents with infinite, flat memory become slow and noisy over time. Intelligent forgetting is the hard problem nobody is solving.

extremis does two things here: recency decay (old memories rank lower automatically) and asymmetric RL weighting (negative feedback hurts 1.5× more than positive feedback helps, because mistakes should leave a stronger mark). The result is a memory that naturally surfaces what matters and buries what doesn't.

mem = Extremis(config=Config(
    recency_half_life_days=30,  # episodic memories halve in rank every 30 days
    rl_alpha=0.8,               # strong RL signal: useful things stick, useless things fade
))

# This memory will rank lower in every future search
mem.report_outcome([bad_memory_id], success=False, weight=1.0)
# → score decreases by 1.5 (not 1.0; the asymmetry is intentional)
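The update itself is a small asymmetric delta. A minimal sketch (the function name and flat additive score are illustrative assumptions; only the 1.5× negative multiplier comes from the behaviour described above):

```python
NEGATIVE_MULTIPLIER = 1.5  # mistakes leave a stronger mark than successes

def apply_feedback(utility_score: float, success: bool, weight: float = 1.0) -> float:
    """Hypothetical sketch of the asymmetric RL update: a failure
    subtracts 1.5x what an equally weighted success would add."""
    delta = weight if success else -weight * NEGATIVE_MULTIPLIER
    return utility_score + delta

score = 0.0
score = apply_feedback(score, success=True)    # +1.0
score = apply_feedback(score, success=False)   # -1.5
assert score == -0.5
```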

2. Memory that explains itself

Agents make decisions based on memory. But why did it recall that specific memory? Without explainability you're guessing, debugging is painful, and auditing is impossible.

Every recall() result includes a plain-English reason:

results = mem.recall("what does the user prefer?")

for r in results:
    print(r.memory.content)
    print(r.reason)

# "User prefers concise answers, no filler words"
# → "similarity 0.91 · score +4.0 · used 8× · 3d old"

# "User prefers dark mode in all UIs"
# → "semantic (always included) · similarity 0.73 · score +1.0 · used 3× · 12d old"

# "User once mentioned preferring email over Slack"
# → "similarity 0.54 · score -1.5 · first recall · 45d old"

The reason tells you: how semantically relevant it was, how much feedback has validated it, how many times it's been used, and how old it is. Auditable. Debuggable.


3. Cross-agent shared memory

Right now memory is per-agent. But the next wave of AI is agent teams: a research agent, a writing agent, a review agent, all working together. They need a shared brain.

extremis's namespace model already supports this. Multiple agents can read from and write to the same memory pool:

# All three agents share the same memory namespace
research = Extremis(config=Config(namespace="team_alpha"))
writer   = Extremis(config=Config(namespace="team_alpha"))
reviewer = Extremis(config=Config(namespace="team_alpha"))

# Research agent stores what it found
research.remember("GPT-4 outperforms Claude on math benchmarks by 12%")
research.remember("Source: Stanford HAI report, April 2026")

# Writing agent recalls it without any extra wiring
results = writer.recall("GPT-4 performance data")
# → [GPT-4 outperforms Claude on math benchmarks by 12%]
# → [Source: Stanford HAI report, April 2026]

# Knowledge graph is shared too
research.kg_add_entity("Stanford HAI", EntityType.ORG)
research.kg_add_relationship("Stanford HAI", "HAI Report", "published")
print(writer.kg_query("Stanford HAI"))  # same graph

4. No RAG pipeline to build

One pip install. Two lines of config. extremis handles embedding, storage, retrieval ranking, consolidation, and the knowledge graph. You call remember() and recall().

# Local: zero infra
from extremis import Extremis
mem = Extremis()

# Your existing vector store
mem = Extremis(config=Config(store="pinecone", pinecone_api_key="..."))

# Self-hosted server: no model download on the client
from extremis import HostedClient
mem = HostedClient(api_key="extremis_sk_...", base_url="http://your-server:8000")

# Same three lines work for all three
mem.remember("User is building a WhatsApp AI", conversation_id="c1")
results = mem.recall("what is the user building?")
mem.report_outcome([r.memory.id for r in results], success=True)

5. Backend portability: no lock-in

Your vectors are in Pinecone. Your team moves to Chroma. Your product needs Postgres. One command migrates everything, re-embedding automatically if you're switching models:

extremis-migrate --from pinecone --to postgres \
  --source-pinecone-api-key pk_... \
  --dest-postgres-url postgresql://...

# Switching to OpenAI embeddings at the same time
extremis-migrate --from sqlite --to chroma \
  --dest-embedder text-embedding-3-small

Coming soon

Memory health dashboard: freshness score, contradiction count, retrieval hit rate, coverage gaps. Memory observability that nobody else is building yet.

Domain profiles: pre-built memory configurations for common agent types:

# Coming in v0.2
from extremis.profiles import SalesAgent, CodingAgent, SupportAgent

mem = Extremis(profile=SalesAgent())
# Knows to remember: customer names, deal stage, objections, preferences
# Knows to forget: small talk after 7 days, meeting logistics after 24h
# Attention: high for "budget", "decision maker", "timeline"


How it works

The intelligence layer

extremis sits above your vector store. RL scoring, the knowledge graph, consolidation, and attention scoring are all backend-independent: they work the same whether your vectors are in SQLite, Pinecone, or Chroma.

┌─────────────────────────────────────────────────────────────────┐
│                     YOUR APP / AGENT                            │
│      remember() · recall() · report_outcome() · kg_*()          │
└──────────────────────────┬──────────────────────────────────────┘
                           │
┌──────────────────────────▼──────────────────────────────────────┐
│                 EXTREMIS INTELLIGENCE LAYER                     │
│   RL scoring · Knowledge graph · Consolidation · Observer       │
│   Attention scorer · Namespace isolation · Log durability       │
└────┬──────────────┬──────────────┬──────────────┬───────────────┘
     │              │              │              │
┌────▼───┐   ┌──────▼──┐   ┌──────▼───┐   ┌──────▼──┐
│SQLite  │   │Postgres │   │  Chroma  │   │Pinecone │
│(local) │   │+pgvector│   │ (local)  │   │(hosted) │
└────────┘   └─────────┘   └──────────┘   └─────────┘

The memory flow

Every conversation
─────────────────
  remember("user said X")     ──▢  fsync to JSONL log (durable)
                                    + episodic memory (embedded + stored)

  recall("topic")             ──▢  embed query
                                     → identity + procedural  (always included)
                                     → semantic + episodic    (ranked by score)
                                    ← ranked results

  report_outcome(ids, +1/-1)  ──▢  adjust utility scores
                                     negative gets 1.5× weight (human memory bias)

Periodically
────────────
  consolidate()               ──▢  read log since last checkpoint
                                     → Claude Haiku extracts facts
                                     → semantic/procedural memories written
                                     → checkpoint advanced (safe to re-run)

Retrieval ranking

Every recalled memory gets a final_rank that balances three signals:

final_rank = cosine_similarity
           × (1 + α · tanh(utility_score))     ← learned from feedback
           × exp(−ln 2 · age_days / half_life) ← recency decay

A memory that has proven useful (+1 feedback) ranks above an equally similar but unvalidated memory. Negative signals apply 1.5× weight, the same asymmetry found in human threat learning.
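The formula above can be checked in a few lines of plain Python. A minimal sketch (the function name is illustrative; the defaults α = 0.5 and a 90-day half-life mirror EXTREMIS_RL_ALPHA and EXTREMIS_RECENCY_HALF_LIFE_DAYS from the configuration section):

```python
import math

def final_rank(cosine_similarity: float, utility_score: float, age_days: float,
               alpha: float = 0.5, half_life_days: float = 90.0) -> float:
    """Sketch of the documented ranking formula: similarity, scaled by
    learned feedback, decayed by age."""
    feedback = 1 + alpha * math.tanh(utility_score)               # learned from feedback
    recency = math.exp(-math.log(2) * age_days / half_life_days)  # recency decay
    return cosine_similarity * feedback * recency

# A validated memory outranks an equally similar, unvalidated one
assert final_rank(0.8, utility_score=1.0, age_days=10) > final_rank(0.8, 0.0, 10)

# At exactly one half-life, the recency factor is 0.5
assert abs(final_rank(0.8, 0.0, 90) / final_rank(0.8, 0.0, 0) - 0.5) < 1e-9
```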

Memory layers

| Layer | What it holds | Written by | Always recalled? |
|---|---|---|---|
| identity | Who the user fundamentally is | Human review only | ✅ Always |
| procedural | Behavioural rules: "ask about deadline first" | Consolidator | ✅ Always |
| semantic | Durable facts: "user is a solo Python developer" | Consolidator | By relevance |
| episodic | Timestamped conversation events | remember() | By relevance |
| working | Session-scoped, expires at a set datetime | remember_now() | By relevance |

Knowledge graph

Beyond vectors, extremis maintains a structured graph that answers structural questions semantic search can't:

mem.kg_add_entity("Alice", EntityType.PERSON)
mem.kg_add_entity("Acme Corp", EntityType.ORG)
mem.kg_add_relationship("Alice", "Acme Corp", "works_at", weight=0.95)
mem.kg_add_attribute("Alice", "timezone", "Asia/Dubai")
mem.kg_add_attribute("Alice", "tone", "formal")

# "Who does Alice work for?" can't be answered with cosine similarity alone
result = mem.kg_query("Alice")
# → Entity + all relationships + all attributes + BFS traverse

# Two-hop traverse
graph = mem.kg_traverse("Alice", depth=2)

Attention scoring

Before deciding how much to engage with an incoming message, score it, at zero LLM cost:

score = sender_score + channel_score + content_score + context_score  (0–100)

full      ≥ 75  →  engage fully
standard  ≥ 50  →  balanced response
minimal   ≥ 25  →  brief acknowledgement
ignore    < 25  →  skip
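The threshold mapping above takes only a few lines. A sketch (the function name is illustrative; the default thresholds match the configuration section):

```python
def attention_level(score: float, full: int = 75, standard: int = 50,
                    minimal: int = 25) -> str:
    """Map a 0-100 attention score to an engagement level using the
    documented default thresholds."""
    if score >= full:
        return "full"
    if score >= standard:
        return "standard"
    if score >= minimal:
        return "minimal"
    return "ignore"

assert attention_level(85) == "full"      # engage fully
assert attention_level(60) == "standard"  # balanced response
assert attention_level(30) == "minimal"   # brief acknowledgement
assert attention_level(10) == "ignore"    # skip
```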

Observer (log compression)

Compresses raw log entries into priority-tagged observations. No LLM; it runs instantly:

🔴 CRITICAL  decisions, errors, deadlines, shipped/launched, reward signals
🟡 CONTEXT   reasons, insights, learnings, "because", "discovered"
🟢 INFO      everything else
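A sketch of how such a zero-LLM heuristic can work. The keyword lists below are inferred from the priority descriptions above, not extremis's actual rules, and the function name is illustrative:

```python
# Hypothetical keyword lists inferred from the priority tiers above
CRITICAL = ("decision", "error", "deadline", "shipped", "launched", "reward")
CONTEXT = ("because", "discovered", "insight", "learning", "reason")

def tag_observation(entry: str) -> str:
    """Tag a log entry by priority with simple keyword matching;
    anything that matches neither list falls through to INFO."""
    text = entry.lower()
    if any(keyword in text for keyword in CRITICAL):
        return "CRITICAL"
    if any(keyword in text for keyword in CONTEXT):
        return "CONTEXT"
    return "INFO"

assert tag_observation("Fixed a deploy error in CI") == "CRITICAL"
assert tag_observation("Chose Postgres because pgvector ranks in SQL") == "CONTEXT"
assert tag_observation("User said hello") == "INFO"
```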

Install

Requires Python 3.11+

If pip install extremis says "no matching distribution found", your default pip points to Python 3.9 or older. This is common on macOS.

Check your version: python3 --version

| Platform | Fix |
|---|---|
| macOS | brew install python@3.11, then use pip3.11 |
| Linux | sudo apt install python3.11 python3.11-pip |
| Windows | Install from python.org/downloads |

# Confirm you have Python 3.11+
python3.11 --version

# Core: SQLite + local sentence-transformers (no API key needed)
pip3.11 install extremis

# + MCP server (Claude Desktop / Code)
pip3.11 install "extremis[mcp]"

# + Postgres backend
pip3.11 install "extremis[postgres]"

# + Chroma backend
pip3.11 install "extremis[chroma]"

# + Pinecone backend
pip3.11 install "extremis[pinecone]"

# + OpenAI embeddings (swap out the 90 MB model download)
pip3.11 install "extremis[openai]"

# + LLM client wrappers (Claude / OpenAI; automatic memory, one import change)
pip3.11 install "extremis[wrap-anthropic]"   # for Claude
pip3.11 install "extremis[wrap-openai]"      # for OpenAI

# + Hosted API server
pip3.11 install "extremis[server]"

# + Python SDK for hosted cloud
pip3.11 install "extremis[client]"

# Everything
pip3.11 install "extremis[all]"


First-run note: sentence-transformers downloads all-MiniLM-L6-v2 (~90 MB) on first use. This happens once and is cached to ~/.cache/huggingface/. To skip it, use OpenAI embeddings: EXTREMIS_EMBEDDER=text-embedding-3-small.


Quickest start: wrap your existing LLM client

Don't want to change your application logic at all? Change one import and get memory for free.

Claude (Anthropic)

# Before
import anthropic
client = anthropic.Anthropic(api_key="sk-ant-...")

# After: a one-line change, nothing else in your app changes
from extremis.wrap import Anthropic
from extremis import Extremis

client = Anthropic(api_key="sk-ant-...", memory=Extremis())

# Your existing code works unchanged
response = client.messages.create(
    model="claude-sonnet-4-6",
    max_tokens=1024,
    messages=[{"role": "user", "content": "What's my name?"}]
)
# ↑ extremis automatically recalled context before the call
#   and saved the conversation after; nothing else to do

OpenAI

from extremis.wrap import OpenAI
from extremis import Extremis

client = OpenAI(api_key="sk-...", memory=Extremis())

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "What did we discuss last time?"}]
)

With hosted memory (zero local files)

from extremis.wrap import Anthropic
from extremis import HostedClient

# Memory lives in the cloud: no local DB, no model download
client = Anthropic(
    api_key="sk-ant-...",
    memory=HostedClient(api_key="extremis_sk_...", base_url="https://your-server.onrender.com"),
    session_id="user_123",   # group messages per user for consolidation
)

Install:

pip3.11 install "extremis[wrap-anthropic]"   # for Claude
pip3.11 install "extremis[wrap-openai]"      # for OpenAI

Full API quick start

from extremis import Extremis, MemoryLayer
from extremis.types import EntityType

mem = Extremis()  # ~/.extremis/ by default

# ── Remember ──────────────────────────────────────────────────
mem.remember("User is building a WhatsApp AI", conversation_id="conv_001")
mem.remember("User prefers concise answers", conversation_id="conv_001")

# Skip the log for time-sensitive or high-confidence facts
mem.remember_now(
    "Flight departs Thursday at 06:00",
    layer=MemoryLayer.EPISODIC,
    confidence=0.99,
)

# ── Recall ────────────────────────────────────────────────────
results = mem.recall("what product is the user building?", limit=5)
for r in results:
    print(f"[{r.memory.layer.value}] {r.memory.content}  rank={r.final_rank:.3f}")

# ── Feedback: memories get smarter over time ──────────────────
mem.report_outcome([r.memory.id for r in results[:2]], success=True)

# ── Knowledge graph ───────────────────────────────────────────
mem.kg_add_entity("User", EntityType.PERSON)
mem.kg_add_entity("Friday", EntityType.PROJECT)
mem.kg_add_relationship("User", "Friday", "building")
mem.kg_add_attribute("User", "timezone", "Asia/Dubai")

print(mem.kg_query("User"))

# ── Attention scoring ─────────────────────────────────────────
result = mem.score_attention("URGENT: the API is down!", channel="dm")
print(result.level)   # → "full"
print(result.score)   # → 85

# ── Consolidation (nightly / on-demand) ───────────────────────
from extremis.consolidation import LLMConsolidator
consolidator = LLMConsolidator(mem._config, mem._embedder)
r = consolidator.run_pass(mem.get_log(), mem.get_local_store(), mem.get_local_store())
print(f"{r.memories_created} facts extracted from logs")

Storage backends

All backends share the same API. Swap with one env var.

Don't want anything stored locally?

Three options, all working out of the box:

| Option | Local footprint | Cost |
|---|---|---|
| Postgres on Supabase / Neon | None | Free tier available |
| Pinecone | RL score sidecar only (~KB) | Free tier available |
| HostedClient (your own server) | None at all | Your hosting cost |

Quickest: free Postgres on Supabase

# 1. Create project at supabase.com, grab the connection string
# 2. Enable pgvector: run "CREATE EXTENSION vector;" in the SQL editor
pip3.11 install "extremis[postgres]"
EXTREMIS_STORE=postgres EXTREMIS_POSTGRES_URL=postgresql://... python3.11 your_app.py

Zero footprint: HostedClient

from extremis import HostedClient
# deploy extremis-server on Railway/Fly/Render, point at it
mem = HostedClient(api_key="extremis_sk_...", base_url="https://your-server.railway.app")
# nothing written locally, not even the embedding model

SQLite: default, zero infrastructure

EXTREMIS_STORE=sqlite
EXTREMIS_FRIDAY_HOME=~/.extremis   # DB at ~/.extremis/local.db

Postgres + pgvector: production scale, ranking in SQL

pip3.11 install "extremis[postgres]"
EXTREMIS_STORE=postgres
EXTREMIS_POSTGRES_URL=postgresql://user:pass@host/extremis

Requires CREATE EXTENSION vector; in your database. Schema migrates automatically on first start.

Chroma: local vector DB, great for teams

pip3.11 install "extremis[chroma]"
EXTREMIS_STORE=chroma
EXTREMIS_CHROMA_PATH=~/.extremis/chroma

Pinecone: serverless hosted vectors

pip3.11 install "extremis[pinecone]"
EXTREMIS_STORE=pinecone
EXTREMIS_PINECONE_API_KEY=pk_...
EXTREMIS_PINECONE_INDEX=extremis

Create the index first (dimension must match your embedder):

from pinecone import Pinecone, ServerlessSpec
pc = Pinecone(api_key="pk_...")
pc.create_index("extremis", dimension=384, metric="cosine",
                spec=ServerlessSpec(cloud="aws", region="us-east-1"))

OpenAI embeddings: no model download

pip3.11 install "extremis[openai]"
EXTREMIS_EMBEDDER=text-embedding-3-small
OPENAI_API_KEY=sk-...
EXTREMIS_EMBEDDING_DIM=1536

Works with any storage backend. Removes the 90 MB local model download.


Migrating backends

Move all memories between backends in one command. extremis re-embeds automatically if the source and destination use different embedding models.

pip3.11 install "extremis[chroma,pinecone]"

# Escape Pinecone lock-in → local SQLite
extremis-migrate --from pinecone --to sqlite \
  --source-pinecone-api-key pk_... \
  --source-pinecone-index my-index

# Local SQLite → Postgres (upgrade to production)
extremis-migrate --from sqlite --to postgres \
  --dest-postgres-url postgresql://...

# Switch to OpenAI embeddings while migrating
extremis-migrate --from sqlite --to chroma \
  --dest-embedder text-embedding-3-small

# Dry run: count what would be migrated
extremis-migrate --from sqlite --to chroma --dry-run

Hosted API

Run extremis as a service: your users call it with an API key, and all compute happens server-side. No model download on the client. No local database.

Status: The server is fully built and self-hostable today. A managed cloud at api.extremis.com is in progress; join the waitlist.

One-click deploy to Render (memory lives in Render Postgres)

Deploy to Render

Clicking this button deploys extremis-server and provisions a free Postgres database automatically via render.yaml. Memory lives in Render's managed Postgres, persistent across restarts and redeploys.

Getting your API key: check the logs, it's already there.

On first startup, extremis auto-generates a key and prints it in the server logs. In Render:

  1. Click your extremis service → Logs tab

  2. Look for the block that says "extremis — FIRST START"

  3. Copy the key that starts with extremis_sk_...

============================================================
  extremis — FIRST START
============================================================
  No API keys found. Generated your first key:

  extremis_sk_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

  Namespace: default
  Store this key β€” it will NOT be shown again.
============================================================

Connect from anywhere with zero local footprint:

from extremis import HostedClient
mem = HostedClient(api_key="extremis_sk_...", base_url="https://your-app.onrender.com")

To create additional keys (e.g. per user/namespace), use Render's Shell tab:

extremis-server create-key --namespace alice --label "alice prod"

Deploy to Railway (manual, 3 steps)

⚠️ Don't use SQLite on Railway. Container filesystems are ephemeral; memories are lost on every restart. Always use Railway Postgres.

  1. Create a new project on railway.app → Deploy from GitHub repo → select extremis

  2. Add a Postgres plugin: + New → Database → PostgreSQL

  3. Set these environment variables on the extremis service:

    EXTREMIS_STORE=postgres
    EXTREMIS_POSTGRES_URL=${{Postgres.DATABASE_URL}}

Railway injects the URL automatically. Memory now lives in Railway's managed Postgres.

Self-host locally in 2 minutes

pip3.11 install "extremis[server]"

# Generate an API key
extremis-server create-key --namespace alice --label "prod"
# → extremis_sk_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx  (shown once, store it)

# Start the server
extremis-server serve --host 0.0.0.0 --port 8000

# Or with Docker (bundles Postgres + pgvector)
docker compose up

Connect from Python

from extremis import HostedClient

# Point at your self-hosted server
mem = HostedClient(api_key="extremis_sk_...", base_url="http://your-server:8000")

# Exact same API as Memory; nothing else changes
mem.remember("User is building a WhatsApp AI", conversation_id="c1")
results = mem.recall("WhatsApp")
mem.report_outcome([r.memory.id for r in results], success=True)

API endpoints

POST /v1/memories/remember     append to log + episodic store
POST /v1/memories/recall       semantic search, layered retrieval
POST /v1/memories/report       RL signal (+1/−1)
POST /v1/memories/store        direct write to any layer
POST /v1/memories/consolidate  LLM consolidation pass
GET  /v1/memories/observe      priority-tagged log compression
POST /v1/kg/write              add entity / relationship / attribute
POST /v1/kg/query              query + BFS graph traverse
POST /v1/attention/score       0–100 message priority score
GET  /v1/health

All requests require Authorization: Bearer extremis_sk_.... Namespace is derived from the key.
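As a sketch, a raw recall request might look like the following. The path and Bearer scheme come from the endpoint table above; the JSON field names, key, and server URL are assumptions, shown unsent so the shape of the request is visible:

```python
import json
import urllib.request

# Hypothetical server URL and key; field names in the payload are assumptions
BASE_URL = "https://your-server.onrender.com"
API_KEY = "extremis_sk_example"

req = urllib.request.Request(
    f"{BASE_URL}/v1/memories/recall",
    data=json.dumps({"query": "what is the user building?", "limit": 5}).encode(),
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
    method="POST",
)
# urllib.request.urlopen(req) would send it; omitted here
assert req.get_header("Authorization").startswith("Bearer extremis_sk_")
```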

Key management

extremis-server create-key --namespace prod_user_123 --label "production"
extremis-server list-keys
extremis-server list-keys --namespace prod_user_123
extremis-server revoke-key --key-hash abc123...

Deploy to production

Railway / Render (fastest, 10 minutes):

  1. Point at the Dockerfile

  2. Set EXTREMIS_STORE=postgres and EXTREMIS_POSTGRES_URL

  3. Deploy

Fly.io:

fly launch
fly secrets set EXTREMIS_STORE=postgres EXTREMIS_POSTGRES_URL=postgresql://...
fly deploy

Self-hosted Docker:

docker build -t extremis-server .
docker run -p 8000:8000 \
  -e EXTREMIS_STORE=postgres \
  -e EXTREMIS_POSTGRES_URL=postgresql://... \
  -v lore_data:/data \
  extremis-server

MCP setup

Claude Desktop

pip3.11 install "extremis[mcp]"

Add to ~/Library/Application Support/Claude/claude_desktop_config.json:

{
  "mcpServers": {
    "extremis": {
      "command": "extremis-mcp",
      "env": {
        "EXTREMIS_FRIDAY_HOME": "~/.extremis",
        "ANTHROPIC_API_KEY": "sk-ant-..."
      }
    }
  }
}

Restart Claude Desktop. Nine tools appear automatically.

Claude Code

claude mcp add extremis extremis-mcp \
  --env EXTREMIS_FRIDAY_HOME=~/.extremis \
  --env ANTHROPIC_API_KEY=sk-ant-...

SSE / HTTP mode

extremis-mcp --transport sse --port 8765

MCP tools

| Tool | What it does | LLM cost |
|---|---|---|
| memory_remember | Append to log + episodic store | None |
| memory_recall | Semantic search; identity + procedural always included | None |
| memory_report_outcome | +1/−1 RL signal on recalled memories | None |
| memory_remember_now | Direct write to any layer (bypasses the log) | None |
| memory_consolidate | Distil logs into semantic/procedural memories | Haiku |
| memory_kg_write | Add entity / relationship / attribute | None |
| memory_kg_query | Query entity + BFS graph traverse | None |
| memory_observe | Compress log into 🔴🟡🟢 observations | None |
| memory_score_attention | Score a message 0–100 | None |


Multi-user / namespace isolation

Two isolation models:

Instance-level: each user gets their own process and EXTREMIS_FRIDAY_HOME. This is what Claude Desktop does naturally.

Namespace-level: one deployment, many users. All memories, logs, and graph data are scoped per namespace. Zero leakage.

EXTREMIS_NAMESPACE=alice extremis-mcp   # Alice's memory
EXTREMIS_NAMESPACE=bob   extremis-mcp   # Bob's: completely separate, same DB

mem_alice = Extremis(config=Config(namespace="alice"))
mem_bob   = Extremis(config=Config(namespace="bob"))
# same DB file, zero crossover

Configuration

All settings via EXTREMIS_ environment variables or a .env file:

| Variable | Default | Description |
|---|---|---|
| EXTREMIS_STORE | sqlite | Backend: sqlite · postgres · chroma · pinecone |
| EXTREMIS_NAMESPACE | default | User/agent isolation scope |
| EXTREMIS_FRIDAY_HOME | ~/.extremis | Base dir for logs and the SQLite DB |
| EXTREMIS_POSTGRES_URL | (empty) | Postgres DSN (required when store=postgres) |
| EXTREMIS_CHROMA_PATH | ~/.extremis/chroma | ChromaDB persistence directory |
| EXTREMIS_PINECONE_API_KEY | (empty) | Pinecone API key |
| EXTREMIS_PINECONE_INDEX | extremis | Pinecone index name |
| EXTREMIS_EMBEDDER | all-MiniLM-L6-v2 | Model name (sentence-transformers or OpenAI) |
| EXTREMIS_EMBEDDING_DIM | 384 | Vector dimension (must match the model) |
| EXTREMIS_OPENAI_API_KEY | (empty) | OpenAI key (required for OpenAI embedders) |
| EXTREMIS_CONSOLIDATION_MODEL | claude-haiku-4-5-20251001 | LLM for consolidation |
| EXTREMIS_RL_ALPHA | 0.5 | Utility score weight in retrieval ranking |
| EXTREMIS_RECENCY_HALF_LIFE_DAYS | 90 | Recency decay half-life |
| EXTREMIS_ATTENTION_FULL_THRESHOLD | 75 | Score ≥ this → full attention |
| EXTREMIS_ATTENTION_STANDARD_THRESHOLD | 50 | Score ≥ this → standard |
| EXTREMIS_ATTENTION_MINIMAL_THRESHOLD | 25 | Score ≥ this → minimal |


How it compares

| | extremis | Mem0 | LangChain | Zep | Raw Pinecone |
|---|---|---|---|---|---|
| Self-hostable | ✅ | ❌ cloud only | ✅ | ✅ | ✅ |
| Backend-agnostic | ✅ 4 backends | ❌ | ⚠️ manual | ❌ | n/a |
| RL-scored retrieval | ✅ | ❌ | ❌ | ❌ | ❌ |
| Asymmetric feedback (1.5×) | ✅ | ❌ | ❌ | ❌ | ❌ |
| Knowledge graph | ✅ | ❌ | ❌ | ✅ | ❌ |
| 5-layer memory | ✅ | ⚠️ basic | ⚠️ basic | ⚠️ basic | ❌ |
| Log-first durability | ✅ | ❌ | ❌ | ❌ | ❌ |
| Migration CLI | ✅ | ❌ | ❌ | ❌ | n/a |
| Attention scoring | ✅ | ❌ | ❌ | ❌ | ❌ |
| MCP server (Claude) | ✅ | ❌ | ❌ | ❌ | ❌ |
| Hosted API | ✅ self-host | ✅ | ❌ | ✅ | n/a |
| Open source | ✅ MIT | ⚠️ partial | ✅ | ✅ | n/a |


Project structure

extremis/
├── src/extremis/
│   ├── api.py              ← Memory (the local API)
│   ├── client.py           ← HostedClient (the cloud API, same interface)
│   ├── config.py           ← Config (EXTREMIS_ env vars)
│   ├── types.py            ← Memory, Entity, Observation, AttentionResult, ...
│   ├── interfaces.py       ← LogStore, MemoryStore, Embedder protocols
│   ├── migrate.py          ← Migrator + extremis-migrate CLI
│   ├── storage/
│   │   ├── sqlite.py       ← SQLiteMemoryStore
│   │   ├── postgres.py     ← PostgresMemoryStore (pgvector, ranking in SQL)
│   │   ├── chroma.py       ← ChromaMemoryStore
│   │   ├── pinecone_store.py ← PineconeMemoryStore
│   │   ├── kg.py           ← SQLiteKGStore
│   │   ├── log.py          ← FileLogStore (JSONL, fsync, checkpoints)
│   │   └── score_index.py  ← SQLiteScoreIndex (RL scores for external backends)
│   ├── embeddings/
│   │   ├── sentence_transformers.py
│   │   └── openai.py
│   ├── consolidation/
│   │   ├── consolidator.py ← LLMConsolidator (log → Claude Haiku → memories)
│   │   └── prompts.py
│   ├── observer/
│   │   └── observer.py     ← HeuristicObserver (🔴🟡🟢)
│   ├── scorer/
│   │   └── attention.py    ← AttentionScorer (0–100)
│   ├── mcp/
│   │   └── server.py       ← FastMCP server (9 tools)
│   └── server/
│       ├── app.py          ← FastAPI hosted API
│       ├── auth.py         ← API key management
│       ├── deps.py         ← FastAPI dependencies
│       └── routes/         ← memories, kg, health
├── Dockerfile
├── docker-compose.yml
└── tests/                  ← 50 test files, no LLM calls

Contributing

See CONTRIBUTING.md. The quickest contribution is a new storage backend: implement the MemoryStore protocol in storage/ and add tests. We'll merge it.
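As a starting point, a toy in-memory backend might look like this. The protocol methods shown (upsert, search) are illustrative only; the real MemoryStore signatures live in src/extremis/interfaces.py:

```python
from typing import Any, Protocol

class MemoryStore(Protocol):
    """Hypothetical sketch of the protocol; see interfaces.py for the real one."""
    def upsert(self, memory_id: str, vector: list[float], payload: dict[str, Any]) -> None: ...
    def search(self, vector: list[float], limit: int) -> list[dict[str, Any]]: ...

class InMemoryStore:
    """Toy backend: brute-force cosine search over a dict."""

    def __init__(self) -> None:
        self._rows: dict[str, tuple[list[float], dict[str, Any]]] = {}

    def upsert(self, memory_id: str, vector: list[float], payload: dict[str, Any]) -> None:
        self._rows[memory_id] = (vector, payload)

    def search(self, vector: list[float], limit: int) -> list[dict[str, Any]]:
        def cos(a: list[float], b: list[float]) -> float:
            dot = sum(x * y for x, y in zip(a, b))
            na = sum(x * x for x in a) ** 0.5
            nb = sum(x * x for x in b) ** 0.5
            return dot / (na * nb) if na and nb else 0.0

        ranked = sorted(self._rows.values(), key=lambda r: cos(vector, r[0]), reverse=True)
        return [payload for _, payload in ranked[:limit]]

store = InMemoryStore()
store.upsert("m1", [1.0, 0.0], {"content": "user likes Python"})
store.upsert("m2", [0.0, 1.0], {"content": "user dislikes meetings"})
assert store.search([1.0, 0.1], limit=1)[0]["content"] == "user likes Python"
```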

Security

See SECURITY.md for reporting vulnerabilities.

License

MIT · Built by Ashwani Jha

