celiums-memory
Celiums-memory is an MCP server that gives AI assistants persistent memory and access to a vast expert knowledge base across sessions via 6 tools:
forage — Search expert knowledge: query 5,100+ expert modules using natural language with hybrid full-text + semantic search, returning ranked results with titles, descriptions, and categories.
absorb — Load full module content: retrieve the complete text of a specific module by slug (typically 2,000–20,000 words), including code examples and best practices.
sense — Get recommendations: describe a goal or task and receive personalized module recommendations grouped by relevance.
map_network — Browse the knowledge network: explore all available categories, module counts, and top modules; no parameters needed.
remember — Store persistent memories: save facts, decisions, preferences, or code patterns across sessions. Memories are auto-classified, importance-scored, and analyzed for emotional context (PAD model). Supports project-scoped or global storage.
recall — Retrieve memories: search stored memories with semantic + emotional relevance ranking (vector similarity, full-text, and emotional resonance), with optional project-scope filtering.
Additional features:
Per-user circadian rhythm tracking (timezone + chronotype adaptation)
IDE integration with Claude Code, Cursor, and VS Code via MCP
REST API for programmatic access
Multi-language support: English, Spanish, Portuguese, Chinese, Japanese
Privacy-first: local-first storage, API key auth, per-user isolation, zero telemetry
Deployment options: local SQLite, Docker stack (PostgreSQL + Qdrant + Valkey), or DigitalOcean 1-click
Supports Cloudflare Tunnel deployment for secure external access to the Celiums server infrastructure.
Offers one-click deployment of the complete Celiums stack on DigitalOcean droplets.
Provides Docker Compose deployment for the full Celiums stack, including PostgreSQL, Qdrant, and Valkey.
Supports PostgreSQL (with the pgvector extension) as the primary database backend for both knowledge modules and memory storage, SQLite as a lightweight single-file alternative for local development, and Valkey (Redis-compatible) for caching and memory storage as part of the triple-store persistence architecture.
The 5,100+ technical modules available for search and reference include expert knowledge on Express.js, Hono, Kubernetes security, React and React Server Components, TypeScript mastery, and related topics.
Celiums
Your AI doesn't know what it doesn't know. And it forgets everything.
The open-source engine that gives AI persistent memory and instant access to 5,100+ expert knowledge modules — with a biological clock that adapts to each user.
Try the Live Demo · Quick Start · 6 Tools · How to Use · Architecture · Deploy · Docs
What's new in v1.2 — 2026-04-27
🆕 26 MCP tools (was 6): journal (5), write (7), and research (8) tool families added.
🆕 BYOK LLM — bring your own OpenAI-compatible endpoint. Works with OpenAI, Ollama, OpenRouter, Together, Groq, vLLM, LM Studio. No proprietary lock-in.
🆕 Ethics Engine layers B + C — CVaR-probabilistic risk scoring + 5-framework philosophical evaluation, on top of layer A.
🆕 Integration utilities — encrypted credential storage (integrations/crypto.ts), schema for tenant integrations, opportunistic LLM-powered output formatting (humanize.ts), and a free-form-query intent classifier.
🧹 OpenCore is fully self-contained — zero network calls if you don't configure an LLM. The engine boots clean with nothing but a database.
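The encrypted credential storage can be pictured as an authenticated-encryption round-trip. The sketch below is illustrative only — the actual integrations/crypto.ts API, key-derivation scheme, and storage format are not documented here, so every name and parameter in this block is an assumption:

```typescript
import { randomBytes, createCipheriv, createDecipheriv, scryptSync } from "node:crypto";

// Illustrative key derivation — the real module may manage keys differently.
const key = scryptSync(process.env.CELIUMS_MASTER_KEY ?? "dev-only-secret", "celiums-salt", 32);

function encryptCredential(plaintext: string): string {
  const iv = randomBytes(12); // 96-bit nonce, fresh per credential
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const ct = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
  const tag = cipher.getAuthTag();
  // Pack iv + auth tag + ciphertext into one base64 blob for storage.
  return Buffer.concat([iv, tag, ct]).toString("base64");
}

function decryptCredential(blob: string): string {
  const buf = Buffer.from(blob, "base64");
  const iv = buf.subarray(0, 12);
  const tag = buf.subarray(12, 28);
  const ct = buf.subarray(28);
  const decipher = createDecipheriv("aes-256-gcm", key, iv);
  decipher.setAuthTag(tag); // tampered blobs fail here
  return Buffer.concat([decipher.update(ct), decipher.final()]).toString("utf8");
}
```

Because the nonce is random per call, encrypting the same credential twice yields different blobs, and any modification of a stored blob fails authentication on decrypt.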
The Problem
Every time your AI assistant starts a new session, it starts from zero. It doesn't remember your preferences, your project decisions, your debugging history, or what you were working on yesterday. It hallucinates because it has no specialized knowledge — just general training data frozen at a cutoff date.
You spend more time re-explaining context than getting work done.
The Solution
Celiums combines two engines into one:
| Engine | What it does | How |
| --- | --- | --- |
| Memory | Remembers everything — with emotion | PAD vectors, dopamine, circadian rhythm, 15 cognitive modules |
| Knowledge | Knows what experts know | 5,100 curated technical modules, full-text search, 18 categories |
Both engines expose 6 MCP tools that any AI IDE can call autonomously. Install once, your AI has persistent memory AND expert knowledge forever.
See it in action: ask.celiums.ai
Talk to Celiums AI directly — it uses all 5,100 modules, remembers you across sessions, and has a real circadian rhythm. Zero-knowledge: your data is never used for training.
Quick Start
Option 1: npm (local, 60 seconds)
npm install -g @celiums/cli
celiums init

That's it. celiums init:
Asks your name, timezone, and if you're a morning or night person
Loads 5,100 expert knowledge modules
Auto-configures Claude Code, Cursor, and VS Code
Creates your personal cognitive profile (circadian rhythm adapts to YOU)
Option 2: Docker (VPS, 3 minutes)
# 1. Clone
git clone https://github.com/terrizoaguimor/celiums-memory.git
cd celiums-memory
# 2. Configure
cp .env.example .env # edit passwords
# 3. Start infrastructure (PostgreSQL + Qdrant + Valkey)
docker compose up -d
# 4. Install dependencies
pnpm install
# 5. Build + start Celiums
pnpm setup

You get: Celiums API on port 3210 + PostgreSQL + Qdrant + Valkey. On first run, 5,100 expert modules are loaded automatically.
Option 3: DigitalOcean 1-Click (coming soon)
One button. Deploys everything on your own DO droplet.
Configure your LLM (BYOK)
OpenCore tools (recall, remember, forage, absorb, sense, map_network, synthesize, bloom, cultivate) work without any LLM — pure local memory + knowledge base.
The AI-backed tools (journal, write, research) require an OpenAI-compatible chat endpoint. You bring your own key. The engine never talks to a Celiums-hosted service for inference.
# Option A — OpenAI (default endpoint)
export CELIUMS_LLM_API_KEY=sk-...
# Option B — Ollama (local, free, no API key)
export CELIUMS_LLM_BASE_URL=http://localhost:11434/v1
export CELIUMS_LLM_API_KEY=ollama
export CELIUMS_LLM_MODEL=llama3.2
# Option C — OpenRouter (any model, one key)
export CELIUMS_LLM_BASE_URL=https://openrouter.ai/api/v1
export CELIUMS_LLM_API_KEY=sk-or-...
export CELIUMS_LLM_MODEL=anthropic/claude-3.5-sonnet
# Option D — Together / Groq / Anyscale / vLLM / LM Studio
# Same pattern: set BASE_URL + API_KEY + (optional) MODEL.

| Env var | Default | Purpose |
| --- | --- | --- |
| CELIUMS_LLM_BASE_URL | | OpenAI-compatible endpoint root |
| CELIUMS_LLM_API_KEY | (empty — required to enable AI tools) | Bearer token for the endpoint |
| CELIUMS_LLM_MODEL | | Default chat model |
| | | Default embedding model |
| CELIUMS_SEARCH_URL | (empty — optional) | Corpus-search backend for research_search / research_synthesize |
If CELIUMS_LLM_API_KEY is not set, AI-backed tools are simply not registered — tools/list returns only OpenCore. The engine never errors at boot for missing optional config.
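The gating rule above can be sketched as a registration step that simply skips AI-backed tools when no key is present. This is an illustrative model of the behavior described, not the server's actual registration code:

```typescript
// Capability gating sketch: AI-backed tools register only when
// CELIUMS_LLM_API_KEY is set; missing config is never a boot error.
const OPEN_CORE = ["forage", "absorb", "sense", "map_network", "remember", "recall"];
const AI_BACKED = ["journal", "write", "research"]; // v1.2 tool families

function registeredTools(env: Record<string, string | undefined>): string[] {
  const tools = [...OPEN_CORE];
  if (env.CELIUMS_LLM_API_KEY) tools.push(...AI_BACKED);
  return tools; // what tools/list would reflect
}
```

With an empty environment, only the six OpenCore tools appear; setting a key adds the AI-backed families without any other change.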
The Tools
When connected via MCP, your AI can call these autonomously. Tools split into OpenCore (always available, no LLM required) and AI-backed (require an OpenAI-compatible LLM key — see Configure your LLM below).
Knowledge tools — OpenCore
| Tool | What it does | Example |
| --- | --- | --- |
| forage | Search for expert knowledge | "find modules about Kubernetes security" |
| absorb | Load a specific module | "load the react-server-components module" |
| sense | Get recommendations for a goal | "what should I use for building a REST API?" |
| map_network | Browse all categories | "show me what knowledge areas are covered" |
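Under the hood, an MCP client invokes these with a JSON-RPC 2.0 tools/call request. A minimal envelope for forage might look like the following — the query argument name follows the usage examples later in this README; any other argument names would be assumptions:

```typescript
// Build a JSON-RPC 2.0 envelope for an MCP tools/call invocation.
function forageRequest(id: number, query: string) {
  return {
    jsonrpc: "2.0" as const,
    id,
    method: "tools/call",
    params: { name: "forage", arguments: { query } },
  };
}

// Serialized body, ready to POST to the /mcp endpoint shown in the
// REST API section of this README.
const body = JSON.stringify(forageRequest(1, "Kubernetes security"));
```

The same envelope shape works for the other tools; only params.name and params.arguments change.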
Memory tools — OpenCore
| Tool | What it does | Example |
| --- | --- | --- |
| remember | Store something in memory | "remember that we chose Hono over Express" |
| recall | Retrieve by semantic relevance | "what framework decisions did we make?" |
| synthesize | Consolidate memories into a narrative | "what did I learn this week?" |
| bloom | Expand a concept into related ideas | "explore variations of memory consolidation" |
| cultivate | Deep-dive a topic | "cultivate hybrid retrieval" |
Journal tools — AI-backed (since v1.2)
Persistent agent diary that survives across discontinuous invocations. Every model carries its own journal — when a new model takes over, it can read the predecessor's entries but never claim it lived them.
| Tool | What it does |
| --- | --- |
| | Append a new entry (auto-embedded, importance-scored) |
| | Semantic + tag + type search across the agent's history |
| | Build a coherent arc with anti-confabulation guardrails |
| | Answer a self-question grounded in entries only |
| | The agent reacts to a user-shared entry |
Write tools — AI-backed (since v1.2)
Novelist-grade project state. Tracks secrets_known_at_chapter per character, worldbuilding rules with cost/exceptions, and timeline markers — flags structural continuity issues, not line-by-line prose problems.
write_project_create, write_project_get, write_character_create, write_scene_create, write_scene_update, write_continuity_check, write_export.
Research tools — AI-backed (since v1.2)
Persistent multi-session investigations with citations, findings, and gaps. Resume a project days later and see all prior context in one shot.
research_project_create, research_project_list, research_project_continue, research_finding_add, research_gap_add, research_search, research_synthesize, research_export.
research_search and research_synthesize need a corpus-search backend (CELIUMS_SEARCH_URL — any service exposing POST /v1/search). Without it, the project/findings/gaps trackers still work fine.
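Since any service exposing POST /v1/search can act as the corpus backend, a toy backend fits in a few lines. The request and response schemas are not specified in this README, so the { query } body and { results } reply below are placeholders, and the in-memory corpus and scorer are stand-ins for a real index:

```typescript
import { createServer } from "node:http";

// Toy corpus + naive term-overlap scorer — only the endpoint contract matters.
const corpus = [
  { slug: "kubernetes-security", text: "kubernetes security rbac network policies" },
  { slug: "postgresql-best-practices-v2", text: "postgresql optimization indexes vacuum" },
];

function search(query: string) {
  const terms = query.toLowerCase().split(/\s+/);
  return corpus
    .map((doc) => ({ slug: doc.slug, score: terms.filter((t) => doc.text.includes(t)).length }))
    .filter((r) => r.score > 0)
    .sort((a, b) => b.score - a.score);
}

const server = createServer((req, res) => {
  if (req.method === "POST" && req.url === "/v1/search") {
    let body = "";
    req.on("data", (chunk) => (body += chunk));
    req.on("end", () => {
      const { query } = JSON.parse(body); // assumed request shape
      res.setHeader("Content-Type", "application/json");
      res.end(JSON.stringify({ results: search(query) })); // assumed reply shape
    });
  } else {
    res.statusCode = 404;
    res.end();
  }
});
// server.listen(8080); // then: CELIUMS_SEARCH_URL=http://localhost:8080
```

A real deployment would back search() with full-text or vector retrieval; the HTTP surface stays the same.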
What happens behind remember (the user sees nothing, it just works):
User: "remember that we chose Hono over Express for the API"
|
PAD Emotional Vector (pleasure: 0.4, arousal: 0.3, dominance: 0.5)
|
Theory of Mind (empathy matrix transforms user emotion)
|
Dopamine / Habituation (novelty detection, reward modulation)
|
Per-User Circadian (your timezone, your peak hour, your rhythm)
|
PFC Regulation (clamp safe bounds, suppress extremes)
|
Triple-Store Persist (PostgreSQL + Qdrant + Valkey)
|
"Remembered (importance: 0.72)"

15 cognitive systems fire on a single remember call. The user just types one sentence.
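The "PFC Regulation (clamp safe bounds, suppress extremes)" step in the pipeline above can be illustrated with a minimal clamp over a PAD vector. The real cognitive module is more elaborate, and the bounds used here are assumptions:

```typescript
// A PAD emotional vector: pleasure, arousal, dominance.
type PAD = { pleasure: number; arousal: number; dominance: number };

// Illustrative PFC-style regulation: clamp each dimension into an
// assumed safe band so no single event can saturate the state.
function regulate(v: PAD, lo = -0.9, hi = 0.9): PAD {
  const clamp = (x: number) => Math.min(hi, Math.max(lo, x));
  return {
    pleasure: clamp(v.pleasure),
    arousal: clamp(v.arousal),
    dominance: clamp(v.dominance),
  };
}
```

An extreme input like { pleasure: 1.5, arousal: 0.3, dominance: -2 } is pulled back into the safe band while in-range values pass through unchanged.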
How to Use It
Connect to your IDE
After celiums init, it's auto-wired. Or manually:
Claude Code:
claude mcp add celiums -- celiums start --mcp

Cursor — add to ~/.cursor/mcp.json:
{
"mcpServers": {
"celiums": { "command": "celiums", "args": ["start", "--mcp"] }
}
}

VS Code — add to settings.json:
{
"mcp.servers": {
"celiums": { "type": "stdio", "command": "celiums", "args": ["start", "--mcp"] }
}
}

Use the tools in conversation
Once connected, your AI uses the tools automatically. Just talk normally:
You: "Find me best practices for PostgreSQL optimization"
AI: -> calls forage(query="PostgreSQL optimization")
-> finds postgresql-best-practices-v2 (eval: 4.0)
-> presents the expert module content
You: "Remember that we decided to use JSONB for metadata columns"
AI: -> calls remember(content="decided to use JSONB for metadata columns")
-> stored with importance 0.68, mood: focused
You: "What database decisions have we made?"
AI: -> calls recall(query="database decisions")
-> finds: "decided to use JSONB for metadata" (score: 0.89)
-> presents with emotional context

REST API
If running as a server (Docker/VPS), the full API is available:
# Search modules
curl http://localhost:3210/v1/modules?q=react+hooks
# Get a specific module
curl http://localhost:3210/v1/modules/typescript-mastery
# Browse categories
curl http://localhost:3210/v1/categories
# Store a memory
curl -X POST http://localhost:3210/store \
-H "Content-Type: application/json" \
-d '{"content": "The API uses Hono framework", "userId": "dev1"}'
# Recall memories
curl -X POST http://localhost:3210/recall \
-H "Content-Type: application/json" \
-d '{"query": "what framework", "userId": "dev1"}'
# Check your circadian rhythm
curl http://localhost:3210/circadian?userId=dev1
# Update your timezone
curl -X PUT http://localhost:3210/profile \
-H "Content-Type: application/json" \
-d '{"userId": "dev1", "timezoneIana": "Asia/Tokyo", "timezoneOffset": 9}'
# MCP protocol (for AI clients)
curl -X POST http://localhost:3210/mcp \
-H "Content-Type: application/json" \
-d '{"jsonrpc":"2.0","id":1,"method":"tools/list"}'
# Health check
curl http://localhost:3210/health

Configuration
All settings via environment variables:
# Core
DATABASE_URL=postgresql://user:pass@localhost:5432/celiums_memory
QDRANT_URL=http://localhost:6333
VALKEY_URL=redis://localhost:6379
PORT=3210
# SQLite mode (alternative, single file, zero infrastructure)
SQLITE_PATH=./celiums.db
# Knowledge engine
KNOWLEDGE_DATABASE_URL=postgresql://user:pass@localhost:5432/celiums
# Onboarding (auto-configure on first run)
CELIUMS_USER_NAME=dev1
CELIUMS_LANGUAGE=en # en, es, pt-BR, zh-CN, ja
CELIUMS_TIMEZONE=America/New_York
CELIUMS_CHRONOTYPE=morning # morning, neutral, night

Architecture
Your AI (Claude Code, Cursor, VS Code, any MCP client)
|
| MCP JSON-RPC (6 tools)
v
CELIUMS ENGINE (1 process, 1 port)
| |
| Knowledge Engine | Memory Engine
| forage, absorb, | remember, recall
| sense, map_network |
| | 15 cognitive modules:
| 5,100 modules | limbic, circadian, dopamine,
| 18 dev categories | personality, ToM, PFC, ANS,
| full-text search | habituation, reward,
| | interoception, consolidation,
| | lifecycle, autonomy,
| | recall engine, importance
| |
v v
Modules DB Memory DB
(SQLite or PostgreSQL)      (SQLite or PG + Qdrant + Valkey)

Per-User Circadian Rhythm
Each user gets their own biological clock:
curl http://localhost:3210/circadian?userId=dev1
# {
# "localHour": 10.5,
# "rhythmComponent": 0.99,
# "timeOfDay": "morning-peak",
# "circadianContribution": 0.30
# }

A user in Tokyo gets a different arousal level than a user in New York at the same moment.
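The rhythmComponent in the response above can be pictured as a cosine over local hour with a chronotype-shifted peak. This is an illustrative model only — the engine's actual formula and peak hours are not documented here, so the values below are assumptions:

```typescript
// Illustrative circadian curve: a cosine peaking at a chronotype-
// dependent hour, scaled to [0, 1]. Peak hours are assumed values.
const PEAK_HOUR: Record<string, number> = { morning: 10.5, neutral: 14, night: 20 };

function rhythmComponent(localHour: number, chronotype: "morning" | "neutral" | "night"): number {
  const phase = ((localHour - PEAK_HOUR[chronotype]) / 24) * 2 * Math.PI;
  return (Math.cos(phase) + 1) / 2; // 1.0 at the peak, 0.0 twelve hours away
}
```

Under this model, a morning chronotype at 10:30 local time sits at the top of the curve, while a night chronotype at the same hour is well below it — which is exactly why two users in different timezones get different arousal at the same instant.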
Capability Gating
Tools appear based on your configuration. No upgrade prompts, no locked features visible.
| Tier | Tools | What you get |
| --- | --- | --- |
| OpenCore (free) | 6 | forage, absorb, sense, map_network, remember, recall + 5,100 modules |
| + Fleet (coming) | +8 | synthesize, bloom, cultivate, pollinate, decompose, fleet, construct |
| + Atlas (coming) | +12 | Real-time collaboration, 451K+ modules |
Deploy Modes
Local (SQLite)
SQLITE_PATH=./celiums.db celiums start

Everything in one file. Perfect for individual developers.
Docker (full stack)
docker compose up -d

PostgreSQL 17 + pgvector, Qdrant, Valkey. Optional Cloudflare Tunnel:
docker compose --profile tunnel up -d

DigitalOcean 1-Click (coming soon)
One button creates a droplet with everything pre-configured.
Languages
| Language | Status |
| --- | --- |
| English | Default |
| Español | Supported |
| Português (Brasil) | Supported |
| Chinese (Simplified) | Supported |
| Japanese | Supported |
Auto-detected from your OS during celiums init.
Packages
| Package | Description |
| --- | --- |
| | Cognitive engine (15 modules, PAD, circadian) |
| | TypeScript types |
| | 5,100 curated expert modules |
| | Knowledge engine (search, modules, tools) |
| | CLI ( |
| | MCP protocol adapter |
| | REST API adapter |
| | OpenAI Function Calling adapter |
| | Google A2A protocol adapter |
Security
Local-first. Your memories live ONLY on your machine or your own server. Nothing is sent to us.
API key auth. Bearer token required for all non-localhost requests.
Per-user isolation. Each user has their own memory space, emotional state, and circadian profile.
No telemetry. Zero analytics, zero tracking, zero phone-home.
Contributing
See CONTRIBUTING.md.
git clone https://github.com/terrizoaguimor/celiums-memory.git
cd celiums-memory
pnpm install
pnpm build

Support This Project
This project is built on ADHD hyperfocus, too much coffee, and the stubborn belief that AI deserves a real brain. Every one of these 11,000+ lines was written between 20-hour coding sessions, fueled by curiosity and obsession.
If Celiums is useful to you, or if you believe AI should have emotions and not just compute, consider supporting the work.
Your contribution keeps the GPUs running, the coffee flowing, and this project alive.
License
Apache 2.0 — see LICENSE
Built with obsessive attention to detail.
celiums.ai · npm · GitHub