
Server Configuration

Describes the environment variables required to run the server.

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| PORT | No | Port for the Celiums server | 3210 |
| QDRANT_URL | No | URL for the Qdrant vector database | |
| VALKEY_URL | No | URL for the Valkey (Redis) database | |
| SQLITE_PATH | No | Path to the SQLite database file (alternative to PostgreSQL) | |
| DATABASE_URL | No | PostgreSQL database URL for memory storage | |
| CELIUMS_LANGUAGE | No | Language for the interface (en, es, pt-BR, zh-CN, ja) | en |
| CELIUMS_TIMEZONE | No | IANA timezone for the user | |
| CELIUMS_USER_NAME | No | User name for onboarding | |
| CELIUMS_CHRONOTYPE | No | Chronotype for circadian rhythm (morning, neutral, night) | |
| KNOWLEDGE_DATABASE_URL | No | PostgreSQL database URL for knowledge modules | |
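The defaults above can be applied with a small configuration reader. This is a hedged sketch, not the server's actual startup code; only `PORT` (3210) and `CELIUMS_LANGUAGE` (`en`) have documented defaults, and the remaining variables are optional.

```python
import os

def load_config(env=os.environ):
    """Read Celiums server settings, falling back to the documented defaults."""
    return {
        "port": int(env.get("PORT", "3210")),            # documented default: 3210
        "language": env.get("CELIUMS_LANGUAGE", "en"),   # documented default: en
        "qdrant_url": env.get("QDRANT_URL"),             # optional
        "database_url": env.get("DATABASE_URL"),         # optional
        "sqlite_path": env.get("SQLITE_PATH"),           # optional alternative to PostgreSQL
    }

config = load_config(env={})
print(config["port"], config["language"])
```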

Capabilities

Features and capabilities supported by this server

| Capability | Details |
| --- | --- |
| tools | `{"listChanged": true}` |

Tools

Functions exposed to the LLM to take actions

forage

Search 500,000+ expert knowledge modules by natural language query. Returns ranked results with titles, descriptions, and categories. Use when the user needs technical guidance, best practices, or domain expertise. Behavior: performs hybrid search (full-text + semantic) across the knowledge base, ranks by relevance, returns top N matches. Example queries: "kubernetes horizontal pod autoscaler", "react hooks best practices", "HIPAA compliance checklist".
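A forage call would be sent as a standard MCP `tools/call` JSON-RPC request. The request envelope below follows the MCP specification; the `query` argument name is an assumption, since this listing does not show the tool's input schema.

```python
import json

def tool_call(name, arguments, req_id=1):
    """Build a JSON-RPC 2.0 `tools/call` request per the MCP spec."""
    return {
        "jsonrpc": "2.0",
        "id": req_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    }

# "query" is a hypothetical argument name; the schema is not shown here.
req = tool_call("forage", {"query": "kubernetes horizontal pod autoscaler"})
print(json.dumps(req, indent=2))
```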

absorb

Load the full content of a knowledge module by its exact name/slug. Returns the complete module text (typically 2,000-20,000 words) with code examples, best practices, and references. Use after forage to read a specific module in full. Behavior: looks up the module by slug, returns full markdown content. If not found, suggests using forage to search. Example: absorb("react-mastery") returns the complete React mastery guide.
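The intended flow is forage first, then absorb on one result. A minimal sketch of that handoff, assuming each forage result carries a `slug` field (the result shape is not documented here):

```python
def next_absorb_call(forage_results):
    """Given ranked forage results, build the follow-up absorb call
    for the top-ranked module. The result shape is assumed."""
    if not forage_results:
        return None  # nothing found: fall back to another forage query
    top = forage_results[0]
    return {"name": "absorb", "arguments": {"name": top["slug"]}}

results = [{"slug": "react-mastery", "title": "React Mastery", "score": 0.92}]
print(next_absorb_call(results))
```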

sense

Get personalized module recommendations based on a goal or task description. Uses keyword matching and category ranking (no AI inference). Faster than forage for broad exploration. Use when the user describes what they want to achieve and needs guidance on which modules to study. Behavior: analyzes the goal text, matches against module metadata, returns ranked suggestions grouped by relevance. Example: sense("I want to deploy a microservices app on Kubernetes with monitoring").

map_network

Browse the entire Celiums knowledge network organized by category. Returns all categories with module counts, top modules per category, and total statistics. Use to explore what knowledge is available, discover categories, or get an overview of the knowledge base. Behavior: queries the module index, groups by category, returns a structured map with counts. No parameters needed — returns the full network overview.

remember

Store information in persistent memory that survives across all sessions and machines. Memories are automatically classified by type (semantic, procedural, episodic) and importance. Use to save facts, preferences, decisions, context, or any information that should be recalled later. Behavior: stores the content with emotional analysis (PAD model), assigns importance score, updates circadian interaction tracking. Scoped to current project by default — use projectId="global" for cross-project memories like user preferences or business decisions.
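The project-scoping rule is the part worth getting right: memories default to the current project, and only `projectId="global"` crosses projects. A hedged sketch of the argument construction (field names follow the description; the exact schema is an assumption):

```python
def remember_args(content, project_id=None):
    """Build arguments for `remember`. Memories are project-scoped by
    default; pass project_id="global" for cross-project memories."""
    args = {"content": content}
    if project_id is not None:
        args["projectId"] = project_id
    return args

# A user preference should survive across projects:
print(remember_args("User prefers dark mode", project_id="global"))
```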

recall

Search persistent memory using semantic + emotional relevance ranking. Returns memories sorted by relevance, recency, and emotional resonance. Searches current project + global memories by default. Use to retrieve previously stored facts, decisions, preferences, or context. Behavior: performs hybrid retrieval (vector similarity + full-text + emotional resonance), applies spaced activation recall (SAR) filtering, returns ranked results with content, type, importance, and relevance score.

journal_write

Append a first-person entry to YOUR (the model's) persistent journal. Each agent_id (e.g. claude-opus-4-7, claude-sonnet-4-6, gpt-5, ...) has its OWN journal — they do NOT mix. importance is auto-computed: decisions/lessons/arcs are weighted higher; emotions are weighted lower. The content is embedded via the configured embedding model (CELIUMS_EMBED_MODEL) so journal_recall can find it semantically later. visibility=self (default) keeps the entry private; user-shared makes it eligible for journal_dialogue. preceded_by builds a causal chain — pass the ids of prior entries that led to this one.
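The causal-chain mechanic works by passing prior entry ids in `preceded_by`. A sketch of two chained entries, with field names mirroring the description (the exact payload schema and the entry id are hypothetical):

```python
def journal_entry(content, entry_type, preceded_by=None, visibility="self"):
    """Sketch of a journal_write payload. `preceded_by` links this entry
    to the prior entries that led to it, forming a causal chain."""
    return {
        "content": content,
        "entry_type": entry_type,
        "visibility": visibility,          # "self" (default) keeps it private
        "preceded_by": preceded_by or [],  # ids of causally prior entries
    }

decision = journal_entry("Chose Qdrant over pgvector for recall latency.",
                         "decision")
# "entry-41" is a hypothetical id of the decision entry above:
lesson = journal_entry("Benchmark before committing to a vector store.",
                       "lesson", preceded_by=["entry-41"])
print(lesson["preceded_by"])
```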

journal_recall

Search YOUR journal. Filters by entry_type, tags, and/or a semantic query (embedded via the configured embedding model, ranked by cosine similarity). By default scopes to YOUR agent_id; pass inherit_from=<predecessor_agent_id> to read a predecessor model's journal — those entries return with inherited_from set in the response, marking them as "read but not lived" (Option C of the succession-of-models design). DEFAULT excludes entries that have been superseded or recanted; pass include_superseded=true to see them.
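The default exclusion of superseded and recanted entries can be sketched as a filter step. The entry shape below is an assumption, not the server's schema; only the documented rule (excluded unless `include_superseded=true`) is taken from the description.

```python
def filter_entries(entries, include_superseded=False):
    """Default journal_recall behavior: drop superseded or recanted
    entries unless include_superseded is set."""
    if include_superseded:
        return entries
    return [e for e in entries
            if not e.get("superseded") and not e.get("recanted")]

entries = [
    {"id": "e1", "content": "first take", "superseded": True},
    {"id": "e2", "content": "revised take"},
]
print([e["id"] for e in filter_entries(entries)])
```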

journal_arc

Build a coherent arc across YOUR recent entries using the configured LLM — with anti-confabulation guardrails. Output ALWAYS returns 4 keys: narrative, contradictions (entry pairs in tension), outliers (entries that don't fit), and confidence [0,1]. If outliers is empty you are probably confabulating coherence — the response is annotated with a WARNING. confidence < 0.7 is flagged as a "weak arc". Default window is the last month, max 50 entries. Excludes superseded entries.
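The two documented guardrails (empty `outliers` means likely confabulation; `confidence < 0.7` means a weak arc) can be applied client-side to the four-key result:

```python
def arc_warnings(arc):
    """Apply the documented journal_arc guardrails. `arc` always has
    keys: narrative, contradictions, outliers, confidence."""
    warnings = []
    if not arc["outliers"]:
        warnings.append("WARNING: no outliers, possible confabulated coherence")
    if arc["confidence"] < 0.7:
        warnings.append("weak arc")
    return warnings

arc = {"narrative": "…", "contradictions": [], "outliers": [],
       "confidence": 0.62}
print(arc_warnings(arc))
```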

journal_introspect

Ask YOUR journal a self-question. Pulls semantically-relevant entries, then asks the configured LLM to answer in YOUR first-person voice grounded ONLY in those entries (no invention). Returns the answer plus entries_referenced and a hallucination_risk score (high if <3 entries grounded the answer, medium if <6, otherwise low). If entries don't support an answer, the answer literally is "no patterns found in journal".
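The `hallucination_risk` thresholds are stated exactly, so they can be reproduced directly:

```python
def hallucination_risk(grounding_count):
    """Risk score as documented: high if fewer than 3 entries grounded
    the answer, medium if fewer than 6, otherwise low."""
    if grounding_count < 3:
        return "high"
    if grounding_count < 6:
        return "medium"
    return "low"

print(hallucination_risk(5))
```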

journal_dialogue

The user replies to one of your user-shared entries. The tool refuses with "entry is private" if visibility=self. Otherwise the configured LLM writes YOUR honest first-person reaction to their reply, and a new reflection entry is created with preceded_by=[entry_id] and content "User reply: …\n\nMy reaction: …". Both entries are tagged "dialogue".

write_project_create

Create a writing project (novel, screenplay, long-form). Returns a project_id used by all other write_* tools. structureTemplate enables beat tracking against a known structure: three-act | save-the-cat | hero-journey | snowflake | free.

write_project_get

Get full project state: metadata, all characters, scene count, total word count, and the 5 most recent scenes. Use to orient yourself when resuming work.

write_character_create

Create or upsert a character. Voice sample is critical for continuity_check — it lets the editor detect when a character's dialogue drifts from their established voice. Pass voiceSample as a 100-300 word excerpt of how they speak.

write_scene_create

Insert a scene at a specific position. POV character + location + time_marker enable continuity_check. scene_goal/conflict/outcome are optional but recommended — they make the scene's purpose explicit and improve revision suggestions.

write_scene_update

Replace a scene's content. Automatically snapshots the previous version into write_revision_log so the writer can diff between revisions later. Bumps the version counter.

write_continuity_check

Signature feature: structural continuity check using Opus 4.7. Loads the target scene, the prior 20 scenes, all characters (with their secrets_known_at_chapter and voice samples), and worldbuilding rules. Outputs a JSON list of issues: secret leak, description drift, timeline conflict, worldbuilding violation, voice drift. Each issue includes severity, scene_position, description, and a suggested_fix. No other writing tool does this: Sudowrite, Grammarly, and ProWritingAid work line by line; this check is structural.
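A caller would typically surface the most severe issues first. A small triage sketch over the documented issue fields (the severity vocabulary is an assumption; the listing names the fields but not their values):

```python
def triage(issues):
    """Sort continuity issues so the most severe surface first.
    Severity levels are assumed to be high/medium/low."""
    order = {"high": 0, "medium": 1, "low": 2}
    return sorted(issues, key=lambda i: order.get(i["severity"], 3))

issues = [
    {"severity": "low", "scene_position": 12, "description": "voice drift",
     "suggested_fix": "revise dialogue toward the voice sample"},
    {"severity": "high", "scene_position": 7, "description": "secret leak",
     "suggested_fix": "move the reveal to a later chapter"},
]
print(triage(issues)[0]["description"])
```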

write_export

Export the project as a markdown manuscript. Scenes are emitted in position order, grouped by chapter_id, with POV character and time markers as italic interstitials. Use as a clean preview or to ship into Notion / docx tooling later.

research_project_create

Create a persistent research project. Returns a project_id that you can pass to all subsequent research_* calls. Projects survive across sessions — open it days later with research_project_continue and you get every prior finding, hypothesis, and open gap. Depth controls how aggressively the synthesizer explores: overview (5 docs), standard (10 docs), deep (20+ docs with adversarial verification).

research_project_list

List all research projects for a user, with counts of findings and open gaps. Use to discover what investigations are already in progress.

research_project_continue

Resume context from a paused research project. Returns the central question, the 50 most recent findings (with their claims, sources, and confidence), and all currently open gaps. Use this BEFORE asking new questions in an existing project so you don't duplicate work.

research_search

Hybrid search across the Celiums knowledge corpus (BM25 + semantic kNN + reciprocal rank fusion). Returns ranked modules with name, display_name, description, category, and relevance score. Use to locate evidence before research_synthesize.
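Reciprocal rank fusion, the documented merge step, scores each document as the sum of 1/(k + rank) across the BM25 and semantic rankings. A minimal sketch; k=60 is the conventional constant and an assumption here, as are the module slugs:

```python
def rrf(rankings, k=60):
    """Fuse several ranked lists with reciprocal rank fusion:
    score(doc) = sum over lists of 1 / (k + rank)."""
    scores = {}
    for ranking in rankings:
        for rank, doc in enumerate(ranking, start=1):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

bm25 = ["k8s-autoscaling", "helm-basics", "prometheus-monitoring"]
semantic = ["k8s-autoscaling", "prometheus-monitoring", "istio-mesh"]
fused = rrf([bm25, semantic])
print(fused)
```

A document ranked moderately well by both retrievers (prometheus-monitoring) outranks one ranked once, which is the point of the fusion.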

research_synthesize

Run a hybrid search and synthesize the top-K results into a careful, citation-bearing analysis using a frontier LLM (Opus 4.7 by default). Output explicitly distinguishes well-supported claims from claims it cannot back up with the retrieved evidence. Logs the query into the project session log.

research_finding_add

Record an atomic claim with its evidence into the project. Each finding has a source kind (arxiv|wiki|curated|web), an optional ref/url, a confidence 0-1, and free-text notes. Findings are the building blocks; export consolidates them into a memo.

research_gap_add

Flag an unresolved question — something you searched for but couldn't back up with evidence. Gaps are first-class: they keep your investigation honest and re-entry tools (next iteration) re-attempt them automatically.

research_export

Export the project as a markdown memo: question, findings (with sources + confidence), and open gaps. Use to send a brief to a teammate, paste into Notion, or feed into a downstream LLM as a project summary.
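The memo layout (question, findings with source and confidence, open gaps) can be sketched as a small assembler. The headings and the sample finding are assumptions; only the three-section structure comes from the description.

```python
def export_memo(question, findings, gaps):
    """Assemble a markdown memo: question, findings, then open gaps."""
    lines = [f"# {question}", "", "## Findings"]
    for f in findings:
        lines.append(f"- {f['claim']} ({f['source']}, confidence {f['confidence']})")
    lines += ["", "## Open gaps"]
    lines += [f"- {g}" for g in gaps]
    return "\n".join(lines)

memo = export_memo(
    "Does RRF beat plain BM25?",
    [{"claim": "RRF improves recall@10 in hybrid setups",
      "source": "curated", "confidence": 0.8}],
    ["No head-to-head numbers for this corpus"],
)
print(memo.splitlines()[0])
```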

Prompts

Interactive templates invoked by user choice


No prompts

Resources

Contextual data attached and managed by the client


No resources


MCP directory API

We provide all the information about MCP servers via our MCP API.

```shell
curl -X GET 'https://glama.ai/api/mcp/v1/servers/terrizoaguimor/celiums-memory'
```

If you have feedback or need assistance with the MCP directory API, please join our Discord server.