
Server Configuration

Describes the environment variables used to configure the server. All are optional.

| Name | Required | Description | Default |
| ---- | -------- | ----------- | ------- |
| LLM_MODEL | No | The model name to use with the LLM provider (e.g., 'granite3.3:8b'). | |
| LLM_BASE_URL | No | The base URL for the LLM provider API (e.g., 'http://host.docker.internal:11434/v1' for Ollama). | |
| LLM_PROVIDER | No | The LLM provider to use for natural-language turn extraction (e.g., 'ollama', 'anthropic', 'openai', 'google'). If not set, natural-language turns return zero facts. | |
| EXTRACTION_ENABLED | No | Enable natural-language turn extraction ('true' or 'false'). | |
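A minimal sketch of how a client process might read these variables. None of the four has a documented default, so missing values are left as `None` here; the `read_llm_config` helper and the derived `extraction_active` flag are illustrative, not part of the server.

```python
import os

def read_llm_config(env=os.environ):
    """Read the documented (all optional) environment variables."""
    config = {
        "model": env.get("LLM_MODEL"),         # e.g. 'granite3.3:8b'
        "base_url": env.get("LLM_BASE_URL"),   # e.g. Ollama's OpenAI-compatible endpoint
        "provider": env.get("LLM_PROVIDER"),   # 'ollama' | 'anthropic' | 'openai' | 'google'
        "extraction_enabled": env.get("EXTRACTION_ENABLED", "").lower() == "true",
    }
    # Per the docs: without a provider set, natural-language turns yield zero facts.
    config["extraction_active"] = config["extraction_enabled"] and config["provider"] is not None
    return config

example = read_llm_config({
    "LLM_PROVIDER": "ollama",
    "LLM_BASE_URL": "http://host.docker.internal:11434/v1",
    "LLM_MODEL": "granite3.3:8b",
    "EXTRACTION_ENABLED": "true",
})
```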

Capabilities

Features and capabilities supported by this server

| Capability | Details |
| ---------- | ------- |
| tools | {} |

Tools

Functions exposed to the LLM to take actions

tell

Assert a fact into the knowledge base. Stores knowledge that can be queried and used in logical reasoning. Supports auto-expiration via ttl (milliseconds) or validUntil (epoch ms), confidence scoring, and configurable conflict resolution.
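A hedged sketch of building a `tell` call as an MCP `tools/call` request. The argument field names (`fact`, `ttl`, `confidence`) are guesses inferred from the description above, not a confirmed schema.

```python
import json

# Hypothetical 'tell' arguments; field names are assumptions.
args = {
    "fact": "lives_in(alice, paris)",
    "ttl": 60_000,      # auto-expire 60 seconds after assertion (milliseconds)
    "confidence": 0.9,  # confidence score attached to the fact
}
request = {"method": "tools/call", "params": {"name": "tell", "arguments": args}}
payload = json.dumps(request)
```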

teach

Define a logical rule for automatic reasoning. When all conditions (body) are true, the conclusion (head) is automatically derivable via backward chaining. Use ?-prefixed variables (e.g., ?x, ?who). Supports Negation-as-Failure in body atoms. Example: 'If ?x is human AND NOT god(?x), THEN ?x is mortal'.
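A sketch of the documented example rule as `teach` arguments. The `head`/`body` field names and the `not ` prefix for Negation-as-Failure are assumptions based on the description, not a confirmed schema.

```python
# "If ?x is human AND NOT god(?x), THEN ?x is mortal"
rule = {
    "head": "mortal(?x)",
    "body": ["human(?x)", "not god(?x)"],  # 'not ...' marks Negation-as-Failure
}
```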

ask

Query the knowledge base using multi-step logical reasoning (backward chaining with unification). Finds all provable answers by applying rules and matching facts. Use ?-prefixed variables for unknowns you want to discover. Optionally returns full proof chains showing the reasoning steps.
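To make "backward chaining with unification" concrete, here is a deliberately tiny, illustrative-only resolver over tuple atoms, using the mortality example from `teach`. It is a teaching sketch, not the server's implementation: it does no variable renaming, so it only handles non-recursive rules.

```python
facts = {("human", "socrates"), ("human", "plato"), ("god", "zeus")}
rules = [(("mortal", "?x"), [("human", "?x"), ("not", ("god", "?x"))])]

def walk(term, binding):
    # Follow variable-to-variable links to the most concrete value.
    while isinstance(term, str) and term.startswith("?") and term in binding:
        term = binding[term]
    return term

def substitute(atom, binding):
    return tuple(walk(t, binding) for t in atom)

def unify(pattern, fact, binding):
    if len(pattern) != len(fact):
        return None
    b = dict(binding)
    for p, f in zip(pattern, fact):
        p = walk(p, b)
        if isinstance(p, str) and p.startswith("?"):
            b[p] = f
        elif p != f:
            return None
    return b

def prove(goal, binding):
    goal = substitute(goal, binding)
    if goal[0] == "not":  # Negation-as-Failure: succeed iff the sub-goal fails
        if not list(prove(goal[1], binding)):
            yield binding
        return
    for fact in facts:                       # match stored facts
        b = unify(goal, fact, binding)
        if b is not None:
            yield b
    for head, body in rules:                 # apply rules backward
        b = unify(head, goal, binding)
        if b is not None:
            yield from prove_all(body, b)

def prove_all(goals, binding):
    if not goals:
        yield binding
    else:
        for b in prove(goals[0], binding):
            yield from prove_all(goals[1:], b)

answers = sorted(walk("?who", b) for b in prove(("mortal", "?who"), {}))
# zeus is excluded: god(zeus) makes the NAF condition fail
```

Running the query binds `?who` to every provable answer, mirroring how `ask` "finds all provable answers by applying rules and matching facts."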

forget

Retract a fact from the knowledge base. Any knowledge that was derived from this fact is also automatically forgotten via the Truth Maintenance System (cascading retraction). This is the inverse of 'tell'.

recall

Time-travel query: recall what was known at a specific point in time. Returns facts that were valid at the given timestamp, respecting temporal bounds (validFrom, validUntil, ttl). Useful for debugging agent behavior or reconstructing past state.
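A sketch of the point-in-time filtering `recall` describes. The `validFrom`/`validUntil` names follow the docs; the dict-based storage shape is an assumption.

```python
def valid_at(fact, t_ms):
    """True if the fact's temporal bounds cover timestamp t_ms (epoch ms)."""
    vf = fact.get("validFrom")
    vu = fact.get("validUntil")
    if vf is not None and t_ms < vf:
        return False
    if vu is not None and t_ms > vu:
        return False
    return True

history = [
    {"fact": "status(job1, running)", "validFrom": 1_000, "validUntil": 5_000},
    {"fact": "status(job1, done)",    "validFrom": 5_000},
]
# Reconstruct what was known at t=3000: the job was still running.
snapshot = [f["fact"] for f in history if valid_at(f, 3_000)]
```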

context

Get the most relevant knowledge for your current reasoning step, ranked by composite salience (recency × frequency × priority). Returns a token-optimized context window. Supports three output formats: 'predicate' (machine-readable), 'natural' (LLM-optimized prose), 'structured' (grouped with metadata). Pass goals for goal-driven selection, sessionId for incremental diffing across turns.
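A sketch of composite-salience ranking (recency × frequency × priority) as described for `context`. The exact weighting and field names are assumptions; only the multiplicative composition comes from the docs.

```python
def salience(fact, now_ms):
    age_ms = max(now_ms - fact["last_used_ms"], 1)
    recency = 1.0 / age_ms  # assumed recency term: newer facts score higher
    return recency * fact["use_count"] * fact["priority"]

facts = [
    {"id": "a", "last_used_ms": 9_000, "use_count": 1, "priority": 1.0},
    {"id": "b", "last_used_ms": 9_900, "use_count": 5, "priority": 1.0},
]
# Rank most-salient first, as a context window selector would.
ranked = [f["id"] for f in sorted(facts, key=lambda f: salience(f, 10_000), reverse=True)]
```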

compress

Run memory consolidation: detects repeated episodic patterns (e.g., 'user asked about X five times') and creates semantic summaries. Reduces memory footprint in long-running agent sessions while preserving essential knowledge.

cleanup

Run memory decay and eviction. Expires facts past their TTL and evicts low-salience facts when memory exceeds capacity. Call periodically in long-running agent sessions to prevent unbounded memory growth.
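A sketch of the decay-and-evict pass `cleanup` describes: expire facts past their TTL, then evict the lowest-salience facts above a capacity limit. Field names and the precomputed `salience` value are assumptions.

```python
def cleanup(facts, now_ms, capacity):
    # Expire facts whose TTL has elapsed since assertion.
    alive = [f for f in facts
             if f.get("ttl") is None or now_ms - f["asserted_ms"] <= f["ttl"]]
    # Evict low-salience facts beyond capacity.
    alive.sort(key=lambda f: f["salience"], reverse=True)
    return alive[:capacity]

facts = [
    {"id": "fresh", "asserted_ms": 9_000, "ttl": 5_000, "salience": 0.9},
    {"id": "stale", "asserted_ms": 1_000, "ttl": 5_000, "salience": 0.8},
    {"id": "weak",  "asserted_ms": 9_000, "ttl": None,  "salience": 0.1},
]
kept = [f["id"] for f in cleanup(facts, now_ms=10_000, capacity=1)]
```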

predicates

Discover the knowledge base schema. Lists all predicates (relationship types) currently stored, with their arity (argument count), fact count, and whether they have associated rules. Use this to understand what knowledge is available before querying.

aggregate

Compute aggregations over matching facts. Supports COUNT (number of matches), SUM, MIN, MAX, and AVG over a numeric argument at a specified position. Example: COUNT all score(player, ?) facts, or AVG scores at argIndex=1.
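A sketch of the documented example: COUNT and AVG over `score(player, value)` facts at `argIndex=1`, with facts modeled as tuples.

```python
facts = [("score", "alice", 10), ("score", "bob", 20), ("age", "alice", 30)]

matches = [f for f in facts if f[0] == "score"]
count = len(matches)
# argIndex=1 selects the second argument, i.e. tuple position 2 here
# (position 0 holds the predicate name).
avg = sum(f[2] for f in matches) / count
```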

bulk_assert

Assert multiple facts in a single call for efficiency. Non-transactional: each fact is attempted independently — contradictions are reported as errors without aborting the batch. Returns counts of successful and failed assertions.

retract_pattern

Retract all facts matching a pattern in a single call. Use ?-prefixed variables as wildcards to retract multiple facts at once. Returns the count and list of retracted facts. Cascading retraction applies to each removed fact.

fork_scope

Fork a knowledge base scope — creates an independent copy of all facts in the source scope under a new target scope name. Use this for hypothetical reasoning ('What if Alice moves to London?') without modifying the main knowledge base. Similar to git branch for knowledge.

merge_scope

Merge facts from one scope back into another (default: global). Use this to commit hypothetical reasoning results back into the main knowledge base. Choose a conflict strategy: SOURCE_WINS overwrites, TARGET_WINS keeps existing, KEEP_BOTH retains both versions, REJECT aborts if any conflicts.
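A sketch of the four documented conflict strategies, with scopes modeled as dicts keyed by a fact identity. The storage model and the `@source` suffix for KEEP_BOTH are assumptions; only the strategy names and their behavior come from the description.

```python
def merge(source, target, strategy):
    merged = dict(target)
    for key, value in source.items():
        if key not in target or target[key] == value:
            merged[key] = value                   # no conflict
        elif strategy == "SOURCE_WINS":
            merged[key] = value                   # overwrite
        elif strategy == "TARGET_WINS":
            pass                                  # keep existing
        elif strategy == "KEEP_BOTH":
            merged[key + "@source"] = value       # retain both versions
        elif strategy == "REJECT":
            raise ValueError(f"conflict on {key}")  # abort on any conflict
    return merged

hypo = {"lives_in(alice)": "london"}   # hypothetical scope
main = {"lives_in(alice)": "paris"}    # main knowledge base
result = merge(hypo, main, "SOURCE_WINS")
```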

list_scopes

List all named scopes in the knowledge base. Shows what hypothetical contexts or reasoning branches currently exist. The global (unscoped) partition is always present but not listed.

delete_scope

Delete a knowledge base scope and all facts within it. Use this to clean up completed or abandoned hypothetical reasoning branches. This is irreversible.

Prompts

Interactive templates invoked by user choice


No prompts

Resources

Contextual data attached and managed by the client


No resources


MCP directory API

We provide all the information about MCP servers via our MCP API.

```shell
curl -X GET 'https://glama.ai/api/mcp/v1/servers/Auctalis/nocturnusai'
```

If you have feedback or need assistance with the MCP directory API, please join our Discord server.