Glama

Server Configuration

Describes the environment variables required to run the server.

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| `WAGGLE_MODEL` | No | Embedding model. Use `deterministic` for offline-safe Glama inspection. | `deterministic` |
| `WAGGLE_BACKEND` | No | Backend database type: `sqlite` or `neo4j` | `sqlite` |
| `WAGGLE_DB_PATH` | No | Path to SQLite database file when `WAGGLE_BACKEND=sqlite` | `/tmp/waggle-memory.db` |
| `MCP_PROXY_DEBUG` | No | Enable verbose mcp-proxy debugging | `false` |
| `WAGGLE_HTTP_HOST` | No | Bind host for HTTP service | `0.0.0.0` |
| `WAGGLE_HTTP_PORT` | No | Bind port for HTTP service | `8080` |
| `WAGGLE_LOG_LEVEL` | No | Log level | `INFO` |
| `WAGGLE_NEO4J_URI` | No | Neo4j Bolt URI when `WAGGLE_BACKEND=neo4j` | |
| `WAGGLE_TRANSPORT` | No | Transport mode: `stdio` or `http` | `stdio` |
| `WAGGLE_EXPORT_DIR` | No | Optional export directory | `/tmp/waggle-exports` |
| `WAGGLE_STARTUP_MODE` | No | Startup mode. `fast` skips ML warmup so Glama can inspect tools quickly. | `fast` |
| `WAGGLE_NEO4J_DATABASE` | No | Neo4j database name when `WAGGLE_BACKEND=neo4j` | |
| `WAGGLE_NEO4J_PASSWORD` | No | Neo4j password when `WAGGLE_BACKEND=neo4j` | |
| `WAGGLE_NEO4J_USERNAME` | No | Neo4j username when `WAGGLE_BACKEND=neo4j` | |
| `WAGGLE_RATE_LIMIT_RPM` | No | Global rate limit in requests per minute | `120` |
| `WAGGLE_DEFAULT_TENANT_ID` | No | Default tenant ID | `local-default` |
| `WAGGLE_MAX_PAYLOAD_BYTES` | No | Max request size in bytes | `1048576` |
| `WAGGLE_WRITE_RATE_LIMIT_RPM` | No | Write-tool rate limit in requests per minute | `60` |
| `WAGGLE_MAX_CONCURRENT_REQUESTS` | No | Concurrency cap | `8` |
| `WAGGLE_REQUEST_TIMEOUT_SECONDS` | No | Per-request timeout in seconds | `30` |
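To illustrate how overrides and defaults interact, here is a minimal Python sketch. The `setting` helper and the `DEFAULTS` dict are illustrative assumptions, not part of the server's code; the default values themselves come from the table above.

```python
import os

# Defaults copied from the configuration table above (a subset).
DEFAULTS = {
    "WAGGLE_MODEL": "deterministic",
    "WAGGLE_BACKEND": "sqlite",
    "WAGGLE_DB_PATH": "/tmp/waggle-memory.db",
    "WAGGLE_TRANSPORT": "stdio",
    "WAGGLE_HTTP_HOST": "0.0.0.0",
    "WAGGLE_HTTP_PORT": "8080",
    "WAGGLE_RATE_LIMIT_RPM": "120",
}

def setting(name: str) -> str:
    """Return the environment override if set, otherwise the documented default."""
    return os.environ.get(name, DEFAULTS[name])

# With no overrides set, the server runs the SQLite backend over stdio.
```

Since every variable is optional, an unconfigured deployment falls back entirely to the defaults column.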

Capabilities

Features and capabilities supported by this server

| Capability | Details |
| --- | --- |
| tools | `{"listChanged": false}` |
| prompts | `{"listChanged": false}` |
| resources | `{"subscribe": false, "listChanged": false}` |
| experimental | `{}` |

Tools

Functions exposed to the LLM to take actions

`store_node`

Store a piece of knowledge as a node in the persistent memory graph. Call this whenever you learn something important from the user: facts, preferences, decisions, entities, concepts, or questions. Prefer atomic facts.

`store_edge`

Create a relationship between two stored nodes. Use this immediately after storing related nodes so the memory graph preserves structure, updates, and conflicts.
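The store-then-link flow described above can be sketched as a sequence of tool calls. The argument names (`content`, `source_id`, `target_id`, `relation`) are assumptions for illustration; the listing does not document each tool's parameter schema.

```python
# Hypothetical back-to-back store_node / store_edge calls.
# Argument names are assumed, not confirmed by the listing.
calls = [
    {"name": "store_node",
     "arguments": {"content": "User prefers SQLite for local development"}},
    {"name": "store_node",
     "arguments": {"content": "The project targets offline-first deployments"}},
    # Link the two facts once both node IDs are known.
    {"name": "store_edge",
     "arguments": {"source_id": "<id from first store_node>",
                   "target_id": "<id from second store_node>",
                   "relation": "related_to"}},
]
```

Storing the edge immediately after the nodes is what preserves graph structure rather than leaving isolated facts.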

`query_graph`

Automatically search the memory graph before answering questions that may depend on prior context, user preferences, project decisions, constraints, or earlier conversation state. Returns a serialized subgraph with matching nodes and their connected neighborhood. Understands temporal references such as 'recently', 'latest', 'originally', and 'last week'.
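Like any MCP tool, `query_graph` is invoked through a standard JSON-RPC `tools/call` request. The sketch below builds one such message; the envelope follows the MCP specification, but the `query` argument name is an assumption based on the description above.

```python
import json

# Hypothetical tools/call request for query_graph. The jsonrpc envelope
# is standard MCP; the "query" argument name is assumed for illustration.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "query_graph",
        # Temporal phrasing like "recently" is understood by the tool.
        "arguments": {"query": "What database did we decide on recently?"},
    },
}

wire = json.dumps(request)  # serialized form sent over stdio or HTTP
```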

`debug_retrieval`

Diagnose memory retrieval ranking for a query. Returns query embedding preview, context-window routing scores, selected windows, flat top nodes, and tiered top nodes for comparison.

`get_related`

Fetch the neighborhood around a specific memory node. Use when you already have a node ID and need its connected context. Returns matching nodes and edges as a serialized subgraph.

`get_node_history`

Inspect one memory node's evidence, validity window, and connected context. Use when auditing why a memory exists or how it changed. Returns the node, evidence records, related nodes, and edges.

`list_context_scopes`

List known agent, project, and session scope values stored in the current tenant graph. Use before filtering memory by scope. Returns arrays of scope identifiers.

`list_context_windows`

List context windows for a project. Use to inspect chat/session-level memory containers, their status, node counts, and update times.

`get_context_window`

Inspect one context window, including its nodes and links to other context windows. Use when auditing what a conversation/session contributed to memory.

`close_context_window`

Close a context window, recompute its final graph embedding, refresh node counts, and derive cross-window edges. Use when a chat/session is complete.

`timeline`

Build a chronological view of memory changes for a node, a query result, or the whole tenant. Use when order and evidence matter. Returns timestamped timeline items.

`list_conflicts`

List contradiction and update edges, with unresolved conflicts shown by default. Use to review memory disagreements before resolving them. Returns conflict entries with source and target nodes.

`resolve_conflict`

Mark a contradiction or update edge as resolved without deleting the underlying history. Use after deciding how competing memories should be interpreted. Returns the resolved conflict entry.

`update_node`

Update an existing memory node's content, label, or tags. Use when a stored memory needs correction without deleting its identity. Returns the updated node.

`delete_node`

Delete a node and all connected edges from persistent memory.

`decompose_and_store`

Break long or complex content into atomic memory nodes, store them automatically, and create inferred edges. Use for notes, summaries, or multi-fact passages. Returns the stored subgraph.

`observe_conversation`

Automatically observe a completed user-assistant turn, extract durable information, and store it in the graph. Call this after turns containing preferences, decisions, constraints, requirements, corrections, project facts, or meaningful task outcomes. Do not ask the user to trigger this. Required fields: 'user_message' (the user's text) and 'assistant_response' (the assistant's reply). Do NOT use 'user_text' or 'assistant_text' — those field names are not accepted.
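Because the field names are strict, a correct argument object looks like the sketch below. The conversation text is invented for illustration; the two field names are taken directly from the description above.

```python
import json

# Arguments for observe_conversation. The tool requires exactly these
# field names; 'user_text' / 'assistant_text' would be rejected.
arguments = {
    "user_message": "Let's standardize on Postgres for the new service.",
    "assistant_response": "Noted. I'll treat Postgres as the agreed database.",
}

payload = json.dumps({"name": "observe_conversation", "arguments": arguments})
```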

`graph_diff`

Show what changed in the memory graph recently, including added nodes, updated nodes, created edges, and contradiction edges. Use for review or handoff. Returns a serialized graph diff.

`prime_context`

Automatically build a compact context brief at the start of a scoped conversation or before work that needs continuity. Use to hydrate an assistant with the most relevant scoped memories. Returns summary text plus nodes and edges.

`get_topics`

Detect topic clusters in the graph using community detection. Use to understand the main themes in memory. Returns labeled clusters with representative nodes and tags. Note: scope filtering (project, agent_id, session_id) is optional and silently ignored — topic detection always runs across the full tenant graph.

`get_stats`

Return high-level statistics about the current memory graph. Use for health checks or quick summaries. Returns node and edge counts, node type breakdowns, and recent or highly connected nodes.

`export_graph_html`

Export the current memory graph as an interactive HTML visualization. Use when a human needs to inspect the graph visually. Returns the output path and graph counts.

`window_graph_viz`

Export the context-window graph as an interactive HTML visualization. Each node is a chat/session window and edges show overlap, supersession, temporal order, or shared scope.

`export_graph_backup`

Export the current graph as a portable JSON backup. Use for migration, restore drills, or offline archive. Returns backup path, schema version, and object counts.

`export_context_bundle`

Export a portable Markdown and/or JSON context bundle for handing memory to another AI or a human. Use for cross-tool context transfer, audits, and resumable work. Returns file paths, counts, and render hints.

`import_graph_backup`

Import a portable JSON graph backup into the current backend. Use for restores or migrations. Returns counts for created and updated nodes and edges.

`export_markdown_vault`

Export the current graph as an Obsidian-compatible Markdown vault. Use when a human wants browsable note files with graph links. Returns written files and graph counts.

`import_markdown_vault`

Import an Obsidian-compatible Markdown vault into the current graph non-destructively. Use to sync edited vault notes back into memory. Returns created, updated, deleted-edge, and conflict counts.

Prompts

Interactive templates invoked by user choice

| Name | Description |
| --- | --- |
| `waggle_memory_policy` | Instructions for automatic memory retrieval and ingestion. Use this prompt to make the assistant handle memory without user-triggered tool calls. |

Resources

Contextual data attached and managed by the client

| Name | Description |
| --- | --- |
| Graph Stats | Current graph statistics. |
| Recent Graph Nodes | The 10 most recently updated nodes. |
| Context Windows | Recent context windows grouped by project/session. |
| Automatic Memory Policy | Policy for when assistants should retrieve and write Waggle memory automatically. |


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/Abhigyan-Shekhar/Waggle-mcp'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.