# Waggle-mcp

## Server Configuration

Environment variables recognized by the server. All are optional and fall back to the defaults shown.
| Name | Required | Description | Default |
|---|---|---|---|
| WAGGLE_MODEL | No | Embedding model. Use deterministic for offline-safe Glama inspection. | deterministic |
| WAGGLE_BACKEND | No | Backend database type: sqlite or neo4j | sqlite |
| WAGGLE_DB_PATH | No | Path to SQLite database file when WAGGLE_BACKEND=sqlite | /tmp/waggle-memory.db |
| MCP_PROXY_DEBUG | No | Enable verbose mcp-proxy debugging | false |
| WAGGLE_HTTP_HOST | No | Bind host for HTTP service | 0.0.0.0 |
| WAGGLE_HTTP_PORT | No | Bind port for HTTP service | 8080 |
| WAGGLE_LOG_LEVEL | No | Log level | INFO |
| WAGGLE_NEO4J_URI | No | Neo4j Bolt URI when WAGGLE_BACKEND=neo4j | |
| WAGGLE_TRANSPORT | No | Transport mode: stdio or http | stdio |
| WAGGLE_EXPORT_DIR | No | Optional export directory | /tmp/waggle-exports |
| WAGGLE_STARTUP_MODE | No | Startup mode. fast skips ML warmup so Glama can inspect tools quickly. | fast |
| WAGGLE_NEO4J_DATABASE | No | Neo4j database name when WAGGLE_BACKEND=neo4j | |
| WAGGLE_NEO4J_PASSWORD | No | Neo4j password when WAGGLE_BACKEND=neo4j | |
| WAGGLE_NEO4J_USERNAME | No | Neo4j username when WAGGLE_BACKEND=neo4j | |
| WAGGLE_RATE_LIMIT_RPM | No | Global rate limit in requests per minute | 120 |
| WAGGLE_DEFAULT_TENANT_ID | No | Default tenant ID | local-default |
| WAGGLE_MAX_PAYLOAD_BYTES | No | Max request size in bytes | 1048576 |
| WAGGLE_WRITE_RATE_LIMIT_RPM | No | Write-tool rate limit in requests per minute | 60 |
| WAGGLE_MAX_CONCURRENT_REQUESTS | No | Concurrency cap | 8 |
| WAGGLE_REQUEST_TIMEOUT_SECONDS | No | Per-request timeout in seconds | 30 |
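A minimal sketch of a local, SQLite-backed configuration using the variables from the table above. The variable names and defaults come from the table; the server's actual launch command is not documented on this page, so substitute your real entry point.

```shell
# Example environment for a local SQLite-backed run over stdio.
# All names and values are taken from the configuration table;
# the launch command itself is an assumption and must be replaced.
export WAGGLE_BACKEND=sqlite
export WAGGLE_DB_PATH=/tmp/waggle-memory.db
export WAGGLE_TRANSPORT=stdio
export WAGGLE_STARTUP_MODE=fast        # skips ML warmup for fast tool inspection
export WAGGLE_MODEL=deterministic      # offline-safe embedding model
export WAGGLE_LOG_LEVEL=INFO

echo "backend=$WAGGLE_BACKEND transport=$WAGGLE_TRANSPORT"
# ./your-waggle-entrypoint   # <- hypothetical; not specified by this page
```

Switching to Neo4j would instead set `WAGGLE_BACKEND=neo4j` plus the `WAGGLE_NEO4J_URI`, `WAGGLE_NEO4J_USERNAME`, `WAGGLE_NEO4J_PASSWORD`, and `WAGGLE_NEO4J_DATABASE` variables.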
## Capabilities

Features and capabilities supported by this server.
| Capability | Details |
|---|---|
| tools | `{"listChanged": false}` |
| prompts | `{"listChanged": false}` |
| resources | `{"subscribe": false, "listChanged": false}` |
| experimental | `{}` |
## Tools

Functions exposed to the LLM to take actions.
| Name | Description |
|---|---|
| store_node | Store a piece of knowledge as a node in the persistent memory graph. Call this whenever you learn something important from the user: facts, preferences, decisions, entities, concepts, or questions. Prefer atomic facts. |
| store_edge | Create a relationship between two stored nodes. Use this immediately after storing related nodes so the memory graph preserves structure, updates, and conflicts. |
| query_graph | Automatically search the memory graph before answering questions that may depend on prior context, user preferences, project decisions, constraints, or earlier conversation state. Returns a serialized subgraph with matching nodes and their connected neighborhood. Understands temporal references such as 'recently', 'latest', 'originally', and 'last week'. |
| debug_retrieval | Diagnose memory retrieval ranking for a query. Returns query embedding preview, context-window routing scores, selected windows, flat top nodes, and tiered top nodes for comparison. |
| get_related | Fetch the neighborhood around a specific memory node. Use when you already have a node ID and need its connected context. Returns matching nodes and edges as a serialized subgraph. |
| get_node_history | Inspect one memory node's evidence, validity window, and connected context. Use when auditing why a memory exists or how it changed. Returns the node, evidence records, related nodes, and edges. |
| list_context_scopes | List known agent, project, and session scope values stored in the current tenant graph. Use before filtering memory by scope. Returns arrays of scope identifiers. |
| list_context_windows | List context windows for a project. Use to inspect chat/session-level memory containers, their status, node counts, and update times. |
| get_context_window | Inspect one context window, including its nodes and links to other context windows. Use when auditing what a conversation/session contributed to memory. |
| close_context_window | Close a context window, recompute its final graph embedding, refresh node counts, and derive cross-window edges. Use when a chat/session is complete. |
| timeline | Build a chronological view of memory changes for a node, a query result, or the whole tenant. Use when order and evidence matter. Returns timestamped timeline items. |
| list_conflicts | List contradiction and update edges, with unresolved conflicts shown by default. Use to review memory disagreements before resolving them. Returns conflict entries with source and target nodes. |
| resolve_conflict | Mark a contradiction or update edge as resolved without deleting the underlying history. Use after deciding how competing memories should be interpreted. Returns the resolved conflict entry. |
| update_node | Update an existing memory node's content, label, or tags. Use when a stored memory needs correction without deleting its identity. Returns the updated node. |
| delete_node | Delete a node and all connected edges from persistent memory. |
| decompose_and_store | Break long or complex content into atomic memory nodes, store them automatically, and create inferred edges. Use for notes, summaries, or multi-fact passages. Returns the stored subgraph. |
| observe_conversation | Automatically observe a completed user-assistant turn, extract durable information, and store it in the graph. Call this after turns containing preferences, decisions, constraints, requirements, corrections, project facts, or meaningful task outcomes. Do not ask the user to trigger this. Required fields: 'user_message' (the user's text) and 'assistant_response' (the assistant's reply). Do NOT use 'user_text' or 'assistant_text' — those field names are not accepted. |
| graph_diff | Show what changed in the memory graph recently, including added nodes, updated nodes, created edges, and contradiction edges. Use for review or handoff. Returns a serialized graph diff. |
| prime_context | Automatically build a compact context brief at the start of a scoped conversation or before work that needs continuity. Use to hydrate an assistant with the most relevant scoped memories. Returns summary text plus nodes and edges. |
| get_topics | Detect topic clusters in the graph using community detection. Use to understand the main themes in memory. Returns labeled clusters with representative nodes and tags. Note: scope filtering (project, agent_id, session_id) is optional and silently ignored — topic detection always runs across the full tenant graph. |
| get_stats | Return high-level statistics about the current memory graph. Use for health checks or quick summaries. Returns node and edge counts, node type breakdowns, and recent or highly connected nodes. |
| export_graph_html | Export the current memory graph as an interactive HTML visualization. Use when a human needs to inspect the graph visually. Returns the output path and graph counts. |
| window_graph_viz | Export the context-window graph as an interactive HTML visualization. Each node is a chat/session window and edges show overlap, supersession, temporal order, or shared scope. |
| export_graph_backup | Export the current graph as a portable JSON backup. Use for migration, restore drills, or offline archive. Returns backup path, schema version, and object counts. |
| export_context_bundle | Export a portable Markdown and/or JSON context bundle for handing memory to another AI or a human. Use for cross-tool context transfer, audits, and resumable work. Returns file paths, counts, and render hints. |
| import_graph_backup | Import a portable JSON graph backup into the current backend. Use for restores or migrations. Returns counts for created and updated nodes and edges. |
| export_markdown_vault | Export the current graph as an Obsidian-compatible Markdown vault. Use when a human wants browsable note files with graph links. Returns written files and graph counts. |
| import_markdown_vault | Import an Obsidian-compatible Markdown vault into the current graph non-destructively. Use to sync edited vault notes back into memory. Returns created, updated, deleted-edge, and conflict counts. |
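A sketch of the `tools/call` request an MCP client would send for the conversation-observation tool. The JSON-RPC envelope follows the standard MCP protocol; the argument names `user_message` and `assistant_response` come from the tool description above (which warns that `user_text`/`assistant_text` are rejected). The tool name here assumes the trailing letters in the table are page annotations, and the `id` and sample text are illustrative.

```shell
# Construct a standard MCP tools/call request (JSON-RPC 2.0).
# Field names follow the observe_conversation tool description; the
# sample conversation content is purely illustrative.
req=$(cat <<'EOF'
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "observe_conversation",
    "arguments": {
      "user_message": "Deploy to eu-west-1, not us-east-1.",
      "assistant_response": "Noted. Future deployments will target eu-west-1."
    }
  }
}
EOF
)
printf '%s\n' "$req"
```

Over the stdio transport a client writes this message to the server's stdin; over HTTP it is posted to the server's MCP endpoint.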
## Prompts

Interactive templates invoked by user choice.
| Name | Description |
|---|---|
| waggle_memory_policy | Instructions for automatic memory retrieval and ingestion. Use this prompt to make the assistant handle memory without user-triggered tool calls. |
## Resources

Contextual data attached and managed by the client.
| Name | Description |
|---|---|
| Graph Stats | Current graph statistics. |
| Recent Graph Nodes | The 10 most recently updated nodes. |
| Context Windows | Recent context windows grouped by project/session. |
| Automatic Memory Policy | Policy for when assistants should retrieve and write Waggle memory automatically. |
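The resources above should be discoverable through the standard MCP `resources/list` method, which takes no arguments, so a client does not need to know their URIs in advance. The envelope below follows the MCP protocol; the `id` is illustrative.

```shell
# Standard MCP request to enumerate the server's resources
# (Graph Stats, Recent Graph Nodes, Context Windows, ...).
req=$(cat <<'EOF'
{"jsonrpc": "2.0", "id": 2, "method": "resources/list"}
EOF
)
printf '%s\n' "$req"
```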
## MCP directory API

We provide all the information about MCP servers via our MCP API:

```shell
curl -X GET 'https://glama.ai/api/mcp/v1/servers/Abhigyan-Shekhar/Waggle-mcp'
```

If you have feedback or need assistance with the MCP directory API, please join our Discord server.