
Server Configuration

Describes the environment variables required to run the server.

NEO4J_URI (required): Bolt URL for Neo4j, e.g. bolt://neo4j:7687 (or neo4j+s://... for AuraDB)
NEO4J_USER (required, default: neo4j): Neo4j username
NEO4J_DATABASE (required, default: neo4j): Neo4j database name ('neo4j' on standard installs and AuraDB)
NEO4J_PASSWORD (required): Password for the Neo4j user
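
A minimal sketch of how a launcher might assemble this configuration. The variable names and defaults come from the table above; the load_config helper itself is illustrative, not part of the server:

```python
import os

# Required variables and defaults, as documented in the table above.
REQUIRED = ("NEO4J_URI", "NEO4J_USER", "NEO4J_DATABASE", "NEO4J_PASSWORD")
DEFAULTS = {"NEO4J_USER": "neo4j", "NEO4J_DATABASE": "neo4j"}

def load_config(env=os.environ):
    """Collect Neo4j connection settings, failing fast on missing values."""
    config = {}
    for key in REQUIRED:
        value = env.get(key, DEFAULTS.get(key))
        if value is None:
            raise KeyError(f"missing required environment variable: {key}")
        config[key] = value
    return config
```

Only NEO4J_URI and NEO4J_PASSWORD have no fallback, so those are the two values that must always be set explicitly.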

Capabilities

Features and capabilities supported by this server

tools:
{
  "listChanged": true
}

Tools

Functions exposed to the LLM to take actions

graph_query

Query the memory graph by canonical entity name. Use when you know the entity name or close-to-canonical form (e.g. "Steve", "graph-memory"); for natural-language phrasing or synonyms (e.g. "the knowledge graph project") prefer graph_search. Returns up to limit matching nodes plus the edges that connect them within max_hops, with per-edge weight and source provenance.

graph_relate

Create or strengthen a relationship between entities. Creates the endpoint entities if they don't exist. Use single mode (from_name/to_name/relation) for one fact at a time. Use batch mode when extracting from a transcript or document — it's atomic, so a partial failure won't leave dangling nodes. Idempotent: re-asserting an existing edge boosts its weight rather than duplicating.

graph_delete

Permanently delete an entity node and all its edges by ID. Use for removing duplicate or erroneous nodes. Cannot be undone.

graph_boost

Increase an edge's weight when the user confirms recalled information. Call this when the user says 'yes', 'exactly', or confirms something you retrieved from the graph. Persists immediately; weight clamps at 1.0 so repeated boosts saturate rather than overflow. Returns the previous and new weight.

graph_weaken

Decrease an edge's weight when the user corrects a recalled fact. Call this when the user says 'no', 'that's wrong', or corrects something from the graph. Persists immediately; weight clamps at 0.0. Returns an error if the edge doesn't exist — use graph_delete to remove an entity outright. To replace a fact rather than weaken it, prefer graph_relate with the new fact and SUPERSEDES.
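
The clamping behaviour described for graph_boost and graph_weaken can be sketched as follows. The delta step size is a hypothetical value, since the actual increment isn't documented here:

```python
def boost(weight, delta=0.1):
    # Repeated boosts saturate at 1.0 rather than overflowing.
    return min(1.0, weight + delta)

def weaken(weight, delta=0.1):
    # Repeated weakens clamp at 0.0 rather than going negative.
    return max(0.0, weight - delta)
```

Because both operations clamp, calling either one repeatedly is safe: the weight converges to the bound instead of drifting out of range.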

graph_entities

Browse or search the entity catalog. Use to check if an entity exists before creating one with graph_relate, or to list entities of a given type. For relationship-aware lookups (entity + its neighbors) use graph_query instead. Returns up to limit entities ordered by sort_by; pagination is single-page (raise limit if you need more).

graph_contradictions

Find facts that contradict each other in the memory graph — pairs connected by a CONTRADICTS edge. Use during reviews, before a graph_decay run, or when the user asks about conflicting information. Returns {contradictions: [{node_a, node_b, description, detected_date, resolved}], count} ordered by most-recently detected. By default only unresolved pairs are surfaced; set include_resolved=true to audit historical resolutions. Resolve a contradiction by graph_weaken on the wrong edge or by graph_relate with relation=SUPERSEDES on the new fact.

graph_ingest

Queue a document for asynchronous extraction into the memory graph (mode='queue'), or check the ingest backlog (mode='status'). Use this when the user wants a file summarized into the graph but doesn't need it reflected in the same conversation — the nightly dream process picks queued documents up. For inline assertions during a conversation, call graph_relate directly instead. Idempotent: queueing the same file twice overwrites the prior copy in the pending dir.

graph_cypher

Execute a read-only Cypher query against the memory graph. You generate the Cypher — this tool just runs it. Enforced read-only via Neo4j executeRead(). Use for custom queries not covered by other tools. Admin-only (must be the bootstrap tenant) — non-admin tenants would otherwise be able to bypass tenant filtering by writing raw Cypher.

graph_decay

Apply time-based decay to every node's confidence and every edge's weight using per-type half-lives (preferences ~693d, events ~99d, etc.). Called by the dream process during maintenance. Always preview with dry_run=true first — decay is irreversible without restoring from a graph_export backup. Returns counts of nodes/edges modified per type.
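
The per-type half-lives suggest standard exponential decay. A sketch under that assumption (the server's actual formula isn't documented on this page):

```python
def decayed(value, age_days, half_life_days):
    # Standard half-life form: the value halves every half_life_days.
    return value * 0.5 ** (age_days / half_life_days)
```

As a rule of thumb, a 693-day half-life corresponds to roughly a 0.1% loss per day (since ln 2 ≈ 0.693), while the 99-day half-life for events loses about 0.7% per day.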

graph_prune

Remove entities and edges that have decayed below threshold. DESTRUCTIVE — always preview first. Requires user confirmation before execute mode.

graph_unmerge

Split a falsely merged entity back into two separate entities, redistributing specified edges. Use when entity resolution made a mistake (e.g. merged 'Anna' and 'Anne'). The original entity keeps every edge not listed in edges_to_move; the new entity gets the listed edges plus a fresh embedding stub (re-derive with graph_reembed). Logged to the audit trail with reason. Returns the IDs of both entities.

graph_merge

Consolidate two entities into one — moves source's edges onto target, adopts source properties for keys target doesn't have, then deletes source. Inverse of graph_unmerge. Use after graph_merge_suggestions surfaces a duplicate pair, or whenever you've confirmed two nodes refer to the same thing. Same-tenant only; refuses to merge an entity with itself. Edges directly between source and target are dropped (would become self-loops). When source and target both have the same edge to a third node, the edge is consolidated and the higher weight wins. Target's embedding is cleared so the next graph_reembed will re-derive it from the merged state. Logged to logs/merge-audit.jsonl with reason. DESTRUCTIVE — always preview with dry_run=true first; recovery requires a graph_export backup or graph_unmerge with the original edge layout.
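
The collision rules above (higher weight wins, self-loops dropped) can be sketched as follows. Keying edges by (relation, neighbor) is an assumption about how the server identifies duplicate edges:

```python
def consolidate_edges(target_edges, source_edges, target_id, source_id):
    """Merge source's edges into target's; on collision the higher weight wins.

    Edges are keyed by (relation, neighbor_id). Edges between the two
    merged nodes are dropped, since they would become self-loops.
    """
    merged = {}
    for edges in (target_edges, source_edges):
        for (relation, neighbor), weight in edges.items():
            if neighbor in (target_id, source_id):
                continue  # would become a self-loop after the merge
            key = (relation, neighbor)
            merged[key] = max(merged.get(key, 0.0), weight)
    return merged
```

Taking the max on collision means a merge never weakens an existing fact, which matches the "higher weight wins" rule in the description.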

graph_merge_suggestions

Surface candidate pairs of entities likely to be duplicates. Read-only — never auto-merges. Combines embedding similarity, shared-neighbor overlap, and name-token Jaccard. Same-type only. Use to triage entity-explosion before running graph_merge (destructive consolidation) or graph_relate with ALIAS_OF (soft alias).
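
Of the three signals, name-token Jaccard is the simplest to illustrate. The tokenization here (lowercase, whitespace split) is an assumption; the other two signals need vector and graph state:

```python
def name_token_jaccard(a, b):
    # Jaccard similarity over lowercase name tokens: |A ∩ B| / |A ∪ B|.
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if (ta or tb) else 0.0
```

Note that token overlap alone misses near-misspellings (e.g. 'Anna' vs 'Anne' scores 0.0), which is presumably why embedding similarity and shared-neighbor overlap are combined with it.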

graph_stats

Graph health dashboard — node/edge counts by type, average weight, orphan count, unresolved contradictions, stale entries, schema version, and pending ingest backlog. Returns aggregate counts only; for individual entities use graph_entities. Call at session start to size up the graph before deeper queries, after graph_decay or graph_prune to verify the result, or when debugging unexpected query output. No parameters.

graph_validate

Scan recently extracted entities and edges for quality issues: generic names, reference language, type mismatches, near-duplicate names, and extreme confidence values. Call this after a dream process extraction batch to catch bad data before it settles into the graph. Returns up to max_issues records of shape {entity_id, name, type, issue, severity} where severity is high/medium/low. Read-only — pair with graph_delete or graph_unmerge to act on flagged items.

graph_build_context

Single tool call that bundles a session's worth of context: graph health, pending work, last dream run summary, recent additions, top knowledge hubs, unresolved contradictions, and (optionally) a topic neighbourhood. Use this at session start instead of running graph_stats / graph_query / graph_contradictions separately. Cuts 4-5 round trips to one.

graph_reembed

Regenerate semantic-search embeddings for entities. By default only fills missing embeddings (idempotent, fast). With force=true, re-embeds every entity — use after changing the embed-text recipe (e.g. when richer fields are added). At ~10ms per entity, full re-embed of a few hundred nodes finishes in seconds.

graph_search

Find entities semantically similar to a natural-language query, then optionally expand via graph traversal. Uses local sentence embeddings (bge-small-en, 384-dim) — no external API. Best when the user's wording doesn't match canonical entity names (e.g. "containers" → Docker, "AI tools" → Claude Code/Anthropic SDK). Falls back to graph_query if no embeddings available.
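
Semantic search of this kind typically ranks entities by cosine similarity between the query embedding and each stored embedding. A sketch assuming plain cosine ranking (the tool's actual scoring isn't specified here):

```python
import math

def cosine_similarity(u, v):
    # Cosine of the angle between two equal-length vectors.
    dot = sum(x * y for x, y in zip(u, v))
    norm = math.sqrt(sum(x * x for x in u)) * math.sqrt(sum(y * y for y in v))
    return dot / norm if norm else 0.0

def top_k(query_vec, entities, k=5):
    # entities: iterable of (name, embedding) pairs.
    scored = [(cosine_similarity(query_vec, emb), name) for name, emb in entities]
    return [name for score, name in sorted(scored, reverse=True)[:k]]
```

With 384-dim bge-small-en vectors a brute-force scan like this stays fast for graphs of a few thousand entities, which is consistent with the local, no-external-API design.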

graph_communities

Find clusters of densely-interconnected entities in the graph. Uses greedy seed-based BFS through edges above the weight threshold — works without GDS or APOC. Each entity is assigned to at most one community (the first that reaches it from a high-degree seed). Useful for understanding knowledge neighbourhoods (e.g. "everything related to infrastructure"). Returns at most max_communities clusters, each shaped {community_id, seed: {id, name, type}, size, members: [{id, name, type}]}, sorted by size desc; communities below min_size are filtered out. Use graph_query or graph_search instead when you have a specific entity to start from.
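
The greedy seed-based BFS can be sketched as follows, assuming an adjacency map and a degree-sorted seed order. The parameter names mirror the description (weight threshold, min_size) but the implementation details are illustrative:

```python
from collections import deque

def communities(adjacency, degree_order, weight_threshold=0.3, min_size=2):
    """Greedy seed-based BFS: the highest-degree unassigned node seeds a
    community, and BFS follows only edges at or above weight_threshold.
    Each node joins at most one community (the first to reach it)."""
    assigned = set()
    result = []
    for seed in degree_order:  # nodes sorted by degree, descending
        if seed in assigned:
            continue
        members, queue = [], deque([seed])
        assigned.add(seed)
        while queue:
            node = queue.popleft()
            members.append(node)
            for neighbor, weight in adjacency.get(node, []):
                if weight >= weight_threshold and neighbor not in assigned:
                    assigned.add(neighbor)
                    queue.append(neighbor)
        if len(members) >= min_size:
            result.append({"seed": seed, "size": len(members), "members": members})
    return sorted(result, key=lambda c: c["size"], reverse=True)
```

Because assignment is first-come-first-served from high-degree seeds, the partition depends on seed order; that is the trade-off for working without GDS or APOC.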

graph_export

Export all graph nodes and edges to a timestamped JSONL backup file in the backups/ directory. Run this before any risky operation, or on a weekly schedule. Old backups are pruned automatically.

graph_read_transcript

Read and parse a Claude Code JSONL transcript file through the canonical transcript parser. Returns normalized messages with text content extracted. Use this instead of reading raw JSONL directly — if the transcript format changes, only this tool needs updating.
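
A hedged sketch of the kind of normalization such a parser performs. The JSONL field names here ("role", "content") are hypothetical stand-ins: the real Claude Code transcript schema is exactly what this tool exists to encapsulate:

```python
import json

def parse_transcript(lines):
    """Illustrative JSONL transcript parser: one JSON record per line,
    text extracted from either a plain string or a list of content blocks.
    Field names are assumptions, not the documented schema."""
    messages = []
    for line in lines:
        line = line.strip()
        if not line:
            continue
        record = json.loads(line)
        text = record.get("content")
        if isinstance(text, list):  # content blocks: keep only the text parts
            text = " ".join(b.get("text", "") for b in text if isinstance(b, dict))
        messages.append({"role": record.get("role"), "text": text})
    return messages
```

Centralizing this in one tool is the point of the description: callers depend on the normalized shape, not on the raw line format.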

graph_audit

Append a structured event to the dream process audit log (logs/dream-audit.jsonl). Call this during the dream process to record run_start, run_end, transcript_start, transcript_end, entity_created, entity_resolved, edge_created, edge_modified, merge_flagged, contradiction_found, ingest_start, ingest_end, decay_applied, format_warning, or error events. entity_resolved is the audit trail for entity-resolution decisions: every time the dream process picks between matching an existing entity, creating a new one, or flagging an ambiguous candidate, log it here so a later graph_unmerge can reconstruct why a merge happened.

Prompts

Interactive templates invoked by user choice


No prompts

Resources

Contextual data attached and managed by the client


No resources

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/stevepridemore/graph-memory'

If you have feedback or need assistance with the MCP directory API, please join our Discord server