graph_memory
Retrieve an entity co-occurrence knowledge graph from your indexed documents. Nodes represent named entities with edges indicating shared document chunks, weighted by co-occurrence frequency.
Instructions
Return the entity co-occurrence knowledge graph.
Nodes are named entities extracted from the indexed corpus. An edge
between two entities means they were mentioned in the same document
chunk; the edge weight is the number of shared chunks.
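The co-occurrence rule above can be sketched in a few lines. This is an illustrative example with made-up entity names and chunk IDs (not data from a real index): each entity maps to the set of chunks that mention it, and the edge weight between two entities is the size of the intersection of their chunk sets.

```python
# Illustrative data only -- names and chunk IDs are assumptions for the sketch.
entity_chunks = {
    "Ada Lovelace": {"chunk-1", "chunk-2", "chunk-3"},
    "Charles Babbage": {"chunk-2", "chunk-3"},
    "Alan Turing": {"chunk-9"},
}

def cooccurrence_weight(a: str, b: str) -> int:
    # Edge weight = number of document chunks mentioning both entities.
    return len(entity_chunks[a] & entity_chunks[b])

print(cooccurrence_weight("Ada Lovelace", "Charles Babbage"))  # 2
print(cooccurrence_weight("Ada Lovelace", "Alan Turing"))      # 0 -> no edge
```

A weight of zero means no edge is emitted for that pair.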
Args:
- `min_mentions`: Minimum mention count for a node to appear.
- `entity_type`: Optional entity type filter (e.g. `"PERSON"`).

Returns:
- Dict with `"nodes"` (id, label, type, mentions) and `"edges"` (source, target, weight, shared_chunks) lists.
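A response with the shape described above might look like the following. The entity names, counts, and chunk IDs are invented for illustration; only the keys and nesting reflect the documented contract.

```python
# Hypothetical graph_memory response (values are illustrative assumptions).
sample_response = {
    "node_count": 2,
    "edge_count": 1,
    "nodes": [
        {"id": "Ada Lovelace", "label": "Ada Lovelace", "type": "PERSON", "mentions": 5},
        {"id": "Charles Babbage", "label": "Charles Babbage", "type": "PERSON", "mentions": 3},
    ],
    "edges": [
        {
            "source": "Ada Lovelace",
            "target": "Charles Babbage",
            "weight": 2,                              # number of shared chunks
            "shared_chunks": ["chunk-2", "chunk-7"],  # sorted chunk IDs
        },
    ],
}
print(sample_response["node_count"], sample_response["edge_count"])  # 2 1
```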
Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| min_mentions | No | Only include entities with at least this many mentions. | 2 |
| entity_type | No | Restrict to a specific entity type (e.g. `PERSON`, `ORG`). Omit for all. | None |
Implementation Reference
- The `graph_memory` MCP tool handler function. It is decorated with `@mcp.tool()` and returns a dict with `nodes`, `edges`, `node_count`, and `edge_count`. It queries entities via `ctx.metadata_store.list_entities()` and computes edges by intersecting shared chunk IDs.
"""MCP tool: graph_memory. Exposes the in-memory knowledge graph derived from entity co-occurrence. Two entities are connected when they appear in the same indexed chunk. Edge weight = number of shared chunks. This tool is intentionally read-only and pure-function — the graph is computed on the fly (with a 60 s server-side cache in the dashboard) and never mutated. """ from __future__ import annotations from typing import TYPE_CHECKING, Annotated from mcp.server.fastmcp import FastMCP if TYPE_CHECKING: from memorymesh.server.app import AppContext def register(mcp: FastMCP, ctx: AppContext) -> None: """Register the ``graph_memory`` tool on *mcp* with *ctx* injected. Args: mcp: The FastMCP instance to register onto. ctx: Shared application context (injected via closure). """ @mcp.tool() def graph_memory( min_mentions: Annotated[ int, "Only include entities with at least this many mentions (default 2).", ] = 2, entity_type: Annotated[ str | None, "Restrict to a specific entity type (e.g. 'PERSON', 'ORG'). Omit for all.", ] = None, ) -> dict: """Return the entity co-occurrence knowledge graph. Nodes are named entities extracted from the indexed corpus. An edge between two entities means they were mentioned in the same document chunk; the edge weight is the number of shared chunks. Args: min_mentions: Minimum mention count for a node to appear. entity_type: Optional entity type filter (e.g. ``"PERSON"``). Returns: Dict with ``"nodes"`` (id, label, type, mentions) and ``"edges"`` (source, target, weight, shared_chunks) lists. 
""" from memorymesh.server.auth_guard import check_access if (err := check_access(ctx, "read")) is not None: return err try: entities = ctx.metadata_store.list_entities( min_mentions=min_mentions, entity_type=entity_type, limit=200, ) except Exception as exc: return {"error": str(exc), "nodes": [], "edges": []} entity_chunks: dict[str, set[str]] = {} for ent in entities: try: chunk_ids = ctx.metadata_store.get_entity_chunks(ent.name, ent.entity_type) except Exception: chunk_ids = [] entity_chunks[ent.name] = set(chunk_ids) nodes = [ { "id": ent.name, "label": ent.name, "type": ent.entity_type, "mentions": ent.mention_count, } for ent in entities ] entity_list = list(entities) edges: list[dict] = [] for i, a in enumerate(entity_list): for b in entity_list[i + 1 :]: shared = entity_chunks[a.name] & entity_chunks[b.name] if shared: edges.append( { "source": a.name, "target": b.name, "weight": len(shared), "shared_chunks": sorted(shared), } ) ctx.audit_logger.log_query( tool="graph_memory", query=f"min_mentions={min_mentions} entity_type={entity_type}", n_results=len(nodes), latency_ms=0.0, ) return { "node_count": len(nodes), "edge_count": len(edges), "nodes": nodes, "edges": edges, } - Input schema for graph_memory: 'min_mentions' (int, default 2) and 'entity_type' (optional str, e.g. 'PERSON' or 'ORG'). Annotated with descriptions for MCP tool schema generation.
```python
@mcp.tool()
def graph_memory(
    min_mentions: Annotated[
        int,
        "Only include entities with at least this many mentions (default 2).",
    ] = 2,
    entity_type: Annotated[
        str | None,
        "Restrict to a specific entity type (e.g. 'PERSON', 'ORG'). Omit for all.",
    ] = None,
) -> dict:
```

- `src/memorymesh/server/app.py:108-140` (registration): the `graph_memory` module is imported at line 114 and registered at line 140 via `graph_memory.register(mcp, ctx)` — the standard registration pattern for all tools in this server.
```python
from memorymesh.server.tools import (
    ask_memory,
    forget_memory,
    forget_source,
    get_document,
    get_entity,
    graph_memory,
    index_now,
    list_sources,
    pin_memory,
    query_timeline,
    related_documents,
    search_by_date,
    search_memory,
    summarize_source,
    sync_source,
)

search_memory.register(mcp, ctx)
list_sources.register(mcp, ctx)
get_document.register(mcp, ctx)
index_now.register(mcp, ctx)
ask_memory.register(mcp, ctx)
pin_memory.register(mcp, ctx)
forget_memory.register(mcp, ctx)
query_timeline.register(mcp, ctx)
sync_source.register(mcp, ctx)
get_entity.register(mcp, ctx)
related_documents.register(mcp, ctx)
search_by_date.register(mcp, ctx)
forget_source.register(mcp, ctx)
summarize_source.register(mcp, ctx)
graph_memory.register(mcp, ctx)
```

- `src/memorymesh/server/app.py:35-62` (helper): the `AppContext` dataclass — the `ctx` dependency injected into `graph_memory`, providing `metadata_store` (entity/chunk queries), `audit_logger`, and auth guard dependencies.
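The per-module `register(mcp, ctx)` convention can be sketched without the MCP dependency: each tool module exposes a `register` function that decorates its handler onto the server object while closing over the shared context. The `FakeRegistry` class below is a stand-in for `FastMCP`, written only to illustrate the pattern.

```python
class FakeRegistry:
    """Minimal stand-in for FastMCP's tool-registration surface (assumption)."""
    def __init__(self):
        self.tools = {}

    def tool(self):
        # Mirrors the @mcp.tool() decorator-factory shape.
        def decorator(fn):
            self.tools[fn.__name__] = fn
            return fn
        return decorator


def register(mcp, ctx):
    # The handler closes over ctx, so no global state is needed.
    @mcp.tool()
    def graph_memory():
        return {"store": ctx["store"]}


registry = FakeRegistry()
register(registry, {"store": "sqlite"})
print(sorted(registry.tools))  # ['graph_memory']
```

The closure keeps each tool module free of module-level state, which is what lets all fifteen tools share one `AppContext` instance.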
```python
class AppContext:
    """All runtime dependencies for the MCP server.

    Frozen so tools cannot accidentally mutate shared state.

    Args:
        config: Root MemoryMesh configuration.
        vector_store: ChromaDB wrapper for dense vectors.
        metadata_store: SQLite metadata store.
        bm25: Sparse BM25 index.
        provider: Embedding provider.
        indexer: File indexer / pipeline orchestrator.
        engine: Hybrid search engine.
        audit_logger: Append-only JSONL audit logger.
        ollama_client: Optional Ollama LLM client. ``None`` when
            ``ollama.enabled: false`` in the config.
        tiered_memory: Optional tiered memory manager for hot/warm/cold
            tier support (Wave 3). ``None`` when memory tier config is
            absent.
        identity_resolver: Auth identity resolver (Wave 4). Always present;
            returns permissive defaults when ``auth.enabled: false``.
        acl_enforcer: ACL enforcer (Wave 4). Always present; no-op when
            auth is disabled.
        rate_limiter: Token-bucket rate limiter (Wave 4). Always present;
            no-op (unlimited) when auth is disabled.
        revocation_list: SQLite-backed revocation deny list (Wave 4).
            ``None`` when auth storage is unavailable.
    """
```

- `EntityRepository.list_entities()` — the underlying SQL query used by `graph_memory` to fetch entities filtered by `min_mentions` and `entity_type`.
```python
def list_entities(
    self,
    entity_type: str | None = None,
    min_mentions: int = 1,
    limit: int = 50,
) -> list[Entity]:
    """Return entities ranked by mention count.

    Args:
        entity_type: Filter by type. ``None`` = all types.
        min_mentions: Minimum mention count to include.
        limit: Maximum entities to return.
    """
    clauses: list[str] = ["mention_count >= ?"]
    params: list[object] = [min_mentions]
    if entity_type is not None:
        clauses.append("entity_type = ?")
        params.append(entity_type)
    where = "WHERE " + " AND ".join(clauses)
    params.append(limit)
    rows = (
        self._conn()
        .execute(
            f"SELECT * FROM entities {where} ORDER BY mention_count DESC LIMIT ?",
            params,
        )
        .fetchall()
    )
    return [
        Entity(
            name=r["name"],
            entity_type=r["entity_type"],
            mention_count=r["mention_count"],
            first_seen=r["first_seen"],
```
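The clause-building pattern above (append a parameterized filter only when the caller supplies it, then join with `AND`) can be exercised end-to-end against an in-memory SQLite database. The table schema below is an assumption containing only the columns this query touches, not the project's real `entities` schema.

```python
import sqlite3

# In-memory table with only the columns the query relies on (assumed schema).
conn = sqlite3.connect(":memory:")
conn.row_factory = sqlite3.Row
conn.execute("CREATE TABLE entities (name TEXT, entity_type TEXT, mention_count INTEGER)")
conn.executemany(
    "INSERT INTO entities VALUES (?, ?, ?)",
    [("Ada", "PERSON", 5), ("ACME", "ORG", 3), ("Bob", "PERSON", 1)],
)

def list_entity_names(entity_type=None, min_mentions=1, limit=50):
    # Same dynamic-WHERE pattern: every filter is a placeholder, never
    # string-interpolated, so user input cannot inject SQL.
    clauses, params = ["mention_count >= ?"], [min_mentions]
    if entity_type is not None:
        clauses.append("entity_type = ?")
        params.append(entity_type)
    params.append(limit)
    sql = (
        f"SELECT * FROM entities WHERE {' AND '.join(clauses)} "
        "ORDER BY mention_count DESC LIMIT ?"
    )
    return [r["name"] for r in conn.execute(sql, params)]

print(list_entity_names(min_mentions=2))        # ['Ada', 'ACME']
print(list_entity_names(entity_type="PERSON"))  # ['Ada', 'Bob']
```

Only the table and column names in the f-string are fixed strings from the code itself; all caller-controlled values travel through `?` placeholders.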