graph_memory

Retrieve an entity co-occurrence knowledge graph from your indexed documents. Nodes are named entities; an edge connects two entities that appear in the same document chunk, weighted by co-occurrence frequency.

Instructions

Return the entity co-occurrence knowledge graph.

Nodes are named entities extracted from the indexed corpus. An edge between two entities means they were mentioned in the same document chunk; the edge weight is the number of shared chunks.

Args:
    min_mentions: Minimum mention count for a node to appear.
    entity_type: Optional entity type filter (e.g. "PERSON").

Returns:
    Dict with "nodes" (id, label, type, mentions) and "edges" (source, target, weight, shared_chunks) lists.
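For illustration, a hypothetical return value for a graph with two connected entities (all names and chunk IDs invented; field names follow the documented shape):

    {
        "node_count": 2,
        "edge_count": 1,
        "nodes": [
            {"id": "Ada Lovelace", "label": "Ada Lovelace", "type": "PERSON", "mentions": 5},
            {"id": "Charles Babbage", "label": "Charles Babbage", "type": "PERSON", "mentions": 3},
        ],
        "edges": [
            {"source": "Ada Lovelace", "target": "Charles Babbage", "weight": 1, "shared_chunks": ["chunk-42"]},
        ],
    }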

Input Schema

Name          Required  Description                                                  Default
min_mentions  No        Only include entities with at least this many mentions.     2
entity_type   No        Restrict to a specific entity type (e.g. 'PERSON', 'ORG').  None
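As an illustration, the arguments an MCP client might pass when calling the tool (values here are arbitrary; both fields are optional):

    # Hypothetical tools/call arguments for graph_memory.
    arguments = {
        "min_mentions": 3,        # drop entities mentioned fewer than 3 times
        "entity_type": "PERSON",  # restrict the graph to PERSON entities
    }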

Implementation Reference

  • The graph_memory MCP tool handler function. It is decorated with @mcp.tool() and returns a dict with 'nodes', 'edges', 'node_count', and 'edge_count'. It queries entities via ctx.metadata_store.list_entities() and computes edges by intersecting shared chunk IDs.
    """MCP tool: graph_memory.
    
    Exposes the in-memory knowledge graph derived from entity co-occurrence.
    Two entities are connected when they appear in the same indexed chunk.
    Edge weight = number of shared chunks.
    
    This tool is intentionally read-only and pure-function — the graph is computed
    on the fly (with a 60 s server-side cache in the dashboard) and never mutated.
    """
    
    from __future__ import annotations
    
    from typing import TYPE_CHECKING, Annotated
    
    from mcp.server.fastmcp import FastMCP
    
    if TYPE_CHECKING:
        from memorymesh.server.app import AppContext
    
    
    def register(mcp: FastMCP, ctx: AppContext) -> None:
        """Register the ``graph_memory`` tool on *mcp* with *ctx* injected.
    
        Args:
            mcp: The FastMCP instance to register onto.
            ctx: Shared application context (injected via closure).
        """
    
        @mcp.tool()
        def graph_memory(
            min_mentions: Annotated[
                int,
                "Only include entities with at least this many mentions (default 2).",
            ] = 2,
            entity_type: Annotated[
                str | None,
                "Restrict to a specific entity type (e.g. 'PERSON', 'ORG'). Omit for all.",
            ] = None,
        ) -> dict:
            """Return the entity co-occurrence knowledge graph.
    
            Nodes are named entities extracted from the indexed corpus.  An edge
            between two entities means they were mentioned in the same document
            chunk; the edge weight is the number of shared chunks.
    
            Args:
                min_mentions: Minimum mention count for a node to appear.
                entity_type: Optional entity type filter (e.g. ``"PERSON"``).
    
            Returns:
                Dict with ``"nodes"`` (id, label, type, mentions) and ``"edges"``
                (source, target, weight, shared_chunks) lists.
            """
            from memorymesh.server.auth_guard import check_access
    
            if (err := check_access(ctx, "read")) is not None:
                return err
    
            try:
                entities = ctx.metadata_store.list_entities(
                    min_mentions=min_mentions,
                    entity_type=entity_type,
                    limit=200,
                )
            except Exception as exc:
                return {"error": str(exc), "nodes": [], "edges": []}
    
            entity_chunks: dict[str, set[str]] = {}
            for ent in entities:
                try:
                    chunk_ids = ctx.metadata_store.get_entity_chunks(ent.name, ent.entity_type)
                except Exception:
                    chunk_ids = []
                entity_chunks[ent.name] = set(chunk_ids)
    
            nodes = [
                {
                    "id": ent.name,
                    "label": ent.name,
                    "type": ent.entity_type,
                    "mentions": ent.mention_count,
                }
                for ent in entities
            ]
    
            entity_list = list(entities)
            edges: list[dict] = []
            for i, a in enumerate(entity_list):
                for b in entity_list[i + 1 :]:
                    shared = entity_chunks[a.name] & entity_chunks[b.name]
                    if shared:
                        edges.append(
                            {
                                "source": a.name,
                                "target": b.name,
                                "weight": len(shared),
                                "shared_chunks": sorted(shared),
                            }
                        )
    
            ctx.audit_logger.log_query(
                tool="graph_memory",
                query=f"min_mentions={min_mentions} entity_type={entity_type}",
                n_results=len(nodes),
                latency_ms=0.0,
            )
    
            return {
                "node_count": len(nodes),
                "edge_count": len(edges),
                "nodes": nodes,
                "edges": edges,
            }
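To make the pairwise edge computation concrete, here is a self-contained toy run of the same intersection logic with invented entity-to-chunk data (all names and chunk IDs are made up):

    # Toy data: which chunks each entity appears in (invented for illustration).
    entity_chunks = {
        "Ada Lovelace": {"c1", "c2", "c3"},
        "Charles Babbage": {"c2", "c3"},
        "Alan Turing": {"c9"},
    }

    names = list(entity_chunks)
    edges = []
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            shared = entity_chunks[a] & entity_chunks[b]  # set intersection
            if shared:
                edges.append({
                    "source": a,
                    "target": b,
                    "weight": len(shared),
                    "shared_chunks": sorted(shared),
                })

    # edges == [{"source": "Ada Lovelace", "target": "Charles Babbage",
    #            "weight": 2, "shared_chunks": ["c2", "c3"]}]

The loop is O(n^2) in the number of entities, but with the handler's limit of 200 entities that is at most 200 * 199 / 2 = 19,900 intersections, which is why the graph can be computed on the fly.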
  • Input schema for graph_memory: 'min_mentions' (int, default 2) and 'entity_type' (optional str, e.g. 'PERSON' or 'ORG'). Annotated with descriptions for MCP tool schema generation.
    @mcp.tool()
    def graph_memory(
        min_mentions: Annotated[
            int,
            "Only include entities with at least this many mentions (default 2).",
        ] = 2,
        entity_type: Annotated[
            str | None,
            "Restrict to a specific entity type (e.g. 'PERSON', 'ORG'). Omit for all.",
        ] = None,
    ) -> dict:
  • graph_memory module is imported at line 114 and registered at line 140 via graph_memory.register(mcp, ctx) — the standard registration pattern for all tools in this server.
    from memorymesh.server.tools import (
        ask_memory,
        forget_memory,
        forget_source,
        get_document,
        get_entity,
        graph_memory,
        index_now,
        list_sources,
        pin_memory,
        query_timeline,
        related_documents,
        search_by_date,
        search_memory,
        summarize_source,
        sync_source,
    )
    
    search_memory.register(mcp, ctx)
    list_sources.register(mcp, ctx)
    get_document.register(mcp, ctx)
    index_now.register(mcp, ctx)
    ask_memory.register(mcp, ctx)
    pin_memory.register(mcp, ctx)
    forget_memory.register(mcp, ctx)
    query_timeline.register(mcp, ctx)
    sync_source.register(mcp, ctx)
    get_entity.register(mcp, ctx)
    related_documents.register(mcp, ctx)
    search_by_date.register(mcp, ctx)
    forget_source.register(mcp, ctx)
    summarize_source.register(mcp, ctx)
    graph_memory.register(mcp, ctx)
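Every tool module exposes the same register(mcp, ctx) entry point. As a hypothetical sketch of what adding a new tool would look like (the tool name and body are invented):

    # Hypothetical new tool module following the same registration pattern.
    def register(mcp: FastMCP, ctx: AppContext) -> None:
        @mcp.tool()
        def ping_memory() -> dict:
            """Trivial example tool; returns a static payload."""
            return {"ok": True}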
  • AppContext dataclass — the 'ctx' dependency injected into graph_memory, providing metadata_store (entity/chunk queries), audit_logger, and auth guard dependencies.
    class AppContext:
        """All runtime dependencies for the MCP server.
    
        Frozen so tools cannot accidentally mutate shared state.
    
        Args:
            config: Root MemoryMesh configuration.
            vector_store: ChromaDB wrapper for dense vectors.
            metadata_store: SQLite metadata store.
            bm25: Sparse BM25 index.
            provider: Embedding provider.
            indexer: File indexer / pipeline orchestrator.
            engine: Hybrid search engine.
            audit_logger: Append-only JSONL audit logger.
            ollama_client: Optional Ollama LLM client.  ``None`` when
                ``ollama.enabled: false`` in the config.
            tiered_memory: Optional tiered memory manager for hot/warm/cold tier
                support (Wave 3).  ``None`` when memory tier config is absent.
            identity_resolver: Auth identity resolver (Wave 4).  Always present;
                returns permissive defaults when ``auth.enabled: false``.
            acl_enforcer: ACL enforcer (Wave 4).  Always present; no-op when auth
                is disabled.
            rate_limiter: Token-bucket rate limiter (Wave 4).  Always present;
                no-op (unlimited) when auth is disabled.
            revocation_list: SQLite-backed revocation deny list (Wave 4).
                ``None`` when auth storage is unavailable.
        """
  • EntityRepository.list_entities() — the underlying SQL query used by graph_memory to fetch entities filtered by min_mentions and entity_type.
    def list_entities(
        self,
        entity_type: str | None = None,
        min_mentions: int = 1,
        limit: int = 50,
    ) -> list[Entity]:
        """Return entities ranked by mention count.
    
        Args:
            entity_type: Filter by type.  ``None`` = all types.
            min_mentions: Minimum mention count to include.
            limit: Maximum entities to return.
        """
        clauses: list[str] = ["mention_count >= ?"]
        params: list[object] = [min_mentions]
        if entity_type is not None:
            clauses.append("entity_type = ?")
            params.append(entity_type)
        where = "WHERE " + " AND ".join(clauses)
        params.append(limit)
        rows = (
            self._conn()
            .execute(
                f"SELECT * FROM entities {where} ORDER BY mention_count DESC LIMIT ?",
                params,
            )
            .fetchall()
        )
        return [
            Entity(
                name=r["name"],
                entity_type=r["entity_type"],
                mention_count=r["mention_count"],
                first_seen=r["first_seen"],
            )
            for r in rows
        ]
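For instance, a call with entity_type="PERSON" and min_mentions=2 composes the query shown below (parameter list reconstructed from the code above):

    # Effective SQL for list_entities(entity_type="PERSON", min_mentions=2, limit=50):
    #   SELECT * FROM entities WHERE mention_count >= ? AND entity_type = ?
    #       ORDER BY mention_count DESC LIMIT ?
    # with params == [2, "PERSON", 50]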
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Although the tool declares no structured annotations, the description details the graph structure: nodes are entities from the indexed corpus, edges represent co-occurrence in document chunks, and edge weight is the shared chunk count. It also specifies the return format. However, it does not mention destructive behavior, rate limits, or required permissions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with sections but somewhat verbose (multiple lines). It could be trimmed without losing clarity, e.g., by stating the edge definition more succinctly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Covers purpose, parameters, and return format (nodes and edges with fields). Given no output schema, this is adequate. However, it lacks details on default behavior, limits, or error conditions.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 0%, but the description explains both parameters: min_mentions as 'minimum mention count for a node to appear' and entity_type as 'optional entity type filter (e.g. PERSON)'. This adds meaningful context beyond the schema's type and default.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states it returns the entity co-occurrence knowledge graph, explaining nodes and edges with specific definitions. The verb 'return' and resource 'knowledge graph' are precise, and it distinguishes itself from sibling tools like get_entity by focusing on graph structure.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when to use this tool versus alternatives like get_entity or search_memory. It describes parameters but does not indicate use cases or scenarios where this tool is preferred.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
