mnemostack_health

Check health of embedding, vector store, and optional graph components. Identify component operational status.

Instructions

Check health of mnemostack components (embedding, vector store, optional graph).

Input Schema


No arguments

Output Schema


No arguments
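
Although no output schema is published, the handler shown under Implementation Reference returns a predictable shape. The sketch below illustrates a typical healthy response; all field values are hypothetical and will vary by deployment, and the `graph` component appears only when a Memgraph URI is configured.

```python
# Illustrative shape of a mnemostack_health response.
# Values (provider, dimension, collection name, counts) are examples only.
sample_health = {
    "ok": True,
    "components": {
        "embedding": {
            "ok": True,
            "provider": "ollama",   # hypothetical provider
            "dimension": 768,       # hypothetical dimension
            "message": "ok",
        },
        "vector": {
            "ok": True,
            "collection": "memories",  # hypothetical collection name
            "exists": True,
            "points": 1234,
        },
        # "graph": {...}  # present only when Memgraph is configured
    },
}

# The top-level "ok" is False if any component reported a failure.
assert sample_health["ok"]
assert set(sample_health["components"]) >= {"embedding", "vector"}
```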

Implementation Reference

  • The mnemostack_health tool handler function. Checks health of embedding provider, Qdrant vector store, and optional Memgraph graph database. Returns a dict with ok boolean and per-component status.
    @mcp.tool()
    def mnemostack_health() -> dict:
        """Check health of all mnemostack components.
    
        Read-only, no side effects, no authentication required. Returns a JSON
        object with ok (bool) and per-component status for the embedding
        provider, Qdrant vector store, and optional Memgraph graph database.
        Use this to verify the memory backend is reachable before issuing recall
        queries.
        """
        result: dict[str, Any] = {"ok": True, "components": {}}
        try:
            emb = _get_embedding()
            ok, msg = emb.health_check()
            result["components"]["embedding"] = {
                "ok": ok,
                "provider": emb.name,
                "dimension": emb.dimension,
                "message": msg,
            }
            if not ok:
                result["ok"] = False
        except Exception as e:  # noqa: BLE001
            result["components"]["embedding"] = {"ok": False, "error": str(e)}
            result["ok"] = False
    
        try:
            vec = _get_vector()
            exists = vec.collection_exists()
            count = vec.count() if exists else 0
            result["components"]["vector"] = {
                "ok": True,
                "collection": vec.collection,
                "exists": exists,
                "points": count,
            }
        except Exception as e:  # noqa: BLE001
            result["components"]["vector"] = {"ok": False, "error": str(e)}
            result["ok"] = False
    
        if memgraph_uri:
            try:
                from ..graph import GraphStore
    
                gs = GraphStore(uri=memgraph_uri, timeout=graph_timeout)
                ok, msg = gs.health_check()
                result["components"]["graph"] = {
                    "ok": ok,
                    "nodes": gs.count_nodes() if ok else 0,
                    "edges": gs.count_edges() if ok else 0,
                    "message": msg,
                }
                gs.close()
                if not ok:
                    result["ok"] = False
            except Exception as e:  # noqa: BLE001
                result["components"]["graph"] = {"ok": False, "error": str(e)}
                result["ok"] = False
    
        return result
  • Registered via the @mcp.tool() decorator on the function, which registers it with the FastMCP server instance created at line 77.
    @mcp.tool()
    def mnemostack_health() -> dict:
  • Documentation listing mnemostack_health as an exposed tool in the module docstring.
    - mnemostack_health — check all components
  • Test verifying that mnemostack_health is registered as a tool on the MCP server.
    def test_build_server_registers_core_tools():
        mcp = build_server(collection="test", embedding_provider="ollama")
        names = _list_tool_names(mcp)
        assert "mnemostack_health" in names
        assert "mnemostack_search" in names
        assert "mnemostack_answer" in names
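
The handler's per-component try/except structure (trap each failure, record it, flip the top-level `ok`) generalizes to a small aggregation pattern. The sketch below isolates that pattern with hypothetical helper and component names; it is not part of mnemostack, just an illustration of the same control flow under stubbed checks.

```python
from typing import Any, Callable

def aggregate_health(checks: dict[str, Callable[[], dict[str, Any]]]) -> dict[str, Any]:
    """Run each named check, trapping exceptions so one failing
    component cannot mask the status of the others."""
    result: dict[str, Any] = {"ok": True, "components": {}}
    for name, check in checks.items():
        try:
            status = check()
        except Exception as e:  # mirror the handler's broad catch
            status = {"ok": False, "error": str(e)}
        result["components"][name] = status
        if not status.get("ok", False):
            result["ok"] = False
    return result

def _broken() -> dict[str, Any]:
    # Stand-in for an unreachable backend.
    raise ConnectionError("refused")

report = aggregate_health({
    "embedding": lambda: {"ok": True, "provider": "ollama"},
    "vector": _broken,
})
print(report["ok"])                              # False: one component failed
print(report["components"]["vector"]["error"])   # refused
```

The design choice mirrors the real handler: a failing vector store still lets the embedding status be reported, which is what makes the tool useful for diagnosing *which* backend is down rather than merely that something is.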
Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. "Check health" implies a read operation, but the description does not state that the tool is idempotent and non-destructive, or explain what the output represents. Behavioral detail is lacking.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, front-loaded sentence that efficiently conveys the purpose. It is appropriately concise, though it could include a bit more context without losing that concision.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The description covers the core purpose but lacks usage context. With an output schema present, return values need not be restated, but the description does not explain what "health" means here or how results are structured. It is adequate, with gaps.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With zero parameters, baseline is 4. The description correctly implies no inputs are needed, consistent with the schema (100% coverage). No additional parameter detail required.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a clear verb 'check' and specifies the resource 'health of mnemostack components' with a list of components (embedding, vector store, optional graph). It clearly distinguishes the tool from siblings focused on answering, feedback, and search.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus its alternatives, nor any context for usage. Given that sibling tools exist, some direction is missing.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
