memory_stats

Troubleshoot missing memory search results or assess local data footprint by retrieving entry count, size, distinct tags, embedding index status, and last-write timestamp from the local memory database.

Instructions

Return statistics about the local memory database (entry count, size, tags, embedding state).

Returns counts (rows, distinct tags, total bytes), embedding index status (built / building / stale), and last-write timestamp.

USE WHEN: troubleshooting why memory_semantic_search returns no results, or sizing the user's local data footprint.

BEHAVIOR: pure read of metadata. No side effects.

Input Schema


No arguments

Output Schema

result (required)
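Based on the fields assembled by the handler shown under Implementation Reference, a representative `result` payload might look like the following sketch. All values here are illustrative, not real output from any installation.

```python
import json

# Hypothetical example of the JSON string memory_stats returns, built from
# the keys set in MemoryStore.stats() plus the fields the handler adds
# (embedding_model_loaded, warm_db_kb, cold_db_kb). Values are made up.
example_stats = {
    "hot_entries": 12,
    "warm_entries": 340,
    "cold_windows": 7,
    "warm_db_bytes": 262144,
    "cold_db_bytes": 65536,
    "data_dir": "/home/user/.contextpulse",
    "embedding_model_loaded": True,
    "warm_db_kb": 256.0,
    "cold_db_kb": 64.0,
}
print(json.dumps(example_stats, default=str))
```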

Implementation Reference

  • Actual handler for the memory_stats tool. Gets a MemoryStore instance, collects tier stats, adds embedding model availability, adds human-friendly KB sizes, and returns JSON.
    @mcp_app.tool()
    @_require_starter
    def memory_stats() -> str:
        """Return storage statistics for the memory system.
    
        Reports entry counts per tier, database sizes, data directory path,
        and whether the semantic embedding model is loaded.
        """
        store = _get_store()
        stats = store.stats()
    
        # Add embedding engine availability
        try:
            from contextpulse_memory.embeddings import get_engine
            engine = get_engine()
            stats["embedding_model_loaded"] = engine.is_available()
        except Exception:
            stats["embedding_model_loaded"] = False
    
        # Human-friendly size fields
        stats["warm_db_kb"] = round(stats["warm_db_bytes"] / 1024, 1)
        stats["cold_db_kb"] = round(stats["cold_db_bytes"] / 1024, 1)
    
        return json.dumps(stats, default=str)
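Because the handler returns a JSON string rather than a structured object, a calling agent or client has to parse it before use. A minimal client-side sketch, assuming only the field names visible in the handler above (the helper name `summarize_memory_stats` is hypothetical):

```python
import json

def summarize_memory_stats(raw: str) -> str:
    """Parse the JSON string returned by memory_stats into a one-line summary.

    `raw` is the tool's return value; field names follow the handler above.
    Missing fields fall back to zero so a partial payload still summarizes.
    """
    stats = json.loads(raw)
    total_kb = stats.get("warm_db_kb", 0) + stats.get("cold_db_kb", 0)
    model = "loaded" if stats.get("embedding_model_loaded") else "not loaded"
    return (
        f"{stats.get('hot_entries', 0)} hot / {stats.get('warm_entries', 0)} warm / "
        f"{stats.get('cold_windows', 0)} cold, {total_kb:.1f} KB on disk, "
        f"embedding model {model}"
    )
```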
  • Tool registration via @mcp_app.tool() decorator. Uses @_require_starter license gate (free tier).
    @mcp_app.tool()
    @_require_starter
    def memory_stats() -> str:
  • MemoryStore.stats() helper called by the handler. Returns per-tier entry counts and database file sizes.
    def stats(self) -> dict[str, Any]:
        """Return storage statistics across all three tiers."""
        warm_db = self._data_dir / "memory.db"
        cold_db = self._data_dir / "memory_cold.db"
        return {
            "hot_entries": len(self.hot),
            "warm_entries": self.warm.count(),
            "cold_windows": self.cold.count(),
            "warm_db_bytes": warm_db.stat().st_size if warm_db.exists() else 0,
            "cold_db_bytes": cold_db.stat().st_size if cold_db.exists() else 0,
            "data_dir": str(self._data_dir),
        }
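The size bookkeeping in `stats()` can be exercised in isolation. This standalone sketch mirrors only the file-size logic above using a temporary directory; `MemoryStore` itself is not imported, and the `db_sizes` helper is hypothetical:

```python
import tempfile
from pathlib import Path

def db_sizes(data_dir: Path) -> dict[str, int]:
    """Mirror the file-size logic in MemoryStore.stats(): stat the two
    SQLite files if present, otherwise report 0 bytes."""
    warm_db = data_dir / "memory.db"
    cold_db = data_dir / "memory_cold.db"
    return {
        "warm_db_bytes": warm_db.stat().st_size if warm_db.exists() else 0,
        "cold_db_bytes": cold_db.stat().st_size if cold_db.exists() else 0,
    }

with tempfile.TemporaryDirectory() as tmp:
    data_dir = Path(tmp)
    (data_dir / "memory.db").write_bytes(b"x" * 1024)  # fake 1 KiB warm db
    sizes = db_sizes(data_dir)  # cold db absent, so it reports 0 bytes
```

A missing database file yields a zero byte count rather than an error, which matches the `exists()` guards in the real method.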
  • Glama.ai registry stub registration of memory_stats. Returns a message telling users to install the local daemon.
    @mcp_app.tool()
    def memory_stats() -> str:
        """Return statistics about the local memory database (entry count, size, tags, embedding state).
    
        Returns counts (rows, distinct tags, total bytes), embedding index status
        (built / building / stale), and last-write timestamp.
    
        USE WHEN: troubleshooting why memory_semantic_search returns no results, or
        sizing the user's local data footprint.
    
        BEHAVIOR: pure read of metadata. No side effects.
        """
        return _LOCAL_ONLY_MSG
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Declares it is a pure read of metadata with no side effects, which is essential given no annotations are present.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Four concise statements: purpose, output details, use cases, behavior. No unnecessary words; front-loaded with the key action.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Fully covers purpose, usage triggers, behavior, and output summary for a zero-parameter tool with output schema.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

No parameters exist, so schema coverage is 100% by default. Description adds value by listing output fields, compensating for lack of parameter detail.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states it returns statistics about the local memory database, specifying entry count, size, tags, and embedding state. Distinct from sibling tools like memory_list or memory_semantic_search.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicit 'USE WHEN' clause identifies specific scenarios: troubleshooting missing results from memory_semantic_search or sizing data footprint.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
