
memory_insights

Analyze stored memories to generate activity summaries, identify open items, suggest consolidations, and detect patterns for better context management.

Instructions

Analyze stored memories and produce actionable insights.

Returns activity summary, open items, consolidation suggestions, and optional LLM-powered pattern detection.

Args:

- `period`: Time period to analyze (e.g., "7d", "1m", "1y")
- `include_llm_analysis`: If True, use LLM to detect patterns and themes
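For illustration, the "7d" / "1m" / "1y" period strings could be parsed into a duration along these lines. This is a hypothetical sketch, not memora's actual parser; the helper name `parse_period` and the day counts assigned to "m" and "y" are assumptions.

```python
import re
from datetime import timedelta

# Approximate day counts per unit; illustrative, not taken from memora.
_UNITS = {"d": 1, "m": 30, "y": 365}

def parse_period(period: str) -> timedelta:
    """Parse a compact period string such as "7d", "1m", or "1y"."""
    match = re.fullmatch(r"(\d+)([dmy])", period)
    if not match:
        raise ValueError(f"invalid period: {period!r}")
    count, unit = match.groups()
    return timedelta(days=int(count) * _UNITS[unit])
```

A server would typically validate the string this way before filtering memories by creation date.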

Returns: Dictionary with:

- activity_summary: Created counts by type and tag
- open_items: Open TODOs and issues with stale detection
- consolidation_candidates: Similar memory pairs that could be merged
- llm_analysis: Themes, focus areas, gaps, and summary (or null)

Rate limited: 120s cooldown.
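To make the return fields concrete, the dictionary might look something like the sketch below. All values are invented for demonstration; the exact nesting and field names inside each section may differ by server version.

```python
# Illustrative shape of a memory_insights result; values are fabricated
# examples, not output from an actual memora server.
example_result = {
    "activity_summary": {
        "created_by_type": {"note": 12, "todo": 5},
        "created_by_tag": {"project-x": 8},
    },
    "open_items": [
        {"id": "mem_123", "type": "todo", "title": "Refactor parser", "stale": True},
    ],
    "consolidation_candidates": [
        {"pair": ["mem_101", "mem_107"], "similarity": 0.91},
    ],
    # null when include_llm_analysis is False or no LLM is configured
    "llm_analysis": None,
}
```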

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| period | No | | 7d |
| include_llm_analysis | No | | |

Output Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| result | Yes | | |

Implementation Reference

  • The handler function `memory_insights` implements the MCP tool, invoking `_generate_insights` which wraps the core logic.
    async def memory_insights(
        period: str = "7d",
        include_llm_analysis: bool = True,
    ) -> Dict[str, Any]:
        """Analyze stored memories and produce actionable insights.
    
        Returns activity summary, open items, consolidation suggestions,
        and optional LLM-powered pattern detection.
    
        Args:
            period: Time period to analyze (e.g., "7d", "1m", "1y")
            include_llm_analysis: If True, use LLM to detect patterns and themes
    
        Returns:
            Dictionary with:
            - activity_summary: Created counts by type and tag
            - open_items: Open TODOs and issues with stale detection
            - consolidation_candidates: Similar memory pairs that could be merged
            - llm_analysis: Themes, focus areas, gaps, and summary (or null)
        Rate limited: 120s cooldown.
        """
        if msg := _check_tool_cooldown("memory_insights"):
            return {"error": "rate_limited", "message": msg}
        try:
            stale_days = int(os.getenv("MEMORA_STALE_DAYS", "14"))
            return _generate_insights(period, stale_days, include_llm_analysis)
        finally:
            _finish_tool("memory_insights")
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full behavioral disclosure burden. It successfully discloses the 120s rate limit cooldown and explains the optional LLM-powered analysis behavior. However, it doesn't explicitly confirm read-only status (implied by 'analyze' but not stated) or error handling for invalid periods.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The Args/Returns structure is appropriate for the technical content and 0% schema coverage. Every section earns its place: purpose statement, parameter documentation, return value semantics, and rate limit warning. Slightly verbose but justified by the lack of structured metadata elsewhere.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the output schema exists, the description appropriately focuses on adding semantic meaning to return fields (explaining what 'consolidation_candidates' and 'open_items' represent) rather than just repeating types. It covers the rate limit constraint and parameter details that the schema omits.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, requiring the description to fully compensate. It provides excellent parameter documentation: period includes syntax examples ('7d', '1m', '1y') and include_llm_analysis explains the behavioral consequence ('use LLM to detect patterns'). This fully bridges the schema gap.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description opens with a specific verb ('Analyze') and resource ('stored memories'), clearly indicating this is an analytical/reporting tool. It distinguishes itself from siblings like memory_search (retrieval), memory_create (mutation), and memory_stats (raw metrics) by focusing on 'actionable insights' and synthesis.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

While the description implies usage through detailed output specifications (activity_summary, open_items), it lacks explicit guidance on when to choose this over similar analytical siblings like memory_stats or memory_clusters. No 'when-not-to-use' or alternative recommendations are provided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
