
Optimize Memory File

optimize_memory

Optimize memory files by using AI to reorganize and consolidate entries, preserving all information while improving efficiency and organization.

Instructions

Manually optimize a memory file using AI to reorganize and consolidate entries while preserving all information.

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| memory_file | No | | |
| force | No | Force optimization regardless of criteria | |

Output Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| result | Yes | | |
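Both parameters are optional, so the simplest call sends no arguments at all. A minimal sketch of the argument shapes the input schema accepts (the file path below is purely hypothetical, chosen for illustration):

```python
# Call-argument shapes matching the input schema above.
# Omitting memory_file targets the user's main memory file.
default_args: dict = {}  # optimize the default user memory file
explicit_args = {
    "memory_file": "/home/user/.mode-manager/memory.md",  # hypothetical path
    "force": True,  # bypass the size/entry/time criteria
}
```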

Implementation Reference

  • The main handler function for the 'optimize_memory' tool. It resolves the memory file path (defaulting to user's main memory), instantiates MemoryOptimizer, calls optimize_memory_if_needed with force option, and formats/returns the detailed result message.
    async def optimize_memory(
        ctx: Context,
        memory_file: Annotated[Optional[str], "Path to memory file to optimize"] = None,
        force: Annotated[bool, "Force optimization regardless of criteria"] = False,
    ) -> str:
        """Manually optimize a memory file using AI sampling."""
        if read_only:
            return "Error: Server is running in read-only mode"
    
        try:
            # Determine which file to optimize
            if memory_file:
                file_path = Path(memory_file)
                if not file_path.exists():
                    return f"Error: Memory file not found: {memory_file}"
            else:
                # Use default user memory file
                user_memory_path = instruction_manager.get_memory_file_path()
                if not user_memory_path.exists():
                    return "Error: No user memory file found to optimize"
                file_path = user_memory_path
    
            # Create optimizer and run optimization
            optimizer = MemoryOptimizer(instruction_manager)
            result = await optimizer.optimize_memory_if_needed(file_path, ctx, force=force)
    
            # Format result message
            status = result.get("status", "unknown")
            if status == "optimized":
                entries_before = result.get("entries_before", "unknown")
                entries_after = result.get("entries_after", "unknown")
                backup_created = result.get("backup_created", False)
    
                message = f"βœ… Memory optimization completed successfully!\n"
                message += f"πŸ“Š Entries: {entries_before} β†’ {entries_after}\n"
                message += f"πŸ”„ Method: {result.get('method', 'ai')}\n"
                message += f"πŸ’Ύ Backup created: {'Yes' if backup_created else 'No'}\n"
                message += f"πŸ“ Reason: {result.get('reason', 'Manual optimization')}"
    
            elif status == "metadata_updated":
                message = f"πŸ“ Memory metadata updated (AI optimization unavailable)\n"
                message += f"πŸ’Ύ Backup created: {'Yes' if result.get('backup_created', False) else 'No'}\n"
                message += f"πŸ“ Reason: {result.get('reason', 'Manual optimization')}"
    
            elif status == "skipped":
                message = f"⏭️ Optimization skipped: {result.get('reason', 'Unknown reason')}\n"
                message += f"πŸ’‘ Use force=True to optimize anyway"
    
            elif status == "error":
                message = f"❌ Optimization failed: {result.get('reason', 'Unknown error')}"
    
            else:
                message = f"πŸ” Optimization result: {status}"
    
            return message
    
        except Exception as e:
            return f"Error during memory optimization: {str(e)}"
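The status-to-message branching above can be exercised in isolation. A small sketch, where the result dict is a hand-written example mirroring the shape returned by `optimize_memory_if_needed`, not real tool output:

```python
# Hand-written example of a "skipped" result dict.
result = {"status": "skipped", "reason": "Below size/entry/time thresholds"}

# Same formatting logic as the handler's "skipped" branch.
status = result.get("status", "unknown")
if status == "skipped":
    message = f"⏭️ Optimization skipped: {result.get('reason', 'Unknown reason')}\n"
    message += "πŸ’‘ Use force=True to optimize anyway"
else:
    message = f"πŸ” Optimization result: {status}"
```

For this input, `message` is the two-line hint telling the agent to retry with `force=True`.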
  • Registration of the 'optimize_memory' tool using @app.tool decorator, including schema for parameters (memory_file optional str, force bool), description, tags, and metadata.
    @app.tool(
        name="optimize_memory",
        description="Manually optimize a memory file using AI to reorganize and consolidate entries while preserving all information.",
        tags={"public", "memory"},
        annotations={
            "idempotentHint": False,
            "readOnlyHint": False,
            "title": "Optimize Memory File",
            "parameters": {
                "memory_file": "Optional path to specific memory file. If not provided, will optimize the user's main memory file.",
                "force": "Force optimization even if criteria are not met. Defaults to False.",
            },
            "returns": "Returns detailed results of the optimization process including status, entries before/after, and backup information.",
        },
        meta={
            "category": "memory",
        },
    )
  • Input/output schema definition in tool annotations, specifying parameters and return description.
    annotations={
        "idempotentHint": False,
        "readOnlyHint": False,
        "title": "Optimize Memory File",
        "parameters": {
            "memory_file": "Optional path to specific memory file. If not provided, will optimize the user's main memory file.",
            "force": "Force optimization even if criteria are not met. Defaults to False.",
        },
        "returns": "Returns detailed results of the optimization process including status, entries before/after, and backup information.",
    },
  • Core helper method called by the tool handler. Determines if optimization needed (size, entries, time thresholds), performs AI-based content reorganization using ctx.sample, updates frontmatter metadata, creates backup, returns status/results.
    async def optimize_memory_if_needed(self, file_path: Path, ctx: Context, force: bool = False) -> Dict[str, Any]:
        """
        Main optimization method with full backward compatibility.
    
        Args:
            file_path: Path to memory file
            ctx: FastMCP context for AI sampling
            force: Force optimization regardless of criteria
    
        Returns:
            Dict with optimization results
        """
        try:
            # Get metadata (with backward compatibility)
            metadata = self._get_memory_metadata(file_path)
    
            # Check if optimization is needed
            if not force:
                should_optimize, reason = self._should_optimize_memory(file_path, metadata)
                if not should_optimize:
                    return {"status": "skipped", "reason": reason, "metadata": metadata}
            else:
                reason = "Forced optimization"
    
            # Read current content
            frontmatter, content = parse_frontmatter_file(file_path)
            full_content = f"---\n"
            for key, value in frontmatter.items():
                if isinstance(value, str) and ('"' in value or "'" in value):
                    full_content += f'{key}: "{value}"\n'
                else:
                    full_content += f"{key}: {value}\n"
            full_content += f"---\n{content}"
    
            logger.info(f"Starting memory optimization: {reason}")
    
            # Try AI optimization
            optimized_content = await self._optimize_memory_with_ai(ctx, full_content)
    
            if optimized_content:
                # Parse optimized content directly from string
                optimized_frontmatter, optimized_body = parse_frontmatter(optimized_content)
    
                # Update metadata in the optimized frontmatter
                entry_count = self._count_memory_entries(optimized_body)
                optimized_frontmatter.update(
                    {
                        "lastOptimized": datetime.datetime.now(datetime.timezone.utc).isoformat(),
                        "entryCount": entry_count,
                        "optimizationVersion": frontmatter.get("optimizationVersion", 0) + 1,
                    }
                )
    
                # Preserve user preferences from original frontmatter
                for key in ["autoOptimize", "sizeThreshold", "entryThreshold", "timeThreshold"]:
                    if key in frontmatter:
                        optimized_frontmatter[key] = frontmatter[key]
                    elif key not in optimized_frontmatter:
                        # Set sensible defaults for new files
                        defaults = {"autoOptimize": True, "sizeThreshold": 50000, "entryThreshold": 20, "timeThreshold": 7}
                        optimized_frontmatter[key] = defaults[key]
    
                # Write optimized content
                success = write_frontmatter_file(file_path, optimized_frontmatter, optimized_body, create_backup=True)
    
                # Determine if backup was actually created (skipped for git repos)
                backup_created = False if _is_in_git_repository(file_path) else success
    
                if success:
                    logger.info(f"Memory optimization completed successfully")
                    return {"status": "optimized", "reason": reason, "method": "ai", "entries_before": metadata.get("entryCount", 0), "entries_after": entry_count, "backup_created": backup_created}
                else:
                    return {"status": "error", "reason": "Failed to write optimized content"}
            else:
                # AI optimization failed, just update metadata
                logger.info("AI optimization unavailable, updating metadata only")
                success = self._update_metadata(file_path, content)
    
                # Determine if backup was actually created (skipped for git repos)
                backup_created = False if _is_in_git_repository(file_path) else success
    
                return {"status": "metadata_updated", "reason": reason, "method": "metadata_only", "ai_available": False, "backup_created": backup_created}
    
        except Exception as e:
            logger.error(f"Memory optimization failed: {e}")
            return {"status": "error", "reason": str(e)}
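The `_should_optimize_memory` helper that gates the non-forced path is not shown on this page. A hypothetical sketch of such a criteria check, assuming the default thresholds visible in the code above (50,000 bytes, 20 entries, 7 days since last optimization):

```python
from typing import Tuple

# Hypothetical sketch of a size/entry/time criteria check; the real
# _should_optimize_memory is not reproduced on this page.
def should_optimize(size_bytes: int, entry_count: int, days_since: float,
                    size_threshold: int = 50000, entry_threshold: int = 20,
                    time_threshold: int = 7) -> Tuple[bool, str]:
    if size_bytes >= size_threshold:
        return True, f"File size {size_bytes} bytes exceeds {size_threshold}"
    if entry_count >= entry_threshold:
        return True, f"{entry_count} entries exceed threshold of {entry_threshold}"
    if days_since >= time_threshold:
        return True, f"{days_since:.0f} days since last optimization"
    return False, "No optimization criteria met"
```

The `(bool, reason)` return pair matches how the caller above unpacks `should_optimize, reason` and surfaces the reason in the "skipped" result.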
  • Helper that performs the actual AI optimization: crafts detailed prompt for memory reorganization/consolidation, calls ctx.sample, validates output preserves structure.
        async def _optimize_memory_with_ai(self, ctx: Context, content: str) -> Optional[str]:
            """Safely optimize memory content using AI sampling with comprehensive error handling."""
            try:
                response = await ctx.sample(
                    f"""Please optimize this AI memory file by:
                    
    1. **Preserve ALL information** - Do not delete any memories or important details
    2. **Remove duplicates** - Consolidate identical or very similar entries
    3. **Organize by sections** - Group related memories under clear headings:
       - ## Personal Context (name, location, role, etc.)
       - ## Professional Context (team, goals, projects, etc.) 
       - ## Technical Preferences (coding styles, tools, workflows)
       - ## Communication Preferences (style, feedback preferences)
       - ## Universal Laws (strict rules that must always be followed)
       - ## Policies (guidelines and standards)
       - ## Suggestions/Hints (recommendations and tips)
       - ## Memories/Facts (chronological events and information)
    4. **Maintain timestamps** - Keep all original timestamps for traceability
    5. **Improve formatting** - Use consistent markdown formatting
    6. **Preserve frontmatter structure** - Keep the YAML header intact
    
    Return ONLY the optimized content (including frontmatter), nothing else:
    
    {content}""",
                    temperature=0.1,  # Very low for consistency
                    max_tokens=4000,
                    model_preferences=["gpt-4", "claude-3-sonnet"],  # Prefer more reliable models
                )
    
                if response and hasattr(response, "text"):
                    text_attr = getattr(response, "text", None)
                    optimized_content = str(text_attr).strip() if text_attr else None
    
                    # Basic validation - ensure we still have a memories section
                    if optimized_content and ("## Memories" in optimized_content or "# Personal" in optimized_content):
                        return optimized_content
                    else:
                        logger.warning("AI optimization removed essential sections, reverting to original")
                        return None
                else:
                    logger.warning(f"AI optimization returned unexpected type or no text: {type(response)}")
                    return None
    
            except Exception as e:
                logger.info(f"AI optimization failed: {e}")
                return None
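The acceptance check at the end of `_optimize_memory_with_ai` can be restated as a standalone helper (name hypothetical): the AI's output is kept only if an expected section marker survived the rewrite, otherwise the caller falls back to metadata-only handling.

```python
from typing import Optional

# Standalone restatement of the validation above: accept the
# optimized text only if it still contains a recognized section
# marker; return None to signal fallback to the original content.
def accept_optimized(text: Optional[str]) -> Optional[str]:
    if text and ("## Memories" in text or "# Personal" in text):
        return text.strip()
    return None
```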
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations indicate this is a mutable (readOnlyHint: false) and non-idempotent (idempotentHint: false) operation, which the description aligns with by implying change ('reorganize and consolidate'). The description adds value by specifying the AI-driven method and preservation of information, which aren't covered by annotations. However, it lacks details on side effects (e.g., performance impact), authentication needs, or rate limits, leaving behavioral gaps despite the annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, well-structured sentence that efficiently conveys the core action, method, and constraint ('preserving all information'). It is front-loaded with the main purpose and avoids unnecessary detail; it could be marginally tighter, but every clause earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (AI-driven optimization with two parameters), annotations cover mutability and idempotency, and an output schema exists (so return values needn't be explained). The description adequately states what the tool does but lacks usage guidelines, parameter details for 'memory_file', and behavioral context like error handling. This makes it minimally viable but incomplete for effective agent use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 50%, with only the 'force' parameter documented. The description doesn't add any parameter-specific information beyond what the schema providesβ€”it doesn't explain 'memory_file' (e.g., what it represents or default behavior) or clarify 'force' further. With moderate schema coverage, the baseline score of 3 reflects that the description doesn't compensate for the undocumented 'memory_file' parameter.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with a specific verb ('optimize'), resource ('memory file'), and method ('using AI to reorganize and consolidate entries while preserving all information'). It distinguishes itself from siblings like 'memory_stats' or 'configure_memory_optimization' by focusing on execution rather than configuration or monitoring. However, it doesn't explicitly differentiate from potential alternatives like manual editing tools, which prevents a perfect score.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites (e.g., when optimization is needed), exclusions (e.g., when not to optimize), or compare it to sibling tools like 'configure_memory_optimization' for setup or 'memory_stats' for assessment. This lack of context leaves the agent guessing about appropriate usage scenarios.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/NiclasOlofsson/mode-manager-mcp'
