update_memory

Enhance and maintain the accuracy of stored knowledge by updating content, categorization, and metadata in a centralized memory repository. Ensures reliable, up-to-date information for informed decisions.

Instructions

Evolve and refine your stored knowledge with flexible updates to content, categorization, and metadata. Keep your memory repository current and accurate as understanding deepens, ensuring your knowledge base remains a reliable source of up-to-date insights and decisions.

Input Schema

| Name | Required | Description | Default |
| ---- | -------- | ----------- | ------- |
| category | No | New category for organizing the memory | |
| content | No | New detailed content for the memory (no character limit) | |
| id | Yes | The unique identifier of the memory to update | |
| metadata | No | New metadata as key-value pairs (replaces existing metadata) | |
| title | No | New title for the memory (max 50 characters for better file organization) | |
| workingDirectory | Yes | The full absolute path to the working directory where data is stored. MUST be an absolute path, never relative. Windows: "C:\Users\username\project" or "D:\projects\my-app". Unix/Linux/macOS: "/home/username/project" or "/Users/username/project". Do NOT use: ".", "..", "~", "./folder", "../folder" or any relative paths. Ensure the path exists and is accessible before calling this tool. NOTE: When server is started with --claude flag, this parameter is ignored and a global user directory is used instead. | |
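
A minimal example of the tool's arguments might look like the following. The memory ID and working directory here are placeholders, not values from this server:

```json
{
  "workingDirectory": "/Users/username/project",
  "id": "mem-1234",
  "title": "Database connection timeout is 30s",
  "category": "infrastructure"
}
```

Only `id` and `workingDirectory` are required; any subset of the optional fields may be supplied, but at least one of them must be present for the update to succeed.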

Implementation Reference

  • The core handler function that implements the 'update_memory' tool logic: input validation, existing memory retrieval, update preparation, storage update call, change detection, and formatted success/error responses.
        handler: async ({
          id,
          title,
          content,
          metadata,
          category
        }: {
          id: string;
          title?: string;
          content?: string;
          metadata?: Record<string, any>;
          category?: string;
        }) => {
          try {
            // Validate inputs
            if (!id || id.trim().length === 0) {
              return {
                content: [{
                  type: 'text' as const,
                  text: 'Error: Memory ID is required.'
                }],
                isError: true
              };
            }
    
            if (content !== undefined && content.trim().length === 0) {
              return {
                content: [{
                  type: 'text' as const,
                  text: 'Error: Content cannot be empty if provided.'
                }],
                isError: true
              };
            }
    
            if (title && title.trim().length > 50) {
              return {
                content: [{
                  type: 'text' as const,
                  text: `Error: Memory title must be 50 characters or less for better file organization. Current length: ${title.trim().length} characters.
    
    Please provide a short, descriptive title instead. For example:
    - "User prefers dark mode interface"
    - "Project uses TypeScript and React"
    - "Database connection timeout is 30s"
    
    Use the content field for detailed information.`
                }],
                isError: true
              };
            }
    
            if (category && category.trim().length > 100) {
              return {
                content: [{
                  type: 'text' as const,
                  text: 'Error: Category must be 100 characters or less.'
                }],
                isError: true
              };
            }
    
            // Check if at least one field is being updated
            if (title === undefined && content === undefined && metadata === undefined && category === undefined) {
              return {
                content: [{
                  type: 'text' as const,
                  text: 'Error: At least one field (title, content, metadata, or category) must be provided for update.'
                }],
                isError: true
              };
            }
    
            // Get the existing memory first
            const existingMemory = await storage.getMemory(id.trim());
            if (!existingMemory) {
              return {
                content: [{
                  type: 'text' as const,
                  text: `❌ Memory not found.
    
    **Memory ID:** ${id}
    
    The memory with this ID does not exist or may have been deleted.`
                }],
                isError: true
              };
            }
    
            // Prepare updates
            const updates: any = {};
            if (title !== undefined) {
              updates.title = title.trim();
            }
            if (content !== undefined) {
              updates.content = content.trim();
            }
            if (metadata !== undefined) {
              updates.metadata = metadata;
            }
            if (category !== undefined) {
              updates.category = category.trim();
            }
    
            const updatedMemory = await storage.updateMemory(id.trim(), updates);
    
            if (!updatedMemory) {
              return {
                content: [{
                  type: 'text' as const,
                  text: `❌ Failed to update memory.
    
    **Memory ID:** ${id}
    
    The memory could not be updated. Please try again.`
                }],
                isError: true
              };
            }
    
            // Show what changed
            const changes: string[] = [];
            if (title !== undefined && title.trim() !== existingMemory.title) {
              changes.push('Title');
            }
            if (content !== undefined && content.trim() !== existingMemory.content) {
              changes.push('Content');
            }
            if (metadata !== undefined && JSON.stringify(metadata) !== JSON.stringify(existingMemory.metadata)) {
              changes.push('Metadata');
            }
            if (category !== undefined && category.trim() !== existingMemory.category) {
              changes.push('Category');
            }
    
            return {
              content: [{
                type: 'text' as const,
                text: `✅ Memory updated successfully!
    
    **Memory ID:** ${updatedMemory.id}
    **Updated Fields:** ${changes.join(', ')}
    **Title:** ${updatedMemory.title}
    **Content:** ${updatedMemory.content.substring(0, 200)}${updatedMemory.content.length > 200 ? '...' : ''}
    **Category:** ${updatedMemory.category || 'Not specified'}
    **Created:** ${new Date(updatedMemory.createdAt).toLocaleString()}
    **Updated:** ${new Date(updatedMemory.updatedAt).toLocaleString()}
    **Metadata:** ${Object.keys(updatedMemory.metadata).length > 0 ? JSON.stringify(updatedMemory.metadata, null, 2) : 'None'}`
              }]
            };
          } catch (error) {
            return {
              content: [{
                type: 'text' as const,
                text: `Error updating memory: ${error instanceof Error ? error.message : 'Unknown error'}`
              }],
              isError: true
            };
          }
        }
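
The validation rules above can be distilled into a small pure function. This is a sketch for illustration, not the shipped code; the `validateUpdateInput` name and `UpdateInput` type are hypothetical:

```typescript
// Hypothetical distillation of the handler's input checks, in the order it applies them.
type UpdateInput = {
  id: string;
  title?: string;
  content?: string;
  metadata?: Record<string, unknown>;
  category?: string;
};

// Returns an error message, or null when the input would pass validation.
function validateUpdateInput(input: UpdateInput): string | null {
  // 1. A non-blank memory ID is mandatory.
  if (!input.id || input.id.trim().length === 0) {
    return 'Memory ID is required.';
  }
  // 2. Content may be omitted, but if present it cannot be blank.
  if (input.content !== undefined && input.content.trim().length === 0) {
    return 'Content cannot be empty if provided.';
  }
  // 3. Titles are capped at 50 characters (after trimming).
  if (input.title && input.title.trim().length > 50) {
    return 'Memory title must be 50 characters or less.';
  }
  // 4. Categories are capped at 100 characters (after trimming).
  if (input.category && input.category.trim().length > 100) {
    return 'Category must be 100 characters or less.';
  }
  // 5. At least one updatable field must be supplied.
  if (
    input.title === undefined &&
    input.content === undefined &&
    input.metadata === undefined &&
    input.category === undefined
  ) {
    return 'At least one field (title, content, metadata, or category) must be provided.';
  }
  return null;
}
```

Note that the order matters: an empty `id` is reported before any per-field problem, and the "no fields provided" error only fires when every optional field is absent (an explicit `metadata: {}` counts as an update).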
  • Zod-based input schema defining parameters for updating a memory: id (required), title/content/metadata/category (optional).
    inputSchema: {
      id: z.string(),
      title: z.string().optional(),
      content: z.string().optional(),
      metadata: z.record(z.any()).optional(),
      category: z.string().optional()
    },
  • src/server.ts:842-875 (registration)
    MCP server registration of 'update_memory' tool via McpServer.tool(). Wraps the createUpdateMemoryTool handler, adds workingDirectory param and storage creation.
    server.tool(
      'update_memory',
      'Evolve and refine your stored knowledge with flexible updates to content, categorization, and metadata. Keep your memory repository current and accurate as understanding deepens, ensuring your knowledge base remains a reliable source of up-to-date insights and decisions.',
      {
        workingDirectory: z.string().describe(getWorkingDirectoryDescription(config)),
        id: z.string().describe('The unique identifier of the memory to update'),
        title: z.string().optional().describe('New title for the memory (max 50 characters for better file organization)'),
        content: z.string().optional().describe('New detailed content for the memory (no character limit)'),
        metadata: z.record(z.any()).optional().describe('New metadata as key-value pairs (replaces existing metadata)'),
        category: z.string().optional().describe('New category for organizing the memory')
      },
      async ({ workingDirectory, id, title, content, metadata, category }: {
        workingDirectory: string;
        id: string;
        title?: string;
        content?: string;
        metadata?: Record<string, any>;
        category?: string;
      }) => {
        try {
          const storage = await createMemoryStorage(workingDirectory, config);
          const tool = createUpdateMemoryTool(storage);
          return await tool.handler({ id, title, content, metadata, category });
        } catch (error) {
          return {
            content: [{
              type: 'text' as const,
              text: `Error: ${error instanceof Error ? error.message : 'Unknown error'}`
            }],
            isError: true
          };
        }
      }
    );
  • File storage backend implementation of updateMemory: finds JSON file by ID, parses, merges partial updates, handles category moves, updates timestamp, persists to file.
    async updateMemory(id: string, updates: Partial<Memory>): Promise<Memory | null> {
      const filePath = await this.findMemoryFileById(id);
      if (!filePath) {
        return null;
      }
    
      try {
        const content = await fs.readFile(filePath, 'utf-8');
        const jsonMemory = JSON.parse(content);
    
        // Convert to Memory interface for merging
        const existingMemory: Memory = {
          id: jsonMemory.id,
          title: jsonMemory.title,
          content: jsonMemory.details,
          metadata: {},
          createdAt: jsonMemory.dateCreated,
          updatedAt: jsonMemory.dateUpdated,
          category: jsonMemory.category === 'general' ? undefined : jsonMemory.category
        };
    
        // Merge updates
        const updatedMemory: Memory = {
          ...existingMemory,
          ...updates,
          id: existingMemory.id, // Ensure ID doesn't change
          updatedAt: new Date().toISOString(),
        };
    
        // If category changed, we need to move the file
        if (updates.category !== undefined && updates.category !== existingMemory.category) {
          // Delete old file
          await fs.unlink(filePath);
    
          // Create new file in new category
          await this.createMemory(updatedMemory);
        } else {
          // Update existing file
          const updatedJsonMemory = {
            id: updatedMemory.id,
            title: updatedMemory.title,
            details: updatedMemory.content,
            category: updatedMemory.category || 'general',
            dateCreated: updatedMemory.createdAt,
            dateUpdated: updatedMemory.updatedAt
          };
    
          await fs.writeFile(filePath, JSON.stringify(updatedJsonMemory, null, 2), 'utf-8');
        }
    
        return updatedMemory;
      } catch (error) {
        return null;
      }
    }
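
As the conversion code above shows, the on-disk JSON shape differs from the in-memory `Memory` interface: `content` is stored as `details`, a missing category is stored as the literal string `'general'`, and this file backend does not persist `metadata` at all (it is read back as `{}`). A sketch of that two-way mapping, with hypothetical function names:

```typescript
// Hypothetical round-trip between the Memory interface and the on-disk JSON shape
// implied by updateMemory. Note that metadata has no on-disk counterpart here.
interface Memory {
  id: string;
  title: string;
  content: string;
  category?: string;
  createdAt: string;
  updatedAt: string;
}

interface JsonMemory {
  id: string;
  title: string;
  details: string;      // Memory.content is persisted under a different key
  category: string;     // undefined becomes the sentinel 'general'
  dateCreated: string;
  dateUpdated: string;
}

function toJsonMemory(m: Memory): JsonMemory {
  return {
    id: m.id,
    title: m.title,
    details: m.content,
    category: m.category ?? 'general',
    dateCreated: m.createdAt,
    dateUpdated: m.updatedAt,
  };
}

function fromJsonMemory(j: JsonMemory): Memory {
  return {
    id: j.id,
    title: j.title,
    content: j.details,
    category: j.category === 'general' ? undefined : j.category,
    createdAt: j.dateCreated,
    updatedAt: j.dateUpdated,
  };
}
```

A practical consequence: the handler's success message reports `metadata` from the merged in-memory object, so a metadata change can appear in the response even though this particular storage backend never writes it to disk.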
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It vaguely mentions 'flexible updates' and ensuring the knowledge base remains 'reliable,' but fails to disclose critical traits: whether this is a mutation operation (implied by 'update'), what permissions are required, if changes are reversible, or how errors are handled. For a tool with 6 parameters and no annotations, this is a significant gap in transparency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two sentences long but uses flowery language like 'evolve and refine' and 'reliable source of up-to-date insights and decisions,' which adds verbosity without enhancing clarity. It is front-loaded with the core purpose, but eliminating the redundant phrasing would make it more direct and cheaper to parse.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity (6 parameters, no output schema, no annotations), the description is incomplete. It lacks details on behavioral aspects like mutation effects, error handling, or return values, and doesn't provide usage guidelines. While the schema covers parameters, the description fails to address broader context needed for a tool that modifies data, leaving gaps in understanding for the agent.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema description coverage is 100%, meaning all parameters are documented in the input schema itself. The description adds minimal value beyond the schema, as it only broadly references 'content, categorization, and metadata' without detailing specific parameters like 'id' or 'workingDirectory.' Since the schema does the heavy lifting, the baseline score of 3 is appropriate, but the description doesn't compensate with additional semantic context.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description states the tool 'evolve[s] and refine[s] your stored knowledge with flexible updates to content, categorization, and metadata,' which indicates it updates memory entries. However, it uses vague terms like 'evolve and refine' rather than a specific verb like 'modify' or 'edit,' and while it mentions 'categorization' and 'metadata,' it doesn't clearly distinguish this from sibling tools like 'update_project' or 'update_task' beyond the resource type 'memory.'

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no explicit guidance on when to use this tool versus alternatives. It mentions keeping the 'memory repository current' but doesn't specify prerequisites, such as needing an existing memory ID, or differentiate it from other memory tools like 'create_memory' or 'delete_memory.' This lack of context leaves the agent without clear usage instructions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
