
claude-session-continuity-mcp

memory_store

Store typed knowledge (observation, decision, learning, error, pattern) with tags, importance, and optional relation. Embeddings enable semantic retrieval.

Instructions

Store a piece of knowledge in the memory system. Memories are typed (observation, decision, learning, error, pattern), tagged, and automatically embedded for semantic retrieval. Side effects: inserts into the memories table and asynchronously generates a vector embedding. If relatedTo is provided, also creates a knowledge graph edge. Returns the new memory ID. Use memory_search to verify no duplicate exists before storing.

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| content | Yes | The knowledge content to store | — |
| type | Yes | Memory type: observation (discovery/finding), decision (architecture/tech choice), learning (new knowledge), error (error encountered), pattern (code convention), preference (user preference) | — |
| project | No | Associated project name (omit for cross-project knowledge) | — |
| tags | No | Tags for filtering (e.g. ["auth", "performance"]) | — |
| importance | No | Importance score 1-10, where 10 is critical | 5 |
| relatedTo | No | ID of an existing memory to link via the knowledge graph | — |
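
For instance, an agent recording an architecture decision might pass arguments like the following. This is a hypothetical sketch; the field names follow the schema above, but the values are illustrative:

```typescript
// Hypothetical arguments for a memory_store call; values are illustrative.
const args = {
  content: "Adopted SQLite over JSON files for session state, for atomic writes",
  type: "decision",
  project: "claude-session-continuity-mcp",
  tags: ["storage", "architecture"],
  importance: 7,
};
console.log(JSON.stringify(args, null, 2));
```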

Implementation Reference

  • The main handler function for the 'memory_store' tool. It validates input via MemoryStoreSchema, performs memory consolidation (Jaccard similarity check to merge with existing similar memories), inserts new memory or updates merged one, and asynchronously generates embeddings for semantic search.
    export async function handleMemoryStore(args: unknown): Promise<CallToolResult> {
      return logger.withTool('memory_store', async () => {
        // Validate input
        const parsed = MemoryStoreSchema.safeParse(args);
        if (!parsed.success) {
          return {
            content: [{ type: 'text' as const, text: `Validation error: ${parsed.error.message}` }],
            isError: true
          };
        }
    
        const { content, type, tags, project, importance, metadata } = parsed.data;
    
        // Memory consolidation: try to merge with an existing similar memory
        let memoryId: number;
        let consolidated = false;
    
        try {
          const existing = db.prepare(`
            SELECT id, content, importance, access_count FROM memories
            WHERE project = ? AND memory_type = ?
            ORDER BY importance DESC, created_at DESC LIMIT 20
          `).all(project || null, type) as Array<{
            id: number; content: string; importance: number; access_count: number;
          }>;
    
          let mergeTarget: typeof existing[0] | null = null;
          for (const row of existing) {
            if (jaccardSimilarity(content, row.content) >= 0.6) {
              mergeTarget = row;
              break;
            }
          }
    
          if (mergeTarget) {
            // Keep the longer content; bump importance by 1 and access_count by 1
            const betterContent = content.length >= mergeTarget.content.length ? content : mergeTarget.content;
            const newImportance = Math.min(10, mergeTarget.importance + 1);
            db.prepare(`
              UPDATE memories SET content = ?, importance = ?, access_count = access_count + 1,
                accessed_at = CURRENT_TIMESTAMP
              WHERE id = ?
            `).run(betterContent, newImportance, mergeTarget.id);
            memoryId = mergeTarget.id;
            consolidated = true;
          } else {
            const result = db.prepare(`
              INSERT INTO memories (content, memory_type, tags, project, importance, metadata)
              VALUES (?, ?, ?, ?, ?, ?)
            `).run(content, type, tags ? JSON.stringify(tags) : null, project || null, importance, metadata ? JSON.stringify(metadata) : null);
            memoryId = result.lastInsertRowid as number;
          }
        } catch {
          // Fall back to a plain insert if consolidation fails
          const result = db.prepare(`
            INSERT INTO memories (content, memory_type, tags, project, importance, metadata)
            VALUES (?, ?, ?, ?, ?, ?)
          `).run(content, type, tags ? JSON.stringify(tags) : null, project || null, importance, metadata ? JSON.stringify(metadata) : null);
          memoryId = result.lastInsertRowid as number;
        }
    
        // Generate embedding asynchronously (new memories only)
        if (!consolidated) {
          generateEmbedding(content).then(embedding => {
            if (embedding) {
              try {
                db.prepare(`INSERT OR REPLACE INTO embeddings (memory_id, embedding) VALUES (?, ?)`).run(memoryId, embeddingToBuffer(embedding));
              } catch { /* ignore */ }
            }
          }).catch(() => { /* ignore */ });
        }
    
        return {
          content: [{
            type: 'text' as const,
            text: JSON.stringify({
              success: true,
              id: memoryId,
              type,
              importance,
              consolidated,
              embeddingQueued: !consolidated
            })
          }]
        };
      }, args as Record<string, unknown>);
    }
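  • The handler above calls a jaccardSimilarity helper that is not shown. A minimal self-contained sketch, assuming whitespace tokenization over lowercased words (the real helper may tokenize differently):

    ```typescript
    // Hypothetical sketch of the jaccardSimilarity helper used in consolidation:
    // |A ∩ B| / |A ∪ B| over word sets. The actual tokenization may differ.
    function jaccardSimilarity(a: string, b: string): number {
      const setA = new Set(a.toLowerCase().split(/\s+/).filter(Boolean));
      const setB = new Set(b.toLowerCase().split(/\s+/).filter(Boolean));
      if (setA.size === 0 && setB.size === 0) return 1;
      let intersection = 0;
      for (const token of setA) if (setB.has(token)) intersection++;
      const union = setA.size + setB.size - intersection;
      return union === 0 ? 0 : intersection / union;
    }

    // Two near-duplicate memories clear the 0.6 merge threshold:
    console.log(jaccardSimilarity("use sqlite for storage", "use sqlite for session storage")); // → 0.8
    ```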
  • Zod validation schema for memory_store inputs: content (1-5000 chars), type (enum of 6 memory types), project, tags (up to 10 strings), importance (1-10, default 5), and optional metadata.
    export const MemoryStoreSchema = z.object({
      content: z.string().min(1).max(5000).describe('Content to store'),
      type: MemoryTypeSchema,
      project: ProjectNameSchema.optional(),
      tags: z.array(z.string().max(50)).max(10).optional().describe('List of tags'),
      importance: z.number().min(1).max(10).default(5).describe('Importance 1-10'),
      metadata: z.record(z.unknown()).optional().describe('Additional metadata')
    }).describe('Store a memory');
  • Tool definition registration for memory_store: declares the tool name 'memory_store', description, and inputSchema (JSON Schema) for MCP protocol registration.
    export const memoryTools: Tool[] = [
      {
        name: 'memory_store',
        description: `Store a memory. Remembers learnings/decisions/errors/patterns.
    - content: content to store (required)
    - type: observation|decision|learning|error|pattern|preference
    - tags: array of tags for search
    - project: related project
    - importance: importance 1-10 (default 5)
    Automatic embedding generation enables semantic search.`,
        inputSchema: {
          type: 'object',
          properties: {
            content: { type: 'string', description: 'Content to store' },
            type: {
              type: 'string',
              enum: ['observation', 'decision', 'learning', 'error', 'pattern', 'preference'],
              description: 'Memory type'
            },
            tags: { type: 'array', items: { type: 'string' }, description: 'List of tags' },
            project: { type: 'string', description: 'Related project' },
            importance: { type: 'number', description: 'Importance 1-10' },
            metadata: { type: 'object', description: 'Additional metadata' }
          },
          required: ['content', 'type']
        }
      },
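  • Once registered, a client invokes the tool through the standard MCP tools/call request. The envelope below is illustrative (the id and argument values are made up):

    ```typescript
    // Illustrative JSON-RPC 2.0 envelope for an MCP tools/call invocation of memory_store.
    const request = {
      jsonrpc: "2.0",
      id: 1,
      method: "tools/call",
      params: {
        name: "memory_store",
        arguments: {
          content: "pnpm install fails behind the proxy unless HTTPS_PROXY is set",
          type: "error",
          tags: ["proxy", "pnpm"],
        },
      },
    };
    console.log(JSON.stringify(request));
    ```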
  • Router registration: the handleToolV2 function dispatches 'memory_store' to handleMemoryStore. The tool is also exported as part of allToolsV2 array and exported in the handlers export block.
    case 'memory_store':
      return handleMemoryStore(args);
    case 'memory_search':
      return handleMemorySearch(args);
    case 'memory_delete':
      return handleMemoryDelete(args);
    case 'memory_stats':
      return handleMemoryStats();
  • Helper function calculateDecayedScore used to decay memory importance over time based on memory type decay rates, age, and access count.
    export function calculateDecayedScore(
      importance: number,
      memoryType: string,
      createdAt: string,
      accessCount: number
    ): number {
      const ageDays = (Date.now() - new Date(createdAt).getTime()) / (1000 * 60 * 60 * 24);
      const decayRate = DECAY_RATES[memoryType] ?? 0.005;
      return importance * Math.exp(-decayRate * ageDays) * Math.log2(accessCount + 2);
    }
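As a standalone illustration of that formula, assuming the fallback decay rate of 0.005 (the per-type DECAY_RATES table is not shown in the snippet):

```typescript
// Self-contained sketch of the decay formula above; 0.005 is the fallback rate.
const FALLBACK_DECAY_RATE = 0.005;

function decayedScore(importance: number, ageDays: number, accessCount: number): number {
  // Exponential decay by age, boosted logarithmically by access frequency.
  return importance * Math.exp(-FALLBACK_DECAY_RATE * ageDays) * Math.log2(accessCount + 2);
}

// A 30-day-old memory with importance 8 that was accessed 6 times:
console.log(decayedScore(8, 30, 6).toFixed(2)); // → "20.66"
```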
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Even without structured annotations, the description fully discloses side effects: the insert into the memories table, the asynchronous embedding generation, the optional graph edge, and the return value. It could mention the timing of the async operation, but the current level of detail is strong.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, front-loaded with purpose, followed by side effects and usage guidance. Every sentence is valuable and succinct.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 6 parameters and no output schema, the description covers all essential aspects: side effects, return value, duplicate checking advice. Complete for an AI agent to use correctly.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with descriptions, so baseline is 3. Description adds context by explaining that memories are typed/tagged and that relatedTo creates a knowledge graph edge, going beyond schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states it stores knowledge in the memory system, lists memory types, and differentiates from sibling memory_search by advising to verify duplicates. It specifies verb and resource with distinct purpose.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly advises using memory_search to verify no duplicate before storing, providing when-not-to-use guidance and an alternative tool.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
