Glama

Check for near-duplicate memories

memory_dedup_check
Read-only · Idempotent

Check if storing new memory content would duplicate existing memories. Returns matching memories with cosine similarity above the threshold, highest first. Use before memory_store to prevent redundancy.

Instructions

Cheap pre-flight (~50 tok/match): "would storing this content duplicate something I already have?" Read-only. Use before memory_store when overlap is likely, or before bulk imports. Returns up to limit existing memories with cosine similarity above the threshold (highest first). Computing the candidate embedding may make an outbound call to the configured provider (OpenAI, Ollama, etc.). If embeddings are disabled, returns a clear no-op message rather than silently passing.

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| content | Yes | Candidate memory body to check, exactly as you would pass to `memory_store`. | — |
| title | No | Optional candidate title; concatenated with `content` to match how `memory_store` builds its embedding. | — |
| project_path | No | Optional absolute project path to scope the search to a single project's memories. | — |
| threshold | No | Cosine similarity threshold in [0, 1]. | `search.embeddings.dedupThreshold` from config (typically ~0.85) |
| limit | No | Maximum number of matches to return (1-20). | 5 |
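As a quick illustration, a hypothetical call might pass parameters like these (all values below are made up for the example; only the field names come from the schema above):

```typescript
// Hypothetical candidate check: every value here is illustrative only.
const params = {
  content: "The staging database runs Postgres 15 on port 5433.",
  title: "Staging DB details",            // optional; prepended to content for embedding
  project_path: "/home/me/projects/acme", // optional; scopes the scan to one project
  threshold: 0.9,                         // stricter than the ~0.85 config default
  limit: 3,                               // return at most 3 matches
};
```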

Output Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| message | Yes | Markdown list of near-duplicate matches with similarity scores, or `No near-duplicates found above threshold.` Returns a no-op message when embeddings are disabled in config. | — |

Implementation Reference

  • Main handler function for memory_dedup_check. Checks if candidate content duplicates existing memories using embedding similarity. Concatenates title+content, calls findDuplicate, formats results with similarity scores and token cost markers.
    export async function handleMemoryDedupCheck(
      db: Database.Database,
      memRepo: MemoriesRepo,
      embRepo: EmbeddingsRepo,
      provider: EmbeddingProvider | null,
      config: Config,
      params: DedupCheckParams,
    ): Promise<string> {
      // If embeddings disabled or no provider, return no-op message.
      if (!config.search.embeddings.enabled || !provider) {
        return "Dedup unavailable: embeddings disabled (set search.embeddings.enabled = true and provide API key).";
      }
    
      // Compute threshold and limit.
      const threshold = params.threshold ?? config.search.embeddings.dedupThreshold;
      const limit = Math.min(params.limit ?? 5, 20);
    
      // Resolve project ID if project_path provided.
      let projectId: string | null = null;
      if (params.project_path) {
        const row = db.prepare("SELECT id FROM projects WHERE root_path = ?").get(params.project_path) as
          | { id: string }
          | undefined;
        projectId = row?.id ?? null;
      }
    
      // Concatenate title and content.
      const text = params.title ? `${params.title}\n\n${params.content}` : params.content;
    
      // Call findDuplicate.
      const { duplicate, candidates } = await findDuplicate(
        db,
        embRepo,
        provider,
        text,
        projectId,
        threshold,
      );
    
      // Combine and slice results.
      const hits = [duplicate, ...candidates].filter((h): h is DedupHit => h !== null).slice(0, limit);
    
      if (hits.length === 0) {
        return `No near-duplicates found above threshold ${threshold.toFixed(2)}.`;
      }
    
      // Format output with token cost markers.
      const lines: string[] = [];
      lines.push(`Top ${hits.length} match(es) above threshold ${threshold.toFixed(2)}:`);
    
      for (let i = 0; i < hits.length; i++) {
        const hit = hits[i];
        const mem = memRepo.getById(hit.memoryId);
        if (!mem) continue;
    
        const titleDisplay = mem.title ? `"${mem.title}"` : "(untitled)";
        const memType = mem.memory_type ?? "fact";
        const body = `${i + 1}. sim=${hit.similarity.toFixed(3)}  ${hit.memoryId}  ${titleDisplay}  (${memType})`;
        const tokenCost = estimateTokensV2(body);
        lines.push(`  [${tokenCost}t] ${body}`);
      }
    
      return lines.join("\n");
    }
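The handler above relies on an `estimateTokensV2` helper that is not shown on this page. A minimal sketch, assuming the common ~4-characters-per-token heuristic (the server's real implementation may differ):

```typescript
// Assumed helper: rough token estimate at ~4 chars per token, floored at 1.
// The actual estimateTokensV2 used by the server is not reproduced in this doc.
function estimateTokensV2(text: string): number {
  return Math.max(1, Math.ceil(text.length / 4));
}
```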
  • Input schema/params interface for the dedup check tool.
    export interface DedupCheckParams {
      content: string;
      title?: string;
      project_path?: string;
      threshold?: number;     // overrides config.search.embeddings.dedupThreshold
      limit?: number;         // default 5, max 20
    }
  • src/index.ts:332-360 (registration)
    Registration of the memory_dedup_check tool on the MCP server, including Zod input schema, annotations, and the async handler binding.
    server.registerTool(
      "memory_dedup_check",
      {
        title: "Check for near-duplicate memories",
        description: [
          "Cheap pre-flight (~50 tok/match): \"would storing this content duplicate something I already have?\" Read-only.",
          "Use before `memory_store` when overlap is likely, or before bulk imports. Returns up to `limit` existing memories with cosine similarity above the threshold (highest first).",
          "Computing the candidate embedding may make an outbound call to the configured provider (OpenAI, Ollama, etc.). If embeddings are disabled, returns a clear no-op message rather than silently passing.",
        ].join(" "),
        inputSchema: {
          content: z.string().min(1).describe("Candidate memory body to check, exactly as you would pass to `memory_store`."),
          title: z.string().optional().describe("Optional candidate title; concatenated with `content` to match how `memory_store` builds its embedding."),
          project_path: z.string().optional().describe("Optional absolute project path to scope the search to a single project's memories."),
          threshold: z.number().min(0).max(1).optional().describe("Cosine similarity threshold in [0, 1]. Defaults to `search.embeddings.dedupThreshold` from config (typically ~0.85)."),
          limit: z.number().int().min(1).max(20).default(5).describe("Maximum number of matches to return (1-20). Default 5."),
        },
        annotations: {
          title: "Check for near-duplicate memories",
          readOnlyHint: true,
          destructiveHint: false,
          idempotentHint: true,
          openWorldHint: true,
        },
        outputSchema: {
          message: z.string().describe("Markdown list of near-duplicate matches with similarity scores, or `No near-duplicates found above threshold.` Returns a no-op message when embeddings are disabled in config."),
        },
      },
      async (params) => textResult(await handleMemoryDedupCheck(db, memRepo, embRepo, dedupProvider, config, params))
    );
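The registration wraps the handler's string in a `textResult` helper that is not defined on this page. Assuming the standard MCP tool-result content shape, it is likely close to:

```typescript
// Assumed helper: wraps plain text in the MCP tool-result content shape.
// Sketch only; the server's own textResult may add fields (e.g. isError).
function textResult(text: string) {
  return { content: [{ type: "text" as const, text }] };
}
```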
  • Core dedup engine: fetches existing embeddings, applies privacy redaction/scrub, calls provider.embed(), computes cosine similarity, and returns best match + up to 5 candidates above threshold.
    export async function findDuplicate(
      db: Database.Database,
      embRepo: EmbeddingsRepo,
      provider: EmbeddingProvider,
      text: string,
      projectId: string | null,
      threshold: number,
      maxScan?: number,
      excludeId?: string,
    ): Promise<DedupResult> {
      // Fetch existing vectors first so we can check the count BEFORE making a
      // network call to the embedding provider (cost-correctness: don't embed
      // when we know we'll skip the scan anyway).
      const all = embRepo.getByProject(projectId, provider.model);
    
      // Cap scan to avoid unbounded heap usage on very large projects.
      // Check BEFORE calling provider.embed() to avoid a wasted API round-trip.
      if (maxScan !== undefined && all.length > maxScan) {
        logger.warn(
          `dedup skipped: project has ${all.length} vectors which exceeds dedup_max_scan=${maxScan}. Write proceeds.`
        );
        return { duplicate: null, candidates: [], skipped: true };
      }
    
      // Apply privacy redaction and secret scrubbing BEFORE embedding.
      const safeText = scrubSecrets(redactPrivate(text));
    
      let qvec: Float32Array;
      try {
        [qvec] = await provider.embed([safeText]);
      } catch (err) {
        // Never block writes on provider failure.
        logger.warn(`dedup embed failed (write proceeds): ${err instanceof Error ? err.message : String(err)}`);
        return { duplicate: null, candidates: [] };
      }
    
      // Compute cosine similarities for all candidates.
      const scored: Array<{ memoryId: string; similarity: number }> = all
        .filter(({ memoryId }) => !excludeId || memoryId !== excludeId)
        .map(({ memoryId, vector }) => ({
          memoryId,
          similarity: cosineSimilarity(qvec, vector),
        }))
        .filter((h) => h.similarity >= threshold)
        .sort((a, b) => b.similarity - a.similarity);
    
      if (scored.length === 0) {
        return { duplicate: null, candidates: [] };
      }
    
      // Fetch titles for the top matches.
      function getTitle(memoryId: string): string {
        const row = db.prepare("SELECT title FROM memories WHERE id = ?").get(memoryId) as
          | { title: string }
          | undefined;
        return row?.title ?? memoryId;
      }
    
      const topHit = scored[0];
      const duplicate: DedupHit = {
        memoryId: topHit.memoryId,
        title: getTitle(topHit.memoryId),
        similarity: topHit.similarity,
      };
    
      const candidates: DedupHit[] = scored.slice(1, 6).map((h) => ({
        memoryId: h.memoryId,
        title: getTitle(h.memoryId),
        similarity: h.similarity,
      }));
    
      return { duplicate, candidates };
    }
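`findDuplicate` calls a `cosineSimilarity` helper that is not reproduced here. A self-contained sketch of the usual definition (dot product over the product of norms); the zero-vector guard is an assumption, and the server's helper may handle that case differently:

```typescript
// Cosine similarity between two equal-length vectors in [-1, 1].
// Returns 0 when either vector has zero magnitude (assumption, not confirmed).
function cosineSimilarity(a: Float32Array, b: Float32Array): number {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  const denom = Math.sqrt(normA) * Math.sqrt(normB);
  return denom === 0 ? 0 : dot / denom;
}
```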
  • Type definitions for dedup hits and results used by the findDuplicate helper.
    export interface DedupHit {
      memoryId: string;
      title: string;
      similarity: number;
    }
    
    export interface DedupResult {
      /** The best match above threshold, or null if none. */
      duplicate: DedupHit | null;
      /** Additional matches above threshold (top 5, excluding duplicate). */
      candidates: DedupHit[];
      /** True when the scan was skipped due to dedupMaxScan. Write proceeds. */
      skipped?: boolean;
    }

Behavior 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Discloses key traits beyond annotations: cheap (~50 tok/match), read-only, returns sorted matches, outbound call for embedding, and clear no-op when embeddings disabled. Annotations already indicate read-only and idempotent; description adds valuable context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Description is concise (three sentences), front-loaded with the key purpose and cost estimate, and every sentence adds value. No redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the existence of an output schema and comprehensive annotations, the description covers all necessary aspects: purpose, when to use, behavior details, and edge cases (embeddings disabled). No gaps.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 100% coverage with detailed descriptions for each parameter. The tool description does not add new parameter info beyond the schema, so baseline of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool checks for near-duplicate memories before storing. It uses specific verbs ('check', 'duplicate') and distinguishes from siblings like memory_store by calling it a 'pre-flight' check.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly advises using it before memory_store when overlap is likely or before bulk imports. Does not mention when not to use it, but the context is clear.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
