Memory effectiveness analytics

memory_analytics
Read-only · Idempotent

Analyze memory injection utility, token costs, auto-capture stats, compression activity, and prune suggestions to tune importance thresholds and identify dead memories.

Instructions

Reports utility rates of injected memories, token costs per search layer, auto-capture stats, compression activity, and prune suggestions. Read-only. Use to tune importance thresholds, find dead memories worth pruning, and verify that auto-capture/compression are earning their keep. Returns a no-op message when analytics are disabled.

Input Schema

• period (optional, default: last_7d): Time window to summarise. `all` covers the full retention period configured in `analytics.retentionDays`.
• section (optional, default: all): Which sub-report to render: `injections` (utility), `captures` (auto-capture), `compression`, `memories` (prune suggestions), or `all`.
• project_path (optional, default: empty): Absolute project path to scope the report. Empty string aggregates across all projects.
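To make the parameters concrete, here is a sketch of a call payload in TypeScript. The `MemoryAnalyticsParams` type name is illustrative, not from the source; the field names, enum values, and defaults mirror the schema above.

```typescript
// Illustrative type mirroring the input schema; every field is optional.
type MemoryAnalyticsParams = {
  period?: "last_24h" | "last_7d" | "last_30d" | "all"; // default: "last_7d"
  section?: "all" | "injections" | "captures" | "compression" | "memories"; // default: "all"
  project_path?: string; // default: "" (aggregate across all projects)
};

// Ask for prune suggestions over the last 30 days, across every project.
const params: MemoryAnalyticsParams = {
  period: "last_30d",
  section: "memories",
  project_path: "",
};

console.log(params);
```

Omitting a field is equivalent to passing its default, since the registered zod schema applies `.default(...)` to each parameter.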

Output Schema

• message (required): Markdown report grouped by `section` (`injections` / `captures` / `compression` / `memories`) with totals, percentages, and prune candidates. Returns a no-op message when analytics are disabled in config.

Implementation Reference

  • Main handler function for the memory_analytics tool. Accepts params (period, section, project_path), resolves project ID, calls generateReport(), and formats the output as a text report with sections for injections, captures, compression, and memories.
    export async function handleMemoryAnalytics(
      db: Database.Database,
      params: { period?: string; section?: string; project_path?: string }
    ): Promise<string> {
      const period = params.period || "last_7d";
    
      // K4: resolve path → UUID. null means "no project filter".
      // "global" or empty string → aggregate across all projects.
      const projectId = resolveProjectId(db, params.project_path);
    
      // If the user passed a project_path that doesn't exist, tell them explicitly.
      if (params.project_path && params.project_path !== "global" && projectId === null) {
        return `No project registered for path "${params.project_path}". Memories must be stored against this path before analytics are available.`;
      }
    
      const report = generateReport(db, projectId, period);
      const section = params.section ?? "all";
    
      const lines: string[] = [
        `=== Memory Analytics (${report.period}${projectId ? " for project " + params.project_path : " (global / all projects)"}) ===`,
      ];
    
      if (section === "all" || section === "injections") {
        lines.push(
          `Sessions: ${report.session_count}`,
          `Total tokens: ${report.total_tokens_consumed}`,
          `Avg tokens/session: ${report.avg_tokens_per_session}`,
          "",
        );
      }
    
      if (section === "all" || section === "captures") {
        lines.push(
          `Auto-capture: ${report.auto_capture_stats.total_captures} captured, ${report.auto_capture_stats.total_skips} skipped`,
          `Capture rate: ${(report.auto_capture_stats.capture_rate * 100).toFixed(1)}%`,
          "",
        );
      }
    
      if (section === "all" || section === "compression") {
        const c = report.compression_stats;
        const savedPct = c.tokens_before > 0 ? (c.tokens_saved / c.tokens_before * 100).toFixed(1) : "0.0";
        lines.push(
          `Compression: ${c.total_runs} run(s), ${c.tokens_before}→${c.tokens_after} tokens (saved ${c.tokens_saved}, ${savedPct}%)`,
          `Avg ratio: ${c.avg_ratio.toFixed(2)}`,
          "",
        );
      }
    
      if (section === "all" || section === "memories") {
        lines.push(
          `Memories: ${report.memory_stats.total_active} active, ${report.memory_stats.total_deleted} deleted`,
        );
        if (Object.keys(report.memory_stats.by_type).length > 0) {
          lines.push("By type: " + Object.entries(report.memory_stats.by_type).map(([t, c]) => `${t}:${c}`).join(", "));
        }
        if (report.dedup_stats) {
          const d = report.dedup_stats;
          lines.push(
            `Dedup: ${d.blocked} blocked, ${d.warned} warned, ${d.passed} passed (${report.period})`
          );
        }
      }
    
      // G3: explanatory footer when the period predates the v2 install (i.e., no analytics events yet).
      const firstEventRow = db.prepare(
        "SELECT MIN(created_at) as first FROM analytics_events"
      ).get() as { first: string | null };
      if (!firstEventRow.first) {
        lines.push("");
        lines.push("Note: no analytics events recorded yet. This is expected immediately after v2 upgrade; data will accumulate over the next few sessions.");
      } else {
        lines.push("");
        lines.push(`Note: analytics tracking began ${firstEventRow.first}. Memories created before that appear with neutral (0.5) utility scores until they accumulate injection+use data.`);
      }
    
      return lines.join("\n");
    }
  • Registration with inputSchema defining three parameters: period (enum: last_24h, last_7d, last_30d, all), section (enum: all, injections, captures, compression, memories), and project_path (optional string). Annotations mark it as readOnlyHint=true, idempotentHint=true.
    {
      title: "Memory effectiveness analytics",
      description: [
        "Reports utility rates of injected memories, token costs per search layer, auto-capture stats, compression activity, and prune suggestions. Read-only.",
        "Use to tune importance thresholds, find dead memories worth pruning, and verify that auto-capture/compression are earning their keep. Returns a no-op message when analytics are disabled.",
      ].join(" "),
      inputSchema: {
        period: z.enum(["last_24h", "last_7d", "last_30d", "all"]).default("last_7d").describe("Time window to summarise. `all` covers the full retention period configured in `analytics.retentionDays`."),
        section: z.enum(["all", "injections", "captures", "compression", "memories"]).default("all").describe("Which sub-report to render — `injections` (utility), `captures` (auto-capture), `compression`, `memories` (prune suggestions), or `all` (default)."),
        project_path: z.string().default("").describe("Optional absolute project path to scope the report. Empty string aggregates across all projects."),
      },
      annotations: {
        title: "Memory effectiveness analytics",
        readOnlyHint: true,
        destructiveHint: false,
        idempotentHint: true,
        openWorldHint: false,
      },
      outputSchema: {
        message: z.string().describe("Markdown report grouped by `section` (`injections` / `captures` / `compression` / `memories`) with totals, percentages, and prune candidates. Returns a no-op message when analytics are disabled in config."),
      },
    },
  • src/index.ts:510-535 (registration)
    Tool registration via server.registerTool('memory_analytics', ...) with title, description, schemas, and handler that calls handleMemoryAnalytics(db, params).
    server.registerTool(
      "memory_analytics",
      {
        title: "Memory effectiveness analytics",
        description: [
          "Reports utility rates of injected memories, token costs per search layer, auto-capture stats, compression activity, and prune suggestions. Read-only.",
          "Use to tune importance thresholds, find dead memories worth pruning, and verify that auto-capture/compression are earning their keep. Returns a no-op message when analytics are disabled.",
        ].join(" "),
        inputSchema: {
          period: z.enum(["last_24h", "last_7d", "last_30d", "all"]).default("last_7d").describe("Time window to summarise. `all` covers the full retention period configured in `analytics.retentionDays`."),
          section: z.enum(["all", "injections", "captures", "compression", "memories"]).default("all").describe("Which sub-report to render — `injections` (utility), `captures` (auto-capture), `compression`, `memories` (prune suggestions), or `all` (default)."),
          project_path: z.string().default("").describe("Optional absolute project path to scope the report. Empty string aggregates across all projects."),
        },
        annotations: {
          title: "Memory effectiveness analytics",
          readOnlyHint: true,
          destructiveHint: false,
          idempotentHint: true,
          openWorldHint: false,
        },
        outputSchema: {
          message: z.string().describe("Markdown report grouped by `section` (`injections` / `captures` / `compression` / `memories`) with totals, percentages, and prune candidates. Returns a no-op message when analytics are disabled in config."),
        },
      },
      async (params) => textResult(await handleMemoryAnalytics(db, params))
    );
  • Helper function resolveProjectId() that looks up a project by root_path in the DB without creating a new row. Returns null for 'global', empty string, or non-existent paths.
    export function resolveProjectId(db: Database.Database, path: string | undefined): string | null {
      if (!path || path === "global" || path === "") return null;
      const row = db.prepare("SELECT id FROM projects WHERE root_path = ?").get(path) as
        | { id: string } | undefined;
      return row?.id ?? null;
    }
  • The generateReport() function and supporting types/helpers (periodToSqlClause, AnalyticsReport interface, DedupStats, etc.). Queries analytics_events, memories, and compression_log tables to build the full report object consumed by the handler.
    export function periodToSqlClause(period: string): string {
      switch (period) {
        case "last_24h": return "AND created_at > datetime('now', '-24 hours')";
        case "last_7d": return "AND created_at > datetime('now', '-7 days')";
        case "last_30d": return "AND created_at > datetime('now', '-30 days')";
        case "all": return "";
        default: return "";
      }
    }
    
    // K4: projectId may be null to aggregate across all projects (no WHERE project_id filter).
    export function generateReport(db: Database.Database, projectId: string | null, period: string): AnalyticsReport {
      const clause = periodToSqlClause(period);
    
      // Build the project-id filter fragment. For null we emit "1=1" (no filter); for a real id we bind it.
      const projSql = projectId === null ? "1=1" : "project_id = ?";
      const projBind: string[] = projectId === null ? [] : [projectId];
    
      const sessionStats = db.prepare(`
        SELECT
          COUNT(DISTINCT session_id) as session_count,
          COALESCE(SUM(CASE WHEN event_type = 'budget_debit' THEN tokens_cost ELSE 0 END), 0) as total_tokens
        FROM analytics_events
        WHERE ${projSql} ${clause}
      `).get(...projBind) as { session_count: number; total_tokens: number };
    
      const captureStats = db.prepare(`
        SELECT
          COALESCE(SUM(CASE WHEN event_type = 'auto_capture' THEN 1 ELSE 0 END), 0) as captures,
          COALESCE(SUM(CASE WHEN event_type = 'auto_capture_skip' THEN 1 ELSE 0 END), 0) as skips
        FROM analytics_events
        WHERE ${projSql} ${clause}
      `).get(...projBind) as { captures: number; skips: number };
    
      const memStats = db.prepare(`
        SELECT
          COALESCE(SUM(CASE WHEN deleted_at IS NULL THEN 1 ELSE 0 END), 0) as active,
          COALESCE(SUM(CASE WHEN deleted_at IS NOT NULL THEN 1 ELSE 0 END), 0) as deleted
        FROM memories
        WHERE ${projSql}
      `).get(...projBind) as { active: number; deleted: number };
    
      const typeRows = db.prepare(`
        SELECT memory_type, COUNT(*) as cnt FROM memories
        WHERE ${projSql} AND deleted_at IS NULL
        GROUP BY memory_type
      `).all(...projBind) as Array<{ memory_type: string; cnt: number }>;
    
      const byType: Record<string, number> = {};
      for (const r of typeRows) byType[r.memory_type] = r.cnt;
    
      const totalCaptures = captureStats.captures + captureStats.skips;
    
      // Compression stats: compression_log has no project_id column, so we join to memories
      // to filter by project. Period clause applies to compression_log.created_at.
      const compressionClause = clause.replace(/created_at/g, "cl.created_at");
      const compressionStats = db
        .prepare(
          projectId === null
            ? `
              SELECT
                COUNT(*) as runs,
                COALESCE(SUM(cl.tokens_before), 0) as tokens_before,
                COALESCE(SUM(cl.tokens_after), 0) as tokens_after,
                COALESCE(AVG(cl.compression_ratio), 0) as avg_ratio
              FROM compression_log cl
              WHERE 1=1 ${compressionClause}
            `
            : `
              SELECT
                COUNT(*) as runs,
                COALESCE(SUM(cl.tokens_before), 0) as tokens_before,
                COALESCE(SUM(cl.tokens_after), 0) as tokens_after,
                COALESCE(AVG(cl.compression_ratio), 0) as avg_ratio
              FROM compression_log cl
              JOIN memories m ON m.id = cl.compressed_memory_id
              WHERE m.project_id = ? ${compressionClause}
            `,
        )
        .get(...projBind) as {
        runs: number;
        tokens_before: number;
        tokens_after: number;
        avg_ratio: number;
      };
    
      // Search layer stats
      const searchLayerRows = db.prepare(`
        SELECT
          json_extract(event_data, '$.detail') as detail,
          COUNT(*) as count
        FROM analytics_events
        WHERE ${projSql} AND event_type = 'search_layer_used' ${clause}
        GROUP BY detail
      `).all(...projBind) as Array<{ detail: string; count: number }>;
    
      const searchLayerByDetail: Record<string, number> = {};
      let totalSearches = 0;
      for (const r of searchLayerRows) {
        if (r.detail) {
          searchLayerByDetail[r.detail] = r.count;
          totalSearches += r.count;
        }
      }
    
      // Session summary mode stats
      const summaryRows = db.prepare(`
        SELECT
          json_extract(event_data, '$.mode') as mode,
          json_extract(event_data, '$.fallback') as fallback,
          COUNT(*) as cnt
        FROM analytics_events
        WHERE ${projSql} AND event_type = 'session_summary' ${clause}
        GROUP BY json_extract(event_data, '$.mode'), json_extract(event_data, '$.fallback')
      `).all(...projBind) as Array<{ mode: string | null; fallback: number | string | boolean | null; cnt: number }>;
    
      let llmSummaries = 0;
      let deterministicSummaries = 0;
      let fallbackSummaries = 0;
      for (const r of summaryRows) {
        if (r.mode === "llm") {
          llmSummaries += r.cnt;
        } else {
          deterministicSummaries += r.cnt;
        }
        if (r.fallback === 1 || r.fallback === "true" || r.fallback === true) {
          fallbackSummaries += r.cnt;
        }
      }
      const totalSummaries = llmSummaries + deterministicSummaries;
    
      // Sync stats
      const syncPushRows = db.prepare(`
        SELECT COUNT(*) as cnt FROM analytics_events
        WHERE ${projSql} AND event_type = 'sync_push' ${clause}
      `).get(...projBind) as { cnt: number };
    
      const syncPullRows = db.prepare(`
        SELECT COUNT(*) as cnt FROM analytics_events
        WHERE ${projSql} AND event_type = 'sync_pull' ${clause}
      `).get(...projBind) as { cnt: number };
    
      const syncConflictRows = db.prepare(`
        SELECT COALESCE(SUM(json_extract(event_data, '$.conflicts')), 0) as total
        FROM analytics_events
        WHERE ${projSql} AND event_type = 'sync_pull' ${clause}
      `).get(...projBind) as { total: number };
    
      const totalSyncPushes = syncPushRows.cnt;
      const totalSyncPulls = syncPullRows.cnt;
      const totalSyncConflicts = syncConflictRows.total ?? 0;
    
      // Dedup stats (the selected period applies)
      const dedupRows = db.prepare(`
        SELECT
          json_extract(event_data, '$.action') as action,
          COUNT(*) as cnt
        FROM analytics_events
        WHERE ${projSql} AND event_type = 'dedup_decision' ${clause}
        GROUP BY json_extract(event_data, '$.action')
      `).all(...projBind) as Array<{ action: string | null; cnt: number }>;
    
      let dedupBlocked = 0;
      let dedupWarned = 0;
      let dedupPassed = 0;
      for (const r of dedupRows) {
        if (r.action === "blocked") dedupBlocked += r.cnt;
        else if (r.action === "warned") dedupWarned += r.cnt;
        else if (r.action === "passed") dedupPassed += r.cnt;
      }
      const totalDedupEvents = dedupBlocked + dedupWarned + dedupPassed;
    
      return {
        period,
        session_count: sessionStats.session_count,
        total_tokens_consumed: sessionStats.total_tokens,
        avg_tokens_per_session: sessionStats.session_count > 0
          ? Math.round(sessionStats.total_tokens / sessionStats.session_count) : 0,
        auto_capture_stats: {
          total_captures: captureStats.captures,
          total_skips: captureStats.skips,
          capture_rate: totalCaptures > 0 ? captureStats.captures / totalCaptures : 0,
        },
        memory_stats: {
          total_active: memStats.active,
          total_deleted: memStats.deleted,
          by_type: byType,
        },
        compression_stats: {
          total_runs: compressionStats.runs,
          tokens_before: compressionStats.tokens_before,
          tokens_after: compressionStats.tokens_after,
          avg_ratio: compressionStats.avg_ratio,
          tokens_saved: Math.max(0, compressionStats.tokens_before - compressionStats.tokens_after),
        },
        search_layer_stats: totalSearches > 0 ? {
          total_searches: totalSearches,
          by_detail: searchLayerByDetail,
        } : undefined,
        session_summary_stats: totalSummaries > 0 ? {
          llm: llmSummaries,
          deterministic: deterministicSummaries,
          fallback: fallbackSummaries,
        } : undefined,
        sync_stats: (totalSyncPushes > 0 || totalSyncPulls > 0) ? {
          pushes: totalSyncPushes,
          pulls: totalSyncPulls,
          conflicts: totalSyncConflicts,
        } : undefined,
        dedup_stats: totalDedupEvents > 0 ? {
          blocked: dedupBlocked,
          warned: dedupWarned,
          passed: dedupPassed,
        } : undefined,
      };
    }
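Because the period mapping is a pure function with no database dependency, it can be sanity-checked standalone. The copy below reproduces the switch from the listing above so the snippet runs on its own:

```typescript
// Reproduced from the listing above: maps a period keyword to a SQL time filter.
function periodToSqlClause(period: string): string {
  switch (period) {
    case "last_24h": return "AND created_at > datetime('now', '-24 hours')";
    case "last_7d": return "AND created_at > datetime('now', '-7 days')";
    case "last_30d": return "AND created_at > datetime('now', '-30 days')";
    case "all": return "";
    default: return ""; // unknown periods behave like "all": no time filter
  }
}

console.log(periodToSqlClause("last_7d")); // clause with the -7 days cutoff
console.log(periodToSqlClause("all"));     // empty string: no time filter
```

Note that the empty-string return for `all` is why `generateReport()` interpolates the clause with a preceding `WHERE ${projSql} ${clause}`: the query stays valid whether or not a time filter is present.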
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint, destructiveHint, and idempotentHint. The description adds that it returns a no-op message when analytics are disabled, which supplements the annotations with a useful edge-case behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is three concise sentences, front-loaded with the main output, followed by usage guidance and an edge case. No unnecessary words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

With an output schema present, the description does not need to detail return values. It covers the main use cases and the disabled-analytics edge case, which is sufficient for a read-only analytics tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so the baseline is 3. The description does not add additional parameter-specific details beyond what the schema already provides for period, section, and project_path.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool reports analytics on injected memories, token costs, auto-capture stats, compression, and prune suggestions, which is specific and distinct from sibling tools like memory_store or memory_delete.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly says to use for tuning importance thresholds, finding dead memories, and verifying auto-capture/compression effectiveness, and notes it is read-only. It provides clear context but does not explicitly mention when not to use it.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
