Graph Decay

graph_decay
Destructive

Applies time-based decay to node confidence and edge weights using per-type half-lives to keep the knowledge graph relevant. Always preview changes with dry_run first; decay is irreversible once applied.

Instructions

Apply time-based decay to every node confidence and edge weight using per-type half-lives (preferences ~693d, events ~99d, etc.). Called by the dream process during maintenance. Always preview with dry_run=true first — decay is irreversible without restoring from a graph_export backup. Returns counts of nodes/edges modified per type.
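The quoted half-lives follow directly from the per-day retention rates in the default configuration further down: a rate r halves a value after ln(0.5)/ln(r) days. A quick sketch of that relationship:

```typescript
// Half-life in days for a per-day retention rate r:
// confidence(t) = confidence(0) * r^t, so r^t = 0.5 when t = ln(0.5) / ln(r).
function halfLifeDays(rate: number): number {
  return Math.log(0.5) / Math.log(rate);
}

console.log(Math.round(halfLifeDays(0.999))); // Preference rate → ~693 days
console.log(Math.round(halfLifeDays(0.993))); // Event rate → ~99 days
```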

Input Schema

| Name | Required | Description | Default |
| ---- | -------- | ----------- | ------- |
| dry_run | No | Preview only, don't apply changes | false |

Implementation Reference

  • Registration of the graph_decay tool via server.registerTool('graph_decay', ...). Defines input schema (dry_run boolean), description, and the handler that calls client.applyDecay().
    // ─── Tool: graph_decay ───
    
    server.registerTool("graph_decay", {
      title: "Graph Decay",
      description:
        "Apply time-based decay to every node confidence and edge weight using per-type half-lives (preferences ~693d, events ~99d, etc.). Called by the dream process during maintenance. Always preview with dry_run=true first — decay is irreversible without restoring from a graph_export backup. Returns counts of nodes/edges modified per type.",
      inputSchema: {
        dry_run: z.boolean().optional().default(false).describe("Preview only, don't apply changes (default: false)"),
      },
      annotations: { destructiveHint: true },
    }, async (args) => {
      try {
        const result = await client.applyDecay(currentTenant(), args.dry_run ?? false);
        return toolResult(result);
      } catch (err) {
        return toolError(`graph_decay failed: ${err instanceof Error ? err.message : String(err)}`);
      }
    });
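  • Example dry-run invocation. A sketch of the MCP `tools/call` request an agent would send to this tool; the method and parameter shape follow the MCP protocol, and only the tool name and `dry_run` argument come from the registration above.

```typescript
// MCP `tools/call` request body for a safe preview run of graph_decay.
// Per the tool's own guidance, always dry-run before applying irreversible decay.
const callToolRequest = {
  method: "tools/call",
  params: {
    name: "graph_decay",
    arguments: { dry_run: true }, // preview only; no graph mutation
  },
};

console.log(JSON.stringify(callToolRequest.params.arguments)); // {"dry_run":true}
```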
  • Core implementation of applyDecay() in Neo4jClient. Iterates per-type decay rates, applies exponential decay formula to node confidence and edge weights based on time since last_seen/last_confirmed. Also counts nodes flagged for pruning. Supports dry_run mode.
    async applyDecay(tenantId: string, dryRun = false): Promise<{
      nodes_decayed: number;
      edges_decayed: number;
      nodes_flagged_for_pruning: number;
    }> {
      const config = getConfig();
      let totalNodesDecayed = 0;
      let totalEdgesDecayed = 0;
    
      if (dryRun) {
        // Preview: count decay candidates per type without mutating anything.
        for (const type of Object.keys(config.decay.rates)) {
          const rows = await this.run(
            `
            MATCH (n:\`${type}\` {tenant_id: $tenantId})
            WHERE n.last_seen < datetime() - duration('P1D')
            RETURN count(n) AS count
            `,
            { tenantId },
          );
          totalNodesDecayed += Number(rows[0]?.["count"] ?? 0);
        }
        // Also count candidate edges so edges_decayed is meaningful in dry-run mode.
        const edgeRows = await this.run(
          `
          MATCH (a:Entity {tenant_id: $tenantId})-[r]->(b:Entity {tenant_id: $tenantId})
          WHERE r.last_confirmed < datetime() - duration('P1D')
            AND r.weight IS NOT NULL
          RETURN count(r) AS count
          `,
          { tenantId },
        );
        totalEdgesDecayed = Number(edgeRows[0]?.["count"] ?? 0);
      } else {
        for (const [type, rate] of Object.entries(config.decay.rates)) {
          const rows = await this.run(
            `
            MATCH (n:\`${type}\` {tenant_id: $tenantId})
            WHERE n.last_seen < datetime() - duration('P1D')
            // duration.inDays() forces an all-days representation; using .days
            // on the normalized duration.between() would drop the months
            // component (30 days back → "1 month + 0 days" → 0-day decay).
            WITH n, n.confidence * ($rate ^ duration.inDays(n.last_seen, datetime()).days) AS new_conf
            SET n.confidence = CASE WHEN new_conf < 0.01 THEN 0.01 ELSE new_conf END
            RETURN count(n) AS decayed
            `,
            { tenantId, rate },
          );
          totalNodesDecayed += Number(rows[0]?.["decayed"] ?? 0);
        }
    
        // Decay edges (both endpoints must be in tenant)
        const edgeRows = await this.run(
          `
          MATCH (a:Entity {tenant_id: $tenantId})-[r]->(b:Entity {tenant_id: $tenantId})
          WHERE r.last_confirmed < datetime() - duration('P1D')
            AND r.weight IS NOT NULL
          WITH r, r.weight * ($rate ^ duration.inDays(r.last_confirmed, datetime()).days) AS new_weight
          SET r.weight = CASE WHEN new_weight < 0.01 THEN 0.01 ELSE new_weight END
          RETURN count(r) AS decayed
          `,
          { tenantId, rate: config.decay.edge_rate },
        );
        totalEdgesDecayed = Number(edgeRows[0]?.["decayed"] ?? 0);
      }
    
      // Count nodes flagged for pruning (tenant-scoped)
      const pruneRows = await this.run(
        `
        MATCH (n:Entity {tenant_id: $tenantId})
        WHERE n.confidence < $threshold
        OPTIONAL MATCH (n)-[r]-(other:Entity {tenant_id: $tenantId})
        WITH n, max(r.weight) AS max_edge_weight
        WHERE max_edge_weight IS NULL OR max_edge_weight < $edgeThreshold
        RETURN count(n) AS flagged
        `,
        {
          tenantId,
          threshold: config.decay.prune_node_threshold,
          edgeThreshold: config.decay.prune_edge_threshold,
        },
      );
      const nodesFlagged = Number(pruneRows[0]?.["flagged"] ?? 0);
    
      return {
        nodes_decayed: totalNodesDecayed,
        edges_decayed: totalEdgesDecayed,
        nodes_flagged_for_pruning: nodesFlagged,
      };
    }
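  • The per-node/per-edge update the Cypher performs can be sketched in plain TypeScript (a hypothetical helper for illustration, not part of Neo4jClient): new value = old value × rate^days, clamped to a 0.01 floor.

```typescript
// Mirrors the Cypher SET clause: value * rate^daysSince, floored at 0.01
// so fully decayed items keep a small nonzero score rather than reaching zero.
function decayValue(current: number, rate: number, daysSince: number): number {
  const next = current * Math.pow(rate, daysSince);
  return next < 0.01 ? 0.01 : next;
}

decayValue(1.0, 0.993, 99);   // ≈ 0.5: one Event half-life elapsed
decayValue(0.02, 0.993, 365); // 0.01: clamped at the floor
```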
  • Decay configuration shape: per-type rates (Person, Project, Preference, Concept, Decision, Fact, Event, Object), edge_rate, and pruning thresholds.
    decay: {
      rates: Record<string, number>;
      edge_rate: number;
      prune_node_threshold: number;
      prune_edge_threshold: number;
      prune_orphan_days: number;
    };
  • Default decay rates used by the decay algorithm — exponential decay factors per entity type and for edges.
    decay: {
      rates: {
        Person: 0.998,
        Project: 0.995,
        Preference: 0.999,
        Concept: 0.999,
        Decision: 0.997,
        Fact: 0.996,
        Event: 0.993,
        Object: 0.996,
      },
      edge_rate: 0.997,
      prune_node_threshold: 0.1,
      prune_edge_threshold: 0.05,
      prune_orphan_days: 30,
    },
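  • These defaults also determine when a node becomes a prune candidate: solving start × rate^t < threshold for t gives the number of untouched days before confidence drops under prune_node_threshold. A sketch using the values above:

```typescript
// Days until `start` confidence decays below `threshold` at per-day `rate`:
// start * rate^t < threshold  =>  t > ln(threshold / start) / ln(rate)
function daysUntilBelow(start: number, threshold: number, rate: number): number {
  return Math.ceil(Math.log(threshold / start) / Math.log(rate));
}

daysUntilBelow(1.0, 0.1, 0.993); // 328: an unconfirmed Event is prune-eligible in ~11 months
daysUntilBelow(1.0, 0.1, 0.999); // 2302: a Preference survives ~6.3 years untouched
```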
  • Audit event type definition for 'decay_applied' events, logged when decay executes.
    | (BaseEvent & {
        event: "decay_applied";
        nodes_affected: number;
        edges_affected: number;
      })
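  • A sketch of how the audit event might be assembled from applyDecay()'s result. The tenant_id and timestamp fields here are assumptions about BaseEvent's shape, not confirmed by the type excerpt above.

```typescript
// Hypothetical mapping from applyDecay() counts onto a decay_applied audit event.
// `tenant_id` and `timestamp` are assumed BaseEvent members.
function toDecayAuditEvent(
  tenantId: string,
  result: { nodes_decayed: number; edges_decayed: number },
) {
  return {
    event: "decay_applied" as const,
    tenant_id: tenantId,
    timestamp: new Date().toISOString(),
    nodes_affected: result.nodes_decayed,
    edges_affected: result.edges_decayed,
  };
}

const evt = toDecayAuditEvent("tenant-1", { nodes_decayed: 12, edges_decayed: 7 });
// evt.event === "decay_applied", evt.nodes_affected === 12
```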
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description expands on the destructiveHint annotation by detailing irreversibility, the need for backups, and returning counts of modifications per type. No contradictions with annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences with no waste: first explains the operation, second provides context, third warns about irreversibility. Information is front-loaded and easy to parse.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, the description specifies return values (counts per type). It covers behavior, safety, and context (dream process), making it fully adequate for the tool's complexity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The single parameter dry_run is described in both the schema and the description, with the description adding critical context about its role in preventing irreversible changes. Schema coverage is 100%.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool applies time-based decay to node confidence and edge weights using per-type half-lives, with specific examples (preferences ~693d, events ~99d). It distinguishes itself from siblings by noting it is called by the dream process during maintenance.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly advises to always preview with dry_run=true first and warns that decay is irreversible without a graph_export backup, providing clear when-to-use and precautionary instructions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
