Glama

capture_memory_feedback

Destructive

Record success or failure feedback to improve future workflow performance by capturing contextual signals and tags.

Instructions

Capture success/failure feedback to harden future workflows. Aliased to capture_feedback.

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| signal | Yes | Feedback signal; the schema enum accepts `up` or `down`. | |
| context | Yes | Free-text description of what happened. | |
| tags | No | Array of string tags for categorization. | |
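A hypothetical invocation, assuming standard MCP JSON-RPC framing (the request envelope below is illustrative; only the argument names and types come from the input schema):

```javascript
// Illustrative MCP tools/call request for capture_memory_feedback.
// The JSON-RPC envelope is assumed; the `arguments` keys mirror the
// input schema: signal and context are required, tags is optional.
const request = {
  jsonrpc: '2.0',
  id: 1,
  method: 'tools/call',
  params: {
    name: 'capture_memory_feedback',
    arguments: {
      signal: 'down', // schema enum: 'up' | 'down'
      context: 'Deploy step failed: missing env var in CI job',
      tags: ['ci', 'deploy'], // optional list of string tags
    },
  },
};
```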

Implementation Reference

  • The `captureFeedback` function implements the logic for processing and storing memory feedback: signal normalization, diagnostic enrichment, memory storage, and side-effect triggers such as sequence tracking, vector storage, and risk-model updates.
    function captureFeedback(params) {
      const { FEEDBACK_LOG_PATH, MEMORY_LOG_PATH, FEEDBACK_DIR } = getFeedbackPaths();
      const signal = normalizeSignal(params.signal);
      if (!signal) {
        return {
          accepted: false,
          reason: `Invalid signal "${params.signal}". Use up/down or positive/negative.`,
        };
      }
    
      const context = params.context || '';
      extractAndSetConstraints(context);
    
      const providedTags = Array.isArray(params.tags)
        ? params.tags
        : String(params.tags || '')
            .split(',')
            .map((t) => t.trim())
            .filter(Boolean);
    
      const semanticTags = inferSemanticTags(context);
      const tags = Array.from(new Set([...providedTags, ...semanticTags]));
    
      let rubricEvaluation = null;
      try {
        if (params.rubricScores != null || params.guardrails != null) {
          rubricEvaluation = buildRubricEvaluation({
            rubricScores: params.rubricScores,
            guardrails: parseOptionalObject(params.guardrails, 'guardrails'),
          });
        }
      } catch (err) {
        return {
          accepted: false,
          reason: `Invalid rubric payload: ${err.message}`,
        };
      }
    
      const action = resolveFeedbackAction({
        signal,
        context: params.context || '',
        whatWentWrong: params.whatWentWrong,
        whatToChange: params.whatToChange,
        whatWorked: params.whatWorked,
        reasoning: params.reasoning,
        visualEvidence: params.visualEvidence,
        tags,
        rubricEvaluation,
      });
    
      // Tool-call attribution: link feedback to specific action (#203)
      const lastAction = params.lastAction
        ? {
          tool: params.lastAction.tool || 'unknown',
          contextKey: params.lastAction.contextKey || null,
          file: params.lastAction.file || null,
          timestamp: params.lastAction.timestamp || null,
        }
        : null;
    
      const now = new Date().toISOString();
      const rawFeedbackEvent = {
        id: `fb_${Date.now()}_${Math.random().toString(36).slice(2, 8)}`,
        signal,
        context: params.context || '',
        lastAction,
        whatWentWrong: params.whatWentWrong || null,
        whatToChange: params.whatToChange || null,
        whatWorked: params.whatWorked || null,
        reasoning: params.reasoning || null,
        visualEvidence: params.visualEvidence || null,
        tags,
        skill: params.skill || null,
        rubric: rubricEvaluation
          ? {
            rubricId: rubricEvaluation.rubricId,
            weightedScore: rubricEvaluation.weightedScore,
            failingCriteria: rubricEvaluation.failingCriteria,
            failingGuardrails: rubricEvaluation.failingGuardrails,
            judgeDisagreements: rubricEvaluation.judgeDisagreements,
            promotionEligible: rubricEvaluation.promotionEligible,
          }
          : null,
        actionType: action.type,
        actionReason: action.reason || null,
        timestamp: now,
      };
    
      // Rich context enrichment (QUAL-02, QUAL-03) — non-blocking
      let feedbackEvent = enrichFeedbackContext(rawFeedbackEvent, params);
      const shouldDiagnose = signal === 'negative'
        || (rubricEvaluation && (
          (rubricEvaluation.failingCriteria || []).length > 0
          || (rubricEvaluation.failingGuardrails || []).length > 0
        ))
        || (typeof rawFeedbackEvent.actionReason === 'string' && /rubric gate/i.test(rawFeedbackEvent.actionReason));
      const diagnosis = shouldDiagnose
        ? diagnoseFailure({
          step: 'feedback_capture',
          context,
          rubricEvaluation,
          feedbackEvent,
          suspect: signal === 'negative' || action.type === 'no-action',
        })
        : null;
      const storedDiagnosis = toStoredDiagnosis(diagnosis);
      if (storedDiagnosis) {
        feedbackEvent = {
          ...feedbackEvent,
          diagnosis: storedDiagnosis,
        };
      }
      const historyEntries = readJSONL(FEEDBACK_LOG_PATH).slice(-SEQUENCE_WINDOW);
    
      const summary = loadSummary();
      summary.total += 1;
      summary[signal] += 1;
    
      if (action.type === 'no-action') {
        const clarification = buildClarificationMessage({
          signal,
          context: params.context || '',
          whatWentWrong: params.whatWentWrong,
          whatToChange: params.whatToChange,
          whatWorked: params.whatWorked,
        });
        summary.rejected += 1;
        summary.lastUpdated = now;
        saveSummary(summary);
        appendJSONL(FEEDBACK_LOG_PATH, feedbackEvent);
        try {
          appendSequence(historyEntries, feedbackEvent, getFeedbackPaths(), { accepted: false });
        } catch {
          // Sequence tracking failure is non-critical
        }
        try {
          const riskScorer = getRiskScorerModule();
          if (riskScorer) {
            riskScorer.trainAndPersistRiskModel(FEEDBACK_DIR);
          }
        } catch {
          // Risk model refresh is non-critical
        }
        return {
          accepted: false,
          status: clarification ? 'clarification_required' : 'rejected',
          reason: action.reason,
          message: clarification ? clarification.message : 'Signal logged, but reusable memory was not created.',
          feedbackEvent,
          ...(clarification || {}),
        };
      }
    
      const prepared = prepareForStorage(action.memory);
      if (!prepared.ok) {
        summary.rejected += 1;
        summary.lastUpdated = now;
        saveSummary(summary);
        appendJSONL(FEEDBACK_LOG_PATH, {
          ...feedbackEvent,
          validationIssues: prepared.issues,
        });
        try {
          appendSequence(historyEntries, feedbackEvent, getFeedbackPaths(), { accepted: false });
        } catch {
          // Sequence tracking failure is non-critical
        }
        try {
          const riskScorer = getRiskScorerModule();
          if (riskScorer) {
            riskScorer.trainAndPersistRiskModel(FEEDBACK_DIR);
          }
        } catch {
          // Risk model refresh is non-critical
        }
        return {
          accepted: false,
          status: 'rejected',
          reason: `Schema validation failed: ${prepared.issues.join('; ')}`,
          message: 'Signal logged, but reusable memory was not created.',
          feedbackEvent,
          issues: prepared.issues,
        };
      }
    
      const memoryRecord = {
        id: `mem_${Date.now()}_${Math.random().toString(36).slice(2, 8)}`,
        ...prepared.memory,
        diagnosis: storedDiagnosis,
        sourceFeedbackId: feedbackEvent.id,
        timestamp: now,
      };
    
      // Bayesian Belief Update (Project Bayes)
      try {
        const { updateBelief, shouldPrune } = require('./belief-update');
        const existingMemories = readJSONL(MEMORY_LOG_PATH);
        const similarMemory = existingMemories.slice().reverse().find(m => 
          m.tags && m.tags.some(t => memoryRecord.tags.includes(t) && !GENERIC_TAGS.has(t))
        );
    
        if (similarMemory && similarMemory.bayesian) {
          const likelihood = signal === 'positive' ? 0.9 : 0.1;
          memoryRecord.bayesian = updateBelief(similarMemory.bayesian, likelihood);
          memoryRecord.revisedFromId = similarMemory.id;
          
          if (shouldPrune(memoryRecord.bayesian)) {
            memoryRecord.pruned = true;
            memoryRecord.pruneReason = 'high_entropy_contradiction';
          }
        }
      } catch (_err) { /* bayesian update is non-blocking */ }
    
      appendJSONL(FEEDBACK_LOG_PATH, feedbackEvent);
      appendJSONL(MEMORY_LOG_PATH, memoryRecord);
    
      const contextFs = getContextFsModule();
      if (contextFs && typeof contextFs.registerFeedback === 'function') {
        try {
          contextFs.registerFeedback(feedbackEvent, memoryRecord);
        } catch {
          // Non-critical; feedback remains in primary logs
        }
      }
    
      // ML side-effects: sequence tracking and diversity (non-blocking — primary write already succeeded)
      const mlPaths = getFeedbackPaths();
      try {
        appendSequence(historyEntries, feedbackEvent, mlPaths, { accepted: true });
      } catch (err) {
        // Sequence tracking failure is non-critical
      }
      try {
        updateDiversityTracking(feedbackEvent, mlPaths);
      } catch (err) {
        // Diversity tracking failure is non-critical
      }
    
      // Vector storage side-effect (non-blocking — primary write already succeeded)
      const vectorStore = getVectorStoreModule();
      if (vectorStore && typeof vectorStore.upsertFeedback === 'function') {
        trackBackgroundSideEffect(vectorStore.upsertFeedback(feedbackEvent));
      }
    
      // RLAIF self-audit side-effect (non-blocking — 4th enrichment layer)
      try {
        const sam = getSelfAuditModule();
        if (sam) sam.selfAuditAndLog(feedbackEvent, mlPaths);
      } catch (_err) { /* non-critical */ }
    
      // Boosted risk model refresh — local, file-based, and non-blocking
      try {
        const riskScorer = getRiskScorerModule();
        if (riskScorer) {
          riskScorer.trainAndPersistRiskModel(FEEDBACK_DIR);
        }
      } catch (_err) { /* non-critical */ }
    
      // Attribution side-effects — fire-and-forget, never throw
      try {
        const toolName = feedbackEvent.toolName || feedbackEvent.tool_name || 'unknown';
        const toolInput = feedbackEvent.context || feedbackEvent.input || '';
        recordAction(toolName, toolInput);
        if (feedbackEvent.signal === 'negative') {
          attributeFeedback('negative', feedbackEvent.context || '');
        } else if (feedbackEvent.signal === 'positive') {
          attributeFeedback('positive', feedbackEvent.context || '');
        }
      } catch (e) {
        // attribution is non-blocking
      }
    
      // Auto-promote gates on negative feedback — non-blocking
      if (feedbackEvent.signal === 'negative') {
        try {
          const autoPromote = require('./auto-promote-gates');
          autoPromote.promote(FEEDBACK_LOG_PATH);
        } catch (_err) {
          // Gate promotion is non-critical — never fail the capture pipeline
        }
      }
    
      summary.accepted += 1;
      summary.lastUpdated = now;
      saveSummary(summary);
    
      return {
        accepted: true,
        status: 'promoted',
        message: 'Feedback promoted to reusable memory.',
        feedbackEvent,
        memoryRecord,
      };
    }
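The `./belief-update` module is not shown above. A minimal sketch consistent with how it is consumed (prior belief plus a likelihood in, updated belief out, with an entropy-based prune check), assuming a hypothetical Beta-Bernoulli representation, might look like:

```javascript
// Hypothetical sketch of ./belief-update, assuming a Beta-Bernoulli model.
// updateBelief folds one observation into the prior; shouldPrune flags
// beliefs whose binary entropy stays high after enough observations,
// matching the 'high_entropy_contradiction' prune reason above.

function updateBelief(prior, likelihood) {
  const alpha = (prior && prior.alpha) || 1; // prior successes + 1
  const beta = (prior && prior.beta) || 1;   // prior failures + 1
  // Treat likelihood >= 0.5 as a positive observation, else negative.
  const next = likelihood >= 0.5
    ? { alpha: alpha + 1, beta }
    : { alpha, beta: beta + 1 };
  next.mean = next.alpha / (next.alpha + next.beta);
  return next;
}

function binaryEntropy(p) {
  if (p <= 0 || p >= 1) return 0;
  return -(p * Math.log2(p) + (1 - p) * Math.log2(1 - p));
}

function shouldPrune(belief, { minObservations = 5, entropyThreshold = 0.95 } = {}) {
  const n = belief.alpha + belief.beta - 2; // observations beyond the uniform prior
  return n >= minObservations && binaryEntropy(belief.mean) >= entropyThreshold;
}
```

Since the capture pipeline treats `bayesian` as an opaque value, any representation exposing an `updateBelief`/`shouldPrune` pair would slot in equally well.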
  • The tool is registered as 'capture_memory_feedback' in the tool registry.
      name: 'capture_memory_feedback',
      description: 'Capture success/failure feedback to harden future workflows. Aliased to capture_feedback.',
      inputSchema: {
        type: 'object',
        properties: {
          signal: { type: 'string', enum: ['up', 'down'] },
          context: { type: 'string' },
          tags: { type: 'array', items: { type: 'string' } },
        },
        required: ['signal', 'context'],
      },
    }),
  • The tool 'capture_memory_feedback' is aliased to 'capture_feedback' in the MCP server adapter, which then calls the 'captureFeedback' handler function.
    // Semantic Aliases for high-level branding alignment
    if (name === 'capture_memory_feedback') name = 'capture_feedback';
    if (name === 'get_reliability_rules') name = 'prevention_rules';
    if (name === 'describe_reliability_entity') name = 'describe_semantic_entity';
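Putting the pieces together, the adapter's dispatch path can be sketched as follows (the wrapper and the stub handler are hypothetical; only the alias mapping is taken from the adapter code above):

```javascript
// Hypothetical dispatch sketch: resolve semantic aliases, then look up
// the handler. The stub below stands in for the real captureFeedback so
// the example is self-contained.
function captureFeedback(params) {
  return { accepted: true, signal: params.signal };
}

const handlers = { capture_feedback: captureFeedback };

function dispatch(name, params) {
  // Semantic alias: the branded name maps onto the canonical handler name.
  if (name === 'capture_memory_feedback') name = 'capture_feedback';
  const handler = handlers[name];
  if (!handler) throw new Error(`Unknown tool: ${name}`);
  return handler(params);
}
```

Either name therefore reaches the same `captureFeedback` handler.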
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations provide 'destructiveHint: true', indicating a mutation operation. The description adds context by stating the purpose is to 'harden future workflows', suggesting this feedback influences system behavior, which aligns with the destructive hint. However, it doesn't disclose additional traits like rate limits, authentication needs, or specific side effects beyond the annotation.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise with two sentences: one stating the purpose and one noting the alias. Every word earns its place, and it's front-loaded with the core functionality. No wasted text or redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given a destructive tool with 3 parameters (0% schema coverage) and no output schema, the description is incomplete. It lacks details on parameter usage, expected outcomes, error handling, or how feedback integrates with workflows. The alias mention is helpful but insufficient for full contextual understanding.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description must compensate for undocumented parameters. It mentions 'success/failure feedback' and 'harden future workflows', which loosely relates to the 'signal' (enum: up/down) and 'context' parameters, but provides no details on usage, format, or meaning of 'tags'. This adds minimal value beyond the bare schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Capture success/failure feedback to harden future workflows.' It specifies the action ('capture') and resource ('feedback') with a clear goal. However, it doesn't differentiate from sibling 'capture_feedback' (which is an alias) or other feedback-related tools like 'feedback_stats' or 'feedback_summary', missing explicit distinction.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It mentions an alias ('capture_feedback') but doesn't explain when to choose this over other feedback tools like 'feedback_stats' or in what contexts it's appropriate. There's no mention of prerequisites, exclusions, or typical scenarios.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/IgorGanapolsky/mcp-memory-gateway'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.