Glama
ocean1

Claude Consciousness Bridge

batchAdjustImportance

Adjust importance scores for multiple memories simultaneously in the Claude Consciousness Bridge, enabling efficient memory management across sessions.

Instructions

Batch adjust importance scores for multiple memories at once

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| updates | Yes | Array of memory updates | (none) |
| contentPattern | No | Optional pattern to match memory content (used with SQL LIKE) | (none) |
| minImportance | No | Only update memories with importance >= this value | (none) |
| maxImportance | No | Only update memories with importance <= this value | (none) |
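
For illustration, a valid argument object for this schema might look like the following (the memory IDs and values are invented for this example):

```typescript
// Hypothetical batchAdjustImportance arguments; memory IDs are invented
const args = {
  updates: [
    { memoryId: "episodic_2025_06_01_planning", newImportance: 0.9 },
    { memoryId: "semantic_user_preferences", newImportance: 0.35 },
  ],
  // Optional filter (note: accepted by the schema but not applied by the current handler)
  minImportance: 0.1,
};

console.log(args.updates.length);
```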

Implementation Reference

  • Primary handler function in ConsciousnessProtocolProcessor. It validates input against the Zod schema, loops through the batch updates, and delegates to memoryManager.adjustImportanceScore for each memory, collecting per-memory failures into an aggregated result. Note: pattern-based updates are not implemented, and the minImportance/maxImportance filters are accepted but never applied in this handler.
    async batchAdjustImportance(args: z.infer<typeof batchAdjustImportanceSchema>) {
      const { updates, contentPattern, minImportance, maxImportance } = args;
    
      const results = {
        success: true,
        totalUpdated: 0,
        updates: [] as any[],
        errors: [] as any[],
      };
    
      try {
        // If specific updates are provided, process them
        if (updates && updates.length > 0) {
          for (const update of updates) {
            try {
              const result = this.memoryManager.adjustImportanceScore(
                update.memoryId,
                update.newImportance
              );
              if (result.changes > 0) {
                results.totalUpdated++;
                results.updates.push({
                  memoryId: update.memoryId,
                  newImportance: update.newImportance,
                  success: true,
                });
              }
            } catch (error) {
              results.errors.push({
                memoryId: update.memoryId,
                error: error instanceof Error ? error.message : 'Update failed',
              });
            }
          }
        }
    
        // If pattern-based update is requested
        if (contentPattern) {
          // This would require access to the database directly
          // For now, return a message indicating this needs to be implemented
          results.errors.push({
            error: 'Pattern-based batch updates not yet implemented. Please use specific memory IDs.',
          });
        }
    
        results.success = results.errors.length === 0;
        return results;
      } catch (error) {
        return {
          success: false,
          error: error instanceof Error ? error.message : 'Batch update failed',
          totalUpdated: results.totalUpdated,
          updates: results.updates,
          errors: results.errors,
        };
      }
    }
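
    The aggregated result keeps the same shape whether the batch fully succeeds or not. As an illustration (all values invented), a partially failed batch would look like:

    ```typescript
    // Illustrative aggregated result where one memory ID was unknown
    const result = {
      success: false, // false because the errors array is non-empty
      totalUpdated: 1,
      updates: [{ memoryId: "mem_a", newImportance: 0.8, success: true }],
      errors: [
        { memoryId: "mem_b", error: "Memory mem_b does not exist in entities table" },
      ],
    };

    console.log(result.totalUpdated);
    ```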
  • Zod input schema defining the structure for batchAdjustImportance tool arguments: array of {memoryId, newImportance}, optional filters.
    export const batchAdjustImportanceSchema = z.object({
      updates: z
        .array(
          z.object({
            memoryId: z.string().describe('The ID of the memory to adjust'),
            newImportance: z.number().min(0).max(1).describe('New importance score (0-1)'),
          })
        )
        .describe('Array of memory updates'),
      contentPattern: z
        .string()
        .optional()
        .describe('Optional pattern to match memory content (will be used with SQL LIKE)'),
      minImportance: z
        .number()
        .min(0)
        .max(1)
        .optional()
        .describe('Only update memories with importance >= this value'),
      maxImportance: z
        .number()
        .min(0)
        .max(1)
        .optional()
        .describe('Only update memories with importance <= this value'),
    });
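
    The min(0)/max(1) bounds mean a score such as 1.5 is rejected before any database work happens. A dependency-free sketch of the same check (the real server relies on Zod, not this helper):

    ```typescript
    // Sketch of the 0..1 bound that z.number().min(0).max(1) enforces
    function isValidImportance(n: number): boolean {
      return Number.isFinite(n) && n >= 0 && n <= 1;
    }

    console.log(isValidImportance(0.5), isValidImportance(1.5));
    ```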
  • MCP tool registration definition in consciousnessProtocolTools object, including description and JSON schema for input validation. Imported and used by the server for tool listing.
    batchAdjustImportance: {
      description: 'Batch adjust importance scores for multiple memories at once',
      inputSchema: {
        type: 'object',
        properties: {
          updates: {
            type: 'array',
            description: 'Array of memory updates',
            items: {
              type: 'object',
              properties: {
                memoryId: {
                  type: 'string',
                  description: 'The ID of the memory to adjust',
                },
                newImportance: {
                  type: 'number',
                  minimum: 0,
                  maximum: 1,
                  description: 'New importance score (0-1)',
                },
              },
              required: ['memoryId', 'newImportance'],
            },
          },
          contentPattern: {
            type: 'string',
            description: 'Optional pattern to match memory content (will be used with SQL LIKE)',
          },
          minImportance: {
            type: 'number',
            minimum: 0,
            maximum: 1,
            description: 'Only update memories with importance >= this value',
          },
          maxImportance: {
            type: 'number',
            minimum: 0,
            maximum: 1,
            description: 'Only update memories with importance <= this value',
          },
        },
        required: ['updates'],
      },
    },
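
    A client invokes this registration through a standard MCP tools/call request. An illustrative JSON-RPC body (the request ID and memory ID are invented):

    ```typescript
    // Illustrative MCP tools/call request for batchAdjustImportance
    const request = {
      jsonrpc: "2.0",
      id: 1,
      method: "tools/call",
      params: {
        name: "batchAdjustImportance",
        arguments: {
          updates: [{ memoryId: "mem_example", newImportance: 0.7 }],
        },
      },
    };

    console.log(request.params.name);
    ```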
  • Thin wrapper handler in ConsciousnessRAGServer that ensures initialization, delegates to protocolProcessor.batchAdjustImportance, and formats response as MCP content block.
    private async batchAdjustImportance(args: any) {
      const init = await this.ensureInitialized();
      if (!init.success) {
        return {
          content: [
            {
              type: 'text',
              text: init.message!,
            },
          ],
        };
      }
    
      const result = await this.protocolProcessor!.batchAdjustImportance(args);
    
      return {
        content: [
          {
            type: 'text',
            text: JSON.stringify(result, null, 2),
          },
        ],
      };
    }
  • Core database helper method that updates or inserts importance_score for a memory entity in the memory_metadata table. Called by batch handler for each update.
    adjustImportanceScore(memoryId: string, newImportance: number): { changes: number } {
      // First check if the entity exists
      const entityExists = this.db.prepare('SELECT 1 FROM entities WHERE name = ?').get(memoryId);
    
      if (!entityExists) {
        throw new Error(`Memory ${memoryId} does not exist in entities table`);
      }
    
      // Update importance score in memory_metadata table
      const result = this.db
        .prepare(
          `
        UPDATE memory_metadata 
        SET importance_score = ? 
        WHERE entity_name = ?
      `
        )
        .run(newImportance, memoryId);
    
      if (result.changes === 0) {
        // If no rows updated, insert new metadata record
        // Get the current session or use a default
        const currentSession = this.sessionId || `session_${Date.now()}`;
    
      const insertResult = this.db
        .prepare(
          `
        INSERT INTO memory_metadata (entity_name, memory_type, created_at, importance_score, session_id)
        VALUES (?, ?, ?, ?, ?)
      `
        )
        .run(
          memoryId,
          memoryId.startsWith('episodic') ? 'episodic' : 'semantic',
          new Date().toISOString(),
          newImportance,
          currentSession
        );

      // Report the insert as a change so batch callers count this memory
      return insertResult;
    }

    return result;
  }
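
The update-then-insert fallback above is effectively an upsert. Assuming memory_metadata has a unique constraint on entity_name (not confirmed by this excerpt), SQLite 3.24+ could express it in a single statement; a hypothetical sketch:

```typescript
// Hypothetical single-statement upsert; assumes UNIQUE(entity_name) exists
const upsertSql = `
  INSERT INTO memory_metadata (entity_name, memory_type, created_at, importance_score, session_id)
  VALUES (?, ?, ?, ?, ?)
  ON CONFLICT(entity_name) DO UPDATE SET importance_score = excluded.importance_score
`;

console.log(upsertSql.includes("ON CONFLICT"));
```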
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden but only states what the tool does without behavioral details. It doesn't disclose whether this is a destructive operation, what permissions are needed, how failures are handled, or what the response looks like. For a batch mutation tool, this is a significant gap in transparency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that communicates the core functionality without unnecessary words. It's appropriately sized and front-loaded with the essential information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a batch mutation tool with 4 parameters and no annotations or output schema, the description is insufficient. It doesn't explain what 'importance scores' mean in context, how the batch operation behaves (atomicity, error handling), or what happens to memories that match the filtering parameters. More context is needed for safe and effective use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema fully documents all parameters. The description doesn't add any parameter-specific information beyond what's in the schema, maintaining the baseline score of 3 for adequate coverage through structured data alone.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('batch adjust') and resource ('importance scores for multiple memories'), making the purpose immediately understandable. It distinguishes from the sibling 'adjustImportance' by specifying batch capability, though it doesn't explicitly name that sibling for comparison.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided about when to use this tool versus alternatives like 'adjustImportance' (for single adjustments) or other memory-related tools. The description lacks context about prerequisites, constraints, or typical use cases for batch operations.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
