Platano78

Smart-AI-Bridge

backup_restore

Manage backups with create, restore, list, and cleanup actions. Supports timestamped backups with metadata for enterprise-grade tracking. Restore files from backups and automate cleanup based on age or count.

Instructions

Enhanced backup management - Timestamped backup tracking with metadata, restore capability, and intelligent cleanup. Extends existing backup patterns with enterprise-grade management.

Input Schema

| Name | Required | Description | Default |
|------|----------|-------------|---------|
| action | Yes | One of 'create', 'restore', 'list', 'cleanup' | |
| file_path | No | Path of the file to back up, restore, list, or clean up; the handler requires it for 'create' and 'restore' | |
| backup_id | No | Identifier of the backup to restore; the handler requires it for 'restore' | |
| metadata | No | Optional object with 'description' (string) and 'tags' (array of strings), stored alongside the backup | |
| cleanup_options | No | Optional object controlling cleanup: 'max_age_days', 'max_count_per_file', 'dry_run' | max_age_days: 30, max_count_per_file: 10, dry_run: false |
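The four actions take argument shapes along these lines (hypothetical examples; field names follow the input schema, values are illustrative only):

```javascript
// Example argument objects for each backup_restore action.
// Paths, tags, and the backup_id value are invented for illustration.
const createArgs = {
  action: 'create',
  file_path: 'src/config.json',
  metadata: { description: 'pre-deploy snapshot', tags: ['deploy'] }
};

const restoreArgs = {
  action: 'restore',
  file_path: 'src/config.json',
  backup_id: 'backup_1700000000000_a1b2c3d4e'
};

const listArgs = { action: 'list', file_path: 'src/config.json' };

const cleanupArgs = {
  action: 'cleanup',
  file_path: 'src/config.json',
  cleanup_options: { max_age_days: 7, max_count_per_file: 5, dry_run: true }
};
```

Setting dry_run to true, as in cleanupArgs, previews which backups would be deleted without removing anything.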

Implementation Reference

  • BackupRestoreHandler class with an execute() dispatcher and four methods: createBackup (creates a timestamped backup with a metadata sidecar file), restoreBackup (restores from a backup after taking a pre-restore safety copy), listBackups (lists all backups for a file path), and cleanupBackups (removes old backups by age or count, with dry_run support)
    // Assumed imports, not shown in this excerpt:
    // import fs from 'node:fs/promises'; import path from 'node:path';
    class BackupRestoreHandler extends BaseHandler {
      async execute(args) {
        const { action, file_path, backup_id, metadata, cleanup_options } = args;
    
        switch (action) {
          case 'create':
            return this.createBackup(file_path, metadata);
          case 'restore':
            return this.restoreBackup(file_path, backup_id);
          case 'list':
            return this.listBackups(file_path);
          case 'cleanup':
            return this.cleanupBackups(file_path, cleanup_options);
          default:
            throw new Error(`Unknown action: ${action}`);
        }
      }
    
      async createBackup(filePath, metadata = {}) {
        if (!filePath) {
          throw new Error('file_path is required for create action');
        }
    
        const content = await this.safeReadFile(filePath);
        const backupId = `backup_${Date.now()}_${Math.random().toString(36).slice(2, 11)}`;
        const backupPath = `${filePath}.${backupId}`;
    
        await fs.writeFile(backupPath, content);
    
        // Store metadata
        const metadataPath = `${backupPath}.meta.json`;
        await fs.writeFile(metadataPath, JSON.stringify({
          ...metadata,
          originalPath: filePath,
          backupId,
          createdAt: new Date().toISOString(),
          size: content.length
        }));
    
        return this.buildSuccessResponse({
          action: 'create',
          backup_id: backupId,
          backup_path: backupPath,
          original_path: filePath,
          size: content.length
        });
      }
    
      async restoreBackup(filePath, backupId) {
        if (!filePath || !backupId) {
          throw new Error('file_path and backup_id are required for restore action');
        }
    
        const backupPath = `${filePath}.${backupId}`;
        const content = await this.safeReadFile(backupPath);
    
        // Create a backup of the current state before restoring
        let preRestoreBackup = `${filePath}.pre_restore_${Date.now()}`;
        try {
          const currentContent = await fs.readFile(filePath, 'utf8');
          await fs.writeFile(preRestoreBackup, currentContent);
        } catch (e) {
          // File doesn't exist yet, so there is no current state to preserve
          preRestoreBackup = null;
        }
    
        await fs.writeFile(filePath, content);
    
        return this.buildSuccessResponse({
          action: 'restore',
          backup_id: backupId,
          restored_to: filePath,
          pre_restore_backup: preRestoreBackup
        });
      }
    
      async listBackups(filePath) {
        const dir = path.dirname(filePath || '.');
        const basename = path.basename(filePath || '');
    
        const files = await fs.readdir(dir);
        const backups = [];
    
        for (const file of files) {
          if (file.startsWith(basename) && file.includes('backup_')) {
            const fullPath = path.join(dir, file);
            const stats = await fs.stat(fullPath);
    
            if (!file.endsWith('.meta.json')) {
              const metaPath = `${fullPath}.meta.json`;
              let metadata = {};
              try {
                metadata = JSON.parse(await fs.readFile(metaPath, 'utf8'));
              } catch (e) {
                // No metadata file
              }
    
              backups.push({
                path: fullPath,
                backup_id: file.replace(`${basename}.`, ''),
                size: stats.size,
                created: stats.mtime,
                metadata
              });
            }
          }
        }
    
        return this.buildSuccessResponse({
          action: 'list',
          file_path: filePath,
          backups: backups.sort((a, b) => b.created - a.created)
        });
      }
    
      async cleanupBackups(filePath, options = {}) {
        const {
          max_age_days = 30,
          max_count_per_file = 10,
          dry_run = false
        } = options;
    
        const listResult = await this.listBackups(filePath);
        const backups = listResult.backups || [];
    
        const now = Date.now();
        const maxAge = max_age_days * 24 * 60 * 60 * 1000;
    
        const toDelete = [];
    
        // Filter by age
        for (const backup of backups) {
          const age = now - new Date(backup.created).getTime();
          if (age > maxAge) {
            toDelete.push({ ...backup, reason: 'age' });
          }
        }
    
        // Keep only max_count most recent
        const remaining = backups.filter(b => !toDelete.find(d => d.path === b.path));
        if (remaining.length > max_count_per_file) {
          const excess = remaining.slice(max_count_per_file);
          for (const backup of excess) {
            toDelete.push({ ...backup, reason: 'count' });
          }
        }
    
        if (!dry_run) {
          for (const backup of toDelete) {
            try {
              await fs.unlink(backup.path);
              // Also delete metadata file
              try {
                await fs.unlink(`${backup.path}.meta.json`);
              } catch (e) {
                // Metadata file might not exist
              }
            } catch (e) {
              console.error(`Failed to delete ${backup.path}: ${e.message}`);
            }
          }
        }
    
        return this.buildSuccessResponse({
          action: 'cleanup',
          dry_run,
          backups_deleted: toDelete.length,
          deleted: toDelete.map(b => ({ path: b.path, reason: b.reason }))
        });
      }
    }
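The age/count selection policy inside cleanupBackups can be isolated as a pure function. A sketch (selectBackupsToDelete is a name invented here, not part of the shipped code):

```javascript
// Sketch of the cleanup selection policy: mark backups older than
// max_age_days, then mark anything beyond max_count_per_file among the
// remainder. Expects backups sorted newest-first, as listBackups returns them.
function selectBackupsToDelete(
  backups,
  { max_age_days = 30, max_count_per_file = 10 } = {},
  now = Date.now()
) {
  const maxAge = max_age_days * 24 * 60 * 60 * 1000;
  const toDelete = [];

  // Pass 1: expired by age
  for (const b of backups) {
    if (now - new Date(b.created).getTime() > maxAge) {
      toDelete.push({ ...b, reason: 'age' });
    }
  }

  // Pass 2: of the survivors, keep only the most recent max_count_per_file
  const remaining = backups.filter(b => !toDelete.some(d => d.path === b.path));
  for (const b of remaining.slice(max_count_per_file)) {
    toDelete.push({ ...b, reason: 'count' });
  }
  return toDelete;
}
```

Because the two passes are sequential, a backup is only ever deleted for one reason; the age pass runs first, so the count limit applies to backups that are still within the age window.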
  • Tool definition/schema for 'backup_restore'. Defines actions (create/restore/list/cleanup), file_path, backup_id, metadata (description, tags), and cleanup_options (max_age_days, max_count_per_file, dry_run). Required fields: action.
    {
      name: 'backup_restore',
      description: 'Enhanced backup management - Timestamped backup tracking with metadata, restore capability, and intelligent cleanup. Extends existing backup patterns with enterprise-grade management.',
      handler: 'handleBackupRestore',
      schema: {
        type: 'object',
        properties: {
          action: {
            type: 'string',
            enum: ['create', 'restore', 'list', 'cleanup']
          },
          file_path: { type: 'string' },
          backup_id: { type: 'string' },
          metadata: {
            type: 'object',
            properties: {
              description: { type: 'string' },
              tags: { type: 'array', items: { type: 'string' } }
            }
          },
          cleanup_options: {
            type: 'object',
            properties: {
              max_age_days: { type: 'number', default: 30 },
              max_count_per_file: { type: 'number', default: 10 },
              dry_run: { type: 'boolean', default: false }
            }
          }
        },
        required: ['action']
      }
    },
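A caller can pre-validate arguments against this schema before dispatch. A minimal hand-rolled check (a sketch only; the server's actual validation layer is not shown on this page, and a real implementation would use a JSON Schema validator such as ajv):

```javascript
// Minimal pre-flight check mirroring the schema: action is required and must
// be one of the four enum values. The per-action file_path/backup_id rules
// are enforced by the handler; this sketch surfaces them early.
const BACKUP_RESTORE_ACTIONS = ['create', 'restore', 'list', 'cleanup'];

function checkBackupRestoreArgs(args) {
  const errors = [];
  if (typeof args.action !== 'string' || !BACKUP_RESTORE_ACTIONS.includes(args.action)) {
    errors.push(`action must be one of: ${BACKUP_RESTORE_ACTIONS.join(', ')}`);
  }
  if (args.action === 'create' && !args.file_path) {
    errors.push('file_path is required for create');
  }
  if (args.action === 'restore' && (!args.file_path || !args.backup_id)) {
    errors.push('file_path and backup_id are required for restore');
  }
  return errors;
}
```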
  • Handler registry mapping the string 'handleBackupRestore' to the BackupRestoreHandler class, enabling the tool to be dispatched by name.
    const HANDLER_REGISTRY = {
      // Original handlers
      'handleReview': ReviewHandler,
      'handleAsk': AskHandler,
      'handleWriteFilesAtomic': WriteFilesAtomicHandler,
      'handleBackupRestore': BackupRestoreHandler,
      'handleCheckBackendHealth': HealthHandler,
      'handleValidateChanges': ValidateChangesHandler,
      'handleManageConversation': ManageConversationHandler,
      'handleGetAnalytics': GetAnalyticsHandler,
      'handleSpawnSubagent': SubagentHandler,
      'handleParallelAgents': ParallelAgentsHandler,
      'handleCouncil': CouncilHandler,
      'handleExplore': ExploreHandler,
    
      // SAB v2.0: Local LLM File Operations
      'handleAnalyzeFile': AnalyzeFileHandler,
      'handleGenerateFile': GenerateFileHandler,
      'handleModifyFile': ModifyFileHandler,
      'handleBatchAnalyze': BatchAnalyzeHandler,
      'handleBatchModify': BatchModifyHandler,
      'handleRefactor': RefactorHandler,
    
      // SAB v2.0: Dual Iterate (Internal generate->review->fix loop)
      'handleDualIterate': DualIterateHandler
    };
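Dispatch by handler name then reduces to a registry lookup plus instantiation. A sketch (FakeHandler and this REGISTRY are stand-ins invented for illustration, since the real handler classes live elsewhere):

```javascript
// Sketch of name-based dispatch against a registry like HANDLER_REGISTRY.
// FakeHandler stands in for a real handler class such as BackupRestoreHandler.
class FakeHandler {
  async execute(args) {
    return { ok: true, action: args.action };
  }
}

const REGISTRY = {
  handleBackupRestore: FakeHandler
};

async function dispatch(handlerName, args) {
  const HandlerClass = REGISTRY[handlerName];
  if (!HandlerClass) {
    throw new Error(`Unknown handler: ${handlerName}`);
  }
  // Instantiate per call and delegate to the handler's execute() dispatcher
  return new HandlerClass().execute(args);
}
```

Mapping strings to classes keeps the tool definitions serializable: the schema block above only needs to carry the handler's name, not a function reference.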
  • Import of BackupRestoreHandler from file-handlers.js into the handler registry.
    import {
      WriteFilesAtomicHandler,
      BackupRestoreHandler
  • Named export of BackupRestoreHandler from the handlers index module.
    BackupRestoreHandler,
    HealthHandler,
    ValidateChangesHandler,
    ManageConversationHandler,
    GetAnalyticsHandler,
    SubagentHandler,
    ParallelAgentsHandler,
    CouncilHandler,
    
    // SAB v2.0: Local LLM File Operations
    AnalyzeFileHandler,
    GenerateFileHandler,
    ModifyFileHandler,
    BatchAnalyzeHandler,
Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Without annotations, the description is the sole source of behavioral transparency. It mentions 'intelligent cleanup' but does not explain what that entails, nor does it disclose potential side effects, permissions, or error handling.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is short but includes vague marketing language such as 'enterprise-grade management' that adds little value. It front-loads the core purpose but wastes words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of the input schema with nested objects and no output schema, the description lacks sufficient detail on how each action works, what the tool returns, and how parameters interact.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 1/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description adds no meaning to any of the five parameters. With 0% schema description coverage, the description fails to clarify the purpose or usage of parameters like action, file_path, backup_id, metadata, or cleanup_options.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly identifies the tool as backup management with specific capabilities like timestamped tracking, metadata, restore, and cleanup. It distinguishes itself from sibling tools which are primarily file analysis and modification tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance on when to use this tool versus alternatives. The description does not mention prerequisites, limitations, or conditions under which this tool is appropriate.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
