
AI Agent Template MCP Server

by bswa006

setup_persistence_automation

Automates context updates with monitoring and validation to maintain accurate project information for AI agents.

Instructions

Set up automated context updates with monitoring and validation

Input Schema

| Name | Required | Description |
| --- | --- | --- |
| projectPath | Yes | Path to the project directory |
| analysisId | No | Analysis ID from analyze_codebase_deeply |
| updateSchedule | Yes | How often to update context |
| gitHooks | No | Install git hooks for validation |
| monitoring | No | Enable context monitoring |
| notifications | No | Notification settings |
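For concreteness, a hypothetical invocation of the tool might pass arguments shaped like the following. The interface and the validation helper below are illustrative only, not part of the server; the Slack URL is a placeholder.

```typescript
// Argument shape mirroring the input schema above (illustrative types).
interface SetupPersistenceArgs {
  projectPath: string;
  updateSchedule: 'daily' | 'weekly' | 'on-change' | 'manual';
  analysisId?: string;
  gitHooks?: boolean;
  monitoring?: boolean;
  notifications?: { email?: string; slack?: string };
}

const args: SetupPersistenceArgs = {
  projectPath: '/path/to/project',
  updateSchedule: 'weekly',
  gitHooks: true,
  monitoring: true,
  notifications: { slack: 'https://hooks.slack.com/services/T000/B000/XXXX' }, // placeholder
};

// Minimal check mirroring the schema's `required` list (hypothetical helper).
function hasRequiredFields(a: Partial<SetupPersistenceArgs>): boolean {
  return typeof a.projectPath === 'string' && typeof a.updateSchedule === 'string';
}

console.log(hasRequiredFields(args)); // true
```

Only `projectPath` and `updateSchedule` are required; everything else defaults to the tool's built-in behavior.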

Implementation Reference

  • Main handler function that implements the tool logic: creates update/validation scripts, monitoring configs, git hooks, cron jobs, GitHub Actions workflows, and setup instructions for automated codebase context persistence.
    export async function setupPersistenceAutomation(
      config: PersistenceConfig
    ): Promise<PersistenceResult> {
      const result: PersistenceResult = {
        success: false,
        filesCreated: [],
        message: '',
        setupInstructions: [],
      };
    
      try {
        // Check if analysis has been completed
        const analysisId = config.analysisId || global.latestAnalysisId;
        if (!analysisId || !global.codebaseAnalysis?.[analysisId]) {
          throw new Error('Codebase analysis must be completed first. Run analyze_codebase_deeply tool.');
        }
    
        const analysis: DeepAnalysisResult = global.codebaseAnalysis[analysisId];
        
        // Create scripts directory
        const scriptsDir = join(config.projectPath, 'scripts');
        if (!existsSync(scriptsDir)) {
          mkdirSync(scriptsDir, { recursive: true });
        }
        
        // Create update script
        const updateScript = createUpdateScript(config, analysis);
        const updateScriptPath = join(scriptsDir, 'update-context.sh');
        writeFileSync(updateScriptPath, updateScript);
        chmodSync(updateScriptPath, '755');
        result.filesCreated.push(updateScriptPath);
        
        // Create validation script
        const validationScript = createValidationScript(config);
        const validationScriptPath = join(scriptsDir, 'validate-context.sh');
        writeFileSync(validationScriptPath, validationScript);
        chmodSync(validationScriptPath, '755');
        result.filesCreated.push(validationScriptPath);
        
        // Create monitoring configuration
        if (config.monitoring) {
          const monitoringConfig = createMonitoringConfig(config, analysis);
          const monitoringPath = join(config.projectPath, '.context-monitoring.yaml');
          writeFileSync(monitoringPath, yaml.dump(monitoringConfig));
          result.filesCreated.push(monitoringPath);
        }
        
        // Set up git hooks if requested
        if (config.gitHooks) {
          await setupGitHooks(config, result);
        }
        
        // Create scheduled update configuration
        if (config.updateSchedule !== 'manual') {
          const scheduleConfig = createScheduleConfig(config);
          const schedulePath = join(config.projectPath, '.context-schedule.yaml');
          writeFileSync(schedulePath, yaml.dump(scheduleConfig));
          result.filesCreated.push(schedulePath);
          
          // Create cron job script
          const cronScript = createCronScript(config);
          const cronPath = join(scriptsDir, 'setup-cron.sh');
          writeFileSync(cronPath, cronScript);
          chmodSync(cronPath, '755');
          result.filesCreated.push(cronPath);
        }
        
        // Create GitHub Actions workflow if in a git repo
        const gitDir = join(config.projectPath, '.git');
        if (existsSync(gitDir)) {
          const workflowsDir = join(config.projectPath, '.github', 'workflows');
          if (!existsSync(workflowsDir)) {
            mkdirSync(workflowsDir, { recursive: true });
          }
          
          const workflow = createGitHubActionsWorkflow(config);
          const workflowPath = join(workflowsDir, 'update-context.yml');
          writeFileSync(workflowPath, workflow);
          result.filesCreated.push(workflowPath);
        }
        
        // Generate setup instructions
        result.setupInstructions = generateSetupInstructions(config, result.filesCreated);
        
        result.success = true;
        result.message = `Persistence automation setup complete. Schedule: ${config.updateSchedule}`;
      } catch (error) {
        result.success = false;
        result.message = `Failed to setup persistence automation: ${error}`;
      }
    
      return result;
    }
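The guard at the top of the handler resolves an analysis id from either the explicit parameter or the most recent run before touching the cache. A self-contained sketch of that resolution logic — the cache type here is an assumption inferred from the globals the handler reads:

```typescript
// Assumed shape of the global analysis cache (simplified for illustration).
type AnalysisCache = Record<string, { projectPath: string }>;

// Resolve the id the same way the handler does: explicit id wins, then the
// latest run; fail fast if neither maps to a completed analysis.
function resolveAnalysisId(
  explicitId: string | undefined,
  latestId: string | undefined,
  cache: AnalysisCache
): string {
  const id = explicitId ?? latestId;
  if (!id || !cache[id]) {
    throw new Error('Codebase analysis must be completed first. Run analyze_codebase_deeply tool.');
  }
  return id;
}

const cache: AnalysisCache = { a1: { projectPath: '/repo' } };
console.log(resolveAnalysisId(undefined, 'a1', cache)); // a1
```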
  • Input schema definition for the tool, specifying parameters, types, enums, and required fields.
    {
      name: 'setup_persistence_automation',
      description: 'Set up automated context updates with monitoring and validation',
      inputSchema: {
        type: 'object',
        properties: {
          projectPath: {
            type: 'string',
            description: 'Path to the project directory',
          },
          analysisId: {
            type: 'string',
            description: 'Analysis ID from analyze_codebase_deeply',
          },
          updateSchedule: {
            type: 'string',
            enum: ['daily', 'weekly', 'on-change', 'manual'],
            description: 'How often to update context',
          },
          gitHooks: {
            type: 'boolean',
            description: 'Install git hooks for validation',
          },
          monitoring: {
            type: 'boolean',
            description: 'Enable context monitoring',
          },
          notifications: {
            type: 'object',
            properties: {
              email: { type: 'string' },
              slack: { type: 'string' },
            },
            description: 'Notification settings',
          },
        },
        required: ['projectPath', 'updateSchedule'],
      },
    },
  • Registration and dispatch in the main tool switch statement: parses arguments with Zod matching the schema and calls the handler function.
    case 'setup_persistence_automation': {
      const params = z.object({
        projectPath: z.string(),
        analysisId: z.string().optional(),
        updateSchedule: z.enum(['daily', 'weekly', 'on-change', 'manual']),
        gitHooks: z.boolean().optional(),
        monitoring: z.boolean().optional(),
        notifications: z.object({
          email: z.string().optional(),
          slack: z.string().optional(),
        }).optional(),
      }).parse(args);
      
      const result = await setupPersistenceAutomation(params);
      return {
        content: [
          {
            type: 'text',
            text: JSON.stringify(result, null, 2),
          },
        ],
      };
    }
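Because the dispatcher returns the result as a serialized JSON text block, a client recovers the structured `PersistenceResult` by parsing that text. A minimal round-trip sketch — the file name and message below are illustrative values, not real output:

```typescript
interface PersistenceResult {
  success: boolean;
  filesCreated: string[];
  message: string;
  setupInstructions: string[];
}

// Illustrative result, serialized the way the dispatch case above does.
const result: PersistenceResult = {
  success: true,
  filesCreated: ['scripts/update-context.sh'],
  message: 'Persistence automation setup complete. Schedule: weekly',
  setupInstructions: [],
};
const content = { type: 'text', text: JSON.stringify(result, null, 2) };

// Client side: parse the text block back into the structured result.
const parsed: PersistenceResult = JSON.parse(content.text);
console.log(parsed.filesCreated.length); // 1
```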
  • Key helper function that generates the main update-context.sh bash script used for automated context updates.
    function createUpdateScript(config: PersistenceConfig, analysis: DeepAnalysisResult): string {
      return `#!/bin/bash
    # Context Update Script
    # Generated: ${new Date().toISOString()}
    
    set -e
    
    SCRIPT_DIR="$( cd "$( dirname "\${BASH_SOURCE[0]}" )" && pwd )"
    PROJECT_ROOT="$(dirname "$SCRIPT_DIR")"
    
    echo "🔄 Starting context update..."
    
    # Function to check if context needs update
    needs_update() {
      local context_file="$1"
      local threshold_days="\${2:-7}"
      
      if [ ! -f "$context_file" ]; then
        return 0 # Needs update if doesn't exist
      fi
      
      # Check file age
      if [[ "$OSTYPE" == "darwin"* ]]; then
        # macOS
        file_age=$(( ($(date +%s) - $(stat -f %m "$context_file")) / 86400 ))
      else
        # Linux
        file_age=$(( ($(date +%s) - $(stat -c %Y "$context_file")) / 86400 ))
      fi
      
      if [ $file_age -gt $threshold_days ]; then
        return 0 # Needs update if older than threshold
      fi
      
      return 1 # No update needed
    }
    
    # Check if MCP server is available
    check_mcp_server() {
      if ! command -v npx &> /dev/null; then
        echo "❌ npx not found. Please install Node.js"
        exit 1
      fi
      
      echo "✅ MCP server available"
    }
    
    # Run analysis
    run_analysis() {
      echo "🔍 Running codebase analysis..."
      
      # This would normally call the MCP tool
      # For now, we'll create a marker file
      touch "$PROJECT_ROOT/.last-analysis"
      
      echo "✅ Analysis complete"
    }
    
    # Update context files
    update_contexts() {
      local force_update="\${1:-false}"
      
      # List of context files to check
      declare -a context_files=(
        "agent-context/CODEBASE-CONTEXT.md"
        "agent-context/PROJECT-TEMPLATE.md"
        "agent-context/conversation-starters.md"
        "agent-context/minimal-context.md"
      )
      
      local updates_needed=false
      
      for file in "\${context_files[@]}"; do
        if [ "$force_update" = "true" ] || needs_update "$PROJECT_ROOT/$file"; then
          echo "📝 Updating $file..."
          updates_needed=true
        fi
      done
      
      if [ "$updates_needed" = "true" ]; then
        run_analysis
        echo "✅ All context files updated"
      else
        echo "✅ All context files are up to date"
      fi
    }
    
    # Validate context files
    validate_contexts() {
      echo "🔍 Validating context files..."
      
      "$SCRIPT_DIR/validate-context.sh"
      
      if [ $? -eq 0 ]; then
        echo "✅ All context files are valid"
      else
        echo "❌ Context validation failed"
        exit 1
      fi
    }
    
    # Send notification
    send_notification() {
      local message="$1"
      ${config.notifications?.email ? `
      
      # Email notification
      if command -v mail &> /dev/null; then
        echo "$message" | mail -s "Context Update: $PROJECT_ROOT" "${config.notifications.email}"
      fi` : ''}
      ${config.notifications?.slack ? `
      
      # Slack notification
      if [ -n "$SLACK_WEBHOOK" ]; then
        curl -X POST -H 'Content-type: application/json' \\
          --data "{\\"text\\":\\"$message\\"}" \\
          "$SLACK_WEBHOOK"
      fi` : ''}
      
      echo "$message"
    }
    
    # Main execution
    main() {
      local force_update=false
      
      # Parse arguments
      while [[ $# -gt 0 ]]; do
        case $1 in
          --force|-f)
            force_update=true
            shift
            ;;
          --help|-h)
            echo "Usage: $0 [--force]"
            echo "  --force, -f  Force update all context files"
            exit 0
            ;;
          *)
            echo "Unknown option: $1"
            exit 1
            ;;
        esac
      done
      
      cd "$PROJECT_ROOT"
      
      # Check prerequisites
      check_mcp_server
      
      # Update contexts
      update_contexts "$force_update"
      
      # Validate
      validate_contexts
      
      # Record update
      echo "$(date -u +"%Y-%m-%dT%H:%M:%SZ")" > "$PROJECT_ROOT/.last-context-update"
      
      # Send notification
      send_notification "✅ Context update completed successfully for ${analysis.projectPath.split('/').pop()}"
    }
    
    # Run main function
    main "$@"`;
    }
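For comparison, the cross-platform age check the generated script implements with `stat -f %m` (macOS) versus `stat -c %Y` (Linux) collapses to a single branch-free version in Node, where `Stats.mtimeMs` is portable. This is a sketch, not part of the generated script:

```typescript
import { statSync, existsSync } from 'node:fs';

// Returns true when the context file is missing or older than `thresholdDays`,
// mirroring the needs_update() function in the generated bash script.
function needsUpdate(contextFile: string, thresholdDays = 7): boolean {
  if (!existsSync(contextFile)) return true; // needs update if it doesn't exist
  const ageDays = (Date.now() - statSync(contextFile).mtimeMs) / 86_400_000;
  return ageDays > thresholdDays;
}

console.log(needsUpdate('/nonexistent/context.md')); // true
```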
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions 'monitoring and validation' but doesn't explain what these entail operationally—such as what gets monitored, how validation works, whether this is a one-time setup or ongoing process, or potential side effects like modifying project files or requiring specific permissions. This leaves significant gaps for a tool with 6 parameters and complex functionality.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the core purpose without unnecessary elaboration. Every word contributes directly to understanding the tool's function, making it appropriately concise for its complexity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (6 parameters, nested objects, no output schema, and no annotations), the description is inadequate. It doesn't explain the tool's behavior, output expectations, or how it integrates with the workflow (e.g., dependency on 'analyze_codebase_deeply'). For a setup automation tool with multiple configuration options, more context is needed to guide effective use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all parameters thoroughly. The description adds no additional meaning beyond what's in the schema—it doesn't clarify relationships between parameters (e.g., how 'gitHooks' relates to 'validation'), nor does it provide usage examples or constraints. This meets the baseline for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Set up automated context updates') and the features involved ('with monitoring and validation'), providing a specific verb+resource combination. However, it doesn't explicitly distinguish this tool from potential siblings like 'complete_setup_workflow' or 'initialize_agent_workspace' that might also involve setup processes.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites (e.g., requiring an analysis from 'analyze_codebase_deeply' as suggested by the 'analysisId' parameter), nor does it differentiate from sibling tools like 'complete_setup_workflow' that might handle broader setup tasks.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
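Pulling the review's points together, a revised description could disclose the prerequisite, the files written, and the schedule semantics in one or two sentences. The wording below is purely illustrative, not the server's actual description:

```typescript
// Illustrative rewrite only; not the server's published description.
const revisedDescription =
  'Set up automated context updates with monitoring and validation. ' +
  'Requires a prior analyze_codebase_deeply run (or pass analysisId). ' +
  'Writes update/validation scripts, optional git hooks, cron setup, and a ' +
  'GitHub Actions workflow into the project directory; set updateSchedule ' +
  "to 'manual' to skip scheduled updates.";

console.log(revisedDescription.includes('analyze_codebase_deeply')); // true
```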

