---
description: SFCC Job Framework patterns and best practices
alwaysApply: false
---
# SFCC Job Framework Development
Use this rule when creating custom jobs in the SFCC Job Framework.
## Mandatory MCP Tools Sequence
**BEFORE writing ANY job code:**
1. `mcp_sfcc-dev_get_best_practice_guide` with guideName: "job_framework"
2. `mcp_sfcc-dev_search_best_practices` with query: "performance"
3. `mcp_sfcc-dev_search_best_practices` with query: "security"
4. `mcp_sfcc-dev_search_sfcc_classes` with query: relevant business domain
## MCP-Guided Job Development Process
### Step 1: Get Job Framework Best Practices
```
Use: mcp_sfcc-dev_get_best_practice_guide with guideName: "job_framework"
Purpose: Get comprehensive job development patterns, chunking strategies, and performance guidelines
```
### Step 2: Performance Optimization Patterns
```
Use: mcp_sfcc-dev_search_best_practices with query: "performance"
Purpose: Get memory management, transaction handling, and resource optimization patterns
```
### Step 3: Security Implementation
```
Use: mcp_sfcc-dev_search_best_practices with query: "security"
Purpose: Get secure job execution and data handling patterns
```
### Step 4: SFCC API Research
```
Use: mcp_sfcc-dev_search_sfcc_classes with query: [relevant domain]
Use: mcp_sfcc-dev_get_sfcc_class_info with className: [job-related classes]
Purpose: Understand available job framework APIs and data access patterns
```
## Core Concepts from MCP Job Framework Guide
### Job Development Paradigms
SFCC offers two distinct development models for custom jobs:
| Aspect | Task-Oriented ("Normal") | Chunk-Oriented |
|--------|-------------------------|----------------|
| **Best For** | Simple, monolithic tasks; quick operations | Large-scale data processing |
| **Data Volume** | Low (prone to timeouts with large datasets) | High (designed for massive datasets) |
| **Progress Monitoring** | Limited (running or finished) | Granular (updated per chunk) |
| **Transaction Control** | Typically one transaction | Fine-grained per chunk |
| **Code Complexity** | Low (single main function) | Moderate (callback functions) |
| **Resumability** | Difficult (requires full restart) | Easier (failures isolated to chunks) |
## MCP-Enhanced Task-Oriented Job Pattern
```javascript
'use strict';
/**
 * Task-Oriented Job: [Job Name]
 * Purpose: [Job functionality]
 *
 * Implementation based on:
 * - mcp_sfcc-dev_get_best_practice_guide with guideName: "job_framework"
 * - mcp_sfcc-dev_search_best_practices with query: "performance"
 * - mcp_sfcc-dev_search_best_practices with query: "security"
 */
var Status = require('dw/system/Status');
var Logger = require('dw/system/Logger').getLogger('jobs', 'JobName');
var Transaction = require('dw/system/Transaction');
/**
 * Main job execution function
 * Pattern from MCP job framework guide
 */
function execute(parameters, stepExecution) {
    try {
        Logger.info('Starting job execution with parameters: {0}', JSON.stringify(parameters));
        // Input validation (patterns from MCP security guide)
        var validationResult = validateJobParameters(parameters);
        if (!validationResult.valid) {
            Logger.error('Job parameter validation failed: {0}', validationResult.error);
            return new Status(Status.ERROR, 'INVALID_PARAMETERS', validationResult.error);
        }
        // Main job logic here (use SFCC classes discovered via MCP)
        var result = performJobOperation(parameters);
        if (result.success) {
            Logger.info('Job completed successfully. Processed: {0} items', result.processedCount);
            return new Status(Status.OK, 'SUCCESS', 'Job completed successfully');
        } else {
            Logger.error('Job failed: {0}', result.error);
            return new Status(Status.ERROR, 'JOB_FAILED', result.error);
        }
    } catch (e) {
        Logger.error('Unexpected error in job execution: {0}', e.message);
        Logger.debug('Job error stack trace: {0}', e.stack);
        return new Status(Status.ERROR, 'UNEXPECTED_ERROR', 'Job execution failed');
    }
}
/**
 * Validate job parameters based on MCP security patterns
 */
function validateJobParameters(parameters) {
    // Implement validation patterns from:
    // mcp_sfcc-dev_search_best_practices with query: "security"
    try {
        if (!parameters) {
            return { valid: false, error: 'Parameters object is required' };
        }
        // Add specific parameter validations based on job requirements
        return { valid: true };
    } catch (e) {
        return { valid: false, error: 'Parameter validation error: ' + e.message };
    }
}
/**
 * Main job operation implementation
 * Follow performance patterns from MCP performance guide
 */
function performJobOperation(parameters) {
    try {
        var processedCount = 0;
        // Keep the transaction scoped to the actual data modification (MCP performance patterns)
        Transaction.wrap(function () {
            // performDataOperation() is a placeholder: replace it with the job's
            // real logic and return the number of items processed
            processedCount = performDataOperation();
        });
        return { success: true, processedCount: processedCount };
    } catch (e) {
        Logger.error('Error in job operation: {0}', e.message);
        return { success: false, error: e.message };
    }
}
module.exports.execute = execute;
```
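A job step only appears in Business Manager once it is registered in the cartridge's `steptypes.json`. A minimal sketch for the task-oriented pattern above (the type ID, module path, and descriptions are placeholders to adapt):
```json
{
    "step-types": {
        "script-module-step": [
            {
                "@type-id": "custom.MyTaskJob",
                "@supports-site-context": "true",
                "@supports-organization-context": "false",
                "description": "Task-oriented job step",
                "module": "app_custom/cartridge/scripts/jobs/myTaskJob.js",
                "function": "execute",
                "transactional": "false",
                "timeout-in-seconds": "900",
                "status-codes": {
                    "status": [
                        { "@code": "OK", "description": "Step completed" },
                        { "@code": "ERROR", "description": "Step failed" }
                    ]
                }
            }
        ]
    }
}
```
The `function` attribute must match the exported `execute` function; since the script wraps its own `Transaction.wrap`, `transactional` stays `"false"` here.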
## MCP-Enhanced Chunk-Oriented Job Pattern
```javascript
'use strict';
/**
 * Chunk-Oriented Job: [Job Name]
 * Purpose: [Large-scale data processing]
 *
 * Implementation based on:
 * - mcp_sfcc-dev_get_best_practice_guide with guideName: "job_framework"
 * - MCP performance and security patterns
 */
var Status = require('dw/system/Status');
var Logger = require('dw/system/Logger').getLogger('jobs', 'ChunkJob');
// Module-scoped iterator shared across the step lifecycle callbacks
var itemIterator = null;
/**
 * Open resources before the step starts
 * Pattern from MCP job framework guide
 */
function beforeStep(parameters, stepExecution) {
    // openItemIterator() is a placeholder: open a SeekableIterator or similar
    // data source here (use SFCC classes discovered via MCP research)
    itemIterator = openItemIterator(parameters);
}
/**
 * Get total count for progress tracking
 * Pattern from MCP job framework guide
 */
function getTotalCount(parameters, stepExecution) {
    try {
        // Get total count for chunking (use SFCC classes from MCP research)
        var totalCount = calculateTotalItems(parameters);
        Logger.info('Total items to process: {0}', totalCount);
        return totalCount;
    } catch (e) {
        Logger.error('Error getting total count: {0}', e.message);
        throw e; // Fail the step rather than silently processing zero items
    }
}
/**
 * Read a single item; the framework calls read() repeatedly to fill each chunk
 * (chunk size is configured via "chunk-size" in steptypes.json, not in code)
 * Performance patterns from MCP performance guide
 */
function read(parameters, stepExecution) {
    try {
        // Use efficient data access patterns (MCP performance guide)
        if (itemIterator && itemIterator.hasNext()) {
            return itemIterator.next();
        }
        return null; // End of data
    } catch (e) {
        Logger.error('Error reading item: {0}', e.message);
        return null;
    }
}
/**
 * Process a single item in the chunk
 * Returning null filters the item out of the list passed to write()
 * Security and performance patterns from MCP guides
 */
function process(item, parameters, stepExecution) {
    try {
        // Validate item (security patterns from MCP)
        if (!item || !validateItem(item)) {
            Logger.warn('Invalid item skipped: {0}', item);
            return null;
        }
        // Process item with error handling
        var result = processItem(item, parameters);
        if (result.success) {
            Logger.debug('Processed item: {0}', item.ID || item.id);
            return result.data;
        } else {
            Logger.warn('Failed to process item {0}: {1}', item.ID || item.id, result.error);
            return null;
        }
    } catch (e) {
        Logger.error('Error processing item {0}: {1}', item.ID || item.id, e.message);
        return null;
    }
}
/**
 * Write processed data (items is a dw.util.List of the values returned by process)
 * Transaction patterns from MCP performance guide
 */
function write(items, parameters, stepExecution) {
    try {
        if (!items || items.empty) {
            return;
        }
        var Transaction = require('dw/system/Transaction');
        // Use one transaction per chunk (MCP performance pattern)
        Transaction.wrap(function () {
            items.toArray().forEach(function (item) {
                if (item) {
                    writeItem(item, parameters);
                }
            });
        });
        Logger.info('Wrote chunk: {0} items', items.length);
    } catch (e) {
        Logger.error('Error writing chunk: {0}', e.message);
        throw e; // Re-throw so the framework marks the chunk as failed
    }
}
/**
 * Close resources after the step finishes (MCP resource cleanup pattern)
 */
function afterStep(success, parameters, stepExecution) {
    if (itemIterator && itemIterator.close) {
        itemIterator.close(); // SeekableIterators must be closed to release resources
    }
}
module.exports.beforeStep = beforeStep;
module.exports.getTotalCount = getTotalCount;
module.exports.read = read;
module.exports.process = process;
module.exports.write = write;
module.exports.afterStep = afterStep;
```
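The chunk-oriented callbacks are wired up the same way, via a `chunk-script-module-step` entry in `steptypes.json`. A minimal sketch (IDs and paths are placeholders); note that the chunk size lives here, not in a job parameter:
```json
{
    "step-types": {
        "chunk-script-module-step": [
            {
                "@type-id": "custom.MyChunkJob",
                "@supports-site-context": "true",
                "@supports-organization-context": "false",
                "description": "Chunk-oriented job step",
                "module": "app_custom/cartridge/scripts/jobs/myChunkJob.js",
                "before-step-function": "beforeStep",
                "total-count-function": "getTotalCount",
                "read-function": "read",
                "process-function": "process",
                "write-function": "write",
                "after-step-function": "afterStep",
                "chunk-size": 100,
                "transactional": "false"
            }
        ]
    }
}
```
Setting `transactional` to `"true"` would instead let the framework wrap each chunk in a transaction automatically, making the explicit `Transaction.wrap` in `write()` unnecessary.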
## Job Development Checklist (MCP-Verified)
Before implementing jobs, verify with MCP:
- [ ] `mcp_sfcc-dev_get_best_practice_guide` with guideName: "job_framework" - Get implementation patterns
- [ ] `mcp_sfcc-dev_search_best_practices` with query: "performance" - Performance guidelines
- [ ] `mcp_sfcc-dev_search_best_practices` with query: "security" - Security requirements
- [ ] Choose appropriate job model (task vs. chunk-oriented) based on MCP guidance
Implementation verification:
- [ ] Proper error handling and logging
- [ ] Input parameter validation
- [ ] Appropriate transaction management
- [ ] Memory-efficient data processing
- [ ] Progress tracking for chunk-oriented jobs
- [ ] Resource cleanup (close iterators, etc.)
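On the last point: system object queries such as `ProductMgr.queryAllSiteProducts()` return a `SeekableIterator` that must be closed explicitly. A minimal cleanup sketch for a task-oriented step (`performDataOperation` mirrors the placeholder used in the pattern above; the product query is illustrative):
```javascript
var ProductMgr = require('dw/catalog/ProductMgr');
function performDataOperation() {
    var products = ProductMgr.queryAllSiteProducts(); // SeekableIterator
    var count = 0;
    try {
        while (products.hasNext()) {
            var product = products.next();
            // ... per-item work goes here ...
            count++;
        }
    } finally {
        products.close(); // Always release the iterator, even on error
    }
    return count;
}
```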
## Quick Reference from MCP Job Guide
### Choosing the Right Job Model
**Use Task-Oriented When:**
- Processing single files or making single API calls
- Quick database updates affecting known small datasets
- Simple configuration or setup tasks
- Progress tracking is not important
**Use Chunk-Oriented When:**
- Processing large datasets (>1000 items)
- Iterating over products, orders, customers, or file rows
- Progress monitoring is required
- Failure resilience is critical
- Transaction control is important
## NEVER Implement Jobs Without MCP
- ❌ Don't choose job model without consulting `mcp_sfcc-dev_get_best_practice_guide`
- ❌ Don't implement without performance patterns - use MCP performance guide
- ❌ Don't skip parameter validation - use MCP security patterns
- ❌ Don't assume job framework APIs - use `mcp_sfcc-dev_search_sfcc_classes`
## 🚀 Job Deployment Troubleshooting
**If custom jobs don't appear in Business Manager after deployment:**
1. **Check Current Code Version:**
   ```
   Use: mcp_sfcc-dev_get_code_versions
   Purpose: Verify which code version is currently active
   ```
2. **Perform Code-Switch Fix:**
   ```
   Use: mcp_sfcc-dev_activate_code_version with versionId: [target_version]
   Purpose: Switch to activate job registrations in Business Manager
   ```
**Common Job Deployment Issues:**
- Jobs missing from Administration > Operations > Jobs → Use code-switch fix
- Job steps not executing → Check job registration in active code version
- Intermittent job visibility → Multiple code versions active, use activation tool
## 🔧 Job Debugging and Log Analysis
**For Job Execution Issues:**
1. **Get Job Log Files:**
   ```
   Use: mcp_sfcc-dev_get_latest_job_log_files
   Purpose: Find recent job executions and available log files
   ```
2. **Search Job Logs by Name:**
   ```
   Use: mcp_sfcc-dev_search_job_logs_by_name with jobName: "YourJobName"
   Purpose: Find specific job logs for targeted debugging
   ```
3. **Get Job Execution Summary:**
   ```
   Use: mcp_sfcc-dev_get_job_execution_summary with jobName: "YourJobName"
   Purpose: Get comprehensive execution details, timing, and status
   ```
4. **Analyze Job Log Entries:**
   ```
   Use: mcp_sfcc-dev_get_job_log_entries with level: "error" (or "warn", "info", "debug", "all")
   Purpose: Review job log entries by severity level
   ```
5. **Search Job Logs for Patterns:**
   ```
   Use: mcp_sfcc-dev_search_job_logs with pattern: "OutOfMemoryError" (or other error patterns)
   Purpose: Find specific error patterns or custom logging messages in job logs
   ```
**Job Debugging Best Practices:**
- Job logs contain all log levels (error, warn, info, debug) in a single file per execution
- Use execution summaries to understand job performance and bottlenecks
- Search for custom Logger statements added in job steps
- Monitor job timing and memory usage through log analysis
- Use pattern searches to track specific data processing flows
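To make those pattern searches dependable, give custom log lines stable, greppable prefixes. A small illustrative sketch (the prefix and category names are arbitrary conventions, not framework requirements):
```javascript
var Logger = require('dw/system/Logger').getLogger('jobs', 'ProductSync');
function logItemSkipped(itemID, reason) {
    // A fixed prefix makes mcp_sfcc-dev_search_job_logs with pattern: "PRODUCT_SYNC_SKIP" reliable
    Logger.warn('PRODUCT_SYNC_SKIP: id={0} reason={1}', itemID, reason);
}
```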