
deployment_readiness

Validate deployment readiness by checking test failures, analyzing deployment history, and blocking unsafe deployments to ensure production stability.

Instructions

Comprehensive deployment readiness validation with test failure tracking, deployment history analysis, and hard blocking for unsafe deployments. Integrates with smart_git_push for deployment gating.

Input Schema

| Name | Required | Description | Default |
|------|----------|-------------|---------|
| operation | Yes | Type of deployment readiness check to perform | — |
| projectPath | No | Path to project directory (defaults to current working directory) | — |
| targetEnvironment | No | Target deployment environment | `production` |
| strictMode | No | Enable strict validation (recommended for production) | `true` |
| allowMockCode | No | Allow mock code in deployment (NOT RECOMMENDED) | `false` |
| productionCodeThreshold | No | Minimum production code quality score (0-100) | `85` |
| mockCodeMaxAllowed | No | Maximum mock code indicators allowed | `0` |
| maxTestFailures | No | Maximum test failures allowed (0 = zero tolerance) | `0` |
| requireTestCoverage | No | Minimum test coverage percentage required | `80` |
| blockOnFailingTests | No | Block deployment if tests are failing | `true` |
| testSuiteRequired | No | Required test suites that must pass | `[]` |
| maxRecentFailures | No | Maximum recent deployment failures allowed | `2` |
| deploymentSuccessThreshold | No | Minimum deployment success rate required (%) | `80` |
| blockOnRecentFailures | No | Block if recent deployments failed | `true` |
| rollbackFrequencyThreshold | No | Maximum rollback frequency allowed (%) | `20` |
| requireAdrCompliance | No | Require ADR compliance validation | `true` |
| integrateTodoTasks | No | Auto-create blocking tasks for issues | `true` |
| updateHealthScoring | No | Update project health scores | `true` |
| triggerSmartGitPush | No | Trigger smart git push validation | `false` |
| emergencyBypass | No | Emergency bypass for critical fixes | `false` |
| businessJustification | No | Business justification for overrides (required for emergency_override) | — |
| approvalRequired | No | Require approval for overrides | `true` |
| enableMemoryIntegration | No | Enable memory entity storage for deployment assessment tracking and historical analysis | `true` |
| migrateExistingHistory | No | Migrate existing JSON-based deployment history to memory entities | `false` |
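A typical call might pass an argument object like the following sketch (field names come from the schema above; the values and the project path are illustrative, not from the source):

```typescript
// Hypothetical arguments for a strict production audit. All field names match
// the input schema above; values are illustrative examples only.
const deploymentReadinessArgs = {
  operation: 'full_audit',
  projectPath: '/workspace/my-service', // hypothetical path
  targetEnvironment: 'production',
  strictMode: true,
  maxTestFailures: 0, // zero tolerance
  requireTestCoverage: 80, // percent
  blockOnFailingTests: true,
  deploymentSuccessThreshold: 80, // percent
  enableMemoryIntegration: true,
};

console.log(deploymentReadinessArgs.operation); // full_audit
```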

Implementation Reference

  • Main exported handler for the 'deployment_readiness' tool. It validates the input arguments, orchestrates the requested assessment (test execution, deployment history analysis, TreeSitter code quality gates, ADR compliance, environment research), stores results in memory for pattern recognition, and generates a comprehensive MCP response with blockers and recommendations.
```typescript
export async function deploymentReadiness(args: any): Promise<any> {
  try {
    const validatedArgs = DeploymentReadinessSchema.parse(args);

    // Initialize paths and cache
    const projectPath = validatedArgs.projectPath || process.cwd();
    const projectName = basename(projectPath);
    const cacheDir = join(os.tmpdir(), projectName, 'cache');
    const deploymentHistoryPath = join(cacheDir, 'deployment-history.json');
    const readinessCachePath = join(cacheDir, 'deployment-readiness-cache.json');

    // Ensure cache directory exists
    if (!existsSync(cacheDir)) {
      mkdirSync(cacheDir, { recursive: true });
    }

    // Initialize memory manager if enabled
    let memoryManager: DeploymentMemoryManager | null = null;
    if (validatedArgs.enableMemoryIntegration) {
      memoryManager = new DeploymentMemoryManager();
      await memoryManager.initialize();

      // Migrate existing history if requested
      if (validatedArgs.migrateExistingHistory) {
        await memoryManager.migrateExistingHistory(deploymentHistoryPath);
      }
    }

    let result: DeploymentReadinessResult;
    switch (validatedArgs.operation) {
      case 'test_validation':
        result = await performTestValidation(validatedArgs, projectPath);
        break;
      case 'deployment_history':
        result = await performDeploymentHistoryAnalysis(validatedArgs, deploymentHistoryPath);
        break;
      case 'check_readiness':
      case 'validate_production':
      case 'full_audit':
        result = await performFullAudit(validatedArgs, projectPath, deploymentHistoryPath);
        break;
      case 'emergency_override':
        result = await performEmergencyOverride(validatedArgs, projectPath);
        break;
      default:
        throw new McpAdrError('INVALID_ARGS', `Unknown operation: ${validatedArgs.operation}`);
    }

    // Cache result for performance
    writeFileSync(
      readinessCachePath,
      JSON.stringify(
        {
          timestamp: new Date().toISOString(),
          operation: validatedArgs.operation,
          result,
        },
        null,
        2
      )
    );

    // Memory integration: store assessment and analyze patterns
    let memoryIntegrationInfo = '';
    if (memoryManager) {
      try {
        // Store deployment assessment
        const assessmentId = await memoryManager.storeDeploymentAssessment(
          validatedArgs.targetEnvironment,
          result,
          { projectPath, operation: validatedArgs.operation },
          projectPath
        );

        // Compare with historical patterns
        const historyComparison = await memoryManager.compareWithHistory(
          result,
          validatedArgs.targetEnvironment
        );

        // Analyze deployment patterns
        const patternAnalysis = await memoryManager.analyzeDeploymentPatterns(
          validatedArgs.targetEnvironment
        );

        memoryIntegrationInfo = `
## 🧠 Memory Integration Analysis
- **Assessment Stored**: ✅ Deployment assessment saved (ID: ${assessmentId.substring(0, 8)}...)
- **Environment**: ${validatedArgs.targetEnvironment}
- **Historical Comparison**: ${historyComparison.isImprovement ? '📈 Improvement detected' : '📊 Baseline established'}
${
  historyComparison.insights.length > 0
    ? `### Historical Insights
${historyComparison.insights.map(insight => `- ${insight}`).join('\n')}
`
    : ''
}
${
  patternAnalysis.trends.length > 0
    ? `### Deployment Trends
${patternAnalysis.trends.map(trend => `- **${trend.metric}**: ${trend.trend} (${trend.change > 0 ? '+' : ''}${trend.change})`).join('\n')}
`
    : ''
}
${
  patternAnalysis.recommendations.length > 0
    ? `### Pattern-Based Recommendations
${patternAnalysis.recommendations.map(rec => `- ${rec}`).join('\n')}
`
    : ''
}
${
  patternAnalysis.riskFactors.length > 0
    ? `### Risk Factors Identified
${patternAnalysis.riskFactors.map(risk => `- **${risk.factor}**: ${risk.description} (${risk.severity})`).join('\n')}
`
    : ''
}
`;
      } catch (memoryError) {
        memoryIntegrationInfo = `
## 🧠 Memory Integration Status
- **Status**: ⚠️ Memory integration failed - assessment continued without persistence
- **Error**: ${memoryError instanceof Error ? memoryError.message : 'Unknown error'}
`;
      }
    }

    // Generate enhanced response with memory integration
    const baseResponse = generateDeploymentReadinessResponse(result, validatedArgs);

    // Add memory integration info if available
    if (memoryIntegrationInfo && baseResponse.content?.[0]?.text) {
      baseResponse.content[0].text += memoryIntegrationInfo;
    }

    return baseResponse;
  } catch (error) {
    throw new McpAdrError(
      'DEPLOYMENT_READINESS_ERROR',
      `Deployment readiness check failed: ${jsonSafeError(error)}`
    );
  }
}
```
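The handler caches every assessment under `os.tmpdir()/<projectName>/cache/deployment-readiness-cache.json`. A minimal standalone sketch of that write/read round trip (the `demo-project` name and the result payload are hypothetical; the entry shape mirrors the `writeFileSync` call above):

```typescript
import { mkdirSync, writeFileSync, readFileSync, existsSync } from 'node:fs';
import { join } from 'node:path';
import * as os from 'node:os';

// Hypothetical project name; the real tool derives it via basename(projectPath).
const cacheDir = join(os.tmpdir(), 'demo-project', 'cache');
if (!existsSync(cacheDir)) mkdirSync(cacheDir, { recursive: true });

const cachePath = join(cacheDir, 'deployment-readiness-cache.json');

// Cache entry shape taken from the handler above; result payload is illustrative.
const entry = {
  timestamp: new Date().toISOString(),
  operation: 'check_readiness',
  result: { isDeploymentReady: true, overallScore: 92 },
};
writeFileSync(cachePath, JSON.stringify(entry, null, 2));

const roundTrip = JSON.parse(readFileSync(cachePath, 'utf8'));
console.log(roundTrip.operation); // check_readiness
```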
  • Comprehensive Zod input schema defining all configuration options for the tool including operation type, environment, strict mode, test failure thresholds, deployment history gates, code quality parameters, memory integration, TreeSitter analysis, and research settings.
```typescript
const DeploymentReadinessSchema = z.object({
  operation: z
    .enum([
      'check_readiness', // Full deployment readiness check
      'validate_production', // Production-specific validation
      'test_validation', // Test execution and failure analysis
      'deployment_history', // Deployment history analysis
      'full_audit', // Comprehensive audit (all checks)
      'emergency_override', // Emergency bypass with justification
    ])
    .describe('Operation to perform'),

  // Core Configuration
  projectPath: z.string().optional().describe('Project root path'),
  targetEnvironment: z
    .enum(['staging', 'production', 'integration'])
    .default('production')
    .describe('Target deployment environment'),
  strictMode: z.boolean().default(true).describe('Enable strict validation (recommended)'),

  // Code Quality Gates
  allowMockCode: z
    .boolean()
    .default(false)
    .describe('Allow mock code in deployment (NOT RECOMMENDED)'),
  productionCodeThreshold: z
    .number()
    .default(85)
    .describe('Minimum production code quality score (0-100)'),
  mockCodeMaxAllowed: z.number().default(0).describe('Maximum mock code indicators allowed'),

  // Test Failure Gates
  maxTestFailures: z
    .number()
    .default(0)
    .describe('Maximum test failures allowed (0 = zero tolerance)'),
  requireTestCoverage: z.number().default(80).describe('Minimum test coverage percentage required'),
  blockOnFailingTests: z.boolean().default(true).describe('Block deployment if tests are failing'),
  testSuiteRequired: z
    .array(z.string())
    .default([])
    .describe('Required test suites that must pass'),

  // Deployment History Gates
  maxRecentFailures: z.number().default(2).describe('Maximum recent deployment failures allowed'),
  deploymentSuccessThreshold: z
    .number()
    .default(80)
    .describe('Minimum deployment success rate required (%)'),
  blockOnRecentFailures: z.boolean().default(true).describe('Block if recent deployments failed'),
  rollbackFrequencyThreshold: z
    .number()
    .default(20)
    .describe('Maximum rollback frequency allowed (%)'),

  // Integration Rules
  requireAdrCompliance: z.boolean().default(true).describe('Require ADR compliance validation'),
  integrateTodoTasks: z.boolean().default(true).describe('Auto-create blocking tasks for issues'),
  updateHealthScoring: z.boolean().default(true).describe('Update project health scores'),
  triggerSmartGitPush: z.boolean().default(false).describe('Trigger smart git push validation'),

  // Human Override System
  emergencyBypass: z.boolean().default(false).describe('Emergency bypass for critical fixes'),
  businessJustification: z.string().optional().describe('Business justification for overrides'),
  approvalRequired: z.boolean().default(true).describe('Require approval for overrides'),

  // Memory Integration
  enableMemoryIntegration: z.boolean().default(true).describe('Enable memory entity storage'),
  migrateExistingHistory: z
    .boolean()
    .default(false)
    .describe('Migrate existing deployment history to memory'),

  // Tree-sitter Analysis
  enableTreeSitterAnalysis: z
    .boolean()
    .default(true)
    .describe('Use tree-sitter for enhanced code analysis'),
  treeSitterLanguages: z
    .array(z.string())
    .default(['typescript', 'javascript', 'python', 'yaml', 'hcl'])
    .describe('Languages to analyze with tree-sitter'),

  // Research-Driven Integration
  enableResearchIntegration: z
    .boolean()
    .default(true)
    .describe('Use research-orchestrator to verify environment readiness'),
  researchConfidenceThreshold: z
    .number()
    .default(0.7)
    .describe('Minimum confidence for environment research (0-1)'),
});
```
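The schema supplies a default for nearly every field, so a caller only needs to send `operation`. The effect of `DeploymentReadinessSchema.parse()` on omitted fields can be sketched without zod as a simple merge (`applyDefaults` is a hypothetical stand-in; the default values are copied from the schema above):

```typescript
// Defaults copied from DeploymentReadinessSchema above (subset for brevity).
const SCHEMA_DEFAULTS = {
  targetEnvironment: 'production',
  strictMode: true,
  allowMockCode: false,
  productionCodeThreshold: 85,
  mockCodeMaxAllowed: 0,
  maxTestFailures: 0,
  requireTestCoverage: 80,
  blockOnFailingTests: true,
  maxRecentFailures: 2,
  deploymentSuccessThreshold: 80,
  rollbackFrequencyThreshold: 20,
  emergencyBypass: false,
  enableMemoryIntegration: true,
};

// Hypothetical stand-in for what DeploymentReadinessSchema.parse() does with
// omitted fields: caller-supplied values win, schema defaults fill the rest.
function applyDefaults(args: { operation: string } & Partial<typeof SCHEMA_DEFAULTS>) {
  return { ...SCHEMA_DEFAULTS, ...args };
}

const parsed = applyDefaults({ operation: 'check_readiness', strictMode: false });
console.log(parsed.targetEnvironment, parsed.strictMode); // production false
```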
  • Central tool catalog registration defining metadata, category ('deployment'), complexity ('complex'), token cost estimates, related tools (smart_git_push), keywords, and input schema for dynamic discovery via search_tools meta-tool.
```typescript
TOOL_CATALOG.set('deployment_readiness', {
  name: 'deployment_readiness',
  shortDescription: 'Check deployment readiness',
  fullDescription: 'Validates deployment readiness with zero-tolerance for critical failures.',
  category: 'deployment',
  complexity: 'complex',
  tokenCost: { min: 2000, max: 4000 },
  hasCEMCPDirective: true,
  relatedTools: ['smart_git_push', 'analyze_deployment_progress'],
  keywords: ['deployment', 'readiness', 'validation', 'check'],
  requiresAI: true,
  inputSchema: {
    type: 'object',
    properties: {
      projectPath: { type: 'string' },
      environment: { type: 'string', enum: ['development', 'staging', 'production'] },
      strictMode: { type: 'boolean', default: true },
    },
    required: ['projectPath'],
  },
});
```
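How a `search_tools` meta-tool might match this entry by keyword can be sketched as a simple filter over the catalog. This is an assumption about the discovery mechanism, not the actual implementation; only the entry's `name`, `keywords`, and `relatedTools` fields are taken from the registration above:

```typescript
// Minimal catalog shape for the sketch; mirrors a subset of the entry above.
interface CatalogEntry {
  name: string;
  keywords: string[];
  relatedTools: string[];
}

const TOOL_CATALOG = new Map<string, CatalogEntry>();
TOOL_CATALOG.set('deployment_readiness', {
  name: 'deployment_readiness',
  keywords: ['deployment', 'readiness', 'validation', 'check'],
  relatedTools: ['smart_git_push', 'analyze_deployment_progress'],
});

// Hypothetical keyword matcher: a tool matches when the query appears in its
// name or any of its registered keywords.
function searchTools(query: string): string[] {
  const q = query.toLowerCase();
  return Array.from(TOOL_CATALOG.values())
    .filter(t => t.name.includes(q) || t.keywords.some(k => k.includes(q)))
    .map(t => t.name);
}

console.log(searchTools('readiness')); // [ 'deployment_readiness' ]
```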
  • DeploymentMemoryManager class: key helper for memory integration. Stores deployment assessments as memory entities, migrates history, analyzes patterns/trends, compares with historical data for risk assessment and recommendations.
```typescript
class DeploymentMemoryManager {
  private memoryManager: MemoryEntityManager;
  private logger: EnhancedLogger;

  /**
   * Constructor with optional dependency injection
   * @param deps - Optional dependencies for testing (defaults create real instances)
   */
  constructor(deps: DeploymentMemoryManagerDeps = {}) {
    this.memoryManager = deps.memoryManager ?? new MemoryEntityManager();
    this.logger = deps.logger ?? new EnhancedLogger();
  }

  async initialize(): Promise<void> {
    await this.memoryManager.initialize();
  }

  /**
   * Store deployment assessment as memory entity
   */
  async storeDeploymentAssessment(
    environment: string,
    readinessData: DeploymentReadinessResult,
    validationResults: any,
    projectPath?: string
  ): Promise<string> {
    try {
      const assessmentData = {
        environment: environment as 'development' | 'staging' | 'production' | 'testing',
        readinessScore: readinessData.overallScore / 100, // Convert to 0-1 range
        validationResults: {
          testResults: {
            passed: readinessData.testValidationResult.testSuitesExecuted.reduce(
              (sum, suite) => sum + suite.passedTests,
              0
            ),
            failed: readinessData.testValidationResult.failureCount,
            coverage: readinessData.testValidationResult.coveragePercentage / 100, // Convert to 0-1 range
            criticalFailures: readinessData.testValidationResult.criticalTestFailures.map(
              f => f.testName
            ),
          },
          securityValidation: {
            vulnerabilities: 0, // Default - could be enhanced with actual security scan data
            securityScore: 0.8, // Default - could be enhanced with actual security analysis
            criticalIssues: readinessData.criticalBlockers
              .filter(b => b.category === 'adr_compliance')
              .map(b => b.title),
          },
          performanceValidation: {
            performanceScore: Math.max(0, (readinessData.overallScore - 20) / 80), // Derived from overall score
            bottlenecks: [],
            resourceUtilization: {},
          },
        },
        blockingIssues: [
          ...readinessData.criticalBlockers.map(b => ({
            issue: `${b.title}: ${b.description}`,
            severity: b.severity as 'low' | 'medium' | 'high' | 'critical',
            category: this.mapBlockerCategory(b.category),
            resolution: b.resolutionSteps.join('; '),
            estimatedEffort: b.estimatedResolutionTime,
          })),
          ...readinessData.testFailureBlockers.map(b => ({
            issue: `${b.title}: ${b.description}`,
            severity: b.severity as 'low' | 'medium' | 'high' | 'critical',
            category: 'test' as const,
            resolution: b.resolutionSteps.join('; '),
            estimatedEffort: b.estimatedResolutionTime,
          })),
          ...readinessData.deploymentHistoryBlockers.map(b => ({
            issue: `${b.title}: ${b.description}`,
            severity: b.severity as 'low' | 'medium' | 'high' | 'critical',
            category: 'configuration' as const,
            resolution: b.resolutionSteps.join('; '),
            estimatedEffort: b.estimatedResolutionTime,
          })),
        ],
        deploymentStrategy: {
          type: 'rolling' as const, // Default strategy - could be made configurable
          rollbackPlan: 'Automated rollback via deployment pipeline with health check validation',
          monitoringPlan:
            'Monitor application metrics, error rates, and performance indicators for 30 minutes post-deployment',
          estimatedDowntime: readinessData.isDeploymentReady
            ? '0 minutes (rolling deployment)'
            : 'Cannot deploy - blockers present',
        },
        complianceChecks: {
          adrCompliance: readinessData.adrComplianceResult.score / 100, // Convert to 0-1 range
          regulatoryCompliance: [], // Could be enhanced with actual compliance data
          auditTrail: [
            `Deployment assessment completed at ${new Date().toISOString()}`,
            `Test validation: ${readinessData.testValidationResult.overallTestStatus}`,
            `Overall readiness score: ${readinessData.overallScore}%`,
            `Git push status: ${readinessData.gitPushStatus}`,
          ],
        },
      };

      const entity = await this.memoryManager.upsertEntity({
        type: 'deployment_assessment',
        title: `Deployment Assessment: ${environment} - ${readinessData.isDeploymentReady ? 'READY' : 'BLOCKED'} - ${new Date().toISOString().split('T')[0]}`,
        description: `Deployment readiness assessment for ${environment} environment${readinessData.isDeploymentReady ? ' - APPROVED' : ' - BLOCKED'}`,
        tags: [
          'deployment',
          environment.toLowerCase(),
          'readiness-assessment',
          readinessData.isDeploymentReady ? 'approved' : 'blocked',
          `score-${Math.floor(readinessData.overallScore / 10) * 10}`,
          ...(readinessData.criticalBlockers.length > 0 ? ['critical-issues'] : []),
          ...(readinessData.testValidationResult.failureCount > 0 ? ['test-failures'] : []),
          ...(readinessData.deploymentHistoryAnalysis.rollbackRate > 20 ? ['high-rollback-risk'] : []),
        ],
        assessmentData,
        relationships: [],
        context: {
          projectPhase: 'deployment-validation',
          technicalStack: this.extractTechnicalStack(validationResults),
          environmentalFactors: [environment, projectPath || 'unknown-project'].filter(Boolean),
          stakeholders: ['deployment-team', 'qa-team', 'infrastructure-team'],
        },
        accessPattern: {
          lastAccessed: new Date().toISOString(),
          accessCount: 1,
          accessContext: ['deployment-assessment'],
        },
        evolution: {
          origin: 'created',
          transformations: [
            {
              timestamp: new Date().toISOString(),
              type: 'assessment_creation',
              description: `Deployment assessment created for ${environment}`,
              agent: 'deployment-readiness-tool',
            },
          ],
        },
        validation: {
          isVerified: readinessData.isDeploymentReady,
          verificationMethod: 'comprehensive-deployment-audit',
          verificationTimestamp: new Date().toISOString(),
        },
      });

      this.logger.info(
        `Deployment assessment stored for ${environment}`,
        'DeploymentMemoryManager',
        {
          environment,
          entityId: entity.id,
          readinessScore: readinessData.overallScore,
          isReady: readinessData.isDeploymentReady,
          blockingIssues: assessmentData.blockingIssues.length,
        }
      );

      return entity.id;
    } catch (error) {
      this.logger.error(
        'Failed to store deployment assessment',
        'DeploymentMemoryManager',
        error as Error
      );
      throw error;
    }
  }

  /**
   * Migrate existing deployment history to memory entities
   */
  async migrateExistingHistory(historyPath: string): Promise<void> {
    try {
      if (!existsSync(historyPath)) {
        this.logger.info(
          'No existing deployment history found to migrate',
          'DeploymentMemoryManager'
        );
        return;
      }

      const historyData = JSON.parse(readFileSync(historyPath, 'utf8'));
      const deployments = historyData.deployments || [];
      let migratedCount = 0;

      for (const deployment of deployments) {
        try {
          await this.migrateDeploymentRecord(deployment);
          migratedCount++;
        } catch (error) {
          this.logger.error(
            `Failed to migrate deployment ${deployment.deploymentId}`,
            'DeploymentMemoryManager',
            error as Error
          );
        }
      }

      this.logger.info(
        `Migration completed: ${migratedCount}/${deployments.length} deployments migrated`,
        'DeploymentMemoryManager'
      );
    } catch (error) {
      this.logger.error(
        'Failed to migrate deployment history',
        'DeploymentMemoryManager',
        error as Error
      );
      throw error;
    }
  }

  /**
   * Analyze deployment patterns across memory entities
   */
  async analyzeDeploymentPatterns(environment?: string): Promise<{
    patterns: any[];
    trends: any[];
    recommendations: string[];
    riskFactors: any[];
  }> {
    try {
      const query: any = {
        entityTypes: ['deployment_assessment'],
        limit: 100,
        sortBy: 'lastModified',
      };
      if (environment) {
        query.tags = [environment.toLowerCase()];
      }

      const assessments = await this.memoryManager.queryEntities(query);
      const patterns = this.detectDeploymentPatterns(assessments.entities);
      const trends = this.calculateDeploymentTrends(assessments.entities);
      const recommendations = this.generatePatternRecommendations(patterns, trends);
      const riskFactors = this.identifyRiskFactors(assessments.entities);

      return { patterns, trends, recommendations, riskFactors };
    } catch (error) {
      this.logger.error(
        'Failed to analyze deployment patterns',
        'DeploymentMemoryManager',
        error as Error
      );
      throw error;
    }
  }

  /**
   * Compare current assessment with historical patterns
   */
  async compareWithHistory(
    currentAssessment: DeploymentReadinessResult,
    environment: string
  ): Promise<{
    isImprovement: boolean;
    comparison: any;
    insights: string[];
  }> {
    try {
      const recentAssessments = await this.memoryManager.queryEntities({
        entityTypes: ['deployment_assessment'],
        tags: [environment.toLowerCase()],
        limit: 10,
        sortBy: 'lastModified',
      });

      if (recentAssessments.entities.length === 0) {
        return {
          isImprovement: true,
          comparison: { type: 'first_assessment' },
          insights: ['This is the first deployment assessment for this environment'],
        };
      }

      const lastAssessment = recentAssessments.entities[0] as any;
      const comparison = this.compareAssessments(currentAssessment, lastAssessment.assessmentData);

      return {
        isImprovement: comparison.scoreImprovement > 0,
        comparison,
        insights: this.generateComparisonInsights(comparison),
      };
    } catch (error) {
      this.logger.error(
        'Failed to compare with history',
        'DeploymentMemoryManager',
        error as Error
      );
      return {
        isImprovement: false,
        comparison: { type: 'comparison_failed' },
        insights: ['Unable to compare with historical data'],
      };
    }
  }

  // Private helper methods

  private async migrateDeploymentRecord(deployment: DeploymentRecord): Promise<void> {
    const assessmentData = {
      environment: deployment.environment as 'development' | 'staging' | 'production' | 'testing',
      readinessScore: deployment.status === 'success' ? 1.0 : 0.0, // Use 0-1 range
      validationResults: {
        testResults: deployment.testResults
          ? {
              passed: deployment.testResults.testSuitesExecuted.reduce(
                (sum, suite) => sum + suite.passedTests,
                0
              ),
              failed: deployment.testResults.failureCount,
              coverage: deployment.testResults.coveragePercentage / 100,
              criticalFailures: deployment.testResults.criticalTestFailures.map(f => f.testName),
            }
          : { passed: 0, failed: 0, coverage: 0, criticalFailures: [] },
        securityValidation: { vulnerabilities: 0, securityScore: 0.8, criticalIssues: [] },
        performanceValidation: {
          performanceScore: deployment.status === 'success' ? 0.8 : 0.2,
          bottlenecks: [],
          resourceUtilization: {},
        },
      },
      blockingIssues: deployment.failureReason
        ? [
            {
              issue: `Historical Deployment Failure: ${deployment.failureReason}`,
              severity: 'high' as const,
              category: 'configuration' as const,
              resolution: 'Review and address historical failure causes',
            },
          ]
        : [],
      deploymentStrategy: {
        type: 'rolling' as const,
        rollbackPlan: 'Standard rollback procedure',
        monitoringPlan: 'Basic monitoring',
        estimatedDowntime: deployment.rollbackRequired ? 'Variable' : '0 minutes',
      },
      complianceChecks: {
        adrCompliance: 1.0,
        regulatoryCompliance: [],
        auditTrail: [
          `Migrated deployment record from ${deployment.timestamp}`,
          `Original status: ${deployment.status}`,
          `Rollback required: ${deployment.rollbackRequired}`,
        ],
      },
    };

    await this.memoryManager.upsertEntity({
      type: 'deployment_assessment',
      title: `Historical Deployment: ${deployment.environment} - ${deployment.status.toUpperCase()} - ${deployment.timestamp.split('T')[0]}`,
      description: `Migrated deployment record for ${deployment.environment} (ID: ${deployment.deploymentId})`,
      tags: [
        'deployment',
        deployment.environment.toLowerCase(),
        'migrated-record',
        deployment.status,
        ...(deployment.rollbackRequired ? ['rollback-required'] : []),
      ],
      assessmentData,
      relationships: [],
      context: {
        projectPhase: 'deployment-execution',
        technicalStack: [],
        environmentalFactors: [deployment.environment],
        stakeholders: ['deployment-team'],
      },
      accessPattern: {
        lastAccessed: new Date().toISOString(),
        accessCount: 1,
        accessContext: ['migration'],
      },
      evolution: {
        origin: 'imported',
        transformations: [
          {
            timestamp: new Date().toISOString(),
            type: 'migration',
            description: `Migrated from deployment-history.json (original: ${deployment.timestamp})`,
            agent: 'deployment-readiness-tool',
          },
        ],
      },
      validation: {
        isVerified: true,
        verificationMethod: 'historical-migration',
        verificationTimestamp: new Date().toISOString(),
      },
    });
  }

  private extractTechnicalStack(_validationResults: any): string[] {
    // Extract technical stack from validation results
    // This is a simplified implementation
    return [];
  }

  private detectDeploymentPatterns(assessments: any[]): any[] {
    // Analyze deployment patterns across assessments
    const patterns = [];

    // Pattern: Time-based failures
    const timePatterns = this.analyzeTimePatterns(assessments);
    if (timePatterns.length > 0) {
      patterns.push({ type: 'time_based', patterns: timePatterns });
    }

    // Pattern: Environment-specific issues
    const envPatterns = this.analyzeEnvironmentPatterns(assessments);
    if (envPatterns.length > 0) {
      patterns.push({ type: 'environment_specific', patterns: envPatterns });
    }

    return patterns;
  }

  private calculateDeploymentTrends(assessments: any[]): any[] {
    if (assessments.length < 3) return [];

    const trends = [];
    const scores = assessments.map((a: any) => a.assessmentData.readinessScore);

    // Calculate score trend
    const scoreTrend = this.calculateTrend(scores);
    trends.push({
      metric: 'readiness_score',
      trend: scoreTrend > 0 ? 'improving' : scoreTrend < 0 ? 'declining' : 'stable',
      change: scoreTrend,
    });

    return trends;
  }

  private generatePatternRecommendations(patterns: any[], trends: any[]): string[] {
    const recommendations: string[] = [];

    // Generate recommendations based on patterns
    patterns.forEach(pattern => {
      if (pattern.type === 'time_based') {
        recommendations.push('Consider scheduling deployments during low-risk time windows');
      }
      if (pattern.type === 'environment_specific') {
        recommendations.push('Address environment-specific configuration issues');
      }
    });

    // Generate recommendations based on trends
    trends.forEach(trend => {
      if (trend.metric === 'readiness_score' && trend.trend === 'declining') {
        recommendations.push('Investigate causes of declining deployment readiness scores');
      }
    });

    return recommendations;
  }

  /**
   * Map deployment blocker category to schema-compliant category
   */
  private mapBlockerCategory(
    category: string
  ): 'test' | 'security' | 'performance' | 'configuration' | 'dependencies' {
    switch (category) {
      case 'test_failure':
        return 'test';
      case 'adr_compliance':
      case 'environment':
      case 'deployment_history':
        return 'configuration';
      case 'code_quality':
        return 'performance';
      default:
        return 'configuration';
    }
  }

  private identifyRiskFactors(assessments: any[]): any[] {
    const riskFactors = [];

    // Analyze recent failures
    const recentFailures = assessments
      .filter((a: any) => !a.assessmentData.deploymentReady)
      .slice(0, 5);

    if (recentFailures.length >= 3) {
      riskFactors.push({
        factor: 'frequent_failures',
        description: `${recentFailures.length} deployment blocks in recent assessments`,
        severity: 'high',
      });
    }

    return riskFactors;
  }

  private compareAssessments(current: DeploymentReadinessResult, historical: any): any {
    return {
      scoreImprovement: current.overallScore - historical.readinessScore,
      confidenceChange: current.confidence - historical.confidence,
      blockingIssuesChange: current.criticalBlockers.length - historical.blockingIssues.length,
      testImprovements: {
        failureCountChange:
          current.testValidationResult.failureCount -
          (historical.validationResults?.testValidation?.failureCount || 0),
        coverageChange:
          current.testValidationResult.coveragePercentage -
          (historical.validationResults?.testValidation?.coveragePercentage || 0),
      },
    };
  }

  private generateComparisonInsights(comparison: any): string[] {
    const insights = [];

    if (comparison.scoreImprovement > 0) {
      insights.push(`Deployment readiness improved by ${comparison.scoreImprovement} points`);
    } else if (comparison.scoreImprovement < 0) {
      insights.push(
        `Deployment readiness declined by ${Math.abs(comparison.scoreImprovement)} points`
      );
    }

    if (comparison.testImprovements.failureCountChange < 0) {
      insights.push(
        `Test stability improved: ${Math.abs(comparison.testImprovements.failureCountChange)} fewer failures`
      );
    }

    if (comparison.testImprovements.coverageChange > 0) {
      insights.push(`Test coverage increased by ${comparison.testImprovements.coverageChange}%`);
    }

    return insights;
  }

  private analyzeTimePatterns(_assessments: any[]): any[] {
    // Simplified time pattern analysis
    return [];
  }

  private analyzeEnvironmentPatterns(_assessments: any[]): any[] {
    // Simplified environment pattern analysis
    return [];
  }

  private calculateTrend(values: number[]): number {
    if (values.length < 2) return 0;
    const recent = values.slice(0, Math.min(5, values.length));
    const older = values.slice(Math.min(5, values.length));
    const recentAvg = recent.reduce((a, b) => a + b, 0) / recent.length;
    const olderAvg = older.length > 0 ? older.reduce((a, b) => a + b, 0) / older.length : recentAvg;
    return recentAvg - olderAvg;
  }
}
```
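The trend detection compares the mean of the five most recent readiness scores against the mean of the remaining, older scores. A standalone copy of that `calculateTrend` logic (the sample scores are illustrative):

```typescript
// Standalone copy of DeploymentMemoryManager.calculateTrend: a positive result
// means the recent scores average higher than the older ones (improving).
function calculateTrend(values: number[]): number {
  if (values.length < 2) return 0;
  const recent = values.slice(0, Math.min(5, values.length));
  const older = values.slice(Math.min(5, values.length));
  const recentAvg = recent.reduce((a, b) => a + b, 0) / recent.length;
  const olderAvg = older.length > 0 ? older.reduce((a, b) => a + b, 0) / older.length : recentAvg;
  return recentAvg - olderAvg;
}

// Scores are newest-first, matching the sortBy: 'lastModified' query ordering.
console.log(calculateTrend([0.9, 0.9, 0.8, 0.8, 0.8, 0.6, 0.5]) > 0); // true (improving)
```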
  • performFullAudit helper: core logic for full readiness check combining all validators (tests, history, code quality via TreeSitter, ADR compliance via Smart Code Linking, environment research), generates final result with blockers and scores.
```typescript
async function performFullAudit(
  args: z.infer<typeof DeploymentReadinessSchema>,
  projectPath: string,
  historyPath: string
): Promise<DeploymentReadinessResult> {
  // Step 0: Research environment readiness
  const environmentResearch = await performEnvironmentResearch(args, projectPath);

  // Combine all validations
  const testResult = await performTestValidation(args, projectPath);
  const historyResult = await performDeploymentHistoryAnalysis(args, historyPath);

  // Smart Code Linking - Enhanced deployment readiness with ADR analysis
  let smartCodeAnalysis = '';
  let adrComplianceResult = testResult.adrComplianceResult;

  if (args.requireAdrCompliance) {
    try {
      // Discover ADRs in the project
      const { discoverAdrsInDirectory } = await import('../utils/adr-discovery.js');
      const adrDirectory = 'docs/adrs';
      const discoveryResult = await discoverAdrsInDirectory(adrDirectory, projectPath, {
        includeContent: true,
        includeTimeline: false,
      });

      if (discoveryResult.adrs.length > 0) {
        // Combine all ADR content for Smart Code Linking analysis
        const combinedAdrContent = discoveryResult.adrs
          .map(adr => `# ${adr.title}\n${adr.content || ''}`)
          .join('\n\n');

        const relatedCodeResult = await findRelatedCode(
          'deployment-readiness-analysis',
          combinedAdrContent,
          projectPath,
          {
            useAI: true,
            useRipgrep: true,
            maxFiles: 25,
            includeContent: false,
          }
        );

        // Enhanced ADR compliance analysis with related code context
        const deploymentCriticalFiles = relatedCodeResult.relatedFiles.filter(file => {
          const deploymentKeywords = [
            'deploy', 'config', 'env', 'docker', 'k8s', 'terraform', 'ci', 'cd',
          ];
          return deploymentKeywords.some(
            keyword =>
              file.path.toLowerCase().includes(keyword) ||
              file.directory.toLowerCase().includes(keyword)
          );
        });

        adrComplianceResult = {
          score: Math.min(100, 70 + relatedCodeResult.confidence * 30),
          compliantAdrs: discoveryResult.adrs.length,
          totalAdrs: discoveryResult.adrs.length,
          missingImplementations:
            deploymentCriticalFiles.length === 0
              ? ['Deployment-specific implementations not found in related code']
              : [],
          recommendations: [
            ...(deploymentCriticalFiles.length > 0
              ? [`Found ${deploymentCriticalFiles.length} deployment-critical files linked to ADRs`]
              : ['Consider documenting deployment procedures in ADRs']),
            ...(relatedCodeResult.relatedFiles.length > 10
              ? ['High code-ADR linkage indicates good architectural documentation']
              : ['Consider improving ADR-to-code traceability']),
            ...(relatedCodeResult.confidence > 0.8
              ? ['Strong architectural alignment detected between ADRs and implementation']
              : ['Review ADR implementation alignment before deployment']),
          ],
        };

        smartCodeAnalysis = `
## 🔗 Smart Code Linking - Deployment Analysis

**ADR Discovery**: Found ${discoveryResult.adrs.length} architectural decision records
**Related Code Files**: ${relatedCodeResult.relatedFiles.length} files linked to ADRs
**Deployment-Critical Files**: ${deploymentCriticalFiles.length} files identified

### Deployment-Critical Code Analysis
${
  deploymentCriticalFiles.length > 0
    ? deploymentCriticalFiles
        .slice(0, 5)
        .map(
          (file, index) =>
            `${index + 1}. **${file.path}** - ${file.extension} file (${file.size} bytes)`
        )
        .join('\n')
    : '*No deployment-specific files found in ADR-related code*'
}

### Architectural Alignment
- **ADR-Code Confidence**: ${(relatedCodeResult.confidence * 100).toFixed(1)}%
- **Keywords Used**: ${relatedCodeResult.keywords.join(', ')}
- **Implementation Coverage**: ${relatedCodeResult.relatedFiles.length > 0 ? 'Adequate' : 'Needs Review'}

**Deployment Impact**: ${
          deploymentCriticalFiles.length > 0
            ? 'ADR-guided deployment files found - architectural decisions are implemented'
            : 'Limited deployment-specific code found - verify manual deployment procedures'
        }
`;
      } else {
        smartCodeAnalysis = `
## 🔗 Smart Code Linking - Deployment Analysis

**Status**: No ADRs found in project
**Recommendation**: Consider creating ADRs to document deployment architecture and decisions
**Impact**: Proceeding with deployment readiness check without architectural guidance
`;
      }
    } catch (error) {
      console.warn('[WARNING] Smart Code Linking for deployment analysis failed:', error);
      smartCodeAnalysis = `
## 🔗 Smart Code Linking - Deployment Analysis

**Status**: ⚠️ ADR analysis failed - continuing with standard deployment checks
**Error**: ${error instanceof Error ? error.message : 'Unknown error'}
`;
    }
  }

  // Create environment blockers based on research findings
  const environmentBlockers: DeploymentBlocker[] = [];
  if (environmentResearch.warnings.length > 0) {
    environmentResearch.warnings.forEach(warning => {
      if (warning.includes('threshold') || warning.includes('No container orchestration')) {
        environmentBlockers.push({
          category: 'environment',
          title: 'Environment Readiness Concern',
          description: warning,
          severity: warning.includes('No container orchestration') ? 'high' : 'medium',
          impact: 'May affect deployment execution',
          resolutionSteps: [
            'Verify environment tools are installed',
            'Check environment configurations',
            'Consult deployment documentation',
          ],
          estimatedResolutionTime: '30 minutes - 1 hour',
          blocksDeployment: args.strictMode && warning.includes('No container orchestration'),
        });
      }
    });
  }

  const allBlockers = [
    ...testResult.criticalBlockers,
    ...testResult.testFailureBlockers,
    ...historyResult.deploymentHistoryBlockers,
    ...environmentBlockers,
  ];

  // Adjust overall score based on environment research confidence
  const baseScore = (testResult.overallScore + historyResult.overallScore) / 2;
  const environmentScore = environmentResearch.confidence * 100;
  const overallScore = baseScore * 0.7 + environmentScore * 0.3;

  const isReady = allBlockers.filter(b => b.blocksDeployment).length === 0;

  const result = {
    isDeploymentReady: isReady,
    overallScore,
    confidence: Math.min(
      testResult.confidence,
      historyResult.confidence,
      environmentResearch.confidence * 100
    ),
    codeQualityAnalysis: testResult.codeQualityAnalysis,
    testValidationResult: testResult.testValidationResult,
    deploymentHistoryAnalysis: historyResult.deploymentHistoryAnalysis,
    adrComplianceResult,
    criticalBlockers: allBlockers.filter(b => b.severity === 'critical'),
    testFailureBlockers: testResult.testFailureBlockers,
    deploymentHistoryBlockers: historyResult.deploymentHistoryBlockers,
    warnings: [...testResult.warnings, ...historyResult.warnings, ...environmentResearch.warnings],
    todoTasksCreated: [],
    healthScoreUpdate: {},
    gitPushStatus: isReady ? ('allowed' as const) : ('blocked' as const),
    overrideStatus: {},
    smartCodeAnalysis, // Include Smart Code Linking analysis
    environmentResearch, // Include environment research results
  };

  return result;
}
```
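The final score blends the mean of the test and history scores (70%) with the environment-research confidence scaled to 0-100 (30%), and deployment is allowed only when no blocker has `blocksDeployment` set. A standalone sketch of that scoring step (the function name and sample inputs are illustrative; the weights match the audit above):

```typescript
// Sketch of the final scoring in performFullAudit: base score is the mean of
// test and history scores, blended 70/30 with environment-research confidence.
function overallReadinessScore(
  testScore: number, // 0-100
  historyScore: number, // 0-100
  envConfidence: number // 0-1, from environment research
): number {
  const baseScore = (testScore + historyScore) / 2;
  const environmentScore = envConfidence * 100;
  return baseScore * 0.7 + environmentScore * 0.3;
}

console.log(overallReadinessScore(90, 80, 0.9)); // ≈ 86.5
```

Note that a high score alone does not permit deployment: a single blocker with `blocksDeployment: true` still forces `gitPushStatus` to `'blocked'`.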
