---
pdf-engine: lualatex
mainfont: "DejaVu Serif"
monofont: "DejaVu Sans Mono"
header-includes: |
  \usepackage{fontspec}
  \directlua{
    luaotfload.add_fallback("emojifallback", {"NotoColorEmoji:mode=harf;"})
  }
  \setmainfont[
    RawFeature={fallback=emojifallback}
  ]{DejaVu Serif}
---
# CodeGraph CI/CD Pipeline Documentation
## Overview
This document describes the comprehensive CI/CD pipeline implemented for the CodeGraph Rust project. The pipeline includes automated testing, building, benchmarking, security scanning, cross-platform builds, release automation, and documentation generation.
## 🏗️ Pipeline Architecture
The CI/CD pipeline consists of several interconnected workflows:
### Core Workflows
- **CI** (`ci.yml`) - Main continuous integration workflow
- **Security** (`security.yml`) - Security scanning and dependency auditing
- **Benchmark** (`benchmark.yml`) - Performance benchmarking and regression detection
- **Release** (`release.yml`) - Cross-platform builds and release automation
- **Documentation** (`docs.yml`) - Documentation generation and deployment
- **Nightly** (`nightly.yml`) - Extended testing and monitoring
## 📋 Workflow Details
### 1. CI Workflow (`ci.yml`)
**Triggers:**
- Push to `main` and `develop` branches
- Pull requests to `main` and `develop` branches
**Jobs:**
- **Test Suite**: Cross-platform testing (Ubuntu, Windows, macOS) with Rust stable/beta
- **Code Coverage**: Generate and upload coverage reports using `cargo-llvm-cov`
- **Build**: Release builds and artifact uploading
**Key Features:**
- Format checking with `rustfmt`
- Linting with `clippy`
- Cross-platform compatibility testing
- Code coverage reporting to Codecov
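The same checks can be reproduced before pushing. A minimal sketch, assuming the coverage job uses `cargo-llvm-cov`'s LCOV output for Codecov (the exact flags in `ci.yml` may differ):
```bash
# Mirror the CI checks locally (flag choices are assumptions; see ci.yml for the exact invocations)
cargo fmt --check
cargo clippy --workspace --all-targets --all-features -- -D warnings
cargo llvm-cov --workspace --all-features --lcov --output-path lcov.info
```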
### 2. Security Workflow (`security.yml`)
**Triggers:**
- Push to `main` and `develop` branches
- Pull requests to `main` and `develop` branches
- Daily schedule (6 AM UTC)
**Jobs:**
- **Security Audit**: Vulnerability scanning with `cargo-audit`
- **Dependency Check**: License and policy validation with `cargo-deny`
- **Supply Chain**: Analysis of crate authors, publishers, and dependency provenance
- **Static Analysis**: Security-focused static analysis with Semgrep
- **Security Lints**: Enhanced Clippy security lints
**Security Features:**
- Automatic vulnerability detection
- License compliance checking
- Supply chain security analysis
- SARIF file generation for GitHub Security tab
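Most of these checks can also be run from a developer machine with the tools listed under Local Development Setup. A minimal sketch (the exact arguments in `security.yml` may differ):
```bash
# Local security pass (argument choices are assumptions, not the workflow's exact invocation)
cargo audit                             # scan Cargo.lock against the RustSec advisory database
cargo deny check advisories             # enforce the advisory policy from deny.toml
cargo deny check licenses bans sources  # license, duplicate-crate, and source policies
cargo supply-chain publishers           # list the crates.io publishers behind all dependencies
```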
### 3. Benchmark Workflow (`benchmark.yml`)
**Triggers:**
- Push to `main` branch
- Pull requests to `main` branch
- Weekly schedule (Monday 2 AM UTC)
**Jobs:**
- **Performance Benchmarks**: Criterion-based benchmarking
- **Benchmark Comparison**: PR vs main branch performance comparison
- **Memory Profiling**: Memory usage analysis with Valgrind
**Performance Features:**
- Automated performance regression detection
- Benchmark result visualization
- PR performance comparison comments
- Memory leak detection
- Performance trend tracking
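The PR-vs-main comparison can be approximated locally with Criterion baselines and `critcmp`. A sketch (baseline and branch names are illustrative):
```bash
# Record a baseline on main, re-run on the feature branch, then compare
git checkout main
cargo bench --workspace --features benchmarking -- --save-baseline main
git checkout my-feature-branch                     # illustrative branch name
cargo bench --workspace --features benchmarking -- --save-baseline pr
critcmp main pr                                    # diff the two saved Criterion baselines
```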
### 4. Release Workflow (`release.yml`)
**Triggers:**
- Git tags starting with `v*`
- Manual workflow dispatch
**Jobs:**
- **Create Release**: Generate changelog and create GitHub release
- **Cross-platform Builds**: Build for multiple platforms and architectures
- **Docker Images**: Multi-architecture Docker image builds
- **Crates.io Publishing**: Automated crate publishing
- **Post-release Tasks**: Notification and badge updates
**Supported Platforms:**
- Linux: x86_64 (glibc and musl), aarch64
- macOS: x86_64, Apple Silicon (aarch64)
- Windows: x86_64
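Individual release targets can be exercised locally before tagging. A sketch (target choice and image tag are illustrative; cross-compiling for musl may additionally require a musl toolchain):
```bash
# Build one of the Linux release targets
rustup target add x86_64-unknown-linux-musl
cargo build --release --target x86_64-unknown-linux-musl

# Multi-architecture images are built with Docker Buildx, roughly as the workflow does
docker buildx build --platform linux/amd64,linux/arm64 -t codegraph:local .
```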
### 5. Documentation Workflow (`docs.yml`)
**Triggers:**
- Push to `main` branch
- Pull requests to `main` branch
- Manual workflow dispatch
**Jobs:**
- **Build Documentation**: Rustdoc, mdBook, and OpenAPI documentation
- **Deploy Documentation**: GitHub Pages deployment
- **Link Checking**: Automated link validation
- **Accessibility**: Documentation accessibility testing
**Documentation Types:**
- **Rustdoc**: API reference documentation
- **mdBook**: User guide and tutorials
- **OpenAPI**: REST API specification
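The individual documentation builds can be reproduced locally. A sketch (the mdBook directory is an assumption; adjust it to wherever the book actually lives):
```bash
# Build the pieces that docs.yml assembles
cargo doc --workspace --no-deps        # Rustdoc API reference
mdbook build docs/guide                # mdBook user guide (assumed path)
```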
### 6. Nightly Workflow (`nightly.yml`)
**Triggers:**
- Daily schedule (2 AM UTC)
- Manual workflow dispatch
**Jobs:**
- **Nightly Tests**: Extended testing with Rust nightly
- **Fuzzing**: Automated fuzz testing
- **Performance Monitoring**: Long-running performance tests
- **Memory Leak Detection**: Comprehensive memory analysis
- **Dependency Audit**: Detailed dependency security review
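Parts of the nightly run can be reproduced locally when investigating a failure. A sketch (the fuzz target name is hypothetical; use `cargo fuzz list` to see the real ones):
```bash
# Extended tests on the nightly toolchain
rustup toolchain install nightly
cargo +nightly test --workspace --all-features

# Short local fuzzing session (target name is hypothetical, 300 s budget)
cargo fuzz list
cargo +nightly fuzz run parse_target -- -max_total_time=300
```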
## 🔧 Setup and Configuration
### Required Secrets
Add these secrets to your GitHub repository:
```bash
# Required for release workflow
DOCKER_USERNAME # Docker Hub username
DOCKER_PASSWORD # Docker Hub password/token
CARGO_REGISTRY_TOKEN # crates.io API token
# Optional - automatically available
GITHUB_TOKEN # Automatically provided by GitHub
CODECOV_TOKEN # For enhanced Codecov features
```
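One way to add these is the GitHub CLI, which prompts for each value (or reads it from stdin):
```bash
# Set repository secrets interactively with the GitHub CLI
gh secret set DOCKER_USERNAME
gh secret set DOCKER_PASSWORD
gh secret set CARGO_REGISTRY_TOKEN
gh secret set CODECOV_TOKEN
```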
### Repository Settings
1. **Enable GitHub Pages** for documentation deployment
2. **Enable Dependabot** for automated dependency updates
3. **Configure branch protection** for `main` branch:
- Require status checks to pass
- Require up-to-date branches
- Require signed commits (recommended)
### Local Development Setup
1. **Install required tools:**
```bash
# Core tools
cargo install cargo-audit cargo-deny cargo-outdated
cargo install cargo-llvm-cov critcmp
cargo install mdbook
# Optional tools for advanced features
cargo install cargo-fuzz cargo-valgrind cargo-supply-chain
```
2. **Run checks locally:**
```bash
# Format and lint
cargo fmt --check
cargo clippy --workspace --all-targets --all-features
# Security audit
cargo audit
cargo deny check
# Run tests
cargo test --workspace --all-features
# Run benchmarks
cargo bench --workspace --features benchmarking
```
## 📊 Performance Monitoring
### Benchmark Metrics
The pipeline tracks several performance metrics:
- **Embedding Generation**: Single and batch embedding performance
- **Memory Usage**: Allocation patterns and memory efficiency
- **Concurrency**: Multi-threaded performance scaling
- **Code Size Impact**: Performance vs input size correlation
### Regression Detection
- **Automatic Detection**: Alerts when a benchmark exceeds the 200% regression threshold (roughly twice as slow as the baseline)
- **PR Comparisons**: Side-by-side performance comparisons
- **Trend Analysis**: Historical performance tracking
- **Alert System**: Automatic notifications for regressions
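When comparing baselines locally, small fluctuations can be filtered out; a sketch, assuming your `critcmp` version supports its threshold filter and reusing the baselines recorded in the benchmark section above:
```bash
# Show only benchmarks whose difference between baselines exceeds 5% (flag availability assumed)
critcmp --threshold 5 main pr
```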
## 🛡️ Security Features
### Vulnerability Management
- **Daily Scans**: Automated vulnerability detection
- **Dependency Auditing**: License and security policy enforcement
- **Supply Chain**: Author and maintainer analysis
- **Static Analysis**: Security-focused code analysis
### Compliance
- **License Validation**: Automated license compliance checking
- **Security Policies**: Enforced security guidelines
- **Audit Trails**: Comprehensive security audit logging
## 🚀 Release Process
### Automated Release Steps
1. **Version Tagging**: Create and push version tags
2. **Changelog Generation**: Automated changelog creation
3. **Cross-platform Builds**: Multi-architecture binary generation
4. **Docker Images**: Containerized application builds
5. **Crate Publishing**: Automated crates.io publishing
6. **Documentation Updates**: Synchronized documentation updates
### Manual Release Steps
```bash
# 1. Update version numbers (GNU sed shown; bump every workspace Cargo.toml that declares one)
sed -i 's/version = "0.1.0"/version = "0.2.0"/' Cargo.toml
# 2. Add a changelog entry for the new version
echo "## [0.2.0] - $(date +%Y-%m-%d)" >> CHANGELOG.md
# 3. Commit and tag
git add .
git commit -m "Release v0.2.0"
git tag v0.2.0
git push origin main --tags
```
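The workflow can also be started without a tag via its manual dispatch trigger, for example with the GitHub CLI (if `release.yml` defines inputs, pass them with `-f key=value`):
```bash
# Trigger the release workflow by hand and follow its progress
gh workflow run release.yml
gh run watch
```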
## 📚 Documentation Deployment
### Automatic Deployment
- **GitHub Pages**: Automatic deployment on main branch updates
- **Multi-format**: HTML, PDF, and interactive documentation
- **Version Control**: Documentation versioning with releases
### Documentation Structure
```
docs/
├── api/ # Rustdoc API reference
├── guide/ # mdBook user guide
├── openapi/ # REST API specification
└── index.html # Documentation hub
```
## 🔍 Monitoring and Alerts
### GitHub Checks
All workflows provide detailed status checks:
- ✅ Tests passing
- ✅ Security scans clean
- ✅ Performance within limits
- ✅ Documentation builds successfully
### Artifact Management
- **Test Reports**: JUnit XML format
- **Coverage Reports**: LCOV format for Codecov
- **Benchmark Data**: Criterion HTML and JSON reports
- **Security Reports**: SARIF format for GitHub Security
- **Build Artifacts**: Cross-platform binaries and Docker images
### Retention Policies
- **Build Artifacts**: 30 days
- **Test Reports**: 30 days
- **Security Reports**: 90 days
- **Performance Data**: Permanent (for trend analysis)
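Artifacts from a finished run can be pulled down for local inspection with the GitHub CLI. A sketch (the artifact name is hypothetical; use the names shown on the run's summary page):
```bash
# Find a recent CI run, then download one of its artifacts
gh run list --workflow ci.yml --limit 5
gh run download <run-id> --name coverage-report
```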
## 🛠️ Troubleshooting
### Common Issues
1. **Build Failures**: Check dependency versions and system requirements
2. **Test Failures**: Verify local environment matches CI environment
3. **Security Alerts**: Review dependency updates and security advisories
4. **Performance Regressions**: Analyze benchmark comparisons and profiles
### Debug Steps
```bash
# Local debugging commands
cargo check --workspace --all-features
cargo test --workspace -- --nocapture
cargo bench --workspace --features benchmarking -- --verbose
cargo audit --deny warnings
```
### Getting Help
- **Issues**: Use GitHub issue templates for bug reports and feature requests
- **Discussions**: Community discussions for questions and ideas
- **Documentation**: Comprehensive guides and API reference
- **Examples**: Working examples in the repository
## 🔄 Maintenance
### Regular Tasks
- **Weekly**: Review Dependabot PRs
- **Monthly**: Update CI/CD configurations
- **Quarterly**: Review security policies and performance benchmarks
- **Per Release**: Update documentation and changelog
### Continuous Improvement
The CI/CD pipeline is continuously improved based on:
- **Performance Metrics**: Regular analysis of build and test times
- **Security Updates**: Latest security best practices and tools
- **Developer Feedback**: Team input on workflow efficiency
- **Industry Standards**: Adoption of new CI/CD best practices
## 📈 Metrics and KPIs
### Build Metrics
- **Build Success Rate**: Target >95%
- **Build Time**: Target <10 minutes for CI
- **Test Coverage**: Target >80%
### Security Metrics
- **Vulnerability Response Time**: Target <24 hours
- **Security Scan Coverage**: 100%
- **Compliance Rate**: Target 100%
### Performance Metrics
- **Benchmark Stability**: <5% variation
- **Memory Usage**: Monitor for leaks and excessive allocation
- **Regression Detection**: <24 hour response time
---
This CI/CD pipeline provides a robust foundation for maintaining code quality, security, and performance while enabling rapid and reliable releases of the CodeGraph project.