Introduction: No Backup, No Data

Data loss can strike at any time: hardware failures, human error, security breaches, natural disasters. A solid backup strategy is therefore essential to server operations.

1. Backup Strategy Planning

1.1 The 3-2-1 Backup Rule

# 3-2-1 Rule
- Keep 3 copies of your data
- Store on 2 different media types
- Keep 1 copy offsite
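The rule above can be sketched as a small script. All paths and the offsite host below are illustrative assumptions, not fixed conventions:

```shell
#!/bin/sh
# 3-2-1 sketch; paths are illustrative assumptions.
DATA=/tmp/demo_data          # copy 1: the live data
SECOND=/tmp/demo_usb         # copy 2: a second medium (e.g. a USB disk mount)
# copy 3, offsite (hypothetical host): rsync -a "$DATA"/ user@offsite:/backup/

mkdir -p "$DATA" "$SECOND"
echo "hello" > "$DATA/file.txt"

# copy 2: mirror onto the second medium
cp -a "$DATA"/. "$SECOND"/
```

In practice copy 2 would live on a different physical device, and copy 3 on a remote or cloud target.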

1.2 Backup Types

# Full Backup
- Copies all data
- Easiest to restore
- Requires most time and space

# Incremental Backup
- Only files changed since last backup
- Fast and space-efficient
- Restore requires full + all incrementals

# Differential Backup
- Files changed since last full backup
- Medium time/space requirements
- Restore requires full + last differential
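GNU tar has no differential mode of its own, but one can be emulated: keep the snapshot file written by the full backup, and run every differential against a fresh copy of it, so each run captures changes since the full backup rather than since the previous run. Paths below are illustrative:

```shell
#!/bin/sh
# Differential backup emulated with GNU tar's listed-incremental mode.
SNAR=/tmp/diffdemo/full.snar
mkdir -p /tmp/diffdemo/data /tmp/diffdemo/out
echo a > /tmp/diffdemo/data/a.txt

# Full backup: records filesystem state in full.snar
tar -czf /tmp/diffdemo/out/full.tar.gz -g "$SNAR" -C /tmp/diffdemo data

# Later, a change appears
echo b > /tmp/diffdemo/data/b.txt

# Each differential starts from a COPY of full.snar, so it always
# diffs against the full backup, not the previous differential.
cp "$SNAR" "$SNAR.diff"
tar -czf /tmp/diffdemo/out/diff1.tar.gz -g "$SNAR.diff" -C /tmp/diffdemo data
```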

2. Backup with tar

2.1 Basic Backup

# Create compressed backup
tar -czvf backup_$(date +%Y%m%d).tar.gz /path/to/data

# Exclude specific directories
tar -czvf backup.tar.gz --exclude='*.log' --exclude='cache' /data

# Split large archives (reassemble with: cat backup_part_* | tar -xvf -)
tar -cvf - /data | split -b 1G - backup_part_

# Extract backup
tar -xzvf backup.tar.gz -C /restore/path

2.2 Incremental Backup

# Incremental backup using snapshot file
# First run (full)
tar -czvf full_backup.tar.gz -g /var/backup/snapshot.snar /data

# Second run (incremental)
tar -czvf incr_backup_1.tar.gz -g /var/backup/snapshot.snar /data

# Restore (in order)
tar -xzvf full_backup.tar.gz -g /dev/null -C /restore
tar -xzvf incr_backup_1.tar.gz -g /dev/null -C /restore
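The full/incremental/restore sequence above, end to end on throwaway data (all paths are illustrative):

```shell
#!/bin/sh
# Minimal incremental backup-and-restore cycle with GNU tar.
BASE=/tmp/incrdemo
SNAR=$BASE/snapshot.snar
mkdir -p "$BASE/data" "$BASE/out" "$BASE/restore"
echo one > "$BASE/data/one.txt"

# Full backup: creates the snapshot file
tar -czf "$BASE/out/full.tar.gz" -g "$SNAR" -C "$BASE" data

# A change, then an incremental against the same snapshot file
echo two > "$BASE/data/two.txt"
tar -czf "$BASE/out/incr_1.tar.gz" -g "$SNAR" -C "$BASE" data

# Restore replays the chain in order; -g /dev/null ignores snapshot state
tar -xzf "$BASE/out/full.tar.gz" -g /dev/null -C "$BASE/restore"
tar -xzf "$BASE/out/incr_1.tar.gz" -g /dev/null -C "$BASE/restore"
```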

3. Backup with rsync

3.1 Local Synchronization

# Basic sync
rsync -avh /source/ /backup/

# Sync deletions too
rsync -avh --delete /source/ /backup/

# Show progress
rsync -avh --progress /source/ /backup/

# Test run (dry-run)
rsync -avhn /source/ /backup/

3.2 Remote Backup

# Remote backup via SSH
rsync -avhz -e ssh /local/data/ user@remote:/backup/

# Bandwidth limit
rsync -avhz --bwlimit=1000 /data/ user@remote:/backup/

# Resume partial transfers
rsync -avhz --partial --progress /data/ user@remote:/backup/

# Use specific SSH key
rsync -avhz -e "ssh -i ~/.ssh/backup_key" /data/ user@remote:/backup/

3.3 rsync Backup Script

#!/bin/bash
# rsync_backup.sh

SOURCE="/home /etc /var/www"
DEST="/mnt/backup"
LOG="/var/log/backup.log"
DATE=$(date +%Y%m%d_%H%M%S)

echo "=== Backup started: $DATE ===" >> "$LOG"

# SOURCE deliberately relies on word splitting: one path per entry
for dir in $SOURCE; do
    rsync -avh --delete \
        --exclude='*.tmp' \
        --exclude='cache' \
        "$dir" "$DEST" >> "$LOG" 2>&1
done

echo "=== Backup completed: $(date +%Y%m%d_%H%M%S) ===" >> "$LOG"

4. Database Backup

4.1 MySQL/MariaDB

# Full backup
mysqldump -u root -p --all-databases > all_db_$(date +%Y%m%d).sql

# Specific database
mysqldump -u root -p mydb > mydb_$(date +%Y%m%d).sql

# Compressed backup
mysqldump -u root -p mydb | gzip > mydb_$(date +%Y%m%d).sql.gz

# Restore
mysql -u root -p < backup.sql
gunzip < backup.sql.gz | mysql -u root -p mydb
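For InnoDB tables, a dump can be made consistent without locking tables for the whole run by adding --single-transaction. This is a command fragment that assumes a running MySQL/MariaDB server; the database name is illustrative:

```shell
# Consistent InnoDB dump without long-held table locks
mysqldump -u root -p --single-transaction mydb | gzip > mydb_$(date +%Y%m%d).sql.gz
```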

4.2 PostgreSQL

# Full backup
pg_dumpall -U postgres > all_db_$(date +%Y%m%d).sql

# Specific database
pg_dump -U postgres mydb > mydb_$(date +%Y%m%d).sql

# Custom format (parallel restore capable)
pg_dump -U postgres -Fc mydb > mydb.dump

# Restore (for pg_dumpall output, connect to the postgres database)
psql -U postgres -d postgres -f backup.sql
pg_restore -U postgres -d mydb mydb.dump
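The custom format's main payoff is parallel restore via -j. A command fragment assuming a running PostgreSQL server, with illustrative names:

```shell
# Restore a custom-format dump with 4 parallel jobs
pg_restore -U postgres -d mydb -j 4 mydb.dump
```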

5. Automated Backup

5.1 cron Configuration

# crontab -e

# Daily full backup at 2 AM
0 2 * * * /root/scripts/daily_backup.sh

# Weekly full backup on Sunday
0 3 * * 0 /root/scripts/weekly_backup.sh

# Hourly incremental backup
0 * * * * /root/scripts/hourly_incremental.sh
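cron mails any un-redirected output to the MAILTO address, so a useful pattern is to log stdout and leave stderr alone: routine output lands in the log, while errors trigger a mail. The address and log path below are assumptions:

```shell
# crontab fragment: stdout to a log, stderr mailed on failure
MAILTO=admin@example.com
0 2 * * * /root/scripts/daily_backup.sh >> /var/log/backup_cron.log
```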

5.2 Comprehensive Backup Script

#!/bin/bash
# comprehensive_backup.sh

# Configuration
BACKUP_DIR="/mnt/backup"
DATE=$(date +%Y%m%d)
RETENTION_DAYS=30
LOG="/var/log/backup.log"

log() {
    echo "[$(date '+%Y-%m-%d %H:%M:%S')] $1" >> "$LOG"
}

# Create backup directory
mkdir -p "$BACKUP_DIR/$DATE"

log "Backup started"

# System configuration backup
tar -czf "$BACKUP_DIR/$DATE/etc.tar.gz" /etc 2>/dev/null
log "System config backup completed"

# Web data backup
tar -czf "$BACKUP_DIR/$DATE/www.tar.gz" /var/www 2>/dev/null
log "Web data backup completed"

# MySQL backup (no -p: credentials must come non-interactively, e.g. ~/.my.cnf)
mysqldump -u root --all-databases | gzip > "$BACKUP_DIR/$DATE/mysql.sql.gz"
log "MySQL backup completed"

# Remove old backups
find "$BACKUP_DIR" -mindepth 1 -maxdepth 1 -type d -mtime +$RETENTION_DAYS -exec rm -rf {} \;
log "Old backup cleanup completed"

log "Backup completed"

6. Recovery Procedures

6.1 File Recovery

# Extract tar archive
tar -xzvf backup.tar.gz -C /restore/path

# Restore a specific file only (path exactly as listed by tar -tzf)
tar -xzvf backup.tar.gz -C /restore path/to/file

# rsync recovery
rsync -avh /backup/data/ /restore/data/
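When restoring a single file, GNU tar's --strip-components can drop the leading directories so the file lands where you want it rather than under the archive's full path. A self-contained demo with illustrative paths:

```shell
#!/bin/sh
# Single-file restore with --strip-components (GNU tar).
mkdir -p /tmp/restdemo/data/sub /tmp/restdemo/out
echo x > /tmp/restdemo/data/sub/app.conf
tar -czf /tmp/restdemo/backup.tar.gz -C /tmp/restdemo data

# Extract only data/sub/app.conf, dropping the leading "data/" component
tar -xzf /tmp/restdemo/backup.tar.gz -C /tmp/restdemo/out \
    --strip-components=1 data/sub/app.conf
```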

6.2 System Recovery

# 1. Boot from Live USB/CD
# 2. Mount filesystem
mount /dev/sda1 /mnt

# 3. Restore from backup
tar -xzvf /backup/root.tar.gz -C /mnt

# 4. Restore bootloader (GRUB 2; legacy GRUB used --root-directory=/mnt)
grub-install --boot-directory=/mnt/boot /dev/sda

# 5. Reboot

7. Recovery Testing

#!/bin/bash
# backup_verify.sh

BACKUP_FILE="$1"
TEST_DIR="/tmp/backup_test_$$"

mkdir -p "$TEST_DIR"

echo "Verifying backup file: $BACKUP_FILE"

# Check archive integrity
if tar -tzf "$BACKUP_FILE" > /dev/null 2>&1; then
    echo "Archive integrity: OK"
else
    echo "Archive integrity: FAILED"
    exit 1
fi

# Test restore
tar -xzf "$BACKUP_FILE" -C "$TEST_DIR"
FILE_COUNT=$(find "$TEST_DIR" -type f | wc -l)
echo "Restored file count: $FILE_COUNT"

# Cleanup
rm -rf "$TEST_DIR"
echo "Verification complete"
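A cheap complement to the test restore above is recording a checksum at backup time and verifying it before any restore, which catches bit rot or truncated transfers. Paths are illustrative:

```shell
#!/bin/sh
# Record a checksum at backup time, verify it later before restoring.
mkdir -p /tmp/ckdemo
echo data > /tmp/ckdemo/payload
tar -czf /tmp/ckdemo/backup.tar.gz -C /tmp/ckdemo payload
sha256sum /tmp/ckdemo/backup.tar.gz > /tmp/ckdemo/backup.tar.gz.sha256

# Later, before restoring:
sha256sum -c /tmp/ckdemo/backup.tar.gz.sha256 && echo "checksum OK"
```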

8. Cloud Backup

# AWS S3 backup
aws s3 sync /data s3://mybucket/backup/

# Google Cloud Storage
gsutil rsync -r /data gs://mybucket/backup/

# rclone (supports various cloud providers)
rclone sync /data remote:backup/

9. Disaster Recovery Plan (DRP)

# Disaster Recovery Plan Checklist
1. Identify critical data and priorities
2. Define RPO (Recovery Point Objective)
3. Define RTO (Recovery Time Objective)
4. Document backup locations and methods
5. Document recovery procedures
6. Conduct regular recovery tests
7. Maintain contact list and roles
8. Regular review and updates

Conclusion

Backups are the lifeline of server operations. With regular backups, routine recovery testing, and a documented disaster recovery plan, you can protect your data in almost any situation. The final part covers security hardening and vulnerability management.