Introduction: Backup is Insurance

When running a home server, problems will eventually occur. Hard drive failures, accidental file deletions, ransomware infections - the risk of data loss is always present. If you've completed the security hardening covered in the previous guide, it's time to prepare for worst-case scenarios.

Many people say "Of course!" when asked if they back up their data, but few have actually tested recovery. Backup isn't just copying files - it's maintaining a recoverable state. In this guide, we'll cover systematic backup strategies along with monitoring systems that help detect problems before they occur.
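Since a backup only counts if it matches the source, a quick way to build the verification habit is a checksum comparison. Here's a minimal self-contained sketch - everything runs under /tmp, and the paths and file names are illustrative stand-ins for your real data and backup:

```shell
#!/bin/bash
# Demo: verify that a copy actually matches its source via checksums.
# Runs entirely in /tmp; adapt the paths to your real source and backup.
SRC=/tmp/verify-demo/src
DST=/tmp/verify-demo/dst
rm -rf /tmp/verify-demo
mkdir -p "$SRC"
echo "important data" > "$SRC/notes.txt"

cp -r "$SRC" "$DST"   # stand-in for your real backup step (rsync, etc.)

# Checksum every file in each tree, then checksum the sorted lists
src_sum=$(cd "$SRC" && find . -type f -exec md5sum {} + | sort | md5sum)
dst_sum=$(cd "$DST" && find . -type f -exec md5sum {} + | sort | md5sum)

if [ "$src_sum" = "$dst_sum" ]; then
    echo "backup verified"
else
    echo "backup differs from source"
fi
```

The same idea scales up: checksum your real backup tree after each run, and a silent corruption or partial copy shows up immediately instead of on restore day.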

1. The 3-2-1 Backup Rule

1.1 What is the 3-2-1 Rule?

The 3-2-1 rule is known as the golden rule of data backup:

  • 3 Copies: Maintain at least 3 copies of your data, including the original.
  • 2 Different Media: Store backups on at least 2 different types of storage media (e.g., internal HDD + external HDD, SSD + NAS).
  • 1 Offsite: Keep at least 1 backup in a physically different location (cloud or server in another region).

Following this rule enables data recovery from most disaster scenarios including fire, theft, and hardware failure.
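The rule can even be checked mechanically. Here's a toy sketch that counts copies, media types, and offsite locations from a hand-maintained inventory - every path, remote name, and media label below is a placeholder for your own setup:

```shell
#!/bin/bash
# Toy 3-2-1 checker over a hand-maintained inventory of copies.
# All entries below are placeholders; list your real locations.
COPIES=("/home/username/data" "/backup/home" "gdrive:homeserver-backup")
MEDIA=("internal-hdd" "external-hdd" "cloud")   # media type of each copy
OFFSITE=1                                       # copies outside your home

total=${#COPIES[@]}
media_types=$(printf '%s\n' "${MEDIA[@]}" | sort -u | wc -l)

if [ "$total" -ge 3 ] && [ "$media_types" -ge 2 ] && [ "$OFFSITE" -ge 1 ]; then
    echo "3-2-1 satisfied: $total copies, $media_types media types, $OFFSITE offsite"
else
    echo "3-2-1 NOT satisfied"
fi
```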

1.2 Applying to Home Servers

Here's a practical example of applying the 3-2-1 rule in a home server environment:

Copy       Storage Location                         Backup Method   Frequency
Original   Server Main Disk                         -               -
Backup 1   Separate Disk or NAS                     rsync           Daily
Backup 2   Cloud (Google Drive, Backblaze, etc.)    rclone          Weekly

2. Local Backup with rsync

2.1 Basic rsync Usage

rsync is the most widely used file synchronization tool in Linux. It's efficient and fast because it only transfers changed files.

# Check rsync installation (usually pre-installed)
rsync --version

# Basic usage
rsync -av /source/directory/ /backup/directory/

# Key options explained
# -a : Archive mode (preserves permissions, ownership, timestamps, etc.)
# -v : Verbose output
# -z : Compress during transfer (useful for remote backups)
# --delete : Delete files from backup if deleted from source
# --exclude : Exclude specific files/directories
# --progress : Show transfer progress

2.2 Practical rsync Examples

# Backup home directory (excluding certain files)
rsync -av --progress \
  --exclude='.cache' \
  --exclude='*.tmp' \
  --exclude='node_modules' \
  /home/username/ /backup/home/

# Backup Docker volumes
rsync -av --progress \
  /var/lib/docker/volumes/ /backup/docker-volumes/

# Backup to remote server (using SSH)
rsync -avz --progress \
  -e "ssh -p 2222" \
  /home/username/ user@remote-server:/backup/

# Mirror sync - makes the destination identical to the source,
# deleting files at the destination that no longer exist in the source (use with caution)
rsync -av --delete /source/ /destination/

2.3 Implementing Incremental Backups

Maintaining date-based incremental backups allows recovery to specific points in time:

#!/bin/bash
# Incremental backup using hard links
BACKUP_DIR="/backup/incremental"
SOURCE_DIR="/home/username"
DATE=$(date +%Y-%m-%d)
LATEST="$BACKUP_DIR/latest"

# Use hard links if previous backup exists
if [ -d "$LATEST" ]; then
    rsync -av --delete \
      --link-dest="$LATEST" \
      "$SOURCE_DIR/" "$BACKUP_DIR/$DATE/"
else
    rsync -av "$SOURCE_DIR/" "$BACKUP_DIR/$DATE/"
fi

# Update latest symlink
rm -f "$LATEST"
ln -s "$BACKUP_DIR/$DATE" "$LATEST"

# Delete backups older than 30 days
find "$BACKUP_DIR" -mindepth 1 -maxdepth 1 -type d -mtime +30 -exec rm -rf {} \;
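If you're curious whether --link-dest really avoids duplicating unchanged files, you can watch the inodes directly. Here's a self-contained demo under /tmp (requires rsync; all paths are throwaway):

```shell
#!/bin/bash
# Demo: --link-dest hard-links unchanged files between snapshots,
# so a second backup of an unchanged file costs no extra disk space.
BASE=/tmp/linkdest-demo
rm -rf "$BASE"
mkdir -p "$BASE/src"
echo "hello" > "$BASE/src/file.txt"

rsync -a "$BASE/src/" "$BASE/backup-1/"                               # first full backup
rsync -a --link-dest="$BASE/backup-1" "$BASE/src/" "$BASE/backup-2/"  # incremental

# Hard-linked files share an inode number
ino1=$(stat -c %i "$BASE/backup-1/file.txt")
ino2=$(stat -c %i "$BASE/backup-2/file.txt")
if [ "$ino1" = "$ino2" ]; then
    echo "hard-linked: both snapshots point at the same data"
fi
```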

3. Automated Backup Scripts (cron)

3.1 Writing a Backup Script

Let's write a systematic backup script:

#!/bin/bash
# /usr/local/bin/backup.sh

# Configuration
BACKUP_ROOT="/backup"
LOG_FILE="/var/log/backup.log"
DATE=$(date +%Y-%m-%d_%H-%M-%S)
DISCORD_WEBHOOK="YOUR_DISCORD_WEBHOOK_URL"

# Log function
log() {
    echo "[$(date '+%Y-%m-%d %H:%M:%S')] $1" | tee -a "$LOG_FILE"
}

# Notification function
notify() {
    local message="$1"
    local status="$2"  # success or error

    if [ "$status" = "error" ]; then
        color="15158332"  # Red
    else
        color="3066993"   # Green
    fi

    curl -H "Content-Type: application/json" \
         -d "{\"embeds\":[{\"title\":\"Backup Notification\",\"description\":\"$message\",\"color\":$color}]}" \
         "$DISCORD_WEBHOOK" 2>/dev/null
}

# Start backup
log "=== Backup Started ==="

# 1. Stop Docker containers (ensure data consistency)
log "Stopping Docker containers..."
docker-compose -f /home/username/docker-compose.yml stop

# 2. Backup important directories
DIRS_TO_BACKUP=(
    "/home/username/data"
    "/var/lib/docker/volumes"
    "/etc/nginx"
    "/etc/letsencrypt"
)

for dir in "${DIRS_TO_BACKUP[@]}"; do
    if [ -d "$dir" ]; then
        dest_dir="$BACKUP_ROOT/daily/$(basename "$dir")"
        log "Backing up: $dir -> $dest_dir"
        rsync -av --delete "$dir/" "$dest_dir/" 2>&1 | tee -a "$LOG_FILE"
    fi
done

# 3. Database backup
log "Backing up database..."
mkdir -p "$BACKUP_ROOT/database"
# Note: a password on the command line is visible in the process list;
# consider a credentials file (e.g., /root/.my.cnf) instead
docker exec mysql-container mysqldump -u root -pPASSWORD --all-databases > "$BACKUP_ROOT/database/mysql_$DATE.sql"

# 4. Restart Docker containers
log "Restarting Docker containers..."
docker-compose -f /home/username/docker-compose.yml start

# 5. Clean up old backups
log "Cleaning up old backups..."
find "$BACKUP_ROOT/database" -name "*.sql" -mtime +7 -delete

# Backup complete
BACKUP_SIZE=$(du -sh "$BACKUP_ROOT" | cut -f1)
log "=== Backup Complete (Total size: $BACKUP_SIZE) ==="
notify "Backup completed successfully.\nTotal size: $BACKUP_SIZE" "success"

3.2 Setting Up cron

# Grant execute permission to script
sudo chmod +x /usr/local/bin/backup.sh

# Edit crontab
sudo crontab -e

# Run backup daily at 3 AM
0 3 * * * /usr/local/bin/backup.sh

# Run cloud backup every Sunday at 4 AM
0 4 * * 0 /usr/local/bin/cloud-backup.sh

# Check cron logs
grep CRON /var/log/syslog

Tip: To verify cron jobs work correctly, first schedule them at a short interval, confirm they run normally, then switch to your desired schedule.
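Before trusting cron with a script at all, it's worth a quick pre-flight check: parse it, confirm the execute bit, and do one manual run. The sketch below creates a throwaway script in /tmp so you can try the checks safely; point SCRIPT at your real backup.sh when using it for real:

```shell
#!/bin/bash
# Pre-flight checks for a script you are about to schedule with cron.
# Uses a throwaway script in /tmp; substitute your real backup.sh.
SCRIPT=/tmp/backup-test.sh
cat > "$SCRIPT" <<'EOF'
#!/bin/bash
echo "backup ran at $(date)"
EOF
chmod +x "$SCRIPT"

bash -n "$SCRIPT" && echo "syntax OK"        # parse only, nothing executes
[ -x "$SCRIPT" ] && echo "execute bit OK"    # cron won't run it otherwise
"$SCRIPT" > /dev/null && echo "manual run OK (exit code 0)"
```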

4. Cloud Backup (rclone)

4.1 Installing and Configuring rclone

rclone is a tool that lets you manage various cloud storage services from the command line. It supports dozens of services including Google Drive, Dropbox, OneDrive, AWS S3, and Backblaze B2.

# Install rclone
curl https://rclone.org/install.sh | sudo bash

# Or install via apt (the packaged version may be older)
sudo apt install rclone

# Configure rclone (interactive)
rclone config

4.2 Google Drive Configuration Example

# After running rclone config
# n) New remote
# name> gdrive
# Storage> drive (Google Drive)
# client_id> (Press Enter for default or enter your own ID)
# client_secret> (Press Enter for default)
# scope> 1 (Full access)
# root_folder_id> (Press Enter)
# service_account_file> (Press Enter)
# Edit advanced config? n
# Use auto config? y (authenticate via web browser)
# Configure as team drive? n

# Verify configuration
rclone listremotes

# Test connection
rclone lsd gdrive:

4.3 rclone Backup Script

#!/bin/bash
# /usr/local/bin/cloud-backup.sh

BACKUP_SOURCE="/backup/daily"
REMOTE_NAME="gdrive"
REMOTE_PATH="homeserver-backup"
LOG_FILE="/var/log/cloud-backup.log"
DATE=$(date +%Y-%m-%d)

log() {
    echo "[$(date '+%Y-%m-%d %H:%M:%S')] $1" | tee -a "$LOG_FILE"
}

log "=== Cloud Backup Started ==="

# Sync backup directory
rclone sync "$BACKUP_SOURCE" "$REMOTE_NAME:$REMOTE_PATH/$DATE" \
    --progress \
    --transfers 4 \
    --checkers 8 \
    --log-file="$LOG_FILE" \
    --log-level INFO

# Delete old cloud backups (files older than 30 days)
log "Cleaning up old cloud backups..."
rclone delete "$REMOTE_NAME:$REMOTE_PATH" \
    --min-age 30d \
    --log-file="$LOG_FILE"

# Remove the now-empty dated directories left behind
rclone rmdirs "$REMOTE_NAME:$REMOTE_PATH" \
    --leave-root \
    --log-file="$LOG_FILE"

# Check usage (requires jq)
USAGE=$(rclone about "$REMOTE_NAME:" --json | jq -r '.used // 0')
log "=== Cloud Backup Complete (Usage: $USAGE bytes) ==="

4.4 Encrypted Backups

It's a good practice to encrypt sensitive data before uploading to the cloud:

# Configure rclone crypt
rclone config
# n) New remote
# name> gdrive-crypt
# Storage> crypt
# remote> gdrive:encrypted-backup
# filename_encryption> standard
# directory_name_encryption> true
# Password> (enter a strong password)
# Salt> (Press Enter for auto-generation)

# Use encrypted backup
rclone sync /backup/sensitive gdrive-crypt:

5. System Monitoring Tools

5.1 htop - Process Monitor

htop is an interactive process viewer for the terminal:

# Install htop
sudo apt install htop

# Run
htop

# Key shortcuts
# F2: Setup
# F3: Search
# F4: Filter
# F5: Tree view
# F6: Sort
# F9: Send signal (kill)
# F10: Quit

5.2 glances - Comprehensive System Monitor

glances shows more information at a glance than htop:

# Install glances
sudo apt install glances

# Or install via pip (latest version)
pip3 install glances

# Basic run
glances

# Run in web server mode (remote monitoring)
glances -w -p 61208

# Include Docker container monitoring
glances --enable-plugin docker

# Export stats to a CSV file
glances --export csv --export-csv-file /var/log/glances.csv

5.3 netdata - Real-time Web Dashboard

netdata provides real-time monitoring with a beautiful web interface:

# Install netdata (official script)
wget -O /tmp/netdata-kickstart.sh https://my-netdata.io/kickstart.sh
bash /tmp/netdata-kickstart.sh

# Check service status
sudo systemctl status netdata

# Access web interface (default port: 19999)
# http://ServerIP:19999

# Configuration file location
# /etc/netdata/netdata.conf

# Alert configuration: /etc/netdata/health_alarm_notify.conf
# Configure Discord alerts
sudo nano /etc/netdata/health_alarm_notify.conf

# Add/modify the following
SEND_DISCORD="YES"
DISCORD_WEBHOOK_URL="YOUR_DISCORD_WEBHOOK_URL"
DEFAULT_RECIPIENT_DISCORD="alerts"

6. Grafana + Prometheus Monitoring Stack

6.1 Installation with Docker Compose

For a professional monitoring setup, the Grafana and Prometheus combination is the standard:

# docker-compose-monitoring.yml
version: '3.8'

services:
  prometheus:
    image: prom/prometheus:latest
    container_name: prometheus
    volumes:
      - ./prometheus/prometheus.yml:/etc/prometheus/prometheus.yml
      - prometheus_data:/prometheus
    command:
      - '--config.file=/etc/prometheus/prometheus.yml'
      - '--storage.tsdb.path=/prometheus'
      - '--web.console.libraries=/usr/share/prometheus/console_libraries'
      - '--web.console.templates=/usr/share/prometheus/consoles'
      - '--storage.tsdb.retention.time=15d'
    ports:
      - "9090:9090"
    restart: unless-stopped

  grafana:
    image: grafana/grafana:latest
    container_name: grafana
    volumes:
      - grafana_data:/var/lib/grafana
    environment:
      - GF_SECURITY_ADMIN_USER=admin
      - GF_SECURITY_ADMIN_PASSWORD=your_secure_password
      - GF_USERS_ALLOW_SIGN_UP=false
    ports:
      - "3000:3000"
    restart: unless-stopped
    depends_on:
      - prometheus

  node-exporter:
    image: prom/node-exporter:latest
    container_name: node-exporter
    volumes:
      - /proc:/host/proc:ro
      - /sys:/host/sys:ro
      - /:/rootfs:ro
    command:
      - '--path.procfs=/host/proc'
      - '--path.sysfs=/host/sys'
      - '--collector.filesystem.mount-points-exclude=^/(sys|proc|dev|host|etc)($$|/)'
    ports:
      - "9100:9100"
    restart: unless-stopped

  cadvisor:
    image: gcr.io/cadvisor/cadvisor:latest
    container_name: cadvisor
    volumes:
      - /:/rootfs:ro
      - /var/run:/var/run:ro
      - /sys:/sys:ro
      - /var/lib/docker/:/var/lib/docker:ro
    ports:
      - "8080:8080"
    restart: unless-stopped

volumes:
  prometheus_data:
  grafana_data:

6.2 Prometheus Configuration

# prometheus/prometheus.yml
global:
  scrape_interval: 15s
  evaluation_interval: 15s

alerting:
  alertmanagers:
    - static_configs:
        - targets: []

rule_files: []

scrape_configs:
  - job_name: 'prometheus'
    static_configs:
      - targets: ['localhost:9090']

  - job_name: 'node-exporter'
    static_configs:
      - targets: ['node-exporter:9100']

  - job_name: 'cadvisor'
    static_configs:
      - targets: ['cadvisor:8080']

# Start the stack
docker-compose -f docker-compose-monitoring.yml up -d

# Access Grafana: http://ServerIP:3000
# Default account: admin / your_secure_password

# Add Prometheus data source in Grafana
# Configuration > Data Sources > Add data source > Prometheus
# URL: http://prometheus:9090

6.3 Useful Grafana Dashboards

You can create Grafana dashboards from scratch or import community-shared dashboards:

  • Node Exporter Full (ID: 1860): Complete system monitoring
  • Docker Container Monitoring (ID: 193): Per-container CPU, memory, and network usage (via cAdvisor)
  • Nginx (ID: 9614): Nginx web server monitoring

# Import dashboard
# Grafana > Dashboards > Import
# Enter Dashboard ID and click Load
# Select Prometheus data source and click Import

7. Setting Up Notifications

7.1 Discord Webhook Notifications

#!/bin/bash
# /usr/local/bin/discord-notify.sh

WEBHOOK_URL="YOUR_DISCORD_WEBHOOK_URL"

send_notification() {
    local title="$1"
    local message="$2"
    local color="${3:-3447003}"  # Default blue

    curl -H "Content-Type: application/json" \
         -d "{
             \"embeds\": [{
                 \"title\": \"$title\",
                 \"description\": \"$message\",
                 \"color\": $color,
                 \"timestamp\": \"$(date -u +%Y-%m-%dT%H:%M:%SZ)\"
             }]
         }" \
         "$WEBHOOK_URL"
}

# Usage examples
# send_notification "Server Alert" "Backup completed successfully." "3066993"  # Green
# send_notification "Warning" "Disk usage exceeded 90%" "15158332"  # Red

7.2 Telegram Bot Notifications

#!/bin/bash
# /usr/local/bin/telegram-notify.sh

BOT_TOKEN="YOUR_BOT_TOKEN"
CHAT_ID="YOUR_CHAT_ID"

send_telegram() {
    local message="$1"

    curl -s -X POST "https://api.telegram.org/bot$BOT_TOKEN/sendMessage" \
         -d chat_id="$CHAT_ID" \
         -d text="$message" \
         -d parse_mode="HTML"
}

# Usage example
# send_telegram "Server Alert
# Backup completed successfully.
# Time: $(date)"

7.3 System Status Alert Script

#!/bin/bash
# /usr/local/bin/system-check.sh

source /usr/local/bin/discord-notify.sh

# Check disk usage
DISK_USAGE=$(df -h / | awk 'NR==2 {print $5}' | sed 's/%//')
if [ "$DISK_USAGE" -gt 85 ]; then
    send_notification "Disk Warning" "Root partition usage: ${DISK_USAGE}%" "15158332"
fi

# Check memory usage
MEM_USAGE=$(free | awk '/Mem:/ {printf "%.0f", $3/$2 * 100}')
if [ "$MEM_USAGE" -gt 90 ]; then
    send_notification "Memory Warning" "Memory usage: ${MEM_USAGE}%" "15158332"
fi

# Check service status
SERVICES=("docker" "nginx" "ssh")
for service in "${SERVICES[@]}"; do
    if ! systemctl is-active --quiet "$service"; then
        send_notification "Service Down" "$service service has stopped!" "15158332"
    fi
done

# Check system load
LOAD=$(uptime | awk -F'load average:' '{print $2}' | awk -F',' '{print $1}' | tr -d ' ')
CPU_CORES=$(nproc)
LOAD_INT=${LOAD%.*}
if [ "$LOAD_INT" -gt "$CPU_CORES" ]; then
    send_notification "Load Warning" "System load: $LOAD (CPU cores: $CPU_CORES)" "15105570"
fi

# Register in cron (check every 5 minutes)
sudo crontab -e
*/5 * * * * /usr/local/bin/system-check.sh

8. Disaster Recovery Procedures

8.1 Disaster Response Checklist

Prepare a checklist in advance so you don't panic during server failures:

  1. Identify the Problem: Which services are affected?
  2. Check Logs: Review journalctl, /var/log/
  3. Check Resources: CPU, memory, disk space
  4. Check Network: Connection status, firewall rules
  5. Recent Changes: Any updates or configuration changes?
  6. Restart Services: Restart the affected service or server
  7. Rollback: Restore to previous state if necessary
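Steps 1-5 of the checklist can be partly automated so that during an incident you're reading one report instead of typing commands under stress. Here's a rough sketch - each command is guarded since not every tool exists on every box, and the report path is arbitrary:

```shell
#!/bin/bash
# Rough triage snapshot mirroring the checklist: failed services,
# resources, load, and recent changes, collected into one report file.
REPORT=/tmp/triage-report.txt
{
    echo "== Failed services =="
    systemctl --failed --no-pager 2>/dev/null || echo "(systemctl unavailable)"
    echo "== Disk space =="
    df -h /
    echo "== Memory =="
    free -h 2>/dev/null || echo "(free unavailable)"
    echo "== Load =="
    uptime 2>/dev/null || echo "(uptime unavailable)"
    echo "== Recent package changes =="
    tail -n 20 /var/log/dpkg.log 2>/dev/null || echo "(no dpkg log)"
} > "$REPORT" 2>&1
echo "Triage report written to $REPORT"
```

Run it first thing when something breaks, then work down the checklist with the report in front of you; you can also feed the same file to the Discord notification script from section 7.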

8.2 Recovering from Backups

# Restore from rsync backup
sudo rsync -av --progress /backup/daily/data/ /home/username/data/

# Database recovery
docker exec -i mysql-container mysql -u root -pPASSWORD < /backup/database/mysql_2026-01-22.sql

# Restore from cloud backup
rclone copy gdrive:homeserver-backup/2026-01-22 /restore/ --progress

# Docker volume recovery
sudo systemctl stop docker
sudo rsync -av /backup/docker-volumes/ /var/lib/docker/volumes/
sudo systemctl start docker

8.3 The Importance of Recovery Testing

If you only back up and never test recovery, you might panic when a real incident occurs. Conduct regular recovery tests:

#!/bin/bash
# /usr/local/bin/recovery-test.sh
# Recovery test script example

TEST_DIR="/tmp/recovery-test-$(date +%Y%m%d)"
mkdir -p "$TEST_DIR"

echo "=== Backup Recovery Test Started ==="

# 1. Local backup test
echo "Recovering sample file from local backup..."
rsync -av /backup/daily/data/sample-file.txt "$TEST_DIR/"

if [ -f "$TEST_DIR/sample-file.txt" ]; then
    echo "Local backup recovery successful"
else
    echo "Local backup recovery FAILED!"
fi

# 2. Cloud backup test
echo "Recovering sample file from cloud backup..."
rclone copy gdrive:homeserver-backup/latest/sample-file.txt "$TEST_DIR/cloud/"

if [ -f "$TEST_DIR/cloud/sample-file.txt" ]; then
    echo "Cloud backup recovery successful"
else
    echo "Cloud backup recovery FAILED!"
fi

# Cleanup
rm -rf "$TEST_DIR"
echo "=== Recovery Test Complete ==="

Conclusion

Backup and monitoring might go unnoticed in normal times, but they prove their value when problems arise. Even if it feels tedious, build a backup system following the 3-2-1 rule, and set up monitoring to always know your server's status.

Most importantly, maintain "recoverable" backups. Having backup files is meaningless if you can't recover from them. Conduct regular recovery tests and document your recovery procedures.

This concludes the Home Server Complete Guide series. From hardware selection to OS installation, network configuration, Docker, security, and backup and monitoring - we've covered the essential knowledge for home server operation. We hope this guide helps you on your home server journey.

If you have questions or concerns, please leave a comment!