MySQL & Database Design

Backup & Recovery

Database backups are your safety net against data loss from hardware failure, software bugs, human error, or security breaches. In this lesson, you'll learn comprehensive backup strategies, restoration procedures, point-in-time recovery, and disaster recovery planning.

Why Backup Matters

Understanding the importance of backups helps you take them seriously:

Common Scenarios Requiring Backup Restoration:

✗ Hardware failure (disk crash, server failure)
✗ Accidental data deletion (DROP TABLE, DELETE without WHERE)
✗ Software bugs corrupting data
✗ Ransomware attacks
✗ Natural disasters
✗ Human error during maintenance
✗ Database corruption
✗ Migration to a new server

The 3-2-1 Backup Rule: "If you don't have at least 3 copies of your data on 2 different storage media with 1 copy offsite, you don't have a backup."
Critical: Backups are only useful if you can restore them. Regularly test your backup restoration process to ensure backups are valid and complete.

Types of Backups

Understanding the different backup types helps you choose the right strategy; a sample schedule combining them follows the list below:

1. Full Backup
   - Complete copy of the entire database
   - Pros: Simple, fast restore
   - Cons: Large size, time-consuming
   - Frequency: Weekly or monthly

2. Incremental Backup
   - Only the changes since the last backup
   - Pros: Fast, small size
   - Cons: Slower restore (needs the full backup plus every increment)
   - Frequency: Daily or hourly

3. Differential Backup
   - Changes since the last full backup
   - Pros: Faster restore than incremental
   - Cons: Larger than incremental
   - Frequency: Daily

4. Logical Backup
   - SQL statements to recreate the data
   - Example: mysqldump output
   - Pros: Human-readable, portable
   - Cons: Slower for large databases

5. Physical Backup
   - Copy of the database files
   - Example: File system copy
   - Pros: Fast for large databases
   - Cons: Platform-specific
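
For example, a full + incremental strategy is often implemented as a weekly mysqldump plus nightly binary-log rotation. The crontab sketch below is illustrative only: the paths, schedule, backup_user account, and '***' password placeholder are assumptions, and the binary-log technique is covered in detail later in this lesson.

# Hypothetical crontab entries: weekly full backup plus nightly binary-log
# rotation acting as the incremental layer (each entry is a single line).

# Full logical backup every Sunday at 01:00, starting a fresh binary log
0 1 * * 0 mysqldump -u backup_user -p'***' --single-transaction --flush-logs --all-databases | gzip > /var/backups/mysql/full_$(date +\%Y\%m\%d).sql.gz

# Monday-Saturday at 01:00: rotate the binary log and copy the closed logs
# to the backup directory (these are the "incremental" backups)
0 1 * * 1-6 mysqladmin -u backup_user -p'***' flush-logs && cp /var/log/mysql/mysql-bin.* /var/backups/mysql/binlogs/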

Using mysqldump (Logical Backup)

mysqldump is the most common backup tool for MySQL. It creates SQL statements that recreate your database:

Basic mysqldump Commands:

# Backup single database
mysqldump -u root -p database_name > backup.sql

# Backup multiple databases
mysqldump -u root -p --databases db1 db2 db3 > backup.sql

# Backup all databases
mysqldump -u root -p --all-databases > all_backup.sql

# Backup specific tables
mysqldump -u root -p database_name table1 table2 > tables_backup.sql

# Backup with compression
mysqldump -u root -p database_name | gzip > backup.sql.gz

# Backup with timestamp in filename
mysqldump -u root -p database_name > backup_$(date +%Y%m%d_%H%M%S).sql
Tip: Always compress large backups to save disk space. Use gzip for good compression ratio or pigz for parallel compression on multi-core systems.
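
As a quick illustration, and assuming pigz is installed, it slots into the same pipeline shown above (database_name is a placeholder):

# pigz is a drop-in parallel replacement for gzip in the same pipeline
mysqldump -u root -p database_name | pigz > backup.sql.gz

# Decompress and restore the same way you would with gunzip
pigz -dc backup.sql.gz | mysql -u root -p database_name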

Advanced mysqldump Options

Optimize mysqldump for different scenarios:

Production-Ready Backup Command:

# Option notes:
#   --single-transaction       InnoDB consistent snapshot
#   --quick                    Retrieve rows one at a time instead of buffering the whole result set
#   --lock-tables=false        Don't lock tables
#   --routines                 Include stored procedures/functions
#   --triggers                 Include triggers
#   --events                   Include scheduled events
#   --flush-logs               Flush binary logs
#   --master-data=2            Record the binary log position as a comment in the dump
#   --hex-blob                 Use hex format for BLOB columns
mysqldump -u backup_user -p \
  --single-transaction \
  --quick \
  --lock-tables=false \
  --routines \
  --triggers \
  --events \
  --flush-logs \
  --master-data=2 \
  --hex-blob \
  --default-character-set=utf8mb4 \
  database_name > backup.sql

Structure-Only Backup (no data):

mysqldump -u root -p --no-data database_name > schema.sql

Data-Only Backup (no structure):

mysqldump -u root -p --no-create-info database_name > data.sql

Skip specific tables:

mysqldump -u root -p database_name \
  --ignore-table=database_name.logs \
  --ignore-table=database_name.cache > backup.sql

Restoring from mysqldump Backup

Restoration is straightforward with mysqldump backups:

Basic Restoration:

# Restore to an existing database
mysql -u root -p database_name < backup.sql

# Create the database and restore
mysql -u root -p -e "CREATE DATABASE database_name"
mysql -u root -p database_name < backup.sql

# Restore a compressed backup
gunzip < backup.sql.gz | mysql -u root -p database_name

# Restore with progress monitoring (requires the pv utility)
pv backup.sql | mysql -u root -p database_name

Restore all databases:

mysql -u root -p < all_backup.sql

Restore a specific table:

# Extract the table from the full backup first
sed -n '/CREATE TABLE `users`/,/UNLOCK TABLES/p' backup.sql > users_table.sql
mysql -u root -p database_name < users_table.sql
Important: Always restore to a test environment first to verify backup integrity before restoring to production.
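
A minimal way to do that, assuming a scratch database you are free to create and drop (the restore_test database and the users table below are placeholders):

# Restore the backup into a throwaway database and spot-check it
mysql -u root -p -e "CREATE DATABASE restore_test"
gunzip < backup.sql.gz | mysql -u root -p restore_test

# Spot-check a table you know should contain data
mysql -u root -p -e "SELECT COUNT(*) AS row_count FROM restore_test.users"

# Clean up once satisfied
mysql -u root -p -e "DROP DATABASE restore_test"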

Automated Backup Script

Create a robust automated backup script for production use:

backup_mysql.sh:

#!/bin/bash
set -o pipefail   # so a mysqldump failure is not masked by gzip's exit status

# Configuration
DB_USER="backup_user"
DB_PASS="SecureBackupPass2024!"
DB_HOST="localhost"
BACKUP_DIR="/var/backups/mysql"
RETENTION_DAYS=30
DATE=$(date +%Y%m%d_%H%M%S)

# Create the backup directory
mkdir -p "$BACKUP_DIR"

# Get the list of databases (exclude system databases)
DATABASES=$(mysql -u $DB_USER -p$DB_PASS -h $DB_HOST -e "SHOW DATABASES;" | \
  grep -Ev "(Database|information_schema|performance_schema|mysql|sys)")

# Backup each database
for DB in $DATABASES; do
  echo "Backing up database: $DB"
  mysqldump -u $DB_USER -p$DB_PASS -h $DB_HOST \
    --single-transaction \
    --routines \
    --triggers \
    --events \
    $DB | gzip > "$BACKUP_DIR/${DB}_$DATE.sql.gz"

  # Check if the backup succeeded
  if [ $? -eq 0 ]; then
    echo "✓ Successfully backed up $DB"
  else
    echo "✗ Failed to backup $DB" >&2
  fi
done

# Delete backups older than the retention period
find "$BACKUP_DIR" -type f -name "*.sql.gz" -mtime +$RETENTION_DAYS -delete

echo "Backup completed: $(date)"
Automation: Schedule this script with cron for daily backups:

0 2 * * * /path/to/backup_mysql.sh >> /var/log/mysql_backup.log 2>&1

This entry runs daily at 2 AM and appends all output to the log file.

Binary Log Backups for Point-in-Time Recovery

Binary logs allow you to recover to a specific moment in time:

Enable Binary Logging (my.cnf):

[mysqld]
log-bin = /var/log/mysql/mysql-bin
expire_logs_days = 7
max_binlog_size = 100M
binlog_format = ROW

View Binary Logs:

SHOW BINARY LOGS;
SHOW MASTER STATUS;

Backup Binary Logs:

# Flush and create a new binary log
mysqladmin -u root -p flush-logs

# Copy binary logs to the backup location
cp /var/log/mysql/mysql-bin.* /backups/binlogs/

Point-in-Time Recovery Process:

# 1. Restore from the full backup
mysql -u root -p database_name < full_backup.sql

# 2. Apply binary logs from the backup time up to the desired point
mysqlbinlog --start-datetime="2024-02-01 00:00:00" \
  --stop-datetime="2024-02-08 14:30:00" \
  /var/log/mysql/mysql-bin.000001 \
  /var/log/mysql/mysql-bin.000002 | \
  mysql -u root -p database_name

# 3. Or use log positions instead of datetimes
mysqlbinlog --start-position=120 --stop-position=332 \
  mysql-bin.000003 | mysql -u root -p database_name

Physical Backups with File System Copy

Physical backups copy the actual database files, which is faster for very large databases:

Cold Backup (server stopped):

# 1. Stop MySQL
sudo systemctl stop mysql

# 2. Copy the data directory
sudo cp -R /var/lib/mysql /backup/mysql_data_backup

# 3. Start MySQL
sudo systemctl start mysql

Hot Backup with Percona XtraBackup:

# Install Percona XtraBackup first

# Full backup
xtrabackup --backup --target-dir=/backup/full

# Incremental backup
xtrabackup --backup \
  --target-dir=/backup/inc1 \
  --incremental-basedir=/backup/full

# Prepare the backup for restoration
xtrabackup --prepare --target-dir=/backup/full

# Restore
sudo systemctl stop mysql
sudo rm -rf /var/lib/mysql/*
xtrabackup --copy-back --target-dir=/backup/full
sudo chown -R mysql:mysql /var/lib/mysql
sudo systemctl start mysql
Warning: Never perform cold backup on production servers during business hours. Always schedule during maintenance windows.

Backup Verification

Always verify backup integrity:

Verification Methods:

# 1. Check file size (should not be 0 or suspiciously small)
ls -lh backup.sql.gz

# 2. Test decompression
gunzip -t backup.sql.gz
echo $?   # Should return 0 for success

# 3. Perform a test restoration
mysql -u root -p test_restore_db < backup.sql

# 4. Verify table and row counts
mysql -u root -p -e "USE test_restore_db; \
  SELECT COUNT(*) FROM information_schema.TABLES \
  WHERE TABLE_SCHEMA = 'test_restore_db';"

# 5. Checksum verification script
#!/bin/bash
DB="production_db"
BACKUP="backup.sql"

# Get checksums from the live database
mysql -u root -p $DB -e "CHECKSUM TABLE users, orders, products" > live_checksums.txt

# Restore the backup to a test database
mysql -u root -p test_db < $BACKUP

# Get checksums from the restored database
mysql -u root -p test_db -e "CHECKSUM TABLE users, orders, products" > backup_checksums.txt

# Compare
diff live_checksums.txt backup_checksums.txt

Off-Site and Cloud Backup

Store backups in multiple locations for disaster recovery:

1. Copy to a remote server via rsync:

rsync -avz --delete /var/backups/mysql/ \
  user@remote-server:/backups/mysql/

2. Upload to AWS S3:

# Install the AWS CLI first
aws s3 sync /var/backups/mysql/ s3://my-backup-bucket/mysql/

3. Automated off-site backup script:

#!/bin/bash
BACKUP_FILE="/var/backups/mysql/backup_$(date +%Y%m%d).sql.gz"
S3_BUCKET="s3://company-backups/mysql/"

# Create the backup
mysqldump -u root -p --all-databases | gzip > $BACKUP_FILE

# Upload to S3
aws s3 cp $BACKUP_FILE $S3_BUCKET

# Upload to a remote server
scp $BACKUP_FILE backup@remote-server:/backups/

# Verify the upload
if [ $? -eq 0 ]; then
  echo "✓ Backup uploaded successfully"
else
  echo "✗ Backup upload failed!" | mail -s "Backup Failed" admin@company.com
fi
Best Practice: Encrypt backups before uploading to cloud storage. Use GPG or OpenSSL encryption with a secure key management system.

Encrypted Backups

Protect sensitive data in backups with encryption:

Using OpenSSL:

# Backup with encryption
mysqldump -u root -p database_name | \
  openssl enc -aes-256-cbc -salt -pbkdf2 -out backup_encrypted.sql.enc

# Restore from the encrypted backup
openssl enc -aes-256-cbc -d -pbkdf2 -in backup_encrypted.sql.enc | \
  mysql -u root -p database_name

Using GPG:

# Backup with GPG encryption
mysqldump -u root -p database_name | \
  gzip | gpg --symmetric --cipher-algo AES256 -o backup.sql.gz.gpg

# Restore from the GPG-encrypted backup
gpg --decrypt backup.sql.gz.gpg | gunzip | \
  mysql -u root -p database_name
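
Tying this back to the off-site best practice above, here is a minimal sketch that encrypts the dump before it leaves the server and uploads only the encrypted file; the bucket name, path, and date-stamped filename are placeholders, and the commands reuse the GPG and AWS CLI examples from this and the previous section:

BACKUP_FILE="/var/backups/mysql/all_$(date +%Y%m%d).sql.gz.gpg"

# Dump, compress, and GPG-encrypt in one pipeline (prompts for a passphrase)
mysqldump -u root -p --all-databases | gzip | \
  gpg --symmetric --cipher-algo AES256 -o "$BACKUP_FILE"

# Upload only the encrypted file
aws s3 cp "$BACKUP_FILE" s3://company-backups/mysql/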

Disaster Recovery Plan

Have a documented recovery plan for different failure scenarios; a short sketch showing how the RPO translates into a backup schedule follows the checklist below:

Recovery Time Objective (RTO): the maximum acceptable downtime (example: 4 hours)

Recovery Point Objective (RPO): the maximum acceptable data loss (example: 1 hour of data)

Disaster Recovery Checklist:

1. Identify the failure type
   □ Hardware failure
   □ Data corruption
   □ Accidental deletion
   □ Security breach

2. Assess the damage
   □ What data is affected?
   □ When did the problem occur?
   □ Is a recent backup available?

3. Restore from the appropriate backup
   □ Latest full backup
   □ Apply incremental/binary logs
   □ Verify data integrity

4. Test the restored database
   □ Run application tests
   □ Verify critical data
   □ Check for corruption

5. Switch to the restored database
   □ Update application config
   □ Redirect traffic
   □ Monitor for issues

6. Post-incident review
   □ Document what happened
   □ Update backup procedures
   □ Improve monitoring
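
To make the RPO figure concrete, here is an illustrative calculation and a hypothetical crontab entry that enforces it by reusing the binary-log and rsync commands from earlier sections; the backup_user account, '***' password placeholder, paths, and remote host are assumptions:

# Worked example (illustrative numbers):
#   Full backup:          nightly at 02:00
#   Binary-log shipping:  every 60 minutes
#   Worst-case data loss: ~60 minutes  =>  meets a 1-hour RPO
#
# Hypothetical crontab entry (one line): rotate the binary log and copy the
# closed logs off the server every hour
0 * * * * mysqladmin -u backup_user -p'***' flush-logs && rsync -az /var/log/mysql/mysql-bin.* backup@remote-server:/backups/binlogs/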

Backup Monitoring and Alerts

Monitor backup jobs and get alerted on failures:

Backup monitoring script:

#!/bin/bash
BACKUP_DIR="/var/backups/mysql"
MAX_AGE_HOURS=26   # Alert if no backup is newer than 26 hours
ADMIN_EMAIL="admin@company.com"

# Find the most recent backup (within the allowed window)
LATEST_BACKUP=$(find $BACKUP_DIR -type f -name "*.sql.gz" -mmin -$((MAX_AGE_HOURS*60)))

if [ -z "$LATEST_BACKUP" ]; then
  echo "WARNING: No recent MySQL backup found!" | \
    mail -s "MySQL Backup Alert" $ADMIN_EMAIL
  exit 1
fi

# Check the backup size (should be > 1MB for most databases)
BACKUP_SIZE=$(du -m "$LATEST_BACKUP" | cut -f1)
if [ $BACKUP_SIZE -lt 1 ]; then
  echo "WARNING: MySQL backup file is suspiciously small: ${BACKUP_SIZE}MB" | \
    mail -s "MySQL Backup Alert" $ADMIN_EMAIL
  exit 1
fi

echo "✓ Backup check passed: $LATEST_BACKUP (${BACKUP_SIZE}MB)"

Practical Exercise:

Scenario: Create a complete backup and recovery solution:

  1. Perform a full database backup
  2. Make some changes to the data
  3. Perform an incremental backup (binary log)
  4. Simulate data loss (drop a table)
  5. Restore from full backup
  6. Apply binary logs to recover to just before data loss

Solution:

# Step 1: Full backup at 10:00 AM
mysqldump -u root -p --single-transaction \
  --flush-logs --master-data=2 \
  mydb > backup_full_10am.sql

# Step 2: Make changes
mysql -u root -p mydb -e "INSERT INTO orders (customer, amount) \
  VALUES ('John', 150.00), ('Mary', 200.00);"

# Step 3: Binary logs automatically record the changes

# Step 4: Disaster strikes at 2:30 PM
mysql -u root -p mydb -e "DROP TABLE orders;"   # Accidental!

# Step 5: Restore from the full backup
mysql -u root -p mydb < backup_full_10am.sql

# Step 6: Apply binary logs up to 2:29 PM (just before the DROP)
mysqlbinlog --stop-datetime="2024-02-08 14:29:00" \
  /var/log/mysql/mysql-bin.000001 | mysql -u root -p mydb

# Verify the recovery
mysql -u root -p mydb -e "SELECT * FROM orders;"
# Should show John's and Mary's orders

Summary

In this lesson, you learned:

  • Importance of backups and the 3-2-1 backup rule
  • Different backup types: full, incremental, differential, logical, physical
  • Using mysqldump for logical backups with advanced options
  • Restoration procedures and verification
  • Automated backup scripts with error handling
  • Binary logs for point-in-time recovery
  • Physical backups with Percona XtraBackup
  • Off-site and cloud backup strategies
  • Encryption for backup security
  • Disaster recovery planning and procedures
Next Up: In the next lesson, we'll explore importing and exporting data including LOAD DATA INFILE, SELECT INTO OUTFILE, CSV handling, and bulk data operations.