MySQL Online Backup with xtrabackup

My last post about backups was a while ago. Although it is still valid, the method described there is not well suited for daily backups. For that we can use XtraBackup.

With XtraBackup, you can take online backups without interrupting the server workload (it only requires a brief lock to capture the binlog coordinates).
Today I’ll show how to take a complete backup.

Install:

To install it, I advise you to use the Percona Yum / Apt repositories:

Centos / Redhat:

sudo yum install http://ift.tt/2jkmf2a
sudo yum install percona-xtrabackup-24

Debian / Ubuntu:

wget https://repo.percona.com/apt/percona-release_0.1-4.$(lsb_release -sc)_all.deb
sudo dpkg -i percona-release_0.1-4.$(lsb_release -sc)_all.deb
sudo apt-get update
sudo apt-get install percona-xtrabackup-24
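
To confirm the installation, we can check the version (the exact output will vary with the release you get):

xtrabackup --version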

Backup:

To take the backup, we will use the innobackupex script:

innobackupex --user=root --password='Passw0rd!' /backups/

The script will produce some log messages; we need to verify that the output ends with:

...
170429 21:07:12 completed OK!
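
innobackupex creates a timestamped subdirectory under the target path for each run, so a quick sanity check is to list the backup directory:

ls -lh /backups/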

Parallel:

XtraBackup allows the tables to be copied in parallel; you can set the number of threads with the --parallel option:

innobackupex --user=root --password='Passw0rd!' --parallel=8 /backups/

We can observe that each thread receives a number, which is displayed within []:

170429 21:12:27 [05] Copying ./ibdata1 to /backups/2017-04-29_21-12-27/ibdata1
170429 21:12:27 [06] Copying ./mysql/plugin.ibd to /backups/2017-04-29_21-12-27/mysql/plugin.ibd
170429 21:12:27 [06]        ...done
170429 21:12:27 [02] Copying ./mysql/servers.ibd to /backups/2017-04-29_21-12-27/mysql/servers.ibd
170429 21:12:27 [02]        ...done
170429 21:12:27 [03] Copying ./mysql/help_topic.ibd to /backups/2017-04-29_21-12-27/mysql/help_topic.ibd
170429 21:12:27 [07] Copying ./mysql/help_category.ibd to /backups/2017-04-29_21-12-27/mysql/help_category.ibd
170429 21:12:27 [07]        ...done
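
To see how much --parallel helps on a given server, we can simply time a run with and without it (an illustrative comparison; the gain depends on the number of tables and on disk speed):

time innobackupex --user=root --password='Passw0rd!' /backups/
time innobackupex --user=root --password='Passw0rd!' --parallel=8 /backups/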

Compress:

XtraBackup also allows you to compress the backup using --compress and --compress-threads (normally set to the same number of threads as --parallel):

innobackupex --user=root --password='Passw0rd!' --parallel=8 --compress --compress-threads=8 /backups/

For example, a backup that took 702M of disk space now takes only 387M:

702M	/backups/2017-04-29_21-12-27
387M	/backups/2017-04-29_21-15-53
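
Sizes like these can be compared with du, for example:

du -sh /backups/*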

Restoring:

To restore the backup, we need to (1) decompress it with the --decompress option if we compressed it before (qpress is required for that) and (2) apply the log of transactions that happened while the backup was being taken (basically the same crash recovery that InnoDB performs when MySQL starts).
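
If qpress is not already installed, it is usually available from the Percona repositories we configured earlier:

sudo yum install qpress        # Centos / Redhat
sudo apt-get install qpress    # Debian / Ubuntu

With qpress in place, the decompress and apply-log steps are: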

innobackupex --decompress /backups/2017-04-29_21-18-04/
innobackupex --apply-log --use-memory=4G /backups/2017-04-29_21-18-04

To speed up the --apply-log process, we can set --use-memory, which behaves like the InnoDB buffer pool.

All the .qp files will remain in the directory; we need to remove them manually. We can do that with the command below:

find /backups/2017-04-29_21-18-04  -name "*.qp" -exec rm -f {} \;

Now we only need to copy the restored backup to the MySQL datadir and set mysql as the owner and group of all files in it, as shown in the sketch below.
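
A minimal sketch of that last step, assuming MySQL is stopped, the datadir is /var/lib/mysql and it is empty (innobackupex refuses to copy into a non-empty datadir), and the service is called mysqld:

sudo systemctl stop mysqld
innobackupex --copy-back /backups/2017-04-29_21-18-04   # copies the files into the datadir from my.cnf
sudo chown -R mysql:mysql /var/lib/mysql                # mysql must own the restored files
sudo systemctl start mysqld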

That is it for today. Remember, if you don’t test your backups, YOU DON’T HAVE A BACKUP!
