Monday, April 27, 2009

Quick and Dirty MySQL Backup

Until recently, the MySQL databases I work with contained only data that could be retrieved from other sources. Most of it is batched in from flat files or from another database. It would be inconvenient to reload a couple of months' worth of data, but since these databases are not mission critical, the business could operate without them for a couple of days. Lately, though, we've been implementing some semi-critical systems that need a reasonably quick recovery.

The requirements for the project were that the database had to stay up during the backup and that losing a day's worth of data was acceptable. All of my regular Oracle readers are cringing at the moment, but hey, those were the rules I was working with.

My first thought was to use mysqlhotcopy because it backs up the actual physical files. However, mysqlhotcopy only works on MyISAM tables, and we use InnoDB extensively.

My next choice was mysqldump. mysqldump basically takes the entire database and dumps a text file containing the DDL and DML that will re-create it. Coming from an Oracle background, I knew there were shortcomings to dumping the entire database, but I hoped I could mitigate them.

The first hurdle was security. I specifically turn off unauthenticated root access on my databases, but I needed to be able to read all the tables to do a backup. I don't want to hard-code my root password, or any password, in a script, as I don't have suicidal tendencies (diagnosed, anyway). So I created a user called backup that could only log in from the server machine itself, but could log in unauthenticated.
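I won't claim this is the exact statement I ran, but a minimal sketch of such a grant looks something like the following (the privileges are a guess at the least mysqldump needs, and leaving off the IDENTIFIED BY clause is what makes the account passwordless):


# run on the server itself as root; the GRANT creates the account if it doesn't exist
# SELECT is needed to dump the data; LOCK TABLES covers dumps that don't use --single-transaction
mysql -u root -p -e "GRANT SELECT, LOCK TABLES ON *.* TO 'backup'@'localhost'"
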

The next thing I had to figure out was how to get a consistent view of the data. I knew that my developers preferred InnoDB for its referential integrity features, and getting inconsistent data would be disastrous. Fortunately, one of the flags to mysqldump is --single-transaction, which essentially takes a snapshot in time.

So I wrote a script around mysqldump with --single-transaction and dumped my entire database to disk. Every now and again, I encountered an "Error 2013: Lost connection to MySQL server during query when dumping table `XYZ` at row: 12345". The row number changed each time, so I figured it had something to do with either activity in the database or memory. I could rerun the command and it usually finished the second or third time.

After the third straight day of my backup failing, I decided to research it a little more. mysqldump has a flag called --quick, which retrieves rows from the server one at a time and writes them straight out instead of buffering entire tables in memory. I put this flag in my backup script and the script started finishing more consistently.

The last hurdle was having enough space on disk to store my backups. Since the backup file is really just a text file, I decided to pipe the output through gzip to reduce its size.

Currently, my quick and dirty backup script is a wrapper around the following command:


mysqldump --all-databases --quick --single-transaction -u backup | gzip > mybackup.sql.gz


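The real script isn't much more than that one line with some housekeeping wrapped around it. As a rough sketch of the shape it takes (the paths, retention period, and error handling below are illustrative, not my production values):


#!/bin/bash
# Nightly dump of every database, compressed and date-stamped.
# Paths and the 7-day retention are placeholders, not production values.
set -o pipefail        # don't let gzip's success mask a mysqldump failure

BACKUP_DIR=/u01/mysql_backups
STAMP=$(date +%Y%m%d)

if ! mysqldump --all-databases --quick --single-transaction -u backup \
     | gzip > "$BACKUP_DIR/mybackup_$STAMP.sql.gz"; then
    echo "MySQL backup failed on $STAMP" >&2
    exit 1
fi

# keep roughly a week's worth of dumps on disk
find "$BACKUP_DIR" -name 'mybackup_*.sql.gz' -mtime +7 -exec rm {} \;
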
We're adopting MySQL at a blistering pace, so I'm sure I'll need to make changes in the future. For right now, though, it gets the job done.
Posted by Jeff Hunter at 8:43 PM
10 comments:

Anonymous said...

Hi,

Can you post SQL string you used to create backup user please.

Roland
11/07/2006 06:56:00 AM
gandolf989 said...

In what situations is MySQL better than Oracle?
11/07/2006 10:10:00 AM
Anonymous said...

While your database is still relatively small, you might want to look into replication, and backing up the replicated machine.

The replicated machine will contain an exact snapshot of the live machine in case it goes down, and the backups of the replicated machine are there in case someone accidentally deletes or overwrites data.

/bryan
11/07/2006 10:43:00 AM
Anonymous said...

Not great, I'm afraid. This will cause MyISAM tables to lock for the duration of the dump and will take ages on InnoDB tables. You should really consider moving everything to InnoDB and buying InnoDB Hot Backup.
11/07/2006 12:20:00 PM
Jeff Hunter said...

This will cause MyISAM tables to lock for the duration of the dump and will take ages on InnoDB tables. You should really consider moving everything to InnoDB and buying InnoDB Hot Backup.
We don't use MyISAM and I don't think 15 minutes is ages. I've investigated InnoDB backup and when the time is right, we'll probably get it. But right now, the quick and dirty method works well with my business rules.
11/07/2006 06:08:00 PM
Anonymous said...

--opt helps :)

why don't you buy innodb hot backup?
11/08/2006 02:11:00 PM
Jeff Hunter said...

why don't you buy innodb hot backup?
When the data becomes more critical and the MTTR becomes shorter, I certainly will.
11/09/2006 10:39:00 AM
Sheeri said...

Firstly, --opt is the default for mysqldump. It's always used unless you specifically use --skip-opt.

Secondly, why not use Zmanda's hot backup for MySQL? It's free and open source, and quick to set up. And best of all, it doesn't use a proprietary format.

http://www.zmanda.com/backup-mysql.html
http://www.zmanda.com/zrm-mysql-community.html
11/11/2006 08:57:00 AM
Anonymous said...

As Roland said above, what was the command to create the backup user?
12/25/2006 02:44:00 AM
Anonymous said...

What if you want to restore only one database? That would be a pain using --all-databases and gzipping everything.

I wrote a small PHP script to handle all my backup needs. I also feel that a nightly backup is enough and that it's OK to lose one day of data if we have a crash (at least so far), so I made this script: http://iblins01.quidnovo.se/~sebjma/backup.phps

The script is called via crontab during low-traffic times (usually at night) and produces a gzipped file for each database on the system. Backups older than a predefined time limit are deleted from the filesystem.
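In rough shell terms, the approach boils down to something like this (the backup directory, account, and 14-day cutoff are placeholders, not values taken from the PHP script):


#!/bin/bash
# Dump each database to its own gzipped, date-stamped file, then prune old backups.
# Directory, account, and retention below are placeholders, not the PHP script's values.
BACKUP_DIR=/var/backups/mysql
STAMP=$(date +%Y%m%d)

for DB in $(mysql -u backup -N -e 'SHOW DATABASES'); do
    mysqldump --quick --single-transaction -u backup "$DB" \
        | gzip > "$BACKUP_DIR/${DB}_$STAMP.sql.gz"
done

# delete anything older than the retention window
find "$BACKUP_DIR" -name '*.sql.gz' -mtime +14 -exec rm {} \;
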

I hope that someone else might find this handy!
