Tuesday, November 24, 2009

Database Backups

5.9.1. Database Backups

Because MySQL tables are stored as files, it is easy to do a backup. To get a consistent backup, do a LOCK TABLES on the relevant tables, followed by FLUSH TABLES for those tables. See Section 13.4.5, “LOCK TABLES and UNLOCK TABLES Syntax”, and Section 13.5.5.2, “FLUSH Syntax”. You need only a read lock; this allows other clients to continue to query the tables while you are making a copy of the files in the database directory. The FLUSH TABLES statement is needed to ensure that all active index pages are written to disk before you start the backup.
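
For example, assuming two MyISAM tables named t1 and t2 (the names here are placeholders), the sequence from a mysql client might look like this; the file copy itself is done from the shell while the read lock is held:

    mysql> LOCK TABLES t1 READ, t2 READ;
    mysql> FLUSH TABLES t1, t2;
    (copy the t1.* and t2.* files from the database directory in another shell)
    mysql> UNLOCK TABLES;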

To make an SQL-level backup of a table, you can use SELECT ... INTO OUTFILE. For this statement, the output file cannot previously exist because allowing extant files to be overwritten would constitute a security risk. See Section 13.2.7, “SELECT Syntax”.
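
For example, a dump of a single table might look like the following (the table and file names are placeholders); the file is created on the server host and requires the FILE privilege:

    mysql> SELECT * INTO OUTFILE '/tmp/tbl_name.txt' FROM tbl_name;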

Another technique for backing up a database is to use the mysqldump program or the mysqlhotcopy script. See Section 8.10, “mysqldump — A Database Backup Program”, and Section 8.11, “mysqlhotcopy — A Database Backup Program”.

  1. Create a full backup of your database:

    shell> mysqldump --tab=/path/to/some/dir --opt db_name

    Or:

    shell> mysqlhotcopy db_name /path/to/some/dir

    You can also create a binary backup simply by copying all table files (*.frm, *.MYD, and *.MYI files), as long as the server isn't updating anything. The mysqlhotcopy script uses this method. (But note that these methods do not work if your database contains InnoDB tables. InnoDB does not store table contents in database directories, and mysqlhotcopy works only for MyISAM tables.)

  2. Stop mysqld if it is running, then start it with the --log-bin[=file_name] option. See Section 5.11.4, “The Binary Log”. The binary log files provide you with the information you need to replicate changes to the database that are made subsequent to the point at which you executed mysqldump.
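
    For example, assuming a log base name of mysql-bin (any base name will do), the option can be given on the command line or in an option file:

    shell> mysqld_safe --log-bin=mysql-bin &

    Or, in the [mysqld] section of an option file such as /etc/my.cnf:

    [mysqld]
    log-bin=mysql-bin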

For InnoDB tables, it is possible to perform an online backup that takes no locks on tables; see Section 8.10, “mysqldump — A Database Backup Program”.
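
For example, an online dump of a database consisting of InnoDB tables might look like this (the database and output file names are placeholders); --single-transaction causes mysqldump to read the data within a single consistent snapshot rather than locking the tables:

    shell> mysqldump --single-transaction db_name > backup.sql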

MySQL supports incremental backups: You need to start the server with the --log-bin option to enable binary logging; see Section 5.11.4, “The Binary Log”. At the moment you want to make an incremental backup (containing all changes that happened since the last full or incremental backup), you should rotate the binary log by using FLUSH LOGS. This done, you need to copy to the backup location all binary logs which range from the one of the moment of the last full or incremental backup to the last but one. These binary logs are the incremental backup; at restore time, you apply them as explained further below. The next time you do a full backup, you should also rotate the binary log using FLUSH LOGS, mysqldump --flush-logs, or mysqlhotcopy --flushlog. See Section 8.10, “mysqldump — A Database Backup Program”, and Section 8.11, “mysqlhotcopy — A Database Backup Program”.
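
As a rough sketch, assuming binary logging is already enabled and using placeholder log and directory names, the incremental step amounts to rotating the log and then copying the closed log files (everything since the previous backup, but not the newly opened log) to the backup location:

    shell> mysql -e "FLUSH LOGS"
    shell> cp /path/to/datadir/mysql-bin.000012 /path/to/datadir/mysql-bin.000013 /path/to/backup/dir/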

If your MySQL server is a slave replication server, then regardless of the backup method you choose, you should also back up the master.info and relay-log.info files when you back up your slave's data. These files are always needed to resume replication after you restore the slave's data. If your slave is subject to replicating LOAD DATA INFILE commands, you should also back up any SQL_LOAD-* files that may exist in the directory specified by the --slave-load-tmpdir option. (This location defaults to the value of the tmpdir variable if not specified.) The slave needs these files to resume replication of any interrupted LOAD DATA INFILE operations.
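
A minimal sketch of copying these files along with the slave's data, assuming they are in their default locations (all paths here are placeholders, and the second command applies only if any SQL_LOAD-* files exist):

    shell> cp /path/to/datadir/master.info /path/to/datadir/relay-log.info /path/to/backup/dir/
    shell> cp /path/to/slave-load-tmpdir/SQL_LOAD-* /path/to/backup/dir/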

If you have to restore MyISAM tables, try to recover them using REPAIR TABLE or myisamchk -r first. That should work in 99.9% of all cases. If myisamchk fails, try the following procedure. Note that it works only if you have enabled binary logging by starting MySQL with the --log-bin option.
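
For example, either of the following attempts a recovery of a single MyISAM table (the database, table, and path names are placeholders); run myisamchk only while the server is not using the table:

    mysql> REPAIR TABLE db_name.tbl_name;

Or, from the shell:

    shell> myisamchk -r /path/to/datadir/db_name/tbl_name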

  1. Restore the original mysqldump backup, or binary backup.

  2. Execute the following command to re-run the updates in the binary logs:

    shell> mysqlbinlog binlog.[0-9]* | mysql

    In some cases, you may want to re-run only certain binary logs, from certain positions (usually you want to re-run all binary logs from the date of the restored backup, possibly excluding some incorrect statements). See Section 8.8, “mysqlbinlog — Utility for Processing Binary Log Files”, for more information on the mysqlbinlog utility and how to use it.
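
    For example, mysqlbinlog accepts range options such as --start-datetime, --stop-datetime, --start-position, and --stop-position; the date and log file name below are placeholders:

    shell> mysqlbinlog --start-datetime="2009-11-24 10:00:00" binlog.000007 | mysql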

You can also make selective backups of individual tables:

  • To dump the table, use SELECT * INTO OUTFILE 'file_name' FROM tbl_name.

  • To reload the table, use LOAD DATA INFILE 'file_name' REPLACE .... To avoid duplicate rows, the table must have a PRIMARY KEY or a UNIQUE index. The REPLACE keyword causes old rows to be replaced with new ones when a new row duplicates an old row on a unique key value.
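
For example, reloading a previously dumped table might look like this (the file and table names are placeholders); the default field and line terminators of LOAD DATA INFILE match those written by SELECT ... INTO OUTFILE:

    mysql> LOAD DATA INFILE '/tmp/tbl_name.txt' REPLACE INTO TABLE tbl_name;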

If you have performance problems with your server while making backups, one strategy that can help is to set up replication and perform backups on the slave rather than on the master. See Section 6.1, “Introduction to Replication”.

If you are using a Veritas filesystem, you can make a backup like this:

  1. From a client program, execute FLUSH TABLES WITH READ LOCK.

  2. From another shell, execute mount vxfs snapshot.

  3. From the first client, execute UNLOCK TABLES.

  4. Copy files from the snapshot.

  5. Unmount the snapshot.
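
A rough sketch of the same sequence, assuming a hypothetical snapshot mount point of /mnt/snapshot and placeholder data and backup directories; the exact mount command and options depend on your VxFS setup:

    mysql> FLUSH TABLES WITH READ LOCK;
    (from another shell, mount the vxfs snapshot)
    mysql> UNLOCK TABLES;
    shell> cp -R /mnt/snapshot/path/to/datadir /path/to/backup/dir
    shell> umount /mnt/snapshot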


