MySQL Backups
1. Why backup is necessary
- Disaster recovery: hardware failures, bugs that corrupt data, or servers and their data becoming inaccessible or unusable for some reason (for example, the building housing the servers burning down, a malicious attack, or a MySQL bug).
- People change their minds: users often want certain data back after deleting it.
- Auditing: sometimes you need to know what the data or schema looked like at a point in the past, or, after discovering an application bug, to find out what happened before it.
- Testing: regularly refreshing a test server with production data is a simple use of backups, and restoring also verifies that the backed-up data is complete and can be restored normally. If the backup plan is simple, this is just a matter of restoring the backup files onto the test server.
TIP: When planning a backup and recovery strategy, there are two important requirements: the Recovery Point Objective (RPO) defines how much data loss can be tolerated, and the Recovery Time Objective (RTO) defines how long recovery may take.
2. Online backup or offline backup
If possible, shutting MySQL down to take the backup is the simplest and safest approach, and the best way to get a consistent copy with the lowest risk of corruption or inconsistency: there is no need to worry about dirty pages in the InnoDB buffer pool or other caches, no risk of data being modified during the backup, and, because the server is not serving the application, the backup completes faster. However, stopping and restarting MySQL may take a long time under heavy load and high data volume, and even minimal downtime can be too costly, so online backups are often required.

Because of the need for consistency, online backups can still cause significant service interruption. One of the biggest problems with online MyISAM backups is the FLUSH TABLES WITH READ LOCK operation (it closes all open tables and locks all tables in all databases with a global read lock, without flushing dirty blocks), which makes MySQL close and lock every table, flush the MyISAM data files to disk, and flush the query cache. This can take a very long time, and the exact duration cannot be predicted: if a session holds a table lock taken with LOCK TABLES tbl_name lock_type, another session executing FLUSH TABLES WITH READ LOCK will block until that lock is released, and while the global read lock is held no data can be changed on the server. FLUSH TABLES WITH READ LOCK is not as expensive as shutting the server down, because most caches stay in memory and the server remains "warm", but it is still very disruptive. The best way to avoid it is to use only InnoDB tables.
[When planning backups, there are several performance-related factors to consider]
Lock time: how long must locks, such as the global FLUSH TABLES WITH READ LOCK, be held during the backup?
Backup time: how long does it take to copy the backup to its destination?
Backup load: how much does copying the backup to its destination impact server performance?
Recovery time: how long does it take to copy the backup image back to the MySQL server, replay binary logs, and so on?
The biggest trade-off is between backup time and backup load; one can be sacrificed to improve the other. For example, the backup can be given higher priority at the cost of server performance.
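As a small sketch of this trade-off, a file-level backup can be run at low CPU priority so it takes longer but loads the server less. The stand-in "datadir" below is a temporary directory, not a real MySQL data directory, so the sketch is safe to run anywhere:

```shell
# Create a stand-in "datadir" so the example runs anywhere; a real backup
# would point at the MySQL data directory instead.
datadir=$(mktemp -d)
dd if=/dev/zero of="$datadir/ibdata1" bs=1024 count=512 2>/dev/null

# nice -n 19 gives the copy the lowest CPU priority: longer backup time,
# lower backup load.
nice -n 19 tar -czf /tmp/backup.tar.gz -C "$datadir" .

# List what ended up in the archive.
tar -tzf /tmp/backup.tar.gz
```

On a busy server you would trade in the other direction by raising the backup's priority instead.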
3. Logical backup or physical backup
There are two broad types: logical backups (also known as "dumps" or exports) and physical backups that copy the raw files directly. A logical backup contains the data in a format MySQL can parse, either SQL statements or delimiter-separated text. The raw files are the files as they exist on disk.
3.1 Logical backup has the following advantages:
- Logical backups are ordinary files that can be viewed and manipulated with an editor or with commands such as grep and sed. This is very helpful when you need to recover data, or when you only want to inspect the data without restoring it.
- Restoring is simple. Dumps can be piped into mysql, or loaded with mysqlimport.
- You can back up and restore across the network, from a machine other than the MySQL host.
- They work with systems such as Amazon RDS (Relational Database Service), where you cannot access the underlying filesystem.
- They are very flexible, because mysqldump (the tool most people use) accepts many options, such as a --where clause to limit which rows are backed up.
- They are storage-engine independent. Because the dump is generated by extracting data through the MySQL server, it abstracts away differences in the underlying storage, so you can back up from an InnoDB table and restore into a MyISAM table with minimal effort. Raw files cannot be used this way.
- They can help avoid data corruption. If a disk drive is failing and you copy the raw files, you get a bad or partially corrupt backup; if the data in MySQL's memory is not yet damaged, you can still obtain a reliable logical backup.
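For example, because a dump is just text, standard tools can answer questions about it without a restore. The table and file names below are made up for illustration:

```shell
# Write a tiny stand-in dump file so the example runs anywhere.
cat > /tmp/dump.sql <<'EOF'
CREATE TABLE orders (id INT PRIMARY KEY, total DECIMAL(10,2));
INSERT INTO orders VALUES (1,19.99);
INSERT INTO orders VALUES (2,5.00);
EOF

# How many rows were dumped for the orders table?
grep -c '^INSERT INTO orders' /tmp/dump.sql   # prints 2
```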
- [The drawbacks of logical backups are as follows]:
- Logical backups must be generated by the database server itself, so they consume more CPU cycles.
- There is no guarantee that the restored data is identical to what was exported. Floating-point representation issues, software bugs, and so on can all cause problems, although they are very rare.
- Restoring from a logical backup requires MySQL to load and parse the statements, convert them to the storage format, and rebuild indexes, all of which is slow.
TIP: The biggest drawback is the cost of dumping data out of MySQL and loading it back in through SQL statements. If you use logical backups, testing how long recovery takes is essential. The mysqldump included with Percona Server can help when using InnoDB tables, because it formats the output to take advantage of InnoDB's fast index creation during reload. Our tests show this can cut restore time by two-thirds or more; the more indexes there are, the greater the benefit.
3.2 Physical backup has the following advantages:
- File-based physical backups are simple: just copy the required files to the destination. No extra work is needed to generate the raw files.
- Restoring a physical backup can be simpler, depending on the storage engine. For MyISAM, just copy the files into place. For InnoDB, you must stop the database service, and other steps may be needed.
- Physical backups of InnoDB and MyISAM tables move easily across platforms, operating systems, and MySQL versions.
- Restoring from a physical backup is faster, because MySQL does not need to execute any SQL or build indexes. If you have large InnoDB tables that cannot be fully cached in memory, restoring a physical backup can be faster by an order of magnitude. In fact, the most frightening thing about logical backups is their unpredictable restore time.
[The drawbacks of physical backup are as follows]:
- InnoDB raw files are usually much larger than the corresponding logical backup. The InnoDB tablespace typically contains a lot of unused space, and a good deal of space is used for purposes other than storing table data (the insert buffer, rollback segments, and so on).
- Physical backups do not always move across platforms, operating systems, and MySQL versions. File name case sensitivity and floating-point formats can cause trouble; it is quite possible that files cannot be moved to a system whose floating-point format differs.
- Physical backups are simple and efficient, but also error-prone, although they remain well suited to long-term backup retention. Try not to rely on physical backups alone: take a logical backup at regular intervals as well. A recommended approach is to mix the two methods: first make a physical copy, start a MySQL server instance on that data, and run mysqlcheck; then periodically make a logical backup with mysqldump. This gains the advantages of both without putting excessive load on the production server during the export. If filesystem snapshots are available, you can also take a snapshot, copy it to another server, release it, and then test the raw files before performing the logical backup.
Backup files must be tested by actually restoring them, especially physical backups. Never assume a backup is good; that assumption can easily lead to serious mistakes. For InnoDB, testing means starting a MySQL instance, letting InnoDB perform its recovery, and then executing CHECK TABLES. You can skip this and just run innochecksum on the files (an official tool that verifies the integrity of InnoDB tablespace files), but that is not recommended.
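A verification pass might be sketched like this. The commands are only printed, because running them needs a real restored datadir and server, and the paths here are hypothetical:

```shell
# Hypothetical restore location; adjust to wherever the backup was copied.
DATADIR="/restore/mysql-data"

START="mysqld --datadir=$DATADIR"        # InnoDB runs crash recovery on startup
CHECK="mysqlcheck --all-databases"       # runs CHECK TABLE against every table
OFFLINE="innochecksum $DATADIR/ibdata1"  # offline page-checksum verification

printf '%s\n' "$START" "$CHECK" "$OFFLINE"
```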
4. Incremental backup and differential backup
When the amount of data is large, a common strategy is to take regular incremental or differential backups. A differential backup contains everything changed since the last full backup, while an incremental backup contains everything changed since the last backup of any type.
TIP: For example, do a full backup on Sunday; on Monday, a differential backup captures all changes since Sunday. On Tuesday there are two choices: back up everything changed since Sunday (a differential), or only what changed since Monday's backup (an incremental).
Both are partial backups that can reduce server overhead, backup time, and backup space (though not always: tools such as Percona XtraBackup and MySQL Enterprise Backup still scan every data block on the server, so they do not save very much time). Common techniques include:
- Use the incremental backup features of tools such as Percona XtraBackup and MySQL Enterprise Backup.
- Back up the binary logs. You can run FLUSH LOGS after each backup to begin a new binary log, so that afterwards only the new binary logs need to be backed up.
- Don't back up unchanged tables. The MyISAM engine records each table's last modification time; you can check it by looking at the files on disk or by running SHOW TABLE STATUS. With InnoDB, you can use a trigger to record modification times in a small table.
- Don't back up unchanged rows. If a table is insert-only, you can add a timestamp column and back up only the rows inserted since the last backup.
- Back up everything, but send it to a destination with deduplication features, such as a ZFS filer.
- Disadvantages of incremental backups: they increase recovery complexity and risk, and lengthen recovery time. If full backups are feasible, prefer them for simplicity and do them as often as possible. A reasonable recommendation is a full backup at least weekly, with incremental backups during the week.
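The binary-log strategy above can be simulated with plain files: keep a marker recording the last binary log already backed up, and copy only newer ones. The file names and marker scheme are illustrative, not a real tool; in production you would run FLUSH LOGS first so the current binlog is closed and safe to copy:

```shell
# Stand-in directories so the sketch runs anywhere.
datadir=$(mktemp -d); backupdir=$(mktemp -d)
touch "$datadir/binlog.000001" "$datadir/binlog.000002" "$datadir/binlog.000003"
echo "binlog.000001" > "$backupdir/last_backed_up"   # marker from the previous run

last=$(cat "$backupdir/last_backed_up")
for f in "$datadir"/binlog.*; do
    name=$(basename "$f")
    # Binlog names sort lexicographically, so a string compare finds newer logs.
    if [[ "$name" > "$last" ]]; then
        cp "$f" "$backupdir/"
        last=$name
    fi
done
echo "$last" > "$backupdir/last_backed_up"
ls "$backupdir"
```

Only binlog.000002 and binlog.000003 are copied; the marker then points at binlog.000003 for the next run.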
5. What to back up
The simplest strategy is to back up only the data and table definitions, but that is a bare minimum. Restoring a database in a production environment generally requires more, such as:
- Nonobvious data: don't forget data that is easy to overlook, such as binary logs and InnoDB transaction logs.
- Code: a MySQL server can store a lot of code, such as triggers and stored procedures. If you back up the mysql system database, much of this code is backed up with it, but restoring a single business database can then be more troublesome.
- Replication configuration: if you are restoring a server involved in replication, back up all files related to replication (binary logs, relay logs, log index files, and .info files). At a minimum include the output of SHOW MASTER STATUS and/or SHOW SLAVE STATUS. Executing FLUSH LOGS is also very helpful, because it makes MySQL begin a new binary log: point-in-time recovery is easier from the beginning of a log file than from the middle.
- Server configuration: if you must recover from a disaster, for example by building a server in a new data center, having the server's configuration in the backup saves a lot of work.
- Selected operating system files: as with server configuration, any external configuration the production server depends on is crucial to back up. On Unix servers this might include cron jobs (scheduled tasks), user and group settings, administrative scripts, and sudo rules.
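For instance, the weekly-full/daily-incremental schedule from section 4 might live in the crontab, which itself belongs in the backup. The script names and paths below are placeholders:

```
# Full backup Sunday 03:00; incremental backups the other days.
0 3 * * 0   /usr/local/bin/mysql-backup-full        >> /var/log/mysql-backup.log 2>&1
0 3 * * 1-6 /usr/local/bin/mysql-backup-incremental >> /var/log/mysql-backup.log 2>&1
```

Losing these two lines would silently stop all backups on a rebuilt server, which is exactly why such files belong in the backup set.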
6. Storage engines and consistency
MySQL's choice of storage engines can make backups significantly more complex: how do you obtain a consistent backup for a given storage engine? There are actually two kinds of consistency to consider: data consistency and file consistency.
Data consistency: when backing up, consider whether the data must be consistent as of a single point in time. For example, in an online shop, shipping records and payment records must agree; backing up one without the other leads to trouble. For an online backup, you may need a consistent backup of all related tables together. InnoDB's multiversion concurrency control can help: start a transaction, dump the related tables, then commit the transaction. As long as the transaction uses the REPEATABLE READ isolation level and no DDL runs during it, you get a perfectly consistent point-in-time snapshot, and other work is not blocked while the backup runs. With a non-transactional storage engine, the only option is to lock all the tables that must be backed up together and release the lock when the backup finishes. You can also use mysqldump with the --single-transaction option to obtain a consistent logical backup of InnoDB tables, but this can create a very long-running transaction, which may cause unacceptable overhead under some workloads.
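A consistent InnoDB dump can be sketched as follows. The database name and output path are hypothetical, and the script only prints the command rather than contacting a server:

```shell
# --single-transaction starts a consistent snapshot (REPEATABLE READ) before
# dumping, so InnoDB tables need no global read lock. "shop" and the output
# path are placeholders.
DUMP_CMD="mysqldump --single-transaction --routines --triggers shop"
OUTFILE="/backups/shop-$(date +%F).sql.gz"
echo "$DUMP_CMD | gzip > $OUTFILE"
```

Adding --routines and --triggers also captures the stored code discussed in section 5, not just the table data.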
File consistency: it is also very important that all the backed-up files are consistent with each other, and that each file is internally consistent. If related files are copied at different times, they may not match each other; MyISAM's .MYD and .MYI files are one example. InnoDB will detect the inconsistency or corruption and log errors, possibly until the server crashes.

For the non-transactional MyISAM engine, you must lock the tables and flush them: either use a combination of LOCK TABLES and FLUSH TABLES so the server flushes in-memory changes to disk, or use FLUSH TABLES WITH READ LOCK. Once the flush completes, the raw MyISAM files can be copied safely.

For the transactional InnoDB engine, ensuring file consistency is harder. Even under FLUSH TABLES WITH READ LOCK, InnoDB keeps working in the background: the insert buffer, log, and write threads continue merging changes into the log and tablespace files. These threads are deliberately asynchronous (which is how InnoDB achieves its high concurrency) and are unaffected by LOCK TABLES. So it is not enough for each file to be internally consistent: the logs and tablespaces must also be copied as of the same instant. If other threads modify files during the backup, or the tablespace is copied at a different point in time than the log files, you will again end up with a corrupt system after recovery. There are several ways to avoid this problem:
- Wait for InnoDB's purge and insert buffer merge threads to finish. You can watch the output of SHOW INNODB STATUS and copy the files when there are no dirty buffers or pending writes. However, this can take a long time, and it depends on too many details of InnoDB's background threads to be safe. Not recommended.
- Take a snapshot of the data and log files with a system such as LVM (Logical Volume Manager). The data and log files must be consistent with each other within the snapshot; taking separate snapshots of them is meaningless.
- Send MySQL a STOP signal, take the backup, then send a CONT signal to wake it again. This may seem like an unlikely recommendation, but if the alternative is shutting the server down during the backup, it is worth considering: at least this technique avoids having to warm the server up after a restart.
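The STOP/CONT technique can be demonstrated with a stand-in process (a background sleep) instead of a real mysqld, so the sketch is safe to run anywhere:

```shell
# Use a background sleep as a stand-in for mysqld.
sleep 30 & pid=$!

kill -STOP "$pid"      # freeze the process, as you would freeze mysqld
sleep 1                # give the kernel a moment to report the new state
state=$(ps -o stat= -p "$pid" | tr -d ' ')
echo "process state during backup: $state"   # "T" means stopped

# ... copy the data files here while nothing can modify them ...

kill -CONT "$pid"      # wake the process again
kill "$pid" 2>/dev/null
```

Because the process is merely paused, not terminated, its caches survive and no warm-up is needed afterwards.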
After the data has been copied elsewhere, the locks can be released so the MySQL server resumes normal operation. It is also possible to back up from a replica, which has the advantage of not interfering with the master or adding load to it.