Incremental backups in MySQL have always been a tricky exercise. Logical backup tools like mysqldump or mydumper don’t support incremental backups, although it’s possible to emulate them with binary logs. And with snapshot-based backup tools it’s close to impossible to take incremental copies.
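For reference, the binary-log approach usually means pairing a periodic full dump with the binlogs written since then. A rough sketch, assuming binary logging is enabled and the default mysql-bin base name and paths (yours will differ):
# mysqldump --all-databases --single-transaction --flush-logs --master-data=2 > /backups/full.sql
# cp /var/lib/mysql/mysql-bin.[0-9]* /backups/binlogs/
To restore, replay the full dump first and then the accumulated binary logs:
# mysql < /backups/full.sql
# mysqlbinlog /backups/binlogs/mysql-bin.[0-9]* | mysql
It works, but you end up owning the scripting, the bookkeeping, and the replay order yourself.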
Percona’s XtraBackup does support incremental backups, but you have to understand how it works under the hood and be comfortable with its command-line options. That’s not easy, and it gets even harder when it comes to restoring the database from an incremental copy. Some shops even ditch incremental backups because of the complexity of scripting the backup and restore procedures.
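To give a flavor of what that involves, a bare-bones incremental cycle with the xtrabackup binary looks roughly like this. The directory names are illustrative and option details vary between XtraBackup releases, so treat it as a sketch, not a recipe:
# xtrabackup --backup --target-dir=/data/backups/full
# xtrabackup --backup --target-dir=/data/backups/inc1 --incremental-basedir=/data/backups/full
Restoring means preparing the full copy and merging every delta into it in the right order:
# xtrabackup --prepare --apply-log-only --target-dir=/data/backups/full
# xtrabackup --prepare --apply-log-only --target-dir=/data/backups/full --incremental-dir=/data/backups/inc1
# xtrabackup --prepare --target-dir=/data/backups/full
Apply the deltas out of order, or forget --apply-log-only, and the copy is unusable.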
With TwinDB incremental backups are easy. In this post I will show how to configure MySQL incremental backups for a replication cluster with three nodes – a master and two slaves.
Configure MySQL Incremental Backups in TwinDB
TwinDB is an online backup service for MySQL, available at http://ift.tt/1Ib5VZm. Once you get there you’ll see a read-only demo that shows how we back up our own TwinDB servers.
Create Account in TwinDB
A new user has to create an account so they can back up their own servers.
For now we are running an invitation-only beta; drop me an email at aleks@twindb.com for an invitation code.
Once you’re registered, you’ll land in your environment where you can manage MySQL servers and storage and change the schedule and retention policy.
Install the Package Repository
The next step is to install the TwinDB agent on the MySQL servers. It’s a Python script that receives and executes commands from TwinDB. We distribute the agent via package repositories; there are repositories for RedHat-based as well as Debian-based systems.
For this demonstration we will register a cluster with one master and two slaves.
Let’s install the TwinDB RPM repository:
# yum install http://ift.tt/1Ec6AEc
After the repository is configured we can install the agent:
# yum install twindb
Loaded plugins: fastestmirror
Setting up Install Process
Loading mirror speeds from cached hostfile
* base: mirror.cs.vt.edu
* epel: mirror.dmacc.net
* extras: mirror.cs.vt.edu
* updates: mirrors.loosefoot.com
Resolving Dependencies
--> Running transaction check
---> Package twindb.noarch 0:0.1.35-1 will be installed
--> Finished Dependency Resolution
Dependencies Resolved
=============================================================
Package Arch Version Repository Size
=============================================================
Installing:
twindb noarch 0.1.35-1 twindb 26 k
Transaction Summary
=============================================================
Install 1 Package(s)
Total download size: 26 k
Installed size: 85 k
Is this ok [y/N]: y
Downloading Packages:
twindb-0.1.35-1.noarch.rpm | 26 kB 00:00
Running rpm_check_debug
Running Transaction Test
Transaction Test Succeeded
Running Transaction
Installing : twindb-0.1.35-1.noarch 1/1
Stopping ntpd service
Shutting down ntpd: [ OK ]
Starting ntpd service
Starting ntpd: [ OK ]
Starting twindb client
Starting TwinDB agent … OK
Verifying : twindb-0.1.35-1.noarch 1/1
Installed:
twindb.noarch 0:0.1.35-1
Complete!
The agent should be installed on all three servers. TwinDB discovers the replication topology and makes sure the backup is taken from a slave.
Register TwinDB Agents
Now we need to register the MySQL servers in TwinDB.
To do so, run the following command on all three servers:
# twindb --register ea29cf2eda74bb308a6cb80a910ab19a
2015-05-03 04:12:24,588: twindb: INFO: action_handler_register():1050: Registering TwinDB agent with code ea29cf2eda74bb308a6cb80a910ab19a
2015-05-03 04:12:26,804: twindb: INFO: action_handler_register():1075: Reading SSH public key from /root/.ssh/twindb.key.pub.
2015-05-03 04:12:28,356: twindb: INFO: action_handler_register():1129: Received successful response to register an agent
2015-05-03 04:12:29,777: twindb: INFO: get_config():609: Got config:
{
"config_id": "8",
"mysql_password": "********",
"mysql_user": "twindb_agent",
"retention_policy_id": "9",
"schedule_id": "9",
"user_id": "9",
"volume_id": "8"
}
2015-05-03 04:12:30,549: twindb: INFO: create_agent_user():1159: Created MySQL user twindb_agent@localhost for TwinDB agent
2015-05-03 04:12:31,084: twindb: INFO: create_agent_user():1160: Congratulations! The server is successfully registered in TwinDB.
2015-05-03 04:12:31,662: twindb: INFO: create_agent_user():1161: TwinDB will backup the server accordingly to the default config.
2015-05-03 04:12:32,187: twindb: INFO: create_agent_user():1162: You can change the schedule and retention policy on http://ift.tt/1Ib5VZm
When a MySQL server registers in TwinDB, a few things happen:
The agent generates a GPG key pair to encrypt backups and to secure communication with the TwinDB dispatcher.
The agent generates an SSH key pair for secure file transfers.
TwinDB creates a schedule and a retention policy for the server and allocates storage in TwinDB for backup copies.
The agent creates a MySQL user on the local MySQL instance.
At the registration step the agent has to connect to MySQL with root permissions. It’s preferable to put the user and password in the ~/.my.cnf file, but it is also possible to specify them with the -u and -p options.
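For example, a minimal ~/.my.cnf in root’s home directory contains just the client credentials (placeholder password shown; keep the file readable by root only, e.g. chmod 600 ~/.my.cnf):
[client]
user=root
password=MySecretRootPassword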
After five minutes TwinDB will discover the replication topology, find a suitable MySQL server to take the backup from, and schedule a backup job.
In “Server farm” -> “All servers” we see all registered MySQL servers.
After TwinDB discovers the replication cluster nodes it starts scheduling backup jobs. By default a full copy is taken every week and an incremental copy every hour. You can change the schedule under “Schedule” -> “Default”.
On the dashboard there is a list of jobs. I was writing this post over several days, so TwinDB managed to schedule a dozen jobs.
For each newly registered server TwinDB schedules a full backup job, which is why there are jobs for db01 and db02. After that it picked db03, and all further backups are taken from it.
To see which backup copies were taken from the replication cluster, open the db03 server details and go to the “Backup copies” tab. There you can see full copies from db01, db02, and db03, followed by incremental copies from db03.
Restore MySQL Incremental Backup
So far, taking an incremental backup was easy, but what about restoring a server from it?
Let’s go to the server list, right-click on a server where we want to restore a backup copy and choose “Restore server“:
Then choose an incremental copy to restore:
Then enter the name of the directory where the restored database will be placed:
Then press “Restore” and it should show a confirmation window:
The restore job is scheduled and it’ll start after five minutes:
When the restore job is done, the database files will be restored to the directory /var/lib/mysql.restored on server db03:
[root@db03 mysql.restored]# cd /var/lib/mysql.restored/
[root@db03 mysql.restored]# ll
total 79908
-rw-r-----. 1 root root      295 May  5 03:36 backup-my.cnf
-rw-r-----. 1 root root 79691776 May  5 03:36 ibdata1
drwx------. 2 root root     4096 May  5 03:36 mysql
drwx------. 2 root root     4096 May  5 03:36 performance_schema
drwx------. 2 root root     4096 May  5 03:36 sakila
drwx------. 2 root root     4096 May  5 03:36 twindb
-rw-r-----. 1 root root       25 May  5 03:36 xtrabackup_binlog_info
-rw-r-----. 1 root root       91 May  5 03:36 xtrabackup_checkpoints
-rw-r-----. 1 root root      765 May  5 03:36 xtrabackup_info
-rw-r-----. 1 root root  2097152 May  5 03:36 xtrabackup_logfile
-rw-r-----. 1 root root       80 May  5 03:36 xtrabackup_slave_info
[root@db03 mysql.restored]#
And that’s it. /var/lib/mysql.restored/ is ready to be used as the MySQL datadir.
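If you want to swap the restored copy in place of the live data directory, the last steps look roughly like this. This is a sketch that assumes the default paths and a SysV service named mysql (on some distributions it’s mysqld); double-check ownership, and SELinux contexts if enabled, before doing this on a production server:
# service mysql stop
# mv /var/lib/mysql /var/lib/mysql.old
# mv /var/lib/mysql.restored /var/lib/mysql
# chown -R mysql:mysql /var/lib/mysql
# service mysql start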
The post How to setup MySQL incremental backup appeared first on Backup and Data Recovery for MySQL.
via Planet MySQL
Dilbert 2015-05-05
Everything You Need To Build a Triple-Bladed Lightsaber
It’s easy to figure out exactly when Star Wars fans first started building their own replica lightsabers—it was almost certainly the same day the original film premiered back in 1977. And if the triple-bladed lightsaber seen in the teaser for the upcoming Star Wars: The Force Awakens has inspired your DIY side, this infographic will walk you through building a simple replica that actually lights up.
Your first step is a trip to your local electronics and hardware stores to get all of the various components you’ll need for your build, and thankfully the rest of the steps, including the actual assembly process, sound just as easy. In other words, you don’t need to be a fully-trained Jedi Knight with a Master’s in electrical engineering to build one yourself—just a healthy obsession with Star Wars and a free afternoon. [PureGaming]
via Gizmodo
Super secretive malware wipes hard drive to prevent analysis
Rombertik will go to great lengths to keep its private parts private.
via Ars Technica
Pythons Take Over Florida, Are Busted By GPS
One of America’s most delicate ecosystems has been invaded by swarms of giant, non-native Burmese pythons. They’re big. They screw up the ecosystem. And they’re hard to find. But researchers may have finally learned how to round ’em up, thanks to radio and GPS.
The easily camouflaged, semi-aquatic Burmese python can easily hit 20 feet and 200 pounds, and they like to snack on endangered mammals dwelling in the Everglades—making them both hard-to-nab and a serious threat to the local ecosystem. They were first spotted in the region back in 1979.
In a study led by the U.S. Geological Survey, published last month in the journal Animal Biotelemetry, researchers explained how the use of tracking technology over many years has narrowed down the size of the huge snakes’ dwelling range in Florida’s Everglades National Park. It also better explains the animals’ movements and migration. This can help authorities neutralize this threat to the Everglades’ biodiversity.
This study started back in 2006 with 19 wild-caught adult pythons, which were implanted with radio transmitters or GPS devices. Sixteen were radio-tracked with VHF tags for three years, and the other three snakes were monitored with GPS tags for one year. The results determined where the pythons like to hang in the park (on tree islands and near roads, in an average range of around 14 square miles), and that they tend to move to wherever there’s surface water. Before this study, the predators’ movements and habitat ranges within the park were pretty unknown.
This multi-year effort was the largest and longest-running python-tracking study ever (both here and its native habitat of Southeast Asia). The National Park Service says that since 2002, only about 2,000 pythons have been removed from the park—“likely representing only a fraction of the total population.”
In 2013, the state kicked off the inaugural Python Challenge. It was a snake-snatching contest that awarded regular folks thousands of dollars to comb through the Everglades, wrangling and exterminating Burmese pythons. (There’s a second installment set for 2016.) Hopefully this new study can better put the task in tech’s hands.
Leafblower Volcano
(PG-13 Language) A guy and his dad were out in the yard when they had an idea. Take the leaf blower they were using and point it directly into the fiery belly of a chiminea. The result – a fire-belching mini volcano. Mom was not as amused.
via The Awesomer
Study shows health app Lose It! does help patients with weight loss
A study compares Lose It! with one-on-one nutritional counseling, finding similar results.
The post Study shows health app Lose It! does help patients with weight loss appeared first on iMedicalApps.
via iMedicalApps
Better Photos Can Sell Your Home Faster, for Thousands of Dollars More
If you’re planning on selling a home soon, you might want to consider hiring a professional photographer or improving your photography skills. Doing so could be worth over $10,000.
Brokerage firm Redfin Corp looked at listings to compare those with professional photos versus amateur ones. It found that for homes listed between $200,000 and $1 million, photos taken with a DSLR sold for $3,400 to $11,200 more relative to their list prices. They were also more likely to sell within six months and up to 3 weeks faster than the listings with amateur photos.
Although the analysis was done in 2013, it repeats a previous study the company had done in 2010 with similar results. It might sound obvious that better photos make your home look better, but it’s interesting to know just how much of a difference this one thing can make.
Look Sharp: Professional Listing Photos Sell For More Money – Research Center | Redfin via Apartment Therapy
via Lifehacker
Most Popular Fan: Vornado
Vornado’s excellent, attractive series of fans blew away the competition to grab the title of best fan.
As you probably expect, Vornado has a fan to fit every need, including:
- The 660 Whole Room Air Circulator
- The 530 Compact Air Circulator
- The 133 Small Room Air Circulator
- The 783 Whole Room Adjustable Height Air Circulator
- The Flippi V8 Personal Air Circulator, which is on my desk at the office
If you’re not willing to pony up for a Vornado, the Honeywell took a distant second place, and is the #1 best-seller in fans.
Thank God the FAA Is Switching to Satellites for Air Traffic Control
As unnerving as it is to hear, air traffic control has always been pretty piecemeal. Relying on a combination of instrumentation—namely, radar, radios, and GPS—as well as good old fashioned eyeballs, pilots do a pretty good job navigating the sky. But they’re about to get a lot better with a new satellite-based system.
Appropriately named NextGen, the new system being deployed widely this year by the Federal Aviation Administration (FAA) promises to improve every single air traveller’s experience. The key is constant connectivity to precise satellite technology that gives all aircraft and controllers in flight towers access to real-time data from the time the plane leaves the gate until it arrives at its destination. This means weather problems are more easily spotted and avoided—which is a huge deal since weather causes 70 percent of all delays. Beyond that, the entire air traffic control system is becoming more automated and modernized. The FAA already has a list of NextGen success stories, too.
The NextGen system will get even better as more planes use it, too. “All you need is one aircraft to land and the benefits begin,” said the FAA’s Warren Strickland in a statement. “With connections, the benefits are exponential.” Heck, even an incremental benefit would be nice at this point!
via Gizmodo