Internet Health Test Checks to See if Your ISP Slows Your Connection

The FCC recently established new rules to prevent ISPs from selectively degrading your connection to certain services, like some have done in the past. This tool, from Fight for the Future, can check to see if this type of degradation is happening to you.

The tool checks your connection to different locations throughout the internet that are outside your ISP's own network. This helps determine whether there are choke points between various networks designed to slow down your connection. This type of slowdown famously occurred when Comcast throttled Netflix in order to negotiate a higher fee.

On its own, the test can give you a decent idea of how consistent your connection speed is. However, the organization also aggregates the data from all users' tests to get a bird's-eye view of ISP activity. With enough people testing their connections, Fight for the Future hopes we can more easily see which companies are responsible for bottlenecks that violate net neutrality principles.

Internet Health Test


via Lifehacker

Creating and Restoring Database Backups With mysqldump and MySQL Enterprise Backup – Part 2 of 2

In part one of this post, I gave you a couple of examples of how to back up your MySQL databases using mysqldump. In part two, I will show you how to use MySQL Enterprise Backup (MEB) to create a full and partial backup.
MySQL Enterprise Backup provides enterprise-grade backup and recovery for MySQL. It delivers hot, online, non-blocking backups on multiple platforms including Linux, Windows, Mac & Solaris. To learn more, you may download a whitepaper on MEB.
MySQL Enterprise Backup delivers:
NEW! Continuous monitoring – Monitor the progress and disk space usage
“Hot” Online Backups – Backups take place entirely online, without interrupting MySQL transactions
High Performance – Save time with faster backup and recovery
Incremental Backup – Backup only data that has changed since the last backup
Partial Backup – Target particular tables or tablespaces
Compression – Cut costs by reducing storage requirements up to 90%
Backup to Tape – Stream backup to tape or other media management solutions
Fast Recovery – Get servers back online and create replicated servers
Point-in-Time Recovery (PITR) – Recover to a specific transaction
Partial restore – Recover targeted tables or tablespaces
Restore to a separate location – Rapidly create clones for fast replication setup
Reduce Failures – Use a proven high quality solution from the developers of MySQL
Multi-platform – Backup and Restore on Linux, Windows, Mac & Solaris
(from: http://ift.tt/Q5GptV)
While mysqldump is free to use, MEB is part of MySQL’s Enterprise Edition (EE) – so you need a license to use it. But if you are using MySQL in a production environment, you might want to look at EE, as:
MySQL Enterprise Edition includes the most comprehensive set of advanced features, management tools and technical support to achieve the highest levels of MySQL scalability, security, reliability, and uptime. It reduces the risk, cost, and complexity in developing, deploying, and managing business-critical MySQL applications.
(from: http://ift.tt/Q5GrC6)
Before using MEB and backing up your database for the first time, you will need some information:
Information to gather – Where to Find It – How It Is Used
Path to MySQL configuration file – Default system locations, hardcoded application default locations, or from --defaults-file option in mysqld startup script. – This is the preferred way to convey database configuration information to the mysqlbackup command, using the --defaults-file option. When connection and data layout information is available from the configuration file, you can skip most of the other choices listed below.
MySQL port – MySQL configuration file or mysqld startup script. – Used to connect to the database instance during backup operations. Specified via the --port option of mysqlbackup. --port is not needed if available from MySQL configuration file. Not needed when doing an offline (cold) backup, which works directly on the files using OS-level file permissions.
Path to MySQL data directory – MySQL configuration file or mysqld startup script. – Used to retrieve files from the database instance during backup operations, and to copy files back to the database instance during restore operations. Automatically retrieved from database connection for hot and warm backups. Taken from MySQL configuration file for cold backups.
ID and password of privileged MySQL user – You record this during installation of your own databases, or get it from the DBA when backing up databases you do not own. Not needed when doing an offline (cold) backup, which works directly on the files using OS-level file permissions. For cold backups, you log in as an administrative user. – Specified via the --password option of the mysqlbackup. Prompted from the terminal if the --password option is present without the password argument.
Path under which to store backup data – You choose this. See Section 3.1.3, “Designate a Location for Backup Data” for details. – By default, this directory must be empty for mysqlbackup to write data into it, to avoid overwriting old backups or mixing up data from different backups. Use the --with-timestamp option to automatically create a subdirectory with a unique name, when storing multiple sets of backup data under the same main directory.
Owner and permission information for backed-up files (for Linux, Unix, and OS X systems) – In the MySQL data directory. – If you do the backup using a different OS user ID or a different umask setting than applies to the original files, you might need to run commands such as chown and chmod on the backup data. See Section A.1, “Limitations of mysqlbackup Command” for details.
Size of InnoDB redo log files – Calculated from the values of the innodb_log_file_size and innodb_log_files_in_group configuration variables. Use the technique explained for the --incremental-with-redo-log-only option. – Only needed if you perform incremental backups using the --incremental-with-redo-log-only option rather than the --incremental option. The size of the InnoDB redo log and the rate of generation for redo data dictate how often you must perform incremental backups.
Rate at which redo data is generated – Calculated from the values of the InnoDB logical sequence number at different points in time. Use the technique explained for the --incremental-with-redo-log-only option. – Only needed if you perform incremental backups using the --incremental-with-redo-log-only option rather than the --incremental option. The size of the InnoDB redo log and the rate of generation for redo data dictate how often you must perform incremental backups.
(from: http://ift.tt/1eiULqM)
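The last two items in the table only matter if you plan to use the --incremental-with-redo-log-only option, but they are easy to gather. The total redo log size is simply innodb_log_file_size multiplied by innodb_log_files_in_group (in my case, 5,242,880 x 2 = 10,485,760 bytes, or about 10MB). To estimate how quickly redo is generated, you can compare the InnoDB log sequence number (LSN) at two points in time. Here is a quick sketch (the one-hour interval is arbitrary):
mysql> show engine innodb status\G
Note the "Log sequence number" value in the LOG section, wait an hour, then run the command again. The difference between the two values is roughly how many bytes of redo your server generates per hour, which tells you how often you would need to run an incremental backup before the redo log wraps around.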
For most backup operations, the mysqlbackup command connects to the MySQL server through --user and --password options. If you aren't going to use the root user, then you will need to create a separate user. Follow these instructions for setting the proper permissions.
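For example, creating a minimal backup user might look something like this. This is only a sketch: the user name and password are placeholders, and the exact grants required can vary by MEB version, so check the instructions linked above for your release:
mysql> CREATE USER 'mebuser'@'localhost' IDENTIFIED BY 'new_password';
mysql> GRANT RELOAD, SUPER, REPLICATION CLIENT ON *.* TO 'mebuser'@'localhost';
mysql> GRANT CREATE, INSERT, DROP, UPDATE ON mysql.backup_progress TO 'mebuser'@'localhost';
mysql> GRANT CREATE, INSERT, SELECT, DROP, UPDATE ON mysql.backup_history TO 'mebuser'@'localhost';
The grants on the mysql.backup_progress and mysql.backup_history tables let mysqlbackup record its run history, which is also what the mysqldbcompare checks later in this post will flag as differences between the two servers.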
All backup-related operations either create new files or reference existing files underneath a specified directory that holds backup data. Choose this directory in advance, on a file system with sufficient storage. (It could even be remotely mounted from a different server.) You specify the path to this directory with the --backup-dir option for many invocations of the mysqlbackup command.
Once you establish a regular backup schedule with automated jobs, it is preferable to keep each backup within a timestamped subdirectory underneath the main backup directory. To make the mysqlbackup command create these subdirectories automatically, specify the --with-timestamp option each time you run mysqlbackup.
For one-time backup operations, for example when cloning a database to set up a replication slave, you might specify a new directory each time, or specify the --force option of mysqlbackup to overwrite older backup files.
(from: http://ift.tt/1eiULqF)
If you haven’t downloaded and installed mysqlbackup, you may download it from edelivery.oracle.com (registration is required). Install the MySQL Enterprise Backup product on each database server whose contents you intend to back up. You perform all backup and restore operations locally, by running the mysqlbackup command on the same server as the MySQL instance. Information on installation may be found here.
Now that we have gathered all of the required information and installed mysqlbackup, let's run a simple and easy backup of the entire database. I installed MEB in my /usr/local directory, so I am including the full path of mysqlbackup. I am using the backup-and-apply-log option, which combines the --backup and the --apply-log options into one. The --backup option performs the initial phase of a backup. The second phase is performed later by running mysqlbackup again with the --apply-log option, which brings the InnoDB tables in the backup up-to-date, including any changes made to the data while the backup was running.
$ /usr/local/meb/bin/mysqlbackup --user=root --password --backup-dir=/Users/tonydarnell/hotbackups backup-and-apply-log
MySQL Enterprise Backup version 3.8.2 [2013/06/18] Copyright (c) 2003, 2012, Oracle and/or its affiliates. All Rights Reserved.
mysqlbackup: INFO: Starting with following command line …
/usr/local/meb/bin/mysqlbackup --user=root --password --backup-dir=/Users/tonydarnell/hotbackups backup-and-apply-log
Enter password:
mysqlbackup: INFO: MySQL server version is '5.6.9-rc-log'.
mysqlbackup: INFO: Got some server configuration information from running server.
IMPORTANT: Please check that mysqlbackup run completes successfully.
At the end of a successful 'backup-and-apply-log' run mysqlbackup
prints "mysqlbackup completed OK!".
--------------------------------------------------------------------
Server Repository Options:
--------------------------------------------------------------------
datadir = /usr/local/mysql/data/
innodb_data_home_dir = /usr/local/mysql/data
innodb_data_file_path = ibdata1:40M:autoextend
innodb_log_group_home_dir = /usr/local/mysql/data
innodb_log_files_in_group = 2
innodb_log_file_size = 5242880
innodb_page_size = 16384
innodb_checksum_algorithm = innodb
innodb_undo_directory = /usr/local/mysql/data/
innodb_undo_tablespaces = 0
innodb_undo_logs = 128
--------------------------------------------------------------------
Backup Config Options:
--------------------------------------------------------------------
datadir = /Users/tonydarnell/hotbackups/datadir
innodb_data_home_dir = /Users/tonydarnell/hotbackups/datadir
innodb_data_file_path = ibdata1:40M:autoextend
innodb_log_group_home_dir = /Users/tonydarnell/hotbackups/datadir
innodb_log_files_in_group = 2
innodb_log_file_size = 5242880
innodb_page_size = 16384
innodb_checksum_algorithm = innodb
innodb_undo_directory = /Users/tonydarnell/hotbackups/datadir
innodb_undo_tablespaces = 0
innodb_undo_logs = 128
mysqlbackup: INFO: Unique generated backup id for this is 13742482113579320
mysqlbackup: INFO: Creating 14 buffers each of size 16777216.
130719 11:36:53 mysqlbackup: INFO: Full Backup operation starts with following threads
1 read-threads 6 process-threads 1 write-threads
130719 11:36:53 mysqlbackup: INFO: System tablespace file format is Antelope.
130719 11:36:53 mysqlbackup: INFO: Starting to copy all innodb files…
130719 11:36:53 mysqlbackup: INFO: Copying /usr/local/mysql/data/ibdata1 (Antelope file format).
130719 11:36:53 mysqlbackup: INFO: Found checkpoint at lsn 135380756.
130719 11:36:53 mysqlbackup: INFO: Starting log scan from lsn 135380480.
130719 11:36:53 mysqlbackup: INFO: Copying log…
130719 11:36:54 mysqlbackup: INFO: Log copied, lsn 135380756.
(I have truncated some of the database and table output to save space)
…..
130719 11:36:56 mysqlbackup: INFO: Copying /usr/local/mysql/data/mysql/innodb_index_stats.ibd (Antelope file format).
130719 11:36:56 mysqlbackup: INFO: Copying /usr/local/mysql/data/mysql/innodb_table_stats.ibd (Antelope file format).
130719 11:36:56 mysqlbackup: INFO: Copying /usr/local/mysql/data/mysql/slave_master_info.ibd (Antelope file format).
130719 11:36:56 mysqlbackup: INFO: Copying /usr/local/mysql/data/mysql/slave_relay_log_info.ibd (Antelope file format).
130719 11:36:56 mysqlbackup: INFO: Copying /usr/local/mysql/data/mysql/slave_worker_info.ibd (Antelope file format).
…..
130719 11:36:56 mysqlbackup: INFO: Copying /usr/local/mysql/data/testcert/t1.ibd (Antelope file format).
130719 11:36:56 mysqlbackup: INFO: Copying /usr/local/mysql/data/testcert/t3.ibd (Antelope file format).
…..
130719 11:36:57 mysqlbackup: INFO: Copying /usr/local/mysql/data/watchdb/watches.ibd (Antelope file format).
…..
130719 11:36:57 mysqlbackup: INFO: Completing the copy of innodb files.
130719 11:36:58 mysqlbackup: INFO: Preparing to lock tables: Connected to mysqld server.
130719 11:36:58 mysqlbackup: INFO: Starting to lock all the tables…
130719 11:36:58 mysqlbackup: INFO: All tables are locked and flushed to disk
130719 11:36:58 mysqlbackup: INFO: Opening backup source directory ‘/usr/local/mysql/data/’
130719 11:36:58 mysqlbackup: INFO: Starting to backup all non-innodb files in subdirectories of ‘/usr/local/mysql/data/’
…..
130719 11:36:58 mysqlbackup: INFO: Copying the database directory ‘comicbookdb’
…..
130719 11:36:59 mysqlbackup: INFO: Copying the database directory ‘mysql’
130719 11:36:59 mysqlbackup: INFO: Copying the database directory ‘performance_schema’
…..
130719 11:36:59 mysqlbackup: INFO: Copying the database directory ‘test’
…..
130719 11:36:59 mysqlbackup: INFO: Copying the database directory ‘watchdb’
130719 11:36:59 mysqlbackup: INFO: Completing the copy of all non-innodb files.
130719 11:37:00 mysqlbackup: INFO: A copied database page was modified at 135380756.
(This is the highest lsn found on page)
Scanned log up to lsn 135384397.
Was able to parse the log up to lsn 135384397.
Maximum page number for a log record 375
130719 11:37:00 mysqlbackup: INFO: All tables unlocked
130719 11:37:00 mysqlbackup: INFO: All MySQL tables were locked for 1.589 seconds.
130719 11:37:00 mysqlbackup: INFO: Full Backup operation completed successfully.
130719 11:37:00 mysqlbackup: INFO: Backup created in directory ‘/Users/tonydarnell/hotbackups’
130719 11:37:00 mysqlbackup: INFO: MySQL binlog position: filename mysql-bin.000013, position 85573
-------------------------------------------------------------
Parameters Summary
-------------------------------------------------------------
Start LSN : 135380480
End LSN : 135384397
-------------------------------------------------------------
mysqlbackup: INFO: Creating 14 buffers each of size 65536.
130719 11:37:00 mysqlbackup: INFO: Apply-log operation starts with following threads
1 read-threads 1 process-threads
130719 11:37:00 mysqlbackup: INFO: ibbackup_logfile’s creation parameters:
start lsn 135380480, end lsn 135384397,
start checkpoint 135380756.
mysqlbackup: INFO: InnoDB: Starting an apply batch of log records to the database…
InnoDB: Progress in percent: 0 1 .... 99
Setting log file size to 5242880
Setting log file size to 5242880
130719 11:37:00 mysqlbackup: INFO: We were able to parse ibbackup_logfile up to
lsn 135384397.
mysqlbackup: INFO: Last MySQL binlog file position 0 85573, file name mysql-bin.000013
130719 11:37:00 mysqlbackup: INFO: The first data file is ‘/Users/tonydarnell/hotbackups/datadir/ibdata1’
and the new created log files are at ‘/Users/tonydarnell/hotbackups/datadir’
130719 11:37:01 mysqlbackup: INFO: Apply-log operation completed successfully.
130719 11:37:01 mysqlbackup: INFO: Full backup prepared for recovery successfully.
mysqlbackup completed OK!
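As a side note, if you would rather move the raw backup to another machine before applying the log (to limit the CPU and storage overhead on the database server, as mentioned below), you can run the two phases separately instead of using backup-and-apply-log. A sketch, using the same paths as above:
$ /usr/local/meb/bin/mysqlbackup --user=root --password --backup-dir=/Users/tonydarnell/hotbackups backup
$ /usr/local/meb/bin/mysqlbackup --backup-dir=/Users/tonydarnell/hotbackups apply-log
The first command produces the raw backup, and the second brings the InnoDB data files up to date, just as described earlier.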
Now, I can take a look at the backup file that was created:
root@macserver01: $ pwd
/Users/tonydarnell/hotbackups
root@macserver01: $ ls -l
total 8
-rw-r--r-- 1 root staff 351 Jul 19 11:36 backup-my.cnf
drwx------ 21 root staff 714 Jul 19 11:37 datadir
drwx------ 6 root staff 204 Jul 19 11:37 meta
$ ls -l datadir
total 102416
drwx------ 5 root staff 170 Jul 19 11:36 comicbookdb
-rw-r----- 1 root staff 5242880 Jul 19 11:37 ib_logfile0
-rw-r----- 1 root staff 5242880 Jul 19 11:37 ib_logfile1
-rw-r--r-- 1 root staff 4608 Jul 19 11:37 ibbackup_logfile
-rw-r--r-- 1 root staff 41943040 Jul 19 11:37 ibdata1
drwx------ 88 root staff 2992 Jul 19 11:36 mysql
drwx------ 55 root staff 1870 Jul 19 11:36 performance_schema
drwx------ 3 root staff 102 Jul 19 11:36 test
drwx------ 30 root staff 1020 Jul 19 11:36 testcert
drwx------ 19 root staff 646 Jul 19 11:36 watchdb
root@macserver01: $ ls -l meta
total 216
-rw-r--r-- 1 root staff 90786 Jul 19 11:37 backup_content.xml
-rw-r--r-- 1 root staff 5746 Jul 19 11:36 backup_create.xml
-rw-r--r-- 1 root staff 265 Jul 19 11:37 backup_gtid_executed.sql
-rw-r--r-- 1 root staff 321 Jul 19 11:37 backup_variables.txt
As you can see, the backup was created in /Users/tonydarnell/hotbackups. If I wanted to have a unique folder for this backup, I can use the --with-timestamp option.
The --with-timestamp option places the backup in a subdirectory created under the directory you specified above. The name of the backup subdirectory is formed from the date and the clock time of the backup run.
(from: http://ift.tt/1eiULH9)
I will run the same backup command again, but with the --with-timestamp option:
(I am not going to duplicate the entire output – I will only show you the output where it creates the sub-directory under /Users/tonydarnell/hotbackups)
$ /usr/local/meb/bin/mysqlbackup --user=root --password --backup-dir=/Users/tonydarnell/hotbackups backup-and-apply-log --with-timestamp
……
130719 11:49:54 mysqlbackup: INFO: The first data file is ‘/Users/tonydarnell/hotbackups/2013-07-19_11-49-48/datadir/ibdata1’
and the new created log files are at '/Users/tonydarnell/hotbackups/2013-07-19_11-49-48/datadir'
130719 11:49:54 mysqlbackup: INFO: Apply-log operation completed successfully.
130719 11:49:54 mysqlbackup: INFO: Full backup prepared for recovery successfully.
mysqlbackup completed OK!
So, I ran the backup again to get a unique directory. Instead of the backup files/directories being placed in /Users/tonydarnell/hotbackups, it created a sub-directory with a timestamp for the directory name:
$ pwd
/Users/tonydarnell/hotbackups
root@macserver01: $ ls -l
total 0
drwx------ 5 root staff 170 Jul 19 11:49 2013-07-19_11-49-48
$ ls -l 2013-07-19_11-49-48
total 8
-rw-r--r-- 1 root staff 371 Jul 19 11:49 backup-my.cnf
drwx------ 21 root staff 714 Jul 19 11:49 datadir
drwx------ 6 root staff 204 Jul 19 11:49 meta
Note: If you don't use the --backup-and-apply-log option you will need to read this: Immediately after the backup job completes, the backup files might not be in a consistent state, because data could be inserted, updated, or deleted while the backup is running. These initial backup files are known as the raw backup.
You must update the backup files so that they reflect the state of the database corresponding to a specific InnoDB log sequence number. (The same kind of operation as crash recovery.) When this step is complete, these final files are known as the prepared backup.
During the backup, mysqlbackup copies the accumulated InnoDB log to a file called ibbackup_logfile. This log file is used to “roll forward” the backed-up data files, so that every page in the data files corresponds to the same log sequence number of the InnoDB log. This phase also creates new ib_logfiles that correspond to the data files.
The mysqlbackup option for turning a raw backup into a prepared backup is --apply-log. You can run this step on the same database server where you did the backup, or transfer the raw backup files to a different system first, to limit the CPU and storage overhead on the database server.
Note: Since the --apply-log operation does not modify any of the original files in the backup, nothing is lost if the operation fails for some reason (for example, insufficient disk space). After fixing the problem, you can safely retry --apply-log, this time specifying the --force option, which allows the data and log files created by the failed --apply-log operation to be overwritten.
For simple backups (without compression or incremental backup), you can combine the initial backup and the --apply-log step using the option --backup-and-apply-log.
(from: http://ift.tt/1BeG5Oj)
One file that was not copied was the my.cnf file. You will want to have a separate script to copy this at regular intervals. If you put the mysqlbackup command in a cron or Windows Task Scheduler job, you can have it copy the my.cnf file as well.
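For example, a nightly cron entry might look something like the following. This is just a sketch: the schedule and paths are made up, and you would not want a real password on the command line in practice (mysqlbackup can also read credentials from the [mysqlbackup] group of an option file):
0 1 * * * /usr/local/meb/bin/mysqlbackup --defaults-file=/etc/my.cnf --user=root --password=XXXXX --backup-dir=/backups --with-timestamp backup-and-apply-log && cp /etc/my.cnf /backups/my.cnf.copy
The && ensures the my.cnf copy only happens if the backup itself succeeded.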
Now that we have a completed backup, we are going to copy the backup files and the my.cnf file over to a different server to restore the databases. We will be using a server that was set up as a slave to the server where the backup occurred. If you need to restore the backup to the same server, you will need to refer to this section of the mysqlbackup manual. I copied the backup files as well as the my.cnf file to the new server:
# pwd
/Users/tonydarnell/hotbackups
# ls -l
total 16
drwxrwxrwx 5 tonydarnell staff 170 Jul 19 15:38 2013-07-19_11-49-48
On the new server (where I will restore the data), I shut down the mysqld process (mysqladmin -uroot -p shutdown), copied the my.cnf file to the proper directory, and now I can restore the database to the new server, using the copy-back option. The copy-back option requires the database server to be already shut down; it then copies the data files, logs, and other backed-up files from the backup directory back to their original locations, and performs any required postprocessing on them.
(from: http://ift.tt/1eiULHh)
# /usr/local/meb/bin/mysqlbackup --defaults-file=/etc/my.cnf --backup-dir=/Users/tonydarnell/hotbackups/2013-07-19_11-49-48 copy-back
MySQL Enterprise Backup version 3.8.2 [2013/06/18] Copyright (c) 2003, 2012, Oracle and/or its affiliates. All Rights Reserved.
mysqlbackup: INFO: Starting with following command line …
/usr/local/meb/bin/mysqlbackup --defaults-file=/etc/my.cnf --backup-dir=/Users/tonydarnell/hotbackups/2013-07-19_11-49-48 copy-back
IMPORTANT: Please check that mysqlbackup run completes successfully.
At the end of a successful 'copy-back' run mysqlbackup
prints "mysqlbackup completed OK!".
--------------------------------------------------------------------
Server Repository Options:
--------------------------------------------------------------------
datadir = /usr/local/mysql/data
innodb_data_home_dir = /usr/local/mysql/data
innodb_data_file_path = ibdata1:40M:autoextend
innodb_log_group_home_dir = /usr/local/mysql/data
innodb_log_files_in_group = 2
innodb_log_file_size = 5M
innodb_page_size = Null
innodb_checksum_algorithm = innodb
--------------------------------------------------------------------
Backup Config Options:
--------------------------------------------------------------------
datadir = /Users/tonydarnell/hotbackups/2013-07-19_11-49-48/datadir
innodb_data_home_dir = /Users/tonydarnell/hotbackups/2013-07-19_11-49-48/datadir
innodb_data_file_path = ibdata1:40M:autoextend
innodb_log_group_home_dir = /Users/tonydarnell/hotbackups/2013-07-19_11-49-48/datadir
innodb_log_files_in_group = 2
innodb_log_file_size = 5242880
innodb_page_size = 16384
innodb_checksum_algorithm = innodb
innodb_undo_directory = /Users/tonydarnell/hotbackups/2013-07-19_11-49-48/datadir
innodb_undo_tablespaces = 0
innodb_undo_logs = 128
mysqlbackup: INFO: Creating 14 buffers each of size 16777216.
130719 15:54:41 mysqlbackup: INFO: Copy-back operation starts with following threads
1 read-threads 1 write-threads
130719 15:54:41 mysqlbackup: INFO: Copying /Users/tonydarnell/hotbackups/2013-07-19_11-49-48/datadir/ibdata1.
…..
130719 15:54:42 mysqlbackup: INFO: Copying /Users/tonydarnell/hotbackups/2013-07-19_11-49-48/datadir/comicbookdb/comics.ibd.
…..
130719 15:54:42 mysqlbackup: INFO: Copying /Users/tonydarnell/hotbackups/2013-07-19_11-49-48/datadir/mysql/innodb_index_stats.ibd.
130719 15:54:42 mysqlbackup: INFO: Copying /Users/tonydarnell/hotbackups/2013-07-19_11-49-48/datadir/mysql/innodb_table_stats.ibd.
130719 15:54:42 mysqlbackup: INFO: Copying /Users/tonydarnell/hotbackups/2013-07-19_11-49-48/datadir/mysql/slave_master_info.ibd.
130719 15:54:42 mysqlbackup: INFO: Copying /Users/tonydarnell/hotbackups/2013-07-19_11-49-48/datadir/mysql/slave_relay_log_info.ibd.
130719 15:54:42 mysqlbackup: INFO: Copying /Users/tonydarnell/hotbackups/2013-07-19_11-49-48/datadir/mysql/slave_worker_info.ibd.
…..
130719 15:54:43 mysqlbackup: INFO: Copying /Users/tonydarnell/hotbackups/2013-07-19_11-49-48/datadir/watchdb/watches.ibd.
…..
130719 15:54:43 mysqlbackup: INFO: Copying the database directory ‘comicbookdb’
…..
130719 15:54:43 mysqlbackup: INFO: Copying the database directory ‘mysql’
130719 15:54:43 mysqlbackup: INFO: Copying the database directory ‘performance_schema’
…..
130719 15:54:43 mysqlbackup: INFO: Copying the database directory ‘test’
…..
130719 15:54:43 mysqlbackup: INFO: Copying the database directory ‘watchdb’
130719 15:54:43 mysqlbackup: INFO: Completing the copy of all non-innodb files.
130719 15:54:43 mysqlbackup: INFO: Copying the log file ‘ib_logfile0’
130719 15:54:43 mysqlbackup: INFO: Copying the log file ‘ib_logfile1’
130719 15:54:44 mysqlbackup: INFO: Copy-back operation completed successfully.
130719 15:54:44 mysqlbackup: INFO: Finished copying backup files to ‘/usr/local/mysql/data’
mysqlbackup completed OK!
I can now restart MySQL. I have a very small database (less than 50 megabytes), and it took less than a minute to restore. If I had to rebuild my database using mysqldump, it would take a lot longer. If you have a very large database, the difference between using mysqlbackup and mysqldump could be measured in hours. For example, a 32-gigabyte database with 33 tables takes about eight minutes to restore with mysqlbackup. Restoring the same database with a mysqldump file takes over two hours.
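One caution about the restart step: since I ran the copy-back as root, the restored files in the MySQL data directory may end up owned by root, and mysqld will not be able to use them until ownership is corrected. This is the chown situation mentioned in the information-gathering table earlier. Something like the following, with the paths and the mysql user and group adjusted for your own system (the mysql.server script shown here is what tarball installs typically provide):
# chown -R mysql:mysql /usr/local/mysql/data
# /usr/local/mysql/support-files/mysql.server start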
An easy way to check that the databases match (assuming that I haven't added any new records to any of the original databases – which I haven't) is to use mysqldbcompare, one of the MySQL Utilities. I wrote about how to use it in an earlier blog about testing two replicated databases, but it will work here as well – see Using MySQL Utilities Workbench Script mysqldbcompare To Compare Two Databases In Replication.
The mysqldbcompare utility “compares the objects and data from two databases to find differences. It identifies objects having different definitions in the two databases and presents them in a diff-style format of choice. Differences in the data are shown using a similar diff-style format. Changed or missing rows are shown in a standard format of GRID, CSV, TAB, or VERTICAL.” (from: mysqldbcompare — Compare Two Databases and Identify Differences)
Some of the syntax may have changed for mysqldbcompare since I wrote that blog, so you will need to reference the help notes for mysqldbcompare. You would need to run this for each of your databases.
$ mysqldbcompare --server1=scripts:scripts999@192.168.1.2 --server2=scripts:scripts999@192.168.1.123 --run-all-tests --difftype=context comicbookdb:comicbookdb
# server1 on 192.168.1.2: … connected.
# server2 on 192.168.1.123: … connected.
# Checking databases comicbookdb on server1 and comicbookdb on server2
                                            Defn    Row     Data
Type      Object Name                       Diff    Count   Check
---------------------------------------------------------------------------
TABLE     comics                            pass    pass    pass

Databases are consistent.
# …done
You can try and run this for the mysql database, but you may get a few errors regarding the mysql.backup_history and mysql.backup_progress tables:
$ mysqldbcompare --server1=scripts:scripts999@192.168.1.2 --server2=scripts:scripts999@192.168.1.123 --run-all-tests --difftype=context mysql:mysql
# server1 on 192.168.1.2: … connected.
# server2 on 192.168.1.123: … connected.
# Checking databases mysql on server1 and mysql on server2
                                            Defn    Row     Data
Type      Object Name                       Diff    Count   Check
---------------------------------------------------------------------------
TABLE     backup_history                    pass    FAIL    SKIP

Row counts are not the same among mysql.backup_history and mysql.backup_history.
No primary key found.

TABLE     backup_progress                   pass    FAIL    SKIP

Row counts are not the same among mysql.backup_progress and mysql.backup_progress.
No primary key found.

TABLE     columns_priv                      pass    pass    pass
TABLE     db                                pass    pass    pass
TABLE     event                             pass    pass    pass
TABLE     func                              pass    pass    pass
TABLE     general_log                       pass    pass    SKIP

No primary key found.

TABLE     help_category                     pass    pass    pass
TABLE     help_keyword                      pass    pass    pass
TABLE     help_relation                     pass    pass    pass
TABLE     help_topic                        pass    pass    pass
TABLE     innodb_index_stats                pass    pass    pass
TABLE     innodb_table_stats                pass    pass    pass
TABLE     inventory                         pass    pass    pass
TABLE     ndb_binlog_index                  pass    pass    pass
TABLE     plugin                            pass    pass    pass
TABLE     proc                              pass    pass    pass
TABLE     procs_priv                        pass    pass    pass
TABLE     proxies_priv                      pass    pass    pass
TABLE     servers                           pass    pass    pass
TABLE     slave_master_info                 pass    pass    pass
TABLE     slave_relay_log_info              pass    pass    pass
TABLE     slave_worker_info                 pass    pass    pass
TABLE     slow_log                          pass    pass    SKIP

No primary key found.

TABLE     tables_priv                       pass    pass    pass
TABLE     time_zone                         pass    pass    pass
TABLE     time_zone_leap_second             pass    pass    pass
TABLE     time_zone_name                    pass    pass    pass
TABLE     time_zone_transition              pass    pass    pass
TABLE     time_zone_transition_type         pass    pass    pass
TABLE     user                              pass    pass    pass

Database consistency check failed.
# …done
For example, when you compare the mysql.backup_history tables, the original database will have two entries – as I ran mysqlbackup twice. But the second backup entry doesn’t get entered until after the backup has occurred, and it isn’t reflected in the backup files.
Original Server:
mysql> select count(*) from mysql.backup_history;
+----------+
| count(*) |
+----------+
|        2 |
+----------+
1 row in set (0.00 sec)
Restored Server:
mysql> select count(*) from mysql.backup_history;
+----------+
| count(*) |
+----------+
|        1 |
+----------+
1 row in set (0.00 sec)
For the mysql.backup_progress tables, the original database has ten rows, while the restored database has seven.
There are many options for using mysqlbackup, including (but not limited to) incremental backup, partial backup, compression, backup to tape, point-in-time recovery (PITR), partial restore, etc. If you are running MySQL in a production environment, then you should look at MySQL Enterprise Edition, which includes MySQL Enterprise Backup. Of course, you should always have a backup and recovery plan in place. Finally, if and when possible, practice restoring your backup on a regular basis, to make sure that if your server crashes, you can restore your database quickly.
 
Tony Darnell is a Principal Sales Consultant for MySQL, a division of Oracle, Inc. MySQL is the world’s most popular open-source database program. Tony may be reached at info [at] ScriptingMySQL.com and on LinkedIn.
via Planet MySQL

These ridiculously detailed aerial photos of London are so stunning

The weather isn't great and the pubs close too early and the food is often better in other cities, and yet London is still one of the capitals of the world, packed with so much history. Photographer Vincent Laforet took these amazing aerial shots of London, and seeing the city from overhead like this reminds you why that is.

The buildings may be old and the streets may be confusing when you’re down low but boy, it looks great from above.

Click on the magnifying glass to zoom and see these images up close at full screen because the detail is absolutely phenomenal. It’s the only way to do them justice.

The full set of Vincent's London photos can be seen here on Storehouse. You can also sign up to pre-order a book on Vincent's Air series here. The entire Air Series in Europe is sponsored by G-Technology.

Vincent Laforet is a director, photographer, and a pioneer in tilt-shift, aerial photography, and in HD DSLR cameras for shooting film. He won the 2002 Pulitzer Prize for Feature Photography for his images of Afghanistan and Pakistan's conflicts after 9/11, plus three prizes at the 2010 Cannes Lions International Advertising Festival. Vanity Fair, The New York Times Magazine, National Geographic, Sports Illustrated, Time, Newsweek, Life and many other national and international publications have commissioned his services.

Follow him on his blog, Twitter, Facebook, Storehouse, and Instagram.

You can buy his book Visual Stories: Behind the Lens with Vincent Laforet here.


This is part of a series in which we are featuring futuristic, striking, and just beautiful photography. If you are a photographer with awesome work, please drop me a line here.


SPLOID is delicious brain candy. Follow us on Facebook or Twitter.

via Gizmodo

How To Build The Only Five Campfires You’ll Ever Need

The most fundamental outdoor skill is also often one of the most misunderstood. Learn these five campfires and you’ll be able to cook food, scare off wild animals, stay warm or just have a bonfire on the beach. They’re simple, but everyone can probably learn something here.

Fire Basics

A fire needs three things to work: fuel, heat and oxygen. What makes these fires different is largely how the ratios of those three things are controlled.

If you’re outdoors and didn’t just bring a commercial fire starter and prepared firewood, then you’ll need a few basic tools to prepare your wood and light your fire. A fixed-blade knife, ferro rod and vaseline-coated cotton balls should be considered the minimum necessary. To that, you can add an axe or saw if you want to reduce the labor required significantly.

You'll need dry wood to get a fire going. Best case scenario, you can just pick up dry, crackly twigs off the ground. But, in bad weather, you'll need to work a bit harder. Even in heavy rain or snow, you'll still be able to find dry wood inside standing, dead trees and branches. A saw or axe will help you cut those down, then chop them into usable lengths. An axe or large knife will then help you split those logs, which is what allows you to access the dry wood inside and reduces the size of the pieces you have to work with. You can produce everything down to kindling from the dry center of dead trees and branches, but you'll need to find or create another source of tinder to spark your fire into life.

Dry grasses are one good source of tinder, or you can whittle a feather stick from a dry sliver of wood. You can also try to find a mouse or bird’s nest, pull lint from your belly button or any number of other tricks. I just carry those vaseline coated cotton balls. The petroleum jelly keeps the insides dry, so you just pull them apart, hit that with a spark from your ferro rod, et voila, two minutes of four-inch-high flame.

When processing your wood, try to create stuff that looks like commercial firewood. You need a manageable length and for it to be split into fourths. This helps expose the dry center to the flames while creating more exposed angles from which fire can catch. Big round logs coated in wet bark will be reluctant to light and burn.

Once you’ve prepared your materials, you’re ready to build a fire. Let’s look at the various types, how to make them and what they can do.

Tipi

The simplest and most effective way to start any kind of fire. You’ll use a tipi made from kindling to start most of the rest of these styles. We’re over-simplifying with large pieces of wood here to make it easy to follow. If you want to use a tipi to get a fire going, build one with the smallest, driest, most exposed pieces of wood possible. Then, either put your tinder in its center and light it or light it first, then scoot it in. Don’t forget to leave a “door” through which to do that!

The basic idea is to create a structure that concentrates the flame while allowing plenty of air to enter. Start with enough wood on your tipi to get going, then just add more slowly as the fire builds. Start small and work up to larger pieces as appropriate; you’ll develop a feel for when and how large with practice.

Sticking a forked branch in the ground as a center pole for your tipi makes building one easier. And you can scale a tipi all the way from the very first pile of kindling on up to a massive bonfire. You can even build the entire thing before lighting, just remember that door and remember the necessary progression of wood sizes as you work outwards and upwards.

Log Cabin

Want a fire that requires very little maintenance? Building a log cabin creates a structure that will progressively burn for a long time, while lighting quickly and easily from a tipi or other kindling structure built inside it. These are great for getting bonfires going quickly or just building a campfire you can then ignore while you prep food or perform other chores. Use the thickest logs for the base, then go a bit smaller as you work up. You can also reduce the circumference into sort of a pyramid for something that will catch a bit quicker.

Platform

Want a nice bed of coals to cook on? You build a pyramid like a solid log cabin, with the largest logs at the base and something about the size of your wrist on the top. Then, you just light a fire on the top and let it burn down through the logs, creating a big, thick bed of hot coals. And you don’t need to wait for it to burn down all the way to start cooking. If you have cast iron cookware, you can place that directly on the hot coals as soon as that first layer of logs is burning solidly.

Using more and bigger logs, this style is also known as the “upside down” fire, popular for its ability to burn for a long time, untended. Again, put the biggest logs on the bottom and scale up to smaller stuff, then light a fire on the top. With practice, you’ll figure out the right size wood to use to create a fire capable of burning on its own, all night.

These can be a little difficult to get going. The trick is to build a large enough tipi on top to create a bed of coals that will be capable of burning down through the first layer, igniting that, creating a larger bed of coals and so on.

Star

You see this style in cowboy movies for a reason. It uses minimal wood and effort to burn for a very long time. That makes the star best for all-night fires or use in a fire pit.

Get going with a solid tipi fire in the middle, then place logs around it at three or five points, like the points of a star. Slowly push the logs in further as they burn down.

Want to create a maintenance free fire? Simply dig a hole and build the star in it so the logs slide down as they burn on their own. Works equally well in the fire pit you have in your yard or on your deck.

Lean-To

Need to protect your fledgling fire from wind and rain as you get it going? Building a lean-to does just that. There are two main methods for doing this. The first, and easiest, is to simply lay a big log down as a windbreak, then lean your progressively larger firewood on that. Demonstrated here is the cantilevered branch approach, which allows you to build something larger. To do it this way, find a green branch to serve as a ridge pole and either prop it up with a forked stick or weight it as we did here. Then, just build that lean-to starting with small kindling and working your way up to larger pieces of wood. Build a little tipi way inside the lean-to, where it's protected from the wind and rain, and by the time the whole thing collapses, it'll be going strong enough to be impervious to weather.

Fire Tips

There's a bunch of ways to build these basic types of campfires, none of which are wrong. The real key here is to practice, find a method you're most comfortable with, then build the style appropriate for your unique needs on a given night.

Practicing the hard way, with a knife, ferro rod and tinder and learning to find and process your own wood with limited tools helps develop a key skill not only for survival, but just generally being comfortable and confident outdoors.

And there’s more you can do with fire beyond just build one. Want to keep warm? Find a natural reflector like a boulder, shelf or the back of a shallow cave and build the fire so you sit between the reflector and it. Build another reflector from stacked logs and place it on the opposite side of the fire if you want to be even warmer.

High wind? Dig a hole with a shallow side facing toward the wind and a steep side downwind; that'll funnel air to your fire without blowing it out.

Want to cook over a fire? Build it in a narrow trench able to support your cookware or position two green logs to do the same.

Want to hang a pot? Lash a tripod of green wood together and stand it over the fire.

Do you use these types of fires or others? Which is your favorite and why?

IndefinitelyWild is a new publication about adventure travel in the outdoors, the vehicles and gear that get us there and the people we meet along the way. Follow us on Facebook, Twitter and Instagram.

via Gizmodo

Truly hilarious video shows how Transformers ruins all movies

I laughed out loud and rolled on the floor and laughed my butt off so much at this video that shows how Transformers ruins our favorite movies. Movies you've seen before carry along normally and then… BOOM, a Transformer comes to destroy everyone in the scene. The editing of this video is perfect; it shows movies like Speed and Little Miss Sunshine and 50/50, among others.


SPLOID is delicious brain candy. Follow us on Facebook or Twitter.

via Gizmodo

After a Year of Secret Field-Testing, Brain-Controlled Bionic Legs Are Here

An anonymous reader writes: Today, an Icelandic prosthetic-maker announced that two amputees have been testing brain-controlled bionic legs for over a year. The devices respond to impulses in the subjects’ residual limbs, via sensors that were implanted in simple, 15-minute-long procedures. "When the electrical impulse from his brain reaches the base of his leg, a pair of sensors embedded in his muscle tissue connect the neural dots, and wirelessly transmit that signal to the Proprio Foot. Since the command reaches the foot before the wearer’s residual muscles actually contract, there’s no unnatural lag between intention and action." This is a huge step forward (sorry) for this class of bionics. It may seem like a solved problem based on reports and videos from laboratories, but it’s never been exposed to real world use and everyday wear and tear like this.

Read more of this story at Slashdot.

via Slashdot

How to Migrate From an Old NAS to a New One Overnight with rsync

A NAS, or network-attached storage device, is great for storing files you can reach from any computer in the house. But when you upgrade to a new one, you’re stuck copying everything over by hand, swapping drives, and risking data loss. Here’s a much more reliable method.

Not too long ago, I picked up a brand new NAS with way more space than my old one. It's great, but as soon as I set it up I was stuck with the task of transferring terabytes of data from the old NAS to the new one. I didn't want to physically move drives, since the old drives were smaller than the new ones. Since I was moving from one Synology NAS to another, I could have used their migration tool, but if I were moving from a Synology to a ReadyNAS, or from FreeNAS to Synology, I wouldn't have had that option. I wanted a platform-agnostic method. After mulling over the dozen or so ways to get the job done, I settled on one of our old command line favorites: rsync.

Why rsync Is the Best Tool for the File Transfer Job

I can hear you now: “Why use the command line for this? I could use [X METHOD] to do the same thing,” and you’d be right! There are a lot of ways to get the job done, but rsync has a few significant advantages over just opening a pair of windows on your PC and dragging files from one to the other. Here are the big ones:

  • rsync is probably already on your NAS. Whether your NAS is an off-the-shelf enclosure packed with drives or a DIY model running your preferred operating system, odds are it's running a variant of Linux. That means the command line—and rsync—are already installed. Sure, if you're moving from one brand of NAS to another of the same brand, there may be built-in apps to help, but the beauty of rsync is that it works anywhere, no matter the brand and no matter how big the drives or volumes are.
  • rsync retains metadata. That means things like file ownership, original creation date, modification dates, file permissions, and all that jazz are retained when your copy lands in its new destination. Users and permissions on the new system notwithstanding, if you don't want all of your files to get brand new creation dates or generate new thumbnails, or if things like revision history are important to you, this is a good method to make sure that information is preserved.
  • rsync eliminates the middleman. When you use your PC to copy files from one system to another, your computer is acting as a middleman between the two NASes. That creates another point of failure in the transfer, not to mention keeps your main computer unnecessarily busy. If you’ve ever done long, multi-file copy operations in Windows, you know this can be annoying at the least, and even the smallest thing will make that copy crap out. When it does, you’re stuck trying to figure out where the copy failed, why, and where to start again. Since rsync is a device-to-device copy, it runs whether your computer is on or not.
  • rsync supports resumed transfers, diffs, and syncing. If for some reason your transfer dies, like a freak power outage, a system error, or any other problem, rsync is capable of picking up right where you left off without issue. Similarly, since rsync was designed to synchronize directories (and keep them in sync), if you have any need to keep your old NAS in sync with your new one, or you accidentally drop new files on your old NAS that should be on the new one, rsync can move them for you, and will only touch the changed or new files in the process.
  • rsync minimizes network overhead. This may not matter to you if you do this overnight—and you probably should—but one of the best things about rsync is that it doesn’t carry the same kind of network load that other tools do. That means your other backups or downloads won’t slow to a crawl just because you’re using it, and other people on your network probably won’t notice anything’s going on at all.

Like we mentioned, rsync is designed for synchronizing files, not just copying them. We’ve shown you how to mirror systems with it before, and how to sync iTunes with any USB device, but it’s also useful just for straight machine-to-machine copies because it’s so flexible. It can even give you a handy progress bar so you can make sure everything goes smoothly.

Step One: Enable Remote Access and Choose Your Transfer Method

The other nice thing about rsync is that it’s just a command line away. You can’t just fire up a terminal window and start copying though—you’ll need to do a little initial setup.

First, you'll need to make sure that SSH access is enabled on both your old and new NASes. SSH allows you to log in to your NAS from the command line, using a secure shell. On my Synology NAS, this was right under Control Panel, labeled "Terminal & SNMP." You'll likely find it in a similar place on your NAS, but it's usually under System, Remote Access, or some other synonym for "remote management."

Once you have remote access enabled on both NASes, and you have the admin (or root) credentials for both systems, you’re ready to go. Now we’re ready to log in to our old NAS and push our files to the new one.

How to Use rsync to Transfer Files

Now it's time to get down to business. Use your favorite SSH tool (in Linux or OS X, you can just open a terminal window; in Windows, I like PuTTY for this) to log in to your old NAS. In OS X or Linux, just open a terminal window and type this:

slogin admin@[IP ADDRESS OF YOUR OLD NAS]

You’ll be prompted to log in, and once you do, you’ll get dumped into a new command line representing your NAS. In Windows, open PuTTY. In the host name field, type in the IP address of your old NAS, make sure SSH is selected, and click Open. You’ll be prompted for a username (admin or root) and a password. Once you’re logged in, you’ll be dropped at the command line for your NAS.

Now it’s time to run rsync. The syntax is pretty simple. You’ll need to tell rsync how to log in to the remote device, and where to put the files, all in one command. Here’s how:

rsync -azP [SOURCE DIRECTORY] admin@[IP ADDRESS OF YOUR NEW NAS]:[DESTINATION DIRECTORY]

So, let’s say I have a folder on my old NAS called “old_movies,” and a folder on the new one called “new_movies.” To copy everything inside “old_movies” to “new_movies,” the command would look like this:

rsync -azP old_movies/ admin@192.168.1.X:new_movies

Where 192.168.1.X is the IP address of your new NAS. Once you run this, you’ll be prompted to verify that the SSH key for the new NAS is indeed correct, and that you want to store it for future use. You may get a warning about the key the first time you connect, but that’s okay. Type “yes” to continue. You’ll be prompted to log in to the new NAS with the admin password. Go ahead and do that too. If everything else was successful, as soon as you do, your file sync will begin.

Before we go on, let’s talk about those flags—the “-azP”—you see in the command above. They’re important, and here’s why:

  • The “-a” flag in there stands for “archive,” and it’s important to include. It makes sure that the sync command is recursive (meaning any sub-folders and files inside of old_movies are copied too) and it’s important for preserving all of those modification dates, symbolic links, permissions, and other goodies we talked about earlier.
  • The “-P” flag combines two other useful flags into one here that you’ll want to use. It combines the flags for “progress” and “partial,” and when used you’ll get a progress dialog at the command line that shows you which file is currently transferring, what percentage of that transfer is complete, and how many more files are left to check. As each file completes, you’ll see an ever-growing list of completed file transfers, which is great for making sure everything transfers successfully. It also allows you to easily resume suspended or interrupted transfers. Combined, you can see how it’ll show you which file was the last one to go, where it failed, and if it failed, give you the option to resume. It’s a pretty powerful combination.
  • The “-z” flag is a handy rsync tool that compresses the files when transferred, which gives you that whole “light on bandwidth” benefit we discussed. If the files are already compressed, you won’t get much benefit here, but if they’re not, this will get the job done without slowing down the rest of your home network.

You should double-check your command before you copy, just to make sure you’re copying to the right directory and directory structure.
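A convenient way to do that double-check is rsync's dry run flag, -n (or --dry-run), which prints everything the command would transfer without actually copying a single file:

rsync -azPn old_movies/ admin@192.168.1.X:new_movies

If the file list and destination look right, run the same command again without the -n and let it rip.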

When it comes time to migrate your whole NAS, you can do this directory by directory, or you could do it like I did—in one swoop, overnight. Do one small directory as a test to make sure you have your syntax right, and when that works, move up to the parent directory, and copy the whole thing at once. It’ll take ages, but when you’re finished, your new NAS will be set up exactly like your old one was, all ready to go.

If you’re a stickler and prefer granular control, create your directories and shares on the new NAS, then just copy the contents of each directory from the old NAS to the new one. That way you’ve created the directories and granted them permissions using your NAS’s interface, and the files inside will retain whatever permissions they had.
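If you go that route, a small shell loop saves some typing. This is only a sketch: the share names here are made up, and the /volume1 path is a Synology convention, so substitute whatever your NAS actually uses:

for share in movies music photos; do
  rsync -azP /volume1/$share/ admin@192.168.1.X:/volume1/$share
done

Note the trailing slash on each source path, which tells rsync to copy the contents of the share into the existing destination directory rather than creating a nested folder inside it.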

Troubleshooting Issues, Updating Directories, and Additional Reading

If you run into issues with the copy, or with other odd, quirky problems, do some Googling for your error messages and the OS of your old or new NAS. You’ll be surprised what you’ll turn up, and how many people had the same problem you did. In my case, when I was migrating my Synology NAS, I ran into some quirky “rsync service is not running” errors when I tried to push or pull files, only to find out that I needed to make sure Synology’s “Network Backup Service” and “Network Destination Service” were enabled on both the source and destination NAS—two checkboxes buried deep on the Service tab of “Info Center” in the Control Panel. I also had to make sure I logged in as root—logging in as admin wouldn’t cut it. Luckily, Stefan came to my rescue with this helpful post on the error.

Similarly, Justin Ellingwood’s rsync walkthrough on DigitalOcean’s community forums is super helpful if you want to learn the ins and outs of rsync for the purpose of moving files across systems, and helped me identify the best flags to use and when to use that trailing slash and when not to. If you accidentally write to the old NAS instead of the new one, and need to run a differential sync—that is, use rsync to just copy the changed or different files since your last sync—he has instructions on how to do that, too. If you’re an old hat with rsync, this walkthrough at How-to Geek will help you take those skills to the next level.

Once you have all of your files migrated, you’re free to do whatever you want with the old NAS. Turn it into a download box, a home theater system, use it for unimportant files or backups, whatever you want. If you plan to sell or get rid of it though, make sure you wipe it properly and clean it up beforehand. With luck, you can get some cash for it.

Title photo made using Denis Dubrovin.


via Lifehacker