I Brushed My Teeth With the World’s First Bluetooth Toothbrush

Being an adult is boring. It’s a long checklist of necessary acts of maintenance that, in the end, fail us. That’s why we rely on fun-enhancing phenomena like color runs and adult kickball leagues—and gadgets like Oral-B’s new Bluetooth-enabled toothbrushes and app, which are designed to make one of life’s most mundane tasks seem like fun.

At first glance, it sounds like a punchline to a New Yorker cartoon about the wiles of modern technology. After all, who needs an app to tell them how to scrape their teeth down every morning? But after I had my first cavity last month, I was ready to try anything that could right the wrong creeping from molar to molar inside my mouth.

What Is It?

Oral-B’s SmartSeries 7000 is a Bluetooth 4.0-enabled toothbrush that connects to a free app that tracks how often you’re brushing, whether you’re doing it right, and for how long. It will retail for $220 when it’s available in the US in January (it’s already on sale in the UK). A cheaper version, the SmartSeries 5000, will retail for $160 when it arrives this fall.

For all that money you get the following bells and whistles: The SmartSeries counts to 120 seconds for you, vibrates when you’re pushing too hard, records your daily "times," and tries to entertain you with tips and news headlines while you brush. One of the headlines was about prostitutes.

Why Does It Matter?

In the most basic sense, it matters because keeping your teeth clean matters. Going to the dentist is scary and terrible enough when it’s for routine maintenance. Cavities and root canals are another level of (potentially avoidable) nightmare.

But more specifically, given the fact that internet connectivity has wormed its way into everything from your dog’s fitness level to the very cup you’re swilling your morning beer from, a connected toothbrush seems inevitable. The fact that we’re all still scraping at our gums with bits of $2 molded plastic seems hopelessly old-fashioned. What are we, indentured servants?

With this new-fangled device, Oral-B is banking on the popularity of the “quantified self” movement, in which consumers track every aspect of their lives and record them in minute detail for later meditation. The whole point is to make the utterly mundane task of going about adulthood a little less boring. Even oral hygiene, a task most of us feel nothing but an ever-so-slight dread about.

Design

The app is, surprisingly enough, a joy: it’s pretty, well-designed, and, against all odds, pretty damn fun to use.

But what about the toothbrush itself? It feels slightly clunky compared to the sweet Sonicare toothbrush I was given as a gift (a hint?) by my parents years ago; its motor is quite loud, and its body is a little larger than you’d expect. It has six cleaning modes, which you adjust with a button on the body, and arrives with a nice-looking case and small recharging port.

If you don’t mind the bulk, this is a brush you won’t be ashamed to have in your bathroom (unless you mind people asking why the hell your toothbrush is Bluetooth-enabled, as many guests have asked me over the past few weeks). There’s a band of plastic-covered LEDs at the oscillating head which, thanks to a pressure monitor, flash red when you’re pushing on your teeth too hard; when you’re brushing, the red glow is clearly visible just in front of your face. Other than that, it’s pretty standard as far as mid-2010s design goes: sinuous curves and plenty of white plastic.

Using It

The point of this product is to trick you into taking better care of yourself. As someone who’s managed to outsmart (or at least outlazy) a whole host of other life-improving gadgets, I was not hopeful.

But the app is actually a case study in how to leverage user experience design to get people to do things they don’t enjoy. To do so, it uses a classic good dentist/bad dentist strategy, starting with positive reinforcement—good job messages, cool graphics showing your progress—followed up with negative warning signs—a red flashing light when you’re pushing too hard, or an iOS notification that you haven’t used your toothbrush for a while.

But when all else fails, which it inevitably does because humans are lazy, the app uses the oldest trick in the book: distraction. As you brush, it pulls up headlines. I read a Google news update as I focused on my lower right quadrant. Another day, it showed me some pictures of lion cubs.

Needless to say, I kept brushing. Then it told me some things about how coffee is turning my teeth yellow. When I finished, a little emoticon popped up: Bravo.

Yet humans are pretty good at evading unpleasant or boring tasks. By the end of the first week, I wasn’t really looking at the app when I brushed. Soon I wasn’t even remembering to bring my phone into the bathroom so the toothbrush would link up to record my brushing.

Oral hygiene is an odd segue into a discussion of humans’ behavioral flaws, but this thing revealed mine. I took the brush on vacation, where I was constantly distracted, tired, or in a hurry, and my progress… suffered.

Like

Despite my misgivings, the app is actually the highlight. And luckily, anyone can download it for free and use it manually with their own toothbrush. It’s well-designed and genuinely fun to use, if a bit of overkill for anyone not interested in tracking the minutiae of their own life.

It also bears mentioning that setup was dead simple, which is important given that this would make a great gift for a kid or an older adult who might not be well-versed in setting up Bluetooth-enabled gadgets.

The same goes for usability within the app itself. Oddly enough, the smartest thing about this toothbrush is the UX design, not the actual product design. Other companies looking to develop their own mobile applications would do well to study this one.

No Like

One concern, for me, is that the brush won’t record my activity unless my phone is nearby while I’m brushing. Now, sure, that might not seem like a huge deal. After all, who doesn’t have their phone nearby these days, even in the bathroom (especially in the bathroom)? For starters, tired people. Drunk people. Forgetful people. People who just want to scrub the bare minimum of gunk off their teeth and get to sleep.

In theory, the SmartSeries is designed to store information on your last 20 sessions so that when you do connect with the app, your history is synced. I had trouble getting this functionality to work, and it turned out I was doing it wrong: You need to go to the main screen and then hit the "mode" button to initiate the sync. It’s not exactly intuitive, but it works.

The brush itself is fine, though I did notice an ever-so-slight aftertaste that filters through—presumably from the motor—at times. It doesn’t make brushing unpleasant, but it’s worth noting given the price of this thing.

Should You Buy It?

Do you need a $220 toothbrush that tracks your every move and analyzes your technique? No. Do you want one? Judging from the straw poll I very scientifically carried out over the past two weeks, a surprising number of people do. And given that you’re spending the money on improving a part of your body you rely on to survive, some may find it easy to rationalize as a purchase.

The Oral-B Bluetooth brush does exactly what it says it does. It will get you thinking about your routine more than you normally do. The question is how much you’re willing to spend to gamify even the most banal aspects of life as an adult human in 2014, right down to taking care of your chompers.

via Gizmodo

Recover Corrupt MySQL Database

The unDROP for InnoDB tool can be used to recover a corrupt MySQL database. In this post we will show how to repair a MySQL database whose files have become corrupted, even when innodb_force_recovery=6 doesn’t help.
Corruption of an InnoDB tablespace can have many causes. A dying hard drive can write garbage, in which case a page checksum will be wrong. InnoDB then reports to the error log:
InnoDB: Database page corruption on disk or a failed
InnoDB: file read of page 4.
MySQL is well known for its fragile start-up script: a simple upgrade procedure may end up with two mysqld processes writing to the same tablespace, which leads to corruption too. Sometimes a power failure not only corrupts the InnoDB files, but leaves the file system itself unusable for the operating system.
InnoDB is very strict when it works with pages. If a checksum doesn’t match, or some field in a page header carries an unexpected value, InnoDB wisely prefers to crash rather than risk further corruption.
The manual suggests starting MySQL with the innodb_force_recovery option. The purpose of this option is to let the user dump their data; it provides no means to repair the tablespace. The user must drop the tablespace, create a new one, and load the data back.
innodb_force_recovery accepts values from one to six: the higher the value, the more checks InnoDB disables.
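For reference, the option usually goes in the [mysqld] section of my.cnf; a minimal sketch:
[mysqld]
# start at 1 and raise it only if MySQL still crashes; values of 4 and above
# can permanently lose data, so copy the files away before experimenting
innodb_force_recovery = 1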
In this post we will assume MySQL cannot start even with innodb_force_recovery=6.
The recovery toolkit works directly with the InnoDB files: it can read records straight out of an InnoDB page. If some part of the page is damaged, it simply skips that piece and continues reading records further into the page.
So, let’s corrupt some InnoDB file and recover the table.
InnoDB corruption
For the sake of simplicity we will overwrite part of the .ibd file in the area with user data.
In real life the corruption may be anywhere in the PRIMARY index.
In the middle of the PRIMARY index of table sakila.actor we will rewrite the data with 128 ‘A’ characters:
0000C058 00 00 00 02 00 32 01 00 02 00 1C 69 6E 66 69 6D 75 6D 00 05 …..2…..infimum..
0000C06C 00 0B 00 00 73 75 70 72 65 6D 75 6D 07 08 00 00 10 00 29 00 ….supremum……).
0000C080 01 00 00 00 00 05 1E 9F 00 00 01 4D 01 10 50 45 4E 45 4C 4F ………..M..PENELO
0000C094 50 45 47 55 49 4E 45 53 53 43 F2 F5 A9 08 04 00 00 18 00 26 PEGUINESSC………&
0000C0A8 00 02 00 00 00 00 05 1E 9F 00 00 01 4D 01 1A 4E 49 43 4B 57 …………M..NICKW
0000C0BC 41 48 4C 42 45 52 47 43 F2 F5 A9 05 02 00 00 20 00 21 00 03 AHLBERGC……. .!..
0000C0D0 00 00 00 00 05 1E 9F 00 00 01 4D 01 24 45 44 43 48 41 53 45 ……….M.$EDCHASE
0000C0E4 43 F2 F5 A9 05 08 04 00 28 00 27 00 04 00 00 00 00 05 1E 9F C…….(.’………
0000C0F8 00 00 01 4D 01 2E 4A 45 4E 4E 49 46 45 52 44 41 56 49 53 43 …M..JENNIFERDAVISC
0000C10C F2 F5 A9 0C 06 00 00 30 00 2C 00 05 00 00 00 00 05 1E 9F 00 …….0.,……….
0000C120 00 01 4D 01 38 4A 4F 48 4E 4E 59 4C 4F 4C 4C 4F 42 52 49 47 ..M.8JOHNNYLOLLOBRIG
0000C134 49 44 41 43 F2 F5 A9 09 05 00 00 38 00 28 00 06 00 00 00 00 IDAC…….8.(……
0000C148 05 1E 9F 00 00 01 41 41 41 41 41 41 41 41 41 41 41 41 41 41 ……AAAAAAAAAAAAAA
0000C15C 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 AAAAAAAAAAAAAAAAAAAA
0000C170 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 AAAAAAAAAAAAAAAAAAAA
0000C184 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 AAAAAAAAAAAAAAAAAAAA
0000C198 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 AAAAAAAAAAAAAAAAAAAA
0000C1AC 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 41 AAAAAAAAAAAAAAAAAAAA
0000C1C0 41 41 41 41 41 41 41 41 41 41 41 41 41 41 4E 4B 43 F2 F5 A9 AAAAAAAAAAAAAANKC…
0000C1D4 05 09 00 00 58 00 28 00 0A 00 00 00 00 05 1E 9F 00 00 01 4D ….X.(…………M
0000C1E8 01 6A 43 48 52 49 53 54 49 41 4E 47 41 42 4C 45 43 F2 F5 A9 .jCHRISTIANGABLEC…
0000C1FC 04 04 00 00 60 00 22 00 0B 00 00 00 00 05 1E 9F 00 00 01 4D ….`."…………M
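If you want to reproduce this experiment on a throw-away copy of actor.ibd, one possible way is the sketch below (it assumes bash; the 0xC14E offset is simply where the run of 'A' bytes starts in the dump above):
# overwrite 128 bytes inside the PRIMARY index page with 'A' characters
printf 'A%.0s' {1..128} | dd of=actor.ibd bs=1 seek=$((0xC14E)) conv=notrunc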
Corrupted InnoDB table crashes MySQL
When MySQL reads the page with the damaged user data, the checksum does not match and the server crashes.
mysql> SELECT COUNT(*) FROM sakila.actor;
+----------+
| COUNT(*) |
+----------+
|      200 |
+----------+
ERROR 2013 (HY000) at line 1: Lost connection to MySQL server during query
Before the crash, MySQL writes to the error log what exactly went wrong and dumps the faulty page:
Version: '5.6.19-67.0' socket: '/var/lib/mysql/mysql.sock' port: 3306 Percona Server (GPL), Release 67.0, Revision 618
InnoDB: Database page corruption on disk or a failed
InnoDB: file read of page 4.
InnoDB: You may have to recover from a backup.
2014-07-14 20:18:44 7f060bfff700 InnoDB: Page dump in ascii and hex (16384 bytes):
len 16384; hex 1bce9a5000000004ffffffffffffffff0000000026c3095945bf00000000000000000
Recovering from InnoDB table corruption
When you see corruption in an InnoDB tablespace, the first thing to try is to start MySQL with the innodb_force_recovery option. It makes sense to try all values, starting from one and going up to six.
We assume that MySQL doesn’t start even with innodb_force_recovery=6, or that it starts but any SELECT crashes it.
The recovery plan is as follows:
1. Split the corrupted InnoDB tablespace into pages; sort the pages by type and index_id.
2. Fetch the records from the PRIMARY index of the table.
3. DROP the corrupted table and create a new one.
4. Load the records back into MySQL.
We will need to parse two tablespaces: ibdata1 and actor.ibd (since innodb_file_per_table=ON). The InnoDB dictionary is stored in ibdata1; we need it to learn the index_id of the PRIMARY index of table sakila.actor.
Split the corrupted InnoDB tablespace
root@test:~/recovery/undrop-for-innodb# ./stream_parser -f /var/lib/mysql/ibdata1
Opening file: /var/lib/mysql/ibdata1
File information:
ID of device containing file: 64768
inode number: 8028
protection: 100660 (regular file)
number of hard links: 1
user ID of owner: 106
group ID of owner: 114
device ID (if special file): 0
blocksize for filesystem I/O: 4096
number of blocks allocated: 36864
time of last access: 1406832698 Thu Jul 31 14:51:38 2014
time of last modification: 1406833058 Thu Jul 31 14:57:38 2014
time of last status change: 1406833058 Thu Jul 31 14:57:38 2014
total size, in bytes: 18874368 (18.000 MiB)
Size to process: 18874368 (18.000 MiB)
All workers finished in 0 sec
Now actor.ibd’s turn
root@test:~/recovery/undrop-for-innodb# ./stream_parser -f /var/lib/mysql/sakila/actor.ibd
Opening file: /var/lib/mysql/sakila/actor.ibd
File information:
ID of device containing file: 64768
inode number: 8037
protection: 100660 (regular file)
number of hard links: 1
user ID of owner: 106
group ID of owner: 114
device ID (if special file): 0
blocksize for filesystem I/O: 4096
number of blocks allocated: 224
time of last access: 1406832349 Thu Jul 31 14:45:49 2014
time of last modification: 1406832300 Thu Jul 31 14:45:00 2014
time of last status change: 1406832300 Thu Jul 31 14:45:00 2014
total size, in bytes: 114688 (112.000 kiB)
Size to process: 114688 (112.000 kiB)
All workers finished in 0 sec
Recover InnoDB dictionary
We need to know the index_id of the PRIMARY index of table sakila.actor. See more about the InnoDB dictionary. For now we will just get the index_id of sakila.actor:
root@test:~/recovery/undrop-for-innodb# ./c_parser -4f pages-ibdata1/FIL_PAGE_INDEX/0000000000000001.page -t dictionary/SYS_TABLES.sql |grep actor
000000000504 85000001320110 SYS_TABLES "sakila/actor" 13 4 1 0 0 "" 1
00000000050D 8E0000013B0110 SYS_TABLES "sakila/film_actor" 20 3 1 0 0 "" 8

root@test:~/recovery/undrop-for-innodb# ./c_parser -4f pages-ibdata1/FIL_PAGE_INDEX/0000000000000003.page -t dictionary/SYS_INDEXES.sql |grep 13
000000000300 810000012D01D3 SYS_INDEXES 11 13 "REF_IND" 1 0 0 304
000000000504 85000001320178 SYS_INDEXES 13 15 "PRIMARY" 1 3 1 3
000000000504 850000013201A6 SYS_INDEXES 13 16 "idx_actor_last_name" 1 0 1 4
000000000505 860000013301CE SYS_INDEXES 14 17 "PRIMARY" 1 3 2 3

So the index_id of the PRIMARY index of the sakila.actor table is 15 (the fifth column in the dump).
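As a side note, on a server that can still start, the same mapping is visible through information_schema (MySQL 5.6 and later); here the server is down, which is why we parse ibdata1 instead. A sketch:
mysql> SELECT i.INDEX_ID, i.NAME
    -> FROM information_schema.INNODB_SYS_INDEXES i
    -> JOIN information_schema.INNODB_SYS_TABLES t ON i.TABLE_ID = t.TABLE_ID
    -> WHERE t.NAME = 'sakila/actor';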
Recover records from PRIMARY index of the table
c_parser reads InnoDB pages, matches them against a given table structure, and dumps the records in tab-separated-values format.
Unlike InnoDB, when c_parser hits a corrupted area it skips it and continues reading the rest of the page.
Let’s read the records from index_id 15, which is the PRIMARY index according to the dictionary.
root@test:~/recovery/undrop-for-innodb# ./c_parser -6f http://ift.tt/ULBt37 -t sakila/actor.sql > dumps/default/actor 2> dumps/default/actor_load.sql
root@test:~/recovery/undrop-for-innodb# cat dumps/default/actor
-- Page id: 3, Format: COMPACT, Records list: Invalid, Expected records: (0 200)
72656D756D07 08000010002900 actor 30064 "\0\0\0\0" "" "1972-09-20 23:07:44"
1050454E454C 4F50454755494E actor 19713 "ESSC▒" "" "2100-08-09 07:52:36"
00000000051E 9F0000014D011A actor 2 "NICK" "WAHLBERG" "2006-02-15 04:34:33"
00000000051E 9F0000014D0124 actor 3 "ED" "CHASE" "2006-02-15 04:34:33"
00000000051E 9F0000014D012E actor 4 "JENNIFER" "DAVIS" "2006-02-15 04:34:33"
00000000051E 9F0000014D0138 actor 5 "JOHNNY" "LOLLOBRIGIDA" "2006-02-15 04:34:33"
00000000051E 9F000001414141 actor 6 "AAAAA" "AAAAAAAAA" "2004-09-10 01:53:05"
00000000051E 9F0000014D016A actor 10 "CHRISTIAN" "GABLE" "2006-02-15 04:34:33"

We have identified some valid records, but there is certainly some “garbage” as well. Pay attention to the recovered records before Nick Wahlberg: there should definitely be a record for Penelope Guiness, since we did not overwrite that data in the actor.ibd file.
Filters
We can improve the recovery quality by applying filters on the possible values of certain fields. There are 200 records in the original table, but the first two “garbage” records have some weird identifiers (30064 and 19713). We know that the actor identifier should be in the range [1..300], so we tell the parser to match that condition. For this purpose we add a hint in the comments of the actor.sql file that defines the actor table. The comment has to be in a special format for the parser to recognize it. Here is the relevant part of the actor.sql file (note the comma after the comment!):
CREATE TABLE `actor` (
`actor_id` smallint(5) unsigned NOT NULL AUTO_INCREMENT
/*!FILTER
int_min_val: 1
int_max_val: 300 */,
`first_name` varchar(45) NOT NULL,
`last_name` varchar(45) NOT NULL,
`last_update` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
PRIMARY KEY (`actor_id`),
KEY `idx_actor_last_name` (`last_name`)
) ENGINE=InnoDB AUTO_INCREMENT=201 DEFAULT CHARSET=utf8;
After applying the filter, the recovered records look much better:
root@test:~/recovery/undrop-for-innodb# ./c_parser -6f http://ift.tt/ULBt37 -t sakila/actor.sql > dumps/default/actor 2> dumps/default/actor_load.sql
root@test:~/recovery/undrop-for-innodb# head -10 dumps/default/actor
-- Page id: 3, Format: COMPACT, Records list: Invalid, Expected records: (0 200)
00000000051E 9F0000014D0110 actor 1 "PENELOPE" "GUINESS" "2006-02-15 04:34:33"
00000000051E 9F0000014D011A actor 2 "NICK" "WAHLBERG" "2006-02-15 04:34:33"
00000000051E 9F0000014D0124 actor 3 "ED" "CHASE" "2006-02-15 04:34:33"
00000000051E 9F0000014D012E actor 4 "JENNIFER" "DAVIS" "2006-02-15 04:34:33"
00000000051E 9F0000014D0138 actor 5 "JOHNNY" "LOLLOBRIGIDA" "2006-02-15 04:34:33"
00000000051E 9F000001414141 actor 6 "AAAAA" "AAAAAAAAA" "2004-09-10 01:53:05"
00000000051E 9F0000014D016A actor 10 "CHRISTIAN" "GABLE" "2006-02-15 04:34:33"
00000000051E 9F0000014D0174 actor 11 "ZERO" "CAGE" "2006-02-15 04:34:33"
00000000051E 9F0000014D017E actor 12 "KARL" "BERRY" "2006-02-15 04:34:33"
You see, the record for Penelope Guiness is back. The only remaining issue is the invalid record 6-“AAAAA”-“AAAAAAAAA”. It survives because that record happens to carry an actor_id of 6, which satisfies our filter. Ideally the dump should contain no junk records, so you may try adding more filters on other fields.
Or we can simply delete these records in the database manually later.
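For example, once the data has been loaded back, removing the junk row comes down to a single statement (a sketch, using the actor_id seen in the dump above):
mysql> DELETE FROM sakila.actor WHERE actor_id = 6;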
DROP corrupted table and create new one
As soon as we have dumps of all the tables, we need to create a new instance of MySQL.
If it’s a single-table corruption, it makes sense to try innodb_force_recovery=6 in order to DROP the table.
If MySQL can’t start even then, try moving the corrupt actor.ibd elsewhere. In recovery mode, after DROP TABLE actor, MySQL will remove the table’s record from the dictionary. Remove actor.frm if it still remains.
The point is to end up with a clean, up-and-running MySQL, ready to import the table dump.
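Put together, the cleanup might look like the sketch below; the paths assume a standard datadir and the sakila layout used above.
# move the corrupt tablespace out of the way
mv /var/lib/mysql/sakila/actor.ibd /tmp/
# with MySQL started in recovery mode, drop the table
mysql -uroot -p -e "DROP TABLE sakila.actor"
# remove the orphaned table definition if it is still there
rm -f /var/lib/mysql/sakila/actor.frm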
Once MySQL is ready create an empty table actor:
mysql> CREATE TABLE `actor` (
-> `actor_id` smallint(5) unsigned NOT NULL AUTO_INCREMENT,
-> `first_name` varchar(45) NOT NULL,
-> `last_name` varchar(45) NOT NULL,
-> `last_update` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
-> PRIMARY KEY (`actor_id`),
-> KEY `idx_actor_last_name` (`last_name`)
-> ) ENGINE=InnoDB DEFAULT CHARSET=utf8 ;
Query OK, 0 rows affected (0.01 sec)
mysql>
Load records back into MySQL
Then we will load information from recovered dump:
root@test:~/recovery/undrop-for-innodb# mysql --local-infile -uroot -p$mypass
Welcome to the MySQL monitor. Commands end with ; or \g.

mysql> use sakila;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A
Database changed
mysql> source dumps/default/actor_load.sql
Query OK, 0 rows affected (0.00 sec)
Query OK, 199 rows affected, 1 warning (0.00 sec)
Records: 198 Deleted: 1 Skipped: 0 Warnings: 1
The final step is to find out how much data we have lost to the corruption.
c_parser reports the counts of expected and actually found records.
At the beginning of each page it gives the number of expected records:
-- Page id: 3, Format: COMPACT, Records list: Invalid, Expected records: (0 200)
This means 200 records are expected, but the list of records is broken (hence Records list: Invalid).
At the end of each page it gives a summary of what was actually found:
-- Page id: 3, Found records: 197, Lost records: YES, Leaf page: YES
via Planet MySQL

July 2014: 20 Fresh and Free WordPress Themes of the Month

We are back for the third month in a row in which every new entry on the WordPress theme stage is designed responsively. New ranking factors at Google take mobile usability into account, so building websites that are not mobile-ready today is shortsighted, to say the least. Don’t get caught out that easily. Use one of our new entries instead. For another month we went out into the wild to search for the newest and coolest WordPress themes available. In the following article, we introduce you to our findings.
via noupe

New Movies & Weapons Website

Many people are familiar with the Internet Movie Firearms Database (IMFDB), which catalogs the kinds of firearms used in motion pictures. A new site, The Real Movie Stars, has popped up and may prove to be an interesting alternative – or perhaps complement – to the IMFDB. The Real Movie Stars (TRMS) highlights the weapons used […]

via The Firearm Blog

Video of All 135 Space Shuttle Launches Is a Rocket Tribute to Space

YouTuber lunarmodule5 is back with another NASA compilation video. This time, it’s a four-screen tribute to the Space Shuttle, showing every launch of the Shuttle’s 135 missions. It’ll make your spine tingle.

The Shuttle program lasted 30 years, first launching in 1981 and being retired in 2011. Along the way, there were two disasters: Challenger, which blew up on launch in 1986, and Columbia, which disintegrated on re-entry into earth’s atmosphere in 2003. Those accidents were tragic, but the Shuttle program still stands as a proud accomplishment, a testament to mankind’s scientific accomplishments and zeal for exploration into the unknown.

If you’re a space buff, a science nerd, or just someone awed by the sight of a 2,000-ton space ship launching vertically on a column of flame and escaping the tyranny of earth’s gravity, you’ll love this video—all one hour and forty-four minutes of it. [lunarmodule5]

via Gizmodo

Upgrade MySQL to a new version with a fresh installation & use shell scripts and mysqldump to reload your data

There are several ways to upgrade MySQL. In this post, we will use a combination of shell scripts and the mysqldump application to export our MySQL data, and then re-import it back into the upgraded version of MySQL.
In this example, we will be doing a minor version upgrade. We will be going from 5.6.17 to 5.6.19. This method may not work if you are upgrading from one major release to another – from 5.1 to 5.5, or 5.5 to 5.6. You will want to check each version and review the new features/functions and also what features/functions have been deprecated. We are also assuming that no one will be using the database during the time it takes for us to do the upgrade.
If you want to upgrade from a version that is more than one major release apart from your current version, then you will want to upgrade to each successive version. For example, if you want to upgrade from 5.0 to 5.6, you will want to upgrade from 5.0 to 5.1, then 5.1 to 5.5, and then 5.5 to 5.6.
You don’t have to export all of your data when you upgrade MySQL. There are ways of upgrading without doing anything to your data. But in this post, I will be exporting the data and re-importing it, for a fresh installation. I don’t have that much data, so I don’t mind doing the export and import. If you have a lot of data, you might want to consider other options. To get an idea of the size of your database(s), here is a quick script that you can use:
SELECT table_schema "Data Base Name",
       sum( data_length + index_length ) / 1024 / 1024 "Data Base Size in MB"
FROM information_schema.TABLES
GROUP BY table_schema;
When I perform an export/import, I like to export each database as a separate mysqldump file, and then also export all of the databases together in one large file. By exporting/importing the individual databases, if you have an error importing one of the database dump files, you can isolate the error to a single database. It is much easier to fix the error in one smaller data dump file than with a larger all-inclusive dump file.
I am also going to create some simple shell scripts to help me create the commands that I need to make this task much easier. First, you will want to create a directory to store all of the scripts and dump files. Do all of your work inside that directory.
Next, I want to get a list of all of my databases. I will log into mysql, and then issue the show databases; command: (which is the same command as: select schema_name from information_schema.schemata;)
mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| 12thmedia          |
| cbgc               |
| comicbookdb        |
| coupons            |
| healthcheck        |
| innodb_memcache    |
| landwatch          |
| laurelsprings      |
| ls_directory       |
| mem                |
| mysql              |
| performance_schema |
| protech            |
| scripts            |
| stacy              |
| storelist          |
| test               |
| testcert           |
| tony               |
| twtr               |
| watchdb            |
+--------------------+
22 rows in set (1.08 sec)
I can then just highlight and copy the list of databases, and put that list into a text file named “list.txt“. I do not want to include these databases in my export:
information_schema
mysql
performance_schema
test
However, I will export the mysql.user table later. I will need to manually remove those databases from my list.txt file. I then want to remove all of the spaces and pipe symbols from the text file – assuming that you do not have any spaces in your database names. Instead of using spaces in a database name, I prefer to use an underscore character “_“. These scripts assume that you don’t have any spaces in your database names.
If you know how to use the vi editor, you can do a substitution for the pipes and spaces with these commands: :%s/ //g
:%s/|//g
Otherwise, you will want to use another text editor and manually edit the list to remove the spaces and pipe symbols. Your finished list.txt file should look like this:
12thmedia
cbgc
comicbookdb
coupons
healthcheck
innodb_memcache
landwatch
laurelsprings
ls_directory
mem
protech
scripts
stacy
storelist
testcert
tony
twtr
watchdb
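As an aside, if your old server is still reachable you can skip the copy-and-edit step entirely: the mysql client’s -N (skip column names) and -e (execute statement) options produce a clean list directly. A sketch:
# build list.txt without the system schemas
mysql -uroot -p -N -e "SELECT schema_name FROM information_schema.schemata WHERE schema_name NOT IN ('information_schema','mysql','performance_schema','test')" > list.txt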
You can then create a simple shell script to help create your mysqldump commands – one command for each database. You will want to create this script and the other scripts in the directory you created earlier. Name the script export.sh. You can also change the mysqldump options to meet your needs. I am using GTID’s for replication, so I want to use this option --set-gtid-purged=OFF. You will also want to change the value of my password my_pass to your mysql password. You can also skip including the password by using the -p option, and just enter the password each time you run the mysqldump command.
# export.sh
# script to create the database export commands
k=""
for i in `cat list.txt`
do
echo "mysqldump -uroot --password=my_pass --set-gtid-purged=OFF --triggers --quick --skip-opt --add-drop-database --create-options --databases $i > "$i"_backup.sql"
k="$k $i"
done
# Optional - export the entire database
# use the file extension of .txt so that your script won't import it later
echo "mysqldump -uroot --password=my_pass --set-gtid-purged=OFF --triggers --quick --skip-opt --add-drop-database --create-options --databases $k > all_db_backup.txt"
For the individual databases, I am using the suffix .sql. For the dump file that contains all of the databases, I am using the suffix .txt, as I use a wildcard search later to get a list of the dump files, and I don’t want to import the one dump file that contains all of the databases.
Now you can run the export.sh script to create the list of your mysqldump commands, directing the output into another shell script named export_list.sh:
# sh export.sh > export_list.sh
We can now take a look at what is in the export_list.sh file:
# cat export_list.sh
mysqldump -uroot --set-gtid-purged=OFF --password=my_pass --triggers --quick --skip-opt --add-drop-database --create-options --databases 12thmedia > 12thmedia_backup.sql
mysqldump -uroot --set-gtid-purged=OFF --password=my_pass --triggers --quick --skip-opt --add-drop-database --create-options --databases cbgc > cbgc_backup.sql
mysqldump -uroot --set-gtid-purged=OFF --password=my_pass --triggers --quick --skip-opt --add-drop-database --create-options --databases comicbookdb > comicbookdb_backup.sql
mysqldump -uroot --set-gtid-purged=OFF --password=my_pass --triggers --quick --skip-opt --add-drop-database --create-options --databases coupons > coupons_backup.sql
mysqldump -uroot --set-gtid-purged=OFF --password=my_pass --triggers --quick --skip-opt --add-drop-database --create-options --databases healthcheck > healthcheck_backup.sql
mysqldump -uroot --set-gtid-purged=OFF --password=my_pass --triggers --quick --skip-opt --add-drop-database --create-options --databases innodb_memcache > innodb_memcache_backup.sql
mysqldump -uroot --set-gtid-purged=OFF --password=my_pass --triggers --quick --skip-opt --add-drop-database --create-options --databases landwatch > landwatch_backup.sql
mysqldump -uroot --set-gtid-purged=OFF --password=my_pass --triggers --quick --skip-opt --add-drop-database --create-options --databases laurelsprings > laurelsprings_backup.sql
mysqldump -uroot --set-gtid-purged=OFF --password=my_pass --triggers --quick --skip-opt --add-drop-database --create-options --databases ls_directory > ls_directory_backup.sql
mysqldump -uroot --set-gtid-purged=OFF --password=my_pass --triggers --quick --skip-opt --add-drop-database --create-options --databases mem > mem_backup.sql
mysqldump -uroot --set-gtid-purged=OFF --password=my_pass --triggers --quick --skip-opt --add-drop-database --create-options --databases protech > protech_backup.sql
mysqldump -uroot --set-gtid-purged=OFF --password=my_pass --triggers --quick --skip-opt --add-drop-database --create-options --databases scripts > scripts_backup.sql
mysqldump -uroot --set-gtid-purged=OFF --password=my_pass --triggers --quick --skip-opt --add-drop-database --create-options --databases stacy > stacy_backup.sql
mysqldump -uroot --set-gtid-purged=OFF --password=my_pass --triggers --quick --skip-opt --add-drop-database --create-options --databases storelist > storelist_backup.sql
mysqldump -uroot --set-gtid-purged=OFF --password=my_pass --triggers --quick --skip-opt --add-drop-database --create-options --databases testcert > testcert_backup.sql
mysqldump -uroot --set-gtid-purged=OFF --password=my_pass --triggers --quick --skip-opt --add-drop-database --create-options --databases tony > tony_backup.sql
mysqldump -uroot --set-gtid-purged=OFF --password=my_pass --triggers --quick --skip-opt --add-drop-database --create-options --databases twtr > twtr_backup.sql
mysqldump -uroot --set-gtid-purged=OFF --password=my_pass --triggers --quick --skip-opt --add-drop-database --create-options --databases watchdb > watchdb_backup.sql
mysqldump -uroot --set-gtid-purged=OFF --password=my_pass --triggers --quick --skip-opt --add-drop-database --create-options --databases 12thmedia cbgc comicbookdb coupons healthcheck innodb_memcache landwatch laurelsprings ls_directory mem protech scripts stacy storelist testcert tony twtr watchdb > all_db_backup.txt
Now you have created a list of mysqldump commands that you can execute to dump all of your databases. You can now go ahead and execute your mysqldump commands by running the export_list.sh script:
# sh export_list.sh
Warning: Using a password on the command line interface can be insecure.
Warning: Using a password on the command line interface can be insecure.
Warning: Using a password on the command line interface can be insecure.
….
The message “Warning: Using a password on the command line interface can be insecure.” is shown because you included the value for “--password“. If you don’t want to put your password on the command line, just change that option to “-p“, and you will have to manually enter your MySQL root user’s password after each mysqldump command.
Here is a list of the dump files that was produced:
# ls -l
total 21424
-rw-r--r-- 1 root staff 26690 Aug 1 16:25 12thmedia_backup.sql
-rw-r--r-- 1 root staff 5455275 Aug 1 16:26 all_db_backup.txt
-rw-r--r-- 1 root staff 1746820 Aug 1 16:25 cbgc_backup.sql
-rw-r--r-- 1 root staff 492943 Aug 1 16:25 comicbookdb_backup.sql
-rw-r--r-- 1 root staff 1057 Aug 1 16:25 coupons_backup.sql
-rw-r--r-- 1 root staff 3366 Aug 1 16:25 export_list.sh
-rw-r--r-- 1 root staff 1077 Aug 1 16:25 healthcheck_backup.sql
-rw-r--r-- 1 root staff 3429 Aug 1 16:25 innodb_memcache_backup.sql
-rw-r--r-- 1 root staff 1815839 Aug 1 16:25 landwatch_backup.sql
-rw-r--r-- 1 root staff 642965 Aug 1 16:25 laurelsprings_backup.sql
-rw-r--r-- 1 root staff 660254 Aug 1 16:25 ls_directory_backup.sql
-rw-r--r-- 1 root staff 1037 Aug 1 16:25 mem_backup.sql
-rw-r--r-- 1 root staff 1057 Aug 1 16:25 protech_backup.sql
-rw-r--r-- 1 root staff 2889 Aug 1 16:25 scripts_backup.sql
-rw-r--r-- 1 root staff 11107 Aug 1 16:25 stacy_backup.sql
-rw-r--r-- 1 root staff 4002 Aug 1 16:25 storelist_backup.sql
-rw-r--r-- 1 root staff 1062 Aug 1 16:25 testcert_backup.sql
-rw-r--r-- 1 root staff 4467 Aug 1 16:25 tony_backup.sql
-rw-r--r-- 1 root staff 1042 Aug 1 16:25 twtr_backup.sql
-rw-r--r-- 1 root staff 52209 Aug 1 16:25 watchdb_backup.sql
You will now want to dump your MySQL users, so you don’t have to recreate the users, passwords and privileges after the new install.
mysqldump -uroot --password=my_pass --set-gtid-purged=OFF mysql user > mysql_user_backup.txt
I am once again using the .txt suffix for this file.
After you execute the above command, make sure that the dump file was created:
# ls -l mysql_user_backup.txt
-rw-r--r-- 1 root staff 9672 Aug 1 16:32 mysql_user_backup.txt
We have now finished exporting all of our data, including our user data. You will need to shutdown MySQL. You may use mysqladmin to shutdown your database, or here is a link on ways to shutdown MySQL.
# mysqladmin -uroot --password=my_pass shutdown
Warning: Using a password on the command line interface can be insecure.
Before continuing, you might want to check to make sure that the mysqld process isn’t still active.
# ps -ef|grep mysqld
0 18380 17762 0 0:00.00 ttys002 0:00.00 grep mysqld
You are now going to want to change the name of your mysql directory. This will give you access to the old directory in case the upgrade fails. For my OS (Mac OS 10.9), my MySQL home directory is a symbolic link to another directory that contains the actual MySQL data. All I have to do is to remove the symbolic link. A new symbolic link will be created with the new install. Otherwise, just use the mv command to rename your old MySQL directory.
# cd /usr/local/
# ls -ld mysql*
lrwxr-xr-x 1 root wheel 36 Aug 9 2013 mysql -> mysql-advanced-5.6.17-osx10.6-x86_64
drwxr-xr-x 18 root wheel 612 Jan 16 2014 mysql-advanced-5.6.17-osx10.6-x86_64
All I have to do is to remove the link, and the MySQL directory will still be there:
# rm mysql
# ls -ld mysql*
drwxr-xr-x 18 root wheel 612 Jan 16 2014 mysql-advanced-5.6.17-osx10.6-x86_64
Now I am ready to install the new version of MySQL. I won’t cover the installation process, but here is the link to the installation page.
Tip: After you have installed MySQL, don’t forget to run this script from your MySQL home directory. This will install your mysql database tables. Otherwise, you will get an error when you try to start the mysqld process.
# ./scripts/mysql_install_db
Now you can start the mysqld process. See this page if you don’t know how to start MySQL.
You can test to see if the new installation of MySQL is running by either checking the process table, or logging into mysql. With a fresh install of 5.6, you should not have to include a user name or password.
Note: (Future versions of MySQL may automatically create a random root password and put it in your data directory. You will then need to use that password to login to MySQL for the first time. Check the user’s manual for any MySQL versions beyond 5.6.)
# mysql
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 3
….
mysql>
Now that MySQL is up and running, leave the mysql terminal window open, and open another terminal window so you can import your mysql user information from your dump file:
# mysql < /users/tonydarnell/mysql_2014_0731/2014_0731_mysql_backup.sql
You won’t be able to login with your old user names and passwords until you execute the flush privileges command. So, in your other terminal window with the mysql prompt:
mysql> flush privileges;
Query OK, 0 rows affected (0.00 sec)
Open another terminal window and see if you can login with your old mysql user name and password:
# mysql -uroot -p
Enter password: Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 3
….
mysql>
You can then look at the user names and passwords in the mysql.user table:
mysql> select user, host, password from mysql.user order by user, host;
+------+--------------+-------------------------------------------+
| user | host         | password                                  |
+------+--------------+-------------------------------------------+
| root | 127.0.0.1    | *BF6F71512345332CAB67E7608EBE63005BEB705C |
| root | 192.168.1.2  | *BF6F71512345332CAB67E7608EBE63005BEB705C |
| root | 192.168.1.5  | *BF6F71512345332CAB67E7608EBE63005BEB705C |
| root | 192.168.1.50 | *BF6F71512345332CAB67E7608EBE63005BEB705C |
| root | localhost    | *BF6F71512345332CAB67E7608EBE63005BEB705C |
+------+--------------+-------------------------------------------+
5 rows in set (0.00 sec)
OPTIONAL:
Since I am using GTID’s for replication, I can check to see how many transactions have been completed, by issuing the show master status command:
mysql> show master status\G
*************************** 1. row ***************************
File: mysql-bin.000005
Position: 644455
Binlog_Do_DB:
Binlog_Ignore_DB: coupons,usta,ls_directory,landwatch
Executed_Gtid_Set: e1eb3f38-18da-11e4-aa44-0a1a64a61679:1-124
1 row in set (0.00 sec)
We are now ready to import the database dump files. We can use this script to create the import commands. Copy this into a text file named import.sh:
# import.sh
# script to import all of the export files
# run this script in the same directory as the exported dump files
#
> import_files.sh
directory=`pwd`
for file in `ls *sql`
do
if [[ $(grep -c '.txt' $file) != 0 ]];then
echo "# found mysql - do nothing"
else
# print the import command, and append the same command to import_files.sh
echo "mysql -uroot -p"my_pass" < $directory/$file"
echo "mysql -uroot -p"my_pass" < $directory/$file" >> import_files.sh
fi
done
Then run the import.sh script. The script will print the output to the terminal window as well as into a new script file named import_files.sh.
# sh import.sh
mysql -uroot -pmy_pass < 12thmedia_backup.sql
mysql -uroot -pmy_pass < cbgc_backup.sql
mysql -uroot -pmy_pass < comicbookdb_backup.sql
mysql -uroot -pmy_pass < coupons_backup.sql
mysql -uroot -pmy_pass < healthcheck_backup.sql
mysql -uroot -pmy_pass < innodb_memcache_backup.sql
mysql -uroot -pmy_pass < landwatch_backup.sql
mysql -uroot -pmy_pass < laurelsprings_backup.sql
mysql -uroot -pmy_pass < ls_directory_backup.sql
mysql -uroot -pmy_pass < mem_backup.sql
mysql -uroot -pmy_pass < protech_backup.sql
mysql -uroot -pmy_pass < scripts_backup.sql
mysql -uroot -pmy_pass < stacy_backup.sql
mysql -uroot -pmy_pass < storelist_backup.sql
mysql -uroot -pmy_pass < testcert_backup.sql
mysql -uroot -pmy_pass < tony_backup.sql
mysql -uroot -pmy_pass < twtr_backup.sql
mysql -uroot -pmy_pass < watchdb_backup.sql
Look at the contents of the new script file – import_files.sh – to make sure that it contains all of the database files. You will use this file to help you import your dump files.
# cat import_files.sh
mysql -uroot -pmy_pass < 12thmedia_backup.sql
mysql -uroot -pmy_pass < cbgc_backup.sql
mysql -uroot -pmy_pass < comicbookdb_backup.sql
mysql -uroot -pmy_pass < coupons_backup.sql
mysql -uroot -pmy_pass < healthcheck_backup.sql
mysql -uroot -pmy_pass < innodb_memcache_backup.sql
mysql -uroot -pmy_pass < landwatch_backup.sql
mysql -uroot -pmy_pass < laurelsprings_backup.sql
mysql -uroot -pmy_pass < ls_directory_backup.sql
mysql -uroot -pmy_pass < mem_backup.sql
mysql -uroot -pmy_pass < protech_backup.sql
mysql -uroot -pmy_pass < scripts_backup.sql
mysql -uroot -pmy_pass < stacy_backup.sql
mysql -uroot -pmy_pass < storelist_backup.sql
mysql -uroot -pmy_pass < testcert_backup.sql
mysql -uroot -pmy_pass < tony_backup.sql
mysql -uroot -pmy_pass < twtr_backup.sql
mysql -uroot -pmy_pass < watchdb_backup.sql
WARNING: Be sure that this script file does not contain the main dump file or the mysql user’s file that we created.
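A quick way to check, as a sketch (both counts should come back as 0):
# the .txt dumps must not appear among the import commands
grep -c "all_db_backup" import_files.sh
grep -c "mysql_user_backup" import_files.sh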
I was exporting and importing eighteen (18) database files, so I can also check the line count of the import_files.sh script to make sure it matches:
# wc -l import_files.sh
18 import_files.sh
I am now ready to import my files.
Optional: add the -v option for verbose mode: sh -v import_files.sh
# sh import_files.sh
Warning: Using a password on the command line interface can be insecure.
Warning: Using a password on the command line interface can be insecure.
….
Your databases should now be imported into your new instance of MySQL. You can always re-run the database-size script from earlier to make sure the databases are the same size as before.
OPTIONAL:
Since I am using GTID’s for replication, I can check to see how many transactions have been completed after importing the dump files, by issuing the show master status command:
mysql> show master status\G
*************************** 1. row ***************************
File: mysql-bin.000003
Position: 16884001
Binlog_Do_DB:
Binlog_Ignore_DB: coupons,usta,ls_directory,landwatch
Executed_Gtid_Set: cc68d008-18f3-11e4-aae6-470d6cf89709:1-43160
1 row in set (0.00 sec)
Your new and fresh installation of MySQL should be ready to use.
Tony Darnell is a Principal Sales Consultant for MySQL, a division of Oracle, Inc. MySQL is the world’s most popular open-source database program. Tony may be reached at info [at] ScriptingMySQL.com and on LinkedIn.
via Planet MySQL