The packages that are available in the yum repos contain a number of enhancements over the RPM packages that are available from dev.mysql.com.
Norvald blogged about some of these enhancements earlier. Today I wanted to walk through a safe upgrade path, as the two package formats are not quite compatible with each other.
My Existing Installation
To start with, the packages I have installed came from “Red Hat Enterprise Linux 6 / Oracle Linux 6 (x86, 64-bit), RPM Bundle” on dev.mysql.com. You can check which packages you have installed with:
[root@localhost ~]# rpm -qa | grep -i mysql
MySQL-client-5.6.14-1.el6.x86_64
MySQL-embedded-5.6.14-1.el6.x86_64
MySQL-server-5.6.14-1.el6.x86_64
MySQL-shared-5.6.14-1.el6.x86_64
MySQL-devel-5.6.14-1.el6.x86_64
MySQL-test-5.6.14-1.el6.x86_64
MySQL-shared-compat-5.6.14-1.el6.x86_64
Uninstalling the Old Packages and Installing from the Yum Repo
I recommend first running yum update before installing the new repo:
yum update
yum localinstall http://repo.mysql.com/mysql-community-release-el6-3.noarch.rpm
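If you want to confirm the new repo registered correctly before going any further, a quick sanity check (my addition, not part of the original steps; output will vary by system):
yum repolist enabled | grep -i mysql    # should list the mysql-community repo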
Once the repo is installed, stop MySQL (note the missing d in the service name used by the dev.mysql.com packages):
service mysql stop
Now with yum shell it’s possible to uninstall the existing packages (listed in ‘my existing installation’ above) and install the replacement packages from the yum repo in one step:
yum shell
> remove MySQL-client-5.6.14-1.el6 MySQL-embedded-5.6.14-1.el6 MySQL-server-5.6.14-1.el6 MySQL-shared-5.6.14-1.el6 MySQL-devel-5.6.14-1.el6 MySQL-test-5.6.14-1.el6 MySQL-shared-compat-5.6.14-1.el6
> install mysql-community-server
> run
Here was the summary output from my yum session:
============================================================================================================================
Package Arch Version Repository Size
============================================================================================================================
Installing:
mysql-community-server x86_64 5.6.14-3.el6 mysql-community 51 M
Removing:
MySQL-client x86_64 5.6.14-1.el6 @/MySQL-client-5.6.14-1.el6.x86_64 81 M
MySQL-devel x86_64 5.6.14-1.el6 @/MySQL-devel-5.6.14-1.el6.x86_64 19 M
MySQL-embedded x86_64 5.6.14-1.el6 @/MySQL-embedded-5.6.14-1.el6.x86_64 432 M
MySQL-server x86_64 5.6.14-1.el6 @/MySQL-server-5.6.14-1.el6.x86_64 235 M
MySQL-shared x86_64 5.6.14-1.el6 @/MySQL-shared-5.6.14-1.el6.x86_64 8.4 M
MySQL-shared-compat x86_64 5.6.14-1.el6 @/MySQL-shared-compat-5.6.14-1.el6.x86_64 11 M
MySQL-test x86_64 5.6.14-1.el6 @/MySQL-test-5.6.14-1.el6.x86_64 318 M
Installing for dependencies:
mysql-community-client x86_64 5.6.14-3.el6 mysql-community 18 M
mysql-community-common x86_64 5.6.14-3.el6 mysql-community 296 k
mysql-community-libs x86_64 5.6.14-3.el6 mysql-community 1.8 M
Removing for dependencies:
cronie x86_64 1.4.4-7.el6 @anaconda-CentOS-201303020151.x86_64/6.4 166 k
cronie-anacron x86_64 1.4.4-7.el6 @anaconda-CentOS-201303020151.x86_64/6.4 43 k
crontabs noarch 1.10-33.el6 @anaconda-CentOS-201303020151.x86_64/6.4 2.4 k
postfix x86_64 2:2.6.6-2.2.el6_1 @anaconda-CentOS-201303020151.x86_64/6.4 9.7 M
Transaction Summary
============================================================================================================================
Install 4 Package(s)
Remove 11 Package(s)
MySQL should now be installed from the yum packages. Just two more steps remain: start it, and configure it to start on boot:
service mysqld start # note the added 'd'
chkconfig mysqld on
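As a final sanity check (my own addition, assuming you can log in as root), confirm the community packages are in place and the server answers:
rpm -qa | grep -i mysql-community       # the yum repo packages use the lowercase, dashed names
mysql -u root -p -e "SELECT VERSION();" # should report a 5.6.x server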
Still having problems? I recommend heading to the MySQL Forums. There is a section dedicated to Install & Repo help.
via Planet MySQL
Upgrading from the earlier MySQL RPM Format to Yum Repos
Override Your Hotel Room Thermostat and Set It as Hot or Cold as You Like
Hotel room thermostats normally don’t let you adjust the temperature above or below a certain point, which can lead to some pretty warm rooms in the summertime or chilly ones in the winter. If you want more control, here’s how to override your hotel thermostat, put it in "VIP" mode, and tweak it to where you like it.
Gary Leff, writing for View from the Wing, shared the video above, which shows you how it’s done. Most hotel wall units (Gary noted that Hilton and Hyatt specifically tend to use this type of thermostat) that you’ll have access to will work this way. The window units on the air conditioner/heaters themselves may be a bit more flexible, but give this a try on your next wall thermostat:
- Hold down the “display” button
- While holding that button, press “off”
- Release “off,” continue to hold down “display,” and press the “up” arrow button
- Release all buttons
This trick also disables the motion sensors that many hotels use to keep the heating and cooling system active only when a guest is in the room. That means you won’t have to wait for a sweltering room to gradually cool off when you get back from a long day, or for an ice-cold room to warm up in the winter.
Gary explains that you don’t have to be picky about the temperature to use this trick. Sometimes hotels try to save money by keeping the room thermostats in a certain range, leading to uncomfortable guests; in his case, he had a room that got a ton of sunlight in the daytime, making it really hot even with the thermostat turned down as far as it would go. Either way, the power is yours to be more comfortable when you travel. Hit the link below to read more; his commenters, both there and on his much older post, have some similar tricks for other hotel chains that may not use these units.
How to Override Your Hotel’s Thermostat Controls and Make it as Cool or Hot As You’d Like | View from the Wing
via Lifehacker
Override Your Hotel Room Thermostat and Set It as Hot or Cold as You Like
MySQL Performance and Tuning Best Practices
Users are complaining about slowness in your system, and MySQL load is always high… The busier your database gets, the slower it may become; or worse, it may be slow even when running under low load. You are starting to get desperate! The consequences of slowness and high load are disastrous: if your site is slow, […]
via Planet MySQL
MySQL Performance and Tuning Best Practices
What SQL is running MySQL
Using the MySQL 5.6 Performance Schema, it is very easy to see what is actually running on your MySQL instance. No more sampling, installing extra software, or worrying about disk I/O overhead from techniques like SHOW PROCESSLIST, enabling the general query log, or sniffing the TCP/IP stack.
The following SQL gives me a quick 60-second view of ALL statements executed on a running MySQL system.
use performance_schema;
update setup_consumers set enabled='YES' where name IN ('events_statements_history','events_statements_current','statements_digest');
truncate table events_statements_current;
truncate table events_statements_history;
truncate table events_statements_summary_by_digest;
do sleep(60);
select now(),(count_star/(select sum(count_star) FROM events_statements_summary_by_digest) * 100) as pct, count_star, left(digest_text,150) as stmt, digest from events_statements_summary_by_digest order by 2 desc;
update setup_consumers set enabled='NO' where name IN ('events_statements_history','events_statements_current','statements_digest');
NOTE: These statements are for simple debugging and demonstration purposes. If you want to monitor SQL statements on an ongoing basis, you should not simply truncate tables and globally enable/disable options.
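Before running the script, it can also be worth confirming that the Performance Schema itself is on (it is by default in MySQL 5.6). A minimal check, assuming you are already in the performance_schema database as above:
SHOW GLOBAL VARIABLES LIKE 'performance_schema'; -- should report ON
SELECT name, enabled FROM setup_consumers WHERE name IN ('events_statements_history','events_statements_current','statements_digest');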
There are four Performance Schema tables that are applicable for an initial SQL analysis.
The events_statements_summary_by_digest table, shown below, gives (as the name suggests) a way to summarize all queries into common query patterns, or digests. This is great for getting a picture of the volume and frequency of SQL statements.
The events_statements_current table shows the currently running SQL statements.
The events_statements_history table is where the fun is, because it provides a short history (by default, the last 10 statements per thread) of the SQL statements that have run in any given thread.
The events_statements_history_long table (when enabled) gives you a history of the most recent 10,000 events.
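As a quick sketch of how you might peek at that per-thread history directly (thread_id values are specific to your instance):
SELECT thread_id, event_id, LEFT(sql_text, 80) AS stmt
FROM performance_schema.events_statements_history
ORDER BY thread_id, event_id;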
One query can give me a detailed review of the type and frequency of ALL SQL statements run. The ALL is important, because on a slave you also get ALL replication applied events.
mysql> select now(),(count_star/(select sum(count_star) FROM events_statements_summary_by_digest) * 100) as pct, count_star, left(digest_text,150) as stmt, digest from events_statements_summary_by_digest order by 2 desc;
select * from events_statements_current where digest='ffb6231b78efc022175650d37a837b99'\G
+---------------------+---------+------------+------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------+
| now() | pct | count_star | stmt | digest |
+---------------------+---------+------------+------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------+
| 2013-11-07 18:24:46 | 60.6585 | 7185 | SELECT * FROM `D…..` WHERE `name` = ? | d6399273d75e2348d6d7ea872489a30c |
| 2013-11-07 18:24:46 | 23.4192 | 2774 | SELECT nc . id , nc . name FROM A……………… anc JOIN N……….. nc ON anc . …………_id = nc . id WHERE ……._id = ? | c6e2249eb91767aa09945cbb118adbb3 |
| 2013-11-07 18:24:46 | 5.5298 | 655 | BEGIN | 7519b14a899fd514365211a895f5e833 |
| 2013-11-07 18:24:46 | 4.6180 | 547 | INSERT INTO V…….. VALUES (…) ON DUPLICATE KEY UPDATE v…. = v…. + ? | ffb6231b78efc022175650d37a837b99 |
| 2013-11-07 18:24:46 | 1.0891 | 129 | SELECT COUNT ( * ) FROM T…………… WHERE rule = ? AND ? LIKE concat ( pattern , ? ) | 22d984df583adc9a1ac282239e7629e2 |
| 2013-11-07 18:24:46 | 1.0553 | 125 | SELECT COUNT ( * ) FROM T…………… WHERE rule = ? AND ? LIKE concat ( ? , pattern , ? ) | a8ee43287bb2ee35e2c144c569a8b2de |
| 2013-11-07 18:24:46 | 0.9033 | 107 | INSERT IGNORE INTO `K……` ( `id` , `k……` ) VALUES (…) | 675e32e9eac555f33df240e80305c013 |
| 2013-11-07 18:24:46 | 0.7936 | 94 | SELECT * FROM `K……` WHERE k…… IN (…) | 8aa7dc3b6f729aec61bd8d7dfa5978fa |
| 2013-11-07 18:24:46 | 0.4559 | 54 | SELECT COUNT ( * ) FROM D….. WHERE NAME = ? OR NAME = ? | 1975f53832b0c2506de482898cf1fd37 |
| 2013-11-07 18:24:46 | 0.3208 | 38 | SELECT h . * FROM H…….. h LEFT JOIN H………… ht ON h . id = ht . ……_id WHERE ht . ………_id = ? ORDER BY h . level ASC | ca838db99e40fdeae920f7feae99d19f |
| 2013-11-07 18:24:46 | 0.2702 | 32 | SELECT h . * , ( POW ( ? * ( lat - - ? ) , ? ) + POW ( ? * ( ? - lon ) * COS ( lat / ? ) , ? ) ) AS distance FROM H…….. h FORCE INDEX ( lat ) WHER | cd6e32fc0a20fab32662e2b0a282845c |
| 2013-11-07 18:24:46 | 0.1857 | 22 | SELECT h . * , ( POW ( ? * ( lat - ? ) , ? ) + POW ( ? * ( - ? - lon ) * COS ( lat / ? ) , ? ) ) AS distance FROM H…….. h FORCE INDEX ( lat ) WHER | a7b43944f5811ef36c0ded7e79793536 |
| 2013-11-07 18:24:46 | 0.0760 | 9 | SELECT h . * , ( POW ( ? * ( lat - ? ) , ? ) + POW ( ? * ( ? - lon ) * COS ( lat / ? ) , ? ) ) AS distance FROM H…….. h FORCE INDEX ( lat ) WHERE | 4ccd8b28ae9e87a9c0b372a58ca22af7 |
| 2013-11-07 18:24:46 | 0.0169 | 2 | SELECT * FROM `K……` WHERE k…… IN (?) | 44286e824d922d8e2ba6d993584844fb |
| 2013-11-07 18:24:46 | 0.0084 | 1 | SELECT h . * , ( POW ( ? * ( lat - - ? ) , ? ) + POW ( ? * ( - ? - lon ) * COS ( lat / ? ) , ? ) ) AS distance FROM H…….. h FORCE INDEX ( lat ) WH | 299095227a67d99824af2ba012b81633 |
| 2013-11-07 18:24:46 | 0.0084 | 1 | SELECT * FROM `H……..` WHERE `id` = ? | 2924ea1d925a6e158397406403a63e3a |
| 2013-11-07 18:24:46 | 0.0084 | 1 | SHOW ENGINE INNODB STATUS | 0b04d3acd555401f1cbc479f920b1bac |
| 2013-11-07 18:24:46 | 0.0084 | 1 | DO `sleep` (?) | 3d6e973c2657d0d136bbbdad05e68c7a |
| 2013-11-07 18:24:46 | 0.0084 | 1 | SHOW ENGINE INNODB MUTEX | a031f0e6068cb12c5b7508106687c2cb |
| 2013-11-07 18:24:46 | 0.0084 | 1 | SELECT NOW ( ) , ( `count_star` / ( SELECT SUM ( `count_star` ) FROM `events_statements_summary_by_digest` ) * ? ) AS `pct` , `count_star` , LEFT ( `d | 8a9e990cd85d6c42a2e537d04c8c5910 |
| 2013-11-07 18:24:46 | 0.0084 | 1 | SHOW SLAVE STATUS | d2a0ffb1232f2704cef785f030306603 |
| 2013-11-07 18:24:46 | 0.0084 | 1 | TRUNCATE TABLE `events_statements_summary_by_digest` | a7bef5367816ca771571e648ba963515 |
| 2013-11-07 18:24:46 | 0.0084 | 1 | UPDATE `setup_consumers` SET `enabled` = ? WHERE NAME IN (…) | 8205ea424267a604a3a4f68a76bc0bbb |
| 2013-11-07 18:24:46 | 0.0084 | 1 | SHOW GLOBAL STATUS | ddf94d7d7b176021b8586a3cce1e85c9 |
+---------------------+---------+------------+------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------+
This immediately shows me a single simple application query that is executed 60% of the time. Further review of the data and usage pattern shows that this query should be cached, which is an immediate improvement to system scalability.
While you can look at the raw Performance Schema data, using ps_helper from Mark Leith makes life easier via the statement_analysis view, because it normalizes timers into human-readable formats (check out lock_latency).
mysql> select * from ps_helper.statement_analysis order by exec_count desc limit 10;
+---------------------------------------------------------------------+-----------+------------+-----------+------------+---------------+-------------+-------------+--------------+-----------+---------------+--------------+------------+-----------------+-------------+-------------------+----------------------------------+
| query | full_scan | exec_count | err_count | warn_count | total_latency | max_latency | avg_latency | lock_latency | rows_sent | rows_sent_avg | rows_scanned | tmp_tables | tmp_disk_tables | rows_sorted | sort_merge_passes | digest |
+---------------------------------------------------------------------+-----------+------------+-----------+------------+---------------+-------------+-------------+--------------+-----------+---------------+--------------+------------+-----------------+-------------+-------------------+----------------------------------+
| CREATE VIEW `io_by_thread_by_l … SUM ( `sum_timer_wait` ) DESC | | 146117 | 0 | 0 | 00:01:47.36 | 765.11 ms | 734.74 us | 00:01:02.00 | 3 | 0 | 3 | 0 | 0 | 0 | 0 | c877ec02dce17ea0aca2f256e5b9dc70 |
| SELECT nc . id , nc . name FRO … nc . id WHERE ……._id = ? | | 41394 | 0 | 0 | 16.85 s | 718.37 ms | 407.00 us | 5.22 s | 155639 | 4 | 312077 | 0 | 0 | 0 | 0 | c6e2249eb91767aa09945cbb118adbb3 |
| BEGIN | | 16281 | 0 | 0 | 223.24 ms | 738.82 us | 13.71 us | 0 ps | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 7519b14a899fd514365211a895f5e833 |
| INSERT INTO V…….. VALUES ( … KEY UPDATE v…. = v…. + ? | | 12703 | 0 | 0 | 1.73 s | 34.23 ms | 136.54 us | 696.50 ms | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ffb6231b78efc022175650d37a837b99 |
| SELECT * FROM `D…..` WHERE `name` = ? | | 10620 | 0 | 0 | 3.85 s | 25.21 ms | 362.52 us | 705.16 ms | 1 | 0 | 1 | 0 | 0 | 0 | 0 | d6399273d75e2348d6d7ea872489a30c |
| SELECT COUNT ( * ) FROM T….. … ? LIKE concat ( pattern , ? ) | | 2830 | 0 | 0 | 1.22 s | 2.14 ms | 432.60 us | 215.62 ms | 2830 | 1 | 101880 | 0 | 0 | 0 | 0 | 22d984df583adc9a1ac282239e7629e2 |
| SELECT COUNT ( * ) FROM T….. … KE concat ( ? , pattern , ? ) | | 2727 | 0 | 0 | 932.01 ms | 30.95 ms | 341.77 us | 189.47 ms | 2727 | 1 | 38178 | 0 | 0 | 0 | 0 | a8ee43287bb2ee35e2c144c569a8b2de |
| INSERT IGNORE INTO `K……` ( `id` , `k……` ) VALUES (…) | | 2447 | 0 | 0 | 499.33 ms | 9.65 ms | 204.06 us | 108.28 ms | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 675e32e9eac555f33df240e80305c013 |
| SELECT * FROM `K……` WHERE k…… IN (…) | | 2237 | 0 | 0 | 1.58 s | 62.33 ms | 704.19 us | 345.61 ms | 59212 | 26 | 59212 | 0 | 0 | 0 | 0 | 8aa7dc3b6f729aec61bd8d7dfa5978fa |
| SELECT COUNT ( * ) FROM D….. WHERE NAME = ? OR NAME = ? | | 1285 | 0 | 0 | 797.72 ms | 131.29 ms | 620.79 us | 340.45 ms | 1285 | 1 | 8 | 0 | 0 | 0 | 0 | 1975f53832b0c2506de482898cf1fd37 |
+---------------------------------------------------------------------+-----------+------------+-----------+------------+---------------+-------------+-------------+--------------+-----------+---------------+--------------+------------+-----------------+-------------+-------------------+----------------------------------+
Indeed, this simple query highlights a pile of additional questions that need analysis, like:
What is that CREATE VIEW command that is executed so many times?
In this view, query 2 is executed some 3x more than query 4, yet in my 60-second sample it was 3x less. Has the profile of the query load changed? What exactly is being sampled in this view?
The lock_latency shows some incredibly large lock times, over 5 seconds for the top SELECT statement. Is this an outlier? Unfortunately the views give min/avg/max for the total_latency but no breakdown of lock_latency, so it is hard to see how much of a problem this actually is.
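One way to start on that last question is to go back to the raw digest table, which at least ties total lock time to specific digests (sum_lock_time is reported in picoseconds), even though it still gives no min/max breakdown:
SELECT LEFT(digest_text, 80) AS stmt, count_star,
       sum_lock_time/1000000000000 AS total_lock_sec
FROM performance_schema.events_statements_summary_by_digest
ORDER BY sum_lock_time DESC LIMIT 5;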
A quick note: the statement_analysis_raw view gives you the full SQL statement, so, for example, the statement behind the first row listed above actually was:
select query from ps_helper.statement_analysis_raw order by exec_count desc limit 1;
CREATE VIEW `io_by_thread_by_latency` AS SELECT IF ( `processlist_id` IS NULL , `SUBSTRING_INDEX` ( NAME , ? , - ? ) , `CONCAT` ( `processlist_user` , ? , `processlist_host` ) ) SYSTEM_USER , SUM ( `count_star` ) `count_star` , `format_time` ( SUM ( `sum_timer_wait` ) ) `total_latency` , `format_time` ( MIN ( `min_timer_wait` ) ) `min_latency` , `format_time` ( AVG ( `avg_timer_wait` ) ) `avg_latency` , `format_time` ( MAX ( `max_timer_wait` ) ) `max_latency` , `thread_id` , `processlist_id` FROM `performance_schema` . `events_waits_summary_by_thread_by_event_name` LEFT JOIN `performance_schema` . `threads` USING ( `thread_id` ) WHERE `event_name` LIKE ? AND `sum_timer_wait` > ? GROUP BY `thread_id` ORDER BY SUM ( `sum_timer_wait` ) DESC
via Planet MySQL
What SQL is running MySQL
“Good Leadership Is Always Human”
“Good leadership is always human. It takes time and energy. It is hard work. Which is why good leadership is so special when we find it.” – Simon Sinek
While it seems straightforward—and maybe even a little obvious—you can become preoccupied with the wrong things when taking the lead. It’s easy to get lost thinking about results, output, performance, and the sorts of quantifiable metrics that can look a lot like success. Ultimately, however, you don’t get to those points without successful human interaction. If you want to lead well, you can’t forget the human component.
Good Leadership | @simonsinek via Swissmiss
MySQL 5.6 DBA Beta Exam IS TOUGH!
I have just finished the MySQL 5.6 DBA Certification Exam Beta, and my best advice is to study, study, study. The exam is in beta, which means three things. The first is that the exam is three hours long, and it took me all that time to finish. Normally I am a very fast test taker, which has helped me earn three college degrees and enough computer-related certifications to wallpaper a decent-sized house. So if you are a slow reader, suck down a gallon of high-octane coffee before your test appointment.
Secondly, consider that some of the questions will need to be cut or reworked as the exam goes to production, to fit the normal two-hour test window Pearson VUE prefers. In the beta period, they over-test and cut the questions that prove less than satisfactory. So if you see a question that triggers the ‘where the heck did they find that?’ reflex, you can hope it ends up being one of the questions that gets relegated.
Third and lastly, while the exam is in beta you will not get your grade for several weeks. There is no immediate pass/fail or thumbs-up/thumbs-down. You will walk out wondering just exactly how you did, and you will also have some mental notes that you will want to cross-check against the manual.
I have heard some say that, because of the lack of a Certification Guide, the only way to pass will be to take the DBA class, and that is bull cookies. I had access to the course materials (one perk of working for Oracle is access to mountains of docs), and there are numerous items on the exam not covered in them or the class exercises. You had better go through the exam objectives with a fine-tooth comb and spend a good deal of time on the new features of replication, the Performance Schema, user administration, and everything else new in 5.6, or you will be wasting your time. I estimate that a complete certification guide for 5.6 would be at least twice the size of the 5.0 version and kill off too many trees.
I do have a big hint. The Pearson VUE testing software will let you right-click on any item in the list of answers and strike it out, to help you eliminate multiple-choice options that are obviously wrong. So pick off the obviously wrong answers and mark the question for review if you get stuck.
So how does it compare to the ol’ 5.0 DBA exam? Well, the questions are much more rigorous in testing your knowledge but less picky on wording. Or to be more precise, the hard part is the material and not the way the questions are asked. In the past a candidate had to memorize a lot of minutiae and a little of that is still there. But the overwhelming majority of the questions cover items that a DBA will have to handle as part of the job.
And how does it compare to the ‘hands-on’ 5.1 DBA exam? Both measure practical DBA material, and my preference is for the hands-on approach. But a hands-on exam cannot cover the breadth of material that the 5.6 exam does.
via Planet MySQL
MySQL 5.6 DBA Beta Exam IS TOUGH!
How to Use Amazon Glacier as a Dirt Cheap Backup Solution
Not too long ago, Amazon introduced Glacier, an online storage/archiving solution that starts at just a penny per GB per month. Depending on your storage needs, Amazon Glacier could be the most cost-efficient way to back up your data for a lifetime. Here’s what you need to know about it and how to set it up.
How Amazon Glacier Works
Amazon Glacier is a low-cost, online storage service where you pay every month only for what you use (online storage space plus data transfers). It’s like Amazon’s other inexpensive storage service, S3—only about 10 times cheaper. Why does it cost so little? Amazon’s designed Glacier to be optimized for data you don’t access often—think long-term storage of photos and videos, archived project files, etc. You’re not supposed to use it to regularly retrieve files or constantly delete them off the servers, and if you do it’ll cost you.
The most important thing to know about Amazon Glacier is that file retrieval requests take 3 to 5 hours to complete. So this isn’t for backing up and quickly retrieving a file you accidentally deleted and need right away.
Your files and folders are stored in Amazon Glacier containers called "vaults." Amazon calls all the stuff in your Glacier vaults "archives." These can be a single file or you can zip multiple files and folders into a single archive, which can be as large as 40TB. If you ever need to retrieve your data, you request it by archive. (They don’t want you downloading an entire vault at once; you’ll pay dearly if you want to.)
Finally, pricing is a bit complicated and Amazon doesn’t provide software for uploading and downloading your data, but there are great third-party tools you can use. (See below)
Amazon Glacier vs. CrashPlan and Other Backup Services
So why bother with all these quirks when you can just use CrashPlan, Backblaze, or another popular online backup service? To be honest, if you just want a set-and-forget online backup system, one of those online solutions would be best.
However, if you’re already backing up your data locally to a NAS or external drive, and perhaps also using cloud storage and syncing like Dropbox or Google Drive, Amazon Glacier can be your dirt cheap offsite backup. (Remember the 3-2-1 backup rule?) That way, you have your local backup for retrieving deleted files, restoring your system after a crash, or whatever else. Your Amazon Glacier backup is there just in case your computer and backup drive both get ruined, like in a fire or an earthquake.
All this depends on how much you want to store offsite, though. Let’s compare with our favorite online backup, CrashPlan:
- CrashPlan’s 10GB backup plan for one computer is $2.99 a month. For that same amount of data on Glacier it’s roughly $0.10 a month. (Using the Glacier server region US East as an example, since it’s one of the lowest priced ones.)
- CrashPlan’s unlimited backup for one computer is $5.99 a month. That’s about the same as you’ll pay per month to store 600GB on Glacier.
- Other examples: 100GB would be $0.88 a month on Glacier; 200GB would be $1.88 a month; 300GB would be $2.88 a month.
(CrashPlan has more attractive pricing if you pay for a year or more in advance rather than monthly, but let’s compare apples to apples.) So essentially, if you have less than 600GB to back up offsite, Glacier is the more affordable option. Amazon Glacier is to online storage as pay-as-you-go or prepaid cell phone plans are to wireless plans.
And again, this is assuming that you don’t need to retrieve any of that data regularly (e.g., you use your local or Dropbox backups instead), because, as I mentioned earlier, there are retrieval fees.
To see if it’s worth it for you, use this Glacier Cost Calculator, putting in the size of the data you want backed up. (The early deletion fee is if you delete data uploaded in the last three months; it comes out to about $0.03 per GB—or the amount you’d spend storing it for three months, so you might as well just keep it there.)
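As a rough sketch of the math using the headline penny-per-GB rate (the calculator folds in exact regional rates and fees, which is why its figures differ slightly from this):
# back-of-the-envelope Glacier storage cost
STORAGE_GB=300
echo "scale=2; $STORAGE_GB * 0.01" | bc        # monthly cost in dollars: 3.00
echo "scale=2; $STORAGE_GB * 0.01 * 12" | bc   # yearly: 36.00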
How to Back Up to Amazon Glacier
If it sounds good to you, getting started with Glacier is pretty easy.
Step 1: Sign Up for Amazon Web Services
First, sign into your Amazon account or create a new Amazon Web Services account here. You’ll need to enter a credit card, but won’t be charged until you start using Amazon Glacier/AWS. You’ll also have to verify your identity over the phone and then choose a customer support plan (most people will want the free version).
While you’re at it, protect your account with multi-factor authentication (MFA), a.k.a. two-factor authentication, on the last screen of the setup; it adds a second layer of security for your data. If you have Google Authenticator on your mobile device, for example:
- Select "virtual MFA" from the types of authentication methods on Amazon
- Then, in Google Authenticator, go to the options menu to Add an account > scan QR barcode
- With your mobile device, scan the barcode Amazon displays on the screen
- Then enter the authentication codes from Google Authenticator into the Amazon security page
Step 2: Create a Security Access Key for Your Amazon Glacier Account
Next, head to the Security Credentials page in Amazon Web Services, expand the "Access Keys" section, and click the "Create New Access Key" button. You’ll download the key file (CSV format) to your computer, which contains the credentials required for Amazon Glacier client software to access your files.
Step 3: Create a Vault in Glacier
In the Amazon Glacier console/homescreen, click the Create Vault button. You can also click the region name at the top navigation bar to change to a different data center. (Amazon requires you to choose a data center for your Glacier storage: e.g., US East, US West, Asia, EU. These have different pricing schemes. In general, the US East and US West-Oregon are less expensive, but you’ll want to check the cost calculator mentioned earlier.)
Just name the vault, select if you want notifications on activity for the vault, and you’re done. You can have multiple vaults (e.g., "Photo archive" or "Software backups") in your Glacier account—as many as 1,000 vaults per region—if you want to organize them better. Also, many Glacier clients allow you to create vaults directly in the software.
Step 4: Download and Install an Amazon Glacier Client
Third-party software makes uploading, syncing, and automatically backing up your data easy. I’m using Fast Glacier (free, Windows) because it has a lot of features, like bandwidth throttling, drag-and-drop support, a sync tool, and support for smart data retrieval (this saves you money if you have to retrieve files, by staggering the job requests). Other popular clients include CloudBerry Explorer and previously mentioned Arq for Mac. Digital Inspiration has a nice overview of several Glacier clients.
For the rest of these examples, I’m going to use Fast Glacier screenshots, but they should be similar in other programs.
Step 5: Connect Your Amazon Glacier Client to Your Account
Open up the Security Access Key file you downloaded to grab the Access ID and Secret Key codes to put into your client. Once you do so, you’ll see the vaults you created and can upload files and folders to them.
In FastGlacier and CloudBerry Explorer (and probably the other clients), you can simply drag-and-drop folders and files to start the upload. You can do this with mapped network drives—great for backing up a NAS. This is where you would also download or delete files (knowing the limitations mentioned above).
Step 6: Automate Your Backups
FastGlacier has a convenient comparison and syncing tool (under Files > Compare with local folder), where you can see which files are missing from Glacier and choose to synchronize them (with options to upload only changed files, new files, or missing ones). This needs to be manually launched, though.
To automate the backups in FastGlacier, you’ll need to use Windows Task Scheduler. In Task Scheduler, create a new task and point it to the FastGlacier sync tool (C:\Program Files\FastGlacier\glacier-sync.exe). In the arguments for the task, put in the name of your Glacier account, the source folder on your computer, the region you’ve selected for your vault, and the vault name and directory you want to back up to. See FastGlacier’s command-line folder sync tool instructions or Get in the Sky for more examples.
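For illustration only, the scheduled command might look something like this; the account name, paths, and argument order are hypothetical, so verify the real syntax against FastGlacier’s documentation:
rem Hypothetical Task Scheduler command line -- check FastGlacier's docs for the exact syntax
"C:\Program Files\FastGlacier\glacier-sync.exe" MyAccount "C:\Users\Me\Documents" us-east-1 MyVault\Documents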
As an alternative, CloudBerry Backup ($29.99, Windows) is more of a traditional backup tool for this job or you can even do it for free via FTP backup.
That’s basically it. For a few dollars a year, you add more redundancy to your backup system, with the confidence of having your most precious data saved on redundant servers and an average 99.999999999% durability for your archives.
via Lifehacker
How to Use Amazon Glacier as a Dirt Cheap Backup Solution
MySQL Workbench 6.0.8 Released
Dear MySQL users,
The MySQL developer tools team at Oracle is excited to announce the availability of MySQL Workbench 6.0.8.
MySQL Workbench 6.0 is the new version of the GUI development and administration tool for MySQL. This maintenance release contains over 50 bug fixes since version 6.0.7.
Improvements in MySQL Workbench 6.0:
a new redesigned Home screen
the SQL Editor and Server Administration UIs were merged into a single connection specific interface, allowing for quick access to administration features while simplifying the location of specific features
improved model Synchronization, lets you compare and update your EER model or database with ALTER scripts and properly handle corner cases involving objects renamed externally or sync schemas with different names
improved support for model printing to PDF files
all new Schema Inspector, gives you a detailed overview of all objects in your schemas, plus access to table maintenance operations such as ANALYZE and OPTIMIZE TABLE
table data search, lets you perform a text search on any number of tables and schemas for rows matching a given pattern
improved Server Status page, gives a quick summary of server status and configuration
cascaded DELETE statement generator, automatically performs the tedious task of generating DELETE statements for deleting a row referenced by foreign keys
new Migration Wizard support for SQL Anywhere and SQLite
several performance improvements
and much more. For a detailed overview of what’s new in MySQL Workbench 6.0, please visit:
http://dev.mysql.com/doc/workbench/en/wb-what-is-new.html
For the full list of bugs fixed in this revision, visit
http://dev.mysql.com/doc/relnotes/workbench/en/changes-6-0.html
For discussion, join the MySQL Workbench Forums:
http://forums.mysql.com/index.php?151
Download MySQL Workbench 6.0 now, for Windows, Mac OS X 10.6+, Oracle Linux 6, Fedora 18, Fedora 19, Ubuntu 12.04 and Ubuntu 13.04 or sources, from:
http://dev.mysql.com/downloads/tools/workbench/
In Windows, you can also use the integrated MySQL Installer to update MySQL Workbench and other MySQL products. For RPM package based Linux distributions, you can get and update Workbench using the new MySQL RPM repository at http://dev.mysql.com/downloads/repo/
Quick links:
– Download: http://dev.mysql.com/downloads/tools/workbench/
– Bugs: http://bugs.mysql.com
– Forums: http://forums.mysql.com/index.php?151
Read more about MySQL Workbench 6.0 and some of its features in
MySQL Workbench 6.0: What’s New
http://mysqlworkbench.org/2013/06/mysql-workbench-6-0-whats-new/
--
the MySQL Workbench team
via Planet MySQL
MySQL Workbench 6.0.8 Released
Playing Darts Is So Much Easier With a Slingshot Sniper Rifle
Not satisfied with the dart-launching pistol he whipped up last year, Joerg Sprave went back to the drawing board and came up with a dart-shooting sniper rifle that all but guarantees him domination at his local pub’s dartboard.
via Gizmodo
Playing Darts Is So Much Easier With a Slingshot Sniper Rifle