This is how fighter jet pilots pass snacks in the cockpit while flying

No one, not even these fighter pilots, can resist a chocolate bar when the munchies strike. This video shows how the pilot in the front seat passes a Snickers bar to his colleague just by opening his hand and letting the acceleration forces carry it to the back of the cockpit. So cool.


SPLOID is delicious brain candy. Follow us on Facebook or Twitter.

via Gizmodo

MySQL Workbench 6.3.1 Beta has been released

The MySQL developer tools team is pleased to announce 6.3 Beta – the start of the public beta testing phase of MySQL Workbench 6.3.

This release contains a number of new and enhanced features which are feature complete but might still contain bugs, so use them with care. For the full list of changes in this revision, visit http://dev.mysql.com/doc/relnotes/workbench/en/changes-6-3.html. For detailed information about the new features, see What’s New in MySQL Workbench 6.3: http://dev.mysql.com/doc/workbench/en/wb-what-is-new-63.html. For discussion, join the MySQL Workbench Forums: http://ift.tt/KKCsvd. Download MySQL Workbench 6.3.1 beta now, for Windows, Mac OS X 10.7+, Oracle Linux 6 and 7, Fedora 20 and Fedora 21, Ubuntu 14.04 and Ubuntu 14.10, or sources, from: http://ift.tt/KHX9aU. Enjoy!
via Planet MySQL

Google CS First Teaches Kids Programming and Computer Science for Free

There are a wealth of ways for kids to learn computer science and programming these days. Add to the mix: Google’s own CS First curriculum, a free program with a variety of themes for all kinds of kids’ interests.

CS First isn’t new (the program started last year), but as a program that’s meant for teachers and community volunteers to form computer science clubs for kids ages 9 to 14, you might not have heard about it.

Lesson themes include Music & Sound (create a music video and build an interactive music display), Fashion & Design (build fashion-themed apps), Storytelling (create interactive stories), Game Design (code a game), and so on.

All of the materials (videos, programming in Scratch, and lesson plans) are freely available online, so parents—whether homeschooling or not—and kids with a lot of initiative can check out the lessons themselves. Or, better yet, start a club in your community.

CS First


via Lifehacker

Production Methods: How to Turn a Flat Metal Disc into a Bottle-Necked Scuba Tank

Plastic bottles are made by blow-molding, a beautiful, elegant process that can create a full-bodied shape with a narrow little neck. But how the heck do they make scuba tanks, fire extinguishers and gas canisters, which have similarly narrow necks, but are of course made of non-blow-moldable metal?

My first thought was that they make them in two halves via metal drawing, then weld them together—but obviously that makes no sense. So I looked into it and found that yes, a scuba tank comes from a single flat disc of metal alloy.

And while drawing is still the initial production method, I wasn’t aware that what they call "hot spinning" is how you get the bottleneck:


via Core77 Rss Feed

SHOT Show 2015: Chiappa X-Caliber Survival Rifle/Shotgun

Although the Chiappa X-Caliber was actually first shown at SHOT Show 2014, it only started shipping a few months ago. It’s a lightweight survival gun configured as a break-open over/under with a 12 gauge shotgun barrel on top and a … Read More

The post SHOT Show 2015: Chiappa X-Caliber Survival Rifle/Shotgun appeared first on The Truth About Guns.

via The Truth About Guns

Temp Tables, Filesorts, UTF8, VARCHAR, and Schema Design in MySQL

The other day we were doing some peer review on a schema change at VividCortex and the topic of character set, column length, and the potential downsides of using utf8 came up. I knew from experience that there are some hidden gotchas with this, and usually I’ll just say a couple words and send a link with further reading. But Google turned up nothing, so I am writing this post to try to help fix that.
TL;DR version is that when MySQL can’t use an index for operations such as ORDER BY, it may allocate fixed-length memory buffers large enough to hold the worst-case values, and the same might apply to temporary tables on disk. This can be seen in EXPLAIN as “Using temporary; Using filesort” and similar. And when this happens, you might end up making MySQL do gigabytes worth of work to finish a query on a table that’s only a fraction of that size.
Let’s see why this can happen.
The Theory of UTF8
In theory, you can “just use utf8 for everything,” and there is lots of advice on the Internet suggesting you should. There are good reasons for this. If you use a 1-byte character set, such as the default latin1, and put multibyte data into it (which is very easy to do), you can end up with quite a mess that’s pretty hard to undo. Lots of funny-looking characters can result. You know, those little diamonds with a question mark in the middle, or squares with four little digits inside instead of a character.
In theory, there’s no downside to using utf8 for ASCII data. The “normal” characters all fit in one byte in utf8 anyway, so the bytes end up being the same, regardless of the character set. You’d only get multi-byte characters when you go outside the ASCII range. (I’m being a bit vague with terms like ASCII to avoid the verbosity of being precise. Please correct me, or ask questions, if I’m taking too much of a shortcut.)
And in theory, you can use VARCHAR for everything, too. A VARCHAR begins with a little header that says how long the value is. For VARCHAR(255), for example, there will be 2 bytes that say how long the value is, followed by the data itself. If you use, say, VARCHAR(255) for a column, even if it’s not really going to store values that long, you theoretically pay no extra cost.
So in theory, this is a fine table definition:
CREATE TABLE `t1` (
`a` varchar(255) CHARACTER SET utf8 NOT NULL,
`b` varchar(255) CHARACTER SET utf8 NOT NULL,
KEY `a` (`a`)
)
Now let’s talk about the reality.
Temporary Tables and Filesorts
Suppose you write the following query against the table shown above:
SELECT a FROM t1 WHERE a > 'abc' ORDER BY a LIMIT 20;
MySQL can execute this query by looking into the index on the a column, finding the first value that’s greater than “abc”, and reading the next 20 values. The index is already sorted, so the ORDER BY is automatically satisfied. This is excellent.
The difficulty comes when, for example, you ORDER BY a different column.
SELECT a FROM t1 WHERE a > 'abc' ORDER BY b LIMIT 20;
In this case, MySQL will have to use a so-called “filesort.” Filesort doesn’t really mean files are sorted. It should be called “sort, which may use a file if it overflows the buffer.”
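You can spot this in EXPLAIN. The output below is only a sketch of what you would typically see (exact plan details vary by MySQL version and data distribution):

```sql
mysql> EXPLAIN SELECT a FROM t1 WHERE a > 'abc' ORDER BY b LIMIT 20\G
-- Look for "Using filesort" in the Extra column, e.g.:
--   Extra: Using where; Using filesort
```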
MySQL has a couple of sort algorithms. These are covered in some detail in High Performance MySQL, especially in Chapter 6 under Sort Optimizations. The manual also discusses them, as does Sergey Petrunia’s blog post. In brief, though, MySQL does a sort by putting values into a buffer in memory. The size of this buffer is specified by the sort_buffer_size server variable. If all the values fit into the buffer, MySQL sorts in-memory; if not, then it writes to temp files on disk and does a merge-sort.
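To check whether sorts on your server are actually overflowing to disk, one quick sketch (both the variable and the status counter exist in stock MySQL):

```sql
-- The per-session in-memory sort buffer:
SHOW VARIABLES LIKE 'sort_buffer_size';

-- How many times sorts have had to merge temp files on disk;
-- a steadily growing value means sorts are spilling past the buffer:
SHOW GLOBAL STATUS LIKE 'Sort_merge_passes';
```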
This is where the gotchas start to appear and theory diverges from reality in two important ways:
VARCHAR isn’t variable-length anymore. The table’s on-disk storage may be variable, but the values are stored in fixed-length in memory, at their full possible width. Ditto for the temporary files.
utf8 isn’t single-byte anymore. The fixed-length values are created large enough to accommodate rows that are all 3-byte characters.
So your “hello, world” row that consumes 2+12 bytes on disk suddenly consumes 2+(3*255)=767 bytes in memory for the sort. Or on disk, if there are too many rows to fit into the sort buffer.
It can potentially get worse than this, too. The dreaded “Using filesort” is bad enough, but it could be “Using temporary; Using filesort” which should strike fear into your heart. This means MySQL is creating a temporary table for part of your query. It might use an in-memory temporary table with the MEMORY storage engine, which pads rows to full-length worst-case; it might also use on-disk MyISAM storage for the temp table too. There’s more about this in High Performance MySQL as well.
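As a sketch, a query that aggregates on an unindexed column will typically show both flags (again, the exact plan varies):

```sql
mysql> EXPLAIN SELECT b, COUNT(*) FROM t1 GROUP BY b\G
-- Extra: Using temporary; Using filesort
-- The temp table's VARCHAR columns are padded to full worst-case width.
```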
Summary
This is a fairly involved topic, with lots of good reading in the manual, the Internet, and The Book Whose Name I Shall Not Keep Repeating.
Details aside, the point is it’s fairly easy (and not uncommon) to create a situation where your temp partition fills up with 10GB of data from a single innocent-looking query against a table that’s a fraction of that size on disk. All because of utf8 and/or long VARCHAR lengths!
My advice is generally to consider things like character sets and VARCHAR length limits in three ways:
Constraints on what data can/should be stored.
Accommodation for the data you want to store; make sure your database can hold what you want to put into it.
Hints and worst-case bounds for the query execution process, which will sometimes be pessimistic/greedy and do the work the schema indicates might be needed, rather than the work that’s actually needed for the data that’s stored.
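Applying that advice to the earlier table, a more conservative definition might look like this (the lengths and the ascii charset for b are illustrative assumptions, not a prescription):

```sql
CREATE TABLE `t1` (
  -- utf8 where the data genuinely is free text:
  `a` varchar(64) CHARACTER SET utf8 NOT NULL,
  -- a single-byte charset for values that can never be multibyte
  -- (e.g. hex tokens), keeping worst-case sort rows small:
  `b` varchar(32) CHARACTER SET ascii NOT NULL,
  KEY `a` (`a`)
);
```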
There’s some balance between overly tight schema constraints, which might force you to do an ALTER in the future (yuck, especially on large tables), and overly loose constraints, which might surprise you in a bad way. Finding that balance comes with experience, and unless you have a crystal ball, even with experience you’ll get bitten sometimes!
A relevant tweet I saw today:
Bad schemas < no schemas < good schemas – @jaykreps #StrataHadoop #Avro #Kafka— Tom White (@tom_e_white) February 19, 2015
Happy schema optimizing!
Thumbnail Cred
via Planet MySQL

MySQL Character encoding – part 2

In MySQL Character encoding – part 1 we stated that the myriad of ways in which character encoding can be controlled can lead to many situations where your data may not be available as expected.
UTF8 was designed on a placemat in a New Jersey diner one night in September or so 1992.
Setting MySQL Client and Server Character encoding.
Let’s restart MySQL with the correct setting for our purpose, UTF8. Here we can see the setting in the MySQL configuration file, in this case /etc/mysql/my.cnf.
character-set-server = utf8
This change is then reflected in the session and global variables once the instance is restarted with the new configuration parameter.
mysql> SELECT @@global.character_set_server, @@session.character_set_client;
+-------------------------------+--------------------------------+
| @@global.character_set_server | @@session.character_set_client |
+-------------------------------+--------------------------------+
| utf8                          | utf8                           |
+-------------------------------+--------------------------------+
1 row in set (0.00 sec)
Now that we have verified the server and client are set to use UTF8, we can go ahead, continue developing our application, and create a new table, people.
mysql> CREATE TABLE people (first_name VARCHAR(30) NOT NULL, last_name VARCHAR(30) NOT NULL);
Query OK, 0 rows affected (0.13 sec)
Now let’s enter some data into the new table, which was created with the server and client configured for UTF8.
Something appears to have gone terribly wrong: the accent in Maciek’s surname now appears as a question mark.
mysql> SELECT @@session.character_set_server, @@session.character_set_client;
+--------------------------------+--------------------------------+
| @@session.character_set_server | @@session.character_set_client |
+--------------------------------+--------------------------------+
| utf8                           | utf8                           |
+--------------------------------+--------------------------------+
1 row in set (0.00 sec)
The database settings are still UTF8; this should have worked.
mysql> USE fosdem;
mysql> SHOW CREATE TABLE people\G
*************************** 1. row ***************************
Table: people
Create Table: CREATE TABLE `people` (
`first_name` varchar(30) NOT NULL,
`last_name` varchar(30) NOT NULL
) ENGINE=InnoDB DEFAULT CHARSET=latin1
1 row in set (0.00 sec)
Looking at the table, we see that despite being created under a server set to use UTF8, it appears to be set to use latin1.
How can this be? Let’s look at the session settings.
mysql> SHOW SESSION VARIABLES LIKE 'character_set_%';
+--------------------------+----------------------------+
| Variable_name            | Value                      |
+--------------------------+----------------------------+
| character_set_client     | utf8                       |
| character_set_connection | utf8                       |
| character_set_database   | latin1                     |
| character_set_filesystem | binary                     |
| character_set_results    | utf8                       |
| character_set_server     | utf8                       |
| character_set_system     | utf8                       |
| character_sets_dir       | /usr/share/mysql/charsets/ |
+--------------------------+----------------------------+
8 rows in set (0.00 sec)
We can see the server and client values are as expected, but database is set to something else.
mysql> SHOW CREATE DATABASE fosdem\G
*************************** 1. row ***************************
       Database: fosdem
Create Database: CREATE DATABASE `fosdem` /*!40100 DEFAULT CHARACTER SET latin1 */
1 row in set (0.00 sec)
Since the database was created when the server was set to latin1 it inherited that charset setting, which persists even when the server setting changes.
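To sidestep the inheritance problem, the charset can be declared explicitly at each level when creating objects. A sketch (fosdem2 is just an illustrative name):

```sql
-- Explicit at the database level, independent of the server default:
CREATE DATABASE fosdem2 DEFAULT CHARACTER SET utf8;

-- And explicit at the table level, overriding the database default:
CREATE TABLE fosdem2.people (
  first_name VARCHAR(30) NOT NULL,
  last_name  VARCHAR(30) NOT NULL
) DEFAULT CHARACTER SET utf8;
```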
Can we fix this?
mysql> SET NAMES utf8;
Query OK, 0 rows affected (0.00 sec)
mysql> SELECT last_name, HEX(last_name) FROM people;
+------------+----------------------+
| last_name  | HEX(last_name)       |
+------------+----------------------+
| Lemon      | 4C656D6F6E           |
| Müller     | 4DFC6C6C6572         |
| Dobrza?ski | 446F62727A613F736B69 |
+------------+----------------------+
3 rows in set (0.00 sec)
mysql> SET NAMES latin2;
Query OK, 0 rows affected (0.00 sec)
mysql> SELECT last_name, HEX(last_name) FROM people;
+------------+----------------------+
| last_name  | HEX(last_name)       |
+------------+----------------------+
| Lemon      | 4C656D6F6E           |
| Müller     | 4DFC6C6C6572         |
| Dobrza?ski | 446F62727A613F736B69 |
+------------+----------------------+
3 rows in set (0.00 sec)
Unfortunately, no matter how I try to read the data, 0x3F is ‘?’, so the ‘ń’ has been lost forever. Therefore it may not be enough to reconfigure the server, as a mismatch between client and server can permanently break data, due to the implicit conversion inside the MySQL server.
Implicit conversions happen silently when characters of one character set are inserted into a column with a different character set. This behaviour can be controlled by SQL_MODE, which allows you force MySQL to raise an error instead.
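As a sketch (exact error text varies by version), strict mode turns the silent ‘?’ substitution into a hard error:

```sql
SET SESSION sql_mode = 'STRICT_ALL_TABLES';

-- Inserting a character the column's charset cannot represent
-- now fails instead of being silently replaced with '?':
INSERT INTO people (first_name, last_name) VALUES ('Maciek', 'Dobrzański');
-- ERROR 1366 (HY000): Incorrect string value: ...
```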
In MySQL Character encoding – part 1 we established there were a number of places you can control the character settings, now we can add a couple of important observations to our view of Character encoding settings.
Session settings
character_set_server
character_set_client
character_set_connection
character_set_database
character_set_results
Schema level Defaults – Affects new tables
Table level Defaults – Affects new columns
Column charsets
We have seen how a table created with no explicit charset declaration inherits the database (schema) charset, but what happens to a column when the table charset is changed?
mysql> USE fosdem;
mysql> CREATE TABLE test (a VARCHAR(300), INDEX (a));
Query OK, 0 rows affected (0.62 sec)

mysql> SHOW CREATE TABLE test\G
*************************** 1. row ***************************
Table: test
Create Table: CREATE TABLE `test` (
`a` varchar(300) DEFAULT NULL,
KEY `a` (`a`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1
1 row in set (0.00 sec)

mysql> ALTER TABLE test DEFAULT CHARSET = utf8;
Query OK, 0 rows affected (0.08 sec)
Records: 0  Duplicates: 0  Warnings: 0

mysql> SHOW CREATE TABLE test\G
*************************** 1. row ***************************
Table: test
Create Table: CREATE TABLE `test` (
`a` varchar(300) CHARACTER SET latin1 DEFAULT NULL,
KEY `a` (`a`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8
1 row in set (0.00 sec)
The columns in a table inherit their default charset value when the table is created, but existing columns do not change when the table default changes; however, new columns added after the ALTER TABLE will inherit UTF8.
mysql> ALTER TABLE test ADD b VARCHAR(10);
Query OK, 0 rows affected (0.74 sec)
Records: 0  Duplicates: 0  Warnings: 0

mysql> SHOW CREATE TABLE test\G
*************************** 1. row ***************************
Table: test
Create Table: CREATE TABLE `test` (
`a` varchar(300) CHARACTER SET latin1 DEFAULT NULL,
`b` varchar(10) DEFAULT NULL,
KEY `a` (`a`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8
1 row in set (0.00 sec)
What can you do if you detect inconsistencies in your MySQL Character encoding settings?
First of all, keep calm and don’t start by changing something. Analyse the situation and make sure you understand what settings you have and what your application understands regarding reading and writing data from the database.
Once you detect a problem, try to assess the extent of the damage. Firstly, what is the scope of the damage, and is it consistent? Are all the rows bad, or just a subset, such as the last day’s worth of inserts? Are all the bad rows broken in the same way, or is there a mixture of problems affecting different sets of rows? Are the rows actually repairable? It could be that recovering from backup and rolling forward is necessary, as the inserted data has already been destroyed. Has any character mapping occurred during writes (e.g. unicode over latin1/latin1)? All of this is necessary to get a good picture of where you are starting from.
Take care not to do any of the following:
Try to fix this table by table, unless you really only have a single table. Any fix will affect the application and database as a whole, so fixing a single table may lead to inconsistencies and further problems elsewhere.
ALTER TABLE … DEFAULT CHARSET = as it only changes the default character set for new columns.
ALTER TABLE … CONVERT TO CHARACTER SET … It’s not for fixing broken encoding.
ALTER TABLE … MODIFY col_name … CHARACTER SET …
What needs to be fixed?
Schema default character set
ALTER SCHEMA fosdem DEFAULT CHARSET = utf8;
Tables with text columns: CHAR, VARCHAR, TEXT, TINYTEXT, LONGTEXT
What about ENUM?
The information schema can provide a list of candidate tables.
SELECT CONCAT(c.table_schema, '.', c.table_name) AS candidate_table
FROM information_schema.columns c
WHERE c.table_schema = 'fosdem'
AND c.column_type REGEXP '^(.*CHAR|.*TEXT|ENUM)(\(.+\))?$'
GROUP BY candidate_table;
You must also ensure the database and application configuration is correct, to avoid having the newly fixed tables broken again by new data being written with the wrong settings.
How do I fix this?
Option 1. Dump and restore (Requires downtime)
Dump the data preserving the bad configuration and drop the old database
# mysqldump -u root -p --skip-set-charset --default-character-set=latin1 fosdem > fosdem.sql
mysql> DROP SCHEMA fosdem;
Correct table definitions in the dump file by editing DEFAULT CHARSET in all CREATE TABLE statements, then create the database again and import the data.
mysql> CREATE SCHEMA fosdem DEFAULT CHARSET utf8;
# mysql -u root -p --default-character-set=utf8 fosdem < fosdem.sql
Option 2. Two step conversion (Requires downtime)
Perform a two step conversion with ALTER TABLE, converting the original encoding to VARBINARY/BLOB and then from there to the target encoding. Conversion from/to BINARY/BLOB removes character set context.
Stop applications
On each table, for each text column perform:
ALTER TABLE tbl MODIFY col_name VARBINARY(255);
ALTER TABLE tbl MODIFY col_name VARCHAR(255) CHARACTER SET utf8;
You may specify multiple columns per ALTER TABLE
Fix the problems (application and/or db configs)
Restart applications
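As a sketch of the two ALTER steps above, both conversions can cover several columns in a single statement (table and column names are from the earlier example; the lengths must match the original columns):

```sql
-- Step 1: drop the charset context by going through VARBINARY:
ALTER TABLE people
  MODIFY first_name VARBINARY(30),
  MODIFY last_name  VARBINARY(30);

-- Step 2: reinterpret the same bytes under the target charset:
ALTER TABLE people
  MODIFY first_name VARCHAR(30) CHARACTER SET utf8 NOT NULL,
  MODIFY last_name  VARCHAR(30) CHARACTER SET utf8 NOT NULL;
```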
Option 3. Online character set fix (Minimal downtime, approximately 1 min)
Using pt-online-schema-change with the PSCE plugin and a small patch for pt-online-schema-change, you can convert columns online in the live database.
Start pt-online-schema-change on all tables, one by one, with table rotation disabled (--no-swap-tables), or drop the pt-online-schema-change triggers
Wait until all tables have been converted
Stop applications
Fix the problems (application and/or db configs)
Rotate the tables – should take a minute or so
Restart applications
Currently the patch to pt-online-schema-change and plugin are available on bitbucket.
In MySQL Character encoding part 3 we will cover the gotchas in the process of fixing broken encoding, and what best practices to follow to get it right each time you set up a new server or create a new database.
via Planet MySQL