Watch How Dead Silent an Owl Flies Compared to Other Birds

When a pigeon flies, you can hear it sloppily slap its wings as it makes its way through the air. When a peregrine falcon flies, the flight is powerful and beautiful but you can still hear the movement. When a barn owl flies? Complete silence. It’s amazing to see. BBC Earth set up microphones along the flight path of the birds to let us hear the difference.

It’s because the barn owl has such giant wings attached to such a small body. That allows it to flap its wings gently and glide along. A pigeon’s wings are much smaller, so it has to flap furiously to stay in the air, while a falcon’s large wings are used powerfully to generate speed in order to catch prey.


via Gizmodo
Watch How Dead Silent an Owl Flies Compared to Other Birds

Columbus small-business resources: Where to go for help

Columbus scores poorly in a nationwide index of small business vitality, including the density of small employers and the rate of new business creation, according to the cover story in this week’s Columbus Business First. Business coaches say entrepreneurs often are isolated and need a more nurturing environment.
“All these resources are out there but people don’t necessarily know about them,” economist and solopreneur Bill LaFayette says.
Here’s a list of resources for entrepreneurs to turn…

via Columbus Business News – Local Columbus News | Business First of Columbus
Columbus small-business resources: Where to go for help

6 Price Comparison Apps Compared: Which Is the Best?


Comparison shopping is a crucial habit to develop if you want to save money on your purchases. And who doesn’t? Being able to quickly compare prices, whether from your couch or at the store, could help you save quite a bit of cash by making sure you always know where to find the best prices for the items you want. Using browser extensions or websites for this task is great, but if you’re on the go, you’re going to want a good app to get the same kind of information. I set out to find the best price comparison app…


via MakeUseOf
6 Price Comparison Apps Compared: Which Is the Best?

Shooting Mentos at Diet Coke (Video)

Okay, so by now I think we all know that when Mentos candies are introduced to Diet Coke, the results can be dramatic. So much so that the experimental shooters at Demolition Ranch received an overwhelming number of requests to load Mentos into a shotgun shell and shoot them at Diet Coke. Naturally, they did. […..]


via AllOutdoor.com
Shooting Mentos at Diet Coke (Video)

Writing SQL that works on PostgreSQL, MySQL and SQLite

I am one of those crazy people who attempts to write SQL that works on
SQLite, MySQL and PostgreSQL. First I should explain why:
This is all for my project sabre/dav. sabre/dav is a server for CalDAV,
CardDAV and WebDAV. One of the big design goals is that this project has to
be a library first, and should be easy to integrate into existing
applications.
To do this effectively, it’s important that it’s largely agnostic to the host
platform, and one of the best ways (in my opinion) to achieve that is to have
as few dependencies as possible. Adding dependencies such as Doctrine
is a great idea for applications or more opinionated frameworks, but for
sabre/dav lightweight is key, and I need people to be able to understand the
extension points fairly easily, without requiring them to familiarize
themselves with the details of the dependency graph.
So while you are completely free to add Doctrine or Propel
yourself, the core adapters (which function both as a default implementation
and as samples for writing your own) all depend only on an instance of
PDO.
The nice thing is that with ORMs such as Doctrine and Propel, you can get
access to the underlying PDO connection object and pass that in, thus reusing
your existing configuration.
For the longest time we only supported SQLite and MySQL, but I’m now working
on adding PostgreSQL support. So I figured I might as well write down my
notes.
But how feasible is it to write SQL that works everywhere?
Well, it turns out that this is actually not super easy. There is such a
thing as standard SQL, but all of these databases have many of their
own extensions and deviations.
The most important thing is that this will likely only work well for you if
you have a very simple schema and simple queries.
This blog post is not intended as a full guide; I’m just listing the
particular things I’ve run into. If you have your own, you can edit this blog
post on github, or leave a comment.
My approach
I try to keep my queries as simple as possible.
If I can rewrite a query to work on every database, that query has my
preference, even if it means it’s not the most optimal query. So I’m OK with
sacrificing some performance, within reason, if that means my queries can
stay generic.
I avoid stored procedures, triggers, functions and views. I’m really just
dealing with tables and indexes.
If there’s no possible way to do things in a generic way, I fall back on
something like this:
<?php
if ($pdo->getAttribute(PDO::ATTR_DRIVER_NAME) === 'pgsql') {
    $query = "…";
} else {
    $query = "…";
}
$stmt = $pdo->prepare($query);
?>
DDL
First there is the “Data Definition Language” and the “Data Manipulation
Language”: the former is used for queries starting with CREATE, ALTER,
DROP, etc., and the latter for SELECT, UPDATE, DELETE and INSERT.
There really is no sane way to generalize your CREATE TABLE queries, as the
types and syntax are vastly different.
So for those we have a set of .sql files for every server.
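To make that concrete, here is a hypothetical sketch of how the same table might be declared in each server’s .sql file (the blog table and the file names are invented for this example):

-- mysql.sql (hypothetical)
CREATE TABLE blog (
    id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    title VARCHAR(100)
) ENGINE=InnoDB;

-- pgsql.sql (hypothetical)
CREATE TABLE blog (
    id SERIAL PRIMARY KEY,
    title VARCHAR(100)
);

-- sqlite.sql (hypothetical)
CREATE TABLE blog (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    title TEXT
);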
Quoting
In MySQL and SQLite you can use either single quotes ' or double quotes " to
wrap a string.
In PostgreSQL, you always have to use single quotes '.
In MySQL and SQLite you use backticks for identifiers. PostgreSQL uses double
quotes. SQLite can also use double quotes here if the result is unambiguous,
but I would strongly suggest avoiding that.
This means that this MySQL query:
SELECT * FROM `foo` WHERE `a` = "b"
is equivalent to this PostgreSQL query:
SELECT * FROM "foo" WHERE "a" = 'b'
Luckily you can often just write this query, which works for all databases:
SELECT * FROM foo WHERE a = 'b'
But keep in mind that when you create your tables, using double quotes will
cause PostgreSQL to retain the upper/lower case characters. If you do not use
quotes, it will normalize everything to lower case.
For compatibility I would therefore suggest making sure that all your table
and column names are in lower case.
REPLACE INTO
REPLACE INTO is a useful extension that is supported by both SQLite and
MySQL. The syntax is identical to INSERT INTO, except that if it runs into a
key conflict, it will overwrite the existing record instead of inserting a new
one.
So REPLACE INTO basically either updates or inserts a record.
This works on both SQLite and MySQL, but not PostgreSQL. As of version 9.5,
however, PostgreSQL has a feature that lets you achieve the same effect.
This statement from MySQL or SQLite:
REPLACE INTO blog (uuid, title) VALUES (:uuid, :title)
then might become something like this in PostgreSQL:
INSERT INTO blog (uuid, title) VALUES (:uuid, :title)
ON CONFLICT (uuid) DO UPDATE SET title = :title
So the major difference here is that with PostgreSQL we specifically have to
tell it which key conflict we’re handling (uuid) and what to do in that case
(UPDATE).
In addition to REPLACE INTO, MySQL also has this syntax to do the same thing:
INSERT INTO blog (uuid, title) VALUES (:uuid, :title)
ON DUPLICATE KEY UPDATE title = :title
But as far as I know SQLite does not have a direct equivalent.
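Tying this back to the per-driver fallback shown earlier, here is a minimal sketch of how the two upsert variants might be selected at runtime (reusing the blog example; EXCLUDED.title refers to the row PostgreSQL would have inserted, which also avoids repeating the :title placeholder):

<?php
if ($pdo->getAttribute(PDO::ATTR_DRIVER_NAME) === 'pgsql') {
    // PostgreSQL 9.5+ upsert
    $query = 'INSERT INTO blog (uuid, title) VALUES (:uuid, :title)
              ON CONFLICT (uuid) DO UPDATE SET title = EXCLUDED.title';
} else {
    // MySQL and SQLite
    $query = 'REPLACE INTO blog (uuid, title) VALUES (:uuid, :title)';
}
$stmt = $pdo->prepare($query);
$stmt->execute(['uuid' => $uuid, 'title' => $title]);
?>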
BLOB
SQLite and MySQL have a BLOB type. This type is used for storing data as-is.
Whatever (binary) string you store, you will get back unchanged; no conversion
is attempted for different character sets.
PostgreSQL has two types that have a similar purpose: Large Objects and
the bytea type.
The best way to describe large objects is that they are stored ‘separate’ from
the table; instead of inserting the object itself, you store a reference to
the object (in the form of an id).
bytea is more similar to BLOB, so I opted to use that. But there are some
differences.
First, if you do a select such as this:
<?php
$stmt = $pdo->prepare('SELECT myblob FROM binaries WHERE id = :id');
$stmt->execute(['id' => $id]);
echo $stmt->fetchColumn();
?>
On MySQL and SQLite this will just work. The myblob field is represented as
a string.
On PostgreSQL, bytea is represented as a PHP stream, so you might have to
rewrite that last statement as:
<?php
echo stream_get_contents($stmt->fetchColumn());
?>
Or:
<?php
stream_copy_to_stream($stmt->fetchColumn(), STDOUT);
?>
Luckily, in sabre/dav we pretty much support streams wherever we support
strings, so we were already agnostic to this, but some unittests had to be
adjusted.
Inserting bytea is also a bit different. I’m not a fan of using
PDOStatement::bindValue and PDOStatement::bindParam; instead
I prefer to just send all my bound parameters at once using execute:
<?php
$stmt = $pdo->prepare('INSERT INTO binaries (myblob) VALUES (:myblob)');
$stmt->execute([
    'myblob' => $blob,
]);
?>
While that works for PostgreSQL for some strings, it will throw errors
when you give it data that’s invalid in the current character set. It’s also
dangerous, as PostgreSQL might try to transcode the data into a different
character set.
If you truly need to store binary data (like I do) you must do this:
<?php
$stmt = $pdo->prepare('INSERT INTO binaries (myblob) VALUES (:myblob)');
$stmt->bindParam('myblob', $blob, PDO::PARAM_LOB);
$stmt->execute();
?>
Luckily this also just works in SQLite and MySQL.
String concatenation
Standard SQL has a string concatenation operator. It works like this:
SELECT 'foo' || 'bar'
// Output: foobar
This works in PostgreSQL and SQLite. MySQL has a function for this instead:
SELECT CONCAT('foo', 'bar')
PostgreSQL also has this function, but SQLite does not. You can enable
standard SQL concatenation in MySQL with the PIPES_AS_CONCAT sql_mode:
SET SESSION sql_mode = 'PIPES_AS_CONCAT'
I’m not sure why this isn’t the default.
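If you rely on it, a sketch of enabling it right after connecting (this assumes $pdo is the MySQL connection; note that this replaces any sql_mode that was already set for the session):

<?php
if ($pdo->getAttribute(PDO::ATTR_DRIVER_NAME) === 'mysql') {
    // Make || behave as string concatenation, like PostgreSQL and SQLite.
    $pdo->exec("SET SESSION sql_mode = 'PIPES_AS_CONCAT'");
}
?>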
Last insert ID
The PDO object has a lastInsertId() function. For SQLite and MySQL you can
just call it as such:
<?php
$id = $pdo->lastInsertId();
?>
However, PostgreSQL requires an explicit sequence identifier. By default this
follows the format tablename_idfield_seq, so we might specify it like this:
<?php
$id = $pdo->lastInsertId('articles_id_seq');
?>
Luckily the parameter gets ignored by SQLite and MySQL, so we can just specify
it all the time.
Type casting
If you have an INT field (or similar) and you access it in this way:
<?php
$result = $pdo->query('SELECT id FROM articles');
$id = $result->fetchColumn();
?>
With PostgreSQL, $id will actually be a PHP integer. If you
use MySQL or SQLite, everything gets cast to a PHP string, which is
unfortunate.
The sane thing to do is to cast everything to int after the fact, so you can
correctly use PHP 7 strict typing with these in the future.
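For example, a trivial sketch using the query above:

<?php
// Cast explicitly so $id is an int on every driver.
$id = (int) $result->fetchColumn();
?>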
Testing
I unittest my database code. Yep, you read that right! I’m one of those people.
It’s been tremendously useful.
Since adding PostgreSQL I was able to come up with a nice structure. Every
unittest that does something with PDO now generally looks like this:
<?php
abstract class PDOTest extends \PHPUnit_Framework_TestCase {

    abstract function getPDO();

    /** all the unittests go here **/

}
?>
Then I create one subclass each for PostgreSQL, SQLite and MySQL that only
implements the getPDO() function.
This way all my tests are repeated for each driver.
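Such a subclass might look like this (a sketch; the class name and the in-memory DSN are my assumptions):

<?php
class SqlitePDOTest extends PDOTest {

    function getPDO() {
        return new \PDO('sqlite::memory:');
    }

}
?>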
I’ve also rigged up Travis CI to have a MySQL and a PostgreSQL database
server running, so everything automatically gets checked every time.
If a developer is testing locally, we detect whether a database server is
running, and automatically skip the tests if it is not. In most cases this
means only the SQLite tests get run, which is fine.
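A sketch of how that detection might look in the MySQL variant (the DSN and credentials are made up):

<?php
class MysqlPDOTest extends PDOTest {

    function getPDO() {
        try {
            return new \PDO('mysql:host=127.0.0.1;dbname=sabredav_test', 'user', 'password');
        } catch (\PDOException $e) {
            // No local MySQL server running: skip instead of fail.
            $this->markTestSkipped('MySQL server not available');
        }
    }

}
?>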
Conclusions
I created a monster.
PostgreSQL is by far the sanest database, and I would recommend that everyone
move from MySQL to it.
via Planet MySQL
Writing SQL that works on PostgreSQL, MySQL and SQLite

Upgrading to MySQL 5.7, focusing on temporal types

In this post, we’ll discuss how MySQL 5.7 handles the old temporal types during an upgrade.
MySQL changed the temporal types in MySQL 5.6.4, introducing a new feature: microseconds resolution in the TIME, TIMESTAMP and DATETIME types. These types can now be defined with microsecond granularity. Obviously, this means format changes, but why is this important?
Are they converted automatically to the new format?
If we had tables in MySQL 5.5 that used TIME, TIMESTAMP or DATETIME, are these fields going to be converted to the new format when upgrading to 5.6? The answer is “NO.” Even if we run mysql_upgrade, it does not warn us about the old format. If we check the MySQL error log, we cannot find anything regarding this. But newly created tables are going to use the new format, so we will end up with two different formats of temporal fields.
How can we find these tables?
The following query gives us a summary of the different table formats:

SELECT CASE isc.mtype
         WHEN '6' THEN 'OLD'
         WHEN '3' THEN 'NEW'
       END FORMAT,
       count(*) TOTAL
FROM information_schema.tables AS t
INNER JOIN information_schema.columns AS c ON c.table_schema = t.table_schema
  AND c.table_name = t.table_name
LEFT OUTER JOIN information_schema.innodb_sys_tables AS ist ON ist.name = concat(t.table_schema,'/',t.table_name)
LEFT OUTER JOIN information_schema.innodb_sys_columns AS isc ON isc.table_id = ist.table_id
  AND isc.name = c.column_name
WHERE c.column_type IN ('time','timestamp','datetime')
  AND t.table_schema NOT IN ('mysql','information_schema','performance_schema')
  AND t.table_type = 'base table'
  AND (t.engine = 'innodb')
GROUP BY isc.mtype;

+--------+-------+
| FORMAT | TOTAL |
+--------+-------+
| NEW    |     1 |
| OLD    |     9 |
+--------+-------+

Or we can enable show_old_temporals, which will highlight the old formats in SHOW CREATE TABLE output:

CREATE TABLE `mytbl` (
  `ts` timestamp /* 5.5 binary format */ NOT NULL DEFAULT CURRENT_TIMESTAMP,
  `dt` datetime /* 5.5 binary format */ DEFAULT NULL,
  `t` time /* 5.5 binary format */ DEFAULT NULL
) DEFAULT CHARSET=latin1

MySQL can handle both formats, but with the old format you cannot use microseconds, and the old DATETIME takes more space on disk.
Can I upgrade to MySQL 5.7?
Of course you can! But when mysql_upgrade runs, it is going to convert the old fields to the new format by default. This basically means an ALTER TABLE for every single table that contains one of the three types.
Depending on the number of tables, or the size of the tables, this could take hours – so you may need to do some planning:
test.t1
error : Table rebuild required. Please do "ALTER TABLE `t1` FORCE" or dump/reload to fix it!
test.t2
error : Table rebuild required. Please do "ALTER TABLE `t2` FORCE" or dump/reload to fix it!
test.t3
error : Table rebuild required. Please do "ALTER TABLE `t3` FORCE" or dump/reload to fix it!
Repairing tables
mysql.proxies_priv OK
`test`.`t1`
Running : ALTER TABLE `test`.`t1` FORCE
status : OK
`test`.`t2`
Running : ALTER TABLE `test`.`t2` FORCE
status : OK
`test`.`t3`
Running : ALTER TABLE `test`.`t3` FORCE
status : OK
Upgrade process completed successfully.
Checking if update is needed.
Can we avoid this at upgrade?
We can run the ALTER TABLEs or use pt-online-schema-change (to avoid locking) before an upgrade, but even without these preparations we can still avoid incompatibility issues.
My colleague Daniel Guzman Burgos pointed out that mysql_upgrade has an option called upgrade-system-tables. This will only upgrade the system tables, and nothing else.
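So the upgrade step itself might look like this (a sketch; connection options omitted):

mysql_upgrade --upgrade-system-tables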
Can we still write these fields?
The following query returns the schema and table names that still use the old formats:

SELECT CASE isc.mtype
         WHEN '6' THEN 'OLD'
         WHEN '3' THEN 'NEW'
       END FORMAT,
       t.table_schema,
       t.table_name
FROM information_schema.tables AS t
INNER JOIN information_schema.columns AS c ON c.table_schema = t.table_schema
  AND c.table_name = t.table_name
LEFT OUTER JOIN information_schema.innodb_sys_tables AS ist ON ist.name = concat(t.table_schema,'/',t.table_name)
LEFT OUTER JOIN information_schema.innodb_sys_columns AS isc ON isc.table_id = ist.table_id
  AND isc.name = c.column_name
WHERE c.column_type IN ('time','timestamp','datetime')
  AND t.table_schema NOT IN ('mysql','information_schema','performance_schema')
  AND t.table_type = 'base table'
  AND (t.engine = 'innodb');

+--------+--------------+------------+
| FORMAT | table_schema | table_name |
+--------+--------------+------------+
| OLD    | test         | t          |
| OLD    | test         | t          |
| OLD    | test         | t          |
| NEW    | sys          | sys_config |
+--------+--------------+------------+
4 rows in set (0.03 sec)

mysql> select version();
+-----------+
| version() |
+-----------+
| 5.7.11-4  |
+-----------+
1 row in set (0.00 sec)

As we can see, we’re using 5.7 and table “test.t” still has the old format.
The schema:

CREATE TABLE `t` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `t1` time DEFAULT NULL,
  `t2` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  `t3` datetime DEFAULT NULL,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=6 DEFAULT CHARSET=latin1

mysql> select * from t;
+----+----------+---------------------+---------------------+
| id | t1       | t2                  | t3                  |
+----+----------+---------------------+---------------------+
|  1 | 20:28:00 | 2016-04-09 01:41:58 | 2016-04-23 22:22:01 |
|  2 | 20:28:00 | 2016-04-09 01:41:59 | 2016-04-23 22:22:02 |
|  3 | 20:28:00 | 2016-04-09 01:42:01 | 2016-04-23 22:22:03 |
|  4 | 20:28:00 | 2016-04-09 01:42:03 | 2016-04-23 22:22:04 |
|  5 | 20:28:00 | 2016-04-09 01:42:08 | 2016-04-23 22:22:05 |
+----+----------+---------------------+---------------------+

Let’s try to insert a new row:

mysql> insert into `t` (t1,t3) values ('20:28','2016:04:23 22:22:06');
Query OK, 1 row affected (0.01 sec)

mysql> select * from t;
+----+----------+---------------------+---------------------+
| id | t1       | t2                  | t3                  |
+----+----------+---------------------+---------------------+
|  1 | 20:28:00 | 2016-04-09 01:41:58 | 2016-04-23 22:22:01 |
|  2 | 20:28:00 | 2016-04-09 01:41:59 | 2016-04-23 22:22:02 |
|  3 | 20:28:00 | 2016-04-09 01:42:01 | 2016-04-23 22:22:03 |
|  4 | 20:28:00 | 2016-04-09 01:42:03 | 2016-04-23 22:22:04 |
|  5 | 20:28:00 | 2016-04-09 01:42:08 | 2016-04-23 22:22:05 |
|  6 | 20:28:00 | 2016-04-09 01:56:38 | 2016-04-23 22:22:06 |
+----+----------+---------------------+---------------------+
6 rows in set (0.00 sec)

It was inserted without a problem, and we can’t see any related info/warnings in the error log.
Does the Replication work?
In many scenarios, when you are upgrading a replica set, the slaves are upgraded first. But will replication work? The short answer is “yes.” I configured row-based replication between MySQL 5.6 and 5.7. The 5.6 server was the master, and it had all the temporal types in the old format. On 5.7, I had both new and old formats.
I replicated from old format to old format, and from old format to new format, and both are working.
Conclusion
Before upgrading to MySQL 5.7, tables should be altered to use the new format. Even if that isn’t done, the upgrade is still possible without altering all the tables – the drawbacks are that you cannot use microseconds and that the old format takes more space on disk. You can, however, change the format later using ALTER TABLE or pt-online-schema-change.
 
via Planet MySQL
Upgrading to MySQL 5.7, focusing on temporal types

Watch the replay: Become a MongoDB DBA (if you’re really a MySQL user)

Thanks to everyone who participated in this week’s webinar on ‘Become a MongoDB DBA’! Our colleague Art van Scheppingen presented from the perspective of a MySQL DBA who might be called to manage a MongoDB database, which included a live demo on how to carry out the relevant DBA tasks using ClusterControl.
The replay and the slides are now available online in case you missed Tuesday’s live session or simply would like to see it again in your own time.
Watch the replay Read the slides
This was the first session of our new webinar series: ‘How to Become a MongoDB DBA’ to answer the question: ‘what does a MongoDB DBA do’?
In this initial webinar, we went beyond the deployment phase and demonstrated how you can automate tasks, monitor a cluster and manage MongoDB; whilst also automating and managing your MySQL and/or PostgreSQL installations. Watch out for invitations for the next session in this series!
This Session’s Agenda
Introduction to becoming a MongoDB DBA
Installing & configuring MongoDB
What to monitor and how
How to perform backups
Live Demo
Speaker
Art van Scheppingen is a Senior Support Engineer at Severalnines. He’s a pragmatic MySQL and database expert with over 15 years’ experience in web development. He previously worked at Spil Games as Head of Database Engineering, where he kept a broad vision of the whole database environment: from MySQL to Couchbase, Vertica to Hadoop and from Sphinx Search to SOLR. He regularly presents his work and projects at various conferences (Percona Live, FOSDEM) and related meetups.
This series is based upon the experience we have using MongoDB and implementing it for our database infrastructure management solution, ClusterControl. For more details, read through our ‘Become a ClusterControl DBA’ blog series.
via Planet MySQL
Watch the replay: Become a MongoDB DBA (if you’re really a MySQL user)

Watch Spider-Man Chase Down the Winter Soldier in New Captain America: Civil War Footage

After a long wait to see Spidey in Captain America: Civil War, we’ve finally got a better glimpse at just how the character is going to act now that he’s part of the Marvel Cinematic Universe. Maybe Bucky wasn’t the person to snark at.

Okay, so it might not be snark. Spider-Man could be totally honest when he says, “You have a metal arm? That is awesome dude.” In fact, he sounds completely genuine and exactly like a teenager. Who has spider powers. And totally, after that sentence, deserves to be punched into a wall by a very angry assassin.

The best quality TV spot with this new footage also happens to be one with subtitles. Sorry about that, but Spider-Man appears at the :18 mark.

via Gizmodo
Watch Spider-Man Chase Down the Winter Soldier in New Captain America: Civil War Footage

What happens when you create a MySQL Document Store

The MySQL Document Store introduced with version 5.7.12 allows developers to create document collections without having to know Structured Query Language. The new feature also comes with a new set of terminology. So let us create a collection and see what is in it (basically creating a table, for us SQL speakin’ old timers).

Start the mysqlsh program, connect to the server, change to the world-x schema (database), switch to Python mode, and create a collection (table).

What did the server do for us? Switching to SQL mode, we can use DESCRIBE to see what the server has done for us. We have a two-column table. The first column is named doc and is used to store JSON. There is also a column named _id, and please notice this column is notated as STORED GENERATED. A generated column extracts values from a JSON document and materializes that information into a new column that can then be indexed. But what did the system extract for us to create this new column? Let’s use SHOW CREATE TABLE to find out.

mysql-sql> SHOW CREATE TABLE foobar;

CREATE TABLE `foobar` (
  `doc` json DEFAULT NULL,
  `_id` varchar(32) GENERATED ALWAYS AS (json_unquote(json_extract(`doc`,'$._id'))) STORED NOT NULL,
  UNIQUE KEY `_id` (`_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4

1 row in set (0.00 sec)

So the 5.7.12 document store is creating an index for us on a field named _id in our JSON document. Hmm, what if I do not have an _id field in my data? I added two records ("Name" : "Dave" and "Name" : "Jack") into my new collection and then took a peek.

mysql> select * from foobar;
+-------------------------------------------------------------+----------------------------------+
| doc                                                         | _id                              |
+-------------------------------------------------------------+----------------------------------+
| {"_id": "819a19383d9fd111901100059a3c7a00", "Name": "Dave"} | 819a19383d9fd111901100059a3c7a00 |
| {"_id": "d639274c3d9fd111901100059a3c7a00", "Name": "Jack"} | d639274c3d9fd111901100059a3c7a00 |
+-------------------------------------------------------------+----------------------------------+
2 rows in set (0.00 sec)

But what if I do have an _id of my own? The system picked up the _id I supplied for the Dexter record. Remember that the index on the _id field is marked UNIQUE, which means you cannot reuse that number.

So we know the document store creates a unique identification number (and we can also supply our own). Update: the client generates the identification number, not the server, due to possible conflicts in future sharding projects.
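For reference, a rough sketch of those shell steps in mysqlsh’s Python mode (prompts and output omitted; the 'my-own-id' value for the Dexter record is an assumption):

db.create_collection('foobar')              # creates the doc/_id table shown above
db.foobar.add({'Name': 'Dave'}).execute()   # _id is generated client-side
db.foobar.add({'Name': 'Jack'}).execute()
db.foobar.add({'_id': 'my-own-id', 'Name': 'Dexter'}).execute()  # supplying our own _id
db.foobar.find().execute()                  # peek at the documents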
via Planet MySQL
What happens when you create a MySQL Document Store