How to Make Your MySQL or MariaDB Database Highly Available on AWS and Google Cloud

Running databases on cloud infrastructure is getting increasingly popular these days. Although a cloud VM may not be as reliable as an enterprise-grade server, the main cloud providers offer a variety of tools to increase service availability. In this blog post, we’ll show you how to architect your MySQL or MariaDB database for high availability in the cloud. We will be looking specifically at Amazon Web Services and Google Cloud Platform, but most of the tips can be used with other cloud providers too.

Both AWS and Google offer database services on their clouds, and these services can be configured for high availability. It is possible to have copies in different availability zones (or zones in GCP), in order to increase your chances of surviving a partial failure of services within a region. Although a hosted service is a very convenient way of running a database, note that the service is designed to behave in a specific way, which may or may not fit your requirements. For instance, AWS RDS for MySQL has a pretty limited list of options when it comes to failover handling. Multi-AZ deployments come with a 60-120 second failover time as per the documentation. In fact, given that the “shadow” MySQL instance has to start from a “corrupted” dataset, this may take even longer, as more work could be required to apply or roll back transactions from the InnoDB redo logs. There is an option to promote a slave to become a master, but it is not really feasible, as you cannot reslave the existing slaves off the new master. A managed service is also intrinsically more complex, which makes performance problems harder to trace. You can find more insights on RDS for MySQL and its limitations in this blog post.

On the other hand, if you decide to manage the databases, you are in a different world of possibilities. A number of things that you can do on bare metal are also possible on EC2 or Compute Engine instances. You do not have the overhead of managing the underlying hardware, and yet retain control on how to architect the system. There are two main options when designing for MySQL availability – MySQL replication and Galera Cluster. Let’s discuss them.

MySQL Replication

MySQL replication is a common way of scaling MySQL with multiple copies of the data. Asynchronous or semi-synchronous, it allows you to propagate changes executed on a single writer, the master, to replicas/slaves – each of which contains the full data set and can be promoted to become the new master. Replication can also be used for scaling reads, by directing read traffic to replicas and offloading the master in this way. The main advantage of replication is its ease of use – it is so widely known and popular (and easy to configure) that there are numerous resources and tools to help you manage it. Our own ClusterControl is one of them – you can use it to easily deploy a MySQL replication setup with integrated load balancers, manage topology changes, failover/recovery, and so on.

One major issue with MySQL replication is that it is not designed to handle network splits or a master failure. If a master goes down, you have to promote one of the replicas. This is a manual process, although it can be automated with external tools (e.g. ClusterControl). There is also no quorum mechanism, and there is no support for fencing failed master instances in MySQL replication. Unfortunately, this may lead to serious issues in distributed environments – if you promote a new master and the old one comes back online, you may end up writing to two nodes, creating data drift and causing serious data consistency issues.

Later in this post, we’ll look at some examples that show you how to detect network splits and implement STONITH or some other fencing mechanism for your MySQL replication setup.

Galera Cluster

We saw in the previous section that MySQL replication lacks fencing and quorum support – this is where Galera Cluster shines. It has quorum support built in, and it also has a fencing mechanism which prevents partitioned nodes from accepting writes. This makes Galera Cluster more suitable than replication for multi-datacenter setups. Galera Cluster also supports multiple writers, and is able to resolve write conflicts. You are therefore not limited to a single writer in a multi-datacenter setup; it is possible to have a writer in every datacenter, which reduces the latency between your application and database tier. It does not speed up writes, as every write still has to be sent to every Galera node for certification, but it is still easier than sending writes from all application servers across the WAN to one single remote master.

As good as Galera is, it is not always the best choice for all workloads. Galera is not a drop-in replacement for MySQL/InnoDB. It shares common features with “normal” MySQL – it uses InnoDB as the storage engine, and it contains the entire dataset on every node, which makes JOINs feasible. Still, some of the performance characteristics of Galera (like the performance of writes, which is affected by network latency) differ from what you’d expect from replication setups. Maintenance looks different too: schema change handling works slightly differently. Some schema designs are not optimal: if you have hotspots in your tables, like frequently updated counters, this may lead to performance issues. There is also a difference in best practices related to batch processing – instead of executing queries in large transactions, you want your transactions to be small.

Proxy tier

It is very hard and cumbersome to build a highly available setup without proxies. Sure, you can write code in your application to keep track of database instances, blacklist unhealthy ones, keep track of the writeable master(s), and so on. But this is much more complex than just sending traffic to a single endpoint – which is where a proxy comes in. ClusterControl allows you to deploy ProxySQL, HAProxy and MaxScale. We will give some examples using ProxySQL, as it gives us good flexibility in controlling database traffic.

ProxySQL can be deployed in a couple of ways. For starters, it can be deployed on separate hosts, with Keepalived providing a Virtual IP. The Virtual IP is moved should one of the ProxySQL instances fail. In the cloud, this setup can be problematic, as adding an IP to the interface usually is not enough: you would have to modify the Keepalived configuration and scripts to work with an elastic IP (or static IP, or whatever your cloud provider calls it), and then use the cloud API or CLI to relocate the address to another host.

For this reason, we’d suggest collocating ProxySQL with the application. Each application server would be configured to connect to the local ProxySQL using Unix sockets. As ProxySQL uses an angel process, ProxySQL crashes can be detected and the proxy restarted within a second. In case of a hardware crash, that particular application server goes down along with ProxySQL, while the remaining application servers can still access their respective local ProxySQL instances.

This particular setup has additional benefits. Security: ProxySQL, as of version 1.4.8, does not support client-side SSL; it can only set up an SSL connection between ProxySQL and the backend. Collocating ProxySQL on the application host and using Unix sockets is a good workaround. ProxySQL also has the ability to cache queries, and if you are going to use this feature, it makes sense to keep it as close to the application as possible to reduce latency. We would suggest using this pattern to deploy ProxySQL.
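As a sketch of what that looks like in practice, backends are registered through ProxySQL’s admin interface and then activated and persisted. All hostnames and hostgroup IDs below are made-up placeholders; adjust them to your topology:

```shell
# Connect to the local ProxySQL admin interface (default port 6032,
# default admin/admin credentials) and register two example backends.
mysql -u admin -padmin -h 127.0.0.1 -P 6032 <<'SQL'
INSERT INTO mysql_servers (hostgroup_id, hostname, port)
VALUES (0, 'db-master.example.com', 3306),
       (1, 'db-replica1.example.com', 3306);
LOAD MYSQL SERVERS TO RUNTIME;  -- activate the change
SAVE MYSQL SERVERS TO DISK;     -- persist it across restarts
SQL
```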

Typical setups

Let’s take a look at examples of highly available setups.

Single datacenter, MySQL replication

The assumption here is that there are two separate zones within the datacenter. Each zone has redundant and separate power, networking and connectivity to reduce the likelihood of two zones failing simultaneously. It is possible to set up a replication topology spanning both zones.

Here we use ClusterControl to manage the failover. To solve the split-brain scenario between availability zones, we collocate the active ClusterControl with the master. We also blacklist slaves in the other availability zone to make sure that automated failover won’t result in two masters being available.

Multiple datacenters, MySQL replication

In this example we use three datacenters and Orchestrator/Raft for quorum calculation. You might have to write your own scripts to implement STONITH if the master is in the partitioned segment of the infrastructure. ClusterControl is used for node recovery and management functions.

Multiple datacenters, Galera Cluster

In this case we use three datacenters with a Galera arbitrator in the third one – this makes it possible to handle a whole-datacenter failure, and it reduces the risk of network partitioning, as the third datacenter can be used as a relay.

For further reading, take a look at the “How to Design Highly Available Open Source Database Environments” whitepaper and watch the webinar replay “Designing Open Source Databases for High Availability”.

via Planet MySQL
How to Make Your MySQL or MariaDB Database Highly Available on AWS and Google Cloud

Migrating Database Charsets to utf8mb4: A Story from the Trenches

In this blog post, we’ll look at options for migrating database charsets to utf8mb4.

Migrating charsets is, in my opinion, one of the most tedious tasks in a DBA’s life. There are so many things involved that can screw up our data that making it work is always hard. Sometimes what seems like a trivial task can become a nightmare very easily, and keep us working for longer than expected.

I’ve recently worked on a case that challenged me with lots of tests due to some existing schema designs that made InnoDB suffer. I’ve decided to write this post to put together a definitive guide for performing a charset conversion with minimal downtime and pain.

  • First disclosure: I can’t emphasize enough that you need to always back up your data. If something goes wrong, you can always roll things back, as long as you keep a healthy set of backups.
  • Second disclosure: A backup can’t be considered a good backup until you test it, so I can’t emphasize enough that running regular backups and also performing regular restore tests is a must-do task for staying on the safe side.
  • Third and last disclosure: I’m not pretending to present the best or only way to do this exercise. This is the way I consider easiest and least painful to perform a charset conversion with minimal downtime.

My approach involves at least one slave for failover and logical/physical backup operations to make sure that data is loaded properly using the right charset.

In this case, we are moving from latin1 (the default until MySQL 8.0.0) to utf8mb4 (the new default from 8.0.1). In this post, Lefred refers to this change and some safety checks for upgrading. For our change, there is an important thing to consider: the latin1 charset stores one byte per character, while utf8mb4 stores up to four bytes per character. This change definitely impacts disk usage, but it also makes us hit some limits that I describe later in the plan.

So let’s put our hands into action. First, let’s create a slave using a fresh (non-locking) backup. Remember that these operations are designed to minimize downtime and reduce any potential impact on our production server.

If you already have a slave that can act as a master replacement then you can skip this section. In our source server, configure binlog_format and flush logs to start with fresh binary logs:
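For example (a sketch; ROW format is an assumption here, so use whatever binlog_format fits your topology):

```shell
# Rotate to a fresh binary log and note the current coordinates
mysql -e "SET GLOBAL binlog_format = 'ROW'"
mysql -e "FLUSH LOGS"
mysql -e "SHOW MASTER STATUS\G"
```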

Start a streaming backup using Percona Xtrabackup through netcat in the destination server:
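A minimal sketch of the destination side, with netcat listening on an arbitrary port (9999 here) and unpacking the incoming xbstream into a staging directory:

```shell
# Listen for the stream and unpack it as it arrives
# (some netcat variants use `nc -l 9999` without -p)
nc -l -p 9999 | xbstream -x -C /data/backup/
```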

and in our source server:
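A sketch of the source side, streaming the backup to the destination host (hostname and port are placeholders):

```shell
xtrabackup --backup --stream=xbstream --target-dir=/tmp | nc dest-host 9999
```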

Once the backup is done, untar and restore the backup. Then set up the slave:
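A sketch of those steps, with the replication coordinates taken from the xtrabackup_binlog_info file that ships inside the backup; hostnames and credentials are placeholders:

```shell
# Apply the redo log so the backup is consistent
xtrabackup --prepare --target-dir=/data/backup/

# After moving the files into the datadir and starting mysqld,
# point the new slave at the source
mysql <<'SQL'
CHANGE MASTER TO
  MASTER_HOST='source-host',
  MASTER_USER='repl',
  MASTER_PASSWORD='repl_password',
  MASTER_LOG_FILE='mysql-bin.000002',
  MASTER_LOG_POS=4;
START SLAVE;
SQL
```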

Now that we have the slave ready, we prepare our dataset by running two mysqldump processes so we have data and schemas in separate files. You can also run this operation using MyDumper or mysqlpump, but I will keep it easy:
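A sketch of the two dumps (the exact mysqldump options are assumptions; adapt them to your schema):

```shell
# Databases to dump: everything except the system schemas
DBLIST=$(mysql -NBe "SHOW DATABASES" \
         | egrep -v "^(mysql|performance_schema|information_schema)$")

# Schema definitions only
mysqldump --no-data --routines --triggers --databases $DBLIST > schema.sql
# Data only, without CREATE TABLE statements
mysqldump --no-create-info --databases $DBLIST > data.sql
```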

Write down this output, as it may be needed later:

Notice that I’m passing a command as an argument to --databases to dump all databases but mysql, performance_schema and information_schema (a hack stolen from this post, with credit to Ronald Bradford). It is very important to keep replication stopped, as we will resume it only after fully converting our charset.

Now we have to convert our data to utf8mb4. This is easy, as we just need to edit the schema.sql file by running a few commands:
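A sketch of the edit, shown here against a toy schema.sql standing in for the real dump; adjust the collation pattern to whatever your schema actually contains:

```shell
# Toy schema dump standing in for the real schema.sql (illustration only)
cat > schema.sql <<'EOF'
CREATE TABLE `t` (
  `c` varchar(255) CHARACTER SET latin1 DEFAULT NULL
) ENGINE=InnoDB DEFAULT CHARSET=latin1 COLLATE=latin1_swedish_ci;
EOF

# Rewrite every latin1 charset/collation reference to utf8mb4
sed -i -e 's/CHARSET=latin1/CHARSET=utf8mb4/g' \
       -e 's/CHARACTER SET latin1/CHARACTER SET utf8mb4/g' \
       -e 's/COLLATE=latin1_swedish_ci/COLLATE=utf8mb4_general_ci/g' \
       schema.sql
```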

Can this be a one-liner? Yes, but I’m not a good basher. 🙂

Now we are ready to restore our data using the new encoding:
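A sketch of the restore, assuming the dumps produced earlier are named schema.sql and data.sql (note that on MySQL 5.6, innodb_large_prefix also requires the Barracuda file format and a DYNAMIC or COMPRESSED row format):

```shell
# Allow index prefixes up to 3072 bytes before loading the schema
mysql -e "SET GLOBAL innodb_large_prefix = ON"

# Load the schema first, then the data, forcing the utf8mb4 connection charset
mysql --default-character-set=utf8mb4 < schema.sql
mysql --default-character-set=utf8mb4 < data.sql
```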

Notice I’ve enabled the variable innodb_large_prefix. This is important because InnoDB limits index prefixes to 767 bytes by default. If you have an index based on a varchar(255) data type, you will get an error, because with the new charset the prefix can require up to 1020 bytes (255 characters × 4 bytes), exceeding the limit. To avoid issues during the data load, we enable this variable to extend the limit to 3072 bytes.

Finally, let’s configure our server and restart it to make sure to set new defaults properly. In the my.cnf file, add:
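A sketch of the relevant my.cnf fragment (the collation chosen here is an assumption; pick the utf8mb4 collation that fits your application):

```ini
[mysqld]
character_set_server  = utf8mb4
collation_server      = utf8mb4_general_ci
innodb_large_prefix   = ON
innodb_file_format    = Barracuda

[client]
default-character-set = utf8mb4
```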

Let’s resume replication after the restart, and make sure everything is ok:
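For example (a sketch):

```shell
mysql -e "START SLAVE"
mysql -e "SHOW SLAVE STATUS\G" | egrep 'Slave_(IO|SQL)_Running|Seconds_Behind_Master'
```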

Ok, at this point we should be fine and our data should already be converted to utf8mb4. So far so good. The next step is to fail over applications to use the new server, and rebuild the old server from a fresh backup taken with xtrabackup, as described above.

There are a few things we need to consider now before converting this slave into a master:

  1. Make sure you have properly configured your applications. Charset and collation values can be set at session level, so if your connection driver sets another charset, you may end up mixing things in your data.
  2. Make sure the new slave is powerful enough to handle traffic from the master.
  3. Test everything before failing over production applications. Going from Latin1 to utf8mb4 should be straightforward, as utf8mb4 includes all the characters in Latin1. But let’s face it, things can go wrong and we are trying to avoid surprises.
  4. Last but not least, all procedures were done on a relatively small/medium-sized dataset (around 600G). This conversion (done via logical backups) is more difficult when talking about big databases (i.e., on the order of TBs). In these cases, the procedure helps but might not be good enough due to time restrictions (imagine loading a 1TB table from a logical dump; it takes ages). If you happen to face such a conversion, here is a short, high-level plan:
    • Convert only the smaller tables in the slave (i.e., those smaller than 500MB), following the same procedure. Make sure to exclude the big tables from the dump using the --ignore-table option (once per table) in mysqldump.
    • Convert bigger tables via alter table, as follows:
    • Once everything is finished, you can resume replication. Notice that you can do the dump/conversion/restore in parallel with altering the bigger tables, which should reduce the total conversion time.

It’s important to understand why we need the double conversion from latin1 to varbinary to utf8mb4. This post from Marco Tusa largely explains this.
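As an illustration, the per-column version of that double conversion looks like this (table and column names are made up, and the sizes must match your columns). Going through VARBINARY keeps the stored bytes untouched while the declared charset changes, which is what you want when UTF-8 data has been living inside latin1 columns:

```shell
mysql mydb <<'SQL'
ALTER TABLE big_table MODIFY comment_col VARBINARY(255);
ALTER TABLE big_table MODIFY comment_col VARCHAR(255) CHARACTER SET utf8mb4;
SQL
```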

Conclusion

I wrote this guide from my experience working on these types of projects. If you Google a bit, you’ll find a lot of resources for this kind of work, along with different solutions. What I’ve tried to present here is a guide to help you deal with these projects. Normally, we have to perform these changes on existing datasets that are sometimes big enough to prevent any work getting done via ALTER TABLE commands. Hopefully, you find this useful!

Francisco Bordenave

Francisco has been working with MySQL since 2006. He has worked for several companies, in industries ranging from health care to gaming. Over the last 6 years he has been working as a remote DBA and database consultant, which has helped him acquire a lot of technical and multicultural skills.
He lives in La Plata, Argentina. During his free time he likes to play football, spend time with family and friends, and cook.

via Planet MySQL
Migrating Database Charsets to utf8mb4: A Story from the Trenches

Starting MongoDB Database Software


In this blog post, we will cover how to start MongoDB database software in the three most used platforms: Windows/Linux/MacOS.

If you have just started with NoSQL databases, you might wonder how to evaluate if MongoDB is a good fit for your application.

Percona provides a signed version of MongoDB called Percona Server for MongoDB, with a couple of enterprise-grade features included free of charge, that runs on all Linux flavors. We also support MongoDB – please check out our support page. But what if you just want to run a quick test on your laptop or PC? How do you easily start a mongod process for testing? Below I demonstrate how to start MongoDB database software on the three most popular operating systems.

Microsoft Windows

First of all, be aware of this hotfix: https://support.microsoft.com/en-ca/help/2731284/33-dos-error-code-when-memory-memory-mapped-files-are-cleaned-by-using.

You might need to restart the computer after applying the fix. Then download the .zip file. The website only offers an MSI, but we don’t want to install the binaries, we just want to run it.

Click here to download the 3.4.10 version:
http://downloads.mongodb.org/win32/mongodb-win32-x86_64-2008plus-ssl-3.4.10.zip

After the download, use your favorite decompression tool to extract the MongoDB executables. Then move the extracted folder to your Documents folder, to C:\, or even to a memory stick (but don’t expect high performance):

 

Inside of the bin folder, create a data folder. We are going to use this folder to save our databases.

 

Now we have everything we need to start the database. Open the CMD, and run the following commands to start the database:

C:\mongodb\bin\mongod --dbpath C:\mongodb\bin\data

You will see an output like:

This means the process is running.

In a different CMD, connect to the database using:

C:\mongodb\bin\mongo.exe

I’ve passed --quiet to omit the warnings:

And here we go, MongoDB is running on a Windows machine!

MacOS and Linux configuration:

For macOS, the process is very similar to Windows. The difference is that we can take advantage of the extensive bash commands that the UNIX-like system offers.

Open the terminal. Go to our home/Downloads folder:

cd ~/Downloads

Download MongoDB for MacOS:

wget https://fastdl.mongodb.org/osx/mongodb-osx-ssl-x86_64-3.6.3.tgz

Or download MongoDB for Linux:

wget https://fastdl.mongodb.org/linux/mongodb-linux-x86_64-3.6.3.tgz

Untar the file:

tar -xvzf mongodb-osx-ssl-x86_64-3.6.3.tgz

Rename the folder to mongodb, just to make it easier:

mv mongodb-osx-x86_64-3.6.3/ ~/Downloads/mongodb

Right now all the binaries are in ~/Downloads/mongodb/bin/. Create a data folder there to hold the databases:

mkdir ~/Downloads/mongodb/bin/data

Start the mongod process:

cd ~/Downloads/mongodb/bin
./mongod --dbpath data

The output must be similar to:

On a different tab run:

~/Downloads/mongodb/bin/mongo

At this point, you should be able to use MongoDB with the default options on MacOS or Linux.

Note that we are neither enabling authentication nor configuring a replica set.

If we don’t pass the --quiet parameter, we will receive a few warnings like:

2018-03-16T14:26:20.868-0300 I CONTROL  [initandlisten]
I CONTROL  [initandlisten] ** WARNING: Access control is not enabled for the database.
I CONTROL  [initandlisten] **  Read and write access to data and configuration is unrestricted.
I CONTROL  [initandlisten]
I CONTROL  [initandlisten] ** WARNING: This server is bound to localhost.
I CONTROL  [initandlisten] **          Remote systems will be unable to connect to this server.
I CONTROL  [initandlisten] **       Start the server with --bind_ip <address> to specify which IP
I CONTROL  [initandlisten] **  addresses it should serve responses from, or with --bind_ip_all to
I CONTROL  [initandlisten] **   bind to all interfaces. If this behavior is desired, start the
I CONTROL  [initandlisten] **          server with --bind_ip 127.0.0.1 to disable this warning.
I CONTROL  [initandlisten]

For more information about how to configure these parameters, please refer to the following blog posts and documentation:

https://www.percona.com/blog/2017/12/15/mongodb-3-6-security-improvements/

https://www.percona.com/blog/2017/05/17/mongodb-authentication-and-roles-creating-your-first-personalized-role/

https://www.percona.com/blog/2016/08/12/tuning-linux-for-mongodb/

To stop the mongod process, use ctrl+c (on any operating system) in the server window.

The post Starting MongoDB Database Software appeared first on Percona Database Performance Blog.

via MySQL Performance Blog
Starting MongoDB Database Software

GunVideo.com – New Video Sharing Website by Lenny Magill

Lenny Magill, the owner and CEO of GlockStore.com, has recently published a video announcement addressing the new YouTube strict policies regarding firearm-related content. He has created a new website (GunVideo.com) to have a safe platform for hosting their videos and which may also become a video sharing platform for other content creators in the industry. Glock […]

Read More …

The post GunVideo.com – New Video Sharing Website by Lenny Magill appeared first on The Firearm Blog.


via The Firearm Blog
GunVideo.com – New Video Sharing Website by Lenny Magill

Musk’s Flamethrower Looks Like a Toddler’s Toy Next to This Jet-Powered Fire Tornado Cannon


If you weren’t able to scrape together $500 to buy Elon Musk’s fund-raising flamethrower, YouTube’s Jairus of All has a cheaper, DIY alternative that instead spews a massive spinning tornado of fire using a pair of ducted fans and a tank of liquid propane worn as a backpack.

Jairus’ cannon has a wonderful ‘banned science fair experiment’ aesthetic to it, but it also looks like something Wile E. Coyote would have ordered from the ACME catalog in another ill-conceived attempt to take out the Road Runner. Although, in the wrong hands, Jairus’ creation looks like it could easily take out a small bird, or a nearby planet.

[YouTube via The Awesomer]

via Gizmodo
Musk’s Flamethrower Looks Like a Toddler’s Toy Next to This Jet-Powered Fire Tornado Cannon

No Amount Of Spin Will Make This Gun Control Movement Different

The anti-gunners want to believe this time will be different. In truth, they might be right. Politicians are notoriously wishy-washy when it comes to polling numbers, and many polls seem to be showing broad support for gun control measures, at least in part. It’s not hard to imagine that gun rights are in real jeopardy this time.

However, let’s also be realistic. This particular gun control movement isn’t any different, despite what some people may try and claim (via The Christian Century).

 

When Sam Zeif met with President Trump after the mass shooting at his high school in Parkland, Florida, he broke down in frustration and tears. “How have we not stopped this?” he asked. “After Columbine? After Sandy Hook?”

Zeif’s outrage is understandable. It’s also easy to understand those who have become cynical about political leaders’ persistent unwillingness to tackle gun violence. More than 200 school shootings have taken place since the murders at Columbine, Colorado, in 1999. About 39,000 gun deaths happen each year in the United States. Legislators have mourned but done nothing to address the problem. No meaningful gun control legislation has been passed at the federal level since 1994, when a ban on certain semiautomatic weapons was tucked into the crime bill. That ban expired in 2004.

A couple of points.

First, of those 39,000 deaths, two-thirds are suicides. That tends to be left out when that number is mentioned, and it’s important. Most people understand that you won’t stop suicides by banning the tools people use to commit suicide. If you do that, you’ll eventually have to ban gravity as well. Most of that number is the result of people making a decision and acting on that decision, often without hurting anyone else. Let’s keep that in mind.

Second, since they brought up the assault weapon ban from 1994, it should also be noted that crime was trending downward before the law was passed and continued downward since the law sunset. In other words, the law had no appreciable impact on crime. Imagine that.

Additionally, a red wave overtook Congress following that bill’s passage, which made even Democrats wary of passing gun control legislation.

But the hundreds of thousands of people who gathered at March for Our Lives rallies in late March offered hope that a new movement is under way, led by teens who have seen the trauma of gun violence firsthand and say: no more. What’s hopeful about the latest movement, besides the refreshing leadership of uncynical students, is how it has avoided some of the patterns that have paralyzed previous efforts.

If by “refreshing leadership of uncynical students,” you mean leadership funded by astroturf movements and consisting of foul-mouthed and uninformed individuals, then sure. It’s refreshing.

To start with, the student-led movement has recognized that gun violence affects everyone; it is an issue for people of all races and places. It is an issue that should unite Americans, not divide them.

In her speech at the rally in Washington, 17-year-old Jaclyn Corin acknowledged the racial divide that has to be overcome on this issue. In 2012, black teenagers occupied Florida’s state capitol to protest the shooting of black teenager Trayvon Martin—without getting the kind of attention the Parkland survivors are getting. “But we share this stage today and forever with those communities who have always stared down the barrel of a gun,” said Corin. Edna Chavez, a student from South Los Angeles, was one of the speakers at the Washington rally. “I learned to duck from bullets before I learned to read,” she said.

When are these people going to stop pretending that gun violence is somehow worse than any other form of violence? If your loved one is killed, it doesn’t matter if it was done with a gun or a knife, they’re just as dead. A gun is a tool, but it’s also a tool that is used to save far more lives than take them.

The real problem is violence, plain and simple. Taking away a tool doesn’t make violence go down. In fact, it increases. Take London, for example. England has strict gun control measures, measures that would never fly in the United States even in this current environment. Now London has a higher murder rate than New York City.

If you’re serious about stopping violence, you need to find out why people are violent in the first place. Refusing to start there just shows us you’re not serious about the issue.

The movement also has avoided partisan politics. “This isn’t about the GOP. This isn’t about the Democrats,” said student Cameron Kasky. “This is about us creating a badge of shame for any politicians who are accepting money from the [National Rifle Association] and using us as collateral.” Judging from past failures at gun control, effective strategies and rhetoric will be issue-oriented, not party-oriented.

Yes, Kasky said that.

But the March for Our Lives also featured a lot of bashing of the Republican Party. Marco Rubio has been a repeated target of David Hogg’s, as well as the target of ire for many of the movement’s followers. A look at the signs at the walkouts and the marches shows just how wrong this claim is.

Donald Trump called Hogg, Kasky’s fellow traveler, in hopes of having a thoughtful discussion, and Hogg bragged about hanging up on the man. Here he is, the President of the United States, someone who you have to get on your side if you want national level gun control, and Hogg hangs up on him. Why? Because bashing Trump is cool.

But yeah, totally non-partisan.

Third, the movement has focused on electoral process. It has called on young people to register to vote and to hold candidates accountable. “We are going to take this to every election, to every state and every city,” said Parkland student David Hogg. “When politicians send their thoughts and prayers with no action, we say no more.”

And will they?

See, everyone knows that democracy belongs to those who show up. This isn’t new, and the strategy isn’t new. There’s a reason why so many movements also try to include voter registration drives.

But the March for Our Lives only included a very vocal subsection of American youth. There were many more who never showed up at a march. Still, others are rallying in support of the Second Amendment, because even people who don’t own guns can see that taking away our rights in such a way could have horrible ramifications.

However, let’s not kid ourselves; this movement isn’t any different than the ones in the past. It’s the same movement with many of the same people involved behind the scenes. It has a few new, young faces, but they’re spouting the same old lines. Claims of non-partisanship are lies designed to mislead people who won’t look for themselves, hoping the animosities of the past aren’t noticed this time around.

But all this is, in reality, the same old gun control movement dressed up in a shiny new outfit. It’s the same tactics, the same bogus statistics, and the same rhetoric.

Anyone who claims otherwise is either delusional or a liar.

The post No Amount Of Spin Will Make This Gun Control Movement Different appeared first on Bearing Arms.

via Bearing Arms
No Amount Of Spin Will Make This Gun Control Movement Different

Chadwick Boseman Reprised His Role as T’Challa on Last Night’s Saturday Night Live

Chadwick Boseman bringing the King to SNL.

Last night, Chadwick Boseman took a well-deserved victory lap hosting Saturday Night Live, and in the process he naturally returned to the role that’s defined 2018: the Black Panther.

In a compelling rendition of SNL’s recurring Black Jeopardy sketch, T’Challa takes his place in the competition alongside the African American competitors played by Leslie Jones and Chris Redd, where he delights and disappoints host Darnell Hayes (Kenan Thompson, the most dedicated sketch actor in history) by showcasing the gulf of experience between his privileged, utopian Wakandan life and the norms of the United States.


Boseman does great here as the delightful fish-out-of-water king, and he eventually gets the hang of it for the funny climax of the sketch. He is a wise king, after all. All hail.

via Gizmodo
Chadwick Boseman Reprised His Role as T’Challa on Last Night’s Saturday Night Live

Composite Metal Foam (CMF) Armor Withstands 23mm HEI Shells

About a year ago, Professor Afsaneh Rabiei of North Carolina State University developed composite metal foam (CMF) armor plates. In a test conducted in 2017, the 1″ thick plate performed impressively against small arms bullets and managed to get NIJ Level IV certification. You can find below the video footage of that test. Recently, NC State University has […]

Read More …

The post Composite Metal Foam (CMF) Armor Withstands 23mm HEI Shells appeared first on The Firearm Blog.


via The Firearm Blog
Composite Metal Foam (CMF) Armor Withstands 23mm HEI Shells

Hot-air dryers suck in nasty bathroom bacteria and shoot them at your hands

Researchers found these spewing bacteria and spores.

Washing your grubby mitts is one of the all-time best ways to cut your chances of getting sick and spreading harmful germs to others. But using the hot-air dryers common in bathrooms can undo that handy hygienic work.

Hot-air dryers suck in bacteria and hardy bacterial spores loitering in the bathroom—perhaps launched into the air by whooshing toilet flushes—and fire them directly at your freshly cleaned hands, according to a study published in the April issue of Applied and Environmental Microbiology. The authors of the study, led by researchers at the University of Connecticut, found that adding HEPA filters to the dryers can reduce germ-spewing four-fold. However, the data hints that places like infectious disease research facilities and healthcare settings may just want to ditch the dryers and turn to trusty towels.

Indeed, in the wake of the blustery study—which took place in research facility bathrooms around UConn—”paper towel dispensers have recently been added to all 36 bathrooms in basic science research areas in the UConn School of Medicine surveyed in the current study,” the authors note.

via Ars Technica
Hot-air dryers suck in nasty bathroom bacteria and shoot them at your hands

Star Wars: The Last Laser Master

The Auralnauts have finished up their epic comedic retelling of the first six episodes of Star Wars with episode 6, The Last Laser Master. Follow Laser Master Duke Dirtfarmer and his friends in the fight against the Empire and its fearsome planet-killing weapon: Laser Moon II.

You can watch the five other episodes — including Jedi Party, The Friend Zone, and Revenge of Middle Management — in this playlist.

For snackier Auralnauts fare, see How to make a blockbuster movie trailer, some Bane outtakes from the Dark Knight Rises, and the Star Wars throne room scene minus the John Williams score.

Tags: movies   remix   Star Wars   video
via kottke.org
Star Wars: The Last Laser Master