How to Win Your Physics Class Egg Drop Competition

The problem is simple enough. You’ve got a raw egg, and it’s going to be dropped from a high place. You have to build a contraption so that it doesn’t break when it hits. Easy? Maybe. But how do you make it as light as possible? Or as small as possible?

These questions have had high school physics students pulling all-nighters for decades. Luckily for the generations to come, our good friend, former NASA engineer Mark Rober, is here to enlighten you. Just like in his Pinewood Derby video, he not only shows you some of the best systems for winning (depending on the rules of your particular contest), but also breaks down the science of exactly how each one works. Not only will this help you sound smart when your teacher asks how you came up with it, but hopefully it will inspire you to try something original based on these principles.

Personally, I don’t have any kids in high school, nor am I likely to be entering an egg drop contest anytime soon, and yet I was absolutely glued to this video. I don’t know what I’m going to do with this knowledge, but it’s probably going to be something fun. And I trust that these solutions will work. If he can safely land a rover on Mars then he can safely land an egg on the ground, okay? [Mark Rober]

via Gizmodo

This Graphic Shows How to Defeat Garden Pests with Other Insects

When it comes to gardening, not all insects are bad. For a natural pest control solution, consider welcoming beneficial insects or one of the other DIY pest control options in this infographic.

The graphic, from First Choice Environmental, highlights some of the bugs that destroy your garden—and the “good” insects that feed on those garden pests. You can either order the beneficial insects or, as the graphic shows, use insectary plants to lure them. The graphic also provides a few details on using row covers and diatomaceous earth. (For further reference, there’s also a beneficial insects poster with photos of more good bugs from the University of California.)

The info here might be old news for experienced gardeners, but for hapless beginners like me, it’s good to know.

Infographic: How to protect your garden with organic pest control methods | Inhabitat

via Lifehacker

Coil Your Extension Cords Like a Roadie with the Over-Under Method

No one understands the value of a properly wound cable more than concert roadies and TV crew professionals, who wrap and unwrap hundreds of feet of cord on a daily basis. Here’s the method they use to keep their shows running on time.

The over-under technique is best suited for long cables and works as an alternative to wrapping the cord around your elbow and hand. Start coiling with both thumbs facing the same direction, then alternate the direction of each loop, ending the coil with the thumb of your bottom hand facing toward you.

This method eliminates unnecessary twists in the cord and allows the cord to coil in its natural state (like it was wrapped at the factory). It also allows the cord or cable to be quickly straightened by either throwing the coil away from you or just pulling on one end.

This Old House also approves of the over-under method, and they’ve added a great tip to keep the cord tied together: wrap a long string around one end of the extension cord near the plug, and after coiling the cord, tie a bow knot around the entire coil.

They also suggest storing your wound cords in a 5-gallon bucket, which will keep them from unraveling. You can also dispense cord straight from the bucket by cutting a hole in the side of the bucket near the bottom and pulling the male end of the plug through the hole to plug into the wall. Then pull the length of cord needed straight out of the bucket.

The chain link method is another effective coiling technique and eliminates knots, but it adds more twists and bends to your cord and is not very compact for storage. Check out our advice for wrapping smaller cables while you’re at it.


via Lifehacker

Turn on Tracking Protection in Firefox to Make Pages Load 44% Faster

Even if you don’t care about the privacy implications of tracking cookies and other technologies sites use to identify us online, you might want to turn on Tracking Protection in Firefox anyway for a potential big speed boost.

Former Mozilla software engineer Monica Chew and computer science researcher Georgios Kontaxis took a look at the top 200 news sites (according to Alexa) and found a median 44% reduction in page load time, as well as a 39% reduction in data usage.

Tracking Protection actively blocks domains known to track users. You might not see huge performance benefits for all sites, depending on how much each site relies on third-party content and similar extras from tracking domains. Still, with a range of between 20% and 90% decreased page load times according to the study—and better privacy control—it’s worth a shot.

To turn on Tracking Protection in Firefox:

  1. Type about:config in the location bar and hit Enter.
  2. You’ll see a warning about possibly voiding your warranty. Hit “I’ll be careful, I promise!” to continue.
  3. Search for privacy.trackingprotection.enabled.
  4. Double-click that to toggle the value to true.

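If you’d rather make the change in a config file than flip it by hand, the same preference can go in a user.js file in your Firefox profile folder, which Firefox reads at every startup. A minimal sketch (the profile folder’s location varies by OS):

    // user.js, placed in your Firefox profile folder.
    // Firefox applies each user_pref() line at startup.
    user_pref("privacy.trackingprotection.enabled", true);
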
You can read the researchers’ paper (PDF) here.

Tracking Protection for Firefox at Web 2.0 Security and Privacy 2015 | Monica at Mozilla via Venture Beat and Boing Boing


via Lifehacker

What Makes A Database Mature?

Many database vendors would like me to take a look at their products and
consider adopting them for all sorts of purposes. Often they’re pitching
something quite new and unproven as a replacement for mature, boring technology
I’m using happily.

I would consider a new and unproven technology, and I often have. As I’ve
written previously, though, a real evaluation takes a lot of
effort, and that
makes most evaluations non-starters.

Perhaps the most important thing I’m considering is whether the product is
mature. There are different levels of maturity, naturally, but I want to
understand whether it’s mature enough for me to take a look at it. And in that
spirit, it’s worth understanding what makes a database mature.

For my purposes, maturity really means demonstrated capability and quality with
a lot of thought given to all the little things.
The database needs to demonstrate the ability to solve specific problems well and
with high quality. Sometimes this comes from customers, sometimes from a large
user community (who may not be customers).

Here are some things I’ll consider when thinking about a database, in no
particular order.

What problem do I have? It’s easy to fixate on a technology and start
thinking about how awesome it is. Some databases are just easy to fall in love
with, to be frank. Riak is in this category. I get really excited about the
features and capabilities, the elegance. I start thinking of all the things I
could do with Riak. But now I’m putting the cart before the horse. I need to
think about my problems first.

Query flexibility. Does it offer sophisticated execution models to handle
the nuances of real-world queries? If not, I’ll likely run into queries that
run much more slowly than they should, or that have to be pulled into
application code. MySQL has lots of examples of this. Queries such as ORDER
BY with a LIMIT clause, which are super-common for web workloads, did way
more work than they needed to in older versions of MySQL. (It’s better now,
but the scars remain in my mind).
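
To check for this sort of thing yourself, you can ask the server how it plans to execute such a query. Here’s a sketch in Python (pymysql is one common client; the table, columns, and credentials are made-up examples):

    # Sketch: does ORDER BY ... LIMIT get served from an index, or does
    # it force a sort? Table, columns, and credentials are hypothetical.
    import pymysql

    conn = pymysql.connect(host="localhost", user="app",
                           password="secret", database="blog")
    with conn.cursor() as cur:
        # A classic web-workload query: the ten newest rows.
        cur.execute("EXPLAIN SELECT id, title FROM posts "
                    "ORDER BY created_at DESC LIMIT 10")
        for row in cur.fetchall():
            print(row)
    conn.close()
    # If the Extra column reports "Using filesort", the server sorted the
    # whole candidate set just to return ten rows; an index on created_at
    # would let it read only the first ten entries instead.
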
Query flexibility. The downside of a sophisticated execution engine with
smart plans is that they can go very wrong. One of the things people like about
NoSQL is the direct, explicit nature of queries, where an optimizer can’t be
too clever for its own good and cause a catastrophe. A database needs to make
up its mind: if it’s simple and direct, OK. If it’s going to be smart, the bar
is very high. A lot of NoSQL databases that offer some kind of “map-reduce”
query capability fall into the middle ground here: key-value works great, but
the map-reduce capability is far from optimal.

Data protection. Everything fails, even things you never think about. Does
it automatically check for and guard against bit rot, bad memory, partial page
writes, and the like? What happens if data gets corrupted? How does it behave?
Backups. How do you back up your data? Can you do it online, without
interrupting the running database? Does it require proprietary tools? If you
can do it with standard Unix tools, there’s infinitely more flexibility. Can
you do partial/selective backups? Differential backups since the last backup?
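
As one concrete instance of “online, with standard tools”: MySQL’s mysqldump can take a consistent logical backup of InnoDB tables without blocking writers. A sketch (the output path is made up):

    # Sketch: an online logical backup. --single-transaction reads a
    # consistent InnoDB snapshot without locking out writers.
    import subprocess

    with open("/backups/nightly.sql", "wb") as out:  # path is made up
        subprocess.run(
            ["mysqldump", "--single-transaction", "--all-databases"],
            stdout=out,
            check=True,
        )
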
Restores. How do you restore data? Can you do it online, without taking
the database down? Can you restore data in ways you didn’t plan for when
taking the backup? For example, if you took a full backup, can you efficiently
restore just a specific portion of the data?

Replication. What is the model—synchronous, async, partial, blend?
Statement-based, change-based, log-based, or something else? How flexible is
it? Can you do things like apply intensive jobs (schema changes, big
migrations) to a replica and then swap master and replica? Can you filter and
delay and fiddle with replication in all different ways? Can you write to
replicas? Can you chain replication? Replication flexibility is an absolutely
killer feature. Operating a database at scale is very hard with inflexible
replication. Can you do multi-source replication? If replication breaks, what
happens? How do you recover it? Do you have to rebuild replicas from scratch?
Lack of replication flexibility and operability is still one of the major pain
points in PostgreSQL today. Of course, MySQL’s replication provides a lot of
that flexibility, but historically it didn’t work reliably, and gave users a
huge foot-gun. I’m not saying either is best, just that replication is hard
but necessary.

Write stalls. Almost every new database I’ve seen in my career, and a lot
of old ones, has had some kind of write stalls. Databases are very hard to
create, and typically it takes 5-10 years to fix these problems if they aren’t
precluded from the start (which they rarely are). If you don’t talk about
write stalls in your database in great detail, I’m probably going to assume
you are sweeping them under the rug or haven’t gone looking for them. If you
show me you’ve gone looking for them and either show that they’re contained or
that you’ve solved them, that’s better.
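
Going looking for them can start as simply as timing a steady trickle of writes and inspecting the tail of the latency distribution. A minimal sketch (SQLite stands in here for whatever engine you’re probing):

    # Minimal write-stall probe: time many small commits and look at the
    # worst cases. Stalls show up as rare, huge outliers (flushes,
    # compaction, checkpoints) against an otherwise tight distribution.
    import sqlite3
    import time

    conn = sqlite3.connect("probe.db")
    conn.execute("CREATE TABLE IF NOT EXISTS probe "
                 "(id INTEGER PRIMARY KEY, v TEXT)")

    latencies = []
    for _ in range(10000):
        start = time.perf_counter()
        conn.execute("INSERT INTO probe (v) VALUES (?)", ("x" * 100,))
        conn.commit()
        latencies.append(time.perf_counter() - start)

    latencies.sort()
    n = len(latencies)
    print("p50=%.6fs  p99=%.6fs  max=%.6fs"
          % (latencies[n // 2], latencies[int(n * 0.99)], latencies[-1]))
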
Independent evaluations. If you’re a solution in the MySQL space, for
example, you’re not really serious about selling until you’ve hired Percona to
do evaluations and write up the results. In other database communities, I’d
look for some similar kind of objective benchmarking and evaluations.

Operational documentation. How good is your documentation? How complete?
When I was at Percona and we released XtraBackup, it was clearly a
game-changer, except that there was no documentation for a long time, and this
hurt adoption badly. Only a few people could understand how it worked. There
were only a few people inside of Percona who knew how to set it up and operate
it, for that matter. This is a serious problem for potential adopters. The
docs need to explain important topics like daily operations, what the database
is good at, what weak points it has, and how to accomplish a lot of common
tasks with it. Riak’s documentation is fantastic in this regard. So are
MySQL’s and PostgreSQL’s.

Conceptual documentation. How does it work, really? One database that I
think has been hurt a little bit by not really explaining how it works is
NuoDB, which used an analogy of a flock of birds all working together. It’s a
great analogy, but it needs to be used only to set up a frame of reference for
a deep-dive, rather than as a pat answer. (Perhaps somewhat unfairly, I’m
writing this offline, and not looking to see if NuoDB has solved this issue I
remember from years ago.) Another example was TokuDB’s Fractal Tree indexes.
For a long time it was difficult to understand exactly what fractal tree
indexes really did. I can understand why, and I’ve been guilty of the same
thing, but I wasn’t selling a database. People really want to feel sure they
understand how it works before they’ll entrust it with their data, or even
give it a deep look. Engineers, in particular, will need to be convinced that
the database is architected to achieve its claimed benefits.

High availability. Some databases are built for HA, and those need to have
a really clear story around how they achieve it. Walk by the booth of most new
database vendors at a conference and ask them how their automatic HA
solution works, and they’ll tell you it’s elegantly architected for zero
downtime and seamless replacement of failed nodes and so on. But as we know,
these are really hard problems. Ask them about their competition, and they’ll
say “sure, they claim the same stuff, but our code actually works in failure
scenarios, and theirs doesn’t.” They can’t all be right.

Monitoring. What does the database tell me about itself? What can I
observe externally? Most new or emerging databases are basically black boxes.
This makes them very hard to operate in real production scenarios. Most
people building databases don’t seem to know what a good set of
monitoring capabilities even looks like. MemSQL is a notable exception, as is
Datastax Enterprise. As an aside, the astonishing variety of opensource databases
that are not monitorable in a useful way is why I founded VividCortex.
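
At minimum, useful monitorability means counters you can poll and graph. A sketch against MySQL’s SHOW GLOBAL STATUS, turning a few of its (real) counters into per-interval deltas, the raw material of any monitoring system (connection details are made up):

    # Sketch: poll a few MySQL status counters and print 10-second deltas.
    import time
    import pymysql

    WATCH = ("Questions", "Com_select", "Innodb_row_lock_waits")

    conn = pymysql.connect(host="localhost", user="monitor",
                           password="secret")
    prev = {}
    while True:
        with conn.cursor() as cur:
            cur.execute("SHOW GLOBAL STATUS")
            status = dict(cur.fetchall())  # Variable_name -> Value
        for name in WATCH:
            value = int(status[name])
            if name in prev:
                print("%s: +%d in last 10s" % (name, value - prev[name]))
            prev[name] = value
        time.sleep(10)
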
Tooling. It can take a long time for a database’s toolbox to become robust
and sophisticated enough to really support most of the day-to-day development
and operational duties. Good tools for supporting the trickier emergency
scenarios often take much longer. (Witness the situation with MySQL HA tools
after 20 years, for example.) Similarly, established databases often offer
rich suites of tools for integrating with popular IDEs like Visual Studio,
spreadsheets and BI tools, migration tools, bulk import and export, and the like.

Client libraries. Connecting to a database from your language of choice,
using idiomatic code in that language, is a big deal. When we adopted Kafka at
VividCortex, it was tough for us because the client libraries at the time
were basically only mature for Java users. Fortunately, Shopify had
open-sourced their Kafka libraries for Go, but unfortunately they weren’t
mature yet.

Third-party offerings. Sometimes people seem to think that third-party
providers are exclusively the realm of open-source databases, where third
parties are on equal footing with the parent company, but I don’t think this
is true. Both Microsoft and Oracle have enormous surrounding ecosystems of
companies providing alternatives for practically everything you could wish for,
except for making source code changes to the database itself. If I have only
one vendor to help me with consulting, support, and other professional
services, it’s a dubious proposition, especially if it’s a small team that
might not have the resources to help me when I need it most.

The most important thing when considering a database, though, is success
stories. The world is different from a few decades ago, when the good databases
were all proprietary and nobody knew how they did their magic, so proofs of
concept were a key sales tactic. Now, most new databases are opensource and the
users either understand how they work, or rest easy in the knowledge that they
can find out if they want. And most are adopted at a ratio of hundreds of
non-paying users for each paying customer. Those non-paying users are a
challenge for a company in many ways, but at least they’re vouching for the
solution.

Success stories and a community of users go together. If I can choose from a
magical database that claims to solve all kinds of problems perfectly, versus
one that has broad adoption and lots of discussions I can Google, I’m not going
to take a hard look at the former. I want to read online about use cases,
scaling challenges met and solved, sharp edges, scripts, tweaks, tips and
tricks. I want a lot of Stack Exchange discussions and blog posts. I want to see
people using the database for workloads that look similar to mine, as well as
different workloads, and I want to hear what’s good and bad about it.

(Honest marketing helps a lot with this, by the way. If the company’s own claims
match bloggers’ claims, a smaller corpus online is more credible as a
result.)

These kinds of dynamics help explain why most of the fast-growing emerging
databases are opensource. Opensource has an automatic advantage because of free
users vouching for the product. Why would I ever consider a proof-of-concept to
do a sales team a favor, at great cost and effort to myself, when I could use an
alternative database that’s opensource and has an active community discussing
the database? In this environment, the proof of concept selling model is
basically obsolete for the mass market. It may still work for specialized
applications where you’ll sell a smaller number of very pricey deals, but it
doesn’t work in the market of which I’m a part.

In fact, I’ve never responded positively to an invitation to set up a PoC for a
vendor (or even to provide data for them to do it). It’s automatically above my
threshold of effort. I know that no matter what, it’s going to involve a huge
amount of time and effort from me or my teams.

There’s another edge case: databases that are built in-house at a specific
company and then are kicked out of the nest, so to speak. This is how Cassandra
got started, and Kafka too. But the difference between a database that works
internally for a company (no matter how well it works for them) and one that’s
ready for mass adoption is huge, and you can see that easily in both of those
examples. I suspect few people have that experience to point to, but probably a
lot of readers have released some nifty code sample as open-source and seen how
different it is to create an internal-use library, as opposed to one that’ll be
adopted by thousands or more people.

Remarkably few people at database companies seem to understand the
things I’ve written about above. The ones who do—and I’ve named some of
them—might have great success as a result. The companies who aren’t run by
people who have actually operated databases in their target markets recently
will probably have a much harder time of it.

I don’t make much time to coach companies on how they should approach me. It’s
not my problem, and I feel no guilt saying no without explanation. (One of my
favorite phrases is “no is a complete sentence.”) But enough companies have
asked me, and I have enough friends at these companies, that I thought it would
be helpful to write this up. Hopefully this serves its intended purpose and
doesn’t hurt any feelings. Please use the comments to let me know if I can
improve this post.

Photo credits: bristlecone pine by yenchao, roots by mclcbooks.

via Planet MySQL

The Sun’s Magnetic Field Has Never Looked So Good

In photographs taken in the spectrum of visible light, the Sun’s magnetic field is invisible. But this image wasn’t taken in our familiar, visible wavelengths, which is why the Sun’s magnetic field is both apparent and beautiful.

The bright white strands in this image are called coronal loops; they’re the lines of the Sun’s magnetic field passing through the corona, the outer layer of gas around the Sun. The blue and yellow patches show the two opposite polarities of the magnetic field.

NASA combined data from two advanced instruments aboard NASA’s Solar Dynamics Observatory to produce this dramatic image, with color added to make the image clearer. The Atmospheric Imaging Assembly, which creates images of the solar atmosphere in 10 different wavelengths every 10 seconds, imaged the coronal loops, while the blue and yellow magnetic field images came from the Helioseismic and Magnetic Imager. The overlay provides a striking look at the invisible force of the Sun’s magnetic field.

[NASA]

Image Credit: NASA SDO

via Gizmodo