Data recovery can be useful in all kinds of situations, whether your SSD suddenly died or you came back from vacation with a corrupted SD card. Instead of giving up and tossing out the device, you can try recovering your files on your own with an app. It won’t always work, but when it does, it’s a huge relief. If you’re on a Mac, these are the main apps to turn to when you need to recover files that have been lost or deleted.
We all have a mental image of the quintessential sniper: a man who needs only his rifle and some camouflage to make the impossible billion-yard shot. The reality is that, whether you are a sniper or an everyday distance shooter, the fundamentals of shooting apply, and that does not mean going to the 500-yard range and blasting 1,000 rounds downrange like a madman. In fact, our range does not allow it, for obvious safety reasons: when you shoot like that, you have no idea where each of those bullets is going.
A military sniper once told me that I would be stunned how few rounds were fired on an average day at sniper training. He indicated it was more like hunting in that there is a lot of waiting and maneuvering and calculating, with a heck of a lot of analytical classroom work.
At heart, sniper training puts more emphasis on prepping for the shot, making the shot, and then a whole lot of analysis about where it went and why.
Since most modern cartridges shoot flat enough for hunting out to 100-150 yards, most of us never really have to think about any significant holdovers. When is the last time you thought about barometric pressure, ambient temperature, ranging, scope adjustment, wind estimation, spin drift, and flight time before you took a shot at a deer?
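To see why holdover stops being negligible past those distances, here is a deliberately simplified sketch in Python. It uses a drag-free (vacuum) model, so the numbers understate real-world drop; actual ballistics apps model drag, wind, spin drift, and atmospherics. The muzzle velocity is illustrative.

```python
# Simplified, drag-free estimate of bullet drop and MOA holdover.
# Real ballistics calculators account for drag, wind, spin drift, and
# atmospherics; this only illustrates how fast holdover grows with range.

G = 32.174  # gravitational acceleration, ft/s^2

def drop_inches(range_yards, muzzle_velocity_fps):
    """Vacuum-model drop at a given range, ignoring air resistance."""
    t = (range_yards * 3) / muzzle_velocity_fps  # time of flight, seconds
    return 0.5 * G * t ** 2 * 12                 # drop in inches

def holdover_moa(range_yards, muzzle_velocity_fps):
    """Convert drop to minutes of angle (1 MOA is about 1.047" per 100 yd)."""
    moa_inches = 1.047 * (range_yards / 100)
    return drop_inches(range_yards, muzzle_velocity_fps) / moa_inches

for r in (100, 200, 300, 400):
    print(f"{r} yd: {drop_inches(r, 2700):5.1f}\" drop, "
          f"{holdover_moa(r, 2700):4.1f} MOA holdover")
```

Even this optimistic model shows drop growing roughly with the square of the distance, which is exactly why a rifle that feels flat-shooting at 150 yards needs real holdover math at 400.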
Also, most benchrest target shooters are used to shooting nice and comfy off a level, stable bench and picking “nice days for shooting” that are milder and less windy, ensuring fewer things affect bullet flight. Snipers don’t have that advantage.
Any shooter with the right rifle and ammo who puts in the effort can shoot sub-MOA (minute of angle) groups at 100 yards — but try that while it’s raining at dusk, shooting under a log while lying sideways in a muddy ditch with 30 MPH crosswinds, at a 400-yard target that is trying to kill you and your buddies.
iPhone Apps
The first step to shooting accurately at long distances is to figure out where the bullet will go. There are a couple of iPhone apps I really like. One is the now military-standard Knight’s Armament Corporation (KAC) BulletFlight series, available in three versions ranging from the basic $3.99 edition to the $39.99 Military edition. If you want a comprehensive, feature-rich app that delivers cutting-edge ballistics calculation, including really complex spin-drift calculations, with most commercial and military caliber data loaded on board, this is the app to download.
The other iPhone ballistics bullet-drop app I like (and probably use more often) is iStrelok, a free or inexpensive app (depending on the version) that provides a complete range of ballistics calculations but lacks the extra data inputs and sophistication of BulletFlight… but hey, it’s cheap and has worked great for me breaking clay targets out to 400 yards.
This is more of a fast-calculation app designed around Kentucky-windage holdovers, but it does calculate scope clicks, and the newer version can even do truing. The app shows you where to aim on various reticle designs (Mil Dot, Nightforce NP-R2 & NP-R1, Leupold TMR, Ballistic Mildot & XTR Ballistic Mildot, BR, SPR, and more). Set up your rifle and cartridge data, enter conditional information (distance, wind, temperature, slope), and it shows you where to aim when you look through the scope.
It seems to work very well on the .22 LR, .357, 9mm, and .308 loads I have tried it with.
Another useful app, for use with Mil-Dot-reticle-equipped scopes, is Snipers Mildot, a distance-estimation app that lets you set the size of the target with one slider and match the height of the target as seen through your Mil-Dot scope with the other.
Most deer chests are 18″ tall so you can make pretty accurate yardage calculations with this, though not as accurate as a laser range finder. This app requires a Mildot scope on the noted calibrated power setting. It took me a little bit to figure out, but in the end, it’s a handy tool for long range shooting.
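The math these mil-ranging apps perform is simple enough to sketch. One milliradian subtends 3.6 inches at 100 yards, which gives the classic formula: range in yards equals target size in inches times 27.78, divided by the measured mils. A quick illustration in Python (the numbers are examples, not app output):

```python
# Mil-dot ranging: one milliradian subtends 3.6" at 100 yards, so
# range (yards) = target size (inches) * 27.78 / apparent size (mils).

def range_yards(target_inches, mils):
    """Estimate distance from known target size and its size in mils."""
    return (target_inches * 27.78) / mils

# An 18" deer chest that spans 2.0 mils in the reticle:
print(range_yards(18, 2.0))   # roughly 250 yards
```

This is why the scope has to be on its calibrated power setting: zoom changes the apparent mil spacing, and the formula silently breaks if the reticle isn’t true.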
Theodolite is another great iPhone app for shooting, surveying, outdoor, and handyman use, allowing precise angle calculation, distance and height calculations, GPS, azimuth bearing, altitude, time, a precision compass, and a host of other information critical for positioning and measurement. It is a stunning display of what the embedded iPhone sensors can do, and handy when shooting.
If you want to add some stress to your training, I recommend a shot timer. I have several, including the IA-Innovative Applications Shot Timer and SureFire’s Shot Timer, both free. These provide the start beep and record total time, first-shot time, split times, and shot counts. I use them with a handgun, and they can provide some stress during long-range shooting as well. Adding a little urgency and stress to your long-range shooting can really change your accuracy — and not for the better.
Dry Firing
I cannot possibly convey how much I have learned from dry firing. If you want to tighten your groups with rifle and pistol: dry fire, dry fire, and dry fire some more. This will improve your grip, will make you painfully aware of flinching, thumb squeezing, trigger yanks, and other issues that make you miss the target.
As a test, the next time you are watching television, make sure your gun is completely unloaded, point it at the TV, and pull the trigger. Did your sight picture move when you pulled the trigger? Did you flinch like a newbie? Was there any movement? By dry firing the hell out of my pistols, I have significantly tightened my groups, and now I am working through the same drills with my rifles.
A Good Rest
A great portable rest is your backpack, a Harris bipod, or even bean or rice bags, which are really light and provide the same utility as lead-shot bags… plus you can eat them if you have to go all nature-boy survival. I also encourage practicing both standard and uncomfortable shooting positions while you are at the range. The reality is you can practice all you want at a bench, but when you start crouching or lying down, things change.
Ethical Questions
We all know the guy who takes up hunting, buys a new .300 Magnum, puts three rounds through it, and heads off on a guided hunt where he attempts 300-400-yard shots. He then brags that it took two shots, but he brought the beast down at 400 yards. This is totally irresponsible and inhumane.
We have a responsibility to make a clean kill. I pulled off a 150-yard squirrel shot one year, only to find that I had shot its leg off and had to dispatch it upon retrieval. That haunts me to this day, and if it had been a deer, I probably would never have hunted again. But it did teach me to only take humane shots inside my field shooting capabilities. If you can’t cleanly take the game, don’t take the shot.
Also, you need to ask yourself “Can I safely make the shot from here? Am I shooting into a good backstop?” You need to answer these — correctly — before setting up to take the shot.
Observation
Sniping and hunting have a lot of similarities, starting with the hunt itself, which means a lot of glassing with binoculars and not much shooting. You may have spent a pile of money on a rifle and scope, but the most critical piece of hunting gear is an excellent set of binoculars in 8x or 10x magnification. I can guarantee you will find more game with them.
The other vital piece of equipment, one that is getting cheaper, is a rangefinder. My Nikon RifleHunter is awesome and provides not only distance but also shot angle and angle-adjusted shot distances. Without good range estimation, that $2,000 trip and hunt may leave you empty-handed because of an incorrect holdover.
The Obvious Questions
Have you sighted in, and do you know what the rifle will do at 10, 25, 50, 75, 100, 150, and 200+ yards with the ammo you hunt and shoot with? Do you have that data printed on a little waterproof card? A lot of guys figure that if they are good on the 100-yard range, they are good at longer ranges – but that is not the case, and experience validating ballistic-calculator data is critical. Have you shot off the bench, prone, kneeling, and seated in odd, uncomfortable positions? How did that affect your accuracy?
Are you using the same ammo you sighted in with? Is it good ammo or cheap stuff that is inconsistent? Always, always, always sight in with the same ammo you hunt with. Have you dry fired enough to ensure good grip, trigger, and breath control?
Are you keeping a shot log recording distances, sight-in distance, wind, clean/cold/warm-bore information, and ammunition variances? Most guns shoot differently on the first round from a dirty or clean bore and when the barrel is cold, and sometimes the accuracy loss can be significant. Have you recorded how your gun shoots from cold bore to hot bore? Sometimes you only have the one shot, and you’d better know where the round will go.
Final Thoughts
All this boils down to knowing your rifle and how it shoots. You are better off having only one rifle and knowing how it shoots than having 50 rifles and not knowing how any of them really shoot. Making a great shot isn’t about how expensive your rifle is; it is about how well you know your rifle and how it and you shoot in various conditions.
Ballistics calculators take some of the guesswork out of where the bullet will go, but a great shot takes lots of range practice — and as the saying goes, if it was easy everyone would do it.
After a lengthy absence, the Six Days of Genesis slide show is finally back. It’s been updated and refined, and is on a new hosting service. It looks to be working on most browsers, but please let me know in the comments if you have any problems viewing it. [Permanent link here.]
via SixDay Science
Previously, we looked at the most common type of locking mechanism for rifles, but what about handguns? Well, today we’ll be looking at tilting-barrel locking, a method used in virtually every modern locked-breech handgun today. Tilting barrel locking was invented by that Utahn gun maestro, John Moses Browning, as an evolution of a translating barrel mechanism […]
Mr. Colion Noir, NRA News Commentator and curator of the Pew Pew Life, released a new video titled Orlando Terror Attack Gun Control Response, which began with a bold declaration:
“I can honestly say, in all my years as a Second Amendment advocate, that I have never seen a group of people pimp a tragedy like President Obama, Hillary Clinton and the anti-gun avengers pimped the tragedy in Orlando.”
…and it only got more and more delicious from there:
Noir also created the #SteelWaiting hashtag, which he uses to promote responsible gun ownership on social media, reinforcing the idea of personal responsibility and accountability.

“Guns Don’t Kill People, People Kill People”
We don’t blame the car when a drunk driver kills someone, so why do we blame the gun?
“It’s time for Responsible Gun Owners to take back the image of gun ownership in this country,” Noir said of the hashtag on his Instagram account.
Side note: I would also like to thank Noir for coining the term ‘breakfast potatoes’ in this video. I am seriously crushing on that one, buddy.
Talk show host and shock jock Howard Stern wasn’t holding back yesterday when he took to the air with his response to politicians’ and anti-gun groups’ calls for gun control in the wake of Sunday’s deadly Orlando shooting.
Stern kicked off his pro-gun rant with, “I can’t believe these people would come out afterward and their answer to Orlando is to take away guns from the public. It’s f***ing mind blowing to me!”
“The sheepdogs are protecting you, but some of them can’t be with you all day. There’s not a sheepdog for every citizen, and a wolf is still eating one of you every night. ‘Baaaaaaaa, oh I know, let’s remove all the guns from the sheep.’ What?”
Stern, a proud New Yorker, also called out gun free zones and referenced 9/11 in a poignant analogy.
“The wolves are always planning. They’ll use boxcutters. They’ll use an airplane fly it right into a building. They don’t need AR-15s.”
“I’m going to tell you about the most gun-free zone on the planet: it happened during 9/11, it was on a plane. You know you can’t get a gun on a plane, it’s completely gun-free.”
“So what did the wolves do? They said, ‘This is great, we’ll just kill the sheep with box-cutters.’ They went on the plane with box-cutters, and all the sheep went ‘Baaaaaaaaaa!”
“Now if there had been an air marshal on that plane, a whole f***ing other thing would have went down. There wouldn’t be no 9/11.”
The talk show host also took a jab at the politicians looking to disarm citizens.
“I don’t like violence, I don’t like any of this stuff, but I consider myself a sheep. Most of your politicians have private security, so they’re OK. Those are sheep that are very well protected. You, on the other hand, are a sitting duck.”
Stern even used the segment to lament how history could have been much different for Jews had they not been stripped of their guns.
“Can you imagine if the Jews, at least when the Nazis were banging on the doors, if they had a couple of pistols and AR-15s to fight the Nazis? If Anne Frank’s father had a f***ing gun, maybe he at least could have taken a few Nazis out.”
The segment also featured calls from pro- and anti-gun callers:
00:00:10 – Sheep analogy (dogs, wolves)
00:09:05 – Caller Jeff, a Green Beret (pro-gun)
00:13:50 – Ralph calls in (anti-gun); Howard’s Nazi analogy
00:19:38 – “In my dreamworld, every gay bar is full of people armed to the teeth…”
An anonymous reader writes from a report via Ars Technica: A federal judge has been convinced by the FBI to block the disclosure of where the bureau has attached surveillance cams on Seattle utility poles. Ars Technica writes about how such a privacy dispute is highlighting a powerful tool the authorities are employing across the country to spy on the public with or without warrants. Ars Technica reports: "The deployment of such video cameras appears to be widespread. What’s more, the Seattle authorities aren’t saying whether they have obtained court warrants to install the surveillance cams. And the law on the matter is murky at best. In an e-mail to Ars, Seattle city attorney spokeswoman Kimberly Mills declined to say whether the FBI obtained warrants to install surveillance cams on Seattle City Light utility poles. ‘The City is in litigation and will have no further comment,’ she said. Mills suggested [Ars] speak with the FBI office in Seattle, and they did. Peter Winn [assistant U.S. attorney in Seattle] wrote to Judge Jones that the location information about the disguised surveillance cams should be withheld because the public might think they are an ‘invasion of privacy.’ Winn also said that revealing the cameras’ locations could threaten the safety of FBI agents. And if the cameras become ‘publicly identifiable,’ Winn said, ‘subjects of the criminal investigation and national security adversaries of the United States will know what to look for to discern whether the FBI is conducting surveillance in a particular location.’"
A 200-plus-page document that appears to be a Democratic anti-Trump playbook compiled by the Democratic National Committee has leaked online following this week’s report that the DNC was breached by Russian hackers. In it, Trump is pilloried as a “bad businessman” and “misogynist in chief.”
The document—which according to embedded metadata was created by a Democratic strategist named Warren Flood—was created on December 19th, 2015, and forwarded to us by an individual calling himself “Guccifer 2.0,” a reference to the notorious, now-imprisoned Romanian hacker who hacked various American political figures in 2013.
The package forwarded to us also contained a variety of donor registries and other strategy files, “just a few docs from many thousands I extracted when hacking into DNC’s network,” the purported hacker claimed over email, adding that he’s in possession of “about 100 Gb of data including financial reports, donors’ lists, election programs, action plans against Republicans, personal mails, etc.”
His stated motive is to be “a fighter against all those illuminati that captured our world.”
The enormous opposition document, titled simply “Donald Trump Report,” appears to be a summary of the Democratic Party’s strategy for delegitimizing and undermining Trump’s presidential aspirations—at least as they existed at the end of last year, well before he unseated a field of establishment Republicans and clinched the nomination. A section titled “Top Narratives” describes a seven-pronged attack on Trump’s character and record.
The first is the argument that “Trump has no core”:
One thing is clear about Donald Trump, there is only one person he has ever looked out for and that’s himself. Whether it’s American workers, the Republican Party, or his wives, Trump’s only fidelity has been to himself and with that he has shown that he has no problem lying to the American people. Trump will say anything and do anything to get what he wants without regard for those he harms.
Second, that Trump is running a “divisive and offensive campaign”:
There’s no nice way of saying it – Donald Trump is running a campaign built on fear-mongering, divisiveness, and racism. His major policy announcements have included banning all Muslims from entering the U.S., and calling Mexican immigrants “rapists” and “drug dealers” while proposing a U.S.-Mexico border wall. And Trump’s campaign rallies have become a reflection of the hateful tone of his campaign, with protestors being roughed up and audience members loudly calling for violence.
Third, Trump is a “bad businessman”:
Despite Trump’s continual boasting about his business success, he has repeatedly run into serious financial crises in his career and his record raises serious questions about whether he is qualified to manage the fiscal challenges facing this country. Trump’s business resume includes a long list of troubling issues, including his company’s record of forcing people from their homes to make room for developments and outsourcing the manufacturing of his clothing line to take advantage of lower-wage countries like China and Mexico. His insight about the marketplace has proven wrong many times, including in the run-up to the Great Recession. And Trump’s record of irresponsible and reckless borrowing to build his empire – behavior that sent his companies into bankruptcy four times – is just one indication of how out-of-touch he is with the way regular Americans behave and make a living, and it casts doubt on whether he has the right mindset to tackle the country’s budget problems.
Fourth, that Trump’s policies are dangerous:

Trump’s policies – if you can call them that – are marked by the same extreme and irresponsible thinking that shape his campaign speeches. There is no question that Donald Trump’s rhetoric is dangerous – but his actual agenda could be a catastrophe.
Fifth, in classically corny Democratic Party style, Donald Trump is the “misogynist in chief”:
Through both his words and actions, Trump has made clear he thinks women’s primary role is to please men. Trump’s derogatory and degrading comments to and about women, as well as his tumultuous marriages, have been well publicized. And as a presidential candidate, Trump has adopted many of the backwards GOP policies that we’ve come to expect from his party.
Sixth, Donald Trump is an “out of touch” member of the elite:
Trump’s policies clearly reflect his life as a 1-percenter. His plans would slash taxes for the rich and corporations while shifting more of the burden to the shoulders of working families. He stands with Republicans in opposing Wall Street reform and opposing the minimum wage. Trump clearly has no conception of the everyday lives of middle class Americans. His description of the “small” $1 million loan that his father gave him to launch his career is proof enough that his worldview is not grounded in reality.
The seventh strategy prong is to focus on Trump’s “personal life,” including that “Trump’s Ex-Wife Accused Him Of Rape,” which is true.
What follows is roughly two hundred pages of dossier-style background information, instances of Trump dramatically changing his stance on a litany of issues, and a round-up of the candidate’s most inflammatory and false statements (as of December ‘15, at least).
It appears that virtually all of the claims are derived from published sources, as opposed to independent investigations or mere rumor. It’s also very light on anything that could be considered “dirt,” although Trump’s colorful marital history is covered extensively:
The DNC hack was first revealed Tuesday, when the cybersecurity firm CrowdStrike announced it had discovered two hacking collectives, linked to Russian intelligence, inside the DNC network after the DNC reported a suspected breach. In a blog post, the company identified the groups as “COZY BEAR” and “FANCY BEAR”—two “sophisticated adversaries” that “engage in extensive political and economic espionage for the benefit of the government of the Russian Federation.”
The hackers were able to access opposition files and may have been able to read email and chat traffic, but did not touch any financial, donor, or personal information, the DNC said Tuesday. However, the user who sent the files to Gawker refuted that claim, writing, “DNC chairwoman Debbie Wasserman Schultz said no financial documents were compromised. Nonsense! Just look through the Democratic Party lists of donors! They say there were no secret docs! Lies again! Also I have some secret documents from Hillary’s PC she worked with as the Secretary of State.”
Among the files sent to Gawker are what appear to be several lists of donors, including email addresses and donation amounts, grouped by wealth and specific fundraising events. Gawker has not yet been able to verify that the Trump file was produced by the DNC, but we have been able to independently verify that the financial documents were produced by people or groups affiliated with the Democratic Party.
Also included are memos marked “confidential” and “secret” that appear to date back to 2008, and pertain to Obama’s transition into the White House, and a file marked “confidential” containing Hillary’s early talking points, at least some of which ended up being repeated verbatim in her April, 2015 candidacy announcement.
Finally, there is a May, 2015 memo outlining a proposed strategy against the field of potential GOP candidates. Donald Trump, who had not yet officially announced his candidacy, does not appear in the document.
The purported hacker writes “it was easy, very easy” to hack and extract thousands of files from the DNC network, “the main part” of which he or she claims are in the custody of Wikileaks. He or she also appears to have sent the documents to The Smoking Gun, which posted about the dossier earlier today.
Warren Flood did not immediately return a request for comment. DNC Press Secretary Mark Paustenbach was not able to immediately confirm the authenticity of the documents, but the party is aware that they’re circulating.
“Be agile; release early; release often.” We know the drill. But is it strategically wise to keep rolling out features often? Especially once a product you’re building reaches a certain size, you probably don’t want to risk the integrity of your application with every new minor release.
The worst thing that can happen is that loyal users, customers who have been using that one little feature consistently over the years, suddenly aren’t able to use it in the same convenient way; the change might empower users more, but the experience becomes less straightforward. Frustration and anxiety enter social media quickly and suddenly, and the pressure on customer support to respond meaningfully and in time increases with every minute. Of course, we don’t want to roll out new features only to realize that they actually hurt loyal users.
We can prevent this by being more strategic when rolling out new versions of our products. In this article, we’ll look into a strategy for product designers and front-end engineers to thoroughly test and deploy a feature before releasing it to the entire user base, and how to keep UX issues from creeping in down the road.
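One common building block for this kind of staged release is a percentage-based rollout: each user is assigned a stable bucket by hashing their ID, and the feature is enabled only for buckets below the current rollout percentage. The sketch below is a minimal illustration, not any particular feature-flag library; the names (`is_feature_enabled`, the feature key) are assumptions.

```python
# Minimal percentage-rollout sketch: hash the user ID (salted with the
# feature name) into a stable 0-99 bucket, then enable the feature only
# for buckets below the rollout percentage. Deterministic hashing means
# the same user always gets the same answer across sessions.

import hashlib

def bucket(user_id: str, feature: str) -> int:
    """Deterministic 0-99 bucket, stable across sessions and deploys."""
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % 100

def is_feature_enabled(user_id: str, feature: str, rollout_percent: int) -> bool:
    return bucket(user_id, feature) < rollout_percent

# Roll a hypothetical "new-inbox-filter" out to 10% of users first,
# watch support channels and metrics, then raise the percentage.
print(is_feature_enabled("user-42", "new-inbox-filter", 10))
```

Salting the hash with the feature name matters: without it, the same 10% of users would receive every experimental feature at once, concentrating all the risk on one group.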
Before diving into an actual testing strategy, let’s step back and examine common misconceptions of how a new feature is designed, built and eventually deployed.
Whenever a new feature for an existing product is designed, the main focus is usually on how exactly it should be integrated in the existing interface. To achieve consistency, we designers will often look into existing patterns and apply the established design language to make the new feature sit well in the UI. However, problems often occur not because components don’t work together visually, but rather because they turn out to be confusing or ambiguous when combined in unexpected ways.
Perhaps the interface’s copy is ambiguous in related but distant areas of the website, or the outcome of two features being actively used at the same time makes sense from a technical perspective but doesn’t match user expectations or has major performance implications and hurts the UX.
In fact, in design, it is these numerous combinations that are so difficult to thoroughly predict and review. One way to approach the problem while already in the design process is by considering the outliers — use cases when things are more likely to go wrong. What would a user profile look like if the user’s name is very long? Is an overview of unanswered emails still obvious when a dozen inbox labels are being used? Would a new filter make sense for users who have just signed up and have just a few emails in their inbox?
How exactly can we design the outliers once we’ve identified them? A good strategy is to study the different states of the user interface. The “user interface stack,” an idea introduced by Scott Hurff, is versatile and complicated, and when we design our interfaces, usually it’s not enough to craft a pixel-perfect mockup in Photoshop, Sketch or HTML and CSS — we have to consider various edge cases and states: the blank state, the loading state, the partial state, the error state and the ideal state. These aren’t as straightforward as we might think.
As designers, we tend to focus on the ideal state and the error state. Yet from a UX perspective, the ideal state isn’t necessarily perfect, and the error state doesn’t have to be broken. (Image: “Why Your UI Is Awkward,” Scott Hurff)
The blank state doesn’t have to be empty — we could be using service workers to provide a better offline experience to regular visitors. The partial state doesn’t have to be broken — we could improve the experience with broken images and broken JavaScript through progressive enhancement.
The ideal state might significantly differ from our “perfect result” mockups — due to custom user preferences and the user’s browser choice; some content and web fonts might not be displayed because of a browser’s configuration, for example.
The Prefill Forms Bookmarklet lets you plug in predefined content snippets to check your web forms, including inputs that are too lengthy or too short.
So, the landscape is, as always, complex, convoluted and unpredictable, and we can’t make the risk of things going wrong negligible, but this doesn’t mean we can’t minimize the risk effectively. By exploring outliers and the entire user interface stack early on, we can prevent common UX issues in the early design stage. It doesn’t get easier on the technical side, though.
Even minor changes tend to lead to chain reactions, introducing bugs in areas and situations that seem to be absolutely unrelated. The main reason for this is the sheer amount of variables that influence the user experience but that are out of our control. We do know our ways with browsers, but that doesn’t mean we know more about the context in which a user chooses to see the website we have so tirelessly and thoroughly crafted.
Now, while minor changes like the padding on a button or a progressively enhanced textarea might not seem like a big deal, we tend to underestimate the impact of these shiny little changes or features on a large scale. Every single time we make a design or development decision, that change does have some effect in the complex system we’re building, mostly because the components we are building never exist in isolation.
The reality is that we never just build a button, nor do we ever just write a new JavaScript function — buttons and functions belong to a family of components or libraries, they all operate within a certain setting, and they are unavoidably connected to other parts of the system by their properties, their scope, their name, or the team’s unwritten conventions.
These “silent,” hardly noticeable connections are the reason why rolling out features is difficult, and why predicting the far-reaching consequences of a change often proves to be an exercise in keen eyesight. That’s why it’s a good idea to avoid unnecessary dependencies as far as you can, be it in CSS or JavaScript — they won’t help you with maintenance or debugging, especially if you’re relying on a library that you don’t fully understand.
The close area is typically reserved for our best friends, so no wonder we develop emotional connections with our phones. Yes, individual context matters, but there are also many other contexts that we have to consider.
Luckily, to better understand the impact of a change, we can use resources such as a browser’s developer tools. We can measure the reach of a selector or the reach of a JavaScript function, and sometimes it might be a good idea to keep coming back to it during development to keep the scope of the change as local and minimal as possible.
This is helpful, but it’s also just one part of the story. We make assumptions, consciously and unconsciously, based on our own experience with the interface and our own habits — often forgetting that assumptions might (and, hence, will) vary significantly from user to user. Most applications do have just one interface, but this interface or its configurations can have dozens of states — with views changing depending on the user’s settings and preferences.
Think about dashboards with cards that can be customized (analytics software), mail clients with “compact,” “comfortable” and “detailed” views (Gmail), a booking interface that changes for logged-in customers and for guests, a reading experience for people using an ad blocker or an aggressive antivirus filter. The butterfly effect has an impact on more than just the code base; all of those external factors weigh in as well, and testing against them — unlike with unit tests or QA in general — is very difficult because we often don’t even know what to test against.
We can use diagnostics and metrics to determine what changes need to be made, but by following data alone, you might end up stagnating at what we tend to call a “local maximum,” a state of the interface with a good enough design but that utterly lacks innovation because it always follows predictable, logical iterations. When working on a project and exploring the data, we tend to group features in the following four buckets:
Broken features. Features that appear to be broken or inefficient — obviously, we need to fix them;
Unused features. Features that work as intended but are rarely used — often a sign that they either should be removed or desperately need innovation;
Unexpected use features. Features that are used in a way that is extremely different from what their creators had originally envisioned — a good candidate for slow, continual refinement;
Workhorse features. Features that are heavily used and seem to be working as planned — in which case we ask ourselves whether there is any way to further improve their UX by exploring both the slow iterative process and entirely different innovative concepts in parallel.
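As a rough sketch, the four buckets above could be derived from two per-feature signals: how often a feature is used and how often it fails. The metric names and thresholds below are illustrative assumptions, not values from the article:

```javascript
// Classify a feature into one of the four buckets described above.
// usageRate and errorRate are fractions (0..1); expectedUse/actualUse
// describe how the feature was meant to be used vs. how it is used.
// Thresholds are assumptions for illustration only.
function classifyFeature({ usageRate, errorRate, expectedUse, actualUse }) {
  if (errorRate > 0.05) return 'broken';                   // fix first
  if (usageRate < 0.01) return 'unused';                   // remove or innovate
  if (expectedUse !== actualUse) return 'unexpected-use';  // refine slowly
  return 'workhorse';                                      // iterate and explore
}
```

In practice, such a classification would only be a starting point for discussion; the thresholds would need to be tuned per product.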
The first two buckets are critical for keeping an interface functional and usable, while the latter two are critical for keeping users engaged and delighted. Ideally, we want to reach both goals at the same time, but time, budget and team restrictions have the upper hand.
Still, once a new iteration or a new idea is chosen, it can be tempting to jump into designing or building the new feature right away. But before even thinking about how a feature would fit in an existing interface, it’s a good strategy to validate the idea first — with a quick prototype and user research. A common way to achieve this is by using a quick iterative process, such as Google Ventures’ design sprint. By iterating within a couple of days, you can identify how the new feature should be implemented and/or whether it’s useful in the way you had imagined it to be initially.
In design sprints, on Monday, you map out the problem; on Tuesday, you sketch solutions; on Wednesday, you build a testable hypothesis; on Thursday, you build a high-fidelity prototype; on Friday, you test.
With design sprints, we expose the idea to usability research early on. In Google Ventures’ methodology, you would test a design with five users a day; then, you would iterate and go through another round of testing of the new design. All of the same users are involved because, if you tested a different design with each user that day, you would have no valid data to determine which elements should change. You need a few users to validate one design iteration.
We apply a slightly different model in our sprints. When we start working on a new feature, once an early first prototype is built, we bring designers, developers and the UX team together in the same room, invite real users to test and then iterate on a tight schedule. On the first day, the first testers (two to three people) might be scheduled for a 30-minute interview at 9:00 am, the second group at 11:00 am, the next one at 2:00 pm, and the last one around 4:00 pm. In between user interviews, we have “open time windows,” when we actually iterate on the design and the prototype until at some point we have something viable.
The reason for this is that, early on, we want to explore entirely different, sometimes even opposite, directions quickly; once we gather feedback on different interfaces, we can converge towards what feels like the “absolute maximum” interface. We can get very diverse feedback on very diverse design iterations faster this way. The feedback is mostly based on three factors: heat maps that record user clicks, the time users need to complete a task, and how delightful the experience is to them. Later in the week, we keep working consistently with a larger number of users, very much like Google does, continually validating the new design as we go.
So far so good, but sometimes a seemingly innovative new feature collides with an existing feature, and having them both in the same interface would clutter the design. In this case, we explore whether one of the options could be considered an extension of the other. If it could be, then we start by reiterating its functionality and the design. That’s when we have to choose between radical redesign and incremental change. The latter is less risky and will keep a familiar interaction pattern for users, while the former is required if critical changes are impossible to achieve otherwise or if the gains from incremental changes would be too shallow.
In either case, it’s critical to keep the focus on the entire user experience of the product, rather than on the value of a single feature within that product. And once you’ve chosen the feature and you’ve designed and built the first prototype, it’s time to test.
Well, then, how do we effectively prevent errors and failures from creeping into an actual live environment? How many checks and reviews and tests do we run before a feature gets deployed? And in what sequence do we run these tests? In other words, what would the ultimate strategy for rolling out features look like?
At Mail.ru, a new feature has to go through seven levels of testing before it sees the light of day. (Video and article in Russian)
One of the better strategies for rolling out features was proposed by Andrew Sumin, head of development at Mail.ru, a major email provider in Russia. The strategy wouldn’t be applicable to every project, but it’s a reasonable and comprehensive approach for companies serving mid-sized and large products to thousands of customers.
Let’s look at the strategy in detail and walk through the eight steps of a feature roll-out, following Mail.ru’s process of product development:
test with developers,
test with real users in a controlled environment,
test with company-wide users,
test with beta testers,
test with users who manually opt in,
split-test and check retention,
release slowly and gradually,
measure the aftermath.
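The eight steps above can be sketched as an ordered pipeline: a feature only advances when the current stage passes, and a failure sends it back to development. The stage names and the “back to the lab” policy below are a simplification for illustration, not Mail.ru’s actual tooling:

```javascript
// The eight roll-out stages as an ordered pipeline. A feature advances
// one stage at a time; any failure resets it to the first stage.
const STAGES = [
  'developers',
  'controlled-user-tests',
  'company-wide',
  'beta-testers',
  'manual-opt-in',
  'split-test',
  'gradual-release',
  'aftermath',
];

function advance(feature, passed) {
  if (!passed) return { ...feature, stage: 0 };            // back to the lab
  const next = Math.min(feature.stage + 1, STAGES.length - 1);
  return { ...feature, stage: next };
}
```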
In Mail.ru’s case, the most important feature to keep intact no matter what is composing a message (obviously). That’s the most used piece of the interface, and allowing it to be unavailable or to work incorrectly even for seconds would be absolutely out of the question. So, what if we wanted to extend the functionality of a textarea, perhaps by adding a few smart autocomplete functions, or a counter, or a side preview?
The more time passes in development, the more expensive it becomes to fix a problem. Again, think about how connected all decisions are in product development; the more refined the product is, the more decisions have to be reverted, costing time and resources. So, identifying and resolving problems early on matters from both a business perspective and a design and development perspective.
You can’t debug an idea, though, so initial testing should take place during production, on the very first prototypes. The first testers at Mail.ru, then, are the developers who actually write the code. The company encourages its employees to use the product for in-house communication (and even private communication); so, developers could be considered hardcore users of the product.
Gremlins.js helps you check the robustness of a website by “unleashing a horde of undisciplined gremlins.”
The first step is quite obvious: design and build the feature, and then locally test, review and roll it out on the staging server. This is where QA testing comes in, with comprehensive tools and task runners that attempt to crash the feature and interface, potentially automated with monkey testing tools such as Gremlins.js.
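Gremlins.js itself runs against a browser DOM, so here is a minimal, framework-free sketch of the same monkey-testing idea that runs anywhere: fire random actions at a component and check that it never reaches a broken state. The `counterWidget` component and its invariant are invented purely for illustration:

```javascript
// A toy component with a health invariant: the counter must stay
// a non-negative integer no matter which actions are fired.
function counterWidget() {
  let value = 0;
  return {
    actions: {
      increment: () => { value += 1; },
      decrement: () => { value = Math.max(0, value - 1); },
      reset: () => { value = 0; },
    },
    isHealthy: () => Number.isInteger(value) && value >= 0,
  };
}

// Monkey test: fire `iterations` random actions and stop at the
// first action that breaks the component's invariant.
function unleashGremlins(widget, iterations, random = Math.random) {
  const names = Object.keys(widget.actions);
  for (let i = 0; i < iterations; i++) {
    const name = names[Math.floor(random() * names.length)];
    widget.actions[name]();
    if (!widget.isHealthy()) return { broken: true, at: i, action: name };
  }
  return { broken: false };
}
```

Real monkey-testing tools apply the same principle to a whole page: random clicks, scrolls and form input, with monitoring to catch errors.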
The results are monitored and then fed back into the feedback loop for the next iteration of the feature. At some point, the developers will feel quite confident with the build: the change seems to be working as expected, and the requirements have been met. That’s when real user testing kicks in.
2. Test With Real Users in a Controlled Environment
When the first working prototype is finished, the feature is tested with actual users in interviews. Customers are invited to complete tasks, and as they do, the UX team monitors dead ends and issues that pop up and addresses them on the spot.
However, not only is the new feature being tested; the usability test’s goal is to ensure that the new feature doesn’t affect critical components of the interface, which is why users complete routine tasks, such as composing a message and opening, replying to and browsing emails in their inbox. If both the new feature and the old features are well understood, the process can move on.
Obviously, the feedback from the usability test prompts developers to introduce changes, which then feed back to the usability testers, going back and forth until the result seems to hold value for a larger audience. The next step, then, is for the feature to be spotlighted within the company: A company-wide email is sent out encouraging all colleagues to check the feature and submit reports, bugs and suggestions in a tracker.
With testing, there isn’t a particularly big difference between users in “remote” departments within the company and users in the wild. Even internal users don’t know what changes to expect or know exactly what a feature does or how it’s supposed to work or look like. The only main difference is that colleagues can be prompted to quickly submit feedback or leave a comment. That’s when voting forms are introduced. Testers can not only play with the feature but also add a comment and upvote or downvote it. Voting has to be weighed against product strategy and business requirements, but if users clearly find a feature useless or helpful, that’s a simple and effective way to gather feedback and to test whether the product works as expected.
If a feature has passed a technical check, a usability check and review within the company, the next logical step is to introduce it to some segments of the audience. However, instead of rolling it out to a random segment of users, the team submits a feature for review among beta testers — users who have opted to participate in tests and submit feedback for experimental features. They can downvote or upvote a feature, as well as report bugs and commit pieces of code.
But how do you choose appropriate beta testers? Well, if you want to encourage testers to break the interface, you might focus on advanced, loyal users with technical skills — users who would be able to provide technical detail about a bug if necessary, and who know the existing interface well enough to anticipate problems that other users might have.
However, you need criteria to determine whether a user is advanced enough to be a beta tester. In the case of an email client, it could be someone who uses Chrome or Firefox (i.e. they know how to change their default browser), who has created more than three folders in their inbox and who has also installed the mobile app.
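The eligibility criteria from the email-client example can be expressed as a simple predicate. The field names are assumptions; the thresholds come straight from the example above:

```javascript
// "Advanced user" check for the email-client example: uses Chrome or
// Firefox (i.e. has changed their default browser), has created more
// than three folders, and has installed the mobile app.
function isAdvancedUser(user) {
  const nonDefaultBrowser = ['chrome', 'firefox'].includes(user.browser);
  return nonDefaultBrowser && user.folderCount > 3 && user.hasMobileApp;
}
```

In a real product, such criteria would be queried from analytics data rather than evaluated per request, but the shape of the rule stays the same.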
Up until this point, the tests have involved a manageable number of users, configurations and test reports. Yet the diversity of users, systems and configurations out in the wild, including operating system, browser, plugins, network settings, antivirus software and other locally installed applications, can be slightly more daunting in scale.
In Mail.ru’s case, the next step is to roll out the feature in a live interface, behind a flag, and to send out an email to this larger segment of active users, presenting the new feature and inviting them to activate it on their own in the interface, usually with a shiny “Update” button. To measure the value of the feature to actual users, the team again uses a voting system, with a few prompts here and there, basically asking users whether they find the feature helpful or useful. Notice that the difference between this level and the previous level is that the manual opt-in involves a much larger audience — many of whom aren’t technical at all, unlike beta testers.
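Shipping “behind a flag” with manual opt-in could look like the following sketch: the feature is disabled by default, and each user flips it on via the “Update” button. All names here are illustrative, not Mail.ru’s actual implementation:

```javascript
// A minimal per-user feature-flag store for the manual opt-in stage.
function createFlagStore() {
  const optIns = new Map(); // userId -> Set of enabled feature flags
  return {
    // Called when the user clicks the "Update" button for a feature.
    optIn(userId, flag) {
      if (!optIns.has(userId)) optIns.set(userId, new Set());
      optIns.get(userId).add(flag);
    },
    // Checked when rendering the interface; defaults to disabled.
    isEnabled(userId, flag) {
      return optIns.get(userId)?.has(flag) ?? false;
    },
  };
}
```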
So, timing and coordination matter. You probably wouldn’t pick a random day to send out the email to active users, because you’ll want the customer support team and developers to be available when the stream of bug reports starts coming in. That’s why the email is sent out at the beginning of the week, when all (or most) developers are available and the support team is ready to spring into action, having been briefed and actively connected with the developers via Skype or Slack. In a smaller company, you could even have developers sit in for a few hours at support desks to get to the core of a problem faster by speaking directly to customers.
In the steps thus far, except for usability testing, all testers have used the new feature voluntarily. However, if you enable the feature by default, suddenly users will have to use it, and this is a very different kind of group, one we haven’t tested at all.
To make sure you don’t break the habits of passive users, you could split-test with three small segments of users and measure retention. After all, you want to make sure that a new version works at least as well as the previous one. Identify the most important activity in the interface and measure not only how much time users spend on it before and after the roll-out, but also how much time passes until they return. In Mail.ru’s case, retention entails users checking their email and composing a message. The more often a user comes back, the higher the retention, which is an indicator of a better UX.
Each segment gets a slightly different view, which enables us to test how to display the new feature to all users later. For the first segment, we add the new feature and provide a tutorial on how to use it. For the second segment, we just add the new feature. For the third segment, we leave the interface as is. For all of these segments, we could implement the change at the same time, select a reasonable timeframe to run the test, measure retention and then compare results. The higher the retention of a segment, the more likely that design will be promoted to all users later on.
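A common way to run such a split test is to assign users to segments deterministically, so that each user always sees the same variant, and then compare a retention metric per segment. This is a generic sketch under assumed names, not a description of Mail.ru’s system:

```javascript
// The three segments described above: feature plus tutorial,
// feature only, and an unchanged control group.
const SEGMENTS = ['feature-with-tutorial', 'feature-only', 'control'];

// Stable hash-based assignment: the same user always lands
// in the same segment for the duration of the test.
function assignSegment(userId) {
  let hash = 0;
  for (const ch of userId) hash = (hash * 31 + ch.charCodeAt(0)) >>> 0;
  return SEGMENTS[hash % SEGMENTS.length];
}

// Retention here is simplified to the average number of return
// visits per user within a segment during the test window.
function retention(visitsByUser) {
  const counts = Object.values(visitsByUser);
  if (counts.length === 0) return 0;
  return counts.reduce((a, b) => a + b, 0) / counts.length;
}
```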
If a feature has made it all the way to this point, then it probably already works well for a large segment of the audience. This is when you could gradually roll it out to all users — with a voting prompt to gather feedback. If the feedback is mostly positive, you can keep rolling out the feature and it will eventually become an integral part of the interface. Otherwise, you would evaluate the feedback and return to the lab for the next iteration.
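A gradual roll-out is often implemented as a percentage ramp with stable per-user bucketing, so that raising the percentage from, say, 5% to 100% never turns the feature off for someone who already has it. A minimal sketch, with invented names:

```javascript
// Map each user to a stable bucket in 0..99.
function bucket(userId) {
  let hash = 0;
  for (const ch of userId) hash = (hash * 31 + ch.charCodeAt(0)) >>> 0;
  return hash % 100;
}

// The feature is on for a user once the roll-out percentage
// exceeds their bucket; raising the percentage only adds users.
function isRolledOut(userId, percent) {
  return bucket(userId) < percent;
}
```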
Rolling out the feature isn’t enough, though: It has to be communicated to users. A common way to do that is through email and social media. Still, a quick walkthrough tutorial explaining the value of the feature in real-life scenarios might be helpful, too. Also, don’t forget to integrate a suggestions box to gather feedback immediately.
Once the feature has been rolled out, we can monitor how it performs and try different methods to draw attention to it, so that users will be able to perform their tasks more efficiently. You could track the most common tasks or most visited pages and then display a little inline note recommending a slightly smarter and faster way for the user to achieve their goal, and then measure whether the user prefers this new feature or the usual method.
Don’t forget to bring the feedback back to the entire team, not only the developers or designers, so that everyone stays motivated and engaged and sees how people use a feature that was initially nothing more than a rough idea. Nothing is more motivating than seeing happy, delighted people using an application exactly the way you envisioned, or in entirely different ways. It will also feed into the team’s work on subsequent features.
The review process looks complex and thorough, but sometimes only time and a wide net for user testing will uncover a problem. For example, if a change affects what the overview of incoming messages looks like, no unit test could uncover difficulties that users of assistive software might encounter. In a mail interface, what do you want the accessibility device to read out first: the date, the sender, the subject line or the message itself? The way you rearrange the columns in the overview might change the way users choose to access the information; so, allowing them to turn off the new feature would be critical as well.
So, what does a roll-out strategy look like? You could start by exploring the graph of dependencies to understand how far-reaching your change might be. Then, you could test the feature with developers and with real users in a controlled environment. Then, you could ask colleagues to review the feature, before sending it to a select group of beta testers. Finally, you could make the feature available as an option to users. And, before enabling the feature for everybody, you could run a split-test to figure out the best way to introduce the feature, and then measure the retention rates for critical tasks.
Obviously, deployment is not a linear process. Throughout the process, you might need to take two steps back in order to move one step forward — until you finally have a release candidate. The workflow covered above might seem quite slow and not particularly agile, but you drastically minimize the risk of users suddenly being confronted with an unexpected problem and having an inferior experience as a result. In some situations, it might very well be worth it.