How to Tune Up a Lawn Mower in Five Steps

Gas-powered lawn mowers, like any machine driven by an internal combustion engine, require maintenance over time to keep running smoothly. Here are the five things you should check to give your mower a tune up.

In this video from This Old House, Roger Cook shows us how to keep the mower running happily. If you’re familiar with engines, you’ve probably done most of this before—but personally, I need some guidance. First of all, you should check the oil level (ideally before every use) and change the oil if it’s dirty. Then check the air filter, as a dirty, clogged filter will just make the engine work harder and less efficiently. Next up is the spark plug; Cook suggests you change it “every season” (i.e., once per mowing season).

Of course, you should also check that the blade hasn’t dulled too much and sharpen it if necessary.

What I didn’t know is that old gasoline can go “stale,” and that you should only buy a small quantity (say, a month’s worth) at a time. Otherwise the aging ethanol in the gas can damage the engine. You can add a fuel stabilizer to extend the life of the gas and prevent that damage. Get those five items in check, and all you need is the lawn.

How to Tune Up Your Lawn Mower | This Old House

via Lifehacker
How to Tune Up a Lawn Mower in Five Steps

DataScience launches a service to easily query info and build models from anywhere

Many of the world’s largest companies — like Facebook with FBLearner — have built an edge for themselves by making the study of their users and their data an ongoing process, tracking closely how things change over time. But for other businesses, those studies might just end with a single chart or predictive analysis report.

Ian Swanson started DataScience about two and a half years ago in order to provide those businesses with that kind of information. The company employed data scientists internally who would work with outside businesses. But over the past year, DataScience has been building an array of tools — ones it has used internally — that businesses can hand to their own data scientists to get the same thing done. That toolset launches today in the form of DataScience Cloud.

Here’s the short version: with DataScience Cloud, employees are able to look up a range of information across a wide array of sources — from internal, unstructured databases to Salesforce accounts — with single SQL queries that have been optimized to work across all those buckets. They can then write predictive models for that data in the form of code, and deploy that code internally so other parts of the company can use it to run simulations or additional tests, in order to predict better outcomes.

“Data scientist, and data science, is a pretty narrow focused term like a lot of roles: statistician, actuary, and so on,” Swanson said. “The common term we might hear is data science. They spend a lot of time performing engineering tasks, many times they fail to make an impact in their business. They might create from a simple standpoint some SQL queries, start to build a model using python or R, but once they create that model, one they can predict a user is leaving business, what do they do with it? You have to become algorithmic.”

Part of the challenge that led to DataScience was the process of actually building out those models. For example, it might be easier for a data scientist to put together a predictive model using Python, R or MATLAB, but it may need to be implemented in Java to be used across the organization. If the data scientist isn’t an expert in Java, that means handing the project over to an engineer to rewrite it in Java, and not being able to make any adjustments without going through that engineer (or, of course, learning Java).

With this tool, data scientists can constantly tweak and update their models as new information comes in, as well as do it in the language they prefer. That keeps those teams more nimble and able to react more quickly to changes in the way people are using their tools. And it also means they have a better understanding of the scope of the models they can build and run, rather than there being a communicative disconnect between multiple departments in a company.

Another part of the problem was making sure that the query process was adaptable across multiple different skill sets, Swanson said. Because the role of “data scientist” is so broad and still growing, there are a lot of roles with expertise in only small parts of the puzzle, and DataScience has to fill in the holes by making it easier to query the right information. Any queries done through DataScience can also operate on NoSQL databases. In the end, getting that data has to be fast, easy, and highly open to interpretation if it’s going to be a viable product for a large array of companies.

There are a couple of risks when it comes to a business like DataScience and a product like this. The main one is whether a company with a similar tool — or others with similar tools — would simply open source those tools widely in order to speed up the rate of development on them. That would allow other companies to start implementing those tools, and even to build additional businesses on top of them.

Swanson’s argument against that is that DataScience handles the infrastructure side of things as well. Data science models often live on servers that are constantly online but aren’t always running those operations, he said, which is why the company has a pay-per-compute pricing model similar to what Amazon Web Services offers.

DataScience has raised $28 million in funding total from Crosscut Ventures, Greycroft Partners and Whitehart Ventures.

via TechCrunch
DataScience launches a service to easily query info and build models from anywhere

Own Your Own Humvee – Surplus Trucks Hitting Auction Market

I know as a little boy, I drooled over Hummers. Besides the fact that they were used by the military, the trucks were downright cool. Blocky, rugged, and all-terrain ready, the Hummer, and by extension the military Humvee, was a truck-fantasizing boy’s dream. However, as an adult, you realize that dreams often are not rooted […]

Read More …

The post Own Your Own Humvee – Surplus Trucks Hitting Auction Market appeared first on The Firearm Blog.


via The Firearm Blog
Own Your Own Humvee – Surplus Trucks Hitting Auction Market

T-Mobile tells iPhone owners not to install iOS 10 just yet (Updated)

T-Mobile issued a stern warning to its customers against downloading and installing the new iOS 10 update on their existing iPhone 6, 6 Plus and SE models. According to the T-Mobile website, doing so will cause the handset to "lose connectivity [to the T-Mobile network] in certain circumstances." Once that happens, the user can only re-establish their network connection by fully powering down the phone and restarting it. That said, the company does expect Apple to push a corrective patch live within the next 48 hours.
Update: On Thursday night, T-Mobile announced that Apple had released its patch for the connectivity issue. If you had already downloaded iOS 10 (and were therefore at risk for this issue), go to Settings > General > About to install the fix. If you hadn’t yet installed the new OS, feel free to do so without fear of having your cell service randomly drop.

Via: Verge

Source: T-Mobile

via Engadget
T-Mobile tells iPhone owners not to install iOS 10 just yet (Updated)

There’s Actual Hardcore Porn Hiding In iOS 10 [NSFW]

Last night, we discovered that typing the word “butt” into iOS 10’s new, baked-in GIF search leads you to a certain My Little Pony in a fairly compromised position. Apple’s already corrected that particular oversight, but they’re not done yet—because the new Messages app is also hiding actual, very easy-to-find hardcore porn.

Type in the word “huge,” for instance, and you’ll find an unpixelated version of this:

As we explained last night, Apple seems to be using search (in this case powered by Bing) to pull GIFs from a number of different sources. Its only censorship method thus far seems to be blocking potentially problematic words like “boobs” and “penis” and—as of this morning—“butt.” And there’s no reason for Apple to think that the word “huge” would bring up anything more than, say, a particularly large pillow or strawberry, except for the fact that of course it fucking would.

Now, it’s kind of understandable that a cartoon pony might slip through, but this is about as blatant as it gets. Apple is a massive tech company! You would think that, somewhere, in all their many departments, at least one person would be able to come up with an algorithm that knows when a dick is being sucked.

Because now, anytime anyone opens up their app to search “huge,” they will find the gif below.

Are you ready?

Like … really ready?

If you don’t want to see porn, close the page right now.

Because you’re about to see porn.

Alright, don’t say we didn’t warn you.

Here it is.

Apple: It just works.

Update 1:04 p.m.:

Apple has now blocked the word “huge.”

via Gizmodo
There’s Actual Hardcore Porn Hiding In iOS 10 [NSFW]

Agents of SHIELD Shows Off Its Badass Ghost Rider in This Fast, Arguably Furious New Promo


Want to see a flaming car ram another car? Or a car drive at a screaming, chained-up man? Or how about some graffiti touting the character’s legendary status? Well, it’s all here.

The new footage released on Twitter shows how much the new season of Agents of SHIELD is leaning on Ghost Rider. And it looks like they have a good reason to, since everything here is great:

We’ll see if Robbie Reyes (Gabriel Luna) lives up to the hype when AoS season four premieres on September 20.

via Gizmodo
Agents of SHIELD Shows Off Its Badass Ghost Rider in This Fast, Arguably Furious New Promo

Vectr Is a Free, Cross-Platform, Online Graphics Editor

Vectr is a new, free graphics editor that you can use on your desktop or in your web browser to create simple, clean vector graphics.

If you’ve ever used pretty much any photo-editing or illustration software, you’ll already be familiar with the straightforward interface. The difference to keep in mind is that vector drawings aren’t made of pixels; they’re more like polygons in a video game, and are infinitely scalable. That makes them ideal for designing mockups of webpages and apps, logos, fonts, and other illustrations that aren’t ‘hand-drawn.’

It’s pleasingly intuitive. High-end vector graphics editors like Adobe Illustrator have an intimidating learning curve, but Vectr keeps things pared down enough to be easily understandable without compromising utility. (If you’re a professional designer, Vectr is likely too pared down—but for average people there are more than enough features to get started.) You can save all your work online, share it with others, and export to PNG, JPG, or SVG file formats.

The software has been in development for two years and has just now come out of beta. Desktop versions are available for Windows, Mac, Linux, and Chromebook, and are functionally identical to the web counterpart. It’s a pretty useful tool for most people who don’t really need all the features of professional design software.

Vectr

via Lifehacker
Vectr Is a Free, Cross-Platform, Online Graphics Editor

Black Friday and Cyber Monday: Best Practices for Your E-Commerce Database

This blog post discusses how you can protect your e-commerce database from a high-traffic disaster.

Databases power today’s e-commerce. Whether it’s listing items on your site, contacting your distributor for inventory, tracking shipments, payments, or customer data, your database must be up, running, tuned and available for your business to be successful.

There is no time when this is more important than on high-volume traffic days. Specific events throughout the year (such as Black Friday, Cyber Monday, or Singles’ Day) are known in advance to put extra strain on your database environment. But these are precisely the times your database can’t go down – these are the days that can make or break your year!

So what can you do to guarantee that your database environment is up to the challenge of handling high traffic events? Are there ways of preparing for this type of traffic?

Yes, there are! In this blog post, we’ll look at some of the factors that can help prepare your database environment to handle large amounts of traffic.

Synchronous versus Asynchronous Applications

Before moving to strategies, we need to discuss the difference between synchronous and asynchronous applications.

In most web-based applications, user input triggers a number of requests for resources. Once the server answers those requests, communication stops until the next input. This type of communication between a client and a server is called synchronous communication.

Synchronous communication restricts application updates. Even synchronous applications designed to automatically refresh information from the application server at regular intervals have consistent periods of delay between data refreshes. While such delays usually aren’t an issue, some applications (for example, stock-trading applications) rely on continuously updated information to provide their users optimum functionality and usability.

Web 2.0-based applications address this issue by using asynchronous communication. Asynchronous applications deliver continuously updated data to users by separating client requests from application updates, so multiple asynchronous communications between the client and server can occur simultaneously or in parallel.

The strategy you use to scale the two types of applications to meet growing user and traffic demands will differ.

Scaling a Synchronous/Latency-sensitive Application

When it comes to synchronous applications, you really have only one option for scaling performance: sharding. With sharding, the tables are divided and distributed across multiple servers, which reduces the total number of rows in each table. This consequently reduces index size, and generally improves search performance.

A shard can also be located on its own hardware, with different shards added to different machines. Distributing the database over many machines spreads out the load, also improving performance. Sharding allows you to scale read and write performance when latency is important.
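
As a rough sketch of how shard routing works (the shard hostnames and key scheme here are hypothetical, not part of any particular product), a hash of the row’s key deterministically picks which server holds it:

```python
import hashlib

# Hypothetical shard hosts; in production these would be real DB servers.
SHARDS = ["db0.example.internal", "db1.example.internal",
          "db2.example.internal", "db3.example.internal"]

def shard_for(customer_id: str) -> str:
    """Hash the row key so each shard holds a roughly equal,
    stable share of the table's rows."""
    digest = hashlib.sha1(customer_id.encode("utf-8")).hexdigest()
    return SHARDS[int(digest, 16) % len(SHARDS)]
```

Because the mapping is deterministic, all reads and writes for one customer always land on the same server, and each table’s row count (and index size) shrinks by roughly the number of shards.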

Generally speaking, it is better to avoid synchronous applications when possible – they limit your scalability options.

Scaling an Asynchronous Application

When it comes to scaling asynchronous applications, we have many more options than with synchronous applications. You should try and use asynchronous applications whenever possible:

  • Secondary/Slave hosts. Replication can be used to add more hardware for read traffic. Replication usually employs a master/slave relationship between a designated “original” server and copies of the server. The master logs and then distributes the updates to the slaves. This setup allows you to distribute the read load across more than one machine.
  • Caching. Database caching (tables, data, and models – caching summaries of data) improves scalability by distributing the query workload from expensive (overhead-wise) backend processes to multiple cheaper ones. It allows more flexibility for data processing: for example premium user data can be cached, while regular user data isn’t.

    Caching also improves data availability, by allowing applications to keep serving users even when backend services are unavailable. It also improves data access speeds by keeping data local and avoiding round-trip queries. There are some specific caching strategies you can use:
    • Pre-Emptive Caching. Ordinarily, an object gets cached the first time it is requested (or if cached data isn’t timely enough). Preemptive caching instead generates cached versions before an application requests them. Typically this is done by a cron process.
    • Hit/Miss Caching. A cache hit occurs when requested data is found in the cache: the CPU looks for the data in its closest memory location, usually the primary cache, and if it’s there, the data is served quickly by reading cache memory. A cache miss occurs when the data is not found there; the CPU then looks in a higher-level cache or in main memory (RAM), which slows down the overall process, and a new entry is created and copied into the cache before the processor can access it. The same logic applies to disk caches, where data requested once is stored for subsequent queries.
    • Client-side Caching. Client-side caching allows server data to be copied and cached on the client computer, which can reduce load times considerably.
  • Queuing Updates. Queues are used to order queries (and other database functions) in a timely fashion. There are queues for asynchronously sending notifications like email and SMS in most websites. E-commerce sites have queues for storing, processing and dispatching orders. How your database handles queues can affect your performance:
    • Batching. Batch processing can be used for efficient bulk database updates and automated transaction processing, as opposed to interactive online transaction processing (OLTP) applications.
    • Fan-Out Updates. Fan-out updates duplicate data in the database; duplicating the data eliminates slow joins and increases read performance.
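
The batching idea above can be sketched in a few lines (a hedged illustration only: the in-memory list stands in for a real bulk INSERT/UPDATE, and the class and field names are invented for this example):

```python
class OrderBatcher:
    """Accumulate order updates and flush them in bulk, trading a
    little latency for far fewer round trips to the database."""

    def __init__(self, flush_size: int = 100):
        self.flush_size = flush_size
        self.pending = []
        self.flushed_batches = []  # stands in for executed bulk writes

    def enqueue(self, order: dict) -> None:
        self.pending.append(order)
        if len(self.pending) >= self.flush_size:
            self.flush()

    def flush(self) -> None:
        if not self.pending:
            return
        # In production this would be a single multi-row INSERT/UPDATE.
        self.flushed_batches.append(list(self.pending))
        self.pending.clear()
```

With a flush size of 100, a burst of 10,000 order updates costs 100 database round trips instead of 10,000, which is exactly the kind of headroom you want on a Black Friday traffic spike.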

Efficient Usage of Data at Scale

As you scale up your database workload, you need to keep bad queries and access patterns from your applications out of the critical path.

  • Moving expensive queries out of the user request path. Even if your database server uses powerful hardware, its performance can be negatively affected by a handful of expensive queries. Even a single bad query can cause serious performance issues for your database. Make sure to use monitoring tools to track down the queries that are taking up the most resources.
  • Using caching to offload database traffic. Cache data away from the database using something like memcached. This is usually done at the application layer, and is highly effective.
  • Counters and In-Memory Stores. Use memory counters to monitor performance: pages/sec, faults/sec, available bytes, total server memory, target server memory, etc. Percona’s new in-memory storage engine for MongoDB can also help.
  • Connection Pooling. A connection pool is a set of cached database connections that are kept open so they can be reused for future requests to the database. Connection pooling can improve the performance of executing commands against a database.
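
The “caching to offload database traffic” point can be sketched with the classic cache-aside pattern (a hedged example: a plain dict stands in for memcached, and the 60-second TTL is an assumption, not a recommendation):

```python
import time

CACHE: dict = {}    # stands in for memcached or similar
TTL_SECONDS = 60    # assumed freshness window for cached entries

def get_cached(key: str, db_lookup):
    """Serve hits from memory; on a miss, query the database
    and populate the cache for subsequent requests."""
    entry = CACHE.get(key)
    if entry is not None and time.time() - entry[1] < TTL_SECONDS:
        return entry[0]                      # cache hit
    value = db_lookup(key)                   # cache miss: query the DB
    CACHE[key] = (value, time.time())
    return value
```

Only the first request within the TTL window touches the database; every subsequent request for the same key is served from memory, which is what keeps hot product pages from hammering the backend during a sale.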

Scaling Out (Horizontal) Tricks

Scaling horizontally means adding more nodes to a system, such as adding a new server to a distributed database environment. For example, scaling out from one Web server to three.

  • Pre-Sharding Data for Flexibility. Pre-sharding the database across the server instances allows you to have the entire environment resources available at the start of the event, rather than having to rebalance during peak event traffic.
  • Using “Kill Switches” to Control Traffic. The idea of a kill switch is a single point where you can stop the flow of data to a particular node. Strategically set up kill switches allow you to stop a destructive workload that begins to impact the entire environment.
  • Limiting Graph Structures. By limiting the size or complexity of graph structures in the database, you will simplify data lookups and data size.
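
The kill-switch idea above amounts to a single shared flag that is checked before traffic is routed to a node. A minimal thread-safe sketch (the class and node names are illustrative, not from any particular tool):

```python
import threading

class KillSwitch:
    """A single, thread-safe point where traffic to any one node
    can be cut off without touching the rest of the environment."""

    def __init__(self):
        self._disabled = set()
        self._lock = threading.Lock()

    def disable(self, node: str) -> None:
        """Stop routing traffic to this node."""
        with self._lock:
            self._disabled.add(node)

    def enable(self, node: str) -> None:
        """Resume routing traffic to this node."""
        with self._lock:
            self._disabled.discard(node)

    def allows(self, node: str) -> bool:
        """Routing layer calls this before dispatching to a node."""
        with self._lock:
            return node not in self._disabled
```

During an event, an operator who sees one shard dragging the environment down can call `disable("shard-3")` and isolate it immediately, instead of waiting for a destructive workload to spread.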

Scaling with Hardware (Vertical Scaling)

Another option for handling the increased traffic load is adding more hardware to your environment: more servers, more CPUs, more memory, etc. This, of course, can be expensive. One option here is to pre-configure your testing environment to become part of the production environment if necessary. Another is to pre-configure more Database-as-a-Service (DaaS) instances for the event (if you are using cloud-based services).

Whichever method you choose, be sure to verify and test your extra servers and environment before your drop-dead date.

Testing Performance and Capacity

As always, in any situation where your environment is going to be stressed beyond usual limits, testing under real-world conditions is a key factor. This includes not only testing for raw traffic levels, but also the actual workloads that your database will experience, with the same volume and variety of requests.

Knowing Your Application and Questions to Ask at Development Time

Finally, it’s important that you understand what applications will be used and querying the database. This sort of common sense idea is often overlooked, especially when teams (such as the development team and the database/operations team) get siloed and don’t communicate.

Get to know who is developing the applications that are using the database, and how they are doing it. As an example, a while back I had the opportunity to speak with a team of developers, mostly to just understand what they were doing. In the process of whiteboarding the app with them, we discovered a simple query issue that – now that we were aware of it – took little effort to fix. These sorts of interactions, early in the process, can save a great deal of headache down the line.

Conclusion

There are many strategies that can help you prepare for high traffic events that will impact your database. I’ve covered a few here briefly. For an even more thorough look at e-commerce database strategies, attend my webinar “Black Friday and Cyber Monday: How to Avoid an E-Commerce Disaster” on Thursday, September 22, 2016 10:00 am Pacific Time.

Register here.

via Planet MySQL
Black Friday and Cyber Monday: Best Practices for Your E-Commerce Database

Latest ‘Guccifer 2.0’ leak drops Tim Kaine’s phone number

The hacker known as "Guccifer 2.0," who pillaged the DNC’s computers, has released another collection of documents at a cybersecurity conference in London. While it doesn’t contain private emails this time around, it has what appears to be several members’ personal info, including the cellphone number of vice presidential candidate Tim Kaine. The collection also includes the finances, email addresses, phone numbers and mailing addresses of the party’s donors, along with details of the DNC’s network infrastructure. Take note, though, that this latest cache was uploaded to a file-sharing service instead of to Guccifer 2.0’s website or WikiLeaks, and the documents haven’t been verified yet. WikiLeaks’ Twitter account shared the link where you can download the 670MB file, though, along with its password.

Interim DNC Chair Donna Brazile put the blame on the Russian government and its quest to influence the presidential elections in the US. According to Politico, she also said that if there’s anyone who’ll benefit from all these leaks, it’s Donald Trump, who has "embraced Russian President Vladimir Putin" and "publicly encouraged further Russian espionage to help his campaign."

Guccifer 2.0 got his name from the original Guccifer, who broke into Hillary Clinton’s private email server to steal and distribute her digital missives and her medical and financial information. The first Guccifer has already been sentenced to a little over four years in prison, but we’re still not sure who 2.0 really is. Security experts believe, however, that the persona is just a front for Russian cyberspies carrying out government-sanctioned attacks.

Source: NBC News, Politico, WikiLeaks (Twitter)

via Engadget
Latest ‘Guccifer 2.0’ leak drops Tim Kaine’s phone number