Agents of SHIELD Shows Off Its Badass Ghost Rider in This Fast, Arguably Furious New Promo


Want to see a flaming car ram another car? Or a car drive at a screaming, chained-up man? Or how about some graffiti touting the character’s legendary status? Well, it’s all here.

The new footage released on Twitter shows how much the new season of Agents of SHIELD is leaning on Ghost Rider. And it looks like they have a good reason to, since everything here is great:

We’ll see if Robbie Reyes (Gabriel Luna) lives up to the hype when AoS season four premieres on September 20.

via Gizmodo
Agents of SHIELD Shows Off Its Badass Ghost Rider in This Fast, Arguably Furious New Promo

Vectr Is a Free, Cross-Platform, Online Graphics Editor

Vectr is a new, free graphics editor that you can use on your desktop or in your web browser to create simple, clean vector graphics.

If you’ve ever used pretty much any photo-editing or illustration software, you’ll already be familiar with the straightforward interface. The key difference to keep in mind is that vector-based drawings aren’t made of pixels; they’re more like polygons in a video game, and are infinitely scalable. That makes them ideal for designing mockups of webpages and apps, logos, fonts, and other illustrations that aren’t ‘hand-drawn.’

It’s pleasingly intuitive. High-end vector graphics editors like Adobe Illustrator have an intimidating learning curve, but Vectr keeps things pared down enough to be easily understandable without compromising utility. (If you’re a professional designer, Vectr is likely too pared down—but for average users there are more than enough features to get started.) You can save your work online, share it with others, and export to PNG, JPG, or SVG file formats.

The software has been in development for two years and has just now come out of beta. Desktop versions are available for Windows, Mac, Linux, and Chromebook, and are functionally identical to the web version. It’s a pretty useful tool for most people who don’t really need all the features of professional design software.

Vectr

via Lifehacker
Vectr Is a Free, Cross-Platform, Online Graphics Editor

Black Friday and Cyber Monday: Best Practices for Your E-Commerce Database

This blog post discusses how you can protect your e-commerce database from a high-traffic disaster.

Databases power today’s e-commerce. Whether it’s listing items on your site, contacting your distributors for inventory, or tracking shipments, payments, and customer data, your database must be up, running, tuned, and available for your business to succeed.

There is no time when this is more important than on high-volume traffic days. Specific events throughout the year (such as Black Friday, Cyber Monday, or Singles’ Day) are guaranteed to put extra strain on your database environment – and those are exactly the days your database can’t go down. These are the days that can make or break your year!

So what can you do to guarantee that your database environment is up to the challenge of handling high traffic events? Are there ways of preparing for this type of traffic?

Yes, there are! In this blog post, we’ll look at some of the factors that can help prepare your database environment to handle large amounts of traffic.

Synchronous versus Asynchronous Applications

Before moving to strategies, we need to discuss the difference between synchronous and asynchronous applications.

In most web-based applications, user input triggers a number of requests for resources. Once the server answers those requests, communication stops until the next user input. This type of communication between a client and server is called synchronous communication.

Synchronous communication limits how often application data can be refreshed. Even synchronous applications designed to automatically refresh server information at regular intervals have consistent periods of delay between data refreshes. While such delays usually aren’t an issue, some applications (for example, stock-trading applications) rely on continuously updated information to give their users optimum functionality and usability.

Web 2.0-based applications address this issue by using asynchronous communication. Asynchronous applications deliver continuously updated data to users. Asynchronous applications separate client requests from application updates, so multiple asynchronous communications between the client and server can occur simultaneously or in parallel.

The strategy you use to scale the two types of applications to meet growing user and traffic demands will differ.

Scaling a Synchronous/Latency-sensitive Application

When it comes to synchronous applications, you really have only one option for scaling performance: sharding. With sharding, the tables are divided and distributed across multiple servers, which reduces the total number of rows in each table. This consequently reduces index size, and generally improves search performance.

A shard can also be located on its own hardware, with different shards placed on different machines. Distributing the database over many machines spreads the load, further improving performance. Sharding lets you scale both read and write performance when latency is important.
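
To make the idea concrete, here is a minimal sketch of hash-based shard routing in Python. The shard hosts, table, and user ID are made up for illustration – real deployments usually rely on sharding middleware or built-in tooling rather than hand-rolled routing – but the principle is the same: every query for a given key goes to the one server that owns it.

import hashlib

# Hypothetical shard map: each entry is a separate database host
# holding only a slice of the total data set.
SHARDS = ["db-shard-0.example.com", "db-shard-1.example.com",
          "db-shard-2.example.com", "db-shard-3.example.com"]

def shard_for(user_id):
    """Pick a shard deterministically from the sharding key."""
    digest = hashlib.md5(str(user_id).encode()).hexdigest()
    return SHARDS[int(digest, 16) % len(SHARDS)]

# The application sends the query only to the shard that owns this user,
# so each server keeps fewer rows and smaller indexes.
host = shard_for(42)
print("SELECT * FROM orders WHERE user_id = 42 -- runs on", host)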

Generally speaking, it is better to avoid synchronous applications when possible – they limit your scalability options.

Scaling an Asynchronous Application

When it comes to scaling asynchronous applications, we have many more options than with synchronous applications. You should try and use asynchronous applications whenever possible:

  • Secondary/Slave hosts. Replication can be used to add more hardware for read traffic. Replication usually employs a master/slave relationship between a designated “original” server and copies of the server. The master logs and then distributes the updates to the slaves. This setup allows you to distribute the read load across more than one machine.
  • Caching. Database caching (tables, data, and models – caching summaries of data) improves scalability by distributing the query workload from expensive (overhead-wise) backend processes to multiple cheaper ones. It allows more flexibility for data processing: for example premium user data can be cached, while regular user data isn’t.

    Caching also improves data availability: applications that serve from the cache can keep working even when backend services are temporarily unavailable. It also speeds up data access by keeping data close to the application and avoiding round-trip queries. There are some specific caching strategies you can use:
    • Pre-Emptive Caching. Ordinarily, an object gets cached the first time it is requested (or if cached data isn’t timely enough). Preemptive caching instead generates cached versions before an application requests them. Typically this is done by a cron process.
    • Hit/Miss Caching. A cache hit occurs when requested data is found in the cache (the system checks its closest, fastest tier first, usually the primary cache); a cache miss occurs when it isn’t, and the data must be fetched from a slower tier – RAM, disk, or the backend database – and copied into a new cache entry before it can be served. Hits return data quickly, while misses slow down the overall request, so the goal is to keep the hit rate high (see the cache-aside sketch after this list).
    • Client-side Caching. Client-side caching copies and caches server data on the client computer, which can cut load times considerably.
  • Queuing Updates. Queues are used to order queries (and other database functions) in a timely fashion. Most websites use queues to send notifications such as email and SMS asynchronously, and e-commerce sites use them to store, process, and dispatch orders. How your database handles queues can affect your performance:
    • Batching. Batch processing can be used for efficient bulk database updates and automated transaction processing, as opposed to interactive online transaction processing (OLTP) applications (see the batching sketch after this list).
    • Fan-Out Updates. Fan-out updates duplicate data across the database; the duplicated data eliminates slow joins and increases read performance.
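
To make the hit/miss idea concrete, here is a minimal cache-aside sketch in Python. The TTL, key names, and load_from_database function are illustrative assumptions rather than any particular caching product: on a hit the data comes straight from the in-memory store, on a miss it is loaded from the backend and copied into the cache for the next request.

import time

CACHE = {}          # key -> (value, expires_at); stands in for memcached or similar
TTL_SECONDS = 60

def load_from_database(key):
    # Placeholder for the expensive backend query.
    return {"sku": key, "price": 19.99}

def get(key):
    entry = CACHE.get(key)
    now = time.time()
    if entry is not None and entry[1] > now:
        return entry[0]                      # cache hit: fast path
    value = load_from_database(key)          # cache miss: slow path
    CACHE[key] = (value, now + TTL_SECONDS)  # copy into the cache for later requests
    return value

# Pre-emptive caching: a cron-style job can warm popular keys
# before any user asks for them.
for popular_key in ("sku-1001", "sku-1002"):
    get(popular_key)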
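
And here is a rough sketch of queued, batched writes, again in Python. It uses only the standard library, with an in-memory SQLite table standing in for the real order store; the point is that producers enqueue work while a worker drains the queue and applies updates in bulk, amortizing round trips to the database.

import queue
import sqlite3

orders = queue.Queue()
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE orders (id INTEGER, total REAL)")

# Producers (web requests) enqueue work instead of writing to the database directly.
for i in range(250):
    orders.put((i, 9.99))

# A worker drains the queue and writes in batches instead of row by row.
BATCH_SIZE = 100
while not orders.empty():
    batch = []
    while len(batch) < BATCH_SIZE and not orders.empty():
        batch.append(orders.get())
    db.executemany("INSERT INTO orders (id, total) VALUES (?, ?)", batch)
    db.commit()

print(db.execute("SELECT COUNT(*) FROM orders").fetchone()[0], "orders written")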

Efficient Usage of Data at Scale

As your database workload scales up, you need to keep bad queries and harmful access patterns out of your applications.

  • Moving expensive queries out of the user request path. Even if your database server uses powerful hardware, its performance can be negatively affected by a handful of expensive queries. Even a single bad query can cause serious performance issues for your database. Make sure to use monitoring tools to track down the queries that are taking up the most resources.
  • Using caching to offload database traffic. Cache data away from the database using something like memcached. This is usually done at the application layer, and is highly effective.
  • Counters and In-Memory Stores. Use in-memory counters to monitor performance: pages/sec, faults/sec, available bytes, total server memory, target server memory, and so on. Percona’s new in-memory storage engine for MongoDB can also help here.
  • Connection Pooling. A connection pool is a set of cached database connections that are kept open and reused for future requests, avoiding the cost of establishing a new connection for every query. Connection pools can markedly improve the performance of executing commands against a database (see the pooling sketch after this list).
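
As a rough illustration of the pooling idea, here is a toy pool over SQLite connections in Python. A production setup would use the pool that ships with your driver or framework, but the mechanics are the same: connections are opened once, handed out on demand, and returned instead of closed.

import queue
import sqlite3

class ConnectionPool:
    """Keeps a fixed set of open connections and hands them out on demand."""

    def __init__(self, size=5):
        self._pool = queue.Queue()
        for _ in range(size):
            self._pool.put(sqlite3.connect(":memory:", check_same_thread=False))

    def acquire(self):
        return self._pool.get()      # blocks if every connection is in use

    def release(self, conn):
        self._pool.put(conn)         # return the connection instead of closing it

pool = ConnectionPool(size=3)
conn = pool.acquire()
try:
    conn.execute("SELECT 1")
finally:
    pool.release(conn)               # reuse beats reconnecting on every request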

Scaling Out (Horizontal) Tricks

Scaling horizontally means adding more nodes to a system, such as adding a new server to a distributed database environment or software application. For example, scaling out from one web server to three.

  • Pre-Sharding Data for Flexibility. Pre-sharding the database across the server instances allows you to have the entire environment resources available at the start of the event, rather than having to rebalance during peak event traffic.
  • Using “Kill Switches” to Control Traffic. A kill switch is a single point where you can stop the flow of data to a particular node. Strategically placed kill switches let you stop a destructive workload before it impacts the entire environment (see the kill-switch sketch after this list).
  • Limiting Graph Structures. By limiting the size or complexity of graph structures in the database, you will simplify data lookups and data size.
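
A kill switch can be as simple as a shared flag that expensive code paths check before running. Here is a minimal sketch in Python; the feature names and in-process dictionary are hypothetical, and in practice the flags usually live in a shared configuration store so they can be flipped without a deploy.

KILL_SWITCHES = {
    "recommendations": False,   # flip to True to shed this workload during a spike
    "sales_reports": False,
}

def is_killed(feature):
    return KILL_SWITCHES.get(feature, False)

def build_recommendations(user_id):
    if is_killed("recommendations"):
        return []                # degrade gracefully instead of hammering the database
    # ... expensive queries would run here ...
    return ["sku-1001", "sku-1002"]

print(build_recommendations(42))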

Scaling with Hardware (Vertical Scaling)

Another option to handle the increased traffic load is adding more hardware to your environment: more servers, more CPUs, more memory, etc. This, of course, can be expensive. One option here is to pre-configure your testing environment to become part of the production environment if necessary. Another is to pre-configure more Database-as-a-Service (DaaS) instances for the event (if you are using cloud-based services).

Whichever method you choose, be sure to verify and test your extra servers and environment before your drop-dead date.

Testing Performance and Capacity

As always, in any situation where your environment is going to be stressed beyond usual limits, testing under real-world conditions is a key factor. This includes not only testing for raw traffic levels, but also the actual workloads that your database will experience, with the same volume and variety of requests.

Knowing Your Application and Questions to Ask at Development Time

Finally, it’s important that you understand what applications will be used and querying the database. This sort of common sense idea is often overlooked, especially when teams (such as the development team and the database/operations team) get siloed and don’t communicate.

Get to know who is developing the applications that are using the database, and how they are doing it. As an example, a while back I had the opportunity to speak with a team of developers, mostly to just understand what they were doing. In the process of whiteboarding the app with them, we discovered a simple query issue that – now that we were aware of it – took little effort to fix. These sorts of interactions, early in the process, can save a great deal of headache down the line.

Conclusion

There are many strategies that can help you prepare for high traffic events that will impact your database. I’ve covered a few here briefly. For an even more thorough look at e-commerce database strategies, attend my webinar “Black Friday and Cyber Monday: How to Avoid an E-Commerce Disaster” on Thursday, September 22, 2016 10:00 am Pacific Time.

Register here.

via Planet MySQL
Black Friday and Cyber Monday: Best Practices for Your E-Commerce Database

Latest ‘Guccifer 2.0’ leak drops Tim Kaine’s phone number

The hacker known as "Guccifer 2.0," who pillaged the DNC’s computers, has released another collection of documents at a cybersecurity conference in London. While it doesn’t contain private emails this time around, it has what appears to be several members’ personal info, including the cellphone number of vice presidential candidate Tim Kaine. The collection also includes the finances, email addresses, phone numbers and mailing addresses of the party’s donors, along with details of the DNC’s network infrastructure. Take note, though, that this latest cache was uploaded to a file-sharing service instead of to Guccifer 2.0’s website or WikiLeaks, and the documents haven’t been verified yet. WikiLeaks’ Twitter account shared the link where you can download the 670MB file, though, along with its password.

Interim DNC Chair Donna Brazile put the blame on the Russian government and its quest to influence the presidential elections in the US. According to Politico, she also said that if there’s anyone who’ll benefit from all these leaks, it’s Donald Trump, who has "embraced Russian President Vladimir Putin" and "publicly encouraged further Russian espionage to help his campaign."

Guccifer 2.0 got his name from the original Guccifer, who broke into Hillary Clinton’s private email server to steal and distribute her digital missives, medical and financial information. Unlike the first Guccifer who was already sentenced to a little over four years in prison, we’re still not sure who 2.0 really is. Security experts believe, however, that the persona is just a front for Russian cyberspies carrying out government-sanctioned attacks.

Source: NBC News, Politico, WikiLeaks (Twitter)

via Engadget
Latest ‘Guccifer 2.0’ leak drops Tim Kaine’s phone number

This Tiny House Costs $1,200 and Takes Just Three Hours to Build

Image: Pin-Up Houses

As a general rule, I don’t believe in miniaturizing things for the sake of miniaturizing them, and that includes dogs and baked goods. (Golf is fine.) But this tiny house is an exception.

It’s called France, and it’s a prototype designed by Joshua Woodsman of Pin-Up Houses, which sells plans for sheds, cottages, and tiny houses. According to Woodsman, his latest creation only costs $1,200, and takes a team of three people about three hours to put together.

Woodsman says the prefabricated tiny house has 21 insulated panels connected by threaded rods. It comes with three sections—red, white, and blue—that feature a sleeping space, a “day zone” with a table and chairs, and a tiny kitchen. All in all, it comes in at 74 square feet, according to New Atlas. (There’s no bathroom, but that’s what the woods are for.)

Image: Pin-Up Houses

It’s only a prototype, so unfortunately, you and I can’t buy one and plant it down in that empty lot down the block. But Woodsman, whose name is so apt it might be fake, wants to “spread our tiny-house movement around the world,” so perhaps France will one day be ours. At least it’s bigger than my bedroom now!

[New Atlas]

via Gizmodo
This Tiny House Costs $1,200 and Takes Just Three Hours to Build

The solar panels and inverter we’d buy

By Mark Smirniotis

This post was done in partnership with The Sweethome, a buyer’s guide to the best things for your home. Read the full article here.

With solar power, there’s no one-size-fits-all solution. If buying a home is the largest financial investment most people will make, installing solar could very well be the second. Every installation needs to take into account electricity consumption, geographic location, roof orientation, local permits, and a host of other considerations. Once you have a rough idea of how much power you’ll need, in most cases the first option you should consider is a grid-tied system made up of Suniva Optimus 335W monocrystalline solar panels paired with SolarEdge P400 power optimizers, plus a SolarEdge inverter at the heart of it all.

Who this is for

Scene from The Last Man on Earth.

Not everyone who goes solar will need to shop for their own equipment. Our picks are intended for people who will buy and install their systems alone, or with their own electrician or contractor. If you buy or lease your equipment from an installer, you may not have much choice in which equipment you get, but understanding our picks can help you evaluate quotes and proposals.

In the future we may consider looking at off-grid components such as purpose-made inverters, charge controllers, and batteries, but for now we’ve focused on the grid-tied equipment that’s most common.

Regardless, everyone who is thinking about solar needs to start with the basics of system sizing and purchase options, as well as answer some fundamental questions about financing and installation. We go into the details of how to shop for solar power in our full guide, but to get you started we’ve gathered the basics into this flowchart that will help you figure out where you need to focus.

How we picked

How power flows through a grid-tied system when the sun comes out.

Before deciding whether we could recommend any components for solar power, we spent weeks compiling statistics, reaching out to solar-industry representatives, wading through specifications, and getting expert input—and even so, the picks we make here represent only a starting point on the road to solar for most people. With that in mind, we didn’t just pick equipment for people already interested in self-installation; we also looked at the best ways to learn about and shop for solar.

If you’re comparing solar panels, your first consideration should be reputation and warranty, followed by price and, to a lesser extent, efficiency. In the past five years, solar panels have started to become a commodity item, with small technical differences that are immaterial to most homeowners.

Every solar-power system requires a second component, called an inverter. These devices turn the direct current (DC) that the solar panels produce into alternating current (AC), which is what your home runs on. You can judge an inverter by some of the same qualities you’d look for in a good solar panel, namely reliability, warranty coverage, and cost.

Our pick for solar panels

Made by a reputable firm with a strong warranty, this module provides good output without a premium cost.

Suniva panels are efficient, affordable, and backed by a reputable warranty from a company with manufacturing in Georgia and Michigan. These panels come with a 10-year warranty and a 25-year power guarantee, matching the coverage of most other top-tier manufacturers. Currently around $1 per watt, the price is competitive, too, but prices fluctuate, and a local installer may have competitive costs on a similar panel. The Suniva panels are right in the middle of the pack for efficiency, not so low as to require the extra space that cut-rate panels may need, but not so high that you’re paying 50 percent more for engineering prestige you’ll never notice. If you can find panels from a similarly reputable company with the same warranty and similar efficiency but a lower price tag, you’ll probably be just as happy with them. But the Suniva panels should be the bar that you try to clear as you shop.

Our pick for an inverter

Left: SolarEdge power optimizers installed on the racking, each waiting to be paired with a solar panel. Right: Two SolarEdge inverters at the heart of a large system turn DC power into AC power. Photo: SolarEdge

Even the best panels are only as good as the inverter you pair them with, so for most grid-tied systems we recommend looking at SolarEdge single-phase inverters and the company’s line of independent power optimizers before looking anywhere else. SolarEdge’s hybrid platform borrows the efficiency gains and individual panel management of microinverter systems yet avoids the extra costs and reliability issues that have kept microsystems from becoming mainstream. Think of the SolarEdge platform as being like a plug-in hybrid car, which has the low driving cost and emissions of an electric vehicle but the range and convenience of a combustion engine. Although the SolarEdge platform costs about the same as a traditional, top-of-the-line string-inverter system, it allows for more flexibility in roof planning, gains in power production, and reliable service with panel-level monitoring.

If you have no idea what we’re talking about

Solar power is full of brilliant engineering, and you really don’t need to understand most of it to make the switch from utility-based power. When the sun is out, you get free electricity; when it’s not, your power comes from the utility company just like always. If you produce more power than you need during the day, you may be able to sell it to the utility company for service credits or cash. In fact, with equipment costs as low as they are now, a properly sized solar installation will result in your net utility bill at the end of the year being zero. We go into more detail about how solar works in our full guide, but the benefits we’ve just described are what make solar such a great investment for so many people: Done right, solar will let you avoid a utility bill indefinitely.

This guide may have been updated by The Sweethome. To see the current recommendation, please go here.

via Engadget
The solar panels and inverter we’d buy

Apple’s kid-friendly iPad coding app arrives tomorrow

There are lots of initiatives to teach kids how to code, including ventures from Google, Minecraft and even the Star Wars franchise. However, with Swift Playgrounds, Apple is actually prepping kids for a potential career at, well, Apple. The company has announced that the app, based on the Swift language used for iOS, OS X, watchOS, tvOS and Linux, will arrive alongside iOS 10 tomorrow (September 13th).

As Engadget’s Nicole Lee discovered during a hands-on, it’s actually a nice way to learn programming. It assumes that kids have zero knowledge, but produces actual Swift code that can be used to develop real apps. At the same time, it’s open-ended — young coders learn in a non-linear way, so enthusiastic kids can skip ahead if they want. It rewards students regardless of the quality of their code, but gives extra kudos for well-optimized solutions.

Apple says there are over 100 schools and districts that will be teaching with the app this fall in the US, Europe and Africa. Apple will also offer its own "Get Started with Coding" workshops that cover the basics of Swift Playgrounds. It’ll also offer a drop-in hour for folks who want extra help with "challenging puzzles" in the app. If you want to get a head start on your kids (you’re gonna need it), the workshops and drop-in sessions will be available at select stores in the US, Canada, UK, Australia, UAE, Netherlands and Hong Kong.

via Engadget
Apple’s kid-friendly iPad coding app arrives tomorrow

Checking My Predictions About Clinton’s Health

In a blog post I wrote on December 27th, 2015, I said this…

Bonus Thought 1: One of the skills a hypnotist has to master is reading people’s inner thoughts based on their body language. That’s a common skill for people in the business world too, but hypnotists go deeper than looking at crossed arms and furrowed brows. We learn to look for subtle changes in breathing patterns, tiny changes in muscle tone, variations in skin color (blushing or not), word choice, pupil dilation, and more. I assume law enforcement people look for similar tells when doing interrogations.

As regular readers know, I’m a trained hypnotist. And to me, Hillary Clinton looks as if she is hiding a major health issue. If you read Malcolm Gladwell’s book, Blink, you know that so-called “experts” can sometimes instantly make decisions before they know why. In my case, I am going to make an “expert” hypnotist prediction about Hillary Clinton without knowing exactly which clues I am picking up, or whether I am hallucinating them.

Prediction: I’ll put the odds at 75% that we learn of an important Clinton health issue before the general election. That estimate is based on my own track record of guessing things about people without the benefit of knowing why. I think Trump is picking up the same vibe. He has already questioned Clinton’s “stamina.”

On December 29th, 2015 I blogged that Trump would be seen as “running unopposed” before election day. I mentioned Clinton’s health as a possible reason.

While I’m on the topic, I’ll add another prediction to the Master Persuader series. I predict that by the time Trump is in the general election and running against Clinton, you will start hearing that Trump (Lucky Hitler) is – for all practical purposes – “running unopposed” as Clinton’s poll numbers plummet.

That can happen in a variety of ways. One way is if Clinton’s health or legal issues rise to the point of being disqualifying, and Trump persuades us to think about those things more than we think about anything else. Once you imagine there is one candidate in the race who is eligible and one who might not survive the term, or might be in jail, you start to imagine it as a one-person race.

And you will. That’s how you get a landslide.

Look for the words “running unopposed” in pundit articles and quotes within a few months of election day. And it still counts if it started here, because it won’t catch on unless it actually fits.

On April 29th of 2016 I expanded on the thought in this post.

I have blogged and tweeted that Hillary Clinton looks unhealthy to me. And I have mentioned on Twitter that one of the skills of a hypnotist is identifying subtle bodily changes. Observation is a huge part of a hypnotist’s skill. You look for micro changes in muscle tone, breathing, posture, and anything else that can tell you whether your technique is working or you need to quickly pivot to a new approach. Think of it as rapid A-B testing on humans. And like any skill, one gets better with practice. I have more than three decades of practice for this specific skill.

What I see in Clinton’s health is an unusual level of variability. Sometimes her eyes bug out, sometimes they are tired and baggy. Sometimes she looks puffy, sometimes not. It would be easy to assume fatigue is the important variable. And that is clearly a big factor. But notice that the other candidates have little variability in their physicality. Trump always looks like Trump. Cruz always looks like Cruz, and so on. Sometimes we think we can detect fatigue in their answers, but visually the other candidates appear about the same every day.

Clinton, on the other hand, looks like an entirely different person every few days. That suggests some greater variability in her health. And that’s probably a tell for medications that are waxing and waning but rarely at the ideal levels. Or perhaps the underlying conditions have normal variability. Or both.

Under normal circumstances it would be deeply irresponsible for a cartoonist to give a medical diagnosis to a stranger he hasn’t met. I trust you to ignore my medical opinions. I do this to build a record of my persuasion-related predictions and to show you the method.

I give Clinton a 50% chance of making it to November with sufficiently good health to be considered a viable president. Judging from her performance on the campaign trail, she is managing her health effectively to get the job done. But I would think most people who run for president end up sacrificing their health in some measure. The big question is how much buffer she has left.

To be clear, there is no dependable evidence of Clinton having an undisclosed major health issue. But it looks that way to observers.

via Scott Adams’ Blog
Checking My Predictions About Clinton’s Health

Five lessons for founders

Fundraising is something all of us as entrepreneurs worry about constantly and discuss pretty much whenever we meet.

A lot of people think you should meet for the first time with a VC or a potential investor with your pitch in hand, ready to start the process. I disagree.

Raise Long Before You Raise

Scrap around and find a way to get warm intros early. It’s like picking a partner for anything important – tapping into your network and doing your homework on who is interested in your space, who’s actually insightful in the space and good to work with, means you’ll have a lay of the land before you desperately need money.

Why would they want to meet with you? Well, figure out what you know about the industry that’s interesting to them, what’s intrinsic to their portfolio companies, and make that the hook. Once they understand your capabilities, you’ve started the process.

Like any relationship, it takes some care and nurturing, but it’s worth it, because the relationship sets the foundation for them to take that leap of faith when you’re bringing them an idea to fund. Even if they’re interested in the space and the product is intriguing, the leap of faith is team x product x market.

It’s not additive, it’s multiplicative and if any of them are zero, it takes the entire equation down to zero. So, especially in this anemic macro-economic environment, it’s a lot less risky for them if they feel like they already have a sense of the team.

If you know them before you take money from them, it’s good after the money hits the bank too. You start with higher credibility and a better understanding of one another’s working styles before the first board meeting. You have a more acute understanding of their beliefs around the space and there’s a foundation of trust and benefit of the doubt in the face of any conflicts.

In short, if they know you and trust you, they’ll be a lot more open to your ideas after the funding hits and a lot more likely to make that leap and write you the check in the first place. It’s not rocket science, but it does take a good amount of elbow grease up front and a substantial amount of prep and time.

Fire Faster

I have yet to hear a CEO or founder say they regret firing someone too quickly. Seriously. We have a rough rule of thumb – if there’s a team member I’ve had a conversation about five or more times within the span of about a month (and that includes other people coming to give feedback, discussions with co-founders on what to do for them, etc.), we know we have a serious problem.

The worst thing we can do is let that problem fester. We know we either have to resolve it fast, or let the person go.

There are a hundred reasons we wait too long. It’s the part of my job that I hate the most. I hate it so much it triggers a physical reaction for me. I know and care about every single member of our team. I know about their sick parent, their spouse that’s having trouble finding a job, all of it, and I take all of it home with me.

The decision to let someone go is never made lightly and it always costs something emotionally. The key indicator for me is if thoughts of the situation or person come flooding in right before bed or right when I get up, I already know what needs to be done, I just don’t want to do it. But that’s exactly when the team needs me to suck it up and get it done for them.

Walk and Talk

I get 6-8 miles a day by walking during my one-to-ones and pacing during calls. Headsets are a great thing. This is most of my exercise for the week, and my doctor says I’ve never been in better cardiovascular shape. Seriously, my resting heart rate is in the low 60’s now!

But, the biggest benefit is that people feel more free to talk about the hard things if you’re not sitting across the table, staring at them. If I really want to hear what someone thinks, I’ll ask them to walk with me.

Now the whole team does this with one another and it’s become part of our culture. I also think people have better ideas when they’re walking and the blood is flowing. I find that walking side-by-side, facing the same direction, it has the effect of making it feel more like we’re solving a problem together, or that the solution or idea is ours as opposed to mine or theirs.

The Hidden Benefits of Your Advisory Board

Most people think of an advisory board as just advising on functional areas along with some strategy. Of course, we’ve included domain experts to help us in areas where we need it – and there are plenty of those – but there are less obvious benefits to an advisory board, too.

The first is recruiting. The referral aspect of this is obvious, as advisors all have their own extensive networks. The bigger bonus is having them help you close on a candidate you really want. From the candidate’s point of view, they’re more objective than an employee or an investor in the company. They’re also a great conduit for ongoing correspondence with the candidate after you’ve made the offer.

You can have contact with a potential hire every 24-48 hours without making them feel like someone from the company is bugging them or pressuring them to make a decision.

The second benefit is on the fundraising side. Every investor has people that influence their opinions and if some of your advisors happen to be those people… That they’re willing to go to bat for you and support you is a massive reputational benefit, which is something that can’t be overstated when you’re asking a person to make that leap of faith. The third benefit is that they’re usually more than happy to spread the word when you go to press, a time when it’s important that all the talking doesn’t come from you. You need as many different types of advocates as you can get. The right group of advisors is chomping at the bit and raring to go at press time.

Don’t Discount the Kitchen Cabinet

There’s only one CEO but there are a lot of team members. There are days where you give and give and end up feeling a little emotionally tapped out.

That’s where my ladies come in, my Kitchen Cabinet. It’s a group of women that acts as an unofficial group of advisors. It’s really a safe place where we understand each other and have permission to be vulnerable.

Problems become a lot less scary when you can talk them through. We’re available to each other by phone pretty much anytime and we get together for slumber parties once a quarter. We spend a whole day and night just kicking back. We eat, drink, talk, and decompress. We support each other.

These get-togethers aren’t just a couple of hours of light conversation – having a large chunk of time to relax into means you can really get into things and help each other.

I’d urge you to pick a handful of savvy, awesome, and trustworthy women you feel would gel together, kick your significant other and/or kids out of the house for a night, and slumber party it. Spending a day and night away is a big investment, but it has a massive payoff from an energy and happiness perspective.

Featured Image: a-image/Shutterstock

via TechCrunch
Five lessons for founders

Basic Housekeeping for MySQL Indexes

In this blog post, we’ll look at some of the basic housekeeping steps for MySQL indexes.

We all know that indexes can be the difference between a high-performance database and a bad/slow/painful query ride. It’s a critical part of the system that deserves some housekeeping once in a while. So, what should you check? In no particular order, here are some things to look at:

1. Unused indexes

With the sys schema, it’s pretty easy to find unused indexes: use the schema_unused_indexes view.

mysql> select * from sys.schema_unused_indexes;
+---------------+-----------------+-------------+
| object_schema | object_name     | index_name  |
+---------------+-----------------+-------------+
| world         | City            | CountryCode |
| world         | CountryLanguage | CountryCode |
+---------------+-----------------+-------------+
2 rows in set (0.01 sec)

This view is based on the performance_schema.table_io_waits_summary_by_index_usage table, which will require enabling the Performance Schema, the events_waits_current consumer and the wait/io/table/sql/handler instrument. PRIMARY (key) indexes are ignored.

If you don’t have them enabled, just execute these queries:

update performance_schema.setup_consumers set enabled = 'yes' where name = 'events_waits_current';
update performance_schema.setup_instruments set enabled = 'yes' where name = 'wait/io/table/sql/handler';

Quoting the documentation:

“To trust whether the data from this view is representative of your workload, you should ensure that the server has been up for a representative amount of time before using it.”

And by representative amount, I mean representative: 

  • Do you have a weekly job? Wait at least one week
  • Do you have monthly reports? Wait at least one month
  • Don’t rush!

Once you’ve found unused indexes, remove them.

2. Duplicated indexes

You have two options here:

  • pt-duplicate-key-checker
  • the schema_redundant_indexes view from sys_schema

The pt-duplicate-key-checker is part of Percona Toolkit. The basic usage is pretty straightforward:

[root@e51d333b1fbe mysql-sys]# pt-duplicate-key-checker
# ########################################################################
# world.CountryLanguage
# ########################################################################
# CountryCode is a left-prefix of PRIMARY
# Key definitions:
#   KEY `CountryCode` (`CountryCode`),
#   PRIMARY KEY (`CountryCode`,`Language`),
# Column types:
#      	  `countrycode` char(3) not null default ''
#      	  `language` char(30) not null default ''
# To remove this duplicate index, execute:
ALTER TABLE `world`.`CountryLanguage` DROP INDEX `CountryCode`;
# ########################################################################
# Summary of indexes
# ########################################################################
# Size Duplicate Indexes   2952
# Total Duplicate Indexes  1
# Total Indexes            37

Now, the schema_redundant_indexes view is also easy to use once you have sys schema installed. The difference is that it is based on the information_schema.statistics table:

mysql> select * from schema_redundant_indexes\G
*************************** 1. row ***************************
              table_schema: world
                table_name: CountryLanguage
      redundant_index_name: CountryCode
   redundant_index_columns: CountryCode
redundant_index_non_unique: 1
       dominant_index_name: PRIMARY
    dominant_index_columns: CountryCode,Language
 dominant_index_non_unique: 0
            subpart_exists: 0
            sql_drop_index: ALTER TABLE `world`.`CountryLanguage` DROP INDEX `CountryCode`
1 row in set (0.00 sec)

Again, once you find the redundant index, remove it.

3. Potentially missing indexes

The statements summary tables from the performance schema have several interesting fields. For our case, two of them are pretty important: NO_INDEX_USED (means that the statement performed a table scan without using an index) and NO_GOOD_INDEX_USED (“1” if the server found no good index to use for the statement, “0” otherwise).

Sys schema has one view that is based on the performance_schema.events_statements_summary_by_digest table, and is useful for this purpose: statements_with_full_table_scans, which lists all normalized statements that have done a table scan.

For example:

mysql> select * from world.CountryLanguage where isOfficial = 'F';
55a208785be7a5beca68b147c58fe634  -
746 rows in set (0.00 sec)
mysql> select * from statements_with_full_table_scans\G
*************************** 1. row ***************************
                   query: SELECT * FROM `world` . `Count ... guage` WHERE `isOfficial` = ?
                      db: world
              exec_count: 1
           total_latency: 739.87 us
     no_index_used_count: 1
no_good_index_used_count: 0
       no_index_used_pct: 100
               rows_sent: 746
           rows_examined: 984
           rows_sent_avg: 746
       rows_examined_avg: 984
              first_seen: 2016-09-05 19:51:31
               last_seen: 2016-09-05 19:51:31
                  digest: aa637cf0867616c591251fac39e23261
1 row in set (0.01 sec)

The above query doesn’t use an index because there was no good index to use, and thus was reported. See the explain output:

mysql> explain select * from world.CountryLanguage where isOfficial = 'F'\G
*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: CountryLanguage
         type: ALL
possible_keys: NULL
          key: NULL
      key_len: NULL
          ref: NULL
         rows: 984
        Extra: Using where

Note that the “query” field reports the query digest (more like a fingerprint) instead of the actual query.

In this case, the CountryLanguage table is missing an index over the “isOfficial” field. It is your job to decide whether it is worth it to add the index or not.

4. Multiple column indexes order

As explained before, a multiple-column index beats an index merge in all cases where such an index can be used, even though you sometimes might have to use index hints to make it work.

But when using them, don’t forget that the order matters. MySQL will only use a multi-column index if at least one value is specified for the first column in the index.

For example, consider this table:

mysql> show create table CountryLanguage\G
*************************** 1. row ***************************
       Table: CountryLanguage
Create Table: CREATE TABLE `CountryLanguage` (
  `CountryCode` char(3) NOT NULL DEFAULT '',
  `Language` char(30) NOT NULL DEFAULT '',
  `IsOfficial` enum('T','F') NOT NULL DEFAULT 'F',
  `Percentage` float(4,1) NOT NULL DEFAULT '0.0',
  PRIMARY KEY (`CountryCode`,`Language`),
  KEY `CountryCode` (`CountryCode`),
  CONSTRAINT `countryLanguage_ibfk_1` FOREIGN KEY (`CountryCode`) REFERENCES `Country` (`Code`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1

A query against the field “Language” won’t use an index:

mysql> explain select * from CountryLanguage where Language = 'English'\G
*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: CountryLanguage
         type: ALL
possible_keys: NULL
          key: NULL
      key_len: NULL
          ref: NULL
         rows: 984
        Extra: Using where

Simply because “Language” is not the leftmost prefix of the primary key. If we add the “CountryCode” field to the condition, the index will be used:

mysql> explain select * from CountryLanguage where Language = 'English' and CountryCode = 'CAN'\G
*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: CountryLanguage
         type: const
possible_keys: PRIMARY,CountryCode
          key: PRIMARY
      key_len: 33
          ref: const,const
         rows: 1
        Extra: NULL

Now, you’ll have to also consider the selectivity of the fields involved. Which is the preferred order?

In this case, the “Language” field has a higher selectivity than “CountryCode”:

mysql> select count(distinct CountryCode)/count(*), count(distinct Language)/count(*) from CountryLanguage;
+--------------------------------------+-----------------------------------+
| count(distinct CountryCode)/count(*) | count(distinct Language)/count(*) |
+--------------------------------------+-----------------------------------+
|                               0.2368 |                            0.4644 |
+--------------------------------------+-----------------------------------+

So in this case, if we create a multi-column index, the preferred order will be (Language, CountryCode).

Placing the most selective columns first is a good idea when there is no sorting or grouping to consider, and thus the purpose of the index is only to optimize where lookups. You might need to choose the column order, so that it’s as selective as possible for the queries that you’ll run most.

Now, is this good enough? Not really. What about special cases where the table doesn’t have an even distribution? When a single value is present way more times than all the others? In that case, no index will be good enough. Be careful not to assume that average-case performance is representative of special-case performance. Special cases can wreck performance for the whole application.

In conclusion, we depend heavily on proper indexes. Give them some love and care once in a while, and the database will be very grateful.

All the examples were done with the following MySQL and Sys Schema version:

mysql> select * from sys.version;
+-------------+-----------------+
| sys_version | mysql_version   |
+-------------+-----------------+
| 1.5.1       | 5.6.31-77.0-log |
+-------------+-----------------+

via MySQL Performance Blog
Basic Housekeeping for MySQL Indexes