Here’s the action-packed first trailer for John Wick: Chapter 3—Parabellum


Keanu Reeves is on the run with his trusty canine companion in the first trailer for John Wick: Chapter 3—Parabellum.

Everyone’s favorite reluctant assassin is on the run with a $14 million bounty on his head, and few allies, in the action-packed first trailer for John Wick: Chapter 3—Parabellum.

(Spoilers for first two movies below.)

For those who missed the first two movies in the trilogy, John Wick (Keanu Reeves) is a legendary hitman (known as “Baba Yaga”) who tried to retire when he fell in love and got married. Unfortunately, he’s drawn back into the dark underground world by an act of senseless violence after his wife’s death. As Wick mourns Helen’s passing, Iosef Tarasov, the son of a Russian crime syndicate boss, breaks in, beats him unconscious, and steals his classic 1969 Ford Mustang Mach 1. On top of all that, Tarasov kills the little dog, Daisy, that Helen gave to John to comfort him. From there, there’s really no hope for Iosef. Nothing will stop John Wick from seeking retribution.

The first John Wick doubled its original projected box office on opening weekend and went on to gross more than $88 million worldwide for a film that cost around $30 million to make. It received praise for its brisk pace, heart-stopping action sequences, and stylish noir feel. Reeves was perfectly cast in the lead role, and the fictional underground culture of assassins with their gold coins, markers, and almost medieval code of honor set the film apart from more typical entries in this genre.

So naturally, there was a sequel—which was still pretty good but not quite as riveting as its predecessor. John Wick: Chapter Two was set about four days after the events of the first film ended, with Wick exacting revenge and picking up a new pit bull puppy for good measure. In this edition, he wants his Mustang back, and who can blame him? He finds it at a Russian syndicate chop shop and ends up taking out a whole slew of Russian baddies (and badly damaging the car) before declaring a truce.

  • Where it all began: John’s late wife, Helen, is why he tried to retire.


    YouTube/Lionsgate

  • He’s still got his trusty pit bull with him, even on the run.


    YouTube/Lionsgate

  • Anjelica Huston is The Director, a member of the High Table.


    YouTube/Lionsgate

  • Even the administrative staff is on tenterhooks.


    YouTube/Lionsgate

  • “Here we go.” Winston (Ian McShane), owner of the Continental Hotel in New York, prepares to make John Wick “excommunicado.”


    YouTube/Lionsgate

  • Winston and Charon (Lance Reddick), concierge of the Continental Hotel, assess John’s odds. “I’d say they’re about even.”


    YouTube/Lionsgate

  • First assassin spotted in Grand Central Station.


    YouTube/Lionsgate

  • A motorcycle chase with sword-bearing assassins.


    YouTube/Lionsgate

  • Sorry, would-be assassin, John Wick on a horse is still better than you.


    YouTube/Lionsgate

  • Laurence Fishburne plays The Bowery King, an underground crime lord.


    YouTube/Lionsgate

  • Would-be assassins wisely on their guard.


    YouTube/Lionsgate

  • Shooting at a reflection isn’t the most effective defense.


    YouTube/Lionsgate

  • Battling it out in what amounts to a hall of mirrors.


    YouTube/Lionsgate

  • If you come at John Wick, you’d best not miss with that knife, bud.


    YouTube/Lionsgate

  • Halle Berry plays fellow assassin Sofia, a close friend of John Wick.


    YouTube/Lionsgate

  • Must Love Dogs. Sofia shares John’s affection for canine friends.


    YouTube/Lionsgate

  • Master assassin in action.


    YouTube/Lionsgate

  • Here’s hoping these two crazy kids can work things out.


    YouTube/Lionsgate

  • Fired up and ready to take on all comers.


    YouTube/Lionsgate

But there is no rest for a legendary hitman. An Italian crime lord, Santino D’Antonio (Riccardo Scamarcio), presents him with a “blood oath” marker Wick gave him in exchange for a favor before he retired. Under the assassin’s code of honor, Wick cannot refuse the request: to kill Santino’s sister, Gianna, so he can take her seat at the crime lord High Table. Santino then double-crosses him by opening a $7 million contract on his life, pretending it’s to avenge his sister (instead of trying to tie up loose ends).

Santino seeks sanctuary in the Continental Hotel, a safe space for assassins with a strict “no killing on the premises” policy. Wick kills Santino in the lounge anyway. And that puts hotel owner Winston (Ian McShane) on the spot. He has to enforce the policy, but he also likes John Wick and knows Santino had it coming. Winston thus gives Wick a one-hour head start before declaring him “excommunicado,” with no access to the hotel’s substantial underground resources. Since there’s also now a $14 million bounty on his head, the odds are heavily stacked against Wick’s survival.

This is where John Wick: Chapter 3—Parabellum picks up, with a countdown to Winston’s “excommunicado” declaration as Wick scrambles to find allies (and ammo). Based on the trailer, the third film looks likely to recapture the fast-paced, violent glory that made the original John Wick so irresistible. McShane is back as Winston, along with Lance Reddick as Charon, concierge of the Continental. You’ve got Anjelica Huston as The Director of the crime lord High Table—she’s a friend to Wick but unfortunately unable to offer much help since “the High Table wants your life.” Rounding out the cast, Halle Berry plays Sofia, another assassin and close friend of Wick’s, who might just be a potential love interest as well as a helpful ally.

If the film performs as well at the box office as its predecessors, who knows? We might just get a John Wick: Chapter 4. For now, fans of the first two will likely enjoy John Wick: Chapter 3—Parabellum when it hits theaters on May 17, 2019.

 

Listing image by YouTube/Lionsgate


via Ars Technica
Here’s the action-packed first trailer for John Wick: Chapter 3—Parabellum

The Best Duct Tape


Photo: Kyle Fitzgerald

Through our research, and backed up by our firsthand testing, we found that a good general use duct tape should be around 11 milli-inches (mil) thick, use a natural rubber-based adhesive, and be made using a co-extrusion process.

Before getting into these specifics, it helps to know a bit about the three ingredients that make up a piece of duct tape: a polyethylene sheet backing, a cloth grid, and a rubber-based adhesive.

The polyethylene sheet plays two roles: It serves as a bonding area for the other two ingredients and creates a waterproof backing. It’s basically a sheet of plastic.

The cloth grid (or scrim)—typically made of polyester or a cotton/polyester blend—decides the tape’s strength, flexibility, and tearability. The threads that run the length of the tape are what give it its material strength—how much weight the tape can hold before breaking. The threads that run across the tape determine its tearability. Duct tape tears along the thread line, so the smaller the space between threads, the cleaner the tear. If the threads are far apart, getting a straight, even tear is difficult. It’s like holding one edge of a piece of paper while trying to rip it down the center.

All true duct tape has a rubber-based adhesive, but each tape has its own adhesive recipe. Some are thicker and flow better in order to stick to rough porous surfaces, while others are stiffer, making them more stable for extreme temperatures and flat surfaces. As we found out in our testing, some adhesives are so gooey that they’ll melt in the hot sun. Some tapes are made with alternatives to rubber-based glues, but those tapes (often made with what’s known as hot melt adhesive) are less reliable in extreme temperatures, and they don’t have the strength of the rubber-based glues, so we dismissed those.

After many conversations with four prominent duct tape manufacturers (and confirmed through our testing), we’re convinced a process called coextrusion is the best way to assemble these three ingredients. The defining characteristic of coextrusion is that the polyethylene sheet enters the manufacturing process in molten form. This means that when the cloth grid is added, it melts directly into the plastic, forming a single, fully bonded piece. The rubber adhesive is then applied to one side, creating what we know as duct tape.

The other way to make duct tape is called lamination. It’s easier to do, but there are issues with the finished product. As Hillary DuMoulin, communications manager at Berry Plastics, explained to us, lamination involves pressing all three ingredients together. The cloth grid is held to the poly either by a separate laminating adhesive or the “squish-through” method, where the rubber-based adhesive holds everything together.

The problem with lamination is the poly/scrim connection is nowhere near as secure as it is with the coextrusion method. Air bubbles can form between the laminated layers. Over time, particularly during exterior use, the poly and scrim can come apart. If you’ve ever pulled off an old piece of duct tape and the cloth grid remained stuck in a crusty bed of adhesive, you’ve seen the major flaw of a laminated tape. With a co-extruded tape, the scrim is really an internal component of the poly, so this kind of separation doesn’t occur.

Two examples of delamination. The smaller pieces are an unknown tape that had been on the ends of two electrical conduits for the past three months. The larger piece is the tested Scotch All-Weather after two weeks on a plywood sample board. Photo: Doug Mahoney

There are visual ways to distinguish a laminated tape from a co-extruded one. The most telling is that co-extruded tapes have very small, clearly defined dimples on the exterior of the roll. These correspond with gaps in the cloth grid and represent all of the places where potential air bubbles could form if the tape were laminated. It’s harder to see on high-end tapes because as the tape quality increases, the grid gets smaller and dimples become difficult for the eye to pick up.

Another way to visually tell the difference is that laminated tapes (at least the two that we tested) have a wrinkled texture. On one tape, the ridges were so extreme that we were unable to get it to sit flat against any surface for more than a day or so. Co-extruded tapes have a nice smooth finish.

Notice the small dimples along the exterior of the co-extruded Duck Advanced (top). Also, the adhesive side of the laminated Scotch All-Weather (bottom) has an uneven adhesive coating and the scrim doesn’t appear to sit flat against the poly backing. Photo: Doug Mahoney

For thickness, we found that tapes around the 11-mil range offered the best compromise between strength and maneuverability. Thickness is measured in “mils” (or milli-inches, roughly .0254 millimeters) and tapes vary from as thin as 3 mils to as thick as 17 mils. Thinner tapes are too floppy to handle. Trying to wrap a thin, flimsy, wet noodle piece of duct tape can become a frustrating experience as it constantly folds over and sticks to itself. Once this happens, especially if it’s adhesive to adhesive, it’s pretty much a permanent bond, so you have to discard the piece and start over.

But thick tapes have their own drawbacks. The beefier they are, the less flexibility and conformity they have. During testing, we found that this negative outweighed the added strength of the really thick tapes, like the 17-mil Gorilla. Those tapes are extremely strong, but it’s very difficult to wrap a piece around a contoured surface (like the floppy sole on an old work boot, or the 90-degree elbow of a copper water pipe). This lack of flexibility also causes problems if you’re bundling something together with the tape. It’s much better if you can give the tape a strong pull and add a little stretch to it as you’re adhering it. This little bit of flexibility, which tapes around 11-mil usually have, can add just enough tension to secure the whole bundle together. This kind of stretching is nearly impossible to do with a thick 17-mil tape.

Other handling characteristics play a role in performance. Berry Plastics’ Hillary DuMoulin told us that tearability, “finger tack,” and the ability to unwind are all important in evaluating duct tape. So during testing, we took these factors into account as well.

What’s not as important are width and length. The standard roll of duct tape is 2 inches wide (it actually measures around 1.88 inches but is always referred to as 2 inches). Other sizes are available, but even after 10 years in the construction industry, I’ve never needed anything more than a 2-inch roll. The lengths of the rolls are also standardized. The rolls I looked at were a variety of 35, 40, 45, and 60 yards. (45 and 60 yards are the most common lengths.)

Good, high-quality tapes designed for general purpose sit in the $8 to $13 range. You can get cheap and poorly made stuff for less, but it’s not worth saving the couple bucks unless it’s for a really simple and temporary job like sealing garbage bags or wrapping rolls of pulled-up carpeting. You can also get more aggressive tapes loaded with turbo strength and crazy adhesive that can cost over $20, but you probably don’t need that strength and you definitely don’t need the frustration that comes with working with tape that’s too thick and too sticky.

There is a dizzying variety of available duct tapes—Berry Plastics alone sells roughly 35 different types of duct tape under the Nashua and Polyken names—so after our research, we asked the major brands (Intertape, Duck, Scotch, Nashua, and Polyken) to suggest their best product for general all-around use. To give them a sense of what we had in mind, we gave the examples of patching a bike seat, repairing a backpack, and fixing a leak in a hose. Each company came back to us with a tape in the 9-to-11–mil range. In addition, we tested a number of tapes from these manufacturers based on reputation and customer feedback. These were typically thicker tapes and ones that tended to have specific strengths (and, as we discovered, specific weaknesses). We tested a total of 10 tapes.


via Wirecutter: Reviews for the Real World
The Best Duct Tape

Using Parallel Query with Amazon Aurora for MySQL

Parallel query execution is my favorite non-existent feature in MySQL. In all versions of MySQL – at least at the time of writing – when you run a single query, it will run in one thread, effectively utilizing one CPU core only. Multiple queries run at the same time will use different threads and will utilize more than one CPU core.

On multi-core machines – which are the majority of the hardware nowadays – and in the cloud, we have multiple cores available for use. With faster disks (i.e., SSDs) we can’t utilize the full potential of IOPS with just one thread.

AWS Aurora (based on MySQL 5.6) now has a version which will support parallelism for SELECT queries (utilizing the read capacity of storage nodes underneath the Aurora cluster). In this article, we will look at how this can improve the reporting/analytical query performance in MySQL. I will compare AWS Aurora with MySQL (Percona Server) 5.6 running on an EC2 instance of the same class.

In Short

Aurora Parallel Query response time (for queries which cannot use indexes) can be 5x-10x better compared to non-parallel, fully cached operations. This is a significant improvement for slow queries.

Test data and versions

For my test, I need to choose:

  1. Aurora instance type and comparison
  2. Dataset
  3. Queries

Aurora instance type and comparison

According to Jeff Barr’s excellent article (https://aws.amazon.com/blogs/aws/new-parallel-query-for-amazon-aurora/) the following instance classes will support parallel query (PQ):

“The instance class determines the number of parallel queries that can be active at a given time:

  • db.r*.large – 1 concurrent parallel query session
  • db.r*.xlarge – 2 concurrent parallel query sessions
  • db.r*.2xlarge – 4 concurrent parallel query sessions
  • db.r*.4xlarge – 8 concurrent parallel query sessions
  • db.r*.8xlarge – 16 concurrent parallel query sessions
  • db.r4.16xlarge – 16 concurrent parallel query sessions”

As I want to maximize the concurrency of parallel query sessions, I have chosen db.r4.8xlarge. For the EC2 instance I will use the same class: r4.8xlarge.

Aurora:

mysql> show global variables like '%version%';
+-------------------------+------------------------------+
| Variable_name           | Value                        |
+-------------------------+------------------------------+
| aurora_version          | 1.18.0                       |
| innodb_version          | 1.2.10                       |
| protocol_version        | 10                           |
| version                 | 5.6.10                       |
| version_comment         | MySQL Community Server (GPL) |
| version_compile_machine | x86_64                       |
| version_compile_os      | Linux                        |
+-------------------------+------------------------------+

MySQL on EC2:

mysql> show global variables like '%version%';
+-------------------------+------------------------------------------------------+
| Variable_name           | Value                                                |
+-------------------------+------------------------------------------------------+
| innodb_version          | 5.6.41-84.1                                          |
| protocol_version        | 10                                                   |
| slave_type_conversions  |                                                      |
| tls_version             | TLSv1.1,TLSv1.2                                      |
| version                 | 5.6.41-84.1                                          |
| version_comment         | Percona Server (GPL), Release 84.1, Revision b308619 |
| version_compile_machine | x86_64                                               |
| version_compile_os      | debian-linux-gnu                                     |
| version_suffix          |                                                      |
+-------------------------+------------------------------------------------------+

Table

I’m using the “Airlines On-Time Performance” database from http://www.transtats.bts.gov/DL_SelectFields.asp?Table_ID=236&DB_Short_Name=On-Time  (You can find the scripts I used here: https://github.com/Percona-Lab/ontime-airline-performance).


mysql> show table status like 'ontime'\G
*************************** 1. row ***************************
           Name: ontime
         Engine: InnoDB
        Version: 10
     Row_format: Compact
           Rows: 173221661
 Avg_row_length: 409
    Data_length: 70850183168
Max_data_length: 0
   Index_length: 0
      Data_free: 7340032
 Auto_increment: NULL
    Create_time: 2018-09-26 02:03:28
    Update_time: NULL
     Check_time: NULL
      Collation: latin1_swedish_ci
       Checksum: NULL
 Create_options:
        Comment:
1 row in set (0.00 sec)

The table is very wide, 84 columns.

Working with Aurora PQ (Parallel Query)

Documentation: https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-mysql-parallel-query.html

Aurora PQ works by doing a full table scan (parallel reads are done on the storage level). The InnoDB buffer pool is not used when Parallel Query is utilized.

For the purposes of the test I turned PQ on and off (normally AWS Aurora uses its own heuristics to determine if the PQ will be helpful or not):

Turn on and force:

mysql> set session aurora_pq = 1;

Query OK, 0 rows affected (0.00 sec)

mysql> set aurora_pq_force = 1;

Query OK, 0 rows affected (0.00 sec)

Turn off:

mysql> set session aurora_pq = 0;

Query OK, 0 rows affected (0.00 sec)

The EXPLAIN plan in MySQL will also show the details about parallel query execution statistics.

Queries

Here, I use the “reporting” queries, running only one query at a time. The queries are similar to those I’ve used in older blog posts comparing MySQL and Apache Spark performance (https://www.percona.com/blog/2016/08/17/apache-spark-makes-slow-mysql-queries-10x-faster/).

Here is a summary of the queries:

  1. Simple queries:
    • select count(*) from ontime where flightdate > '2017-01-01'
    • select avg(DepDelay/ArrDelay+1) from ontime
  2. Complex filter, single table:

select SQL_CALC_FOUND_ROWS
FlightDate, UniqueCarrier as carrier, FlightNum, Origin, Dest
FROM ontime
WHERE
  DestState not in ('AK', 'HI', 'PR', 'VI')
  and OriginState not in ('AK', 'HI', 'PR', 'VI')
  and flightdate > '2015-01-01'
  and ArrDelay < 15
  and cancelled = 0
  and Diverted = 0
  and DivAirportLandings = 0
ORDER by DepDelay DESC
LIMIT 10;

  3. Complex filter, join “reference” table:

select SQL_CALC_FOUND_ROWS
FlightDate, UniqueCarrier, TailNum, FlightNum, Origin, OriginCityName, Dest, DestCityName, DepDelay, ArrDelay
FROM ontime_ind o
JOIN carriers c on o.carrier = c.carrier_code
WHERE
  (carrier_name like 'United%' or carrier_name like 'Delta%')
  and ArrDelay > 30
ORDER by DepDelay DESC
LIMIT 10\G

  4. Select one row only, no index

Query 1a: simple, count(*)

Let’s take a look at the simplest query: count(*). This variant of the “ontime” table has no secondary indexes.

select count(*) from ontime where flightdate > '2017-01-01';

Aurora, pq (parallel query) disabled:

I disabled the PQ first to compare:
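(To make the baseline explicit, these are the same session toggles shown above, just set to zero.)

set session aurora_pq = 0;
set aurora_pq_force = 0;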


mysql> select count(*) from ontime where flightdate > '2017-01-01';
+----------+
| count(*) |
+----------+
|  5660651 |
+----------+
1 row in set (8 min 25.49 sec)

mysql> select count(*) from ontime where flightdate > '2017-01-01';
+----------+
| count(*) |
+----------+
|  5660651 |
+----------+
1 row in set (2 min 48.81 sec)

mysql> select count(*) from ontime where flightdate > '2017-01-01';
+----------+
| count(*) |
+----------+
|  5660651 |
+----------+
1 row in set (2 min 48.25 sec)

 

Please note: the first run was a cold run; data was read from disk. The second and third runs used the cached data.

 

Now let’s enable and force Aurora PQ:

 

mysql> set session aurora_pq = 1;

Query OK, 0 rows affected (0.00 sec)

mysql> set aurora_pq_force = 1; 

Query OK, 0 rows affected (0.00 sec)

 

mysql> explain select count(*) from ontime where flightdate > '2017-01-01'\G

*************************** 1. row ***************************

          id: 1

 select_type: SIMPLE

       table: ontime

        type: ALL

possible_keys: NULL

         key: NULL

     key_len: NULL

         ref: NULL

        rows: 173706586

       Extra: Using where; Using parallel query (1 columns, 1 filters, 0 exprs; 0 extra)

1 row in set (0.00 sec)

(from the EXPLAIN plan, we can see that parallel query is used).

Results:


mysql> select count(*) from ontime where flightdate > '2017-01-01';
+----------+
| count(*) |
+----------+
|  5660651 |
+----------+
1 row in set (16.53 sec)

mysql> select count(*) from ontime where flightdate > '2017-01-01';
+----------+
| count(*) |
+----------+
|  5660651 |
+----------+
1 row in set (16.56 sec)

mysql> select count(*) from ontime where flightdate > '2017-01-01';
+----------+
| count(*) |
+----------+
|  5660651 |
+----------+
1 row in set (16.36 sec)

mysql> select count(*) from ontime where flightdate > '2017-01-01';
+----------+
| count(*) |
+----------+
|  5660651 |
+----------+
1 row in set (16.56 sec)

mysql> select count(*) from ontime where flightdate > '2017-01-01';
+----------+
| count(*) |
+----------+
|  5660651 |
+----------+
1 row in set (16.36 sec)

As we can see, the results are very stable. PQ does not use any cache (i.e., the InnoDB buffer pool) either. The result is also interesting: utilizing multiple threads (up to 16 threads) and reading data from disk (probably with the help of the disk cache) can be ~10x faster compared to reading from memory in a single thread.

Result: ~10x performance gain, no index used

Query 1b: simple, avg


set aurora_pq = 1; set aurora_pq_force = 1;

select avg(DepDelay) from ontime;
+---------------+
| avg(DepDelay) |
+---------------+
|        8.2666 |
+---------------+
1 row in set (1 min 48.17 sec)

set aurora_pq = 0; set aurora_pq_force = 0;

select avg(DepDelay) from ontime;
+---------------+
| avg(DepDelay) |
+---------------+
|        8.2666 |
+---------------+
1 row in set (2 min 49.95 sec)

 

Here we can see that PQ gives us a ~2x performance increase.

Summary of simple query performance

Here is what we learned comparing Aurora PQ performance to native MySQL query execution:

  1. Select count(*), not using index: 10x performance increase with Aurora PQ.
  2. select avg(…), not using index: 2x performance increase with Aurora PQ.

Query 2: Complex filter, single table

The following query will always be slow in MySQL. This combination of the filters in the WHERE condition makes it extremely hard to prepare a good set of indexes to make this query faster.

select SQL_CALC_FOUND_ROWS
FlightDate, UniqueCarrier as carrier, FlightNum, Origin, Dest
FROM ontime
WHERE
  DestState not in ('AK', 'HI', 'PR', 'VI')
  and OriginState not in ('AK', 'HI', 'PR', 'VI')
  and flightdate > '2015-01-01'
  and ArrDelay < 15
  and cancelled = 0
  and Diverted = 0
  and DivAirportLandings = '0'
ORDER by DepDelay DESC
LIMIT 10;
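As a side note on why indexing is so hard here (a hypothetical illustration, not part of the original test setup): even a composite index only helps up to the first range condition. With something like the index below, MySQL can seek on the flightdate range, but the remaining filters still have to be checked row by row, and the ORDER BY DepDelay DESC still requires a filesort over millions of matching rows.

-- Hypothetical index, for illustration only (not created in the original test).
-- flightdate > '2015-01-01' is a range, so ArrDelay cannot be used for further
-- index range access, and the sort on DepDelay still needs a filesort.
ALTER TABLE ontime ADD INDEX idx_flightdate_arrdelay (FlightDate, ArrDelay);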

Let’s compare the query performance with and without PQ.

PQ disabled:


mysql> set aurora_pq_force = 0;

Query OK, 0 rows affected (0.00 sec)

mysql> set aurora_pq = 0;                                                                                                                                                                  

Query OK, 0 rows affected (0.00 sec)

 

mysql> explain select SQL_CALC_FOUND_ROWS FlightDate, UniqueCarrier as carrier, FlightNum, Origin, Dest FROM ontime WHERE DestState not in ('AK', 'HI', 'PR', 'VI') and OriginState not in ('AK', 'HI', 'PR', 'VI') and flightdate > '2015-01-01' and ArrDelay < 15 and cancelled = 0 and Diverted = 0 and DivAirportLandings = 0 ORDER by DepDelay DESC LIMIT 10\G

*************************** 1. row ***************************

          id: 1

 select_type: SIMPLE

       table: ontime

        type: ALL

possible_keys: NULL

         key: NULL

     key_len: NULL

         ref: NULL

        rows: 173706586

       Extra: Using where; Using filesort

1 row in set (0.00 sec)

 

mysql> select SQL_CALC_FOUND_ROWS FlightDate, UniqueCarrier as carrier, FlightNum, Origin, Dest FROM ontime WHERE DestState not in ('AK', 'HI', 'PR', 'VI') and OriginState not in ('AK', 'HI', 'PR', 'VI') and flightdate > '2015-01-01' and ArrDelay < 15 and cancelled = 0 and Diverted = 0 and DivAirportLandings = 0 ORDER by DepDelay DESC LIMIT 10;
+------------+---------+-----------+--------+------+
| FlightDate | carrier | FlightNum | Origin | Dest |
+------------+---------+-----------+--------+------+
| 2017-10-09 | OO      | 5028      | SBP    | SFO  |
| 2015-11-03 | VX      | 969       | SAN    | SFO  |
| 2015-05-29 | VX      | 720       | TUL    | AUS  |
| 2016-03-11 | UA      | 380       | SFO    | BOS  |
| 2016-06-13 | DL      | 2066      | JFK    | SAN  |
| 2016-11-14 | UA      | 1600      | EWR    | LAX  |
| 2016-11-09 | WN      | 2318      | BDL    | LAS  |
| 2016-11-09 | UA      | 1652      | IAD    | LAX  |
| 2016-11-13 | AA      | 23        | JFK    | LAX  |
| 2016-11-12 | UA      | 800       | EWR    | SFO  |
+------------+---------+-----------+--------+------+
10 rows in set (3 min 42.47 sec)

/* another run */
10 rows in set (3 min 46.90 sec)

This query is 100% cached. Here is the graph from PMM showing the number of read requests:

  1. Read requests: logical requests from the buffer pool
  2. Disk reads: physical requests from disk

Buffer pool requests (PMM graph).
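If you don’t have PMM in front of you, the same distinction is visible in the standard InnoDB status counters (a hedged aside; these are generic MySQL status variables, not something the original post queried): Innodb_buffer_pool_read_requests counts logical reads served from the buffer pool, while Innodb_buffer_pool_reads counts reads that had to go to disk.

-- Sample these before and after a query: if read_requests climbs while
-- reads stays flat, the query was served entirely from the buffer pool.
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read_requests';
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_reads';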

Now let’s enable and force PQ:

PQ enabled:


mysql> set session aurora_pq = 1;

Query OK, 0 rows affected (0.00 sec)

 

mysql> set aurora_pq_force = 1;
Query OK, 0 rows affected (0.00 sec)

 

mysql> explain select SQL_CALC_FOUND_ROWS FlightDate, UniqueCarrier as carrier, FlightNum, Origin, Dest FROM ontime WHERE DestState not in ('AK', 'HI', 'PR', 'VI') and OriginState not in ('AK', 'HI', 'PR', 'VI') and flightdate > '2015-01-01' and ArrDelay < 15 and cancelled = 0 and Diverted = 0 and DivAirportLandings = 0 ORDER by DepDelay DESC LIMIT 10\G

*************************** 1. row ***************************

          id: 1

 select_type: SIMPLE

       table: ontime

        type: ALL

possible_keys: NULL

         key: NULL

     key_len: NULL

         ref: NULL

        rows: 173706586

       Extra: Using where; Using filesort; Using parallel query (12 columns, 4 filters, 3 exprs; 0 extra)

1 row in set (0.00 sec)

 

 

mysql> select SQL_CALC_FOUND_ROWS
    -> FlightDate, UniqueCarrier as carrier, FlightNum, Origin, Dest
    -> FROM ontime
    -> WHERE
    ->   DestState not in ('AK', 'HI', 'PR', 'VI')
    ->   and OriginState not in ('AK', 'HI', 'PR', 'VI')
    ->   and flightdate > '2015-01-01'
    ->   and ArrDelay < 15
    ->   and cancelled = 0
    ->   and Diverted = 0
    ->   and DivAirportLandings = 0
    ->   ORDER by DepDelay DESC
    ->   LIMIT 10;
+------------+---------+-----------+--------+------+
| FlightDate | carrier | FlightNum | Origin | Dest |
+------------+---------+-----------+--------+------+
| 2017-10-09 | OO      | 5028      | SBP    | SFO  |
| 2015-11-03 | VX      | 969       | SAN    | SFO  |
| 2015-05-29 | VX      | 720       | TUL    | AUS  |
| 2016-03-11 | UA      | 380       | SFO    | BOS  |
| 2016-06-13 | DL      | 2066      | JFK    | SAN  |
| 2016-11-14 | UA      | 1600      | EWR    | LAX  |
| 2016-11-09 | WN      | 2318      | BDL    | LAS  |
| 2016-11-09 | UA      | 1652      | IAD    | LAX  |
| 2016-11-13 | AA      | 23        | JFK    | LAX  |
| 2016-11-12 | UA      | 800       | EWR    | SFO  |
+------------+---------+-----------+--------+------+
10 rows in set (41.88 sec)

/* run 2 */
10 rows in set (28.49 sec)

/* run 3 */
10 rows in set (29.60 sec)

Now let’s compare the requests:

InnoDB Buffer Pool Requests

As we can see, Aurora PQ is almost NOT utilizing the buffer pool (there is only a minor number of read requests; compare the maximum of ~4K requests per second with PQ to the constant ~600K requests per second in the previous graph).

Result: ~8x performance gain

Query 3: Complex filter, join “reference” table

In this example I join two tables: the main “ontime” table and a reference table. If both tables have no indexes, the join will simply be too slow in MySQL. To make it faster, I created indexes for both tables so MySQL can use them for the join:

CREATE TABLE `carriers` (

 `carrier_code` varchar(8) NOT NULL DEFAULT '',

 `carrier_name` varchar(200) DEFAULT NULL,

 PRIMARY KEY (`carrier_code`),

 KEY `carrier_name` (`carrier_name`)

) ENGINE=InnoDB DEFAULT CHARSET=latin1

 

mysql> show create table ontime_ind\G

...

 PRIMARY KEY (`id`),

 KEY `comb1` (`Carrier`,`Year`,`ArrDelayMinutes`),

 KEY `FlightDate` (`FlightDate`)

) ENGINE=InnoDB AUTO_INCREMENT=178116912 DEFAULT CHARSET=latin1

Query:

select SQL_CALC_FOUND_ROWS

FlightDate, UniqueCarrier, TailNum, FlightNum, Origin, OriginCityName, Dest, DestCityName, DepDelay, ArrDelay

FROM ontime_ind o

JOIN carriers c on o.carrier = c.carrier_code

WHERE

  (carrier_name like ‘United%’ or carrier_name like ‘Delta%’)

  and ArrDelay > 30

  ORDER by DepDelay DESC

LIMIT 10\G

PQ disabled, explain plan:


mysql> set aurora_pq_force = 0;

Query OK, 0 rows affected (0.00 sec)

 

mysql> set aurora_pq = 0;                                                                                                                                                                  

Query OK, 0 rows affected (0.00 sec)

 

mysql> explain

   -> select SQL_CALC_FOUND_ROWS

   -> FlightDate, UniqueCarrier, TailNum, FlightNum, Origin, OriginCityName, Dest, DestCityName, DepDelay, ArrDelay

   -> FROM ontime_ind o

   -> JOIN carriers c on o.carrier = c.carrier_code

   -> WHERE

   ->  (carrier_name like 'United%' or carrier_name like 'Delta%')

   ->  and ArrDelay > 30

   ->  ORDER by DepDelay DESC

   -> LIMIT 10\G

*************************** 1. row ***************************

          id: 1

 select_type: SIMPLE

       table: c

        type: range

possible_keys: PRIMARY,carrier_name

         key: carrier_name

     key_len: 203

         ref: NULL

        rows: 3

       Extra: Using where; Using index; Using temporary; Using filesort

*************************** 2. row ***************************

          id: 1

 select_type: SIMPLE

       table: o

        type: ref

possible_keys: comb1

         key: comb1

     key_len: 3

         ref: ontime.c.carrier_code

        rows: 2711597

       Extra: Using index condition; Using where

2 rows in set (0.01 sec)

As we can see MySQL uses indexes for the join. Response times:

/* run 1 – cold run */

10 rows in set (29 min 17.39 sec)

/* run 2  – warm run */

10 rows in set (2 min 45.16 sec)

PQ enabled, explain plan:


mysql> explain

   -> select SQL_CALC_FOUND_ROWS

   -> FlightDate, UniqueCarrier, TailNum, FlightNum, Origin, OriginCityName, Dest, DestCityName, DepDelay, ArrDelay

   -> FROM ontime_ind o

   -> JOIN carriers c on o.carrier = c.carrier_code

   -> WHERE

   ->  (carrier_name like 'United%' or carrier_name like 'Delta%')

   ->  and ArrDelay > 30

   ->  ORDER by DepDelay DESC

   -> LIMIT 10\G

*************************** 1. row ***************************

          id: 1

select_type: SIMPLE

       table: c

        type: ALL

possible_keys: PRIMARY,carrier_name

         key: NULL

     key_len: NULL

         ref: NULL

        rows: 1650

       Extra: Using where; Using temporary; Using filesort; Using parallel query (2 columns, 0 filters, 1 exprs; 0 extra)

*************************** 2. row ***************************

          id: 1

select_type: SIMPLE

       table: o

        type: ALL

possible_keys: comb1

         key: NULL

     key_len: NULL

         ref: NULL

        rows: 173542245

       Extra: Using where; Using join buffer (Hash Join Outer table o); Using parallel query (11 columns, 1 filters, 1 exprs; 0 extra)

2 rows in set (0.00 sec)

As we can see, Aurora does not use any indexes and uses a parallel scan instead.

Response time:


mysql> select SQL_CALC_FOUND_ROWS

   -> FlightDate, UniqueCarrier, TailNum, FlightNum, Origin, OriginCityName, Dest, DestCityName, DepDelay, ArrDelay

   -> FROM ontime_ind o

   -> JOIN carriers c on o.carrier = c.carrier_code

   -> WHERE

   ->  (carrier_name like 'United%' or carrier_name like 'Delta%')

   ->  and ArrDelay > 30

   ->  ORDER by DepDelay DESC

   -> LIMIT 10\G

...

 

*************************** 4. row ***************************

 

   FlightDate: 2017-05-04

UniqueCarrier: UA

      TailNum: N68821

    FlightNum: 1205

       Origin: KOA

OriginCityName: Kona, HI

         Dest: LAX

 DestCityName: Los Angeles, CA

     DepDelay: 1457

     ArrDelay: 1459

*************************** 5. row ***************************

   FlightDate: 1991-03-12

UniqueCarrier: DL

      TailNum:

    FlightNum: 1118

       Origin: ATL

OriginCityName: Atlanta, GA

         Dest: STL

 DestCityName: St. Louis, MO

...

 

10 rows in set (28.78 sec)

 

 

mysql> select found_rows();
+--------------+
| found_rows() |
+--------------+
|      4180974 |
+--------------+
1 row in set (0.00 sec)

Result: ~5x performance gain

(this is actually comparing the index cached read to a non-index PQ execution)
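For a more apples-to-apples comparison, one could force MySQL to skip the comb1 index on the big table and perform a plain scan. This is a hypothetical variation for illustration, using MySQL's standard IGNORE INDEX hint, not a run from the original test:

select SQL_CALC_FOUND_ROWS
FlightDate, UniqueCarrier, TailNum, FlightNum, Origin, OriginCityName, Dest, DestCityName, DepDelay, ArrDelay
FROM ontime_ind o IGNORE INDEX (comb1)  -- hypothetical hint: forces the non-index baseline
JOIN carriers c on o.carrier = c.carrier_code
WHERE
  (carrier_name like 'United%' or carrier_name like 'Delta%')
  and ArrDelay > 30
ORDER by DepDelay DESC
LIMIT 10\G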

Summary

Aurora PQ can significantly improve the performance of reporting queries, as such queries may be extremely hard to optimize in MySQL even when using indexes. Aurora PQ response times can be 5x-10x better compared to the non-parallel, fully cached operations, even when the non-parallel query can use indexes (as in Query 3). Aurora PQ can help improve the performance of complex queries by performing parallel reads.

The following table summarizes the query response times:

Query                                                          Time, No PQ (index)   Time, PQ
select count(*) from ontime where flightdate > '2017-01-01'    2 min 48.81 sec       16.53 sec
select avg(DepDelay) from ontime                               2 min 49.95 sec       1 min 48.17 sec
Query 2: complex filter, single table (full query above)       3 min 42.47 sec       28.49 sec
Query 3: complex filter, join with the carriers table          2 min 45.16 sec       28.78 sec


Photo by Thomas Lipke on Unsplash


via MySQL Performance Blog
Using Parallel Query with Amazon Aurora for MySQL

AWS launches Backup, a fully-managed backup service for AWS

Amazon’s AWS cloud computing service today launched Backup, a new tool that makes it easier for developers on the platform to back up their data from various AWS services and their on-premises apps. Out of the box, the service, which is now available to all developers, lets you set up backup policies for services like Amazon EBS volumes, RDS databases, DynamoDB tables, EFS file systems and AWS Storage Gateway volumes. Support for more services is planned, too. To back up on-premises data, businesses can use the AWS Storage Gateway.

The service allows users to define their various backup policies and retention periods, including the ability to move backups to cold storage (for EFS data) or delete them completely after a certain time. By default, the data is stored in Amazon S3 buckets.

Most of the supported services, except for EFS file systems, already feature the ability to create snapshots. Backup essentially automates that process and creates rules around it, so it’s no surprise that the pricing for Backup is the same as for using those snapshot features (with the exception of the file system backup, which will have a per-GB charge). It’s worth noting that you’ll also pay a per-GB fee for restoring data from EFS file systems and DynamoDB backups.

Currently, Backup’s scope is limited to a given AWS region, but the company says that it plans to offer cross-region functionality later this year.

“As the cloud has become the default choice for customers of all sizes, it has attracted two distinct types of builders,” writes Bill Vass, AWS’s VP of Storage, Automation, and Management Services. “Some are tinkerers who want to tweak and fine-tune the full range of AWS services into a desired architecture, and other builders are drawn to the same breadth and depth of functionality in AWS, but are willing to trade some of the service granularity to start at a higher abstraction layer, so they can build even faster. We designed AWS Backup for this second type of builder who has told us that they want one place to go for backups versus having to do it across multiple, individual services.”

Early adopters of AWS Backup include State Street Corporation, Smile Brands, and Rackspace, though this is surely a service that will attract its fair share of users, since it makes the life of admins quite a bit easier. AWS does have quite a few backup and storage partners who may not be all that excited to see AWS jump into this market, even though they often offer a wider range of functionality — including cross-region and offsite backups — than AWS’s service.

 


via TechCrunch
AWS launches Backup, a fully-managed backup service for AWS

27 Free Alternatives to Adobe’s Expensive App Subscriptions


Adobe appears to have upset a number of users with another price increase for its app subscriptions. While the hit only appears to be targeting specific countries at this point—you’re spared, North American users—there’s no reason to think that you won’t have to pay more to subscribe to an Adobe app (or its whole suite of creative apps) at some future point. That’s business, folks.

As you can imagine, Adobe’s price increase has set off a flurry of activity on the internet, with many annoyed users jumping onto Twitter threads and blog posts to suggest alternatives to Adobe’s ever-more-expensive subscription apps.

I ran through @burgerdrome’s Twitter thread, as well as an excellent software-recommendations thread started by @TubOfCoolWhip and this handy image of recommendations from “Cullen,” who I would link to if I knew who they were. From there, I created this list of 27 good alternatives to Adobe’s Creative Cloud apps based on what people appeared to be excited about (or recommend in droves).

I haven’t tried out all of these apps myself, nor am I the target audience for them—as I don’t really dabble in 3D animation, alas. While we normally recommend apps we’ve used at Lifehacker, in this case, I’ve included recommendations from the various Twitter users who have suggested them when applicable. (It’s tough, as some apps just got called out by name, which is great for making a list, but not very helpful when describing an app’s features.)

If you don’t like any of these picks, you can always try befriending an educator (or a student) to score that sweet $20/month pricing for Adobe’s full subscription. A word of caution, however: That only works for the first year. After that, you’ll get charged the full, standard rate.

Apps for painting, graphic design, or photo editing

Krita (Windows / Mac)

“I can personally recommend Krita as a viable open illustration program. On the commercial side, I’ve heard good things of Clip Studio Paint and Paint Tool SAI. Krita also has re-editable file layers, filter/effect layers and layer styles.”@AwrySquare

Sketchbook (Windows / Mac)

“I use Sketchbook with my pen display and I can recommend it. It has a decently easy-to-navigate UI and allows you to save in a .psd format for an easy transfer. The only thing it really needs is clipping groups.”@xx_unsung_xx

MediBang Paint (Windows / Mac / Mobile)

“a really good free ipad app for art is Medibang paint. It’s just as simple to use as procreate, and has all the features and more :)!!!”@1lonelyegg

Paint.net (Windows)

“Getpaint.net is a great free Photoshop alternative and @inkscape is a great free Illustrator alternative. Been using those for years, and I have all the Adobe products, but those are still my go to’s. I basically only use my Adobe subscription for Premiere and AE.”@alexcchichester

Pixlr X (Web)

“Pixlr is a personal favorite! I believe they just added a paid option to get rid of ads but there’s still a fully functioning free version”@notjoykeenan

GIMP (Windows / Mac)

“For photo editing,GIMP is pretty much Photoshop but free ! While the UI may be less user-friendly,it can give out nice results !” @FarowlHd

Photoscape X (Windows / Mac)

“Photoscape is free and provides a pretty basic photo editing software! you can do a lot with it like make gifs and batch-edit photos in addition to your basics. been using it for 5+ years and have rarely needed something more” @trisk_philia

FireAlpaca (Windows / Mac)

“well now im glad i stick to sai and firealpaca. at least they arent laggy as shit and confusing to look at” @finnifinite

If you need a little more than that to consider FireAlpaca for your setup, the app comes with plenty of standard and quirky brushes for digitally painting your next great masterpiece (or comic). You can even make your own, if you’re feeling especially creative. For those looking to draw some comics, built-in templates make it easy to create specific layouts for a strip. The app’s “Onion Skin” mode also makes it easy to draw animations, as you’ll be creating new layers, or frames, while viewing the previous frame as a reference point.


Apps for creating storyboards

Storyboarder (Windows / Mac)

“The features on this are pretty good, and you DO NOT have to be able to render/draw well to use this! It can create shot types from key words, which is…. wild. :o” @TubOfCoolWhip


Apps for editing videos or creating video effects

HitFilm Express (Windows / Mac)

“Somone may have already mentioned these two but VSDC Editor and Hitflim are neat free editing softwares.”@NotQueenly

If I’m correct, HitFilm Express is an excellent tool for creating special effects—much more so than your standard video editing app, which might not be quite as fully featured for this kind of work. If you’re just looking to edit and trim videos, and maybe add a simple text overlay, other video apps on this list might be a better fit.

Shotcut (Windows / Mac)

“I found [Shotcut] to be a very good free editor for video editing. It’s worked very well for me and i still use it for smaller things.”@Monkeygameal

If you’re trying to get crazy, like edit 360-degree videos—as PCMag notes—this might not be the app for you. But for basic video editing with a reasonably uncluttered interface, you can’t go wrong with this free app.

DaVinci Resolve (Windows / Mac)

“Because of Davinci resolve I only have the photoshop/ light room bundle. Once I can find a better alternative to photoshop and light room. Im going to ditch that too.” @Breonnick_5

Kdenlive (Windows / Mac—sort-of)

Although this multi-track video editor is mainly for Linux users, you’ll still find some slightly older Windows and Mac builds to experiment with. Since the app uses FFmpeg libraries, you can import any video or audio file you want—pretty much. You also get a healthy number of transformations and effects to play with, which you can keyframe for greater precision.


Apps for 3D modeling, animation, or vector graphics

Blender (Windows / Mac)

“I hate Maya for similar reasons and stick to blender whenever I can.”@IRBlayne

Blender is the big-guns 3D modeling tool that you dabble with when you don’t want to pay for something like 3DS or Maya. The learning curve is steep, but it’s worth mastering if you’re serious about exploring the space. Once you get good, you can do a lot of amazing things with this free app:

Lumion (Windows, free for students)

“If you are a student, the student version of Lumion is FREE. It is an architecture program that renders reeaal fast and does all kinds of neat stuff such as automatic sites, insertable animations of people doing stuff, you can set things on fire, weather settings, and more.”@samanthagiford8

Synfig (Windows / Mac)

“Synfig for animation! it’s vector-based and works similar to Flash, it can’t do interactive stuff but Flash games are kind of dying anyway”@ljamesart

Anything that’s similar to Adobe Flash, but isn’t Adobe Flash, is a win in our book.

SketchUp (Web)

You’ll find this recommendation on the aforementioned “Cullen” list, which indicates it’s a great program for basic 3D modeling. Since it’s (now) completely web-based, you can use it right in your favorite browser on Windows or Mac—or on a Chromebook, I suppose. And, yes, everything you do automatically saves to the cloud, don’t worry.

MagicaVoxel (Windows / Mac)

Here’s another entry on the “Cullen” list—this time, their recommendation for a voxel/brick 3D modeling program. I’m not much of an artist, nor am I a Minecraft wizard (but I do love amazing pixel art), so I’ll instead leave you with a comment from this inspiring 2015 blog post: “I started with [MagicaVoxel] 5 months ago and feel like I have really mastered the tool. I saw a Tweet of voxel art image made on Magica Voxel from Ephtracy. That was when I just finished Monument Valley, which I loved. I had to try that tool and fell in love with it right away.”

MakeHuman (Windows / Mac) 

The mysterious “Cullen” also recommends MakeHuman if you want to fiddle around with creating digital characters in three dimensions. If I’m correct, you can import your creations into another app on our list—Blender—to animate them, which is as close as you’ll get to full-featured rendering software like 3DS or Maya without plunking down a ton of change.

Inkscape (Windows / Mac)

“The vector program Inkscape is a wonderful free alternative to Adobe Illustrator”@GrimdorkDesign

I consistently see Inkscape mentioned as an alternative to Adobe Illustrator around the web. I don’t use Illustrator myself, but if I did, this would be the first app I installed to escape Adobe’s subscription fees.


Apps for editing audio and creating music

LMMS (Windows / Mac)

“If we’re including music/audio editing software, LMMS and Cakewalk by BandLab are both good free DAWs!”@MystSaphyr

“DAW,” for those not in the know, is short for “digital audio workstation.” If you’re making music, go with LMMS (or Cakewalk, below). If you need to cut audio or convert something to an MP3, you’ll want an app like Audacity.

Cakewalk (Windows)

(See previous recommendation. Thanks, @MystSaphyr!)

ocenaudio (Windows / Mac)

“I’d like to add Ocean Audio [sic] as a simple audio editor as well as REAPER as an inexpensive & extremely powerful DAW (with infinite trial period)“ @fuzzblob

Audacity (Windows / Mac)

I almost shouldn’t need to say anything about Audacity at this point, as it’s been one of the best free audio editors around for years. It’s my go-to app whenever I need to cut and rearrange audio super-quick.


Apps for desktop publishing

Scribus (Windows / Mac)

In response to a question about InDesign alternatives: “Affinity has one coming/out already. But yeah, only other thing I’ve found is Scribus.” @dukiswa

Canva (Web)

“Pixlr was a really great place to start as an alternative to Photoshop and Canva works well as an alternative to InDesign !!”@lexgts


via Gizmodo
27 Free Alternatives to Adobe’s Expensive App Subscriptions

This “Impossible Screw” Has Mysterious Behavioral Properties

All of us understand how screws and bolts work. So imagine if you encountered a screw that you could advance, but not retract. That is, you can screw it in, but it won’t unscrew…unless you turn it from the other side. If you’re confused by what I mean, watch this “impossible screw” video and see if you can figure out what the hell is going on, before he reveals the secret:

I can’t think of any practical applications, beyond bringing this to a bar and using it to trick people into buying you drinks.


via Core77
This “Impossible Screw” Has Mysterious Behavioral Properties

Gun Review: Kel-Tec CP33 .22LR Pistol


Thirty-three rounds. That’s 33. The Kel-Tec CP33 holds 33 rounds of .22 LR. Unless, of course, you slap on a little magazine extension and load up 50. Fifty rounds in a pistol magazine. A factory magazine. You have my attention.

Boy oh boy is it easy and fun to go through all of that .22 LR, too!

courtesy Oleg Volk

There’s plenty to discuss when it comes to the CP33, but at first glance it’s all about that magazine capacity, right?

Whether it’s the flush-fitting, standard 33-round clear plastic magazines seen above…

Or extended to 50 rounds with the optional extra capacity baseplate. The 50-round extensions we used were 3D-printed prototypes, but they functioned just as well as the 33-round jobs.

Which is to say perfectly fine as long as you don’t rim lock the ammo. You see, .22 LR is a rimmed cartridge and in order to slide forward out of a magazine and feed into the chamber, the rim of the round being fed must be in front of the rim of the round that’s on deck.

Loading the CP33’s magazines is simple, though it takes some time. The feed lip design is such that avoiding rimlock is taken care of by the magazine and there’s little for the person loading the mag to worry about. However, it makes you slide each round in deliberately, one at a time, pushed to the rear, and it’s a process.

The CP33 is so dang fun to shoot (and shoot FAST) and the magazines are slow enough to load that I [jokingly] told Kel-Tec they should start a core exchange program. I’d love to purchase loaded mags, fire them empty, then ship them back for a refund of my core fee. We’ll just send mags back and forth — full from Kel-Tec to me, empty from me to Kel-Tec.

By the way, no, your eyes do not deceive you. The CP33’s magazines really are quadruple-stacked. There’s a clear polymer rib at the rear that keeps the rim side of each double stack from interfering with the other, and a stainless steel bar that keeps the crossed-bullet side of each stack from interfering with the other. Somehow the feeding at the top just magically happens.

Good news: should you manage to rimlock a couple of rounds — this happened to me twice over the course of about 30 magazines — it’s usually fixable through the skeletonized sides of the magazine. Unloading the fruits of your labor isn’t typically necessary. Or, just ignore a rimlocked round; sometimes it’ll feed anyway, or it will cause a stoppage that you can clear easily enough.

But enough about the magazine capacity. On to the gun . . .

Courtesy Oleg Volk

Holy cow 33 rounds, though, right? That’s wild. Sorry, sorry . . .

But seriously 50 rounds!?! That’s a lot of pew in a pistol. Especially in a standard-format (i.e. not a drum) magazine. In a caliber that’s affordable to shoot in quantity.

Okay so actually over to the pistol and it’s a big ol’ thing, right? It shares a significant amount of frame or “lower receiver” design with Kel-Tec’s CMR-30 carbine (possibly identical), which has lots of shared features with their PMR-30 pistol.

There’s much more rear overhang than a typical pistol. Which is usually a good thing for reliability; plenty of bolt travel provides fudge factor in design and timing.

It also means an extremely long sight radius. I was pinging steel targets past 100 yards with impressive reliability.

Some of that is the extra long sight radius and some of it is the really clear, bright sights. A swappable/removable green fiber optic front sight really pops.

And a fully adjustable orange fiber optic sight is at the rear.

My only real gripe about the feel or impression of quality on the CP33 is also visible in the photo above: I don’t like seeing those injection molding marks in the charging handle, and wasn’t a big fan of how the handle felt in my hand, either.

Excepting Kel-Tec’s standard assembly process of bolting together two clamshell halves to build their firearm frames, this is the only aspect of the CP33’s fit, finish, or overall quality that was below my expectations.

Also helping in the accuracy department is a stellar trigger. I mean truly great. It’s light — I didn’t get to measure it, but I’d guess 3.5 lbs. — extremely smooth, and it breaks crisply and cleanly. Crisp, short reset, too. The Kel-Tec CP33’s trigger is probably better than that of 95% of the sub-$750, .22LR pistols on the market.

As you’d expect, it’s a fixed barrel in the CP33. This, too, contributes to solid accuracy.

Courtesy Oleg Volk

But the accuracy of the semi-auto CP33 exceeded expectations. We shot many examples of the gun and they were all incredible tack drivers. They made every shooter look far better than usual, as we unrelentingly drilled distant targets at a much faster rate than anyone was used to.

Between the great trigger, great sights, and shockingly high mechanical accuracy plus the size, grip, and rimfire chambering that result in a most incredibly flat- and soft-shooting gun, the CP33 hits the mark. Fast.

Though who relies on iron sights? Thanks to the full-length top rail, the CP33 is ready for a red dot.

Thanks to threaded holes in its aluminum upper receiver, it’s also ready for other bolt-ons like thumb rests.

With a threaded barrel, the CP33 is ready to accept muzzle devices like compensators and suppressors. It suppressed extremely nicely — quiet, with no gas blowback or functional issues.

Cut into the CP33’s dust cover is a single M-LOK slot. Judging by the bolt at front and rear of this aluminum section, it looks like Kel-Tec has kept its options open for different dust cover designs, too.

An M-LOK slot provides plenty of options in and of itself, of course. Like a section of Picatinny rail or any number of direct-to-M-LOK accessories.

At the rear, Kel-Tec included a steel tab on either side below the charging handle. These could be used for a sling, or possibly for bolting on some sort of pistol brace in the future.

In the meantime, Kel-Tec will be offering a QD sling socket attachment. Nice.

But the accessory you really want is a suppressor. It’s the best muzzle device possible, right? Hit up Silencer Shop and they’ll walk you through the process. It ain’t that hard!

And a suppressor fits so very nicely on the CP33, too. That barrel shoulder is just the teeniest hair proud of the front of the upper receiver, so there’s no gap. It looks great with a can on it.

Functionally, the CP33 runs as you’d expect. Ambidextrous manual thumb safeties in the normal location — they snick cleanly on and off — and a bolt stop where it should be.

Only one control is located outside the norm — at least for us ‘Muricans — and that’s the heel magazine release. I happen to like this quite a bit, as it’s perfect for grabbing with your thumb as you strip the magazine out of the grip. It’s very natural when done that way, and I appreciated it on the CP33 just as I did on my PMR-30 and just as I always have on my HK P7.

Though I didn’t see this occur with any shooters and have never experienced it myself, shooters with large, meaty hands could accidentally depress the magazine release with their strong hand palm. At least according to the interwebs.

So it was extremely cold in Gillette, Wyoming, in mid-November when I shot the CP33 from Florida-based Kel-Tec CNC, Inc.

The guns were cold.

I was cold.

But things ran pretty well. Not perfectly, but pretty well.

Some of the lubrication was gummy in the single-digit temps. There were a few instances where the first two or three rounds didn’t want to feed from the magazine because the bolt was slow — just sticky in the thick lube.

There were a few mid-magazine feeding issues caused by a rimlocked round. With warmer temperatures and higher bolt speeds, those may or may not have resulted in stoppages.

And by “a few,” I do mean just a few. Maybe 10 total stoppages out of a couple thousand rounds. Overall the CP33 pistol ran pretty freakin’ well. At least as well as most rimfires might in these conditions. Unquestionably better than any .22LR handgun I’ve ever shot with a capacity over 30 rounds.

And it’s fun. Oh man, is it fun. Throw a suppressor on and the CP33 is a hearing-safe, high-capacity, rapid-fire little laser beam of a smile generator. It’s the kind of gun that reminds me how much fun target shooting is.

Kel-Tec’s CP33 pistol was so fun I needed a smoke. Plus it was cold in freaking Wyoming in freaking mid-November.

Kel-Tec’s new gun, the CP33, was so fun I changed my religion. Or it was just really, really cold for this now-Texan.

Kel-Tec’s CP33 was so fun I made reliable hits at 800 yards with a Kel-Tec RFB looking through a 6x Vortex scope with an off-kilter reticle shooting plain ol’ American Eagle ammo. In hindsight this may not be related to the CP33, but I was already in a good mood and determined to keep that going, so missing wasn’t an option.

Why a Florida gun in Wyoming? That I can’t answer. But I’m glad the question was asked, because shooting the CP33 made the cold and snow all worth it. It’s a 10 out of 10 on the fun scale. And did I mention it holds 33 rounds? Or 50!?

Specifications: Kel-Tec CP33 .22 LR Pistol

Caliber: .22 LR
Capacity: 33 rounds. 50 rounds with extension (!).
Weight: 24 ounces
Length: 10.6 inches
Height: 6 inches
Barrel Length: 5.55 inches
Sight Radius: 8.64 inches
MSRP: $475

Ratings (out of five stars):

Reliability * * *
I experienced a few hitches, but overall these early production models ran pretty darn well with a few brands of ammo. Feeding issues were primarily due to the cold temperatures and thick lube, but there were a couple stoppages due to rimlock.

Accuracy * * * * *
Shockingly accurate. Not only mechanically accurate, but incredibly easy to shoot to its potential.

Ergonomics * * *
The grip feels good in my hands, but there’s room for ergonomic improvement. I’m not a big fan of the Steyr-style rear charging handle, but it’s perfectly functional. The CP33 is a large thing, but once you’re shooting, it all clicks.

Customize This * * * * *
Suppressor-friendly threaded barrel, rear accessory attachment point, M-LOK section under the barrel, full Picatinny optics rail, threaded attachment points on either side of the aluminum receiver, and fully adjustable sights. This is a heck of a lot of customization potential right out of the box.

On The Range * * * * *
Fun in a gun. Have some gun fun with a fun gun. The CP33 is a rapid-fire smile machine.

Overall * * * *
As much as I love the CP33 (and I do! And I’ll be buying one), it isn’t quite a five-star gun. Maybe when Kel-Tec ramps up that loaded magazine core exchange program. Or if the rear charging handle sees some design tweaks or the aftermarket steps in. But if they make enough that they’re widely available and can be found for MSRP or less, I’m all-in, and this thing earns a rock-solid four stars all day long.


via The Truth About Guns
Gun Review: Kel-Tec CP33 .22LR Pistol

Protect your cores portably: ‘Defense Grid 2’ is coming to Nintendo Switch next month


(Official Hidden Path image)

Hidden Path Entertainment, headquartered in Bellevue, Wash., announced on Wednesday that it will release a remastered edition of Defense Grid 2 on the Nintendo Switch on February 7th.

Defense Grid 2 is a “tower defense” game, where players must protect vulnerable assets from enemy attack by setting up layers of fortifications, obstacles, and automated weapon systems. You rarely if ever take direct offensive action. Instead, you build up a potentially elaborate series of defenses that slow, stop, redirect, or destroy the enemy’s onslaught.

In DG2 in particular, your goal is to keep hordes of invading aliens away from your outposts. You can set up systems that shoot down, reroute, block, or slowly whittle away at the advancing aliens before they’re able to reach, grab, and escape with the floating cores that power your defenses. The game features “couch co-op” on the Switch, with each player using a Joy-Con controller.

In its press release, Hidden Path wrote that the Switch port “marks two longtime dreams our team has had: to bring you a Defense Grid game you can take on the go, and offering the game on a Nintendo console.”

From 2013 through 2014, the making of Defense Grid 2 was chronicled by Russ Pitts in a series for Polygon, beginning from its earliest planning stages. The game was initially funded via a successful Kickstarter campaign, but when that proved insufficient, an angel investor came aboard to bring the game closer to completion. The result, in 2014, was critically and commercially successful, and led to a VR port two years later.

The 2019 Switch release of Defense Grid 2 features the original 26-mission campaign, as well as Aftermath, an additional chapter with five extra missions that was formerly exclusive to the game’s VR release. As with the other versions, it will ship with platform-specific leaderboards, which let you track your performance on each stage against other members of the game’s Switch community.

Hidden Path Entertainment was founded in 2006 by a group of ex-Microsoft developers, and made its debut with 2008’s Defense Grid: The Awakening. Its other releases include Counter-Strike: Global Offensive, the updated 2013 Steam port of Age of Empires II, and last year, the real-time strategy VR game Brass Tactics.

Defense Grid 2 will be available digitally for $19.99 via the Nintendo eShop, and those who pre-order the game will receive a 10% discount. DG2 is also available via Steam for Windows, Linux, and OS X; on PlayStation 4 and Xbox One via both consoles’ online marketplaces; and has been released for virtual reality on the Oculus Rift and Samsung GearVR.


via GeekWire
Protect your cores portably: ‘Defense Grid 2’ is coming to Nintendo Switch next month

Spider-Man’s European vacation gets cut short in ‘Far From Home’ trailer

Even your friendly neighborhood Spider-Man needs a vacation. Between all of the mild-mannered studenting and Avenger-style world saving (not to mention what transpired during Infinity War), Peter Parker could clearly use a break.

The first trailer for July’s Far From Home finds Parker going Griswold for a little European vacation, sans suit (and Lindsey Buckingham soundtrack). But a surprise visit from Howling Commando Nick Fury, naturally, turns things on their head [implied record scratch sound effect].

This time, Spider-Man does battle with a suitably emo Jake Gyllenhaal as the globe-headed Mysterio, with help from some new suits, including what appears to be an homage to Steve Ditko’s original underarm webbing.

Far From Home has a tough act to follow after the absurdly wonderful Spider-Verse — not to mention some explaining to do following the events of the last Avengers. Though we should be up to speed by the time it rolls around. Endgame is due out in April, with the new Spider-Man arriving on July 5.


via TechCrunch
Spider-Man’s European vacation gets cut short in ‘Far From Home’ trailer