S&W M&P M2.0 Metal: Aluminum-Framed, Optics-Ready Carry Machine

https://cdn.athlonoutdoors.com/wp-content/uploads/sites/8/2022/08/MP-M2.0-Metal-Left.jpg

Smith & Wesson launched a really interesting update to the M&P line today. The S&W M&P M2.0 Metal brings an aluminum-framed, optics-ready variant into the vaunted duty/carry pistol series. The T6 aluminum frame should spark loads of interest among shooters who prefer a lightweight, rigid metal frame over polymer.

S&W M&P M2.0 Metal Details

The M2.0 Metal features an 18-degree grip angle for a natural point of aim, combined with four interchangeable palmswell grip inserts for a custom fit. A textured polymer front strap, wide slide stop, and reversible magazine release add familiar feel and controls.

Keeping with current trends, the M2.0 Metal comes pre-cut for slide-mounted optics. Meanwhile, a Picatinny rail forward of the trigger guard enables attachment of lights/lasers. Forward slide serrations, a low bore axis and M2.0 flat-face trigger blend an assortment of proven component choices.

The two-tone M&P M2.0 Metal comes with a Cerakote finish.

Obviously, what’s cool here is the metal frame. But the pistol also sports an equally appealing Tungsten Gray Cerakote finish on the frame and stainless steel slide. Combined with the black accents of the controls and palmswells, the M2.0 Metal delivers a really distinctive style and appearance.

More importantly, the 9mm pistol utilizes a 4.25-inch stainless steel barrel with an Armornite finish. The M2.0 Metal comes with a 17-round magazine, seated into a package that weighs just 30 ounces overall. The pistol rides in standard M&P9 holsters, and it also accepts any 17-round M2.0 magazine. Win-win all the way around.

Smith & Wesson is excited to introduce a new addition to the M&P family! Our new all-metal pistol is chambered in 9mm, has a 4.25-inch barrel, T6 aluminum frame, M2.0 flat-face trigger, wide slide stop, reversible magazine release, and a slide cut for optics, and has the M&P’s patented take-down lever and sear deactivation systems to allow for disassembly without pulling the trigger!

A smart choice, the Metal series comes with a pre-cut optic channel in the slide.

The S&W M&P M2.0 Metal retails for $899. For even more info, please visit smith-wesson.com.

Smith & Wesson M&P M2.0 Metal

  • Optimal 18-degree grip angle for natural point of aim
  • Four interchangeable palmswell grip inserts for optimal hand fit and trigger reach – S, M, ML, L
  • Textured polymer front strap
  • Wide slide stop
  • Reversible magazine release
  • Slide cut for optics
  • M2.0 flat face trigger for consistent finger placement that allows for more accurate and repeatable shooting
  • Picatinny-style rail
  • Forward slide serrations
  • Low barrel bore axis makes the M&P pistol comfortable to shoot, reducing muzzle rise and allowing for faster aim recovery
  • Enhanced sear for lighter, crisper trigger let-off
  • Accurate 1:10-inch twist barrel
  • M&P’s patented take-down lever and sear deactivation systems allow for disassembly without pulling the trigger
  • Accepts any 17-round M2.0™ magazine
  • Comes with two 17-round magazines
  • Fits standard M&P9 holster

Smith & Wesson M&P M2.0 Metal Specifications

  • Model: M&P9 M2.0 METAL
  • Caliber: 9mm Luger
  • Overall Capacity: 17+1
  • Optics: Yes
  • Color: Two-Tone
  • Safety: No Thumb Safety
  • Overall Length: 7.4 inches
  • Front Sight: Steel White Dot
  • Rear Sight: Steel White 2-Dot
  • Action: Striker-Fire
  • Grip: Interchangeable Palmswell Inserts (4)
  • Barrel Material: Stainless Steel with Armornite Finish
  • Slide Material: Stainless Steel
  • Frame Material: T6 Aluminum
  • Slide Finish: Tungsten Gray Cerakote
  • Frame Finish: Tungsten Gray Cerakote
  • Barrel Twist: 1:10˝ RH
  • Barrel Length: 4.25 inches
  • Overall Weight: 30 ounces

Editor’s Take:

Yes, the polymer-framed, striker-fired 9mm pistol still reigns supreme. Nevertheless, interest remains in metal-framed semi-autos. It appears S&W put some thought into this approach, a nice blend of contemporary components and style. Optics-ready right out of the box, you can carry this bad boy on duty, shoot a match, or carry it for defense. We will be especially interested to see how the Metal series builds out down the road, hopefully including a 5-inch competition model. In the meantime, stay tuned for a full range review coming soon.


The post S&W M&P M2.0 Metal: Aluminum-Framed, Optics-Ready Carry Machine appeared first on Tactical Life Gun Magazine: Gun News and Gun Reviews.


The problem with homeowners being more likely to vote

https://www.futurity.org/wp/wp-content/uploads/2022/08/home_ownership_voting_1600.jpg

Homeownership boosts voter turnout. But is that a good thing?

Buying a home is a cornerstone of the American Dream, and the US government has long encouraged it with generous subsidies. As far back as the early 20th century, defenders of this policy have argued that it fosters a contented middle class and an engaged electorate.

Indeed, it’s a truism of American politics that homeowners turn out to vote in higher numbers than people who rent their homes. It’s no surprise, then, that politicians tend to court their favor.

But is the link between homeownership and electoral participation one of causation or correlation? Does buying real estate cause a person to plod through the voter guide? Or could it be that those who are more likely to own a home—perhaps due to age, education, or other demographic factors—are simply more prone to vote?

This actually matters. For one thing, subsidizing homeownership to foster a healthy democracy makes sense only if it does so; otherwise, it’s a giveaway to those least in need. And on the flip side, if homeownership does prompt people to vote, that might suggest they’re driven partly by self-interest—specifically, the desire to boost their property values.

In that case, those voters will presumably favor things like restrictive zoning laws that exclude lower-income people and prevent new construction, thus making it harder for others to break into the housing market and worsening social inequalities.

To get to the bottom of this, Andrew B. Hall, a professor of political economy at Stanford Graduate School of Business, and Jesse Yoder collected two decades’ worth of election records on 18 million people in Ohio and North Carolina. Then they combined that with deed data to see if people’s behavior changed when they became homeowners.

The result? They found that buying a home really did cause people to vote substantially more in local elections—and the bump in turnout was almost twice as big when zoning issues were on the ballot. What’s more, the effect increased with the purchase price. The greater the asset value, the more likely people were to vote.

“Taken together, these findings strongly suggest that the increase in voting is driven, at least in part, by economic considerations,” Hall says. “People are clearly paying more attention and turning out in larger numbers to weigh in on policies that affect their investment.”

Their findings appear in the Journal of Politics.

Voting rates of homeowners and renters

The idea that voters are spurred by pecuniary motives might seem obvious in these disenchanted times. But it’s hardly in line with our ideals as a nation, nor is it universally accepted by scholars of politics or psychology.

For one thing, Hall points out, voting is costly. Researching the issues, watching debates, suffering doorstep canvassers, getting to the polling station on a workday, standing in line—it’s kind of a pain. (If it were fun, there’d be no distinction in sporting “I voted” buttons.)

And the kicker is that your effort has an infinitesimally tiny effect on the outcome. Don’t tell the kids, but no election is decided by one person’s vote. On a narrow, individual cost-benefit basis, voting is not a great value proposition.

Yet the fact remains that homeowners vote at higher rates than renters. A different theory, Hall says, is that both homeownership and voting rates reflect preexisting differences. People who own their homes are wealthier and more educated, and are therefore more likely to identify with social and governmental institutions—and less likely, perhaps, to be alienated and disengaged.

“Both of those stories are plausible,” Hall says, “and you can’t disentangle them with cross-sectional data. You need to look at individuals’ behavior over time.”

Also, although wealthy people vote more, it’s important to be clear that buying a home does not instantly raise someone’s net worth, so any immediate effect would not be driven by wealth. But does converting assets from cash or stocks to real estate increase one’s interest in local politics?

Apparently so. Using data on all registered voters and homeowners in Ohio, Hall and Yoder found that buying a home boosted an individual’s turnout rate in local general elections by 5 percentage points on average, to 31%. Since the baseline turnout rate in the sample was only 26%, that’s an almost 20% increase in propensity to vote.

The researchers then sliced up the data by home price and found that people with more expensive homes increased their turnout even more. The effect rose sharply as home prices increased, being more than twice as large in the top decile than in the bottom decile.

“This certainly suggests, though it doesn’t prove, that the increase in voting by homeowners is driven by economic self-interest,” Hall says, “because the motivation to protect and enhance an investment would naturally tend to increase with the value at stake.”

Why homeowners vote

The researchers next gathered data on local ballot initiatives in Ohio to see what kind of issues galvanized homeowners. Here, they found that the increase in turnout was greatest when zoning measures were being decided—the effect of home ownership on election turnout was nearly doubled in such cases.

Of course, the data doesn’t show how people voted, but it seems unlikely that they made a special effort to vote against their own interests. And that raises a troubling issue. “This tells us that encouraging homeownership doesn’t just incentivize people to vote,” Hall says, “it also likely changes how they vote and the kinds of policies they support.”

In particular, he says, buying a home may instill a preference for restrictive housing policies, and that may help explain the chronic housing shortage that has plagued the US for decades.

“We don’t have direct evidence of this in the paper, but the results are consistent with the story that homeowners capture the local political process and resist the building of more residences,” Hall says. “Depressing the local supply of homes raises prices, and it makes it harder for people to move to where there are economic opportunities.”

That reduces economic growth and deepens social inequalities by making it harder for low-income families to break into the housing market and accumulate wealth of their own. The interests of homeowners can also result in NIMBYism—resistance to public works and job-creating investments that might alter their neighborhood’s character or demographics.

Support for home ownership is deeply ingrained in American politics, partly because it’s believed to give people an investment in the democratic process. President George W. Bush made this explicit when he extolled the benefits of an “ownership society” in 2004. “When citizens become homeowners, they become stakeholders as well,” the White House declared.

But the ownership society contains a paradox: While the possession of property does encourage civic participation—as this study uses large-scale data to document for the first time—it may do so in a way that is ultimately self-limiting, by excluding those who failed to get in on the ground floor.

Source: Lee Simmons from Stanford University

The post The problem with homeowners being more likely to vote appeared first on Futurity.


NEW Smith & Wesson M&P9 M2.0 METAL Pistol

https://www.thefirearmblog.com/blog/wp-content/uploads/2022/08/NEW-Smith-Wesson-MP9-M2.0-METAL-Pistol-3-180×180.jpg

Smith & Wesson has just announced the release of a new iteration of their M&P9 M2.0 pistol dubbed M&P9 M2.0 METAL. As the model name suggests, the new pistol features a metal frame. The aluminum frame of the new M&P9 M2.0 METAL pistol is externally similar to its polymer counterpart and is compatible with standard M&P9 M2.0 […]

Read More …

The post NEW Smith & Wesson M&P9 M2.0 METAL Pistol appeared first on The Firearm Blog.


Laravel: Parallel Testing Is Now Available

We’re excited to announce that Parallel Testing is now available in Laravel. Starting with Laravel v8.25, you may use the built-in `test` Artisan command to run your tests simultaneously across multiple processes to significantly reduce the time required to run the entire test suite.

Laravel

Deploying Soketi to Laravel Forge – Part 2

In Part I of this tutorial we learnt how to install and deploy Soketi to our Laravel Forge servers.
Currently, Soketi is accessible over our server’s IP address, behind port 6001. In this post we’re going to modify our setup so that we can access our socket server via socket.my-domain.com. We’ll do this by using an Nginx reverse proxy.

Deploying Soketi to Laravel Forge

Soketi is a simple, fast, and resilient open-source WebSockets server written in TypeScript. It’s fully compatible with the Pusher v7 protocol, which makes it a great replacement for Pusher when using Laravel Echo.

Efficient Data Archiving in MySQL

https://www.percona.com/blog/wp-content/uploads/2022/08/Ra_ignore_t.png

Recently I have been working with a few customers with multiple terabytes of transactional data on their MySQL clusters. These very large datasets are not really needed for their daily operations, but they are very convenient because they allow them to query historical data easily. However, the convenience comes at a high price: you pay a lot more for storage, and backups and restores take much longer and are, of course, much larger. So, the question is: how can they perform “efficient data archiving”?

Let’s try to define what an efficient data archiving architecture would look like. We can lay out some key requirements:

  • The archive should be on an asynchronous replica
  • The archive replica should be using a storage configuration optimized for large datasets
  • The regular cluster should just be deleting data normally
  • The archiving system should remove delete statements from the replication stream and keep only the inserts and updates
  • The archiving system should be robust and able to handle failures and resume replication

Key elements

Our initial starting point is something like this:

The cluster is composed of a source (S) and two replicas (R1 and R2) and we are adding a replica for the archive (RA). The existing cluster is pretty much irrelevant in all the discussions that will follow, as long as the row-based replication format is used with full-row images.

The above setup is in theory sufficient to archive data but in order to do so, we must not allow the delete statements on the tables we want to archive to flow through the replication stream. The deletions must be executed with sql_log_bin = 0 on all the normal servers. Although this may look simple, it has a number of drawbacks. A cron job or a SQL event must be called regularly on all the servers. These jobs must delete the same data on all the production servers. Likely this process will introduce some differences between the tables. Verification tools like pt-table-checksum may start to report false positives.  As we’ll see, there are other options.
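To make the sql_log_bin = 0 approach concrete, here is a minimal sketch of the purge job that would have to run on every production server. The table name, `created_at` column, retention window, and batch size are all hypothetical, and the actual connection handling is omitted; this only shows the statements such a job would issue.

```python
# Sketch of a per-server purge job. Because SET SESSION sql_log_bin = 0
# hides the deletes from the binary log, this exact job must run on
# EVERY production server, which is the drawback described above.
# Table name, column, retention, and batch size are hypothetical.

def purge_statements(table, days, batch_size=1000):
    """Return the SQL statements one purge iteration would execute."""
    return [
        # Hide the deletes from the binary log for this session only.
        "SET SESSION sql_log_bin = 0",
        # Delete in bounded batches to avoid long-running locks.
        f"DELETE FROM {table} WHERE created_at < NOW() - INTERVAL {days} DAY "
        f"LIMIT {batch_size}",
    ]

if __name__ == "__main__":
    # Each statement would be run through a MySQL connection on every server.
    for stmt in purge_statements("tpcc.orders1", 30):
        print(stmt)
```

Because every server deletes independently, small timing differences between the jobs are exactly what makes tools like pt-table-checksum report false positives.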

Capturing the changes (CDC)

An important component we need is a way to capture the changes going to the table we want to archive. The MySQL binary log, when used with the row-based format and full row image, is perfect for the purpose.  We need a tool that can connect to a database server like a replica, convert the binary log event into a usable form, and keep track of its position in the binary log.

For this project, we’ll use Maxwell, a tool developed by Zendesk. Maxwell connects to a source server like a regular replica and outputs the row-based events in JSON format. It keeps track of its replication position in a table on the source server.

Removing deletions

Since the CDC component will output the events in JSON format, we just need to filter for the tables we are interested in and then ignore the delete events. You can use any programming language that has decent JSON and MySQL support. In this post, I’ll be using Python.
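A minimal sketch of that filter: Maxwell emits one JSON object per row event with `database`, `table`, and `type` fields, so deciding whether an event should be applied takes only a few lines (the database and table names below are placeholders):

```python
import json

# Keep only insert/update events for the tables being archived;
# everything else, including all deletes, is dropped.
# The (database, table) pair is illustrative.
ARCHIVED = {("tpcc", "orders1")}

def should_apply(line):
    """Return the parsed event if it should be applied, else None."""
    event = json.loads(line)
    if (event.get("database"), event.get("table")) not in ARCHIVED:
        return None
    if event.get("type") not in ("insert", "update"):
        return None  # this is where the deletes are silently discarded
    return event
```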

Storage engine for the archives

InnoDB is great for transactional workload but far less optimal for archiving data. MyRocks is a much better option, as it is write-optimized and is much more efficient at data compression.

Architectures for efficient data archiving

Shifted table

We have a few architectural options for our archiving replica. The first architecture, shown below, hooks the CDC to the archiving replica. This means if we are archiving table t, we’ll need to have on the archiving replica both the production t, from which data is deleted, and the archived copy tA, which keeps its data long term.

[Diagram: shifted table architecture, with both t and tA on the archiving replica]

The main advantage of this architecture is that all the components related to the archiving process only interact with the archiving replica. The negative side is, of course, the presence of duplicate data on the archiving replica as it has to host both t and tA. One could argue that the table t could be using the blackhole storage engine but let’s not dive down such a rabbit hole.

Ignored table

Another architectural option is to use two different replication streams from the source. The first stream is the regular replication link but the replica has the replication option replicate-ignore-table=t. The replication events for table t are handled by a second replication link controlled by Maxwell. The deletions events are removed and the inserts and updates are applied to the archiving replica.

[Diagram: ignored table architecture, with a second Maxwell-controlled replication stream]

While this latter architecture stores only a single copy of t on the archiving replica, it needs two full replication streams from the source.

Example

The application

My goal here is to provide the simplest possible working example. I’ll be using the shifted table approach with the sysbench TPC-C script. This script has an option, enable_purge, that removes old orders that have been processed. Our goal is to create the table tpccArchive.orders1, which contains all the rows, even the deleted ones, while the table tpcc.orders1 is the regular orders table. They have the same structure, but the archive table uses MyRocks.

Let’s first prepare the archive table:

mysql> create database tpccArchive;
Query OK, 1 row affected (0,01 sec)

mysql> use tpccArchive;
Database changed

mysql> create table orders1 like tpcc.orders1;
Query OK, 0 rows affected (0,05 sec)

mysql> alter table orders1 engine=rocksdb;
Query OK, 0 rows affected (0,07 sec)
Records: 0  Duplicates: 0  Warnings: 0

Capturing the changes

Now, we can install Maxwell. Maxwell is a Java-based application so a compatible JRE is needed. It will also connect to MySQL as a replica so it needs an account with the required grants.  It also needs its own maxwell schema in order to persist replication status and position.

root@LabPS8_1:~# apt-get install openjdk-17-jre-headless 
root@LabPS8_1:~# mysql -e "create user maxwell@'localhost' identified by 'maxwell';"
root@LabPS8_1:~# mysql -e 'create database maxwell;'
root@LabPS8_1:~# mysql -e 'grant ALL PRIVILEGES ON maxwell.* TO maxwell@localhost;'
root@LabPS8_1:~# mysql -e 'grant SELECT, REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO maxwell@localhost;'
root@LabPS8_1:~# curl -sLo - https://github.com/zendesk/maxwell/releases/download/v1.37.6/maxwell-1.37.6.tar.gz| tar zxvf -
root@LabPS8_1:~# cd maxwell-1.37.6/
root@LabPS8_1:~/maxwell-1.37.6# ./bin/maxwell -help
Help for Maxwell:

Option                   Description                                                                       
------                   -----------                                                                       
--config <String>        location of config.properties file                                                
--env_config <String>    json object encoded config in an environment variable                             
--producer <String>      producer type: stdout|file|kafka|kinesis|nats|pubsub|sns|sqs|rabbitmq|redis|custom
--client_id <String>     unique identifier for this maxwell instance, use when running multiple maxwells   
--host <String>          main mysql host (contains `maxwell` database)                                     
--port <Integer>         port for host                                                                     
--user <String>          username for host                                                                 
--password <String>      password for host                                                                 
--help [ all, mysql, operation, custom_producer, file_producer, kafka, kinesis, sqs, sns, nats, pubsub, output, filtering, rabbitmq, redis, metrics, http ]


In our example, we’ll use the stdout producer to keep things as simple as possible. 

Filtering script

In order to add and update rows in the tpccArchive.orders1 table, we need a piece of logic that identifies events for the table tpcc.orders1 and ignores the delete statements. Again, for simplicity, I chose to use a Python script. I won’t present the whole script here; feel free to download it from my GitHub repository. It is essentially a loop over the lines written to stdin. Each line is loaded as a JSON string, and then some decisions are made based on the values found. Here’s a small section of code at its core:

...
for line in sys.stdin:
    j = json.loads(line)
    if j['database'] == dbName and j['table'] == tableName:
        debug_print(line)
        if j['type'] == 'insert':
            # Let's build an insert ignore statement
            sql += 'insert ignore into ' + destDbName + '.' + tableName
...

The above section creates an “insert ignore” statement when the event type is ‘insert’. The script connects to the database using the user archiver and the password tpcc and then applies the event to the table tpccArchive.orders1.

root@LabPS8_1:~# mysql -e "create user archiver@'localhost' identified by 'tpcc';"
root@LabPS8_1:~# mysql -e 'grant ALL PRIVILEGES ON tpccArchive.* TO archiver@localhost;'
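The statement-building step from the excerpt above can be sketched a bit more fully. This simplified version turns a Maxwell “insert” event into a parameterized INSERT IGNORE; the real script on GitHub also handles updates, and the column names below are just illustrative:

```python
# Simplified version of the statement-building step: turn a Maxwell
# "insert" event into a parameterized INSERT IGNORE for the archive
# schema. Parameterized values avoid quoting/escaping bugs.

def insert_ignore_sql(event, dest_db="tpccArchive"):
    """Build (sql, values) for a Maxwell insert event."""
    cols = sorted(event["data"])           # deterministic column order
    placeholders = ", ".join(["%s"] * len(cols))
    sql = (f"INSERT IGNORE INTO {dest_db}.{event['table']} "
           f"({', '.join(cols)}) VALUES ({placeholders})")
    values = [event["data"][c] for c in cols]
    return sql, values
```

The (sql, values) pair would then be handed to a MySQL connector’s `cursor.execute()` under the archiver account created above.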

All together

Just to make it easy to reproduce the steps, here’s the application (tpcc) side:

yves@ThinkPad-P51:~/src/sysbench-tpcc$ ./tpcc.lua --mysql-host=10.0.4.158 --mysql-user=tpcc --mysql-password=tpcc --mysql-db=tpcc \
        --threads=1 --tables=1 --scale=1 --db-driver=mysql --enable_purge=yes --time=7200 --report-interval=10 prepare
yves@ThinkPad-P51:~/src/sysbench-tpcc$ ./tpcc.lua --mysql-host=10.0.4.158 --mysql-user=tpcc --mysql-password=tpcc --mysql-db=tpcc \
        --threads=1 --tables=1 --scale=1 --db-driver=mysql --enable_purge=yes --time=7200 --report-interval=10 run

The database is running in a VM whose IP is 10.0.4.158. The enable_purge option causes old orders1 rows to be deleted. For the archiving side, running on the database VM:

root@LabPS8_1:~/maxwell-1.37.6# bin/maxwell --user='maxwell' --password='maxwell' --host='127.0.0.1' \
        --producer=stdout 2> /tmp/maxerr | python3 ArchiveTpccOrders1.py

After the two-hour tpcc run, we have:

mysql> select  TABLE_SCHEMA, TABLE_ROWS, DATA_LENGTH, INDEX_LENGTH, ENGINE from information_schema.tables where table_name='orders1';
+--------------+------------+-------------+--------------+---------+
| TABLE_SCHEMA | TABLE_ROWS | DATA_LENGTH | INDEX_LENGTH | ENGINE  |
+--------------+------------+-------------+--------------+---------+
| tpcc         |      48724 |     4210688 |      2310144 | InnoDB  |
| tpccArchive  |    1858878 |    38107132 |     14870912 | ROCKSDB |
+--------------+------------+-------------+--------------+---------+
2 rows in set (0,00 sec)

A more realistic architecture

The above example is, well, an example. Any production system will need to be hardened much further. Here are a few requirements:

  • Maxwell must be able to restart and continue from the correct replication position
  • The Python script must be able to restart and continue from the correct replication position
  • The Python script must be able to reconnect to MySQL and retry a transaction if the connection is dropped.
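The third point, for instance, might look like the following sketch: retry the failed transaction a few times, reconnecting between attempts. The back-off parameters are arbitrary, and the connection handling is left as callbacks:

```python
import time

# Retry a transaction with exponential back-off, reconnecting to MySQL
# between attempts. apply_txn and reconnect are caller-supplied
# callables; the back-off parameters here are arbitrary.

def apply_with_retry(apply_txn, reconnect, attempts=3, base_delay=0.1):
    """Call apply_txn(); on failure, reconnect and retry with back-off."""
    for attempt in range(attempts):
        try:
            return apply_txn()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of retries: surface the error
            time.sleep(base_delay * 2 ** attempt)
            reconnect()
```

Since the insert-ignore statements are idempotent, replaying a transaction after a dropped connection is safe.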

Maxwell already takes care of the first point: it uses the database to store its current position.

The next logical step would be to add a more robust queuing system than a simple process pipe between Maxwell and the Python script. Maxwell supports many queuing systems, like Kafka, Kinesis, RabbitMQ, Redis, and many others. For our application, I tend to like a solution using Kafka and a single partition. Kafka doesn’t manage the offset of a message; that is up to the application. This means the Python script could update a row of a table as part of every transaction it applies, to keep track of its position in the Kafka stream. If the archive tables use RocksDB, the queue-position tracking table should also use RocksDB so the database transaction does not span storage engines.
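Under that design, each applied row event and the consumer’s Kafka offset are committed in one transaction. A sketch of the statements involved (the tracking table `tpccArchive.kafka_position` and its columns are made up for illustration):

```python
# One offset-tracked archiving transaction: apply the row event and
# upsert the Kafka offset it came from, atomically. The tracking table
# name and columns are hypothetical; it should use the same storage
# engine (MyRocks) as the archive tables so the transaction does not
# span engines.

def transaction_statements(event_sql, offset, consumer="orders1_archiver"):
    """Return the statements for a single offset-tracked transaction."""
    return [
        "BEGIN",
        event_sql,  # the INSERT IGNORE / UPDATE built from the event
        "INSERT INTO tpccArchive.kafka_position (consumer, last_offset) "
        f"VALUES ('{consumer}', {offset}) "
        f"ON DUPLICATE KEY UPDATE last_offset = {offset}",
        "COMMIT",
    ]
```

On restart, the script reads `last_offset` back and resumes the Kafka consumer from the next message, so no event is applied twice or skipped.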

Conclusion

In this post, I provided a solution for archiving data using the MySQL replication binary logs. Archiving fast-growing tables is a frequent need, and hopefully such a solution can help. It would be great to have a MySQL plugin on the replica able to filter the replication events directly; this would remove the need for an external solution like Maxwell and my Python script. Generally speaking, however, this archiving solution is just a specific case of a summary table. In a future post, I hope to present a more complete solution that will also maintain a summary.

Percona Database Performance Blog