Gun Review: Diamondback Sidekick .22 Revolver

https://cdn0.thetruthaboutguns.com/wp-content/uploads/2021/11/Diamondback-Sidekick-22LR-revolver.jpeg

The last few years have seen the introduction of a number of interesting .22 caliber revolvers. Among them are the Ruger Wrangler and the Heritage Barkeep. These affordable wheel guns are well suited to general carry and recreational use.

The latest entrant in the affordable .22 revolver race is slightly more expensive, but this one is a double action revolver with an interchangeable swing out cylinder.

The Sidekick’s cylinder release is incorporated into the ejection rod.

The Diamondback Sidekick was announced in August. It appears to be a clone of the High Standard Double Nine, and it will probably also remind many of the old H&R 929 Sidekick.

When I was growing up, it seemed almost everyone, including my grandfather, owned a Double Nine. When you wanted protection, but didn’t want a centerfire with its greater expense and recoil, the High Standard Double Nine was a popular choice.

The Diamondback Sidekick (top) and the Ruger Single Six

Designed to look like a traditional single action or cowboy gun with its plow-handle grip and large hammer spur, the Double Nine was a double action revolver with a swing-out cylinder. It was immensely popular and is missed by old-time shooters.

Today we have an alternative that may be a better gun. Modern manufacturing has given us an improved .22 revolver with much to recommend it.

The Diamondback Sidekick may be a clone, but it stands strong on its own merits. The revolver features a swing out cylinder with nine chambers. The cylinder release doubles as the ejector rod. Pull the ejector rod forward to release the cylinder. Load, close the cylinder, and you are ready to fire.

The double action revolver may be fired double action with a simple pull of the trigger or in single action by cocking the hammer and applying a light trigger press.

The Sidekick is smooth enough in double action for an economy revolver. The best means of managing the double action pull is to stage the trigger: press until the hammer almost falls, pause to get a solid sight picture, and then fire.

The single action trigger pull breaks at a clean, crisp four pounds. That invites single action shooting, and most shots fired with a Sidekick will probably be taken while plinking or during informal target practice. The double action trigger is pleasant enough to make for good double action training.

The traditional plow-handle grip with checkered GFN scales fits most hands well. Given the .22’s modest recoil, no step in the handle is needed to stabilize the hand for double action fire. The hammer spur allows for easy thumb cocking.

The barrel is 4.5 inches long, but expect other options to be offered down the road. The sights are the usual post front blade and grooved rear sight you’d expect on a nine-shooter like this. The sights are well regulated for a six o’clock hold at ten yards. The finish is Cerakote.

A great option the Sidekick gives you is a set of interchangeable cylinders, one in .22 Long Rifle and one in .22 Magnum. Both ship with the revolver. This isn’t something that’s been offered often with double action revolvers, as fitting the crane is more difficult than simply using a base pin as in a single action revolver.

The bolt holding the cylinder crane is spring-loaded. I used an old pen shaft to depress the latch and pull the cylinder away. Depress the latch again and snap the other cylinder in place. The system is simple, and after changing cylinders, headspace remains tight.

A simple groove in the top strap and a post front sight may not make for gilt-edged accuracy, but the sights are properly regulated for 40 grain loads. I used a mix of various makers’ 40 grain RNL loads to test the wheel gun. Five Remington Thunderbolts produced a 2.0-inch group at 15 yards. The Sidekick is more than accurate enough for informal target practice, plinking, and small game hunting.

The .22 Magnum cylinder offers a crackerjack option for larger pests. I won’t get into the .22 Magnum for personal defense debate, but if you want a rimfire for easy critter control at relatively low expense, the Sidekick is as good as any.

A natural comparison most will make here is the Ruger Wrangler, but the comparison isn’t really fair. The Wrangler and the Sidekick are about equally accurate. The Ruger, however, doesn’t have a .22 Magnum option. It’s also a single action gun with a six shot cylinder that loads via a loading gate.

The question then becomes: are those differences worth the extra outlay for the Diamondback revolver? I would gladly pay the difference for the Sidekick. They won’t hit retailers until next week, but I think Diamondback has a winner in this revolver.

Caliber: .22 LR/.22 Mag convertible
Action: Single/Double
Grips: Checkered, glass-filled polymer
Capacity: 9 rounds
Front Sight: Blade
Rear Sight: Integral
Barrel Length: 4.5 inches
Overall Length: 9.875 inches
Frame & Handle Finish: Black Cerakote
Overall Weight: 32.5 ounces
MSRP: $320 (expect about $290 retail)

Ratings (out of five stars):

Ergonomics * * * * *
The heft and balance are excellent. This classic revolver handles well and the grip is comfortable. There’s a reason the Colt SAA has been so popular for the last century and a half.

Accuracy * * * * *
For the price, and compared to the Ruger Wrangler and Heritage Rough Rider, the Sidekick is quite accurate. Soda cans and milk jugs should be afraid. Very afraid.

Reliability * * * * *
The Sidekick never failed, cracking off 240 .22 Long Rifle cartridges and 27 .22 Magnum rounds. The only reliability problem you may have with this gun will be due to the rimfire ammo that goes into it.

Value * * * * ½
There are less expensive similar guns that are also fine for plinking and taking small game. But they don’t have all the features of the Sidekick. You pays your money and takes your choice.

Overall * * * * *
I love the Sidekick. It’s a fun gun that will take game and guard the homestead quite well and it’s very high on the fun-to-shoot scale.

 

The Truth About Guns

12 must-see talks if you want to become a better Laravel developer

https://jcergolj.me.uk/assets/img/me.jpg

In my opinion, at least. 🙂

As a Laravel developer, I’ve spent a lot of time learning from some of the best Laravel developers. Do names such as Adam Wathan, Colin DeCarlo, and Jason McCreary ring a bell? They should. If they don’t, here is a quick fix: my list of 12 fantastic talks that you can learn a ton from.

Testing

Test-Driven Laravel

by Adam Wathan

An excellent intro to TDD. TDD seems easy until you need to test DB queries, generate PDFs, deal with APIs, and so on. He will teach you how to do all of that. Even better and more in depth is his course. A must-watch for every developer who doesn’t know where to start practising TDD.

Lies You’ve Been Told About Testing

by Adam Wathan

Yet another great talk from Adam about testing. Stop worrying about architecture. Start emphasising the details.

Code refactoring, patterns

Patterns That Pay Off

by Matt Stauffer

Matt talks about different patterns that we don’t think about when building an application. Then, he dives into picking better code patterns by reviewing code bases.

Writing code that lasts

by Raphael Dohms

Writing code that survives the test of time and self-judgment is a matter of clarity and simplicity. This is a talk about growing, learning and improving our code with calisthenics, readability and good design.

Everything I Ever Needed To Know About Web Dev, I Learned From My Twitter Timeline

by Colin DeCarlo

A somewhat lengthy title, but still worth watching. Colin DeCarlo shares some ideas on cleaning up the code in your application, gained from “fire tweets” on Twitter.

Cruddy by Design

by Adam Wathan

There are never enough controllers. From a @dhh tweet: “More controllers doing less work obviates the need for any other fancy patterns.” In this talk, Adam shows how you can move code from one controller into multiple ones.

Design Patterns with Laravel

by Colin DeCarlo

Colin gives an entertaining talk about three patterns: adapter, strategy, and factory.

Resisting Complexity

by Adam Wathan

Why is it OK that a User can be saved? Because, according to Adam, methods are affordances.
Furthermore, don’t be afraid of facades, he says. To be fair, I think this statement was more relevant in 2018 than today; facades are well accepted nowadays.

LaraconUs 2018

by Colin DeCarlo

Having the right tools is not the same as using the tools correctly. Learn how to use Laravel’s tools correctly, with more tips on making your code better and more readable.

Practicing YAGNI

by Jason McCreary

This talk is about how to avoid overengineering and why “you ain’t gonna need it” is good advice.
Jason also has a BaseCode course. It is about code refactoring. Do check it out.
A field guide containing real-world practices to help you write code that’s less complex and more readable.

Laracon US

by Sandi Metz

She is one of the gurus of the Rails world. Different language, same rules. Do you know what a code smell is?
Really? Name five. Spoiler: nobody can.

MySQL

Eloquent Performance Patterns

by Jonathan Reinink

If MySQL is not your thing, you can still learn a lot here. Even better, Jonathan shows how to write more complex and optimised queries with Laravel Eloquent. Finally, if you like the talk, there is an entire course dedicated to it.

Happy watching & learning.

Disclaimer: nobody paid me to promote those courses here. I bought and watched all of them. And I learned a lot. I generally think those courses are worth paying for.

Laravel News Links

‘This is a Left-Wing Cult’: Joe Rogan UNLOADS on Dishonest Media Coverage of Kyle Rittenhouse Trial

https://www.louderwithcrowder.com/media-library/image.png?id=27984877&width=980

Day Two of Rittenhouse jury deliberation has begun. As we wait, we reflect on what garbage human beings the mainstream media have been throughout this entire thing. The main reason he’s on trial is the lies and untruths the media has spread. And this is just one man’s opinion, but I’m sure being attacked by the media is on the minds of at least a few jurors if they’re thinking about voting to acquit Rittenhouse. Joe Rogan is someone who knows firsthand how much the media lies and swears by it. Once people discover this rant by Dr. Joe, MD, the media will proclaim “who, us?” Brian Stelter is probably crying into his breakfast cheesecake as we speak.

"This information is not based on reality. This is a left-wing cult. They’re pumping stuff out and then they are confirming this belief. They are all getting together, and they are ignoring contrary evidence. They are ignoring any narrative that challenges their belief about what happened, and they are not looking at it realistically. They are only looking at it like you would if you were in a f*cking cult."

As an aside, what a cast of characters! Drew Hernandez, Tim Pool, Blaire White, AND Alex Jones.

More people need to be exposed to who the media is. They won’t just go after you if you hold a different opinion than them or if they think they can use you to advance their leftist narrative. They’ll go after you if they even assume you have a different opinion. Rittenhouse is only one of the most extreme examples of it.


SNL Propaganda Isn’t Even Trying Anymore | Louder With Crowder

youtu.be

Louder With Crowder

How Triggers May Significantly Affect the Amount of Memory Allocated to Your MySQL Server

https://www.percona.com/blog/wp-content/uploads/2021/11/Triggers-Affect-Memory-Allocated-to-Your-MySQL-Server-300×157.png

MySQL stores active table descriptors in a special memory buffer called the table open cache. This buffer is controlled by the configuration variable table_open_cache, which holds the maximum number of table descriptors MySQL should store in the cache, and table_open_cache_instances, which stores the number of table cache instances. With the default values of table_open_cache=4000 and table_open_cache_instances=16, MySQL will create 16 independent memory buffers that will store 250 table descriptors each. These table cache instances can be accessed concurrently, allowing DML to use cached table descriptors without locking each other.

If your tables have no triggers, the table cache does not require a lot of memory because descriptors are lightweight; even if you significantly increase the value of table_open_cache, the required amount of memory would not be very high. For example, 4000 tables will take up to 4000 x 4 KiB = 16 MB in the cache, and 100,000 tables will take up to 390 MB, which is still not a huge amount for that number of tables.
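
As a rough sanity check on that arithmetic, the estimate (assuming roughly 4 KiB per plain table descriptor, as above) can be sketched in a few lines of Python:

```python
# Back-of-the-envelope estimate of table open cache memory for tables
# WITHOUT triggers, assuming ~4 KiB per table descriptor (as in the text).
DESCRIPTOR_KIB = 4

def table_cache_estimate_mib(table_open_cache: int) -> float:
    """Upper bound, in MiB, for a fully populated table cache."""
    return table_open_cache * DESCRIPTOR_KIB / 1024

print(table_cache_estimate_mib(4000))     # default cache size: ~16 MiB
print(table_cache_estimate_mib(100_000))  # 100,000 tables: ~390 MiB
```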

However, if your tables have triggers, it changes the game.

For the test I created a table with a single column and inserted a row into it:

mysql> CREATE TABLE tc_test( f1 INT);
Query OK, 0 rows affected (0,03 sec)

mysql> INSERT INTO tc_test VALUES(1);
Query OK, 1 row affected (0,01 sec)

Then I flushed the table cache and measured how much memory it uses:

mysql> FLUSH TABLES;
Query OK, 0 rows affected (0,02 sec)

mysql> SHOW GLOBAL STATUS LIKE 'Open_tables';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| Open_tables   |     2 |
+---------------+-------+
1 row in set (0,00 sec)

mysql> SELECT current_alloc FROM sys.memory_global_by_current_bytes 
    -> WHERE event_name='memory/sql/TABLE_SHARE::mem_root';
+---------------+
| current_alloc |
+---------------+
|     60.50 KiB |
+---------------+
1 row in set (0,00 sec)

mysql> SELECT current_alloc FROM sys.memory_global_by_current_bytes 
    -> WHERE event_name='memory/sql/sp_head::main_mem_root';
Empty set (0,00 sec)

Then I accessed the table to put it into the cache.

$ for i in `seq 1 1 16`; do mysql test -e "SELECT * FROM tc_test"; done
...

mysql> SHOW GLOBAL STATUS LIKE 'Open_tables';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| Open_tables   |    20 |
+---------------+-------+
1 row in set (0,00 sec)

mysql> SELECT current_alloc FROM sys.memory_global_by_current_bytes 
    -> WHERE event_name='memory/sql/TABLE_SHARE::mem_root';
+---------------+
| current_alloc |
+---------------+
|     75.17 KiB |
+---------------+
1 row in set (0,00 sec)

mysql> SELECT current_alloc FROM sys.memory_global_by_current_bytes 
    -> WHERE event_name='memory/sql/sp_head::main_mem_root';
Empty set (0,01 sec)

16 table descriptors took less than 16 KiB in the cache.

Now let’s try to create some triggers on this table and see if it changes anything.

mysql> CREATE TRIGGER tc_test_ai AFTER INSERT ON tc_test FOR EACH ROW 
    -> BEGIN 
    ->   SIGNAL SQLSTATE '45000' SET message_text='Very long string. 
    ->     MySQL stores table descriptors in a special memory buffer, called table open cache. 
    ->     This buffer could be controlled by configuration variables table_open_cache that 
    ->     holds how many table descriptors MySQL should store in the cache and table_open_cache_instances 
    ->     that stores the number of the table cache instances. So with default values of table_open_cache=4000 
    ->     and table_open_cache_instances=16, you will have 16 independent memory buffers that will store 250 
    ->     table descriptors each. These table cache instances could be accessed concurrently, allowing DML 
    ->     to use cached table descriptors without locking each other. If you use only tables, the table cache 
    ->     does not require a lot of memory, because descriptors are lightweight, and even if you significantly 
    ->     increased the value of table_open_cache, it would not be so high. For example, 4000 tables will take 
    ->     up to 4000 x 4K = 16MB in the cache, 100.000 tables will take up to 390MB that is also not quite a huge 
    ->     number for this number of open tables. However, if your tables have triggers, it changes the game.'; 
    -> END|

Then let’s flush the table cache and test memory usage again.

Initial state:

mysql> FLUSH TABLES;
Query OK, 0 rows affected (0,00 sec)

mysql> SHOW GLOBAL STATUS LIKE 'Open_tables';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| Open_tables   |     2 |
+---------------+-------+
1 row in set (0,00 sec)

mysql> SELECT current_alloc FROM sys.memory_global_by_current_bytes 
    -> WHERE event_name='memory/sql/TABLE_SHARE::mem_root';
+---------------+
| current_alloc |
+---------------+
|     60.50 KiB |
+---------------+
1 row in set (0,00 sec)

mysql> SELECT current_alloc FROM sys.memory_global_by_current_bytes 
    -> WHERE event_name='memory/sql/sp_head::main_mem_root';
Empty set (0,00 sec)

After I put the tables into the cache:

$ for i in `seq 1 1 16`; do mysql test -e "SELECT * FROM tc_test"; done
...

mysql> SHOW GLOBAL STATUS LIKE 'Open_tables';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| Open_tables   |    20 |
+---------------+-------+
1 row in set (0,00 sec)

mysql> SELECT current_alloc FROM sys.memory_global_by_current_bytes 
    -> WHERE event_name='memory/sql/TABLE_SHARE::mem_root';
+---------------+
| current_alloc |
+---------------+
|     75.17 KiB |
+---------------+
1 row in set (0,00 sec)

mysql> SELECT current_alloc FROM sys.memory_global_by_current_bytes 
    -> WHERE event_name='memory/sql/sp_head::main_mem_root';
+---------------+
| current_alloc |
+---------------+
|    611.12 KiB |
+---------------+
1 row in set (0,00 sec)

As a result, in addition to the 75.17 KiB in the table cache, 611.12 KiB is occupied by memory/sql/sp_head::main_mem_root, which is the “Mem root for parsing and representation of stored programs.”

This means that each time the table is put into the table cache, all of its associated triggers are loaded into the memory buffer that stores their definitions.

The FLUSH TABLES command clears the stored program cache as well as the table cache:

mysql> FLUSH TABLES;
Query OK, 0 rows affected (0,01 sec)

mysql> SELECT current_alloc FROM sys.memory_global_by_current_bytes 
    -> WHERE event_name='memory/sql/sp_head::main_mem_root';
Empty set (0,00 sec)

More triggers increase memory usage when the table is put into the cache.

For example, if we create five more triggers and repeat our test we will see the following numbers:

mysql> \d |
mysql> CREATE TRIGGER tc_test_bi BEFORE INSERT ON tc_test FOR EACH ROW BEGIN SIGNAL SQLSTATE '45000' SET message_text='Very long string. [same long text as in the first trigger]'; END|
Query OK, 0 rows affected (0,01 sec)

mysql> CREATE TRIGGER tc_test_bu BEFORE UPDATE ON tc_test FOR EACH ROW BEGIN SIGNAL SQLSTATE '45000' SET message_text='Very long string. [same long text as in the first trigger]'; END|
Query OK, 0 rows affected (0,02 sec)

mysql> CREATE TRIGGER tc_test_bd BEFORE DELETE ON tc_test FOR EACH ROW BEGIN SIGNAL SQLSTATE '45000' SET message_text='Very long string. [same long text as in the first trigger]'; END|
Query OK, 0 rows affected (0,01 sec)

mysql> CREATE TRIGGER tc_test_au AFTER UPDATE ON tc_test FOR EACH ROW BEGIN SIGNAL SQLSTATE '45000' SET message_text='Very long string. [same long text as in the first trigger]'; END|
Query OK, 0 rows affected (0,01 sec)

mysql> CREATE TRIGGER tc_test_ad AFTER DELETE ON tc_test FOR EACH ROW BEGIN SIGNAL SQLSTATE '45000' SET message_text='Very long string. [same long text as in the first trigger]'; END|
Query OK, 0 rows affected (0,01 sec)

mysql> SHOW GLOBAL STATUS LIKE 'Open_tables';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| Open_tables   |    35 |
+---------------+-------+
1 row in set (0,00 sec)

mysql> SELECT current_alloc FROM sys.memory_global_by_current_bytes 
    -> WHERE event_name='memory/sql/TABLE_SHARE::mem_root';
+---------------+
| current_alloc |
+---------------+
|    446.23 KiB |
+---------------+
1 row in set (0,00 sec)

mysql> SELECT current_alloc FROM sys.memory_global_by_current_bytes 
    -> WHERE event_name='memory/sql/sp_head::main_mem_root';
+---------------+
| current_alloc |
+---------------+
|      3.58 MiB |
+---------------+
1 row in set (0,00 sec)

The numbers for the event memory/sql/sp_head::main_mem_root differ by a factor of six:

mysql> SELECT 3.58*1024/611.12;
+------------------+
| 3.58*1024/611.12 |
+------------------+
|         5.998691 |
+------------------+
1 row in set (0,00 sec)

Note that the length of the trigger definition affects the amount of memory allocated by the memory/sql/sp_head::main_mem_root.

For example, if we define the triggers as follows:

mysql> DROP TABLE tc_test;
Query OK, 0 rows affected (0,02 sec)

mysql> CREATE TABLE tc_test( f1 INT);
Query OK, 0 rows affected (0,03 sec)

mysql> INSERT INTO tc_test VALUES(1);
Query OK, 1 row affected (0,01 sec)

mysql> \d |
mysql> CREATE TRIGGER tc_test_ai AFTER INSERT ON tc_test FOR EACH ROW BEGIN SIGNAL SQLSTATE '45000'
SET message_text='Short string';end |
Query OK, 0 rows affected (0,01 sec)

mysql> CREATE TRIGGER tc_test_au AFTER UPDATE ON tc_test FOR EACH ROW BEGIN SIGNAL SQLSTATE '45000'
SET message_text='Short string';end |
Query OK, 0 rows affected (0,04 sec)

mysql> CREATE TRIGGER tc_test_ad AFTER DELETE ON tc_test FOR EACH ROW BEGIN SIGNAL SQLSTATE '45000'
SET message_text='Short string';end |
Query OK, 0 rows affected (0,01 sec)

mysql> CREATE TRIGGER tc_test_bi BEFORE INSERT ON tc_test FOR EACH ROW BEGIN SIGNAL SQLSTATE '45000'
SET message_text='Short string';end |
Query OK, 0 rows affected (0,01 sec)

mysql> CREATE TRIGGER tc_test_bu BEFORE UPDATE ON tc_test FOR EACH ROW BEGIN SIGNAL SQLSTATE '45000'
SET message_text='Short string';end |
Query OK, 0 rows affected (0,02 sec)

mysql> CREATE TRIGGER tc_test_bd BEFORE DELETE ON tc_test FOR EACH ROW BEGIN SIGNAL SQLSTATE '45000'
SET message_text='Short string';end |
Query OK, 0 rows affected (0,01 sec)

mysql> \d ;
mysql> FLUSH TABLES;
Query OK, 0 rows affected (0,00 sec)

mysql> SHOW GLOBAL STATUS LIKE 'Open_tables';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| Open_tables   |     2 |
+---------------+-------+
1 row in set (0,01 sec)

mysql> SELECT current_alloc FROM sys.memory_global_by_current_bytes 
    -> WHERE event_name='memory/sql/TABLE_SHARE::mem_root';
+---------------+
| current_alloc |
+---------------+
|     60.50 KiB |
+---------------+
1 row in set (0,00 sec)

mysql> SELECT current_alloc FROM sys.memory_global_by_current_bytes 
    -> WHERE event_name='memory/sql/sp_head::main_mem_root';
Empty set (0,00 sec)

$ for i in `seq 1 1 16`; do mysql test -e "select * from tc_test"; done
...

mysql> SHOW GLOBAL STATUS LIKE 'Open_tables';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| Open_tables   |    35 |
+---------------+-------+
1 row in set (0,00 sec)

mysql> SELECT current_alloc FROM sys.memory_global_by_current_bytes 
    -> WHERE event_name='memory/sql/TABLE_SHARE::mem_root';
+---------------+
| current_alloc |
+---------------+
|    446.23 KiB |
+---------------+
1 row in set (0,00 sec)

mysql> SELECT current_alloc FROM sys.memory_global_by_current_bytes 
    -> WHERE event_name='memory/sql/sp_head::main_mem_root';
+---------------+
| current_alloc |
+---------------+
|      1.89 MiB |
+---------------+
1 row in set (0,00 sec)

The resulting amount of memory is 1.89 MiB instead of 3.58 MiB for the longer trigger definition.

Note that having a single table cache instance requires less memory to store trigger definitions. For example, for our six small triggers, it will be 121.12 KiB instead of 1.89 MiB:

mysql> SHOW GLOBAL VARIABLES LIKE 'table_open_cache_instances';
+----------------------------+-------+
| Variable_name              | Value |
+----------------------------+-------+
| table_open_cache_instances |     1 |
+----------------------------+-------+
1 row in set (0,00 sec)

mysql> FLUSH TABLES;
Query OK, 0 rows affected (0,00 sec)

mysql> SHOW GLOBAL STATUS LIKE 'Open_tables';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| Open_tables   |     2 |
+---------------+-------+
1 row in set (0,00 sec)

mysql> SELECT current_alloc FROM sys.memory_global_by_current_bytes 
    -> WHERE event_name='memory/sql/sp_head::main_mem_root';
Empty set (0,00 sec)

$ for i in `seq 1 1 16`; do mysql test -e "select * from tc_test"; done
...

mysql> SELECT current_alloc FROM sys.memory_global_by_current_bytes 
    -> WHERE event_name='memory/sql/sp_head::main_mem_root';
+---------------+
| current_alloc |
+---------------+
|    121.12 KiB |
+---------------+
1 row in set (0,00 sec)
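
Taken together, the measurements suggest a simple model: every table cache instance that holds the table also keeps a copy of its trigger definitions, so trigger memory scales with the instance count. A hypothetical sketch consistent with the numbers above:

```python
# Hypothetical model based on the measurements in this post: each table
# cache instance holding the table keeps its own copy of the trigger
# definitions, so total trigger memory scales with the instance count.
def trigger_cache_estimate_kib(per_instance_kib: float, instances: int) -> float:
    return per_instance_kib * instances

# Six short triggers measured ~121.12 KiB with a single cache instance;
# with the default 16 instances the same triggers took ~1.89 MiB.
print(trigger_cache_estimate_kib(121.12, 1))          # 121.12 KiB
print(trigger_cache_estimate_kib(121.12, 16) / 1024)  # ~1.89 MiB
```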

Conclusion

When you access tables that have associated triggers, their definitions are put into the stored program cache even when the triggers are not fired. This was reported as MySQL Bug #86821 and closed as “Not a Bug” by Oracle. It is, indeed, not a bug but a consequence of the design of the table and stored routine caches. Still, it is good to be prepared so you are not surprised when you run out of memory faster than you expect, especially if you have many triggers with long definitions.

Percona Database Performance Blog

BREAKING: Mark McCloskey Argues with BLM Protestors While Rittenhouse Jury Deliberates [VIDEO]

https://cdn0.thetruthaboutguns.com/wp-content/uploads/2021/11/2021-11-16_17-02-30.png

Mark McCloskey gets into it verbally with a protestor outside the courthouse in Kenosha. (Photo credit: Fox News)

While jurors deliberate inside the courthouse in Kenosha, Wisconsin in the Kyle Rittenhouse case – the outcome of which promises to be important for gun owners everywhere – Mark McCloskey is outside arguing with protestors. Why? Because there’s never a dull moment in what has been a circus of a two-week trial.

Fox News reports:

Mark McCloskey, the St. Louis lawyer who made national headlines last year when he carried a gun on his property near a social justice protest in his neighborhood, argued with a protester outside the Kenosha County Courthouse on Tuesday afternoon. 

“It really hurts me that you would have that much hatred,” the protester told McCloskey. 

“There is absolutely no hatred involved in what I did,” McCloskey responded. “They came in, storming through my gate, broke down my gate, stormed toward my house, and I was afraid for my life.”

If the jurors can’t reach a decision in the case, Judge Bruce Schroeder will be polling them to find out if they want to continue deliberating. Is McCloskey helping by grabbing another fifteen minutes of fame – or would that be infamy? – on the courthouse steps? Seems unlikely.

Here’s part of the exchange between McCloskey and a protestor.

The Truth About Guns

How to add Server Timing Header information for Laravel Application

https://postsrc.com/storage/images/snippets/how-to-add-server-timing-header-information-for-laravel-application.jpeg

To get the server timing information and pass it in the response header in a Laravel application, you can make use of the “laravel-server-timing” package by beyondcode. This package allows you to see the total number of milliseconds the application takes to bootstrap and execute.

Add Server-Timing header information from within your Laravel apps.

Step 1: Install the package using Composer

First, you will have to install the package with Composer using the command below.

composer require beyondcode/laravel-server-timing

Step 2: Add the Middleware Class

To add server-timing header information, you need to add the \BeyondCode\ServerTiming\Middleware\ServerTimingMiddleware::class middleware to your HTTP kernel. To get the most accurate results, put the middleware first in the middleware stack.

\BeyondCode\ServerTiming\Middleware\ServerTimingMiddleware::class
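
For illustration, a minimal sketch of what that registration might look like in a typical app/Http/Kernel.php (the class shown mirrors a default Laravel install; the comment placeholder stands in for your existing middleware entries):

```php
<?php

namespace App\Http;

use Illuminate\Foundation\Http\Kernel as HttpKernel;

class Kernel extends HttpKernel
{
    protected $middleware = [
        // Put ServerTimingMiddleware first so it can measure the
        // time spent in the rest of the middleware stack.
        \BeyondCode\ServerTiming\Middleware\ServerTimingMiddleware::class,
        // ...your existing global middleware...
    ];
}
```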

Step 3: View the timing from the browser

To view the timing, open your browser’s developer tools and check the Server Timing tab in the network inspector.

Adding Additional Measurements

Sometimes you might also want to measure additional parts of your code using the snippet below. By doing so, you will know how long that code takes to execute and whether it requires more optimization for speedy performance.

ServerTiming::start('Running expensive task');

// do something

ServerTiming::stop('Running expensive task');

Optional: Publish the configuration

You can also publish the configuration by running the command below.

php artisan vendor:publish --tag=server-timing-config

In addition to the default configuration, you can set the value in timing.php as follows to ensure it syncs with the .env variable.

<?php

return [
    'enabled' => env('SERVER_TIMING', false),
];

Laravel News Links