The 9 Best Data Visualization Methods That Add Value to Any Report


Professional data analysts use data visualization techniques like graphs, charts, and maps to create reports from numerical data. These visual elements help others understand the patterns, trends, and outliers in any data set.

By knowing and applying the best data visualization techniques, you can also enhance your project reports. Read on to become your own data analyst or learn more about the mathematical visualization of data.

1. Choropleth Map

You can use a choropleth map to visualize data tied to geographic locations. A color gradient represents the data values: the color deepens as values rise and fades as they fall.

This data visualization technique enables you to see the change of data variables from one region to another. You can use this technique to visualize population density, employment rate, etc. If you want to find out the country-wise consumption of your services or content, this technique is also useful.

You’ll find the Google Data Studio Geo Chart element handy for creating choropleth maps. You can add your data to Google Sheets and then import it into Google Data Studio to create different views of your choropleth map. You’ll also get an interactive graphic that shows the underlying data when you hover your mouse cursor over any region of the map.
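The color-gradient mapping at the heart of a choropleth can be sketched in a few lines of plain Python (this is just to illustrate the idea, not how Data Studio works internally; the region names and density figures are invented):

```python
def value_to_hex(value, vmin, vmax):
    """Linearly interpolate a value onto a white-to-blue gradient."""
    t = (value - vmin) / (vmax - vmin)  # 0.0 = lightest, 1.0 = deepest
    # Fade the red and green channels from 255 down to 0 as t grows.
    r = g = round(255 * (1 - t))
    return f"#{r:02x}{g:02x}ff"

# Hypothetical population densities per region
density = {"North": 120, "South": 980, "East": 430, "West": 55}
vmin, vmax = min(density.values()), max(density.values())
colors = {region: value_to_hex(v, vmin, vmax) for region, v in density.items()}

print(colors["West"])   # → #ffffff  (lowest value, lightest shade)
print(colors["South"])  # → #0000ff  (highest value, deepest shade)
```

A real choropleth tool does exactly this kind of interpolation before painting each region.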


2. Bar Chart

Bar charts are ideal for comparing multiple components of a data set. Comparing revenue across financial quarters or years is the most common application.

You need to place the categories that you want to compare on one axis. On the other axis, you need to keep the measured values. Depending on the values of this second axis, the length of the bars will vary.

You can make your bar charts more informative by adding different colors, 3D effects, legends, interactive data views, etc. Google Data Studio offers you multiple forms of bar charts. You can choose from column/stacked column, bar/stacked bar, etc.
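The core scaling rule, that bar length is proportional to the measured value, is simple enough to sketch in a few lines of Python (a text-mode illustration with made-up revenue figures, not tied to any particular charting tool):

```python
revenue = {"Q1": 42, "Q2": 61, "Q3": 38, "Q4": 75}  # hypothetical figures

def bar_chart(data, width=30):
    """Render a horizontal bar chart; bar length is proportional to value."""
    peak = max(data.values())
    lines = []
    for label, value in data.items():
        bar = "#" * round(width * value / peak)  # scale each bar to the peak
        lines.append(f"{label} | {bar} {value}")
    return "\n".join(lines)

print(bar_chart(revenue))
```

The categories sit on one axis (the rows) and the measured values drive the bar lengths, exactly as described above.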

3. 3D Maps

3D maps are essentially charts made of bars or columns plotted on a geographical map. You could use this type of chart to flaunt your knack for tech when presenting reports. 3D maps are ideal for data sets on products, services, or populations that are tied to specific regions of the globe.

Microsoft Excel has an elaborate feature for creating such cool pictorial data visualizations. It’s called Microsoft 3D Maps. You’ll find it on the Insert tab of the Excel ribbon.

4. Sankey Diagram

This data visualization technique is appropriate for showing how data flows. Plain text labels or rectangular boxes represent the entities, or nodes, of your data set. Arcs or arrows of varying widths represent the links between those nodes.

The width of each arrow or arc is directly proportional to the magnitude of the flow it represents. This makes the chart helpful for visualizing large data sets about the flow of resources such as energy, money, or time in a project.

5. Network Graphs

You can show the relationship between multiple entities by creating a network graph. On a network graph, nodes or vertices represent each entity. Edges or links represent the connection between these nodes.

You can use network graphs to find patterns, anomalies, and insights from large data sets that connect multiple entities. Alternatively, you can use a network graph to resolve any bottleneck around project tasks and subtasks by linking all resources and statuses.
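The bottleneck-spotting idea can be sketched with a tiny adjacency list in Python (the task and resource names here are invented, purely for illustration):

```python
from collections import defaultdict

# Hypothetical project graph: each edge links a task to a shared resource.
edges = [
    ("design", "alice"), ("frontend", "alice"), ("review", "alice"),
    ("backend", "bob"), ("deploy", "carol"),
]

# Count each node's degree (number of edges touching it).
degree = defaultdict(int)
for a, b in edges:
    degree[a] += 1
    degree[b] += 1

# The highest-degree node is a likely bottleneck in the project.
bottleneck = max(degree, key=degree.get)
print(bottleneck, degree[bottleneck])  # → alice 3
```

In a rendered network graph, that high-degree node would visually jump out as the hub with the most edges.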

6. Timeline Charts

Timeline charts help you visualize events or tasks in chronological order. They have wide applications in managing multiple tasks and projects. Stakeholders can effortlessly identify project progress or bottlenecks when you visualize task-related data in timeline charts.

Typically, timeline charts are linear. The key events, tasks, and subtasks show up along the axis. You can also make your timelines more attractive by inserting graphics, thumbnail views of documents, linked resources, milestones, deadlines, etc.

Gantt charts are streamlined timeline charts. You’ll find many free-to-use Gantt chart templates in the Microsoft Excel templates library. You can select any template that matches your project and tweak it a bit to make comprehensive project timelines.
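The basic Gantt layout, tasks on rows with time along the axis, can be sketched in plain Python (a toy text rendering with hypothetical tasks, not an Excel feature):

```python
# Hypothetical tasks: (name, start_day, duration_days)
tasks = [("Plan", 0, 3), ("Build", 3, 5), ("Test", 8, 2), ("Ship", 10, 1)]

def gantt(tasks):
    """Render each task as a row: offset by its start, filled for its duration."""
    rows = []
    for name, start, duration in tasks:
        rows.append(f"{name:<6}|{' ' * start}{'=' * duration}")
    return "\n".join(rows)

print(gantt(tasks))
```

Even in this crude form, overlaps and gaps between tasks become visible at a glance, which is exactly what a Gantt chart is for.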


7. Treemap Chart

The treemap chart comes from treemapping, a well-known information visualization and computing method. It displays large hierarchical data sets as nested rectangles, so your data looks neatly organized into branches and sub-branches.

You need a data set with two quantitative values for each product or service because the rectangles visualize two data values. A treemap chart is ideal for visualizing a massive data set with many attributes on one screen.

The color and dimension of each rectangle are directly related to the underlying values. Therefore, spotting anomalies or patterns becomes truly easy if you create treemaps for large data sets.
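The core idea, that each rectangle’s area tracks its underlying value, can be sketched with a minimal slice-and-dice layout in Python (the simplest treemapping variant, not what any particular charting tool does internally; the values are made up):

```python
def slice_layout(values, x, y, w, h, vertical=True):
    """Slice-and-dice treemapping: split a rectangle into strips
    whose areas are proportional to the input values."""
    total = sum(values)
    rects, offset = [], 0.0
    for v in values:
        share = v / total
        if vertical:                      # split along the x axis
            rects.append((x + offset, y, w * share, h))
            offset += w * share
        else:                             # split along the y axis
            rects.append((x, y + offset, w, h * share))
            offset += h * share
    return rects

rects = slice_layout([6, 3, 1], 0, 0, 100, 50)
areas = [w * h for _, _, w, h in rects]
print(areas)  # areas stay proportional to the input values 6:3:1
```

Production treemaps usually use the "squarified" layout instead, which keeps rectangles closer to square, but the proportionality rule is the same.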

8. Spiral Chart

Spiral charts efficiently visualize large data sets on a single screen. You can show the underlying data as points, columns, or lines. You can also color code the data for easy visualization.

Typically, you need to put the start point of your data at the center of the spiral and move outward as your data grows. Spiral plots are ideal for visualizing the following data: yearly student attendance, employee attendance, products sold, website traffic, etc.
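The outward-growing placement can be sketched with an Archimedean spiral in Python (the radius grows linearly with the angle; the step and turn counts here are arbitrary choices, not a standard):

```python
import math

def spiral_points(n, step=1.0, turns=4):
    """Place n data points on an Archimedean spiral (r = step * theta),
    starting at the centre and moving outward."""
    points = []
    for i in range(n):
        theta = turns * 2 * math.pi * i / n  # angle sweeps `turns` revolutions
        r = step * theta                     # radius grows linearly with angle
        points.append((r * math.cos(theta), r * math.sin(theta)))
    return points

pts = spiral_points(48)
# Distance from the centre increases monotonically as the data grows.
radii = [math.hypot(x, y) for x, y in pts]
print(f"{radii[0]:.1f} -> {radii[-1]:.1f}")  # → 0.0 -> 24.6
```

Each data point would then be drawn at its (x, y) position, colored or sized by its value, so later data naturally sits further from the centre.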

9. Pyramid Chart

Pyramid charts are ideal if you need to visualize data sets in a hierarchy. You’ll see different pyramid charts with a lot of visual effects. However, it’s simply a triangle with different sections separated by lines.

These separated sections of the triangle usually have different heights and widths. The value of the underlying data determines the area of each section. Usually, the widest section at the bottom should contain the most general topic.

The narrowest section at the top should contain the most specific, niche topic derived from the general one at the base. You can create pyramid charts to better understand your business model, products sold, customer segments, etc.

Get Valuable Insights From Your Data

These commonly used data visualization techniques cover a wide variety of data sets. You can easily create any of the above charts or graphs after practicing a few times. If you’re just getting started, these are appropriate techniques for learning data interpretation.

While you’re applying the above techniques in your work or school, also know that Google Data Studio helps you make great data visualizations in a few clicks.

The 8 Best Features of Google Data Studio for Data Analysis and Visualization

Want to impress your audience with actionable data insights and compelling visualizations? Check out these Google Data Studio features.

About The Author

Tamal Das
(211 Articles Published)

Tamal is a freelance writer at MakeUseOf. After gaining substantial experience in technology, finance, and business processes in his previous job in an IT consulting company, he adopted writing as a full-time profession 3 years ago. While not writing about productivity and the latest tech news, he loves to play Splinter Cell and binge-watch Netflix/ Prime Video.


MUO – Feed

Gun Review: Diamondback Sidekick .22 Revolver



The last few years have seen the introduction of a number of interesting .22 caliber revolvers. Among them are the Ruger Wrangler and the Heritage Barkeep. These affordable wheel guns are well suited to general carry and recreational use.


The latest entrant in the affordable .22 revolver race is slightly more expensive, but this one is a double action revolver with an interchangeable swing out cylinder.

The Sidekick’s cylinder release is incorporated into the ejection rod.

The Diamondback Sidekick was announced in August. It appears to be a clone of the High Standard Double Nine. It will probably also remind many of the old H&R 929 Sidekick.

When I was growing up, it seemed almost everyone, including my grandfather, owned a Double Nine. When you wanted protection, but didn’t want a center fire with its greater expense and recoil, the High Standard Double Nine was a popular choice.

The Diamondback Sidekick (top) and the Ruger Single Six

Designed to look like a traditional single action or cowboy gun with its plow handle grip and large hammer spur, the Double Nine was a double action revolver with a swing out cylinder. It was immensely popular and is missed by old-time shooters.


Today we have an alternative that may be a better gun. Modern manufacturing has given us an improved .22 revolver with much to recommend it.


The Diamondback Sidekick may be a clone, but it stands strong on its own merits. The revolver features a swing out cylinder with nine chambers. The cylinder release doubles as the ejector rod. Pull the ejector rod forward to release the cylinder. Load, close the cylinder, and you are ready to fire.


The double action revolver may be fired double action with a simple pull of the trigger or in single action by cocking the hammer and applying a light trigger press.

The Sidekick is smooth enough in double action for an economy revolver. The best means of managing the double action pull is to stage the trigger: press until the hammer almost falls, pause to get a solid sight picture, and then fire.

The single action trigger pull breaks at a very clean, crisp four pounds. That invites single action shooting, and most shots fired with a Sidekick will probably be during plinking or informal target practice. The double action trigger is pleasant enough to make for good double action training.


The traditional plow handled grip with GFN checkered scales fits most hands well. There is no step in the handle required to stabilize the hand for double action fire with the .22’s modest recoil. The hammer spur allows for easy thumb cocking.

The barrel is 4.5 inches long, but expect other options to be offered down the road. The sights are the usual post front blade and grooved rear sight, as you’d expect on a nine-shooter like this. The sights are well regulated for a six o’clock hold at ten yards. The finish is Cerakote.

A great option the Sidekick gives you is the use of interchangeable cylinders, one in .22 long rifle and one in .22 Magnum. Both will ship with the revolver. This isn’t something that’s been offered often with double action revolvers as fitting the crane is more difficult than simply using a base pin in a single action revolver.

The bolt holding the cylinder crane is spring-loaded. I used an old pen shaft to depress the latch and pull the cylinder away. Depress the latch again and snap the other cylinder in place. The system is simple, and after changing the cylinders, headspace remains tight.

A simple groove in the top strap and a post front sight may not make for gilt-edged accuracy, but the sights are properly regulated for 40-grain loads. I used a mix of various makers’ 40-grain RNL loads to test the wheel gun. Five Remington Thunderbolts produced a 2.0-inch group at 15 yards. The Sidekick is more than accurate enough for informal target practice, plinking, and small game hunting.

The .22 Magnum cylinder offers a crackerjack option for larger pests. I won’t get into the .22-Magnum-for-personal-defense debate, but if you want a rimfire for easy critter control at a relatively low expense, the Sidekick is as good as any.

A natural comparison most will make here is the Ruger Wrangler, but the comparison isn’t really fair. The Wrangler and the Sidekick are about equally accurate. The Ruger, however, doesn’t have a .22 Magnum option. It’s also a single action gun with a six shot cylinder that loads via a loading gate.

The question then becomes: are those differences worth the extra outlay for the Diamondback revolver? I would gladly pay the difference for the Sidekick. They won’t hit retailers until next week, but I think Diamondback has a winner in this revolver.

Caliber: .22 LR/.22 Mag convertible
Action: Single/Double
Grips: Checkered, glass-filled polymer
Capacity: 9 rounds
Front Sight: Blade
Rear Sight: Integral
Overall Barrel Length: 4.5 inches
Overall Length: 9.875 inches
Frame & Handle Finish: Black Cerakote
Overall Weight: 32.5 ounces
MSRP: $320 (expect about $290 retail)

Ratings (out of five stars):

Ergonomics * * * * *
The heft and balance are excellent. This classic revolver handles well and the grip is comfortable. There’s a reason the Colt SAA has been so popular for the last century and a half.

Accuracy * * * * *
For the price and compared to the Ruger Wrangler and Heritage Rough Rider the Sidekick is quite accurate. Soda cans and milk jugs should be afraid. Very afraid.

Reliability * * * * *
The Sidekick never failed to crack off 240 .22 Long Rifle cartridges and 27 .22 Magnum rounds. The only reliability problem you may have with this gun will be due to the rimfire ammo that goes into it.

Value * * * * ½
There are less expensive similar guns that are also fine for plinking and taking small game. But they don’t have all the features of the Sidekick. You pays your money and takes your choice.

Overall * * * * *
I love the Sidekick. It’s a fun gun that will take game and guard the homestead quite well and it’s very high on the fun-to-shoot scale.


The Truth About Guns

12 must-see talks if you want to become a better Laravel developer


In my opinion, at least. 🙂

As a Laravel developer, I’ve spent a lot of time learning from some of the best Laravel developers. Do names such as Adam Wathan, Colin DeCarlo, and Jason McCreary ring a bell? They should. If they don’t, here’s a quick fix: my list of 12 fantastic talks you can learn a ton from.

Testing

Test-Driven Laravel

by Adam Wathan

An excellent intro to TDD. TDD seems easy until you need to test DB queries, generate PDFs, deal with APIs, and so on. Adam teaches you how to do all of that. Even better and more in-depth is his course. A must-watch for every developer who doesn’t know where to start practising TDD.

Lies You’ve Been Told About Testing

by Adam Wathan

Yet another great talk from Adam about testing. Stop worrying about architecture; start emphasising the details.

Code refactoring, patterns

Patterns That Pay Off

by Matt Stauffer

Matt talks about different patterns that we don’t think about when building an application. Then, he dives into picking better code patterns by reviewing code bases.

Writing code that lasts

by Raphael Dohms

Writing code that survives the test of time and self-judgment is a matter of clarity and simplicity. This is a talk about growing, learning and improving our code with calisthenics, readability and good design.

Everything I Ever Needed To Know About Web Dev, I Learned From My Twitter Timeline

by Colin DeCarlo

A somewhat lengthy title, but still worth watching. Colin DeCarlo shares ideas for cleaning up the code in your application, gleaned from “fire tweets” on Twitter.

Cruddy by Design

by Adam Wathan

There are never enough controllers. As @dhh tweeted: more controllers doing less work obviates the need for any other fancy patterns. In this talk, Adam shows how to split the code from one controller into multiple ones.

Design Patterns with Laravel

by Colin DeCarlo

Colin talks entertainingly about three patterns: adapter, strategy, and factory.

Resisting Complexity

by Adam Wathan

Why is it OK that a User can be saved? Because, according to Adam, methods are affordances.
Furthermore, don’t be afraid of facades, he says. To be fair, I think that statement was more relevant in 2018 than today; facades are well accepted nowadays.

LaraconUs 2018

by Colin DeCarlo

Having the correct tools is not the same as using the tools correctly. Learn how to use Laravel’s tools correctly, with some more tips on making your code better and more readable.

Practicing YAGNI

by Jason McCreary

This talk is about how to avoid overengineering and why “you ain’t gonna need it” is good advice.
Jason also has a BaseCode course about code refactoring. Do check it out.
It’s a field guide of real-world practices to help you write code that’s less complex and more readable.

Laracon US

by Sandi Metz

She is one of the gurus of the Rails world. Different language, same rules. Do you know what a code smell is?
Really? Name five. Spoiler: hardly anyone can.

MySQL

Eloquent Performance Patterns

by Jonathan Reinink

If MySQL is not your thing, you can still learn a lot here. Even better, Jonathan shows how to write more complex, optimised queries with Laravel Eloquent. Finally, if you like the talk, there’s an entire course dedicated to it.

Happy watching & learning.

Disclaimer: nobody paid me to promote those courses here. I bought and watched all of them. And I learned a lot. I generally think those courses are worth paying for.

Laravel News Links

‘This is a Left-Wing Cult’: Joe Rogan UNLOADS on Dishonest Media Coverage of Kyle Rittenhouse Trial


Day Two of Rittenhouse jury deliberation has begun. As we wait, we reflect on what garbage human beings the mainstream media has been throughout this entire thing. The main reason he’s on trial is because of lies and untruths the media has spread. And this is just one man’s opinion, but I’m sure being attacked by media is on the mind of at least a few jurors if they’re thinking about voting to acquit Rittenhouse. Joe Rogan is someone who knows firsthand how much the media lies and swears by it. Once people discover this rant by Dr. Joe, MD, the media will proclaim "who, us?" Brian Stelter is probably crying into his breakfast cheesecake as we speak.

"This information is not based on reality. This is a left-wing cult. They’re pumping stuff out and then they are confirming this belief. They are all getting together, and they are ignoring contrary evidence. They are ignoring any narrative that challenges their belief about what happened, and they are not looking at it realistically. They are only looking at it like you would if you were in a f*cking cult."

As an aside, what a cast of characters! Drew Hernandez, Tim Pool, Blaire White, AND Alex Jones.

More people need to be exposed to who the media is. They won’t just go after you if you hold a different opinion than them or if they think they can use you to advance their leftist narrative. They’ll go after you if they even assume you have a different opinion. Rittenhouse is only one of the most extreme examples of it.


SNL Propaganda Isn’t Even Trying Anymore | Louder With Crowder


Louder With Crowder

How Triggers May Significantly Affect the Amount of Memory Allocated to Your MySQL Server


MySQL stores active table descriptors in a special memory buffer called the table open cache. This buffer is controlled by two configuration variables: table_open_cache, which holds the maximum number of table descriptors MySQL should store in the cache, and table_open_cache_instances, which stores the number of table cache instances. With the default values of table_open_cache=4000 and table_open_cache_instances=16, MySQL creates 16 independent memory buffers that store 250 table descriptors each. These table cache instances can be accessed concurrently, allowing DML statements to use cached table descriptors without locking each other.

If you use only plain tables, the table cache does not require a lot of memory because descriptors are lightweight, and even if you significantly increase table_open_cache, the required amount stays modest. For example, 4,000 tables will take up to 4,000 x 4 KiB = 16MB in the cache, and 100,000 tables up to about 390MB, which is still not huge for that number of tables.
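You can sanity-check those figures with a quick back-of-envelope calculation (assuming roughly 4 KiB per plain table descriptor, as the article does; real descriptor sizes vary):

```python
DESCRIPTOR_SIZE = 4 * 1024  # ~4 KiB per plain table descriptor (rough estimate)

def cache_memory_mib(table_open_cache):
    """Upper-bound memory for the table open cache, in MiB."""
    return table_open_cache * DESCRIPTOR_SIZE / 2**20

print(cache_memory_mib(4_000))    # → 15.625   (~16 MB, as stated above)
print(cache_memory_mib(100_000))  # → 390.625  (~390 MB)
```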

However, if your tables have triggers, it changes the game.

For the test I created a table with a single column and inserted a row into it:

mysql> CREATE TABLE tc_test( f1 INT);
Query OK, 0 rows affected (0,03 sec)

mysql> INSERT INTO tc_test VALUES(1);
Query OK, 1 row affected (0,01 sec)

Then I flushed the table cache and measured how much memory it uses:

mysql> FLUSH TABLES;
Query OK, 0 rows affected (0,02 sec)

mysql> SHOW GLOBAL STATUS LIKE 'Open_tables';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| Open_tables   |     2 |
+---------------+-------+
1 row in set (0,00 sec)

mysql> SELECT current_alloc FROM sys.memory_global_by_current_bytes 
    -> WHERE event_name='memory/sql/TABLE_SHARE::mem_root';
+---------------+
| current_alloc |
+---------------+
|     60.50 KiB |
+---------------+
1 row in set (0,00 sec)

mysql> SELECT current_alloc FROM sys.memory_global_by_current_bytes 
    -> WHERE event_name='memory/sql/sp_head::main_mem_root';
Empty set (0,00 sec)

Then I accessed the table to put it into the cache.

$ for i in `seq 1 1 16`; do mysql test -e "SELECT * FROM tc_test"; done
...

mysql> SHOW GLOBAL STATUS LIKE 'Open_tables';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| Open_tables   |    20 |
+---------------+-------+
1 row in set (0,00 sec)

mysql> SELECT current_alloc FROM sys.memory_global_by_current_bytes 
    -> WHERE event_name='memory/sql/TABLE_SHARE::mem_root';
+---------------+
| current_alloc |
+---------------+
|     75.17 KiB |
+---------------+
1 row in set (0,00 sec)

mysql> SELECT current_alloc FROM sys.memory_global_by_current_bytes 
    -> WHERE event_name='memory/sql/sp_head::main_mem_root';
Empty set (0,01 sec)

16 table descriptors took less than 16 KiB in the cache.

Now let’s try to create some triggers on this table and see if it changes anything.

mysql> CREATE TRIGGER tc_test_ai AFTER INSERT ON tc_test FOR EACH ROW 
    -> BEGIN 
    ->   SIGNAL SQLSTATE '45000' SET message_text='Very long string. 
    ->     MySQL stores table descriptors in a special memory buffer, called table open cache. 
    ->     This buffer could be controlled by configuration variables table_open_cache that 
    ->     holds how many table descriptors MySQL should store in the cache and table_open_cache_instances 
    ->     that stores the number of the table cache instances. So with default values of table_open_cache=4000 
    ->     and table_open_cache_instances=16, you will have 16 independent memory buffers that will store 250 
    ->     table descriptors each. These table cache instances could be accessed concurrently, allowing DML 
    ->     to use cached table descriptors without locking each other. If you use only tables, the table cache 
    ->     does not require a lot of memory, because descriptors are lightweight, and even if you significantly 
    ->     increased the value of table_open_cache, it would not be so high. For example, 4000 tables will take 
    ->     up to 4000 x 4K = 16MB in the cache, 100.000 tables will take up to 390MB that is also not quite a huge 
    ->     number for this number of open tables. However, if your tables have triggers, it changes the game.'; 
    -> END|

Then let’s flush the table cache and test memory usage again.

Initial state:

mysql> FLUSH TABLES;
Query OK, 0 rows affected (0,00 sec)

mysql> SHOW GLOBAL STATUS LIKE 'Open_tables';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| Open_tables   |     2 |
+---------------+-------+
1 row in set (0,00 sec)

mysql> SELECT current_alloc FROM sys.memory_global_by_current_bytes 
    -> WHERE event_name='memory/sql/TABLE_SHARE::mem_root';
+---------------+
| current_alloc |
+---------------+
|     60.50 KiB |
+---------------+
1 row in set (0,00 sec)

mysql> SELECT current_alloc FROM sys.memory_global_by_current_bytes 
    -> WHERE event_name='memory/sql/sp_head::main_mem_root';
Empty set (0,00 sec)

After I put the tables into the cache:

$ for i in `seq 1 1 16`; do mysql test -e "SELECT * FROM tc_test"; done
...

mysql> SHOW GLOBAL STATUS LIKE 'Open_tables';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| Open_tables   |    20 |
+---------------+-------+
1 row in set (0,00 sec)

mysql> SELECT current_alloc FROM sys.memory_global_by_current_bytes 
    -> WHERE event_name='memory/sql/TABLE_SHARE::mem_root';
+---------------+
| current_alloc |
+---------------+
|     75.17 KiB |
+---------------+
1 row in set (0,00 sec)

mysql> SELECT current_alloc FROM sys.memory_global_by_current_bytes 
    -> WHERE event_name='memory/sql/sp_head::main_mem_root';
+---------------+
| current_alloc |
+---------------+
|    611.12 KiB |
+---------------+
1 row in set (0,00 sec)

As a result, in addition to 75.17 KiB in the table cache, 611.12 KiB is occupied by the memory/sql/sp_head::main_mem_root. That is the "Mem root for parsing and representation of stored programs."

This means that each time the table is put into the table cache, all of its associated triggers are loaded into a memory buffer that stores their parsed definitions.
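A quick back-of-envelope calculation based on the measured numbers shows how this scales (rough estimates derived from the figures above, not exact MySQL accounting):

```python
# Measured above: 16 cached descriptors of one single-trigger table
# pushed memory/sql/sp_head::main_mem_root to 611.12 KiB.
sp_head_kib = 611.12
cached_descriptors = 16

per_descriptor_kib = sp_head_kib / cached_descriptors
print(round(per_descriptor_kib, 1))  # → 38.2 KiB of trigger memory per cached descriptor

# If every slot of a default-sized cache held such a table
# (a worst case, not a typical workload):
table_open_cache = 4000
print(round(per_descriptor_kib * table_open_cache / 1024, 1))  # → 149.2 MiB
```

So even one long trigger per table can turn an otherwise small cache into a triple-digit-MiB memory consumer.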

The FLUSH TABLES command clears the stored-program cache along with the table cache:

mysql> FLUSH TABLES;
Query OK, 0 rows affected (0,01 sec)

mysql> SELECT current_alloc FROM sys.memory_global_by_current_bytes 
    -> WHERE event_name='memory/sql/sp_head::main_mem_root';
Empty set (0,00 sec)

The more triggers a table has, the more memory is used when it enters the cache.

For example, if we create five more triggers and repeat our test we will see the following numbers:

mysql> \d |
mysql> CREATE TRIGGER tc_test_bi BEFORE INSERT ON tc_test FOR EACH ROW
    -> BEGIN
    ->   SIGNAL SQLSTATE '45000' SET message_text='Very long string.
    ->     MySQL stores table descriptors in a special memory buffer, called table open cache.
    ->     This buffer could be controlled by configuration variables table_open_cache that
    ->     holds how many table descriptors MySQL should store in the cache and table_open_cache_instances
    ->     that stores the number of the table cache instances. So with default values of table_open_cache=4000
    ->     and table_open_cache_instances=16, you will have 16 independent memory buffers that will store 250
    ->     table descriptors each. These table cache instances could be accessed concurrently, allowing DML
    ->     to use cached table descriptors without locking each other. If you use only tables, the table cache
    ->     does not require a lot of memory, because descriptors are lightweight, and even if you significantly
    ->     increased the value of table_open_cache, it would not be so high. For example, 4000 tables will take
    ->     up to 4000 x 4K = 16MB in the cache, 100.000 tables will take up to 390MB that is also not quite a huge
    ->     number for this number of open tables. However, if your tables have triggers, it changes the game.';
    -> END|
Query OK, 0 rows affected (0,01 sec)

mysql> CREATE TRIGGER tc_test_bu BEFORE UPDATE ON tc_test FOR EACH ROW
    -> BEGIN
    ->   SIGNAL SQLSTATE '45000' SET message_text='Very long string.
    ->     MySQL stores table descriptors in a special memory buffer, called table open cache.
    ->     This buffer could be controlled by configuration variables table_open_cache that
    ->     holds how many table descriptors MySQL should store in the cache and table_open_cache_instances
    ->     that stores the number of the table cache instances. So with default values of table_open_cache=4000
    ->     and table_open_cache_instances=16, you will have 16 independent memory buffers that will store 250
    ->     table descriptors each. These table cache instances could be accessed concurrently, allowing DML
    ->     to use cached table descriptors without locking each other. If you use only tables, the table cache
    ->     does not require a lot of memory, because descriptors are lightweight, and even if you significantly
    ->     increased the value of table_open_cache, it would not be so high. For example, 4000 tables will take
    ->     up to 4000 x 4K = 16MB in the cache, 100.000 tables will take up to 390MB that is also not quite a huge
    ->     number for this number of open tables. However, if your tables have triggers, it changes the game.';
    -> END|
Query OK, 0 rows affected (0,02 sec)

mysql> CREATE TRIGGER tc_test_bd BEFORE DELETE ON tc_test FOR EACH ROW
    -> BEGIN
    ->   SIGNAL SQLSTATE '45000' SET message_text='Very long string.
    ->     MySQL stores table descriptors in a special memory buffer, called table open cache.
    ->     This buffer could be controlled by configuration variables table_open_cache that
    ->     holds how many table descriptors MySQL should store in the cache and table_open_cache_instances
    ->     that stores the number of the table cache instances. So with default values of table_open_cache=4000
    ->     and table_open_cache_instances=16, you will have 16 independent memory buffers that will store 250
    ->     table descriptors each. These table cache instances could be accessed concurrently, allowing DML
    ->     to use cached table descriptors without locking each other. If you use only tables, the table cache
    ->     does not require a lot of memory, because descriptors are lightweight, and even if you significantly
    ->     increased the value of table_open_cache, it would not be so high. For example, 4000 tables will take
    ->     up to 4000 x 4K = 16MB in the cache, 100.000 tables will take up to 390MB that is also not quite a huge
    ->     number for this number of open tables. However, if your tables have triggers, it changes the game.';
    -> END|
Query OK, 0 rows affected (0,01 sec)

mysql> CREATE TRIGGER tc_test_au AFTER UPDATE ON tc_test FOR EACH ROW
    -> BEGIN
    ->   SIGNAL SQLSTATE '45000' SET message_text='Very long string.
    ->     MySQL stores table descriptors in a special memory buffer, called table open cache.
    ->     This buffer could be controlled by configuration variables table_open_cache that
    ->     holds how many table descriptors MySQL should store in the cache and table_open_cache_instances
    ->     that stores the number of the table cache instances. So with default values of table_open_cache=4000
    ->     and table_open_cache_instances=16, you will have 16 independent memory buffers that will store 250
    ->     table descriptors each. These table cache instances could be accessed concurrently, allowing DML
    ->     to use cached table descriptors without locking each other. If you use only tables, the table cache
    ->     does not require a lot of memory, because descriptors are lightweight, and even if you significantly
    ->     increased the value of table_open_cache, it would not be so high. For example, 4000 tables will take
    ->     up to 4000 x 4K = 16MB in the cache, 100.000 tables will take up to 390MB that is also not quite a huge
    ->     number for this number of open tables. However, if your tables have triggers, it changes the game.';
    -> END|
Query OK, 0 rows affected (0,01 sec)

mysql> CREATE TRIGGER tc_test_ad AFTER DELETE ON tc_test FOR EACH ROW
    -> BEGIN
    ->   SIGNAL SQLSTATE '45000' SET message_text='Very long string.
    ->     MySQL stores table descriptors in a special memory buffer, called table open cache.
    ->     This buffer could be controlled by configuration variables table_open_cache that
    ->     holds how many table descriptors MySQL should store in the cache and table_open_cache_instances
    ->     that stores the number of the table cache instances. So with default values of table_open_cache=4000
    ->     and table_open_cache_instances=16, you will have 16 independent memory buffers that will store 250
    ->     table descriptors each. These table cache instances could be accessed concurrently, allowing DML
    ->     to use cached table descriptors without locking each other. If you use only tables, the table cache
    ->     does not require a lot of memory, because descriptors are lightweight, and even if you significantly
    ->     increased the value of table_open_cache, it would not be so high. For example, 4000 tables will take
    ->     up to 4000 x 4K = 16MB in the cache, 100.000 tables will take up to 390MB that is also not quite a huge
    ->     number for this number of open tables. However, if your tables have triggers, it changes the game.';
    -> END|
Query OK, 0 rows affected (0,01 sec)

mysql> SHOW GLOBAL STATUS LIKE 'Open_tables';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| Open_tables   |    35 |
+---------------+-------+
1 row in set (0,00 sec)

mysql> SELECT current_alloc FROM sys.memory_global_by_current_bytes 
    -> WHERE event_name='memory/sql/TABLE_SHARE::mem_root';
+---------------+
| current_alloc |
+---------------+
|    446.23 KiB |
+---------------+
1 row in set (0,00 sec)

mysql> SELECT current_alloc FROM sys.memory_global_by_current_bytes 
    -> WHERE event_name='memory/sql/sp_head::main_mem_root';
+---------------+
| current_alloc |
+---------------+
|      3.58 MiB |
+---------------+
1 row in set (0,00 sec)

The numbers for the event memory/sql/sp_head::main_mem_root differ by a factor of six:

mysql> SELECT 3.58*1024/611.12;
+------------------+
| 3.58*1024/611.12 |
+------------------+
|         5.998691 |
+------------------+
1 row in set (0,00 sec)

Note that the length of the trigger definition affects the amount of memory allocated under memory/sql/sp_head::main_mem_root.

For example, if we define the triggers as follows:

mysql> DROP TABLE tc_test;
Query OK, 0 rows affected (0,02 sec)

mysql> CREATE TABLE tc_test( f1 INT);
Query OK, 0 rows affected (0,03 sec)

mysql> INSERT INTO tc_test VALUES(1);
Query OK, 1 row affected (0,01 sec)

mysql> \d |
mysql> CREATE TRIGGER tc_test_ai AFTER INSERT ON tc_test FOR EACH ROW BEGIN SIGNAL SQLSTATE '45000'
SET message_text='Short string';end |
Query OK, 0 rows affected (0,01 sec)

mysql> CREATE TRIGGER tc_test_au AFTER UPDATE ON tc_test FOR EACH ROW BEGIN SIGNAL SQLSTATE '45000'
SET message_text='Short string';end |
Query OK, 0 rows affected (0,04 sec)

mysql> CREATE TRIGGER tc_test_ad AFTER DELETE ON tc_test FOR EACH ROW BEGIN SIGNAL SQLSTATE '45000'
SET message_text='Short string';end |
Query OK, 0 rows affected (0,01 sec)

mysql> CREATE TRIGGER tc_test_bi BEFORE INSERT ON tc_test FOR EACH ROW BEGIN SIGNAL SQLSTATE '45000'
SET message_text='Short string';end |
Query OK, 0 rows affected (0,01 sec)

mysql> CREATE TRIGGER tc_test_bu BEFORE UPDATE ON tc_test FOR EACH ROW BEGIN SIGNAL SQLSTATE '45000'
SET message_text='Short string';end |
Query OK, 0 rows affected (0,02 sec)

mysql> CREATE TRIGGER tc_test_bd BEFORE DELETE ON tc_test FOR EACH ROW BEGIN SIGNAL SQLSTATE '45000'
SET message_text='Short string';end |
Query OK, 0 rows affected (0,01 sec)

mysql> \d ;
mysql> FLUSH TABLES;
Query OK, 0 rows affected (0,00 sec)

mysql> SHOW GLOBAL STATUS LIKE 'Open_tables';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| Open_tables   |     2 |
+---------------+-------+
1 row in set (0,01 sec)

mysql> SELECT current_alloc FROM sys.memory_global_by_current_bytes 
    -> WHERE event_name='memory/sql/TABLE_SHARE::mem_root';
+---------------+
| current_alloc |
+---------------+
|     60.50 KiB |
+---------------+
1 row in set (0,00 sec)

mysql> SELECT current_alloc FROM sys.memory_global_by_current_bytes 
    -> WHERE event_name='memory/sql/sp_head::main_mem_root';
Empty set (0,00 sec)

$ for i in `seq 1 1 16`; do mysql test -e "select * from tc_test"; done
...

mysql> SHOW GLOBAL STATUS LIKE 'Open_tables';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| Open_tables   |    35 |
+---------------+-------+
1 row in set (0,00 sec)

mysql> SELECT current_alloc FROM sys.memory_global_by_current_bytes 
    -> WHERE event_name='memory/sql/TABLE_SHARE::mem_root';
+---------------+
| current_alloc |
+---------------+
|    446.23 KiB |
+---------------+
1 row in set (0,00 sec)

mysql> SELECT current_alloc FROM sys.memory_global_by_current_bytes 
    -> WHERE event_name='memory/sql/sp_head::main_mem_root';
+---------------+
| current_alloc |
+---------------+
|      1.89 MiB |
+---------------+
1 row in set (0,00 sec)

The resulting amount of memory is 1.89 MiB instead of 3.58 MiB for the longer trigger definition.
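The effect of the definition length is plain arithmetic on the two measurements. A minimal sketch, assuming the same set of six triggers was defined in both runs (as the transcripts above show):

```python
# Compare sp_head::main_mem_root usage between the two runs
# (numbers taken from the SELECT output above).
long_defs_mib = 3.58   # six triggers with the long message text
short_defs_mib = 1.89  # six triggers with the short message text

ratio = long_defs_mib / short_defs_mib
print(f"longer definitions use ~{ratio:.1f}x the memory")  # ~1.9x
```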

Note that having a single table cache instance requires less memory to store trigger definitions: e.g., for our six small triggers, it takes 121.12 KiB instead of 1.89 MiB:

mysql> SHOW GLOBAL VARIABLES LIKE 'table_open_cache_instances';
+----------------------------+-------+
| Variable_name              | Value |
+----------------------------+-------+
| table_open_cache_instances |     1 |
+----------------------------+-------+
1 row in set (0,00 sec)

mysql> FLUSH TABLES;
Query OK, 0 rows affected (0,00 sec)

mysql> SHOW GLOBAL STATUS LIKE 'Open_tables';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| Open_tables   |     2 |
+---------------+-------+
1 row in set (0,00 sec)

mysql> SELECT current_alloc FROM sys.memory_global_by_current_bytes 
    -> WHERE event_name='memory/sql/sp_head::main_mem_root';
Empty set (0,00 sec)

$ for i in `seq 1 1 16`; do mysql test -e "select * from tc_test"; done
...

mysql> SELECT current_alloc FROM sys.memory_global_by_current_bytes 
    -> WHERE event_name='memory/sql/sp_head::main_mem_root';
+---------------+
| current_alloc |
+---------------+
|    121.12 KiB |
+---------------+
1 row in set (0,00 sec)
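The two measurements are consistent with each other: the single-instance allocation, multiplied by the default 16 table cache instances, roughly reproduces the earlier total. A quick cross-check on the numbers above:

```python
# Cross-check: the per-instance sp_head allocation (measured with
# table_open_cache_instances=1) times 16 instances should roughly
# match the 1.89 MiB measured with the default 16 instances.
per_instance_kib = 121.12
instances = 16
total_mib = per_instance_kib * instances / 1024
print(f"{total_mib:.2f} MiB")  # ≈ 1.89 MiB
```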

Conclusion

When you access tables that have associated triggers, their definitions are put into the stored programs cache even when the triggers are not fired. This behavior was reported as MySQL Bug #86821 and closed as “Not a Bug” by Oracle. Indeed, it is not a bug but a consequence of the design of the table cache and the stored routines cache. Still, it is good to be prepared, so you are not surprised when you run out of memory faster than you expect, especially if you have many triggers with long definitions.
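For capacity planning, the measurements above suggest a rough worst-case estimate. The sketch below is a back-of-the-envelope calculation, not an official formula: the function name is made up, and the default per-trigger cost of ~20 KiB is derived from the short-trigger run (1.89 MiB for 6 triggers across 16 instances); triggers with long definitions cost considerably more.

```python
# Back-of-the-envelope estimator for trigger-related sp_head cache memory.
# All inputs are assumptions to replace with values from your own server;
# the per-trigger cost grows with the length of the trigger definition.
def trigger_cache_memory_mib(tables, triggers_per_table,
                             cache_instances=16, kib_per_trigger=20):
    """Worst case: every table's triggers cached in every cache instance."""
    return tables * triggers_per_table * cache_instances * kib_per_trigger / 1024

# e.g. 1000 tables, 3 short triggers each, default 16 cache instances:
print(f"{trigger_cache_memory_mib(1000, 3):.0f} MiB")
```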

Percona Database Performance Blog

BREAKING: Mark McCloskey Argues with BLM Protestors While Rittenhouse Jury Deliberates [VIDEO]

https://cdn0.thetruthaboutguns.com/wp-content/uploads/2021/11/2021-11-16_17-02-30.png

Mark McCloskey gets into it verbally with a protestor outside the courthouse in Kenosha. (Photo credit: Fox News)


While jurors deliberate inside the courthouse in Kenosha, Wisconsin in the Kyle Rittenhouse case – the outcome of which promises to be important for gun owners everywhere – Mark McCloskey is outside arguing with protestors. Why? Because there’s never a dull moment in what has been a circus of a two-week trial.

Fox News reports:

Mark McCloskey, the St. Louis lawyer who made national headlines last year when he carried a gun on his property near a social justice protest in his neighborhood, argued with a protester outside the Kenosha County Courthouse on Tuesday afternoon. 

“It really hurts me that you would have that much hatred,” the protester told McCloskey. 

“There is absolutely no hatred involved in what I did,” McCloskey responded. “They came in, storming through my gate, broke down my gate, stormed toward my house, and I was afraid for my life.”

If the jurors can’t reach a decision in the case, Judge Bruce Schroeder will be polling them to find out if they want to continue deliberating. Is McCloskey helping by grabbing another fifteen minutes of fame – or would that be infamy? – on the courthouse steps? Seems unlikely.

Here’s part of the exchange between McCloskey and a protestor.


The Truth About Guns