Donald The Orange Returns Triumphantly As Donald The White

Donald The Orange Returns Triumphantly As Donald The White

https://ift.tt/34rk59J


WASHINGTON, D.C.—While battling the darkest monster from the pit of hell, known as “COVID,” Donald the Orange fell to his doom several days ago, sacrificing himself to save America from the deadly demon.

So Americans were ecstatic to learn that Donald the Orange had returned in a new, better form, now known as Donald the White. A brilliant white light shone from Walter Reed Medical Center as Donald the White emerged just in time to save America from COVID, Antifa, and the Deep State.

“I come back to you now at the turn of the tide!” he cried as he rode triumphantly out in Shadowfax, cutting right through the ravenous hordes of Antifa counterprotesters blocking the way. 

“Donald! Donald the Orange!” cried his supporters outside Walter Reed Medical Center.

“Yes…” he said as he sat in the back of the presidential limousine, codenamed “Shadowfax.” “Yes… Donald the Orange… that is what they used to call me.” The newly revived president has a newfound passion for life and even more energy than before, though sources say he’s also taken up smoking.

Donald the White says he will use his newfound powers to patrol the U.S.-Mexico border, shouting at would-be immigrants, “You shall not pass!”


Babylon Bee subscriber Mitchell Moody contributed to this report. If you want to get involved with the staff writers at The Babylon Bee, check out our membership options here!



fun

via The Babylon Bee https://babylonbee.com

October 5, 2020 at 04:57PM

Dangy Dagger 9MM Round Defeats Level 3A Body Armor

Dangy Dagger 9MM Round Defeats Level 3A Body Armor

https://ift.tt/3ljwFOW

U.S.A.-(Ammoland.com)- Austin Jones and Atlas Arms announced that they have successfully defeated level 3A body armor with their Dangy Dagger round.

The video released by the non-profit research group shows the founder of Atlas Arms firing their 9mm round at a level 3A soft body armor panel made by AR500. The round blew through the panel and continued into the backstop of the range. What makes the round so revolutionary is that it can penetrate armor without violating the federal armor-piercing ammunition ban.

Much like Congress passing the Undetectable Firearms Act of 1988 because of a Hollywood-driven misconception about Glocks, it also acted on bad information to ban so-called “armor-piercing” bullets. In 1982, NBC ran a story on Teflon-coated bullets. Various gun control groups called these rounds “cop killers.” The report claimed that the bullets could defeat police body armor. The news organization’s claims were fake.

Hollywood helped spread the anti-gun propaganda about the bullets in movies such as Ronin and Lethal Weapon. Companies using Teflon to coat their bullets was just a marketing scheme. In actuality, the coating did nothing to increase the penetration of the bullets. Congress took the word of the anti-gun groups, the fake news, and Hollywood on the ammunition’s deadliness and enacted a ban.

Austin Jones decided that he needed to use the skills he developed in making durable inflation sections for the International Space Station to defeat an unjust gun law. The engineer and his team decided to take a multifaceted approach to defeat the ban.

One of the best predictors of whether a bullet can penetrate armor is its velocity. The law doesn’t restrict how fast a bullet can move, so the team shed some weight from the bullet and found the optimum powder charge for the round. They developed a 9mm round that shooters can fire at 2200 ft/sec.

The ban also restricts jacketed bullets, so Atlas Arms made its rounds hollow points with a spike in the center. The hollow point causes a massive cavity when it hits soft tissue, while the center spike is what penetrates the armor.

Dangy Dagger 9MM Round

Atlas Arms uses a yet-to-be-revealed material for the spike. By keeping it under wraps, the organization hopes to improve its design to defeat even tougher armor and to avoid legal action from federal and state governments. Even if the government does ban the ammunition material, the organization has contingency plans.

The round is not cheap. Jones believes the round will run customers between $2 and $4. Atlas Arms will offer a more affordable alternative that sacrifices some penetration. Since customers do have a legitimate concern about over-penetration, these cheaper alternatives might be a happy medium.

Atlas Arms plans to release the bullet’s plans so anyone can make rounds utilizing the Dangy Dagger design. It also plans to release cutting code for the Ghost Gunner 3 to make the rounds’ casings. The company is also filing a patent on the design so anyone will be able to look up how to make the rounds even if individual states try to make sharing the plans illegal; that would make the federal government the one sharing the designs online.

Readers can find out more about the Dangy Dagger at https://www.atlasarms.org.


About John Crump

John is also an Amazon bestselling author and investigative firearms journalist. He lives in Northern Virginia with his wife and two sons. He can be followed on Twitter at @crumpyss, or at www.crumpy.com.


The post Dangy Dagger 9MM Round Defeats Level 3A Body Armor appeared first on AmmoLand.com.

guns

via AmmoLand.com https://ift.tt/2okaFKE

October 5, 2020 at 06:00PM

The first ‘Monster Hunter’ movie teaser sets up an enormous battle

The first ‘Monster Hunter’ movie teaser sets up an enormous battle

https://ift.tt/2F0Xok4

After several years of development, you’re finally getting your first glimpse of the Monster Hunter movie in action. Sony and IGN have shared a teaser trailer (via Polygon) for Paul W.S. Anderson’s Monster Hunter production that gives a hint of what to expect. It’s very brief, but promises a climactic fight: Milla Jovovich (as Artemis) and T.I. (Link) are about to face off against a gigantic Black Diablos, which catches them by surprise as it erupts from the desert sand.

There aren’t any of the game series’ signature swords in the teaser, although promo photos (such as the one above) make clear they’ll show up at some stage.

The movie also stars action legend Tony Jaa, Ron Perlman, Diego Boneta and Meagan Good. It’s slated to debut “only in theaters” this December, although we wouldn’t be surprised if the pandemic changes the timing and availability like it has with other movies.

This looks like it might be a classic video game adaptation, for better or for worse. However, there are some aspects that work in its favor. Anderson and Jovovich are well-known for the Resident Evil movies (one of the few enduring game-based franchises), and there’s clearly talent attached beyond one or two recognizable names. It probably won’t be a timeless classic, but it might just be enjoyable.

geeky,Tech,Database

via Engadget http://www.engadget.com

October 3, 2020 at 04:18PM

I gotta move to Polk County

I gotta move to Polk County

https://ift.tt/30tDiqh

Get yourself an adult beverage and enjoy not only Sheriff Grady Judd’s description of the chain of felonies committed by a felon, but also the verbal butt whipping he applied to a journalist.


guns

via https://gunfreezone.net

October 2, 2020 at 04:17PM

Exciting and New Features in MariaDB 10.5

Exciting and New Features in MariaDB 10.5

https://ift.tt/2HQprUq

New Features in MariaDB 10.5

MariaDB 10.5 was released in June 2020 and will be supported until June 2025. It is the current stable version and comes with many exciting new features. In this blog, I am going to explain the new and exciting features in MariaDB 10.5.

  • Amazon S3 engine
  • Column Store
  • INET 6 data type
  • Binaries name changed to mariadb
  • More granular privileges
  • Galera with full GTID support
  • InnoDB refactoring

Amazon S3 Engine

The S3 engine is a nice feature in MariaDB 10.5. Now, you can move a table directly from local storage to Amazon S3 using ALTER TABLE, and the data remains accessible from MariaDB clients using standard SQL commands. This is a great solution for those who are looking to archive data for future reference at a low cost. I have written a blog about this feature – MariaDB S3 Engine: Implementation and Benchmarking – which has more insights on this.

#Installation

MariaDB [(none)]> install soname 'ha_s3';
Query OK, 0 rows affected (0.000 sec)

MariaDB [(none)]> select * from information_schema.engines where engine = 's3'\G
*************************** 1. row ***************************
      ENGINE: S3
     SUPPORT: YES
     COMMENT: Read only table stored in S3. Created by running ALTER TABLE table_name ENGINE=s3
TRANSACTIONS: NO
          XA: NO
  SAVEPOINTS: NO
1 row in set (0.000 sec)

#Implementation

MariaDB [s3_test]> alter table percona_s3 engine=s3;
Query OK, 0 rows affected (1.934 sec)              
Records: 0  Duplicates: 0  Warnings: 0

  • The S3 engine tables are completely read-only.
  • COUNT(*) is pretty fast on s3 engine tables.
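Before the ALTER will work, the engine needs bucket and credential settings in the server configuration. A minimal sketch of how to verify them and how to bring an archived table back when it must be writable again (reusing the example table above; the variable values on your server are placeholders here):

MariaDB [(none)]> show global variables like 's3%';       -- check that s3_bucket, s3_region, s3_access_key, s3_secret_key are set
MariaDB [s3_test]> alter table percona_s3 engine=InnoDB;  -- converts the read-only S3 table back into a writable InnoDB table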

ColumnStore

MariaDB ColumnStore 1.5 is available with the MariaDB 10.5 community server. It brings a high-performance, open source, distributed, SQL-compatible analytics solution. Before MariaDB 10.5, ColumnStore was available only as a separate fork of MariaDB, but with MariaDB 10.5 it is completely integrated. All you need to do is install the ColumnStore package, “MariaDB-columnstore-engine.x86_64”.

[root@mariadb ~]# yum list installed | grep -i columnstore
MariaDB-columnstore-engine.x86_64   10.5.5-1.el7.centos         @mariadb-main

MariaDB [jesus]> select plugin_name,plugin_status,plugin_library,plugin_version from information_schema.plugins where plugin_name like 'columnstore%'; 
+---------------------+---------------+-------------------+----------------+
| plugin_name         | plugin_status | plugin_library    | plugin_version |
+---------------------+---------------+-------------------+----------------+
| Columnstore         | ACTIVE        | ha_columnstore.so | 1.5            |
| COLUMNSTORE_COLUMNS | ACTIVE        | ha_columnstore.so | 1.5            |
| COLUMNSTORE_TABLES  | ACTIVE        | ha_columnstore.so | 1.5            |
| COLUMNSTORE_FILES   | ACTIVE        | ha_columnstore.so | 1.5            |
| COLUMNSTORE_EXTENTS | ACTIVE        | ha_columnstore.so | 1.5            |
+---------------------+---------------+-------------------+----------------+
5 rows in set (0.002 sec)

MariaDB [jesus]> create table hercules(id int, name varchar(16)) engine = ColumnStore;
Query OK, 0 rows affected (0.503 sec)

MariaDB [jesus]> show create table hercules\G
*************************** 1. row ***************************
       Table: hercules
Create Table: CREATE TABLE `hercules` (
  `id` int(11) DEFAULT NULL,
  `name` varchar(16) DEFAULT NULL
) ENGINE=Columnstore DEFAULT CHARSET=latin1
1 row in set (0.000 sec)

MariaDB ColumnStore 1.5 comes with two utilities for managing its XML configuration, which greatly help with configuration management:

  • mcsGetConfig : Used to display the current configurations
  • mcsSetConfig : Used to change the configuration
[root@mariadb vagrant]# mcsGetConfig -a | grep CrossEngineSupport.Pass
CrossEngineSupport.Password = 
[root@mariadb vagrant]# mcsSetConfig CrossEngineSupport Password "hercules7sakthi"
[root@mariadb vagrant]# mcsGetConfig -a | grep CrossEngineSupport.Pass
CrossEngineSupport.Password = hercules7sakthi
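To get a feel for the engine, here is a minimal sketch reusing the hercules table created above. Nothing here is ColumnStore-specific syntax, which is exactly the point: the engine is queried with standard SQL.

MariaDB [jesus]> insert into hercules values (1, 'percona'), (2, 'mariadb');
MariaDB [jesus]> select name from hercules where id = 2;   -- served by the ColumnStore engine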

INET6 Data Type

Usually, INET6 refers to the IPv6 address family.

  • The INET6 data type is introduced to store IPv6 addresses.
  • The INET6 data type can also be used to store IPv4 addresses, assuming the conventional mapping of IPv4 addresses into IPv6 addresses.
  • Internally, the storage engine sees INET6 as BINARY(16), while clients see it as CHAR(39).
  • Values are stored as a 16-byte fixed-length binary string.

Example:

MariaDB [jesus]> create table inet6test (id int primary key auto_increment, ipaddresses INET6);
Query OK, 0 rows affected (0.005 sec)

MariaDB [jesus]> insert into inet6test (ipaddresses) values ('2001:0db8:85b3:0000:0000:8a2e:0370:7334');
Query OK, 1 row affected (0.001 sec)

MariaDB [jesus]> insert into inet6test (ipaddresses) values ('::172.28.128.12');
Query OK, 1 row affected (0.002 sec)

MariaDB [jesus]> select * from inet6test;
+----+------------------------------+
| id | ipaddresses                  |
+----+------------------------------+
|  1 | 2001:db8:85b3::8a2e:370:7334 |
|  2 | ::172.28.128.12              |
+----+------------------------------+
2 rows in set (0.000 sec)
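Since clients see the column as CHAR(39), filtering can be done with a plain string literal; a minimal sketch, assuming the usual implicit conversion from string to INET6 applies:

MariaDB [jesus]> select id from inet6test where ipaddresses = '::172.28.128.12';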

Binaries Name Changed to mariadb

All binaries have been renamed from “mysql*” to “mariadb*”, with symlinks provided for the corresponding mysql names.

Example:

  • “mysql” is now “mariadb”
  • “mysqldump” is now “mariadb-dump”
  • “mysqld” is now “mariadbd”
  • “mysqld_safe” is now “mariadbd-safe”

Using “mariadb” client:

[root@mariadb ~]# mariadb -e "select @@version, @@version_comment"
+----------------+-------------------+
| @@version      | @@version_comment |
+----------------+-------------------+
| 10.5.5-MariaDB | MariaDB Server    |
+----------------+-------------------+

Using “mariadb-dump”:

[root@mariadb ~]# mariadb-dump mysql > mysql.sql
[root@mariadb ~]# less mysql.sql | head -n5
-- MariaDB dump 10.17  Distrib 10.5.5-MariaDB, for Linux (x86_64)
--
-- Host: localhost    Database: mysql
-- ------------------------------------------------------
-- Server version 10.5.5-MariaDB

MariaDB server startup via the systemd service now uses the mariadbd binary. The same applies to the mariadbd-safe wrapper script: even when called via the mysqld_safe symlink, it will start the actual server process as mariadbd, not mysqld.

Example:

Using startup service:

[root@mariadb ~]# service mysql start
Redirecting to /bin/systemctl start mysql.service
[root@mariadb ~]# ps -ef | grep -i mysql
mysql     9002     1  1 01:23 ?        00:00:00 /usr/sbin/mariadbd
root      9021  8938  0 01:23 pts/0    00:00:00 grep --color=auto -i mysql

Using mariadbd-safe:

[root@mariadb ~]# mariadbd-safe --user=mysql &
[root@mariadb ~]# 200806 01:30:43 mysqld_safe Logging to '/var/lib/mysql/mariadb.err'.
200806 01:30:43 mysqld_safe Starting mariadbd daemon with databases from /var/lib/mysql
[root@mariadb ~]# 
[root@mariadb ~]# ps -ef | grep -i mysql
root      9088  8938  0 01:30 pts/0    00:00:00 /bin/sh /bin/mariadbd-safe --user=mysql
mysql     9162  9088  1 01:30 pts/0    00:00:00 //sbin/mariadbd --basedir=/ --datadir=/var/lib/mysql --plugin-dir=//lib64/mysql/plugin --user=mysql --log-error=/var/lib/mysql/mariadb.err --pi

Using mysqld_safe:

[root@mariadb ~]# mysqld_safe --user=mysql &
[root@mariadb ~]# 200806 01:31:40 mysqld_safe Logging to '/var/lib/mysql/mariadb.err'.
200806 01:31:40 mysqld_safe Starting mariadbd daemon with databases from /var/lib/mysql
[root@mariadb ~]# ps -ef | grep -i mysql
root      9179  8938  0 01:31 pts/0    00:00:00 /bin/sh /bin/mysqld_safe --user=mysql
mysql     9255  9179  0 01:31 pts/0    00:00:00 //sbin/mariadbd --basedir=/ --datadir=/var/lib/mysql --plugin-dir=//lib64/mysql/plugin --user=mysql --log-error=/var/lib/mysql/mariadb.err --pid-file=mariadb.pid

From the above examples, you can see that all the MariaDB server startup is using the “mariadbd”.

More Granular Privileges

Privileges are more granular now. The SUPER privilege has been split into several smaller privileges, similar to MySQL 8 dynamic privileges. Security-wise this is a very good implementation, as it avoids allocating unwanted privileges to users. A short GRANT sketch follows the list below.

  • BINLOG ADMIN – Enables administration of the binary log, including the PURGE BINARY LOGS statement
  • BINLOG REPLAY – Enables replaying the binary log with the BINLOG statement
  • CONNECTION ADMIN – Enables administering connection resource limit options. This includes ignoring the limits specified by max_connections, max_user_connections, and max_password_errors
  • FEDERATED ADMIN – Execute CREATE SERVER, ALTER SERVER, and DROP SERVER statements. Added in MariaDB 10.5.2.
  • READ_ONLY ADMIN – Allows the user to set the read_only system variable and to perform write operations even when the read_only option is active. Added in MariaDB 10.5.2.
  • REPLICATION MASTER ADMIN – Permits administration of primary servers, including the SHOW REPLICA HOSTS statement, and setting the gtid_binlog_state, gtid_domain_id, master_verify_checksum, and server_id system variables. Added in MariaDB 10.5.2.
  • REPLICATION SLAVE ADMIN – Permits administering replica servers, including START SLAVE, STOP SLAVE, CHANGE MASTER, SHOW SLAVE STATUS, SHOW RELAYLOG EVENTS statements (new in MariaDB 10.5.2).
  • SET USER – Enables setting the DEFINER when creating triggers, views, stored functions, and stored procedures (new in MariaDB 10.5.2).

And:

  • “REPLICATION CLIENT” is renamed to “BINLOG MONITOR”
  • “SHOW MASTER STATUS” command is now renamed to “SHOW BINLOG STATUS”
MariaDB [jesus]> show binlog status;
+-------------+----------+--------------+------------------+
| File        | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+-------------+----------+--------------+------------------+
| herc.000003 |      525 |              |                  |
+-------------+----------+--------------+------------------+
1 row in set (0.000 sec)
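As a quick sketch of how the new privileges are handed out (the account name below is hypothetical), they are granted globally, one at a time, much as SUPER used to be:

MariaDB [(none)]> grant binlog monitor, replication slave admin on *.* to 'repl_ops'@'localhost';
MariaDB [(none)]> show grants for 'repl_ops'@'localhost';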

Galera With Full GTID Support

Galera now has full GTID support in MariaDB 10.5. This will greatly help cluster + async replication environments. With this feature, all nodes in a cluster will have the same GTID for replicated events originating from the cluster.

MariaDB 10.5 also has a new SESSION variable, wsrep_gtid_seq_no. With this variable, we can manually set the WSREP GTID sequence number in the cluster (similar to gtid_seq_no for non-WSREP transactions).

MariaDB [jesus]> show variables like 'wsrep_gtid_seq_no';        
+-------------------+-------+
| Variable_name     | Value |
+-------------------+-------+
| wsrep_gtid_seq_no | 0     |
+-------------------+-------+
1 row in set (0.001 sec)
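A minimal sketch of using it, assuming the semantics mirror gtid_seq_no for non-WSREP transactions (t1 is a hypothetical InnoDB table):

MariaDB [jesus]> set session wsrep_gtid_seq_no = 1000;
MariaDB [jesus]> insert into t1 values (1);   -- this transaction should replicate carrying the chosen sequence number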

InnoDB Refactoring

There are some notable changes in the InnoDB engine, which make MariaDB diverge further from MySQL.

Apart from this, MariaDB 10.5 also has improvements in the following areas:

  • INFORMATION_SCHEMA
  • PERFORMANCE_SCHEMA
  • JSON
  • Query Optimizer
  • Binary logs with more metadata

I am looking forward to experimenting with the new MariaDB 10.5 features and seeing how they are going to help in production environments. I am also planning to write blogs on some of these topics, so stay tuned! 

Your mission-critical applications depend on your MariaDB database environment. What happens if your database goes down? Contact Percona MariaDB Database Support! Percona is the premier support provider for open source databases, including MariaDB, the most well-known fork of MySQL.

technology

via MySQL Performance Blog https://ift.tt/1znEN8i

October 2, 2020 at 11:49AM

MySQL 101: Tuning MySQL After Upgrading Memory

MySQL 101: Tuning MySQL After Upgrading Memory

https://ift.tt/30jNwcH

Tuning MySQL After Upgrading Memory

In this post, we will discuss what to do when you add more memory to your instance. Adding memory to a server where MySQL is running is common practice when scaling resources.

First, Some Context

Scaling resources just means adding more resources to your environment, and this can be split into two main approaches: vertical scaling and horizontal scaling.

Vertical scaling is increasing hardware capacity for a given instance, thus having a more powerful server, while horizontal scaling is adding more servers, a pretty standard approach for load balancing and sharding.

As traffic grows, working datasets are getting bigger, and thus we start to suffer because the data that doesn’t fit into memory has to be retrieved from disk. This is a costly operation, even with modern NVME drives, so at some point, we will need to deal with either of the scaling solutions we mentioned.

In this case, we will discuss adding more RAM, which is usually the fastest and easiest way to scale hardware vertically; more memory is also probably the resource that benefits MySQL the most.

How to Calculate Memory Utilization

First of all, we need to be clear about which variables allocate memory during MySQL operations, and we will cover only the common ones, as there are a bunch of them. We also need to know that some variables allocate memory globally, while others do a per-thread allocation.

For the sake of simplicity, we will cover this topic considering the usage of the standard storage engine: InnoDB.

We have globally allocated variables:

key_buffer_size: a MyISAM setting that should be set to 8-16M; anything above that is just wrong, because we shouldn’t use MyISAM tables except for a particular reason. A typical scenario is MyISAM being used by system tables only, which are small (this is valid for versions up to 5.7; in MySQL 8, system tables were migrated to the InnoDB engine). So the impact of this variable is negligible.

query_cache_size: 0 is the default, and the query cache was removed in 8.0, so we won’t consider it.

innodb_buffer_pool_size: which is the cache where InnoDB places pages to perform operations. The bigger, the better. 🙂

Of course, there are others, but their impact is minimal when running with defaults.

Also, there are other variables that are allocated per thread (or open connection):
read_buffer_size, read_rnd_buffer_size, sort_buffer_size, join_buffer_size, and tmp_table_size, among a few others. All of them, by default, work very well, as the allocation is small and efficient. Hence, the main potential issue arises when we allow many connections that can hold these buffers for some time, adding extra memory pressure. The ideal situation is to control how many connections are being opened (and used) and try to reduce that number to one that is sufficient and doesn’t hurt the application.
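A quick, harmless check of what each connection could be handed at most, together with the connection ceiling (just a sketch; run it on your own instance):

mysql > SELECT @@max_connections,
-> @@read_buffer_size, @@read_rnd_buffer_size,
-> @@sort_buffer_size, @@join_buffer_size, @@tmp_table_size;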

But let’s not lose the focus, we have more memory, and we need to know how to tune it properly to make the best usage.

The most memory-impacting setting we need to focus on is innodb_buffer_pool_size, as this is where almost all the magic happens and it is usually the most significant memory consumer. There is an old rule of thumb that says the size of this setting “should be around 75% of available memory”, and some cloud vendors set this value to total_memory*0.75.

I said “old” because that rule was good when running instances with 8G or 16G of RAM was common, so allocating roughly 6G out of 8G or 13G out of 16G used to be logical.

But what if we run an instance with 100G or even 200G? It’s not uncommon to see this type of hardware nowadays, so should we use 80G out of 100G, or 160G out of 200G? Meaning, should we leave something between 20G and 40G of memory unallocated, reserved for filesystem cache operations? While these filesystem operations are not useless, I don’t see the OS needing more than 4G-8G for this purpose on a dedicated DB server. Also, it is recommended to use the O_DIRECT flushing method for InnoDB to bypass the filesystem cache.

Example

Now that we understand the primary variables allocating memory, let’s check a good use case I’m currently working on. Assume this system:

$ free -m
      total      used     free    shared    buff/cache    available
Mem: 385625    307295    40921         4         37408        74865

So, roughly 380G of RAM, a nice amount of memory. Now let’s check what the maximum potential allocation is, considering the maximum number of connections used.

*A little disclaimer here: while this query is not entirely accurate and thus can diverge from real results, it gives us a sense of what is potentially going to be allocated. We can also take advantage of the performance_schema database, but this may require enabling some instruments that are disabled by default:

mysql > show global status like 'max_used_connections';
+----------------------+-------+
| Variable_name        | Value |
+----------------------+-------+
| Max_used_connections |    67 |
+----------------------+-------+
1 row in set (0.00 sec)

So with a maximum of 67 connections used, we can get:

mysql > SELECT ( @@key_buffer_size
-> + @@innodb_buffer_pool_size
-> + 67 * (@@read_buffer_size
-> + @@read_rnd_buffer_size
-> + @@sort_buffer_size
-> + @@join_buffer_size
-> + @@tmp_table_size )) / (1024*1024*1024) AS MAX_MEMORY_GB;
+---------------+
| MAX_MEMORY_GB |
+---------------+
| 316.4434      |
+---------------+
1 row in set (0.00 sec)

So far, so good; we are within memory ranges. Now let’s see how big innodb_buffer_pool_size is and whether it is well sized:

mysql > SELECT (@@innodb_buffer_pool_size) / (1024*1024*1024) AS BUFFER_POOL_SIZE;
+------------------+
| BUFFER_POOL_SIZE |
+------------------+
| 310.0000         |
+------------------+
1 row in set (0.01 sec)

So the buffer pool is 310G, roughly 82% of total memory, and total usage so far was around 84%, which leaves around 60G of memory not being used by MySQL. Well, it is being used by the filesystem cache, which, in the end, is not used by InnoDB.

Ok now, let’s get to the point, how to properly configure memory to be used effectively by MySQL. From pt-mysql-summary we know that the buffer pool is fully filled:

Buffer Pool Size | 310.0G
Buffer Pool Fill | 100%

Does this mean we need more memory? Maybe. Let’s check how many disk operations we have in an instance that we know has a working dataset that doesn’t fit in memory (the very reason we increased the memory size), using this command:

mysqladmin -r -i 1 -c 60 extended-status | egrep "Innodb_buffer_pool_read_requests|Innodb_buffer_pool_reads"
| Innodb_buffer_pool_read_requests | 99857480858|
| Innodb_buffer_pool_reads         | 598600690  |
| Innodb_buffer_pool_read_requests | 274985     |
| Innodb_buffer_pool_reads         | 1602       |
| Innodb_buffer_pool_read_requests | 267139     |
| Innodb_buffer_pool_reads         | 1562       |
| Innodb_buffer_pool_read_requests | 270779     |
| Innodb_buffer_pool_reads         | 1731       |
| Innodb_buffer_pool_read_requests | 287594     |
| Innodb_buffer_pool_reads         | 1567       |
| Innodb_buffer_pool_read_requests | 282786     |
| Innodb_buffer_pool_reads         | 1754       |

Innodb_buffer_pool_read_requests: page reads satisfied from memory (good)
Innodb_buffer_pool_reads: page reads from disk (bad)
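The same two counters can be combined into a rough miss ratio with a single query; a sketch using the performance_schema.global_status table (the closer to zero, the better):

mysql > SELECT
-> (SELECT VARIABLE_VALUE FROM performance_schema.global_status WHERE VARIABLE_NAME = 'Innodb_buffer_pool_reads')
-> /
-> (SELECT VARIABLE_VALUE FROM performance_schema.global_status WHERE VARIABLE_NAME = 'Innodb_buffer_pool_read_requests') AS buffer_pool_miss_ratio;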

As you may notice, we still get some reads from the disk, and we want to avoid them, so let’s increase the buffer pool size to 340G (90% of total memory) and check again:

mysqladmin -r -i 1 -c 60 extended-status | egrep "Innodb_buffer_pool_read_requests|Innodb_buffer_pool_reads"
| Innodb_buffer_pool_read_requests | 99937722883 |
| Innodb_buffer_pool_reads         | 599056712   |
| Innodb_buffer_pool_read_requests | 293642      |
| Innodb_buffer_pool_reads         | 1           |
| Innodb_buffer_pool_read_requests | 296248      |
| Innodb_buffer_pool_reads         | 0           |
| Innodb_buffer_pool_read_requests | 294409      |
| Innodb_buffer_pool_reads         | 0           |
| Innodb_buffer_pool_read_requests | 296394      |
| Innodb_buffer_pool_reads         | 6           |
| Innodb_buffer_pool_read_requests | 303379      |
| Innodb_buffer_pool_reads         | 0           |

Now we are barely going to disk, and IO pressure was released; this makes us happy –  right?
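For reference, on MySQL 5.7+ the resize itself does not require a restart, since innodb_buffer_pool_size is dynamic; a sketch of the change (the value gets rounded to a multiple of the chunk size, and it should also be persisted in the configuration file):

mysql > SET GLOBAL innodb_buffer_pool_size = 340 * 1024 * 1024 * 1024;
mysql > SELECT @@innodb_buffer_pool_size / (1024*1024*1024) AS BUFFER_POOL_SIZE_GB;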

Summary

If you increase the memory size of a server, you mostly need to focus on innodb_buffer_pool_size, as this is the most critical variable to tune. Allocating 90% to 95% of total available memory on big systems is not bad at all, as the OS requires only a few GB to run correctly, and a few more GB for swap should be enough to run without problems.

Also, check your maximum number of connections required (and used), as this is a common mistake causing memory issues; if you need to run with 1000 connections open, then allocating 90% of the memory to the buffer pool may not be possible, and some additional actions may be required (i.e., adding a proxy layer or a connection pool).

From MySQL 8, we have a new variable called innodb_dedicated_server, which will auto-calculate the memory allocation. While this variable is really useful for an initial approach, it may under-allocate memory in systems with more than 4G of RAM, as it sets the buffer pool size to (detected server memory * 0.75); so on a 200G server, we get only 150G for the buffer pool.
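A quick way to check whether that auto-sizing is in effect on your instance and what it produced (just a sketch; the numbers will differ per server):

mysql > SELECT @@innodb_dedicated_server,
-> @@innodb_buffer_pool_size / (1024*1024*1024) AS BUFFER_POOL_SIZE_GB;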

Conclusion

Vertical scaling is the easiest and fastest way to improve performance, and it is also cheaper – but not magical. Tuning variables properly requires analysis and understanding of how memory is being used. This post focused on the essential variables to consider when tuning memory allocation, specifically innodb_buffer_pool_size and max_connections. Don’t over-tune when it’s not necessary and be cautious of how these two affect your systems.

technology

via Planet MySQL https://ift.tt/2iO8Ob8

September 30, 2020 at 11:34AM

2019 FBI Crime Statistics Show Hammers, Clubs Again Outrank Rifle Murders

2019 FBI Crime Statistics Show Hammers, Clubs Again Outrank Rifle Murders

https://ift.tt/36jlIZt

The FBI just released its 2019 crime statistics via its annual report, “Crime in the United States.” And once again, without fail, the data highlights the overwhelming hypocrisy of gun control. Rifles of any sort—let alone “scary” black rifles—were responsible for fewer murders than several other categories, including “blunt objects” like hammers and clubs.


2019 FBI Crime Statistics

In all, 10,258 murders took place using a firearm in 2019. Predictably, handguns led the way, accounting for 6,368 murders. But murderers used rifles only 364 times. Meanwhile, shotguns accounted for just 200 wrongful deaths.

Those numbers stand out when compared to other categories, ones not regulated or facing the mob of gun control zealots. Knives (cutting instruments), for example, accounted for 1,476 murders in 2019. Blunt objects racked up another 397. Personal weapons, clarified as hands, fists, feet, etc., accounted for 600 murders.

So while the left wants to take our guns, the actual FBI data proves the dishonesty of the narrative. Fists, feet, knives, and clubs all pose a more imminent source of danger. Yet black guns, specifically rifles with magazines of more than 10 rounds, face the mob.

The 2019 FBI crime statistics showed crime decreased compared to 2018.

Interestingly, all the while, crime continues to fall. In some categories, numbers approach two decades of constant decline, as gun ownership swells across the country.

Recently released FBI crime statistics for 2019 show violent crime decreasing for the third consecutive year. Violent crime dropped 0.5 percent compared to 2018.

Property crime also dropped 4.1 percent compared to 2018. The figure marked the seventeenth consecutive year the collective estimates declined. The 2019 statistics further show that violent crime occurred at an estimated rate of 366.7 offenses per 100,000 inhabitants, while property crime occurred at 2,109.9 offenses per 100,000 inhabitants.

The FBI data shows 2019 as a great year, with property crime, burglary, and violent crime, among other categories, falling compared to 2018. Sadly, 2020 will most assuredly prove otherwise.

Crime and civil unrest continue to rise. And we expect next year’s FBI data to tell a much different tale — except on guns; we fully expect those stats to remain solid.

The post 2019 FBI Crime Statistics Show Hammers, Clubs Again Outrank Rifle Murders appeared first on Personal Defense World.

guns

via Personal Defense World https://ift.tt/2Arq2GB

September 29, 2020 at 03:06PM

Starships Size Comparison 2.0

Starships Size Comparison 2.0

https://ift.tt/3kY5KYV

Starships Size Comparison 2.0

Link

MetaBallStudios revisits their earlier starship size comparison with significantly more ships from the worlds of science fiction and fantasy. Just about every imaginable class of ship is represented, from the teensy Hocotate ship flown by the Pikmin to the Planet Express ship from Futurama to the gargantuan Ringworld.

fun

via The Awesomer https://theawesomer.com

September 29, 2020 at 04:15PM

MySQL: Import CSV, not using LOAD DATA

MySQL: Import CSV, not using LOAD DATA

https://ift.tt/339yFDb

All over the Internet people are having trouble getting LOAD DATA and LOAD DATA LOCAL to work. Frankly, do not use them, and especially not the LOCAL variant. They are insecure, and even if you get them to work, they are limited and unlikely to do what you want. Write a small data load program as shown below.

Not using LOAD DATA LOCAL

The fine manual says:

The LOCAL version of LOAD DATA has two potential security issues:

  • Because LOAD DATA LOCAL is an SQL statement, parsing occurs on the server side, and transfer of the file from the client host to the server host is initiated by the MySQL server, which tells the client the file named in the statement. In theory, a patched server could tell the client program to transfer a file of the server’s choosing rather than the file named in the statement. Such a server could access any file on the client host to which the client user has read access. (A patched server could in fact reply with a file-transfer request to any statement, not just LOAD DATA LOCAL, so a more fundamental issue is that clients should not connect to untrusted servers.)

  • In a Web environment where the clients are connecting from a Web server, a user could use LOAD DATA LOCAL to read any files that the Web server process has read access to (assuming that a user could run any statement against the SQL server). In this environment, the client with respect to the MySQL server actually is the Web server, not a remote program being run by users who connect to the Web server.

The second issue in reality means that if the web server has a suitable SQL injection vulnerability, the attacker may use that to read any file the web server has access to, bouncing this through the database server.

In short, never use (or even enable) LOAD DATA LOCAL.

  • local_infile is disabled in the server config, and you should keep it that way.
  • client libraries are by default compiled with ENABLED_LOCAL_INFILE set to off. It can still be enabled using a call to the mysql_options() C-API, but never do that.
  • 8.0.21+ places additional restrictions on this, to prevent you from being stupid (that is, actually enabling this anywhere).

Not using LOAD DATA

The LOAD DATA variant of the command assumes that you place a file on the database server, into a directory in the file system of the server, and load it from there. In the age of “MySQL as a service” this is inconvenient to impossible, so forget about this option, too.

If you were able to place files onto the system where your mysqld lives,

  • your user needs to have FILE as a privilege, a global privilege (GRANT FILE ON *.* TO ...)
  • the server variable secure_file_priv needs to be set to a directory name, and that directory needs to be world-readable. LOAD DATA and SELECT INTO OUTFILE work only with filenames below this directory. Setting this variable requires a server restart, this is not a dynamic variable (on purpose).

Note that the variable can be NULL (this is secure in the sense that LOAD DATA is disabled) or empty (this is insecure in that there are no restrictions).

There is nothing preventing you from setting the variable to /var/lib/mysql or other dumb locations which would expose vital system files to load and save operations. Do not do this.

Also, a location such as /tmp or any other world-writeable directory would be dumb: Use a dedicated directory that is writeable by the import user only, and make sure that it is world-readable in order to make the command work.

Better: Do not use this command at all (and set secure_file_priv to NULL).
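For completeness, a harmless read-only check shows how your server is currently configured for both variants:

mysql> SELECT @@local_infile, @@secure_file_priv;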

Using data dump and load programs instead

We spoke about dumping a schema into CSV files in Export the entire database to CSV already.

To complete the discussion we need to provide a way to do the inverse and load data from a CSV file into a table.

The full code is in load.py.

The main idea is to open a .csv file with csv.reader, and then iterate over the rows. For each row, we execute an INSERT statement, and every few rows we also COMMIT.

In terms of dependencies, we rely on MySQLdb and csv:

import MySQLdb
import csv

We need to know the name of a table, and the column names of that table (in the order in which they appear).

We should also make sure we can change the delimiter and quoting character used by the CSV, and make the commit interval variable.

Finally, we need to be able to connect to the database.

# table to load into
table = "data"

# column names to load into
columns = [
    "id",
    "d",
    "e",
]

# formatting options
delimiter = ","
quotechar = '"'

# commit every commit_interval lines
commit_interval = 1000

# database connection parameters
db_config = dict(
    host="localhost",
    user="kris",
    passwd="geheim",
    db="kris",
)

From this, we can build a database connection and an INSERT statement, using the table name and column names:

db = MySQLdb.connect(**db_config)

# build a proper insert command
cmd = f"insert into {table} ( "
cmd += ", ".join(columns)
cmd += ") values ("
cmd += "%s," * len(columns)
cmd = cmd[:-1] + ")"
print(f"cmd = {cmd}")

The actual code is then rather simple: Open the CSV file, named after the table, and create a csv.reader(). Using this, we iterate over the rows.

For each row, we execute the insert statement.

Every commit_interval rows we commit, and for good measure we also commit after finishing, to make sure any remaining rows also get written out.

with open(f"{table}.csv", "r") as csvfile:
    reader = csv.reader(csvfile, delimiter=delimiter, quotechar=quotechar)

    c = db.cursor()
    counter = 0

    # insert the rows as we read them
    for row in reader:
        c.execute(cmd, row)

        # every commit_interval rows, we issue a commit
        counter += 1
        if (counter % commit_interval) == 0:
            db.commit()

    # final commit to the remainder
    db.commit()

And that’s it. That’s all the code.

  • No FILE privilege,
  • No special permissions besides insert_priv into the target table.
  • No config in the database.
  • No server restart to set up the permissions.

And using Python’s multiprocessing, you could make it load multiple tables in parallel or chunk a very large table and load that in parallel – assuming you have database hardware that could profit from any of this.

In any case – this is simpler, more secure and less privileged than any of the broken LOAD DATA variants.

Don’t use them, write a loader program.

Let’s run it. First we generate some data, using the previous example from the partitions tutorial:

(venv) kris@server:~/Python/mysql$ mysql-partitions/partitions.py setup-tables
(venv) kris@server:~/Python/mysql$ mysql-partitions/partitions.py start-processing
create p2 reason: not enough partitions
cmd = alter table data add partition ( partition p2 values less than ( 20000))
create p3 reason: not enough partitions
cmd = alter table data add partition ( partition p3 values less than ( 30000))
create p4 reason: not enough partitions
cmd = alter table data add partition ( partition p4 values less than ( 40000))
create p5 reason: not enough partitions
cmd = alter table data add partition ( partition p5 values less than ( 50000))
create p6 reason: not enough empty partitions
cmd = alter table data add partition ( partition p6 values less than ( 60000))
counter = 1000
counter = 2000
counter = 3000
counter = 4000
^CError in atexit._run_exitfuncs: ...

We then dump the data, truncate the table, and reload the data. We count the rows to be sure we get all of them back.

(venv) kris@server:~/Python/mysql$ mysql-csv/dump.py
table = data
(venv) kris@server:~/Python/mysql$ mysql -u kris -pgeheim kris -e 'select count(*) from data'
mysql: [Warning] Using a password on the command line interface can be insecure.
+----------+
| count(*) |
+----------+
|     4511 |
+----------+
(venv) kris@server:~/Python/mysql$ mysql -u kris -pgeheim kris -e 'truncate table data'
mysql: [Warning] Using a password on the command line interface can be insecure.
(venv) kris@server:~/Python/mysql$ mysql-csv/load.py
cmd = insert into data ( id, d, e) values (%s,%s,%s)
(venv) kris@server:~/Python/mysql$ mysql -u kris -pgeheim kris -e 'select count(*) from data'
mysql: [Warning] Using a password on the command line interface can be insecure.
+----------+
| count(*) |
+----------+
|     4511 |
+----------+

technology

via Planet MySQL https://ift.tt/2iO8Ob8

September 28, 2020 at 02:09PM