Percona announces the release of Percona Monitoring and Management 1.9.1. PMM (Percona Monitoring and Management) is a free and open-source platform for managing and monitoring MySQL and MongoDB performance. You can run PMM in your own environment for maximum security and reliability. It provides thorough time-based analysis for MySQL and MongoDB servers to ensure that your data works as efficiently as possible.
This release contains bug fixes only and supersedes Percona Monitoring and Management 1.9.0. It fixes a problem in QAN where the Count column actually displayed the number of queries per minute, not per second, as the user would expect. The following screenshot demonstrates the problem: the value of the Count column for the TOTAL row is 649.38 QPS (queries per second), yet the total number of queries, 38.96 k (38,960), is only sixty times greater than the reported QPS value. Thus, queries were being counted per minute within the selected time range of Last 1 hour.
Query Analytics in PMM version 1.9.0.
The corrected version of QAN in PMM 1.9.1 shows that queries are now counted per second. The total number of queries is 60 * 60 = 3600 times greater than the QPS value, as expected for the chosen one-hour time range.
Ft Collins, CO -(Ammoland.com)- “You can’t ‘manage’ men into battle. You have to lead them!” ~ Rear Admiral Grace M Hopper
The eldest son of President Theodore “Teddy” Roosevelt was Theodore Roosevelt Jr.
Theodore Roosevelt Jr’s cousin was Franklin D Roosevelt, who became president and was Commander-in-Chief when Theodore Jr, then a brigadier general, became the only flag-grade officer to land at Normandy with the First Wave on D-Day and, at age fifty-six, the oldest person to participate in the Invasion. He was the second American to step off the boat and into the water at Utah Beach (Captain Leonard T Schroeder Jr was the first).
Theodore Jr’s younger brother, Quentin, at the rank of 2nd Lt, was a pilot during WWI and was killed in action during that War. During that same War, Theodore Jr was awarded the Distinguished Service Cross.
Theodore Jr’s son, also named Quentin (the 2nd), and also an officer (captain), also landed at Normandy on the same day as his father, also in the First Wave! Theodore Jr and Quentin II were the only father/son team to land at Normandy.
Between wars, Theodore Jr was involved in politics, and held the post of Assistant Secretary of the Navy. It was during that time that the acrimonious (and more or less permanent) family schism between the “Oyster Bay” Roosevelts (Teddy’s side) and the “Hyde Park” Roosevelts (FDR’s side) took place, precipitated by foul political maneuverings on the part of FDR’s wife, Eleanor Roosevelt.
Theodore Jr had no kind words for FDR, nor Eleanor, thereafter, considering them unworthy of the Roosevelt family name.
Eleanor later apologized for her less-than-honorable political activity, describing it as “below my dignity.” But, the damage was done, and was, as is so often the case, irreparable.
Captain Leonard T Schroeder Jr
A durable and versatile commander, Theodore Jr also served as Governor of Puerto Rico, and Governor-General of the Philippines. While in the Philippines, he earned the nickname “One-Shot-Teddy,” due to his expert marksmanship while hunting local water buffalo.
During WWII and back in the Army, now at the rank of colonel, and soon brigadier general, Theodore Jr found himself ADC (assistant division commander) of the 4th ID, stationed in England.
Theodore Jr’s requests to personally land with the First Wave during the D-Day Invasion were repeatedly denied. Never in good health, Theodore Jr was now fifty-six, gaunt, and required a cane. From his father, Theodore Jr had inherited abject bravery and an unselfish spirit of service to his Country, but not his robust health.
Major General Raymond “Tubby” Barton, commanding the 4th ID and worn down by the repeated requests, eventually conceded to Theodore Jr’s request, never expecting to see him alive again.
Hours later, when he stepped off the landing craft on Utah Beach, Theodore Jr immediately knew his entire First Wave (8th Infantry Regiment), owing to strong cross-currents, had landed at least a half-mile away from their intended landing zone.
Subordinates pointed out, during a hasty beach meeting, that the Second and subsequent waves, with the rest of the regiment and most of their heavy equipment, might land on the “right” beach.
Theodore Jr then made his famously audacious statement, for which he would be known ever thereafter:
“The rest of the Regiment will just have to catch up with us. We’ll start this war from right here!”
Contemptuously oblivious of hostile fire, for hours thereafter Theodore Jr personally greeted commanders of each succeeding wave (including General Barton himself), on the beach as they arrived, briefing them on the current situation and directing them to their assigned areas. He light-heartedly encouraged them with jokes, even poetry.
On 12 July 1944, after being involved in continuous fighting since the Invasion started 35 days earlier, Theodore Jr died, of all things, from a heart attack! He was eventually buried at the Omaha Beach American Cemetery, next to his younger brother, Quentin, who had died in France so many years earlier.
Theodore Jr was posthumously awarded the CMH for conspicuous bravery.
Throughout his military career, during both WWI and WWII, Theodore Jr had the reputation of always being personally up front with his troops. He was usually found sharing a fox-hole with front-line soldiers. Rarely was he found “in the rear with the gear.”
In fact, George Patton and Omar Bradley both criticized him for being “too close to his men.” However, in his notes, Patton referred to Theodore Jr as “one of the bravest men I’ve ever known.”
Omar Bradley described Theodore Jr’s actions at Utah Beach to be, “… the single most heroic action I’ve ever seen in combat.”
Paradoxically, both Patton and Bradley would be pallbearers at Theodore Jr’s funeral.
And from the other end of the rank-spectrum, an unnamed PFC, under Theodore Jr’s command at Utah Beach, said that he witnessed the general confidently walking around the battle area, apparently unconcerned with enemy fire (which was continuous).
“His conspicuous gallantry gave me the courage to get up and get on with the job.”
That’s what real leaders do!
/John
About John Farnam & Defense Training International, Inc. As a defensive weapons and tactics instructor, John Farnam will urge you, based on your own beliefs, to make up your mind in advance as to what you would do when faced with an imminent lethal threat. You should, of course, also decide what preparations you should make in advance, if any. Defense Training International wants to make sure that their students fully understand the physical, legal, psychological, and societal consequences of their actions or inactions.
It is our duty to make you aware of certain unpleasant physical realities intrinsic to the Planet Earth. Mr. Farnam is happy to be your counselor and advisor. Visit: www.defense-training.com
(NSFW: Mature) Looks like Byron & Dave have hit the nail on the head about some of the people we have known in our past work lives. After watching this dark comedy about socializing outside the office, nothing pleases us more than to be self-employed.
No, I don’t really believe that, because I’ve seen NRA members of all races in my life and the NRA famously stood for a black man’s right to own a gun in the Supreme Court case of McDonald v. Chicago, but a lot of people do.
To that end, NRATV’s Colion Noir decided to present a history of the NRA and its relationship with black people.
“I know many of you won’t be willing to give me a chance,” Colion says, and, unfortunately, he’s right. They don’t want to hear the facts, especially from a black man who has gone off the Democratic plantation.
Still, he implores people to listen. “All I ask is that you watch,” he says.
If they do, though, I expect they’ll ignore what he says and cling bitterly to their irrationalities and fictions. They don’t want the truth. What they do want is to undermine gun ownership, and that starts with the NRA.
When I wrote “The Stigmatization Of Gun Owners,” I forgot to mention the demonization of the NRA as racist. I mentioned claims that it’s a terrorist organization, but I’ve gotten used to the accusations that the NRA is hateful, bigoted, and cares nothing about minorities. It’s not that they’re factual; it’s that they’re so old and common that I’ve already addressed them before.
This isn’t going anywhere, though. It’s our new normal for the time being, so we have to settle in and get ready to fight to keep our Second Amendment rights.
While I expect the post-Parkland outrage to die out sooner or later, there will be another incident. There’s always another incident and that won’t change no matter what the anti-gunners do. When it happens, though, all this will come back up. They’ll talk about how racist the NRA and gun owners are. We’ll be right back where we started.
Yes, I think the NRA needs more spokespeople like Noir. I have suggestions, of course, though whether those people would be interested or not remains to be seen. More white people like myself won’t change anyone’s mind about the NRA. We need as many different faces as possible to disprove these claims, to show them for the lies that they are.
What Noir misses, though, is that when so-called journalists call the NRA racist, they don’t just mean the NRA. They use those three letters as a proxy for gun owners in general. They want to paint all of us with the same broad brushstroke, even if we’re not members of the NRA. Of course, a lot of us renewed our membership because of what the mainstream media has been pulling, lately.
But I encourage you all to watch the video. There’s little in it that should be new to us, but it’s still good information in a handy format that may prove useful in your own debates.
The overall point, that the media continues to portray the NRA as racist despite extensive evidence to the contrary, can’t be overstated.
New research links so-called “good” HDL cholesterol with infectious diseases such as gastroenteritis and pneumonia.
“Surprisingly, we found that individuals with both low and high HDL cholesterol had high risk of hospitalization with an infectious disease. Perhaps more importantly, these same groups of individuals had high risk of dying from infectious disease,” explains Børge Nordestgaard, professor and chief physician at the University of Copenhagen and Copenhagen University Hospital.
The results are based on data from 100,000 individuals from the Copenhagen General Population Study whom researchers followed for more than 6 years using national Danish health registries. The findings appear in the European Heart Journal.
“Numerous studies in animals and cells indicate that HDL is of importance for the function of the immune system and thereby the susceptibility to infectious disease, but this study is the first to examine if HDL is associated with the risk of infectious disease among individuals from the general population,” explains PhD student, physician, and study coauthor Christian Medom Madsen.
The authors cannot, based on this study, conclude that very low or very high HDL is the direct cause of the increased risk of infectious disease, but conversely they cannot rule out a direct causal relationship either, as data from the genetic part of the study indicate that this might be the case.
“Our findings indicate that, in the future, research into the role and function of HDL should not narrowly focus on cardiovascular disease, but rather focus on the role of HDL in other disease areas, such as infectious disease,” says Nordestgaard.
The 21 percent of the population with the lowest concentrations of HDL cholesterol and the 8 percent of the population with the highest concentrations of HDL cholesterol had high risk of infectious disease.
Individuals with very low HDL cholesterol had a 75 percent higher risk of infectious disease as compared to the reference group and the risk was 43 percent higher in those with very high HDL cholesterol.
Before City Barbecue hit town, Hoggy’s was pretty much the place for local barbecue aficionados. It had a stranglehold on the Columbus market, with several locations throughout the central Ohio region in the 1990s. But things change. Eaters are fickle, tastes change, and new competition enters the scene; nothing lasts forever. Hoggy’s presence in the market ebbed until its only remaining outlet was as a side operation out of a Johnny Bucelli’s project on the north end.
It was all but dead.
A recent drive by the aforementioned outlet offered a new perspective. The address now houses a Hoggy’s: a real Hoggy’s. In fact, according to the host, it’s been a real Hoggy’s for a year.
There was great rejoicing in the land. There was also some trepidation: experiences with the restaurant chain in its dying days were not terrific. That is, of course, part of the whirlpool of death for any restaurant. When a restaurant starts to struggle, its quality goes down; when quality declines, no one wants to eat there. If no one eats there, the restaurant makes no money. Presto: death of restaurant.
So, the pressing first question on the agenda was this: Is the Smoked Half Chicken ($12.95) right again? Happily, it has returned to its former glory. It is huge: a half chicken the size of a full rotisserie chicken. The dark hue of its skin betrays meat that is fully imbued with smoky flavor, succulent and completely binge-worthy.
Staying in the chicken department a little longer, you can also get Smoked Wings ($11.95) in a variety of flavors, including garlic ranch, Cajun and hogfire. The Jamaican version tried was on the cloyingly sweet side, which overwhelmed any smokiness. One and done, but that’s not to say there won’t be further tries with other flavors.
The Single Pig sandwich ($8.95) permits an opportunity to try out the house brisket (not a pig) on a long, soft bun. The brisket is lean and thinly sliced with more smoky flavor that teams well with the sweetness of the house barbecue sauce.
Ribs ($16.95) are also on the menu. Lean and meaty, they’re okay, but lots of places have good ribs in this day and age. So, back to the chicken instead: the BBQ Nachos ($9.95) with chicken are a new favorite. It’s the requisite chips, covered with melted cheese, salsa, sour cream and miles and miles of hunks of smoked chicken. The chicken pieces are large enough to appreciate their quality and flavor contributions.
Dining options such as the aforementioned half chicken are served with sides. The house baked beans and macaroni and cheese are both recommended choices: the former because the tangy beans have more complexity and care than your average baked beans, the latter because of its uncommon cheesiness.
Also, there’s a kids menu with mini-versions of Hoggy’s offerings. There’s also an option to order grown-up Loaded Fries, which will make you immortal in the eyes of junior guests. An order yields a dinner-plate-sized serving of fries doused in cheese sauce, ranch dressing, and something like a pound of bacon crumbles. It is $4.50, and surely every bit as nutritious as chicken tenders. You can find it all at 830 Bethel Road.
Miriam Bowers Abbott is a freelance contributor to Columbus Underground who reviews restaurants, writes food-centric featurettes, and occasionally pens other community journalism pieces.
In MySQL 8.0 we have replaced the old regular expression library with the ICU regex library. See Martin’s blog on the topic. The main goal is to get full Unicode support for regular expressions, but in addition we get a lot of neat features.…
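To get a feel for what the new library brings, here is a minimal sketch (assuming a MySQL 8.0 server; the sample strings are just placeholders) using REGEXP_LIKE() and REGEXP_REPLACE() with an ICU Unicode property class:
-- \p{L} matches any Unicode letter, something the old library could not express:
SELECT REGEXP_LIKE('Köln', '^\\p{L}+$') AS all_letters;            -- returns 1
-- REGEXP_REPLACE() is one of the functions added along with the switch to ICU:
SELECT REGEXP_REPLACE('abc123def456', '[0-9]+', '-') AS stripped;  -- returns 'abc-def-'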
Running databases on cloud infrastructure is getting increasingly popular these days. Although a cloud VM may not be as reliable as an enterprise-grade server, the main cloud providers offer a variety of tools to increase service availability. In this blog post, we’ll show you how to architect your MySQL or MariaDB database for high availability, in the cloud. We will be looking specifically at Amazon Web Services and Google Cloud Platform, but most of the tips can be used with other cloud providers too.
Both AWS and Google offer database services on their clouds, and these services can be configured for high availability. It is possible to have copies in different availability zones (or zones in GCP) in order to increase your chances of surviving a partial failure of services within a region. Although a hosted service is a very convenient way of running a database, note that the service is designed to behave in a specific way, which may or may not fit your requirements. For instance, AWS RDS for MySQL has a pretty limited list of options when it comes to failover handling. Multi-AZ deployments come with 60-120 seconds of failover time as per the documentation. In fact, given that the “shadow” MySQL instance has to start from a “corrupted” dataset, this may take even longer, as more work could be required to apply or roll back transactions from the InnoDB redo logs. There is an option to promote a slave to become a master, but it is not really feasible, as you cannot reslave the existing slaves off the new master. A managed service is also intrinsically more opaque, which makes performance problems harder to trace. More insights on RDS for MySQL and its limitations can be found in this blog post.
On the other hand, if you decide to manage the databases yourself, you are in a different world of possibilities. A number of things that you can do on bare metal are also possible on EC2 or Compute Engine instances. You do not have the overhead of managing the underlying hardware, yet you retain control over how to architect the system. There are two main options when designing for MySQL availability: MySQL replication and Galera Cluster. Let’s discuss them.
MySQL Replication
MySQL replication is a common way of scaling MySQL with multiple copies of the data. Asynchronous or semi-synchronous, it propagates changes executed on a single writer, the master, to replicas/slaves, each of which contains the full data set and can be promoted to become the new master. Replication can also be used for scaling reads, by directing read traffic to replicas and offloading the master in this way. The main advantage of replication is its ease of use: it is so widely known and popular (and easy to configure) that there are numerous resources and tools to help you manage and configure it. Our own ClusterControl is one of them; you can use it to easily deploy a MySQL replication setup with integrated load balancers, manage topology changes, failover/recovery, and so on.
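To show how little it takes to get basic replication going, here is a minimal sketch of attaching a replica to a master using GTID auto-positioning (the host, user and password are placeholders, and it assumes GTIDs are already enabled on both servers):
-- Run on the replica:
CHANGE MASTER TO
  MASTER_HOST = '10.0.0.10',
  MASTER_USER = 'repl',
  MASTER_PASSWORD = 'repl_password',
  MASTER_AUTO_POSITION = 1;   -- use GTIDs instead of binlog file/position
START SLAVE;
SHOW SLAVE STATUS\G           -- check that Slave_IO_Running and Slave_SQL_Running are both 'Yes'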
One major issue with MySQL replication is that it is not designed to handle network splits or a master’s failure. If a master goes down, you have to promote one of the replicas. This is a manual process, although it can be automated with external tools (e.g. ClusterControl). There is also no quorum mechanism, and there is no support for fencing failed master instances in MySQL replication. Unfortunately, this may lead to serious issues in distributed environments: if you promote a new master while your old one comes back online, you may end up writing to two nodes, creating data drift and causing serious data consistency issues.
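MySQL itself will not stop this from happening, so a common minimal safeguard (a sketch only, not a substitute for proper STONITH) is to keep every server except the intended master in super_read_only mode and only open the promoted replica for writes once the old master is confirmed down or fenced off:
-- On every replica, and on the old master if it ever comes back online:
SET GLOBAL super_read_only = ON;
-- On the replica being promoted, once the old master is confirmed dead or fenced:
STOP SLAVE;
RESET SLAVE ALL;              -- discard the old replication configuration
SET GLOBAL super_read_only = OFF;
SET GLOBAL read_only = OFF;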
We’ll look at some examples later in this post that show you how to detect network splits and implement STONITH or some other fencing mechanism for your MySQL replication setup.
Galera Cluster
We saw in the previous section that MySQL replication lacks fencing and quorum support; this is where Galera Cluster shines. It has quorum support built in, and it also has a fencing mechanism that prevents partitioned nodes from accepting writes. This makes Galera Cluster more suitable than replication for multi-datacenter setups. Galera Cluster also supports multiple writers and is able to resolve write conflicts. You are therefore not limited to a single writer in a multi-datacenter setup; it is possible to have a writer in every datacenter, which reduces the latency between your application and database tier. It does not speed up writes, as every write still has to be sent to every Galera node for certification, but it is still easier than sending writes from all application servers across the WAN to one single remote master.
As good as Galera is, it is not always the best choice for all workloads. Galera is not a drop-in replacement for MySQL/InnoDB. It shares common features with “normal” MySQL: it uses InnoDB as the storage engine and it contains the entire dataset on every node, which makes JOINs feasible. Still, some of the performance characteristics of Galera (like the performance of writes, which is affected by network latency) differ from what you’d expect from replication setups. Maintenance looks different too: schema change handling works slightly differently. Some schema designs are not optimal: if you have hotspots in your tables, like frequently updated counters, this may lead to performance issues. There is also a difference in best practices related to batch processing: instead of executing queries in large transactions, you want your transactions to be small.
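To give a feel for what a Galera node’s configuration looks like, here is a minimal my.cnf sketch for a three-node cluster (the library path, addresses and names are placeholders, and exact settings vary between Galera distributions and versions):
[mysqld]
binlog_format=ROW                            # Galera requires row-based replication
default_storage_engine=InnoDB
innodb_autoinc_lock_mode=2
wsrep_on=ON
wsrep_provider=/usr/lib/galera/libgalera_smm.so
wsrep_cluster_name=my_galera_cluster
wsrep_cluster_address=gcomm://10.0.0.11,10.0.0.12,10.0.0.13
wsrep_node_name=node1
wsrep_node_address=10.0.0.11
wsrep_sst_method=xtrabackup-v2               # state snapshot transfer method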
Proxy tier
It is very hard and cumbersome to build a highly available setup without proxies. Sure, you can write code in your application to keep track of database instances, blacklist unhealthy ones, keep track of the writeable master(s), and so on. But this is much more complex than just sending traffic to a single endpoint – which is where a proxy comes in. ClusterControl allows you to deploy ProxySQL, HAProxy and MaxScale. We will give some examples using ProxySQL, as it gives us good flexibility in controlling database traffic.
ProxySQL can be deployed in a couple of ways. For starters, it can be deployed on separate hosts, with Keepalived providing a Virtual IP. The Virtual IP is moved around should one of the ProxySQL instances fail. In the cloud, this setup can be problematic, as adding an IP to the interface usually is not enough. You would have to modify the Keepalived configuration and scripts to work with an elastic IP (or static IP, or however your cloud provider calls it), and then use the cloud API or CLI to relocate this IP address to another host. For this reason, we’d suggest collocating ProxySQL with the application. Each application server would be configured to connect to the local ProxySQL, using Unix sockets. As ProxySQL uses an angel process, a crashed ProxySQL can be detected and restarted within a second. In case of a hardware crash, that particular application server goes down along with ProxySQL, while the remaining application servers can still access their respective local ProxySQL instances. This particular setup has additional benefits. Security: ProxySQL, as of version 1.4.8, does not support client-side SSL; it can only set up SSL connections between ProxySQL and the backend. Collocating ProxySQL on the application host and using Unix sockets is a good workaround. ProxySQL also has the ability to cache queries, and if you are going to use this feature, it makes sense to keep it as close to the application as possible to reduce latency. We would suggest using this pattern to deploy ProxySQL.
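To make this more concrete, here is a rough sketch of configuring a collocated ProxySQL instance through its admin interface (hostnames, hostgroup numbers and credentials are placeholders; the admin interface listens on port 6032 by default, while applications connect on 6033 or through a Unix socket if one is listed in mysql-interfaces):
-- Register the backends: hostgroup 10 = writer, hostgroup 20 = readers
INSERT INTO mysql_servers (hostgroup_id, hostname, port) VALUES (10, '10.0.0.10', 3306);
INSERT INTO mysql_servers (hostgroup_id, hostname, port) VALUES (20, '10.0.0.11', 3306);
INSERT INTO mysql_users (username, password, default_hostgroup) VALUES ('app_user', 'app_pass', 10);
-- Simple read/write split: SELECT ... FOR UPDATE stays on the writer, other SELECTs go to the readers
INSERT INTO mysql_query_rules (rule_id, active, match_digest, destination_hostgroup, apply)
  VALUES (1, 1, '^SELECT .* FOR UPDATE', 10, 1), (2, 1, '^SELECT', 20, 1);
LOAD MYSQL SERVERS TO RUNTIME;     SAVE MYSQL SERVERS TO DISK;
LOAD MYSQL USERS TO RUNTIME;       SAVE MYSQL USERS TO DISK;
LOAD MYSQL QUERY RULES TO RUNTIME; SAVE MYSQL QUERY RULES TO DISK;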
Typical setups
Let’s take a look at examples of highly available setups.
Single datacenter, MySQL replication
The assumption here is that there are two separate zones within the datacenter. Each zone has redundant and separate power, networking and connectivity to reduce the likelihood of two zones failing simultaneously. It is possible to set up a replication topology spanning both zones.
Here we use ClusterControl to manage the failover. To solve the split-brain scenario between availability zones, we collocate the active ClusterControl with the master. We also blacklist slaves in the other availability zone to make sure that automated failover won’t result in two masters being available.
Multiple datacenters, MySQL replication
In this example we use three datacenters and Orchestrator/Raft for quorum calculation. You might have to write your own scripts to implement STONITH if the master is in the partitioned segment of the infrastructure. ClusterControl is used for node recovery and management functions.
Multiple datacenters, Galera Cluster
In this case we use three datacenters with a Galera arbitrator in the third one; this makes it possible to handle a whole-datacenter failure and reduces the risk of network partitioning, as the third datacenter can be used as a relay.
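Since the arbitrator (garbd) only takes part in quorum voting and stores no data, it is cheap to run in the third site. A minimal invocation might look like the following sketch (the cluster name and node addresses are placeholders):
# Join the existing cluster as an arbitrator only; no data is replicated to this host
garbd --group my_galera_cluster \
      --address "gcomm://10.0.0.11:4567,10.0.0.12:4567" \
      --daemon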
In this blog post, we’ll look at options for migrating database charsets to utf8mb4.
Migrating charsets, in my opinion, is one of the most tedious tasks in a DBA’s life. There are so many things involved that can screw up our data that making it work is always hard. Sometimes what seems like a trivial task can easily become a nightmare and keep us working for longer than expected.
I’ve recently worked on a case that challenged me with lots of tests due to some existing schema designs that made InnoDB suffer. I’ve decided to write this post to put together a definitive guide for performing a charset conversion with minimal downtime and pain.
First disclosure: I can’t emphasize enough that you need to always backup your data. If something goes wrong, you can always roll things back by keeping a healthy set of backups.
Second disclosure: A backup can’t be considered a good backup until you test it, so I can’t emphasize enough that running regular backups and also performing regular restore tests is a must-do task for staying on the safe side.
Third and last disclosure: I’m not pretending to present the best or only way to do this exercise. This is the way I consider easiest and least painful to perform a charset conversion with minimal downtime.
My approach involves at least one slave for failover and logical/physical backup operations to make sure that data is loaded properly using the right charset.
In this case, we are moving from latin1 (default until MySQL 8.0.0) to utf8mb4 (new default from 8.0.1). In this post, Lefred refers to this change and some safety checks for upgrading. For our change, an important thing to consider: Latin1 charset stores one byte per character, while utf8mb4 can store up to four bytes per character. This change definitely impacts the disk usage, but also makes us hit some limits that I describe later in the plan.
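A quick way to see the difference for yourself, assuming the client connection itself uses utf8mb4, is to compare character counts with byte counts:
-- CHAR_LENGTH() counts characters, LENGTH() counts bytes
SELECT CHAR_LENGTH('héllo') AS chars, LENGTH('héllo') AS bytes;   -- 5 chars, 6 bytes ('é' needs two)
SELECT CHAR_LENGTH('😀') AS chars, LENGTH('😀') AS bytes;          -- 1 char, 4 bytes (not representable in latin1 at all)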
So let’s put our hands into action. First, let’s create a slave using a fresh (non-locking) backup. Remember that these operations are designed to minimize downtime and reduce any potential impact on our production server.
If you already have a slave that can act as a master replacement then you can skip this section. In our source server, configure binlog_format and flush logs to start with fresh binary logs:
set global binlog_format=MIXED;
flush logs;
Start a streaming backup using Percona Xtrabackup through netcat in the destination server:
nc -l 9999 | cat - > /dest/folder/backup.tar
and in our source server:
innobackupex --stream=tar ./ | nc dest_server_ip 9999
Once the backup is done, untar and restore the backup. Then set up the slave:
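Here is a rough sketch of that restore, with paths, credentials and binary log coordinates as placeholders (take the real coordinates from the xtrabackup_binlog_info file included in the backup):
# On the destination server; the -i flag is needed when extracting xtrabackup's tar stream
tar -xif /dest/folder/backup.tar -C /var/lib/mysql
innobackupex --apply-log /var/lib/mysql
chown -R mysql:mysql /var/lib/mysql
systemctl start mysql
# Point the new slave at the source using the coordinates recorded by xtrabackup
mysql -e "CHANGE MASTER TO MASTER_HOST='source_server_ip', MASTER_USER='repl',
          MASTER_PASSWORD='repl_password', MASTER_LOG_FILE='mysql-bin.000123',
          MASTER_LOG_POS=4; START SLAVE;"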
Now that we have the slave ready, we prepare our dataset by running two mysqldump processes so we have data and schemas in separate files. You can also run this operation using MyDumper or mysqlpump, but I will keep it easy:
STOP SLAVE;
SHOW SLAVE STATUS;
Write down this output, as it may be needed later:
mysqldump --skip-set-charset --no-data --databases `mysql --skip-column-names -e "SELECT GROUP_CONCAT(schema_name SEPARATOR ' ') FROM information_schema.schemata WHERE schema_name NOT IN ('mysql','performance_schema','information_schema');"` > schema.sql
mysqldump --skip-set-charset -n -t --databases `mysql --skip-column-names -e "SELECT GROUP_CONCAT(schema_name SEPARATOR ' ') FROM information_schema.schemata WHERE schema_name NOT IN ('mysql','performance_schema','information_schema');"` > data.sql
Notice that I’m passing a command as an argument to –databases to dump all databases but mysql, performance_schema and information_schema (hack stolen from this post, with credit to Ronald Bradford). It is very important to keep the replication stopped, as we will resume replication after fully converting our charset.
Now we have to convert our data to utf8mb4. This is easy, as we just need to touch the schema.sql file by running a few commands:
sed -i -e "s/DEFAULT CHARACTER SET latin1/DEFAULT CHARACTER SET utf8mb4/g" schema.sql
Can this be a one-liner? Yes, but I’m not a good basher. 🙂
Now we are ready to restore our data using new encoding:
mysql -e "set global innodb_large_prefix=1;"
mysql < schema.sql
mysql < data.sql
Notice I’ve enabled the variable innodb_large_prefix. This is important because InnoDB limits index prefixes to 767 bytes by default. If you have an index based on a varchar(255) column, you will get an error because the new charset exceeds this limit (255 characters at up to four bytes each is 1020 bytes) unless you limit the index prefix. To avoid issues during the data load, we enable this variable to extend the limit to 3072 bytes.
Finally, let’s configure our server and restart it to make sure to set new defaults properly. In the my.cnf file, add:
[client]
default-character-set=utf8mb4
[mysqld]
skip-slave-start
character-set-server=utf8mb4
collation-server=utf8mb4_unicode_ci
innodb_large_prefix=1
Let’s resume replication after the restart, and make sure everything is ok:
START SLAVE;
SHOW SLAVE STATUS;
Ok, at this point we should be fine and our data should be already converted to utf8mb4. So far so good. The next step is to failover applications to use the new server, and rebuild the old server using a fresh backup using xtrabackup as described above.
There are a few things we need to consider before converting this slave into the master:
Make sure you have properly configured your applications. Charset and collation values can be set at the session level, so if your connection driver sets a different charset, you may end up mixing encodings in your data.
Make sure the new slave is powerful enough to handle traffic from the master.
Test everything before failing over production applications. Going from Latin1 to utf8mb4 should be straightforward, as utf8mb4 includes all the characters in Latin1. But let’s face it, things can go wrong and we are trying to avoid surprises.
Last but not least, all these procedures were done on a relatively small/medium-sized dataset (around 600 GB). This conversion (done via logical backups) is more difficult when talking about big databases (i.e., on the order of TBs). In these cases, the procedure helps but might not be good enough due to time restrictions (imagine loading a 1 TB table from a logical dump; it takes ages). If you happen to face such a conversion, here is a short, high-level plan:
Convert only smaller tables in the slave (i.e., those smaller than 500MB) following the same procedure. Make sure to exclude big tables from the dump using the --ignore-table parameter in mysqldump (given once per table).
Convert bigger tables via alter table, as follows:
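The exact statements depend on your schema, but the usual pattern is the double conversion discussed below: change each text column to a binary type so the stored bytes are left untouched, then change it back with the utf8mb4 charset. A sketch, with the table name, column name and lengths as placeholders (remember that MODIFY requires restating attributes such as NOT NULL and DEFAULT):
-- Step 1: make the column binary so MySQL does not transcode the existing bytes
ALTER TABLE big_table MODIFY description VARBINARY(255);
-- Step 2: re-type the column as utf8mb4, reinterpreting those same bytes in the new charset
ALTER TABLE big_table MODIFY description VARCHAR(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;
-- Step 3: change the table default so any new columns pick up the new charset
ALTER TABLE big_table DEFAULT CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;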
Once everything is finished, you can resume replication. Notice you can do dump/conversion/restore in parallel with the altering of bigger tables, which should reduce the time required for conversion.
It’s important to understand why we need the double conversion from latin1 to varbinary to utf8mb4. This post from Marco Tusa largely explains this.
Conclusion
I wrote this guide from my experience working with these types of projects. If you Google a bit, you’ll find a lot of resources and different solutions for this kind of work. What I’ve tried to present here is a guide to help you deal with these projects. Normally, we have to perform these changes on existing datasets that are sometimes big enough to prevent the work from being done via ALTER TABLE commands. Hopefully, you find this useful!
Francisco has been working with MySQL since 2006 and has worked for several companies, in industries ranging from health care to gaming. Over the last six years he has worked as a remote DBA and database consultant, which has helped him acquire a broad set of technical and multicultural skills.
He lives in La Plata, Argentina, and in his free time he likes to play football, spend time with family and friends, and cook.