If you’ve seen our excellent series on the different species of wood, you can likely identify the boards most commonly used in furniture and homebuilding. But do you know what an actual Poplar, Walnut, or Zebrawood tree looks like? Could you actually draw one if you were playing some forestry version of Pictionary?
Qi’ra (Emilia Clarke) in Solo: A Star Wars Story.Image: Lucasfilm
Solo: A Star Wars Story offers a shadier look into the period between Revenge of the Sith and A New Hope, removed from stories of the Rebellion vs. the Empire, and Jedi vs. Sith… mostly. But if you’ve only kept up with the Star Wars movies, and not the entirety of the multimedia behemoth that the franchise has become, one of Solo’s greatest surprises is likely also its most baffling. If you were confused, then here’s what you need to know.
Late into the events of Solo, Bantha poodoo has hit the fan in the most spectacular of manners for Han and his friends. Their attempt to cheat criminal overlord Dryden Vos out of the deal they’d cut with him goes sideways when Han’s would-be mentor Beckett double-crosses the group, and Dryden himself is killed in a scrap with Qi’ra and Han. As Han chases after Beckett, Qi’ra, although now seemingly free of the grip the Crimson Dawn crime syndicate has on her, chooses to return to the fold rather than run off with Han. She opens a communication channel to the Dawn’s high command—revealing the syndicate is run by a very familiar face:
Darth Maul, before he gets cut in two during The Phantom Menace.Image: Lucasfilm
Darth Maul himself. Yes, if you’ve only kept up with the movies, Maul was last seen getting lopped in half by a young Obi-Wan Kenobi during the climax of The Phantom Menace, seemingly plummeting to his death on Naboo. But Maul survived that not-so-fatal blow, and has actually been around for quite a while again in Star Wars canon, even before Disney purchased Lucasfilm. In fact, to know more about how Maul got from The Phantom Menace to Solo (and from half a Zabrak to a whole person again, more or less), we need to head into the world of Star Wars animation.
Ventress and Opress make for an uneasy alliance in Star Wars: Clone Wars.Image: Lucasfilm Animation
The Return of Maul
Maul’s re-emergence into Star Wars canon begins in the Clone Wars TV series’ fourth season, which introduced Maul’s brother, the exquisitely named Savage Opress. Savage and Maul (and their other brother, who, I kid you not, is named Feral) grew up on the planet Dathomir, home to the fabled, witchy Force users known as the Nightsisters. While Maul was plucked from the world by Darth Sidious to become his apprentice, Savage was left behind—until the Clone Wars, when the dark assassin Asajj Ventress (a former Nightsister herself) headed there after being betrayed by Count Dooku.
Ventress chose Savage as her apprentice in a quest for revenge against Dooku, but after even more betrayals—lots of betrayal among Dark Side users, as you’d expect—Savage struck out on his own, operating on information given to him by the leader of the Nightsisters, Mother Talzin, that Maul had survived his battle on Naboo and was living on the junkyard world of Lotho Minor.
A very spider-y, crazed Maul living on the world of Lotho Minor.Image: Lucasfilm Animation
That indeed proved to be the case—having survived his duel with Obi-Wan by feeding on the Dark Side energies of his own rage (at being chopped in half!), Maul managed to save his upper body from destruction and drag himself to a trash container, which was then dumped on Lotho Minor. In the years between his “death” on Naboo and Savage finding him—during which he cobbled together a set of spidery mechanical lower limbs—Maul’s mind fractured, broken not just by his defeat but by Sidious’ abandonment of him. All that was left was a singular, all-encompassing thirst for vengeance against the Jedi and Obi-Wan, which Savage and Talzin were all too willing to stoke.
After bringing him back to Dathomir and using Nightsister magic to restore his mind (and replace Maul’s spidery legs with more humanoid ones), Savage found himself apprenticed to Maul in a quest to destroy Obi-Wan, one that eventually culminated in a duel between Maul, his hated enemy, and the Jedi Knight Adi Gallia. Gallia fell in battle, but Obi-Wan turned the tide on the Zabrak brothers, leading to their retreat. Maul immediately hatched more plans to get at Kenobi.
Maul executes Vizla and takes rulership on Mandalore.Image: Lucasfilm Animation
The Lord of Mandalore
Maul and Savage went big for their next grab at power, allying themselves with Pre Vizla of the Mandalorian Death Watch in a bid to overthrow the peaceful, diplomatic rule of Mandalore’s current leader, Satine Kryze (who was also once a potential paramour for Obi-Wan, before he set aside his attachment to her). Using an army of Black Sun gangsters to bolster the Death Watch’s forces, the trio formed the Shadow Collective and staged a coup on Mandalore—but Vizla betrayed Maul (gasp!) and tried to take rule of the planet for himself. The resulting duel ended with Maul claiming Vizla’s weapon, the legendary Mandalorian Darksaber, beheading the Death Watch leader, and taking his place as ruler of Mandalore.
Using Satine as a hostage to draw Obi-Wan out alone, Maul enacted a fraction of his vengeance against the Jedi by killing her in front of him—but his victory over Obi-Wan was short-lived. The Death Watch forces that refused to pledge loyalty to Maul after Vizla’s death freed Obi-Wan from prison, and, worse, Maul’s re-emergence on such a major stage attracted the attention of a far bigger threat: his former master, Darth Sidious.
Sidious came to Mandalore to ensure that the Sith’s rule of two—currently taking the form of Sidious and Dooku—stayed true, dueling both Maul and Savage and ultimately killing the latter. Maul was kept alive by Sidious, however, who planned to use him to draw out Talzin, so he could eliminate the last of any meddlesome threats standing in the way of his imminent ascendance to rule over the entire galaxy.
Maul and Mother Talzin—in Dooku’s body—make their last stand against Sidious and Grievous in Darth Maul: Son of Dathomir.Image: Juan Frigeri (Dark Horse)
The Rise of the Empire and Solo
Although Clone Wars came to an end before Sidious’ plans for Maul could be shown on screen, the gaps were filled in by the Darth Maul: Son of Dathomir comic (one of the last Star Wars comics published by Dark Horse before Disney’s purchase of Lucasfilm led to the rights transferring over to Marvel, adapted from a scrapped Clone Wars storyline) and the Ahsoka novel. The former sees Sidious successfully eradicate Talzin and the Nightsisters, forcing Maul to flee once again to his stronghold on Mandalore; in the latter, Ahsoka Tano joins Republic forces in the final days of the Clone Wars to help liberate the planet from Maul and his Shadow Collective, though Maul escapes.
It is after these two stories that Maul’s Solo appearance lies. Solo is largely set in the earlier half of the 20-year gap between Revenge of the Sith and A New Hope, making Maul’s move back into the world of organized crime in the Crimson Dawn (itself based on Dathomir, according to his call with Qi’ra in the movie, so Maul is back on his homeworld) an unsurprising leap, given his history of working with gangsters and criminals during the formation of the Shadow Collective. Although we don’t know for sure, perhaps the Dawn is made up of what was left of Maul’s loyal forces from the Collective, which would explain why Maul is its overarching leader. Now that Maul has returned to the cinematic portion of the Star Wars universe, it’s likely that these gaps will be filled in through comics and books, giving us the exact details of how the Dawn and Maul rose to prominence in the galactic underworld.
But Maul’s story doesn’t end with Solo. In fact, we’ve already seen its conclusion, which takes place years after the events of the movie, in the recently concluded Star Wars Rebels TV series.
Maul encounters the young Jedi Ezra Bridger on Malachor.Image: Disney XD
The Final End of Darth Maul
Maul returned to TV screens during the climax of Rebels’ second season, set just a handful of years before the events of A New Hope. Fledgling Jedi-in-training Ezra Bridger encountered him on the planet Malachor, as the former Sith searched for Dark Side artifacts he could use to bring down Sidious and his Empire. It was suggested he’d been there for some time, implying that either Maul had left the Crimson Dawn in the intervening years or that the crime syndicate had become severely diminished.
Maul being Maul, he betrayed Ezra and fled Malachor, only to capture Ezra and his Rebel friends several months later in an attempt to gain access to an old Jedi Holocron kept by Ezra’s master, Kanan. Combining that Holocron with a Sith one he had recovered on Malachor, Maul was granted a vision confirming that his hated foe Obi-Wan was alive and hiding on a planet with twin suns, setting up a final conflict that, for Maul, had been decades in the making.
Maul eventually located Obi-Wan on Tatooine, but there his quest for vengeance came to an unexpected end… for him. Once Maul figured out that Kenobi’s exile on the desert world was not meant to hide him from the eye of the Empire but to protect someone capable of bringing it down, the aged Jedi cut Maul down for good. Though unsuccessful in his quest to kill Obi-Wan, Maul used his dying words to ask whether Kenobi’s charge was the fated chosen one. Kenobi replied in the affirmative, and Maul died knowing that at least someone would get the vengeance against Darth Sidious he had so desperately craved.
Although we’ve seen a great deal, Solo proves there is still so much about the life and times of Maul—previously nothing more than a cool-looking henchman who appeared in a single movie—left to explore. A lot of the gaps have already been filled in thanks to Clone Wars and Rebels, but his surprising return in the latest Star Wars film means there’s still a lot more we have to learn about the former Dark Lord of the Sith.
We manage several hundred MySQL servers, and we carefully benchmark and build custom database infrastructure operations for performance, scalability, availability, and reliability. But what if MySQL could automatically size the system variables innodb_buffer_pool_size, innodb_log_file_size, and innodb_flush_method? These are the top three system variables we consider tuning for MySQL performance, and when we first read about this feature we got super excited, so we did some research and decided to write this post.
What was our first reaction when we first read about innodb_dedicated_server?
Wow, that will be awesome! When you manage several hundred MySQL instances, a feature like this can really improve efficiency and DBA operations governance.
Now, let us explain what we found:
How does the innodb_dedicated_server system variable in MySQL 8.0 size the following variables?
innodb_buffer_pool_size:
Server memory < 1G: 128M (the default value when innodb_dedicated_server is disabled / OFF)
Server memory <= 4G: detected physical RAM * 0.5
Server memory > 4G: detected physical RAM * 0.75
innodb_log_file_size:
< 1G: 48M (the default value when innodb_dedicated_server is OFF)
<= 4G: 128M
<= 8G: 512M
<= 16G: 1024M
> 16G: 2G
innodb_flush_method:
Set to O_DIRECT_NO_FSYNC if that option is available on the system; otherwise the default InnoDB flush method is used.
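To make the thresholds concrete, here is a small shell sketch of our own (purely illustrative, not part of MySQL) that reproduces the sizing rules above, taking the detected server memory in megabytes:

```shell
#!/bin/sh
# Illustrative only: mimics the documented innodb_dedicated_server rules.
# Input: detected server memory in MB. Output: suggested size in MB.

buffer_pool_mb() {
  ram=$1
  if [ "$ram" -lt 1024 ]; then
    echo 128                    # default 128M when below 1G
  elif [ "$ram" -le 4096 ]; then
    echo $(( ram / 2 ))         # RAM * 0.5
  else
    echo $(( ram * 3 / 4 ))     # RAM * 0.75
  fi
}

log_file_mb() {
  ram=$1
  if [ "$ram" -lt 1024 ]; then echo 48      # default 48M when below 1G
  elif [ "$ram" -le 4096 ]; then echo 128
  elif [ "$ram" -le 8192 ]; then echo 512
  elif [ "$ram" -le 16384 ]; then echo 1024
  else echo 2048
  fi
}

buffer_pool_mb 8192   # a dedicated 8G server gets a 6144M buffer pool
log_file_mb 8192      # and a 512M redo log file
```

So on a dedicated 8G server, for example, you would end up with roughly a 6G buffer pool and a 512M redo log file without touching my.cnf yourself.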
Our first impression of the innodb_dedicated_server system variable in MySQL 8.0 is positive: it will definitely deliver much better performance than the stock defaults. This new feature configures the system variables mentioned above more sensibly and improves DBA productivity. Up to MySQL 5.7, the default settings always presumed a server with just 512M of RAM.
Are we going to follow this in our daily DBA checklists ?
Not really. We are a very conservative team when it comes to implementing new features immediately in our customers’ critical database infrastructure, and we are wary of isolated issues caused by auto-sizing of MySQL/InnoDB memory structures. Let us explain why we will not be using this feature immediately for our MySQL 8.0 customers:
We carefully size InnoDB memory parameters based on factors like database size, transaction complexity, and archiving policies, so we prefer hands-on, manual sizing of the system variables innodb_buffer_pool_size, innodb_log_file_size, and innodb_flush_method.
Capacity planning and sizing – we are always wary of over- or under-sizing our database infrastructure. Reliability of database infrastructure operations is critical for us, and we have a dedicated in-house team to monitor and trend those operations and system resource consumption.
P.S. – innodb_dedicated_server is a relatively new feature. We are confident the MySQL engineering team will keep improving it, and our perspective will change along with it; when we start seriously considering it for our customers’ production infrastructure, we will be sure to blog about it. Technology keeps changing for the good, and we adapt to the change!
Photo: Suzanne Kreiter/The Boston Globe via Getty Images
It wasn’t the most authentic of settings, but my maiden voyage to the land of Mexican elotes took place via a Chicago White Sox game. Two thoughts stuck: “These greedy bastards are charging $6 for a small tray of corn?!” More consequentially: “Boy, sweet corn with mayo + butter + cheese + lime + chili is mighty delicious.”
Like baseball, fireflies, and small-town parades, elotes are a harbinger of summer, a simple but impressive and indulgent showcase for sweet corn. It’s an all-dressed-up version of corn-on-the-cob, where a helpful slather of mayonnaise and butter helps cheese and chili powder adhere to the kernels. The overpriced ballpark elotes left such an impression, the next day I sought out a more genuine version in Chicago.
I found it in the city’s Little Village neighborhood, the residential and economic heart of Chicago’s Mexican populace. In a city where regulation and red tape all but suppress street food culture, the only carts I encountered were for elotes and tamales. At several of these elotes stands, the ritual was the same: The vendor would remove a corn-on-the-cob from a steaming cooler. Holding the cob upright by its stick, she would slice vertically, the kernels landing onto a plastic mat. She would fold the plastic mat in half and dump its contents into a styrofoam cup. The vendor would then juice a lime over the corn, scrape off a spatula’s worth of mayonnaise against the cup, squeeze on imitation butter, and spoon the feta-like cotija cheese and chili powder on top. This set me back $2.50, though the norm is to hand over three singles and tell her to keep it. (Purists will argue elotes is served on-the-cob, while esquites is off-the-cob and pan-fried. But elotes has become the catch-all word for the dish, and the term we’ll use here.)
Whether served on the cob or off, this marriage of sweet, fat, spice, citrus, and salty cheese is dangerous and enticing, a combination that sounds more like a misprint than a carefully considered recipe. The undisputed best way to enjoy sweet corn is grilled with a pat of butter, so elotes might be seen as a next-level application, a justified gilding of the lily.
As with all dishes, but especially with elotes, achieving balance is key. While you could easily cut the corn off the cob and fold in the mayonnaise, cheese, and spices (making it easier to eat), there’s just something tactilely and visually satisfying about serving elotes on the cob (plus, it discourages you from adding too much mayonnaise, as a light slather on the cob suffices). The recipe below comes via chef Andres Padilla of Chicago’s Topolobampo, founded by Rick Bayless and recently named outstanding restaurant, the top prize of the James Beard Foundation.
In this recipe, Mexican crema (or sour cream) is employed, though you can substitute the more common mayonnaise. Boiled corn will also do, but please consider taking the extra step and grilling whole cobs over charcoal. The boiled version doesn’t even compare.
Elote asado (charcoal-grilled corn with cream, cheese and chile)
Serves six; recipe courtesy Topolobampo in Chicago
Photo: Topolobampo
6 ears fresh sweet corn, in their husks
3 Tbsp. unsalted butter, melted
1/2 cup thick cream or commercial sour cream mixed with a little milk or cream
1/3 cup crumbled Mexican queso anejo or queso fresco, or cheese like Parmesan, feta, cotija, or farmer’s cheese
1 Tbsp. hot powdered chile (ground chile de arbol, guajillo, or New Mexico chile)
Limes
1. About an hour before serving, place the ears of corn in a deep bowl, cover with cold water and weight with a plate to keep them submerged. Light your charcoal fire and let it burn until the bed of coals is medium-hot; adjust the grill four inches above the fire.
2. Lay the corn on the grill and roast for 15 to 20 minutes, turning frequently, until the outer leaves are blackened. Remove, let cool several minutes, then remove the husks and silk. About 10 minutes before serving, brush the corn with melted butter, return to the grill and turn frequently until nicely browned. Serve right away, passing the cream, cheese, and powdered chile for your guests to use to their own liking. Serve with a wedge of lime.
Bonus variation: Esquites (as served in Toluca and Mexico City)
Cut the kernels from six cobs, then fry in three tablespoons lard, vegetable oil, or butter, with hot green chile to taste (seeded and sliced) and two or three tablespoons chopped epazote. Season with salt.
Getting rid of weeds is a pain. Pluck them, and they come back. Kill them with chemicals, and the ground is poisoned for other plants. These organic corn farmers demonstrate a much more reliable, chemical-free, and downright spectacular method to clear the ground – FIRE!
A health worker prepares an Ebola vaccine to administer to health workers during a vaccination campaign in Mbandaka, Congo.
The latest Ebola outbreak has claimed yet another life in the Democratic Republic of Congo, raising the death toll to 12—though the actual number may be as high as 27. Using a strategy known as “ring vaccination,” officials began vaccinating doctors and other frontline healthcare workers in Bikoro, the town where the outbreak was first declared in early May.
As of today, the DRC says there are about 56 cases of hemorrhagic fever (a primary symptom of the disease), of which 35 are confirmed Ebola cases, 13 are probable, and eight are suspected. This is the third Ebola outbreak in the DRC in the past five years and the ninth since 1976, when the disease was first identified. The Democratic Republic of Congo is located in the heart of sub-Saharan Africa and is home to nearly 79 million people.
Earlier today, DRC Health Minister Oly Ilunga traveled to Bikoro, a small market town located 78 miles (126 km) south of Mbandaka, to oversee the vaccinations of at least 10 people. It was here that the outbreak was first declared three weeks ago and at least five Ebola deaths have occurred so far, according to the Associated Press. Officials are employing a strategy known as ring vaccinations in which the people who are most likely to be infected are treated. Today’s vaccinations included three doctors at Bikoro Hospital, two health experts, two nurses, a woman’s community representative, and a pygmy representative. The drug used was the experimental rVSV-ZEBOV, and all vaccinations were voluntary.
Ring vaccinations started on May 21 in Mbandaka, with 7,560 doses ready for immediate use, according to the World Health Organization. The drugs were donated by its developer, Merck, while Gavi, the Vaccine Alliance has contributed $1 million towards operational costs. Ground teams are currently searching for and following up with all known contacts, of which 600 have been identified to date.
“Implementing the Ebola ring vaccination is a complex procedure,” said Matshidiso Moeti, WHO Regional Director for Africa, in a statement. “The vaccines need to be stored at a temperature of minus 60 to minus 80 degrees centigrade and so transporting them to and storing them in affected areas is a major challenge.”
On May 18, WHO said the current outbreak is not yet an international emergency, but admitted a “vigorous” response was still necessary, both on the ground and in terms of funding its $56.8 million Ebola strategic response plan. WHO is currently working to prevent the disease from crossing the DRC’s nine borders, CBS reports. Also, several schools have been shut down in the Iboko health zone as a precaution. The next few days and weeks are critical in ensuring the outbreak doesn’t escalate any further.
The current outbreak involves the Zaire Ebola virus, which is known to be fatal in 60 to 90 percent of cases. Ebola spreads from person to person via contact with bodily fluids, but it often makes the jump to humans from wild animals such as bats and monkeys.
CEO: I can’t take you seriously because there’s a typo in your slide deck. You’ve lost all credibility because of your sloppy presentation. And don’t mention my wife in your slide deck. Dilbert: That’s “wi-fi.”
Woman: I need help persuading your boss to bless my project. Should I use facts and logic? Dilbert: No, he hates that stuff. Woman: Maybe I could appeal to his better angels? Dilbert: His better angels wear noise-canceling headphones. Woman: Okay, fine. I’ll just appeal to his self-interest. Dilbert: It would be in his best interest to avoid people like you. Woman: What do you suggest? Dilbert: We’ve had good outcomes using his ignorance and fear. Woman: Sign this or else a blockchain drone will kill you in your sleep. Boss: Where’s my pen!
Percona Monitoring and Management (PMM) is a free and open-source platform for managing and monitoring MySQL® and MongoDB® performance. You can run PMM in your own environment for maximum security and reliability. It provides thorough time-based analysis for MySQL® and MongoDB® servers to ensure that your data works as efficiently as possible.
In PMM Release 1.11.0, we deliver the following changes:
Configurable MySQL Slow Log Rotation – enable or disable rotation, and specify how many files to keep on disk
Predictable Graphs – we’ve updated our formulas to use aggregation functions over time for more reliable graphs
MySQL Exporter Parsing of my.cnf – we’ve improved how we read my.cnf
Annotation improvements – passing multiple strings now results in a single annotation being written
This release includes 1 new feature & improvement and 9 bug fixes.
MySQL Slow Log Rotation Improvements
We spent some time this release going over how we handle MySQL’s Slow Log rotation logic. Query Analytics requires that slow logging be enabled (either to file, or to PERFORMANCE_SCHEMA) and we found that users of Percona Server for MySQL overwhelmingly choose logging to a file in order to take advantage of log_slow_verbosity which provides enhanced InnoDB Usage information. However, the challenge with MySQL’s Slow Log is that it is very verbose and thus the number one concern is disk space. PMM strives to do no harm and so MySQL Slow Log Rotation was a natural fit, but until this release we were very strict and hadn’t enabled any configuration of these parameters.
Percona Server for MySQL Users have long known about Slow Query Log Rotation and Expiration, but until now had no way of using the in-built Percona Server for MySQL feature while ensuring that PMM wasn’t missing any queries from the Slow Log during file rotation. Or perhaps your use case is that you want to do Slow Log Rotation using logrotate or some other facility. Today with Release 1.11 this is now possible!
We’ve made two significant changes:
You can now specify the number of Slow Log files to remain on disk, and let PMM handle deleting the oldest files first. Default remains unchanged – 1 Slow Log to remain on disk.
Slow Log rotation can now be disabled, for example if you want to manage rotation using logrotate or Percona Server for MySQL Slow Query Log Rotation and Expiration. Default remains unchanged – Slow Log Rotation is ON.
Number of Slow Logs Retained on Disk
Slow Logs Rotation – On or Off
You specify each of these two new controls when setting up the MySQL service. The following example specifies that 5 Slow Log files should remain on disk:
pmm-admin add mysql ... --retain-slow-logs=5
While the following example specifies that Slow Log rotation is to be disabled (flag value of false), with the assumption that you will perform your own Slow Log Rotation:
pmm-admin add mysql ... --slow-log-rotation=false
We don’t currently support modifying option parameters for an existing service definition. This means you must remove, then re-add the service and include the new options.
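As a sketch of that remove-and-re-add flow (using the flags shown above, and pmm-admin rm as the removal command), handing rotation over to logrotate on an existing MySQL service would look something like:

```shell
# Remove the existing MySQL service definition...
pmm-admin rm mysql
# ...then re-add it with PMM's own Slow Log rotation disabled, leaving
# rotation to logrotate or Percona Server's Slow Query Log Expiration.
pmm-admin add mysql --slow-log-rotation=false
```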
We’re including a logrotate script in this post to get you started; it is designed to keep 30 copies of Slow Logs at 1GB each. Note that you’ll need to update the Slow Log location, and ensure that a MySQL user account with the SUPER and RELOAD privileges is used for this script to execute successfully.
Example logrotate
/var/mysql/mysql-slow.log {
nocompress
create 660 mysql mysql
size 1G
dateext
missingok
notifempty
sharedscripts
postrotate
/bin/mysql -e 'SELECT @@global.long_query_time INTO @LQT_SAVE; SET GLOBAL long_query_time=2000; SELECT SLEEP(2); FLUSH SLOW LOGS; SELECT SLEEP(2); SET GLOBAL long_query_time=@LQT_SAVE;'
endscript
rotate 30
}
Predictable Graphs
We’ve updated the logic on four dashboards to better handle predictability and also to allow zooming to look at shorter time ranges. For example, refreshing PXC/Galera graphs prior to 1.11 led to graphs spiking at different points during the metric series. We’ve reviewed each of these graphs and their corresponding queries and added in <aggregation>_over_time() functions so that graphs display a consistent view of the metric series. This improves your ability to drill in on the dashboards so that no matter how short your time range, you will still observe the same spikes and troughs in your metric series. The four dashboards affected by this improvement are:
Home Dashboard
PXC/Galera Graphs Dashboard
MySQL Overview Dashboard
MySQL InnoDB Metrics Dashboard
MySQL Exporter parsing of my.cnf
In earlier releases, the MySQL Exporter expected only key=value type flags. It would ignore options without values (e.g. disable-auto-rehash), and could sometimes read the wrong section of the my.cnf file. We’ve updated the parsing engine to be more MySQL-compatible.
Annotation improvements
Annotations permit the display of an event on all dashboards in PMM. Users reported that passing more than one string to pmm-admin annotate would generate an error, so we updated the parsing logic so that all strings passed during annotation creation generate a single annotation event. Previously you needed to enclose your strings in quotes for them to be parsed as a single string.
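In other words, a multi-word annotation no longer needs quotes; both of the following invocations now produce a single annotation event (the annotation text here is just an example):

```shell
pmm-admin annotate "pmm-server updated"   # old style, still works
pmm-admin annotate pmm-server updated     # now also a single annotation
```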
Issues in this release
New Features & Improvements
PMM-2432 – Configurable MySQL Slow Log File Rotation
In this blog post, I will show you how easy it is to set up a Percona Monitoring and Management server on Google Compute Engine from the command line.
First off, you will need a Google account and the Cloud SDK tool installed. You also need to create a GCP (Google Cloud Platform) project and enable billing to proceed. This blog assumes you are able to authenticate and SSH into instances from the command line.
Here are the steps to install PMM server in Google Cloud Platform.
1) Create the Compute Engine instance with the following command. The example creates an Ubuntu Xenial 16.04 LTS compute instance in the us-west1-b zone with a 100GB persistent disk. For production systems it would be best to use a 500GB disk instead (size=500GB). This should be enough for default data retention settings, although your needs may vary.
jerichorivera@percona-support:~/GCE$ gcloud compute instances create pmm-server --tags pmmserver --image-family ubuntu-1604-lts --image-project ubuntu-os-cloud --machine-type n1-standard-4 --zone us-west1-b --create-disk=size=100GB,type=pd-ssd,device-name=sdb --description "PMM Server on GCP" --metadata-from-file startup-script=deploy-pmm-xenial64.sh
Created [https://www.googleapis.com/compute/v1/projects/thematic-acumen-204008/zones/us-west1-b/instances/pmm-server].
NAME ZONE MACHINE_TYPE PREEMPTIBLE INTERNAL_IP EXTERNAL_IP STATUS
pmm-server us-west1-b n1-standard-4 10.138.0.2 35.233.216.225 RUNNING
This startup script will be executed right after the compute instance is created. The script formats the persistent disk and mounts the file system; creates a custom Docker unit file that moves Docker’s root directory from /var/lib/docker to /mnt/disks/pdssd/docker; installs the Docker package; and creates the deploy.sh script.
2) Once the compute engine instance is created, SSH into the instance and check that Docker is running and that its root directory points to the desired folder.
jerichorivera@pmm-server:~$ sudo systemctl status docker
● docker.service - Docker Application Container Engine
Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
Drop-In: /etc/systemd/system/docker.service.d
└─docker.root.conf
Active: active (running) since Wed 2018-05-16 12:53:30 UTC; 45s ago
Docs: https://docs.docker.com
Main PID: 4744 (dockerd)
CGroup: /system.slice/docker.service
├─4744 /usr/bin/dockerd -H fd:// -g /mnt/disks/pdssd/docker/
└─4764 docker-containerd --config /var/run/docker/containerd/containerd.toml
May 16 12:53:30 pmm-server dockerd[4744]: time="2018-05-16T12:53:30.391566708Z" level=warning msg="Your kernel does not support swap memory limit"
May 16 12:53:30 pmm-server dockerd[4744]: time="2018-05-16T12:53:30.391638253Z" level=warning msg="Your kernel does not support cgroup rt period"
May 16 12:53:30 pmm-server dockerd[4744]: time="2018-05-16T12:53:30.391680203Z" level=warning msg="Your kernel does not support cgroup rt runtime"
May 16 12:53:30 pmm-server dockerd[4744]: time="2018-05-16T12:53:30.392913043Z" level=info msg="Loading containers: start."
May 16 12:53:30 pmm-server dockerd[4744]: time="2018-05-16T12:53:30.767048674Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
May 16 12:53:30 pmm-server dockerd[4744]: time="2018-05-16T12:53:30.847907241Z" level=info msg="Loading containers: done."
May 16 12:53:30 pmm-server dockerd[4744]: time="2018-05-16T12:53:30.875129963Z" level=info msg="Docker daemon" commit=9ee9f40 graphdriver(s)=overlay2 version=18.03.1-ce
May 16 12:53:30 pmm-server dockerd[4744]: time="2018-05-16T12:53:30.875285809Z" level=info msg="Daemon has completed initialization"
May 16 12:53:30 pmm-server dockerd[4744]: time="2018-05-16T12:53:30.884566419Z" level=info msg="API listen on /var/run/docker.sock"
May 16 12:53:30 pmm-server systemd[1]: Started Docker Application Container Engine.
3) Add your user to the docker group and make the deploy.sh script executable.
5) Finally, create a firewall rule to allow HTTP port 80 to access the PMM Server. For security reasons, we recommend that you secure your PMM server by adding a password, or limit access to it with a stricter firewall rule to specify which IP addresses can access port 80.
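As a sketch (the rule name and source range below are placeholders; the pmmserver target tag matches the tag we set when creating the instance in step 1), such a rule could be created with:

```shell
gcloud compute firewall-rules create pmm-server-http \
  --allow tcp:80 \
  --target-tags pmmserver \
  --source-ranges 203.0.113.0/24   # replace with your own admin IP range
```

Narrowing --source-ranges to your own network is the simple way to keep the PMM web interface off the public internet.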
At this point you should have a PMM Server in GCP running on a Compute Engine instance.
The next step is to install pmm-client on the database hosts and add services for monitoring.
Here I’ve launched a single standalone Percona Server 5.6 on another Compute Engine instance in the same project (thematic-acumen-204008).
jerichorivera@percona-support:~/GCE$ gcloud compute instances create mysql1 --tags mysql1 --image-family centos-7 --image-project centos-cloud --machine-type n1-standard-2 --zone us-west1-b --create-disk=size=50GB,type=pd-standard,device-name=sdb --description "MySQL1 on GCP" --metadata-from-file startup-script=compute-instance-deploy.sh
Created [https://www.googleapis.com/compute/v1/projects/thematic-acumen-204008/zones/us-west1-b/instances/mysql1].
NAME ZONE MACHINE_TYPE PREEMPTIBLE INTERNAL_IP EXTERNAL_IP STATUS
mysql1 us-west1-b n1-standard-2 10.138.0.3 35.233.187.253 RUNNING
I installed Percona Server 5.6 and pmm-client, then added services. Take note that since the PMM server and the MySQL server are in the same project and the same VPC network, we can connect directly through the INTERNAL_IP 10.138.0.2; otherwise, use the EXTERNAL_IP 35.233.216.225.
[root@mysql1 jerichorivera]# pmm-admin config --server 10.138.0.2
OK, PMM server is alive.
PMM Server | 10.138.0.2
Client Name | mysql1
Client Address | 10.138.0.3
[root@mysql1 jerichorivera]#
[root@mysql1 jerichorivera]# pmm-admin check-network
PMM Network Status
Server Address | 10.138.0.2
Client Address | 10.138.0.3
* System Time
NTP Server (0.pool.ntp.org) | 2018-05-22 06:45:47 +0000 UTC
PMM Server | 2018-05-22 06:45:47 +0000 GMT
PMM Client | 2018-05-22 06:45:47 +0000 UTC
PMM Server Time Drift | OK
PMM Client Time Drift | OK
PMM Client to PMM Server Time Drift | OK
* Connection: Client --> Server
-------------------- -------
SERVER SERVICE STATUS
-------------------- -------
Consul API OK
Prometheus API OK
Query Analytics API OK
Connection duration | 408.185µs
Request duration | 6.810709ms
Full round trip | 7.218894ms
No monitoring registered for this node identified as 'mysql1'.
[root@mysql1 jerichorivera]# pmm-admin add mysql --create-user
[linux:metrics] OK, now monitoring this system.
[mysql:metrics] OK, now monitoring MySQL metrics using DSN pmm:***@unix(/mnt/disks/disk1/data/mysql.sock)
[mysql:queries] OK, now monitoring MySQL queries from slowlog using DSN pmm:***@unix(/mnt/disks/disk1/data/mysql.sock)
[root@mysql1 jerichorivera]# pmm-admin list
pmm-admin 1.10.0
PMM Server | 10.138.0.2
Client Name | mysql1
Client Address | 10.138.0.3
Service Manager | linux-systemd
-------------- ------- ----------- -------- ----------------------------------------------- ------------------------------------------
SERVICE TYPE NAME LOCAL PORT RUNNING DATA SOURCE OPTIONS
-------------- ------- ----------- -------- ----------------------------------------------- ------------------------------------------
mysql:queries mysql1 - YES pmm:***@unix(/mnt/disks/disk1/data/mysql.sock) query_source=slowlog, query_examples=true
linux:metrics mysql1 42000 YES -
mysql:metrics mysql1 42002 YES pmm:***@unix(/mnt/disks/disk1/data/mysql.sock)
Lastly, in case you need to delete the PMM server instance, just execute the delete command below to completely remove the instance and the attached disk. Be aware that you may remove the boot disk and retain the attached persistent disk if you prefer.
jerichorivera@percona-support:~/GCE$ gcloud compute instances delete pmm-server
The following instances will be deleted. Any attached disks configured
to be auto-deleted will be deleted unless they are attached to any
other instances or the `--keep-disks` flag is given and specifies them
for keeping. Deleting a disk is irreversible and any data on the disk
will be lost.
- [pmm-server] in [us-west1-b]
Do you want to continue (Y/n)? y
Deleted [https://www.googleapis.com/compute/v1/projects/thematic-acumen-204008/zones/us-west1-b/instances/pmm-server].