A Step by Step Guide to Take your MySQL Instance to the Cloud
You have a MySQL instance? Great. You want to take it to a cloud? Nothing new. You want to do it fast, minimizing downtime / service outage? “I wish” I hear you say. Pull up a chair. Let’s have a chinwag.
Given the objective above, i.e. “I have a database server on premise and I want the data in the cloud to ‘serve’ my application”, we can go into details:
- Export the data, ideally straight to a cloud storage location 'close' to the destination (in my case, OCI of course).
- Create the MySQL cloud instance.
- Import the data into the cloud instance.
- Redirect the application to the cloud instance.
All this takes time. With a little preparation we can reduce the outage to 'just' the sum of the export and import times. This means that once the export starts, we have to put the application into "maintenance" mode, i.e. not allow any more writes until our cloud environment is available.
Depending on each cloud solution, the ‘export’ part could mean “export the data locally and then upload the data to cloud storage” which might add to the duration. Then, once the data is there, the import might allow us to read from the cloud storage, or require adjustments before the import can be fully completed.
Do you want to know more? https://mysqlserverteam.com/mysql-shell-8-0-21-speeding-up-the-dump-process/
Let’s get prepared then:
Main objective: keep application outage time down to minimum.
Preparation:
- You have an OCI account, and the OCI CLI configuration is in place.
- MySQL Shell 8.0.21 is installed on the on-premise environment.
- We create an Object Storage bucket for the data upload.
- Create our MySQL Database System.
- We create our “Endpoint” Compute instance, and install MySQL Shell 8.0.21 & MySQL Router 8.0.21 here.
- Test connectivity from PC to Object storage, from PC to Endpoint, and, in effect, from PC to MDS.
So, now for our OCI environment setup. What do I need?
Really, we just need a couple of files configured with the right info; nothing has to be installed. That said, if we do have the OCI CLI installed on our PC or similar, then we'll already have the configuration, so it's even easier. (If you don't have it installed, it's worth doing: once you've learned a few commands it helps you avoid the web console, e.g. to look up the public IP of a recently started Compute instance, or to easily start / stop these cloud environments.)
What we need is the config file from the ~/.oci directory, which contains the following info:
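As a sketch (all OCIDs, the fingerprint, and the region below are placeholders; yours will differ), the ~/.oci/config file looks like this:

```ini
[DEFAULT]
user=ocid1.user.oc1..aaaa<your-user-ocid>
fingerprint=aa:bb:cc:dd:ee:ff:00:11:22:33:44:55:66:77:88:99
tenancy=ocid1.tenancy.oc1..aaaa<your-tenancy-ocid>
region=eu-frankfurt-1
key_file=~/.oci/oci_api_key.pem
```

The `key_file` entry points at the PEM key generated in the "Required Keys and OCIDs" step.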
You’ll need the API Key stuff as mentioned in the documentation “Required Keys and OCIDs”.
Remember, this is a one-off, and it really helps your OCI interaction in the future. Just do it.
The “config” file and the PEM key will allow us to send the data straight to the OCI Object Storage bucket.
MySQL Shell 8.0.21 install on-premise.
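On an Oracle Linux / RHEL-style host this is a couple of commands (a sketch assuming the MySQL community yum repo; the release RPM version is an assumption, so check dev.mysql.com for the current one, and adjust for your distro):

```shell
# Add the MySQL community yum repo (release RPM name/version is an assumption)
sudo yum install -y https://dev.mysql.com/get/mysql80-community-release-el7-3.noarch.rpm
# Install MySQL Shell (8.0.21 or later)
sudo yum install -y mysql-shell
# Confirm the version
mysqlsh --version
```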
Make a bucket.
I did this via the OCI console.
This creates a Standard Private bucket.
Click on the bucket name that now appears in the list, to see the details.
You will need to note down the Name and Namespace.
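If you prefer the CLI to the console, the same Name and Namespace can be obtained with the OCI CLI (bucket name and compartment OCID here are examples):

```shell
# The Object Storage namespace - you'll need this later for osNamespace
oci os ns get
# Create the bucket (private, Standard tier by default)
oci os bucket create --name test-bucket --compartment-id <your-compartment-ocid>
```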
Create our MySQL Database System.
This is where the data will be uploaded to. This is also quite simple.
And hey presto. We have it.
Click on the name of the MDS system, and you’ll find that there’s an IP Address according to your VCN config. This isn’t a public IP address for security reasons.
On the left hand side, on the menu you’ll see “Endpoints”. Here we have the info that we will need for the next step.
For example, IP Address is 10.0.0.4.
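The same endpoint details can also be pulled via the CLI once you have the DB System OCID (a sketch; the OCID is a placeholder):

```shell
# Show the endpoints of the MySQL Database System
oci mysql db-system get --db-system-id <your-db-system-ocid> \
    --query 'data.endpoints' --output table
```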
Create our Endpoint Compute instance.
In order to access our MDS from outside the VCN, we’ll be using a simple Compute instance as a jump server.
Here we’ll install MySQL Router to be our proxy for external access.
And we’ll also install MySQL Shell to upload the data from our Object Storage bucket.
For example, https://gist.github.com/alastori/005ebce5d05897419026e58b9ab0701b.
First, go to the Security List of your OCI compartment, and add an ingress rule for the port you want to use in Router and allow access from the IP address you have for your application server or from the on-premise public IP address assigned.
Router & Shell install ‘n’ configure
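A minimal sketch of that install and configuration, assuming an Oracle Linux compute with the MySQL yum repo configured, and the MDS endpoint 10.0.0.4 from earlier. A standalone MDS instance carries no InnoDB Cluster metadata, so a static route is used here rather than `mysqlrouter --bootstrap`:

```shell
# Install MySQL Router and MySQL Shell
sudo yum install -y mysql-router mysql-shell
# Append a static route to the MDS endpoint to the Router config
sudo tee -a /etc/mysqlrouter/mysqlrouter.conf <<'EOF'

[routing:mds_rw]
bind_address=0.0.0.0
bind_port=3306
destinations=10.0.0.4:3306
routing_strategy=first-available
EOF
# Start Router and enable it at boot
sudo systemctl enable --now mysqlrouter
```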
Test connectivity.
Test MySQL Router as our proxy, via MySQL Shell:
$ mysqlsh root@kh01:3306 --sql -e 'show databases'
Now, we can test connectivity from our pc / application server / on-premise environment. Knowing the public IP address, let’s try:
$ mysqlsh root@<public-ip>:3306 --sql -e 'show databases'
If you get any issues here, check your ingress rules at your VCN level.
Also double-check the OS firewall rules on the freshly created compute instance.
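On an Oracle Linux compute that typically means firewalld (a sketch, assuming Router listens on 3306 as above):

```shell
# Open the Router port in the OS firewall and verify
sudo firewall-cmd --permanent --add-port=3306/tcp
sudo firewall-cmd --reload
sudo firewall-cmd --list-ports
```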
Preparation is done.
We can connect to our MDS instance from the Compute instance where MySQL Router is installed, kh01, and also from our own (on-premise) environment.
Let’s get the data streaming.
MySQL Shell Dump Utility
In effect, it’s here when we’ll be ‘streaming’ data.
This means that from our on-premise host we’ll export the data into the osBucket in OCI, and at the same time, read from that bucket from our Compute host kh01 that will import the data into MDS.
First of all, I want to check the commands with “dryRun: true”.
util.dumpSchemas dryRun
From our own environment / on-premise installation, we now want to dump / export the data:
$ mysqlsh root@OnPremiseHost:3306
You’ll want to see what options are available and how to use the util.dumpSchemas utility:
mysqlsh> \help util.dumpSchemas
NAME
dumpSchemas – Dumps the specified schemas to the files in the output
directory.
SYNTAX
util.dumpSchemas(schemas, outputUrl[, options])
WHERE
schemas: List of schemas to be dumped.
outputUrl: Target directory to store the dump files.
options: Dictionary with the dump options.
Here’s the command we’ll be using, but we want to activate the ‘dryRun’ mode, to make sure it’s all ok. So:
util.dumpSchemas(["test"], "test", {
    dryRun: true,
    showProgress: true,
    threads: 8,
    ocimds: true,
    osBucketName: "test-bucket",
    osNamespace: "idazzjlcjqzj",
    ociConfigFile: "/home/os_user/.oci/config",
    compatibility: ["strip_definers"]
})
["test"] I just want to dump the test schema. I could put a list of schemas here. Careful if you think you can export internal schemas, ‘cos you can’t.
"test" is the outputUrl target directory; it becomes the prefix of all the files created in the bucket.
options:
dryRun: Quite obvious. Change it to false to run.
showProgress: I want to see the progress of the loading.
threads: Default is 4 but choose what you like here, according to the resources available.
ocimds: VERY IMPORTANT! This is to make sure that the environment is “MDS Ready” so when the data gets to the cloud, nothing breaks.
osBucketName: The name of the bucket we created.
osNamespace: The namespace of the bucket.
ociConfigFile: This is what we looked at, right at the beginning. This is what makes it easy.
compatibility: There is a list of options here that helps remove customizations and/or simplify our data export so it's ready for MDS.
Here I am looking at exporting / dumping just schemas. I could have dumped the whole instance via util.dumpInstance. Have a try!
I tested a local dumpSchemas export without the ocimds option, and it's worth sharing: this is how I found out that I needed a primary key for chunking to be enabled, and hence a faster dump:
util.dumpSchemas(["test"], "/var/lib/mysql-files/test/test", {dryRun: true, showProgress: true})
Acquiring global read lock
All transactions have been started
Locking instance for backup
Global read lock has been released
Writing global DDL files
Preparing data dump for table `test`.`reviews`
Writing DDL for schema `test`
Writing DDL for table `test`.`reviews`
Data dump for table `test`.`reviews` will be chunked using column `review_id`
(I created the primary key on the review_id column and got rid of the following warning at the end:)
WARNING: Could not select a column to be used as an index for table `test`.`reviews`. Chunking has been disabled for this table, data will be dumped to a single file.
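The fix, for reference (the schema, table, and column names come from the output above):

```shell
# Add a primary key so the dump utility can chunk the table
mysqlsh root@OnPremiseHost:3306 --sql -e \
    "ALTER TABLE test.reviews ADD PRIMARY KEY (review_id);"
```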
Anyway, I used dumpSchemas (instead of dumpInstance) with ocimds enabled, and then loaded with the following:
util.loadDump dryRun
Now, we’re on the compute we created, with Shell 8.0.21 installed and ready to upload / import the data:
$ mysqlsh root@kh01:3306
util.loadDump("test", {dryRun: true, showProgress: true, threads: 8, osBucketName: "test-bucket", osNamespace: "idazzjlcjqzj", ociConfigFile: "/home/osuser/.oci/config"})
As you might imagine, I've copied my PEM key and OCI CLI config file to the compute instance, via scp, into a $HOME/.oci directory.
Loading DDL and Data from OCI ObjectStorage bucket=test-bucket, prefix=’test’ using 8 threads.
Util.loadDump: Failed opening object '@.json' in READ mode: Not Found (404) (RuntimeError)
This is due to the bucket being empty. You’ll see why it complains of the “@.json” in a second.
You want to do some “streaming”?
With our two session windows open, one on the on-premise instance and the other on the OCI compute host, connected with mysqlsh:
On-premise:
dry run:
util.dumpSchemas(["test"], "test", {dryRun: true, showProgress: true, threads: 8, ocimds: true, "osBucketName": "test-bucket", "osNamespace": "idazzjlcjqzj", ociConfigFile: "/home/os_user/.oci/config", "compatibility": ["strip_definers"]})
real:
util.dumpSchemas(["test"], "test", {dryRun: false, showProgress: true, threads: 8, ocimds: true, "osBucketName": "test-bucket", "osNamespace": "idazzjlcjqzj", ociConfigFile: "/home/os_user/.oci/config", "compatibility": ["strip_definers"]})
OCI Compute host:
dry run:
util.loadDump("test", {dryRun: true, showProgress: true, threads: 8, osBucketName: "test-bucket", osNamespace: "idazzjlcjqzj", waitDumpTimeout: 180})
real:
util.loadDump("test", {dryRun: false, showProgress: true, threads: 8, osBucketName: "test-bucket", osNamespace: "idazzjlcjqzj", waitDumpTimeout: 180})
They do say a picture is worth a thousand words; here are some images of each window, executed at the same time:
On-premise:
At the OCI compute host you can see the waitDumpTimeout take effect with:
NOTE: Dump is still ongoing, data will be loaded as it becomes available.
In the osBucket, we can now see content (which is what the loadDump is reading):
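You can also list that content from the command line (namespace and bucket name as created earlier):

```shell
# List the dump files the loadDump utility is reading from the bucket
oci os object list --bucket-name test-bucket --namespace-name idazzjlcjqzj \
    --query 'data[].name' --output table
```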
And once it’s all dumped ‘n’ uploaded we have the following output:
If you like logs, check .mysqlsh/mysqlsh.log, which records all the output; you'll find it under the home directory of the user running MySQL Shell (on-premise & OCI compute).
Now the data is all in our MySQL Database System, all we need to do is point the web server or the application server at the OCI compute instance's IP and port so that MySQL Router can route the connection to happiness!!!!
Conclusion
via Planet MySQL https://ift.tt/2iO8Ob8
September 13, 2020 at 11:32PM