MySQL in the Cloud – Online Migration from Amazon RDS to EC2 instance (part 1)

In our previous blog, we saw how easy it is to get started with RDS for MySQL. It is a convenient way to deploy and use MySQL, without worrying about operational overhead. The tradeoff though is reduced control, as users are entirely reliant on Amazon staff in case of poor performance or operational anomalies. No access to the data directory or physical backups makes it hard to move data out of RDS. This can be a major problem if your database outgrows RDS, and you decide to migrate to another platform. This two-part blog shows you how to do an online migration from RDS to your own MySQL server.

We’ll be using EC2 to run our own MySQL Server. It can be a first step towards more complex migrations to your own private datacenters. EC2 gives you access to your data, so xtrabackup can be used. EC2 also allows you to set up SSH tunnels, and it removes the requirement of setting up hardware VPN connections between your on-premises infrastructure and the VPC.

Assumptions

Before we start, we need to make a couple of assumptions – especially around security. First and foremost, we assume that the RDS instance is not accessible from outside of AWS. We also assume that you have an application running in EC2. This implies that either the RDS instance and the rest of your infrastructure share a VPC, or that access is configured between them one way or the other. In short, we assume that you can create a new EC2 instance and that it will have access (or can be configured to have access) to your MySQL RDS instance.
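A quick way to verify this assumption, once the new EC2 instance is up, is simply to try connecting to the RDS endpoint from it – a minimal check, assuming the MySQL client is already installed on the host (the endpoint and tpcc credentials are the ones used throughout this post):

root@ip-172-30-4-238:~# mysql -h rds2.cvsw8xpajw2b.us-east-1.rds.amazonaws.com -u tpcc -ptpccpass -e "SELECT 1;"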

We have configured ClusterControl on the application host. We’ll use it to manage our EC2 MySQL instance.

Initial setup

In our case, the RDS instance shares the same VPC with our “application” (an EC2 instance with IP 172.30.4.228) and with the host which will be the target of the migration process (an EC2 instance with IP 172.30.4.238). As the application, we are going to use the tpcc-mysql benchmark, executed in the following way:

./tpcc_start -h rds2.cvsw8xpajw2b.us-east-1.rds.amazonaws.com -d tpcc1000 -u tpcc -p tpccpass -w 20 -r 60 -l 600 -i 10 -c 4

Initial plan

We are going to perform a migration using the following steps:

  1. Set up our target environment using ClusterControl – install MySQL on 172.30.4.238
  2. Install ProxySQL, which we will use to manage our traffic at the time of failover
  3. Dump the data from the RDS instance
  4. Load the data into our target host
  5. Set up replication between the RDS instance and the target host
  6. Switch over traffic from RDS to the target host

Prepare environment using ClusterControl

Assuming we have ClusterControl installed (if you don’t, you can grab it from: http://ift.tt/2kRZ2IQ), we need to set up our target host. We will use the deployment wizard in ClusterControl for that:

Deploying a Database Cluster in ClusterControl

Once this is done, you will see a new cluster (in this case, just your single server) in the cluster list:

Database Cluster in ClusterControl

The next step is to install ProxySQL – starting from ClusterControl 1.4, you can do it easily from the UI. We covered this process in detail in this blog post. When installing it, we picked our application host (172.30.4.228) as the host to install ProxySQL on. When installing, you also have to pick a host to route your traffic to. As we only have our “destination” host in the cluster, you can include it, but then a couple of changes are needed to redirect traffic to the RDS instance.

If you have chosen to include the destination host (in our case it was 172.30.4.238) in the ProxySQL setup, you’ll see the following entries in the mysql_servers table:

mysql> select * from mysql_servers\G
*************************** 1. row ***************************
       hostgroup_id: 20
           hostname: 172.30.4.238
               port: 3306
             status: ONLINE
             weight: 1
        compression: 0
    max_connections: 100
max_replication_lag: 10
            use_ssl: 0
     max_latency_ms: 0
            comment: read server
*************************** 2. row ***************************
       hostgroup_id: 10
           hostname: 172.30.4.238
               port: 3306
             status: ONLINE
             weight: 1
        compression: 0
    max_connections: 100
max_replication_lag: 10
            use_ssl: 0
     max_latency_ms: 0
            comment: read and write server
2 rows in set (0.00 sec)
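These backend definitions work together with ProxySQL’s query rules, which implement the read/write split. If you are curious how ClusterControl configured them, you can take a quick look at the admin interface (default port 6032) – the exact rules will differ per setup, but something along these lines:

mysql> SELECT rule_id, active, match_pattern, match_digest, destination_hostgroup, apply FROM mysql_query_rules;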

ClusterControl configured ProxySQL to use hostgroups 10 and 20 to route writes and reads to the backend servers. We will have to remove the currently configured host from those hostgroups and add the RDS instance there. First, though, we have to ensure that ProxySQL’s monitor user can access the RDS instance.

mysql> SHOW VARIABLES LIKE 'mysql-monitor_username';
+------------------------+------------------+
| Variable_name          | Value            |
+------------------------+------------------+
| mysql-monitor_username | proxysql-monitor |
+------------------------+------------------+
1 row in set (0.00 sec)
mysql> SHOW VARIABLES LIKE 'mysql-monitor_password';
+------------------------+---------+
| Variable_name          | Value   |
+------------------------+---------+
| mysql-monitor_password | monpass |
+------------------------+---------+
1 row in set (0.00 sec)

We need to grant this user access to RDS. If we needed it to track replication lag, the user would have to have the ‘REPLICATION CLIENT’ privilege. In our case it is not needed, as we don’t have a slave RDS instance – ‘USAGE’ will be enough.

root@ip-172-30-4-228:~# mysql -ppassword -h rds2.cvsw8xpajw2b.us-east-1.rds.amazonaws.com
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 210
Server version: 5.7.16-log MySQL Community Server (GPL)

Copyright (c) 2000, 2016, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql> CREATE USER 'proxysql-monitor'@172.30.4.228 IDENTIFIED BY 'monpass';
Query OK, 0 rows affected (0.06 sec)

Now it’s time to reconfigure ProxySQL. We are going to add the RDS instance to both the writer (10) and reader (20) hostgroups. We will also remove 172.30.4.238 from those hostgroups – we’ll just edit the entries and add 100 to each hostgroup_id.

mysql> INSERT INTO mysql_servers (hostgroup_id, hostname, max_connections, max_replication_lag) VALUES (10, 'rds2.cvsw8xpajw2b.us-east-1.rds.amazonaws.com', 100, 10);
Query OK, 1 row affected (0.00 sec)
mysql> INSERT INTO mysql_servers (hostgroup_id, hostname, max_connections, max_replication_lag) VALUES (20, 'rds2.cvsw8xpajw2b.us-east-1.rds.amazonaws.com', 100, 10);
Query OK, 1 row affected (0.00 sec)
mysql> UPDATE mysql_servers SET hostgroup_id=110 WHERE hostname='172.30.4.238' AND hostgroup_id=10;
Query OK, 1 row affected (0.00 sec)
mysql> UPDATE mysql_servers SET hostgroup_id=120 WHERE hostname='172.30.4.238' AND hostgroup_id=20;
Query OK, 1 row affected (0.00 sec)
mysql> LOAD MYSQL SERVERS TO RUNTIME;
Query OK, 0 rows affected (0.01 sec)
mysql> SAVE MYSQL SERVERS TO DISK;
Query OK, 0 rows affected (0.07 sec)
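It may also be worth checking that ProxySQL’s monitor module can now reach the RDS instance – a quick, optional sanity check against the admin interface (the table below is part of ProxySQL’s monitor schema):

mysql> SELECT hostname, port, connect_error FROM monitor.mysql_server_connect_log ORDER BY time_start_us DESC LIMIT 5;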

The last step required before we can use ProxySQL to redirect our traffic is to add our application user to ProxySQL.

mysql> INSERT INTO mysql_users (username, password, active, default_hostgroup) VALUES ('tpcc', 'tpccpass', 1, 10);
Query OK, 1 row affected (0.00 sec)
mysql> LOAD MYSQL USERS TO RUNTIME; SAVE MYSQL USERS TO DISK; SAVE MYSQL USERS TO MEMORY;
Query OK, 0 rows affected (0.00 sec)

Query OK, 0 rows affected (0.05 sec)

Query OK, 0 rows affected (0.00 sec)
mysql> SELECT username, password FROM mysql_users WHERE username='tpcc';
+----------+-------------------------------------------+
| username | password                                  |
+----------+-------------------------------------------+
| tpcc     | *8C446904FFE784865DF49B29DABEF3B2A6D232FC |
+----------+-------------------------------------------+
1 row in set (0.00 sec)

Quick note – we executed “SAVE MYSQL USERS TO MEMORY;” only to have the password hashed not only in RUNTIME but also in the working memory buffer. You can find more details about ProxySQL’s password hashing mechanism in its documentation.

We can now redirect our traffic to ProxySQL. How to do it depends on your setup; we just restarted tpcc and pointed it at ProxySQL.
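In our case that simply meant re-running the benchmark against ProxySQL instead of the RDS endpoint – something along these lines, assuming ProxySQL listens on its default MySQL port 6033 on the application host:

./tpcc_start -h 127.0.0.1 -P 6033 -d tpcc1000 -u tpcc -p tpccpass -w 20 -r 60 -l 600 -i 10 -c 4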

Redirecting Traffic with ProxySQL

At this point, we have built a target environment to which we will migrate. We also prepared ProxySQL and configured it for our application to use. We now have a good foundation for the next step, which is the actual data migration. In the next post, we will show you how to copy the data out of RDS into our own MySQL instance (running on EC2). We will also show you how to switch traffic to your own instance while applications continue to serve users, without downtime.

via Planet MySQL

Mussels inspire glue that sticks despite water

Scientists have modeled a new adhesive that works underwater after the shellfish that stick to surfaces. It’s stronger than many commercial glues created for the purpose.

“Our current adhesives are terrible at wet bonding, yet marine biology solved this problem eons ago,” says Jonathan Wilker, professor of chemistry and materials engineering at Purdue University.

“Mussels, barnacles, and oysters attach to rocks with apparent ease. In order to develop new materials able to bind within harsh environments, we made a biomimetic polymer that is modeled after the adhesive proteins of mussels.”

New findings, published in the journal ACS Applied Materials and Interfaces, show that the bio-based glue performed better than 10 commercial adhesives when used to bond polished aluminum. When compared with the five strongest commercial glues included in the study, the new adhesive performed better when bonding wood, Teflon, and polished aluminum. It was the only adhesive of those tested that worked with wood and far out-performed the other adhesives when used to join Teflon.

Mussel chemistry

Mussels extend hair-like fibers that attach to surfaces using plaques of adhesive. Proteins in the glue contain the amino acid DOPA, which harbors the chemistry needed to provide strength and adhesion. The researchers have now inserted this chemistry of mussel proteins into a biomimetic polymer called poly(catechol-styrene), creating an adhesive by harnessing the chemistry of compounds called catechols, which DOPA contains.

“We are focusing on catechols given that the animals use this type of chemistry so successfully,” Wilker says. “Poly(catechol-styrene) is looking to be, possibly, one of the strongest underwater adhesives found to date.”

Sandcastle worms teach us how to make underwater glue

While most adhesives interact with water instead of sticking to surfaces, the catechol groups may have a special talent for “drilling down” through surface waters in order to bind onto surfaces, he says. The researchers conducted a series of underwater bond tests in tanks of artificial seawater.

“These findings are helping to reveal which aspects of mussel adhesion are most important when managing attachment within their wet and salty environment,” Wilker says. “All that is needed for high strength bonding underwater appears to be a catechol-containing polymer.”

17X stronger

Surprisingly, the new adhesive also proved to be about 17 times stronger than the natural adhesive produced by mussels. “In biomimetics, where you try to make synthetic versions of natural materials and compounds, you almost never can achieve performance as good as the natural system,” Wilker says.

One explanation might be that the animals have evolved to produce adhesives that are only as strong as they need to be for their specific biological requirements. The natural glues might be designed to give way when the animals are hunted by predators, breaking off when pulled from a surface instead of causing injury to internal tissues.

“We have shown that this adhesive system works quite well within controlled laboratory conditions. In the future we want to move on to more practical applications in the real world,” Wilker says.

The Office of Naval Research funded the work.

Source: Purdue University

The post Mussels inspire glue that sticks despite water appeared first on Futurity.

via Futurity.org

MySQL in the Cloud – Online Migration from Amazon RDS to your own server (part 2)

As we saw earlier, it might be challenging for companies to move their data out of RDS for MySQL. In the first part of this blog, we showed you how to set up your target environment on EC2 and insert a proxy layer (ProxySQL) between your applications and RDS. In this second part, we will show you how to do the actual migration of data to your own server, and then redirect your applications to the new database instance without downtime.

Copying data out of RDS

Once we have our database traffic running through ProxySQL, we can start preparations to copy our data out of RDS. We need to do this in order to set up replication between RDS and our MySQL instance running on EC2. Once this is done, we will configure ProxySQL to redirect traffic from RDS to our MySQL/EC2.

As we discussed in the first blog post in this series, the only way you can get data out of RDS is via a logical dump. Without access to the instance, we cannot use any hot, physical backup tools like xtrabackup. We cannot use snapshots either, as there is no way to build anything other than a new RDS instance from a snapshot.

We are limited to logical dump tools, so the logical option would be to use mydumper/myloader to process the data. Luckily, mydumper can create consistent backups, so we can rely on it to provide binlog coordinates for our new slave to connect to. The main issue when building RDS replicas is the binlog rotation policy – a logical dump and load may take days on larger (hundreds of gigabytes) datasets, and you need to keep binlogs on the RDS instance for the duration of the whole process. Sure, you can increase binlog retention on RDS (call mysql.rds_set_configuration(‘binlog retention hours’, 24); – you can keep them for up to 7 days), but it’s much safer to do it differently.
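For reference, checking and raising the retention comes down to two stored procedure calls on the RDS instance (a quick sketch; 168 hours is the 7-day maximum mentioned above):

mysql> CALL mysql.rds_show_configuration;
mysql> CALL mysql.rds_set_configuration('binlog retention hours', 168);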

Before we proceed with taking a dump, we will add a replica to our RDS instance.

Amazon RDS Dashboard

Create Replica DB in RDS

Once we click on the “Create Read Replica” button, a snapshot of the “master” RDS instance will be taken and used to provision the new slave. The process may take hours – it all depends on the volume size, when the last snapshot was taken, and the performance of the volume (io1 or gp2? magnetic? how many pIOPS does the volume have?).

Master RDS Replica

When the slave is ready (its status has changed to “available”), we can log into it using its RDS endpoint.

RDS Slave

Once logged in, we will stop replication on our slave – this will ensure the RDS master won’t purge binary logs, so they will still be available for our EC2 slave once we complete the dump/reload process.

mysql> CALL mysql.rds_stop_replication;
+---------------------------+
| Message                   |
+---------------------------+
| Slave is down or disabled |
+---------------------------+
1 row in set (1.02 sec)

Query OK, 0 rows affected (1.02 sec)

Now, it’s finally time to copy the data to EC2. First, we need to install mydumper. You can get it from GitHub: http://ift.tt/2e5Py5g. The installation process is fairly simple and nicely described in the readme file, so we won’t cover it here. Most likely you will have to install a couple of packages (listed in the readme), and the harder part is identifying which package contains mysql_config – it depends on the MySQL flavor (and sometimes also the MySQL version).
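As an illustration, on a Debian/Ubuntu host the whole build could look roughly like this (the package names are an assumption and depend on your distribution and MySQL flavor; here mysql_config comes from libmysqlclient-dev):

root@ip-172-30-4-228:~# apt-get install -y cmake g++ libglib2.0-dev libmysqlclient-dev zlib1g-dev libpcre3-dev libssl-dev
root@ip-172-30-4-228:~# git clone https://github.com/maxbube/mydumper.git && cd mydumper
root@ip-172-30-4-228:~/mydumper# cmake . && make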

Once you have mydumper compiled and ready to go, you can execute it:

root@ip-172-30-4-228:~/mydumper# mkdir /tmp/rdsdump
root@ip-172-30-4-228:~/mydumper# ./mydumper -h rds2.cvsw8xpajw2b.us-east-1.rds.amazonaws.com -p tpccpass -u tpcc  -o /tmp/rdsdump  --lock-all-tables --chunk-filesize 100 --events --routines --triggers

Please note --lock-all-tables, which ensures that the snapshot of the data will be consistent and that it will be possible to use it to create a slave. Now, we have to wait until mydumper completes its task.

One more step is required – we don’t want to restore the mysql schema, but we need to copy the users and their grants. We can use pt-show-grants for that:

root@ip-172-30-4-228:~# wget http://ift.tt/1mmspPa
root@ip-172-30-4-228:~# chmod u+x ./pt-show-grants
root@ip-172-30-4-228:~# ./pt-show-grants -h rds2.cvsw8xpajw2b.us-east-1.rds.amazonaws.com -u tpcc -p tpccpass > grants.sql

Sample output from pt-show-grants may look like this:

-- Grants for 'sbtest'@'%'
CREATE USER IF NOT EXISTS 'sbtest'@'%';
ALTER USER 'sbtest'@'%' IDENTIFIED WITH 'mysql_native_password' AS '*2AFD99E79E4AA23DE141540F4179F64FFB3AC521' REQUIRE NONE PASSWORD EXPIRE DEFAULT ACCOUNT UNLOCK;
GRANT ALTER, ALTER ROUTINE, CREATE, CREATE ROUTINE, CREATE TEMPORARY TABLES, CREATE USER, CREATE VIEW, DELETE, DROP, EVENT, EXECUTE, INDEX, INSERT, LOCK TABLES, PROCESS, REFERENCES, RELOAD, REPLICATION CLIENT, REPLICATION SLAVE, SELECT, SHOW DATABASES, SHOW VIEW, TRIGGER, UPDATE ON *.* TO 'sbtest'@'%';

It is up to you to decide which users need to be copied onto your MySQL/EC2 instance. It doesn’t make sense to do it for all of them. For example, root users don’t have the ‘SUPER’ privilege on RDS, so it’s better to recreate them from scratch. What you need to copy are the grants for your application user. We also need to copy the users used by ProxySQL (proxysql-monitor in our case).
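One simple way to extract just those accounts from the pt-show-grants output is to filter on the “-- Grants for” headers – a rough sketch (the user list in the awk pattern and the root credentials on the target host are assumptions for this example):

root@ip-172-30-4-228:~# awk '/^-- Grants for/{keep=/tpcc|proxysql-monitor/} keep' grants.sql > app_grants.sql
root@ip-172-30-4-228:~# mysql -uroot -ppass -h 172.30.4.238 < app_grants.sql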


Inserting data into your MySQL/EC2 instance

As stated above, we don’t want to restore system schemas. Therefore we will move files related to those schemas out of our mydumper directory:

root@ip-172-30-4-228:~# mkdir /tmp/rdsdump_sys/
root@ip-172-30-4-228:~# mv /tmp/rdsdump/mysql* /tmp/rdsdump_sys/
root@ip-172-30-4-228:~# mv /tmp/rdsdump/sys* /tmp/rdsdump_sys/

When we are done with that, it’s time to start loading data into the MySQL/EC2 instance:

root@ip-172-30-4-228:~/mydumper# ./myloader -d /tmp/rdsdump/ -u tpcc -p tpccpass -t 4 --overwrite-tables -h 172.30.4.238

Please note that we used four threads (-t 4) – make sure you set this to whatever makes sense in your environment. It’s all about saturating the target MySQL instance – either CPU or I/O, depending on the bottleneck. We want to squeeze as much out of it as possible to ensure we use all available resources for loading the data.
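A couple of standard tools on the target host are enough to tell which resource you are hitting while myloader runs – for example (hypothetical checks, not part of the original procedure):

root@ip-172-30-4-238:~# iostat -xm 5   # per-device utilization and throughput
root@ip-172-30-4-238:~# vmstat 5       # CPU usage, run queue and I/O wait at a glance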

After the main data is loaded, there are two more steps to take; both are related to RDS internals and both may break our replication. First, RDS contains a couple of rds_* tables in the mysql schema. We want to load them in case some of them are used by RDS – replication will break if our slave doesn’t have them. We can do it in the following way:

root@ip-172-30-4-228:~/mydumper# for i in $(ls -alh /tmp/rdsdump_sys/ | grep rds | awk '{print $9}') ; do echo $i ;  mysql -ppass -uroot  mysql < /tmp/rdsdump_sys/$i ; done
mysql.rds_configuration-schema.sql
mysql.rds_configuration.sql
mysql.rds_global_status_history_old-schema.sql
mysql.rds_global_status_history-schema.sql
mysql.rds_heartbeat2-schema.sql
mysql.rds_heartbeat2.sql
mysql.rds_history-schema.sql
mysql.rds_history.sql
mysql.rds_replication_status-schema.sql
mysql.rds_replication_status.sql
mysql.rds_sysinfo-schema.sql

A similar problem exists with the timezone tables – we need to load them using the data from the RDS instance:

root@ip-172-30-4-228:~/mydumper# for i in $(ls -alh /tmp/rdsdump_sys/ | grep time_zone | grep -v schema | awk '{print $9}') ; do echo $i ;  mysql -ppass -uroot  mysql < /tmp/rdsdump_sys/$i ; done
mysql.time_zone_name.sql
mysql.time_zone.sql
mysql.time_zone_transition.sql
mysql.time_zone_transition_type.sql

When all this is ready, we can set up replication between RDS (master) and our MySQL/EC2 instance (slave).

Setting up replication

Mydumper, when performing a consistent dump, writes down the binary log position. We can find this data in a file called metadata in the dump directory. Let’s take a look at it; we will then use this position to set up replication.

root@ip-172-30-4-228:~/mydumper# cat /tmp/rdsdump/metadata
Started dump at: 2017-02-03 16:17:29
SHOW SLAVE STATUS:
    Host: 10.1.4.180
    Log: mysql-bin-changelog.007079
    Pos: 10537102
    GTID:

Finished dump at: 2017-02-03 16:44:46

One last thing we are missing is a user that we can use to set up our slave. Let’s create one on the RDS instance:

root@ip-172-30-4-228:~# mysql -ppassword -h rds2.cvsw8xpajw2b.us-east-1.rds.amazonaws.com
mysql> CREATE USER IF NOT EXISTS 'rds_rpl'@'%' IDENTIFIED BY 'rds_rpl_pass';
Query OK, 0 rows affected (0.04 sec)
mysql> GRANT REPLICATION SLAVE ON *.* TO 'rds_rpl'@'%';
Query OK, 0 rows affected (0.01 sec)

Now it’s time to slave our MySQL/EC2 server off the RDS instance:

mysql> CHANGE MASTER TO MASTER_HOST='rds2.cvsw8xpajw2b.us-east-1.rds.amazonaws.com', MASTER_USER='rds_rpl', MASTER_PASSWORD='rds_rpl_pass', MASTER_LOG_FILE='mysql-bin-changelog.007079', MASTER_LOG_POS=10537102;
Query OK, 0 rows affected, 2 warnings (0.03 sec)
mysql> START SLAVE;
Query OK, 0 rows affected (0.02 sec)
mysql> SHOW SLAVE STATUS\G
*************************** 1. row ***************************
               Slave_IO_State: Queueing master event to the relay log
                  Master_Host: rds2.cvsw8xpajw2b.us-east-1.rds.amazonaws.com
                  Master_User: rds_rpl
                  Master_Port: 3306
                Connect_Retry: 60
              Master_Log_File: mysql-bin-changelog.007080
          Read_Master_Log_Pos: 13842678
               Relay_Log_File: relay-bin.000002
                Relay_Log_Pos: 20448
        Relay_Master_Log_File: mysql-bin-changelog.007079
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
              Replicate_Do_DB:
          Replicate_Ignore_DB:
           Replicate_Do_Table:
       Replicate_Ignore_Table:
      Replicate_Wild_Do_Table:
  Replicate_Wild_Ignore_Table:
                   Last_Errno: 0
                   Last_Error:
                 Skip_Counter: 0
          Exec_Master_Log_Pos: 10557220
              Relay_Log_Space: 29071382
              Until_Condition: None
               Until_Log_File:
                Until_Log_Pos: 0
           Master_SSL_Allowed: No
           Master_SSL_CA_File:
           Master_SSL_CA_Path:
              Master_SSL_Cert:
            Master_SSL_Cipher:
               Master_SSL_Key:
        Seconds_Behind_Master: 258726
Master_SSL_Verify_Server_Cert: No
                Last_IO_Errno: 0
                Last_IO_Error:
               Last_SQL_Errno: 0
               Last_SQL_Error:
  Replicate_Ignore_Server_Ids:
             Master_Server_Id: 1237547456
                  Master_UUID: b5337d20-d815-11e6-abf1-120217bb3ac2
             Master_Info_File: mysql.slave_master_info
                    SQL_Delay: 0
          SQL_Remaining_Delay: NULL
      Slave_SQL_Running_State: System lock
           Master_Retry_Count: 86400
                  Master_Bind:
      Last_IO_Error_Timestamp:
     Last_SQL_Error_Timestamp:
               Master_SSL_Crl:
           Master_SSL_Crlpath:
           Retrieved_Gtid_Set:
            Executed_Gtid_Set:
                Auto_Position: 0
         Replicate_Rewrite_DB:
                 Channel_Name:
           Master_TLS_Version:
1 row in set (0.01 sec)

The last step will be to switch our traffic from the RDS instance to MySQL/EC2, but we need to let the slave catch up first.
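To keep an eye on the catch-up, it is enough to watch Seconds_Behind_Master on the new slave – for example (a hypothetical one-liner, credentials assumed):

root@ip-172-30-4-238:~# watch -n 10 "mysql -uroot -ppass -e 'SHOW SLAVE STATUS\G' | grep Seconds_Behind_Master"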

When the slave has caught up, we need to perform a cutover. To automate it, we decided to prepare a short bash script which will connect to ProxySQL and do what needs to be done.

# At first, we define old and new masters
OldMaster=rds2.cvsw8xpajw2b.us-east-1.rds.amazonaws.com
NewMaster=172.30.4.238

(
# We remove entries from mysql_replication_hostgroup so ProxySQL logic won’t interfere
# with our script

echo "DELETE FROM mysql_replication_hostgroups;"

# Then we set current master to OFFLINE_SOFT - this will allow current transactions to
# complete while not accepting any more transactions - they will wait (by default for 
# 10 seconds) for a master to become available again.

echo "UPDATE mysql_servers SET STATUS='OFFLINE_SOFT' WHERE hostname=\"$OldMaster\";"
echo "LOAD MYSQL SERVERS TO RUNTIME;"
) | mysql -u admin -padmin -h 127.0.0.1 -P6032


# Here we are going to check for connections in the pool which are still used by 
# transactions which haven’t closed so far. If we see that neither hostgroup 10 nor
# hostgroup 20 has open transactions, we can perform a switchover.

CONNUSED=`mysql -h 127.0.0.1 -P6032 -uadmin -padmin -e 'SELECT IFNULL(SUM(ConnUsed),0) FROM stats_mysql_connection_pool WHERE status="OFFLINE_SOFT" AND (hostgroup=10 OR hostgroup=20)' -B -N 2> /dev/null`
TRIES=0
while [ $CONNUSED -ne 0 -a $TRIES -ne 20 ]
do
  CONNUSED=`mysql -h 127.0.0.1 -P6032 -uadmin -padmin -e 'SELECT IFNULL(SUM(ConnUsed),0) FROM stats_mysql_connection_pool WHERE status="OFFLINE_SOFT" AND (hostgroup=10 OR hostgroup=20)' -B -N 2> /dev/null`
  TRIES=$(($TRIES+1))
  if [ $CONNUSED -ne "0" ]; then
    sleep 0.05
  fi
done

# Here is our switchover logic - we basically exchange hostgroups for RDS and EC2
# instance. We also configure back mysql_replication_hostgroups table.

(
echo "UPDATE mysql_servers SET STATUS='ONLINE', hostgroup_id=110 WHERE hostname=\"$OldMaster\" AND hostgroup_id=10;"
echo "UPDATE mysql_servers SET STATUS='ONLINE', hostgroup_id=120 WHERE hostname=\"$OldMaster\" AND hostgroup_id=20;"
echo "UPDATE mysql_servers SET hostgroup_id=10 WHERE hostname=\"$NewMaster\" AND hostgroup_id=110;"
echo "UPDATE mysql_servers SET hostgroup_id=20 WHERE hostname=\"$NewMaster\" AND hostgroup_id=120;"
echo "INSERT INTO mysql_replication_hostgroups VALUES (10, 20, 'hostgroups');"
echo "LOAD MYSQL SERVERS TO RUNTIME;"
) | mysql -u admin -padmin -h 127.0.0.1 -P6032

When all is done, you should see the following contents in the mysql_servers table:

mysql> select * from mysql_servers;
+--------------+-----------------------------------------------+------+--------+--------+-------------+-----------------+---------------------+---------+----------------+-------------+
| hostgroup_id | hostname                                      | port | status | weight | compression | max_connections | max_replication_lag | use_ssl | max_latency_ms | comment     |
+--------------+-----------------------------------------------+------+--------+--------+-------------+-----------------+---------------------+---------+----------------+-------------+
| 20           | 172.30.4.238                                  | 3306 | ONLINE | 1      | 0           | 100             | 10                  | 0       | 0              | read server |
| 10           | 172.30.4.238                                  | 3306 | ONLINE | 1      | 0           | 100             | 10                  | 0       | 0              | read server |
| 120          | rds2.cvsw8xpajw2b.us-east-1.rds.amazonaws.com | 3306 | ONLINE | 1      | 0           | 100             | 10                  | 0       | 0              |             |
| 110          | rds2.cvsw8xpajw2b.us-east-1.rds.amazonaws.com | 3306 | ONLINE | 1      | 0           | 100             | 10                  | 0       | 0              |             |
+--------------+-----------------------------------------------+------+--------+--------+-------------+-----------------+---------------------+---------+----------------+-------------+

On the application side, you should not see much of an impact, thanks to the ability of ProxySQL to queue queries for some time.
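The time ProxySQL is willing to hold incoming queries while no server is available in a hostgroup is governed (among others) by the mysql-connect_timeout_server_max variable (10000 ms, i.e. 10 seconds, by default – matching the comment in the script above). You can check or tune it from the admin interface if your transactions need more headroom:

mysql> SHOW VARIABLES LIKE 'mysql-connect_timeout_server_max';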

With this, we have completed the process of moving the database from RDS to EC2. The last step is to remove our RDS slave – it did its job, and it can be deleted.

In our next blog post, we will build upon that. We will walk through a scenario in which we will move our database out of AWS/EC2 into a separate hosting provider.

via Planet MySQL

The Scout Rifle: The One Rifle To Have If You Could Only Have One

Note: This article was originally posted on NRA Blog: http://bit.ly/2n1SVCC

NRAblog.com

USA -(Ammoland.com)- Firearms come in many forms, with some designed deliberately for distinct undertakings, such as long-range marksmanship with a precision rifle or a micro subcompact built for concealed carry. Others may fill multiple roles, like an AR-15, which can be used for plinking, hunting or competition.

However, there is one type of rifle designed with excruciating detail that fits a spectrum of uses, meant to be “the one rifle you would have if you could only have just one rifle”: the scout rifle.

Former Marine lieutenant colonel and firearms instructor Jeff Cooper, the founder of the legendary Gunsite Academy in Paulden, Arizona, is the architect of the versatile scout rifle concept. A leading expert on rifle shooting and marksmanship – he authored the rifleman’s tome “The Art of the Rifle” – Cooper envisioned a general-purpose rifle that could be used for hunting and fighting.

(Graphic courtesy/NRA Publications)

In the late 1970s and early 1980s, Cooper began prescribing the characteristics of what would constitute the scout rifle, named so as it would befit the lone rifleman – he described the scout as “a man who acted alone, not as a member of a team” – heading into unfamiliar and potentially hostile environments. To meet the demands of the uncertain, Cooper thought the scout rifle should be lightweight and maneuverable, but use a full-power cartridge capable of stopping large game or other threats to the shooter’s person.

While Cooper’s ideas weren’t groundbreaking by any means – the Germans employed forward-mounted scopes on their Mauser K98K rifles in World War II – it did bring together a series of features not before purpose-engineered in one rifle. The characteristics outlined by Cooper included:

Read the rest of this story on NRA Blog!

This post The Scout Rifle: The One Rifle To Have If You Could Only Have One appeared first on AmmoLand.com Shooting Sports News .

via AmmoLand.com Shooting Sports News

Designing Your Outdoor Shooting Range

Editor’s Note: As articles about irresponsible rural gun owners allowing rounds to escape from home-made ranges featuring poorly-constructed backstops continue to make headlines, Buckeye Firearms Association is dedicated to continuing efforts to educate gun owners on the importance of range responsibility.

Following are links to recommended resources to ensure that "if it’s shot here, it stays here."

Ohio Revised Code 1501:31-29-03 Shooting ranges

Outdoor Shooting Ranges: Best Practices (Minnesota ODNR)

Design Criteria for Shooting Ranges (National Shooting Sports Foundation)

The National Rifle Association offers a number of helpful resources, including, but not limited to:

2012 NRA RANGE SOURCE BOOK ON CD-ROM

2012 NRA RANGE SOURCE BOOK

NRA Range Development & Operations Course

There are ongoing efforts to push legislation that would allow Ohio townships to regulate the discharge of firearms. Don’t be the guy that provides more fuel to the gun control fire.

If you shoot on your property, take responsibility to set up a proper range. 


via Buckeye Firearms Association

Stockpiling Basic SHTF Survival Gear

It is not a matter of if, but when, there will be a major disruption of society. Have you taken your friends and family members into account? When prepping, we cannot just think of ourselves.

Let’s take a few minutes and talk about stockpiling basic survival gear for friends and family members who may show up at your door. We are not talking about blankets, pillows or cots; those should be a given.

We are going to talk about stockpiling basic survival gear for a complete collapse of society.  This should allow friends and family members to hunt, fish, skin wild game and be able to carry basic gear.

Backpack – I used to recommend the medium ALICE pack, but they have gotten expensive and prices continue to increase.

Wait until after school starts, and stores should put their “back to school” backpacks on clearance sale.  Several years ago, I found school backpacks for $5 each.  The store wanted to get rid of the overstock, so the packs were put into bins and put on sale.

Bedroll – Something like a fleece sleeping bag.  Prices range around $20.  Can double as a light blanket for around the house.

Canteen and cup – Military surplus, nothing expensive.  Why a canteen and cup over a water bottle?  The cup can be used to cook with.

Cord – I buy trotline string and use it for cord around the house.

Fire starter – Pill bottle with matches and striker.  Maybe another pill bottle with dryer lint.

Flashlight – Some kind of cheap flashlight.  Everyone should have their own personal flashlight and keep it close at hand.

If the dogs start barking in the middle of the night, nobody should be asking where their flashlight is.

Knife – There are a number of decent quality knives on the market at an affordable price. Sites like Ebay and Amazon are a good place to start.  Do not spend a lot of money.  Just something that can cut cord or skin small game.

I have been adding Survivor brand name knives to my stockpile. They are very affordable and have a wide selection.

Rain poncho – Nothing expensive, just something to build a hooch and keep the rain off.

Water filter – There is a wide range of affordable water filter options on the market.  A buddy kept telling me about the Sawyer mini water filter, so I bought one. If you keep it cleaned out, the filter is rated for 100,000 gallons. As of March 12, 2017 it has a price of $19.99.

Basic Gear

This should cover basic gear needed for someone to do recon around the bug out location or go on food gathering trips.

Food – foraging, hunting and fishing.

Water – water filter and canteen.

Shelter – poncho for hooch and bedroll

The post Stockpiling Basic SHTF Survival Gear appeared first on AllOutdoor.com.

via All Outdoor

Chick-Fil-A Opening High Street Location Near OSU

Soon, you’ll no longer need to drive to the suburbs to get your dose of Christian chicken sandwiches and waffle fries. Popular fast food chain Chick-Fil-A has submitted a design package to the University Area Review Board, asking for approval on a new store front that will be located at 1912 North High Street, inside The Wellington — a new six-story building currently under construction.

While representatives from both Chick-Fil-A and leasing agent CASTO declined to comment further at this point, the submission reveals that the store would be just over 4,000 square feet in size, and will feature patio seating on High Street. The location would be the eighth in Central Ohio, and the only one located in the central city.


There is no projected opening date for Chick-Fil-A as of yet, but the urban-style Target that was previously announced at The Wellington is slated to open sometime in mid-2018 while the apartments are expected to be available in August 2018.

The University Area Review Board will meet to review the Chick-Fil-A submission on Thursday.

CLICK HERE for more updates on the 15th & High development.

For more information, visit www.chick-fil-a.com.

via ColumbusUnderground.com

New Wonder Woman Trailer Shows How the Girl Became a Legend

The latest Wonder Woman trailer is finally here, and it takes us into Diana’s past to show how it shaped her amazing future.

It’s still hard to believe we’re just three months away from a Wonder Woman solo film. Gal Gadot charmed audiences by being the break-out star in Batman v Superman, and fans have been eager to see the heroine take center stage.

This latest trailer introduces us to the younger version of Diana, watching her grow in her strength and abilities over the years. The trailer’s definitely more focused on her personal journey, showing how she overcame the doubt imposed by others and learned to embrace her true destiny. Looks like one exciting ride. Wonder Woman opens June 2.

[Twitter]

via Gizmodo