Build Your Own Glock Series with TFB

The Firearm Blog has partnered with Lone Wolf Distributors to bring you a series on building your own Glock. Buying an 80% receiver is not new to the gun industry. Glocks are all the rage, so why not build one the way YOU want it? Get yourself a hot gat and lose that Fudd factory gun. Get that sweet trigger in there, and pay no attention to the factory fanboys. (Factory purists click here)

I looked at my Gen 2 Glock 21 and figured she is probably ready to retire to safe-queen status, so it is clearly time for my next Glock. I already own a (cough) number of Glocks, but I decided this one has to be a hot gat. Rather than paying top dollar for a pimped-out Glock, I want to build exactly what I want, the way I want.

Completing an 80% receiver is perfectly legal, and surprisingly common. I know of a retired military operator who has built a half dozen and is working on another. In this series, I will be building my pimped-out Glock 19: RMR ready and eventually suppressed.

So, dear reader, hit the link below, visit Lone Wolf, get yourself an 80% receiver, and join us for this surprisingly easy adventure in building yourself a Glock. If it is for a Glock, these guys have it.

Everything you will ever need for your Glock. Check them out.

What You Will Need

  • 80% polymer Frame Kit (here). Lone Wolf also offers a “completion kit” that throws in everything you need for the lower (see here)
  • Slide
  • Slide completion kit (see here)
  • Barrel
  • Drill Press

The 80% frame will arrive in the jig you will use to remove the material to bring the frame to 100%. It also ships with the drill bits required. I will go over all of this in the series, step by step.

In the next installment I will briefly address the legality, as I sat down with an ATF agent before I started: a very cool guy who helped me out with the pitfalls people run into. Then I will start creating my new sweet non-factory Glock.

I know there are a lot of you who have already built up an 80% frame. What do you think? How was your experience? Let us know below.

 

Lone Wolf THE leading Glock accessory supplier – (Click here)



via The Firearm Blog
Build Your Own Glock Series with TFB

MySQL Error: Too many connections!

When your application gets the error "Too many connections", the underlying problem might be caused by several different things.

In this blog we will investigate why these problems happen, and how to solve them, when running MySQL installed on Debian/Ubuntu using systemd (the default MySQL packages).

OS: Debian 9
MySQL Server version: 5.7.25 MySQL Community Server (GPL)
(will most likely be the same for MySQL 8.0 versions)

The goal is to have 10.000 working connections to MySQL!

The default value for max_connections is 151 connections, so the first step is to increase the max_connections variable to 10.000.
This is documented in the manuals here:
– https://ift.tt/23yLjHw
– https://ift.tt/2FNe7om

max_connections is a dynamic setting, so let's increase the value to 10.000 and see what happens.
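The post doesn't show the change itself; since the warnings below are printed at server startup, the value was presumably raised in the MySQL config and the service restarted. A minimal sketch (the config path is an assumption, any included my.cnf section works):

[mysqld]
max_connections = 10000

root@debian:~# systemctl restart mysql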

root@debian:~# mysql -uroot -proot -se "select @@max_connections"
@@max_connections
214

Hmmm, it looks like we only got 214 connections. Let's look at the error log:

2019-04-01T06:29:48.301871Z 0 [Warning] Changed limits: max_open_files: 1024 (requested 50000)
2019-04-01T06:29:48.302000Z 0 [Warning] Changed limits: max_connections: 214 (requested 10000)
2019-04-01T06:29:48.302004Z 0 [Warning] Changed limits: table_open_cache: 400 (requested 2000)

Looks like we hit some resource limit.
Let's look in the MySQL manual for running MySQL under systemd; that can be found here.
It seems we need to increase the number of allowed open files for MySQL: locate the systemd configuration for MySQL and create /etc/systemd/system/mysql.service.d/override.conf (the file can be called anything ending with .conf).

Add LimitNOFILE=10000 to override.conf like this:

root@debian:~# cat /etc/systemd/system/mysql.service.d/override.conf
[Service]
LimitNOFILE=10000

After this we need to reload the systemd daemon and restart the MySQL service:

root@debian:~# systemctl daemon-reload
root@debian:~# systemctl restart mysql

MySQL is now saying we have 9190 connections:

root@debian:~# mysql -uroot -proot -se "select @@max_connections"
@@max_connections
9190

So, MySQL needs some extra open files for its own work, and we need to set the limit a bit higher to get 10.000 connections. Let's set it to 11.000, reload the systemd daemon, and restart the MySQL service.
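At this point override.conf presumably reads (matching the fuller version shown later in the post):

root@debian:~# cat /etc/systemd/system/mysql.service.d/override.conf
[Service]
LimitNOFILE=11000

root@debian:~# systemctl daemon-reload
root@debian:~# systemctl restart mysql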

root@debian:~# mysql -uroot -proot -se "select @@max_connections"
mysql: [Warning] Using a password on the command line interface can be insecure.
@@max_connections
10000

Good, now we have 10.000 connections available according to MySQL.

Let's run our application and verify we can get 10.000 connections. I use this small perl script to open 10.000 connections.

Just below 5.000 connections I get a new error in the application: "Can't create a new thread (errno 11)".
Let's have a look at the MySQL error log:

root@debian:~# tail -1 /var/log/mysql/error.log
2019-04-01T06:50:35.657397Z 0 [ERROR] Can’t create thread to handle new connection(errno= 11)

I found this new limit by running command below when perl script was running:
watch -n.5 "mysql -uroot -proot -se 'show status like \"%threads%\"'"

Strange, where is this new limitation just below 5.000 connections coming from?

Looking at resource limits for my MySQL daemon I should have close to 8000 processes:

root@debian:~# cat /proc/$( pgrep -o mysql )/limits
Max processes             7929                 7929                 processes
Max open files            11000                11000                files

Let's look at the status report for the MySQL service:

root@debian:~# systemctl status mysql | grep Tasks
    Tasks: 127 (limit: 4915)

There seems to be an additional limit on Tasks that caps us at 4915 connections.
Let's expand our override.conf configuration to also cover 11.000 tasks.

root@debian:~# cat /etc/systemd/system/mysql.service.d/override.conf
[Service]
LimitNOFILE=11000
TasksMax=11000

(remember to reload systemd and restart MySQL service after each change in override.conf)

Now we get just under 8.000 connections and hit the same error "Can't create a new thread (errno 11)", but this time it's because of the limit on max processes:

root@debian:~# cat /proc/$( pgrep -o mysql )/limits
Max processes             7929                 7929                 processes

Let's increase this limit to 11000 in our override.conf:

root@debian:~# cat /etc/systemd/system/mysql.service.d/override.conf
[Service]
LimitNOFILE=11000
LimitNPROC=11000
TasksMax=11000

After reloading systemd configuration and restarting MySQL service I can now get
10.000 connections and the perl script runs without any errors!

Summary:
There are different limits when setting max_connections in MySQL:
– The default max_connections is 151.
– At 214 connections you are limited by max open files.
– At 4915 you are limited by Max Tasks (systemd).
– Just below 8000 you are limited by the max number of processes.

By adding an override file (as shown above) you can overcome all these limits.
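For reference, the final override.conf and the reload steps in one place:

root@debian:~# cat /etc/systemd/system/mysql.service.d/override.conf
[Service]
LimitNOFILE=11000
LimitNPROC=11000
TasksMax=11000
root@debian:~# systemctl daemon-reload
root@debian:~# systemctl restart mysql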

Remember to:
Look at the error messages in the application and in the MySQL error log.
Look at the output from: cat /proc/$( pgrep -o mysql )/limits
Look at the output from: systemctl status mysql
Test your application (or use my perl script) and monitor that it works!
Monitor how many connections you have: watch -n.5 "mysql -uroot -proot -se 'show status like \"Threads_connected\"'"

via Planet MySQL
MySQL Error: Too many connections!

Got Amazon Prime? You Can Get a Free Year of Nintendo Switch Online.

Best Gaming Deals | The best deals on games, consoles, and gaming accessories from around the web, updated daily.

Right now, Twitch Prime subscribers can get up to a year of Nintendo Switch Online benefits, including online play, access to classic NES games, and more.

If you’ve got Amazon Prime, you can sign up for Twitch Prime for free.

Here's how it works: you can claim the three-month membership offer now by linking your Twitch and Nintendo accounts, then come back to claim the nine-month individual membership when it unlocks later.


via Gizmodo
Got Amazon Prime? You Can Get a Free Year of Nintendo Switch Online.

MailEclipse: Laravel Mail Editor Package


MailEclipse is a mailable editor package for your Laravel applications to create and manage mailables using a web UI. You can use this package to develop mailables without using the command line, and edit templates associated with mailables using a WYSIWYG editor, among other features.

You can even edit your markdown mailable templates.

When creating a mailable template, you can pick from existing themes provided by this package.

The best way to get an idea of what this package does is to install it and try it out or check out this five-minute demo from the author:

Note: the video doesn’t provide sound, but does give an excellent overview and demo of MailEclipse features.

At the time of writing, this package is a work in progress under active development, which means that if you're interested, you're encouraged to try it out and provide feedback.

To start using this package, check out the source code and readme at Qoraiche/laravel-mail-editor on GitHub.





via Laravel News
MailEclipse: Laravel Mail Editor Package

Every MySQL should have these variables set …

So over the years, we all learn more and more about what we like and use often in MySQL. 

Currently, I step in and out of a number of different systems, and I love being able to see how different companies use MySQL. I also see several aspects and settings that often get missed. So here are a few things I think should always be set; they will not negatively impact your MySQL database.

At a high level:

  • Move the Slow log to a table
  • Set report_host_name 
  • Set master & slaves to use tables
  • Turn off log_queries_not_using_indexes until needed 
  • Side note — USE  ALGORITHM=INPLACE
  • Side note — USE mysql_config_editor
  • Side note — USE  mysql_upgrade --upgrade-system-tables


Move the Slow log to a table 


This is a very simple process with a great return. YES, you can use the Percona Toolkit to analyze the slow logs. However, I like being able to query against the table to find duplicate queries, or filter by time, etc., with a simple query.


mysql> select count(*) from mysql.slow_log;
+----------+
| count(*) |
+----------+
|        0 |
+----------+
1 row in set (0.00 sec)

mysql> select @@slow_query_log,@@sql_log_off;
+------------------+---------------+
| @@slow_query_log | @@sql_log_off |
+------------------+---------------+
|                1 |             0 |
+------------------+---------------+

mysql> set GLOBAL slow_query_log=0;
Query OK, 0 rows affected (0.04 sec)

mysql> set GLOBAL sql_log_off=1;
Query OK, 0 rows affected (0.00 sec)

mysql> ALTER TABLE mysql.slow_log ENGINE = MyISAM;
Query OK, 0 rows affected (0.39 sec)

mysql> set GLOBAL slow_query_log=1;
Query OK, 0 rows affected (0.00 sec)

mysql> set GLOBAL sql_log_off=0;
Query OK, 0 rows affected (0.00 sec)

mysql> SET GLOBAL log_output = 'TABLE';
Query OK, 0 rows affected (0.00 sec)

mysql> SET GLOBAL log_queries_not_using_indexes=0;
Query OK, 0 rows affected (0.00 sec)

mysql> select count(*) from mysql.slow_log;
+----------+
| count(*) |
+----------+
|        0 |
+----------+
1 row in set (0.00 sec)

mysql> select @@slow_launch_time;
+--------------------+
| @@slow_launch_time |
+--------------------+
|                  2 |
+--------------------+
1 row in set (0.00 sec)

mysql> SELECT SLEEP(10);
+-----------+
| SLEEP(10) |
+-----------+
|         0 |
+-----------+
1 row in set (9.97 sec)

mysql> select count(*) from mysql.slow_log;
+----------+
| count(*) |
+----------+
|        1 |
+----------+
1 row in set (0.00 sec)

mysql> select * from mysql.slow_log\G
*************************** 1. row ***************************
    start_time: 2019-03-27 18:02:32
     user_host: klarson[klarson] @ localhost []
    query_time: 00:00:10
     lock_time: 00:00:00
     rows_sent: 1
 rows_examined: 0
            db:
last_insert_id: 0
     insert_id: 0
     server_id: 502
      sql_text: SELECT SLEEP(10)
     thread_id: 16586457

Now you can easily truncate it, dump it, or do whatever else you like with this data.
Add these variable values to your my.cnf file so they are enabled again after a restart.
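A minimal my.cnf sketch of the end state (the [mysqld] section header is assumed):

[mysqld]
slow_query_log = 1
log_output     = TABLE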

Set report_host_name 

This is a simple edit in every my.cnf file, but certainly in the slaves' my.cnf files. On a master, it is just set for whenever that master gets flipped and becomes a slave.


report_host = <hostname>   (or whatever you want to call it)

This allows you, from the master, to run:


mysql> show slave hosts;
+-----------+------------+------+-----------+--------------------------------------+
| Server_id | Host       | Port | Master_id | Slave_UUID                           |
+-----------+------------+------+-----------+--------------------------------------+
|  21235302 | <hostname> | 3306 |  21235301 | a55faa32-c832-22e8-b6fb-e51f15b76554 |
+-----------+------------+------+-----------+--------------------------------------+

Set master & slaves to use tables


mysql> show variables like '%repository';
+---------------------------+-------+
| Variable_name             | Value |
+---------------------------+-------+
| master_info_repository    | FILE  |
| relay_log_info_repository | FILE  |
+---------------------------+-------+

mysql_slave> stop slave;
mysql_slave> SET GLOBAL master_info_repository = 'TABLE'; 
mysql_slave> SET GLOBAL relay_log_info_repository = 'TABLE'; 
mysql_slave> start slave;

Make sure you also add these to my.cnf so you do not lose the binlog file and position at a restart. It will default to FILE otherwise.

  • master-info-repository =TABLE 
  • relay-log-info-repository =TABLE

mysql> show variables like '%repository';
+---------------------------+-------+
| Variable_name             | Value |
+---------------------------+-------+
| master_info_repository    | TABLE |
| relay_log_info_repository | TABLE |
+---------------------------+-------+

All the data is available in tables now, and easily stored with backups.


mysql> desc mysql.slave_master_info;
+------------------------+---------------------+------+-----+---------+-------+
| Field                  | Type                | Null | Key | Default | Extra |
+------------------------+---------------------+------+-----+---------+-------+
| Number_of_lines        | int(10) unsigned    | NO   |     | NULL    |       |
| Master_log_name        | text                | NO   |     | NULL    |       |
| Master_log_pos         | bigint(20) unsigned | NO   |     | NULL    |       |
| Host                   | char(64)            | YES  |     | NULL    |       |
| User_name              | text                | YES  |     | NULL    |       |
| User_password          | text                | YES  |     | NULL    |       |
| Port                   | int(10) unsigned    | NO   |     | NULL    |       |
| Connect_retry          | int(10) unsigned    | NO   |     | NULL    |       |
| Enabled_ssl            | tinyint(1)          | NO   |     | NULL    |       |
| Ssl_ca                 | text                | YES  |     | NULL    |       |
| Ssl_capath             | text                | YES  |     | NULL    |       |
| Ssl_cert               | text                | YES  |     | NULL    |       |
| Ssl_cipher             | text                | YES  |     | NULL    |       |
| Ssl_key                | text                | YES  |     | NULL    |       |
| Ssl_verify_server_cert | tinyint(1)          | NO   |     | NULL    |       |
| Heartbeat              | float               | NO   |     | NULL    |       |
| Bind                   | text                | YES  |     | NULL    |       |
| Ignored_server_ids     | text                | YES  |     | NULL    |       |
| Uuid                   | text                | YES  |     | NULL    |       |
| Retry_count            | bigint(20) unsigned | NO   |     | NULL    |       |
| Ssl_crl                | text                | YES  |     | NULL    |       |
| Ssl_crlpath            | text                | YES  |     | NULL    |       |
| Enabled_auto_position  | tinyint(1)          | NO   |     | NULL    |       |
| Channel_name           | char(64)            | NO   | PRI | NULL    |       |
| Tls_version            | text                | YES  |     | NULL    |       |
| Public_key_path        | text                | YES  |     | NULL    |       |
| Get_public_key         | tinyint(1)          | NO   |     | NULL    |       |
+------------------------+---------------------+------+-----+---------+-------+
27 rows in set (0.05 sec)

mysql> desc mysql.slave_relay_log_info;
+-------------------+---------------------+------+-----+---------+-------+
| Field             | Type                | Null | Key | Default | Extra |
+-------------------+---------------------+------+-----+---------+-------+
| Number_of_lines   | int(10) unsigned    | NO   |     | NULL    |       |
| Relay_log_name    | text                | NO   |     | NULL    |       |
| Relay_log_pos     | bigint(20) unsigned | NO   |     | NULL    |       |
| Master_log_name   | text                | NO   |     | NULL    |       |
| Master_log_pos    | bigint(20) unsigned | NO   |     | NULL    |       |
| Sql_delay         | int(11)             | NO   |     | NULL    |       |
| Number_of_workers | int(10) unsigned    | NO   |     | NULL    |       |
| Id                | int(10) unsigned    | NO   |     | NULL    |       |
| Channel_name      | char(64)            | NO   | PRI | NULL    |       |
+-------------------+---------------------+------+-----+---------+-------+

Turn off log_queries_not_using_indexes until needed 

This was shown above as well. It is a valid variable, but depending on the application it can load the slow log with useless info. Some tables might only have 5 rows in them; you use them for some random drop-down and you never put an index on them. With this enabled, every query against such a table gets logged. Now, I am a big believer that you should put an index on it anyway. But enable this variable when you are looking to troubleshoot and optimize things. Let it run for at least 24 hours, if not a week, so you get a full scope of the system.

mysql> SET GLOBAL log_queries_not_using_indexes=0;
Query OK, 0 rows affected (0.00 sec)


To turn it on:


mysql> SET GLOBAL log_queries_not_using_indexes=1;

Query OK, 0 rows affected (0.00 sec)


Add these variable values to your my.cnf file so they are applied again after a restart.
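A minimal my.cnf sketch matching the "off until needed" advice (the [mysqld] section header is assumed; flip it to 1 while troubleshooting):

[mysqld]
log_queries_not_using_indexes = 0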


Side note — USE  ALGORITHM=INPLACE 

OK, this is not a variable but more of a best practice. You should already be using EXPLAIN before you run a query; this shows you the query plan and lets you be sure all the syntax is valid. I have seen, more than once, a DELETE query executed without a WHERE clause by mistake. So first, always use EXPLAIN to double-check what you plan to do. Note: the other thing you should always do is use ALGORITHM=INPLACE (or ALGORITHM=COPY) when altering tables.
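To illustrate the EXPLAIN habit (TABLE_DEMO and the WHERE clause here are just made-up examples):

mysql> EXPLAIN DELETE FROM TABLE_DEMO WHERE id = 42;

If the plan shows a full table scan when you expected a single row, or the WHERE clause is missing entirely, you just caught it before it ran. With that habit covered, here is ALGORITHM=INPLACE in action: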


mysql> ALTER TABLE TABLE_DEMO   ALGORITHM=INPLACE, ADD INDEX `datetime`(`datetime`);
Query OK, 0 rows affected (1.49 sec)
Records: 0   Duplicates: 0   Warnings: 0

A list of online DDL operations is here.

Side note — USE mysql_config_editor

Previous blog post about this is here 

The simple example

mysql_config_editor set  --login-path=local --host=localhost --user=root --password
Enter password:
# mysql_config_editor print --all
[local]
user = root
password = *****
host = localhost

# mysql
ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: NO)

# mysql  --login-path=local
Welcome to the MySQL monitor.

# mysql  --login-path=local -e 'SELECT NOW()';

Side note — USE  mysql_upgrade --upgrade-system-tables

Don't forget to run mysql_upgrade after you actually upgrade.
This is often forgotten and leads to issues and errors at startup. You do not have to run the upgrade across every table that exists, though. The focus of most upgrades is the system tables, so at the very least focus on those.

mysql_upgrade --login-path=local --upgrade-system-tables

via Planet MySQL
Every MySQL should have these variables set …

Replace MariaDB 10.3 by MySQL 8.0

Why migrate to MySQL 8.0?

MySQL 8.0 brings a lot of new features. These features make the MySQL database much more secure (new authentication, secure password policies and management, …) and fault tolerant (new data dictionary), more powerful (new redo log design, less contention, extreme scale-out of InnoDB, …), with better operations management (SQL Roles, instant add column), many (but really many!) replication enhancements and native group replication… and finally many cool things like the new Document Store, the new MySQL Shell and MySQL InnoDB Cluster, which you should already know if you follow this blog (see this TOP 10 of features for developers and this TOP 10 for DBAs & OPS).

Not a drop-in replacement anymore!

We saw in this previous post how to migrate from MariaDB 5.5 (the default on CentOS/RedHat 7) to MySQL. That was a straightforward migration, as at the time MariaDB was a drop-in replacement for MySQL… but this is not the case anymore since MariaDB 10.x!

Let's get started with the migration to MySQL 8.0!

Options

Two possibilities are available to us:

  1. Use a logical dump for schemas and data
  2. Use a logical dump for schemas and transportable InnoDB tablespaces for the data

Preparing the migration

Option 1 – full logical dump

It's recommended to avoid having to deal with the mysql.* tables, as they won't be compatible. I recommend you save all that information and import the required entries, like users, manually. It's maybe the best time to do some cleanup.
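One common pattern for capturing the users first (a sketch, not the author's method; run it against the old MariaDB server and review grants.sql by hand, since some MariaDB grant syntax will not replay cleanly on MySQL 8.0):

# mysql -N -e "SELECT CONCAT('SHOW GRANTS FOR ''', user, '''@''', host, ''';') FROM mysql.user" | mysql -N | sed 's/$/;/' > grants.sql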

As we are still using our WordPress site to illustrate this migration, I will dump the wp database:

# mysqldump -B wp > wp.sql

MariaDB doesn't provide mysqlpump, so I used the good old mysqldump. There was a nice article this morning about MySQL logical dump solutions; see it here.

Option 2 – table design dump & transportable InnoDB Tables

First we take a dump of our database without the data (-d):

# mysqldump -d -B wp > wp_nodata.sql

Then we export the first tablespace:

[wp]> flush tables wp_comments for export;
Query OK, 0 rows affected (0.008 sec)

We copy it to the desired location (the .ibd and the .cfg):

# cp wp/wp_comments.ibd ~/wp_innodb/
# cp wp/wp_comments.cfg ~/wp_innodb/

And finally we unlock the table:

[wp]> unlock tables;

The operations above need to be repeated for all the tables! If you have a large number of tables, I encourage you to script all these operations.
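For example, a rough (untested) sketch of such a loop, assuming the wp data directory is /var/lib/mysql/wp and ~/wp_innodb is the target. Note the FLUSH ... FOR EXPORT lock only lasts as long as the client session, so the copy happens inside that session via the mysql client's system command:

for t in $(mysql -N -B -e 'SHOW TABLES' wp); do
mysql wp <<EOF
FLUSH TABLES $t FOR EXPORT;
system cp /var/lib/mysql/wp/$t.ibd /var/lib/mysql/wp/$t.cfg ~/wp_innodb/
UNLOCK TABLES;
EOF
done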

Replace the binaries / install MySQL 8.0

Unlike with previous versions, if we install MySQL from the Community repo as seen in this post, MySQL 8.0 won't be seen as a conflicting replacement for MariaDB 10.x. To avoid any conflict and installation failure, we will replace the MariaDB packages with the MySQL ones using yum's swap command:

# yum swap -- install mysql-community-server mysql-community-libs-compat -- \ 
remove MariaDB-server MariaDB-client MariaDB-common MariaDB-compat

This new yum command is very useful, and it allows packages that depend on MySQL libraries, like php-mysql or postfix for example, to stay installed without breaking any dependencies.

The result of the command will be something similar to:

Removed:
MariaDB-client.x86_64 0:10.3.13-1.el7.centos
MariaDB-common.x86_64 0:10.3.13-1.el7.centos
MariaDB-compat.x86_64 0:10.3.13-1.el7.centos
MariaDB-server.x86_64 0:10.3.13-1.el7.centos
Installed:
mysql-community-libs-compat.x86_64 0:8.0.15-1.el7
mysql-community-server.x86_64 0:8.0.15-1.el7
Dependency Installed:
mysql-community-client.x86_64 0:8.0.15-1.el7
mysql-community-common.x86_64 0:8.0.15-1.el7
mysql-community-libs.x86_64 0:8.0.15-1.el7

Now the best approach is to empty the datadir and start mysqld:

# rm -rf /var/lib/mysql/*
# systemctl start mysqld

This will start the initialize process and start MySQL.

As you may know, by default MySQL is now more secure, and a temporary password has been generated for the root user. You can find it in the error log (/var/log/mysqld.log):

2019-03-26T12:32:14.475236Z 5 [Note] [MY-010454] [Server] 
A temporary password is generated for root@localhost: S/vfafkpD9a

At first login with the root user, the password must be changed:

# mysql -u root -p
mysql> set password='Complicate1#';

Adding the credentials

Now we need to create our database (wp), our user and its credentials.

Please note that the PHP version used by default in CentOS might not yet be compatible with the new default secure authentication plugin; therefore, we will have to create our user with the older authentication plugin, mysql_native_password. For more info see these posts:

Migrating to MySQL 8.0 without breaking old application

Drupal and MySQL 8.0.11 – are we there yet ?

Joomla! and MySQL 8.0.12

PHP 7.2.8 & MySQL 8.0

mysql> create user 'wp'@'127.0.0.1' identified with 
'mysql_native_password' by 'fred';

This password (fred) won't be allowed by the default password policy.

To avoid having to change our application, it's possible to override the policy like this:

mysql> set global validate_password.policy=LOW;
mysql> set global validate_password.length=4;
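To make the relaxed policy survive a restart as well, the equivalent my.cnf lines would be (a sketch; this assumes the validate_password component is in use, as in a default MySQL 8.0 setup):

[mysqld]
validate_password.policy = LOW
validate_password.length = 4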


It’s possible to see the user and its authentication plugin easily using the following query:

mysql> select Host, User, plugin, authentication_string from mysql.user where User='wp';
+-----------+------+-----------------------+-------------------------------------------+
| Host      | User | plugin                | authentication_string                     |
+-----------+------+-----------------------+-------------------------------------------+
| 127.0.0.1 | wp   | mysql_native_password | *6C69D17939B2C1D04E17A96F9B29B284832979B7 |
+-----------+------+-----------------------+-------------------------------------------+

We can now create the database and grant the privileges to our user:

mysql> create database wp;
Query OK, 1 row affected (0.00 sec)
mysql> grant all privileges on wp.* to 'wp'@'127.0.0.1';
Query OK, 0 rows affected (0.01 sec)

Restore the data

This process is also defined by the options chosen earlier.

Option 1

This option is the most straightforward: one restore, and our site is back online:

# mysql -u wp -pfred wp <~/wp.sql

Option 2

This operation is more complicated as it requires more steps.

First we will have to restore the whole schema, with no data:

# mysql -u wp -pfred wp <~/wp_nodata.sql

And now, for every table, we need to perform the following operations:

mysql> alter table wp_posts discard tablespace;

# cp ~/wp_innodb/wp_posts.ibd /var/lib/mysql/wp/
# cp ~/wp_innodb/wp_posts.cfg /var/lib/mysql/wp/
# chown mysql. /var/lib/mysql/wp/wp_posts.*

mysql> alter table wp_posts import tablespace;

Yes, this is required for all tables, which is why I encourage you to script it if you choose this option.
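Again a rough (untested) sketch, mirroring the export loop from earlier (run as root; same assumed paths as above):

for t in $(mysql -N -B -e 'SHOW TABLES' wp); do
mysql wp <<EOF
ALTER TABLE $t DISCARD TABLESPACE;
system cp ~/wp_innodb/$t.ibd ~/wp_innodb/$t.cfg /var/lib/mysql/wp/
system chown mysql:mysql /var/lib/mysql/wp/$t.ibd /var/lib/mysql/wp/$t.cfg
ALTER TABLE $t IMPORT TABLESPACE;
EOF
done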

Conclusion

So as you can see, it's still possible to migrate from MariaDB to MySQL, but since 10.x it is no longer a drop-in replacement, and the migration requires several steps, including a logical dump.

via Planet MySQL
Replace MariaDB 10.3 by MySQL 8.0

Make Gun Tools That Don’t Exist with Moldable Thermoplastic

While working on a completely different project I discovered something curious on Amazon. That product was moldable thermoplastic pellets. Shaped in balls like smaller-than-usual airsoft pellets, moldable thermoplastic melts at just 140F, can be formed like clay, and then increases in hardness as it approaches room temperature. There are seemingly endless uses for this product, but I had a pet… more
via Recoil
Make Gun Tools That Don’t Exist with Moldable Thermoplastic

Laravel 5.8 REST CRUD API Tutorial – Build a CRM [PART 1]: Eloquent Models and Relationships

Laravel 5.8 was recently released with many improvements, so throughout this tutorial we'll be learning how to create an example CRUD application from scratch. The application we'll be building is a simple CRM with a MySQL database.
You can see this Upgrade Guide for instructions on how to upgrade an existing web application from Laravel 5.7 to Laravel 5.8.
Tutorial Prerequisites
This tutorial has some prerequisites:
You have at least PHP 7.1.3 installed on your development machine,
Working experience with PHP, and the OpenSSL, PDO, Mbstring, Tokenizer, XML, Ctype, JSON and BCMath PHP extensions installed with your PHP version,
The MySQL database management system,
Composer (a dependency manager for PHP) installed on your machine. You can head to the official website for instructions on how to download and install it.
If you have these prerequisites, let's get started by creating our first Laravel 5.8 project.
Creating a Laravel 5.8 Project with PHP Composer
Let’s use Composer to create a project based on Laravel 5.8. Open a new terminal and run the following command:
$ composer create-project --prefer-dist laravel/laravel laravel-crud
This command will automatically start installing the latest version of Laravel, provided that you have the required PHP dependencies for Laravel 5.8 installed on your system.
After finishing the installation process, navigate to the project’s folder:
$ cd ./laravel-crud
Next, serve your application using the artisan serve command:
$ php artisan serve
This will start a Laravel development server on http://127.0.0.1:8000. Just leave it open, as any changes we make will automatically get reloaded.
Configuring the MySQL Database
We’ll be using MySQL, the most popular database system used by PHP and Laravel developers so make sure you have created a database for your project. You can simply use the mysql client. Open a new terminal window and run the following command:
$ mysql -u root -p
You will get prompted for a password. Enter the one you set when you configured your MySQL installation and hit Enter.
When the mysql client starts, enter the following SQL instruction to create a database:
mysql> create database l58db;
Note: You can also use phpMyAdmin to create and work with MySQL databases. phpMyAdmin is a free web interface tool created in PHP, intended to handle the administration of MySQL over the Web. It's a beginner-friendly tool that's commonly used by PHP developers.
Now, let's let Laravel know about our created database. Open the .env file in the root of your project and update the MySQL credentials with your own values:

DB_CONNECTION=mysql
DB_HOST=127.0.0.1
DB_PORT=3306
DB_DATABASE=l58db
DB_USERNAME=root
DB_PASSWORD=<YOUR_DATABASE_PASSWORD>
This will allow your application to connect to your MySQL database.
You can also configure the database connection in the config/database.php.
The database configuration for your application is located at config/database.php. In this file you may define all of your database connections, as well as specify which connection should be used by default. Examples for most of the supported database systems are provided in this file. The official docs
Next, let’s create the database tables using the following command:
$ php artisan migrate
Note: we haven't created any models yet, but Laravel itself makes use of several tables, so we need to run the previous command to create them in our database. After we create our own models, we can run the artisan migrate command again to update the database structure.
This is the output of the command:
Migration table created successfully.
Migrating: 2014_10_12_000000_create_users_table
Migrated: 2014_10_12_000000_create_users_table
Migrating: 2014_10_12_100000_create_password_resets_table
Migrated: 2014_10_12_100000_create_password_resets_table
You can see that our project already has the users and password_resets tables.
The Application We’ll Be Building
We'll be building a simple CRM application that allows sales managers to manage contacts, accounts, leads, opportunities, tasks and related activities. For the sake of simplicity we'll add as few interfaces as we can to our application. The main interface is a dashboard which contains the table of contacts and their status (lead, opportunity or customer). We'll not add login and authentication in this tutorial, as it will be the subject of another tutorial.
In our CRM database we’ll be making use of the following tables:
contacts — contains information about contacts/customers such as name, address and company/account,
activities — contains activities (phone calls, meetings, emails, etc.) about the contacts,
accounts — contains information about contact companies,
users — contains information about the application users.
We’ll also be using the following JOIN tables:
contact_status — contains contact statuses such as lead, opportunity or customer, which indicate the stage in the sales cycle,
activity_status — the activity status can be either pending, ongoing or completed,
contact_source — contains the contact sources.
The contacts table has the following fields:
id
title,
first name,
last name,
email,
phone,
address,
source_id,
date of first contact,
account_id,
status_id,
user_id,
The contact_status table has the following fields:
id,
status = (lead, proposal, customer, archived)
The contact_source table:
id,
name
The accounts table has the following fields:
id,
name,
description
The activities table has the following fields:
id,
date,
description,
contact_id
status_id
The activity_status table has the following fields:
id,
status
Creating Laravel 5.8 Models
According to the database structure above, we'll need to create the following Eloquent models:
Contact
Account
Activity
ContactStatus
ContactSource
ActivityStatus
Head back to your terminal and run the following commands:
$ php artisan make:model Contact --migration
$ php artisan make:model Account --migration
$ php artisan make:model Activity --migration
$ php artisan make:model ContactStatus --migration
$ php artisan make:model ContactSource --migration
$ php artisan make:model ActivityStatus --migration
This will create the models along with their corresponding migration files. The models live in the app folder and you can find the migration files in the database/migrations folder.
The --migration (or -m) flag is what creates the corresponding migration file for each model.
Next, in your terminal, run the following command to create the base tables:
$ php artisan migrate
You will get the following output:
Migration table created successfully.
Migrating: 2019_03_12_223818_create_contacts_table
Migrated: 2019_03_12_223818_create_contacts_table
Migrating: 2019_03_12_223832_create_accounts_table
Migrated: 2019_03_12_223832_create_accounts_table
Migrating: 2019_03_12_223841_create_activities_table
Migrated: 2019_03_12_223841_create_activities_table
Migrating: 2019_03_12_223855_create_contact_statuses_table
Migrated: 2019_03_12_223855_create_contact_statuses_table
Migrating: 2019_03_12_223904_create_contact_sources_table
Migrated: 2019_03_12_223904_create_contact_sources_table
Migrating: 2019_03_12_223912_create_activity_statuses_table
Migrated: 2019_03_12_223912_create_activity_statuses_table
In Laravel, you can specify the structure (table fields) in the migration files. Let’s start with the contacts table. Open the database/migrations/2019_03_12_223818_create_contacts_table.php file (the date prefix for the file will be different for you) and add the following changes:
public function up()
{
    Schema::create('contacts', function (Blueprint $table) {
        $table->bigIncrements('id');
        $table->timestamps();
        $table->string('title');
        $table->string('first_name');
        $table->string('last_name');
        $table->string('email');
        $table->string('phone');
        $table->string('address');
        $table->date('date');
        $table->bigInteger('user_id')->unsigned();
        $table->foreign('user_id')->references('id')->on('users');
    });
}
Next, open the database/migrations/<YOUR_TIMESTAMP>_create_accounts_table.php file and change accordingly:
public function up()
{
    Schema::create('accounts', function (Blueprint $table) {
        $table->bigIncrements('id');
        $table->timestamps();
        $table->string('name');
        // Blueprint has no description() method; a text column is what was intended here
        $table->text('description');
    });
}
Next, open the database/migrations/<YOUR_TIMESTAMP>_create_activities_table.php file and change accordingly:
public function up()
{
    Schema::create('activities', function (Blueprint $table) {
        $table->bigIncrements('id');
        $table->timestamps();
        // the activities table spec above also lists a date field
        $table->date('date');
        $table->string('description');
    });
}
Next, open the database/migrations/<YOUR_TIMESTAMP>_create_contact_statuses_table.php file and change accordingly:
public function up()
{
    Schema::create('contact_statuses', function (Blueprint $table) {
        $table->bigIncrements('id');
        $table->timestamps();
        $table->string('status');
    });
}
Next, open the database/migrations/<YOUR_TIMESTAMP>_create_contact_sources_table.php file and change accordingly:
public function up()
{
    Schema::create('contact_sources', function (Blueprint $table) {
        $table->bigIncrements('id');
        $table->timestamps();
        $table->string('name');
    });
}
Next, open the database/migrations/<YOUR_TIMESTAMP>_create_activity_statuses_table.php file and change accordingly:
public function up()
{
    Schema::create('activity_statuses', function (Blueprint $table) {
        $table->bigIncrements('id');
        $table->timestamps();
        $table->string('status');
    });
}
You can see that we didn't create any foreign keys between the tables. That's because we need to avoid any issues from creating a foreign key to a table that doesn't exist yet. The order of the migrations is important, so you either make sure that the tables being referenced are created first, or you create the tables without any foreign keys and then add a migration that updates them with the required relationships once all the tables exist. Now, let's create the update_contacts_table migration by running the following command:
$ php artisan make:migration update_contacts_table --table=contacts
Created Migration: 2019_03_12_235456_update_contacts_table
Open the database/migrations/<YOUR_TIMESTAMP>_update_contacts_table.php file and update accordingly:
public function up()
{
    Schema::table('contacts', function (Blueprint $table) {
        $table->bigInteger('source_id')->unsigned();
        $table->foreign('source_id')->references('id')->on('contact_sources');
        $table->bigInteger('account_id')->unsigned();
        $table->foreign('account_id')->references('id')->on('accounts');
        $table->bigInteger('status_id')->unsigned();
        $table->foreign('status_id')->references('id')->on('contact_statuses');
    });
}
We create three foreign key relationships to the contact_sources, accounts and contact_statuses tables.
Next, let’s create the update_activities_table migration by running the following command:
$ php artisan make:migration update_activities_table --table=activities
Created Migration: 2019_03_13_002644_update_activities_table
Open the database/migrations/<YOUR_TIMESTAMP>_update_activities_table.php file and update accordingly:
public function up()
{
    Schema::table('activities', function (Blueprint $table) {
        $table->bigInteger('contact_id')->unsigned();
        $table->foreign('contact_id')->references('id')->on('contacts');
        $table->bigInteger('status_id')->unsigned();
        $table->foreign('status_id')->references('id')->on('activity_statuses');
    });
}
We create two foreign keys, to the contacts and activity_statuses tables.
Now, run the following command to migrate your database:
$ php artisan migrate
Implementing the Models
The Eloquent ORM included with Laravel provides a beautiful, simple ActiveRecord implementation for working with your database. Each database table has a corresponding "Model" which is used to interact with that table. Models allow you to query for data in your tables, as well as insert new records into the table. The official docs
We can interact with our database tables using the corresponding Eloquent models, so we need to implement the required methods in each model.
Defining the Relationships between Models
A contact belongs to a source, a status, an account and to a user and has many activities.
An account belongs to a user (i.e created by a user) and has many contacts.
An activity belongs to a status, a contact and to a user.
A contact status has many contacts.
A contact source has many contacts.
An activity status has many activities.
Open the app/Account.php file and change accordingly:
<?php

namespace App;

use Illuminate\Database\Eloquent\Model;

class Account extends Model
{
    public function contacts(){
        return $this->hasMany('App\Contact');
    }

    public function user(){
        return $this->belongsTo('App\User');
    }
}
Next, open the app/Activity.php file and change accordingly:
<?php

namespace App;

use Illuminate\Database\Eloquent\Model;

class Activity extends Model
{
    public function contact(){
        return $this->belongsTo('App\Contact');
    }

    public function status(){
        return $this->belongsTo('App\ActivityStatus');
    }

    public function user(){
        return $this->belongsTo('App\User');
    }
}
Next, open the app/ActivityStatus.php file and change accordingly:
<?php

namespace App;

use Illuminate\Database\Eloquent\Model;

class ActivityStatus extends Model
{
    public function activities(){
        return $this->hasMany('App\Activity');
    }
}
Next, open the app/Contact.php file and update accordingly:
<?php

namespace App;

use Illuminate\Database\Eloquent\Model;

class Contact extends Model
{
    protected $fillable = [
        'title',
        'first_name',
        'last_name',
        'email',
        'phone',
        'address',
        'date'
    ];

    public function source(){
        return $this->belongsTo('App\ContactSource');
    }

    public function status(){
        return $this->belongsTo('App\ContactStatus');
    }

    public function account(){
        return $this->belongsTo('App\Account');
    }

    public function user(){
        return $this->belongsTo('App\User');
    }

    public function activities(){
        // a contact has many activities, not contacts
        return $this->hasMany('App\Activity');
    }
}
Next, open the app/ContactSource.php file and update accordingly:
<?php

namespace App;

use Illuminate\Database\Eloquent\Model;

class ContactSource extends Model
{
    public function contacts(){
        return $this->hasMany('App\Contact');
    }
}
Next, open the app/ContactStatus.php file and update accordingly:
<?php

namespace App;

use Illuminate\Database\Eloquent\Model;

class ContactStatus extends Model
{
    public function contacts(){
        return $this->hasMany('App\Contact');
    }
}
Finally, open the app/User.php file and update as follows:
<?php

namespace App;

use Illuminate\Notifications\Notifiable;
use Illuminate\Contracts\Auth\MustVerifyEmail;
use Illuminate\Foundation\Auth\User as Authenticatable;

class User extends Authenticatable
{
    use Notifiable;

    /**
     * The attributes that are mass assignable.
     *
     * @var array
     */
    protected $fillable = [
        'name', 'email', 'password',
    ];

    /**
     * The attributes that should be hidden for arrays.
     *
     * @var array
     */
    protected $hidden = [
        'password', 'remember_token',
    ];

    /**
     * The attributes that should be cast to native types.
     *
     * @var array
     */
    protected $casts = [
        'email_verified_at' => 'datetime',
    ];

    public function contacts(){
        return $this->hasMany('App\Contact');
    }

    public function activities(){
        return $this->hasMany('App\Activity');
    }

    public function accounts(){
        return $this->hasMany('App\Account');
    }
}
Next we’ll be creating the REST API controllers.
via Planet MySQL
Laravel 5.8 REST CRUD API Tutorial – Build a CRM [PART 1]: Eloquent Models and Relationships

Beyond the signature: DocuSign introduces new cloud-powered tools to manage agreement process

DocuSign CEO Daniel Springer. (DocuSign Photo)

DocuSign, best known for its digital signature business, is releasing new tools that focus on the entire process of drawing up and completing agreements.

It’s part of a broader effort by the company to expand beyond its core offering of e-signatures, in the face of growing competition in that space from Adobe and other rivals. Going forward, DocuSign wants to convince customers it is more than just an e-signature company, a move that opens up new fronts of competition against global companies like Oracle as well as Pacific Northwest startups like Icertis, all while attempting to get on a path of long-term profitability.

The three new tools are part of an ongoing evolution to owning what the company calls the “agreement cloud.” By providing tools to not only sign documents but create and manage them, DocuSign believes it will double its market opportunity, CEO Dan Springer said in an interview with GeekWire.

“E-signature was the first step to unlocking the modernization and automation of (customers’) system of agreement,” Springer said. “But now we are coming to what feels like the natural fruition of the start of that journey by saying we have a full agreement cloud solution. You can go back through your entire system as a customer and modernize that and automate that process and get rid of the paper and get rid of the manual processes to allow you to have a much more cost efficient and effective system that’s also a lot friendlier for the environment.”

DocuSign originally started in Seattle and later relocated its headquarters to the San Francisco Bay Area, though more than a third of the company’s 3,100 employees remain in its original hometown. With 1,074 employees and contractors, the Seattle office is DocuSign’s largest, followed by San Francisco with 816 people.

DocuSign leaders celebrate the company’s debut on Wall Street in April. (Nasdaq Photo)

Here is a look at the new tools, which will debut in DocuSign’s ’19 Release:

  • DocuSign Gen for Salesforce, available on Salesforce AppExchange, lets sales reps and other users automatically generate signature-ready contracts within Salesforce with a few clicks.
  • DocuSign Click lets organizations set up a single-click process for getting consent on standard agreement terms on websites, such as a privacy policy.
  • DocuSign ID Verification digitizes and automates verifying government identification in sensitive transactions, like opening a bank account, which would normally require someone to present a physical ID.

In addition to these new tools, DocuSign continues to invest in document creation and storage through the integration of SpringCM, the Chicago-based company that DocuSign acquired for $220 million last year.

Springer compared DocuSign’s evolution to when Salesforce branched out beyond sales to focus on broader services approximately a decade ago. He downplayed competition in the company’s new realm, arguing that no one else is providing services to shake up the entire agreement process, and instead said the biggest challenge is getting customers to recognize that DocuSign is not just a signature company.

DocuSign pegged the value of its original market of e-signatures at roughly $25 billion. But the market opportunity in the full system of agreement is twice as large, at $50 billion, the company says.

(Google Finance Chart)

DocuSign, which was the long-time leader on the GeekWire 200 list of the top Pacific Northwest startups, went public last April. Its stock shot up 30 percent out of the gate, with investors showing serious appetite for the enterprise software company. The stock has since climbed 43 percent, rising faster than the Nasdaq Stock Exchange it is listed on, though it is down 5.7 percent today.

DocuSign's surge on the public markets is part of an interesting trend of unprofitable growth companies faring much better among investors than companies that focus more on profits. DocuSign did eke out a small non-GAAP profit for the first time last year, though it has traditionally prioritized growth over profit.

Springer expects DocuSign to post operating profits of about 20 to 25 percent in three to five years, up from the 5 percent profits projected for this year. Springer thinks the company’s focus on growth and scale will ramp up profits in the coming years.

“If we find choices and tradeoffs, we are a growth company, so we will make the investments in growth and not have the increased profitability come as fast,” said Springer. “But at this current point we believe we’ll be doing both, keeping the high growth point and improving profitability.”

DocuSign CEO Dan Springer at the Nasdaq opening bell ceremony. (Nasdaq Photo)

DocuSign capped its first year as a public company by besting analyst expectations for revenue and profits in the fourth quarter. For the full year, DocuSign posted $701 million in revenue, up 35 percent year-over-year. This year, the company expects revenue of $910 million to $915 million, which would represent 29.8 to 30.5 percent growth over last year. DocuSign finished the year with 477,000 customers.

Infrastructure remains one of DocuSign’s largest costs as it delves further into the “agreement cloud.” Rather than working with a public cloud provider like Amazon or Microsoft, DocuSign has its own “rings” of data centers, with three apiece in the U.S. and Europe.

Springer says DocuSign has very high standards for redundancy and making sure its systems are always running smoothly. He gave an example of a customer using its e-signature services, T-Mobile, to show the importance of reliability and why it runs its own data centers.

“Whether you go into a store, whether you call into a call center, whether you go onto the website, every route you go it goes through DocuSign,” Springer said. “So if we were down, they are down for business.”

via GeekWire
Beyond the signature: DocuSign introduces new cloud-powered tools to manage agreement process

3 Tips to Remember When Building a Cloud-Ready Application

The cloud is capable of hosting virtually any application. However, certain development and design considerations can help optimize cloud deployment in a way that facilitates present performance and future growth. Therefore, both cloud providers and application owners must have a clear picture of what a cloud-ready application looks like. They can then facilitate the services and infrastructure needed to maximize performance.

Building a cloud-ready application isn’t something every techie inherently understands. Instead, it’s a skill that requires meticulous planning and seamless execution. When developers don’t build cloud applications for the cloud, those applications fail to deliver the value they should. Fortunately, programmers can easily learn how to create applications that are a perfect fit for the cloud.

Here are three of the most important tips you should consider when building a cloud-ready application.

1. A Collection of Services

Don’t build a cloud application as a monolith. Instead, deploy it as an aggregation of APIs. Therefore, start by defining the data. Next, create the cloud services needed to manage this data. Finally, combine these APIs into either higher-level composite services or complete compound applications. This, in a nutshell, is the principle of service-based application architecture.

While this is a widely understood concept in the developer community, a surprising number of programmers still create unwieldy, tightly coupled software. Moreover, their designs seem more focused on a spectacular user interface than they are on efficiently blending the underlying services. However, API-based and loosely coupled applications make it easier to tap into the distribution power of cloud infrastructure.

RELATED ARTICLE: GETTING THE MOST OUT OF YOUR CLOUD SYSTEM

Additional advantages of API-based software include the potential for reuse by other applications and much better granularity. Granularity can be especially handy where an application is made up of hundreds or thousands of underlying services that can be readily isolated. That way, application developers don’t have to reinvent the wheel each time they set out to create a new application.

Perhaps the best example of how convenient API reuse can be is the credit-check service that a wide range of applications use. If you have this as a reusable API, you can cut down on your development time.

2. Decoupling the Data

Decoupling an application’s components is one thing. However, decoupling the data is another. Moreover, the latter is every bit as important as the first. That’s because when your data is too deeply tied into an application, it won’t be a good fit for the cloud. Application architecture that separates data and processing will be more at home in cloud environments.

Moreover, when you decouple data, you gain the ability to store and process it on a server instance of your choosing. For instance, some organizations prefer that their data remain on their local in-house servers while the applications run in the cloud. This would be hard to do if the developer hasn’t sufficiently decoupled the data from the application.

RELATED ARTICLE: IS A MOBILE APP A GOOD IDEA FOR MY SMALL BUSINESS?

All the same, developers must balance decoupling against performance requirements. If there’s too large a gap between the data and the software, latency could weigh down overall performance. Ultimately, developers must make sure data remains separate from the application but that it does not sit too far away for the application to easily leverage it.

One way to do that is to employ a caching system. This can bolster performance by storing locally the most commonly accessed data sets. Therefore, this reduces the number of read requests that have to be relayed back to the database. However, caching systems must be built into the application itself and tested for efficiency.

3. Model for Scale

One of the main reasons a growing number of governments and large corporations have opted to run their applications in the cloud is the ease of scaling. Additionally, some cloud providers offer auto-scaling provisioning to accommodate rapid changes in network, database, and application loads. But such scaling capacity won’t matter much if the applications themselves aren’t built from the ground up with scaling in mind.

Designing for scaling means thinking about how the application performs under a growing load. Do the application and the back-end database behave the same way irrespective of whether 10 or 1,000 users simultaneously log into it? If such perfectly consistent behavior isn’t possible, is the deterioration in performance small enough to go largely unnoticed by end users?

RELATED ARTICLE: PYTHON: VERSATILITY IN CODE DESIGN THAT YOUR TEAM MUST HAVE

Building an application that’s ready for cloud scaling means understanding its regular workload and defining a path for scaling it during workload surges. Such architecture must go hand-in-hand with robust monitoring mechanisms that leverage the appropriate performance and access management tools.

Building a Cloud-Ready Application Requires Attention to Detail

Building a cloud-ready application requires you to pay attention to several aspects of the architecture. However, the above three are the most important. If you haven’t tried to build a cloud-ready application before, you can expect to make a couple of mistakes at the beginning. However, with practice you’ll eventually understand what works.




via Business Opportunities Weblog
3 Tips to Remember When Building a Cloud-Ready Application