Step-By-Step Guide to Deploying Laravel Applications on Virtual Private Servers


Developing modern full-stack web applications has become much easier thanks to Laravel but deploying them on a real server is another story.

There are just so many options.

PaaS like Heroku or AWS Elastic Beanstalk, unmanaged virtual private servers, shared hosting, and so on.

Deploying a Laravel app on a shared server using cPanel is as easy as zipping up the source code along with all the dependencies and uploading it to the server. But on shared hosting, you don’t have much control over the server.

PaaS like Heroku or AWS Elastic Beanstalk strikes a good balance between ease of usage and control, but they can be expensive at times. A standard 1x dyno from Heroku, for example, costs $25 per month and comes with only 512MB of RAM.

Unmanaged virtual private servers are affordable and give you a lot of control over the server. You can get a server with 2GB of RAM, 20GB of SSD space, and 2TB of transfer bandwidth for only $15 per month.

Now the problem with unmanaged virtual private servers is that they are unmanaged. You’ll be responsible for installing all necessary software, configuring them, and keeping them updated.

In this article, I’ll guide you step-by-step in the process of how to deploy a Laravel project on an unmanaged virtual private server (we’ll refer to it as VPS from now on). If you want to check out the benefits of the framework first, go ahead and get an answer to the question of why use the Laravel framework. If you are ready, without any further ado, let’s jump in.

Prerequisites

The article assumes that you have previous experience working with the Linux command line. The server will use Ubuntu as its operating system, and you’ll have to perform all the necessary tasks from the terminal. The article also expects you to understand basic concepts like sudo, file permissions, the difference between a root and a non-root user, and git.

Project Code and Deployment Plan

I’ve built a dummy project for this article. It’s a simple question board application where users can post a question, and others can answer that question. You can consider this a dumbed-down version of StackOverflow.

The project source code is available in the https://github.com/fhsinchy/guide-to-deploying-laravel-on-vps repository. Fork this repository and clone it on your local computer.

Once you have a copy of the project on your computer, you’re ready to start the Laravel deployment process. You’ll start by provisioning a new VPS and setting up a way for pushing the source from your local computer to the server.

Provisioning a New Ubuntu Server

There are several VPS providers out there, such as DigitalOcean, Vultr, Linode, and Hetzner. Although working with an unmanaged VPS is more or less the same across providers, they don’t provide the same kind of services.

DigitalOcean, for example, provides managed database services. Linode and Vultr, on the other hand, don’t have such services. You don’t have to worry about these differences.

I’ll demonstrate only the unmanaged way of doing things, so regardless of the provider you’re using, the steps should be identical.

Before provisioning a new server, you’ll have to generate SSH keys.

Generating New SSH Keys

According to Wikipedia – “Secure Shell (SSH) is a cryptographic network protocol for operating network services securely over an unsecured network.” It allows you to connect to a remote server using a password or a key-pair.

If you’re already familiar with SSH and have previously generated SSH key-pairs on your computer, you may skip this subsection. To generate a new key-pair on macOS, Linux, or Windows 10 machines, execute the following command:

ssh-keygen -t rsa

You’ll see several prompts on the terminal. You can go through them by pressing enter. You don’t have to set a passphrase either. Once you’ve generated the key-pair, you’ll find a file named id_rsa.pub inside the ~/.ssh/ directory. You’ll need this file when provisioning a new VPS.
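If you need to copy the public key, you can simply print it to the terminal and copy it from there (the path assumes the default file name from the command above):

cat ~/.ssh/id_rsa.pub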

Provisioning a New VPS

I’ve already said there are some differences between the VPS service providers, so if you want to be absolutely in line with this article, use DigitalOcean.

A single virtual private server on DigitalOcean is known as a droplet. On Vultr, it’s called an instance, and on Linode, it’s called a linode. Log into your provider of choice and create a new VPS. Use Ubuntu 20.04 LTS as the operating system.

For size, pick the one with 1GB of RAM and 25GB of SSD storage. It should cost you around $5 per month. For the region, choose the one closest to your users. I live in Bangladesh, and most of my users are from here, so I deploy my applications in the Singapore region.

Under the SSH section, create a new SSH key. Copy the content from the ~/.ssh/id_rsa.pub file and paste it as the content. Put a descriptive name for the key and save.

You can leave the rest of the options untouched. Most of the providers come with an automatic backup service. For this demonstration, keep that option disabled. But in a real scenario, it can be a lifesaver. After the process finishes, you’ll be ready to connect to your new server using SSH.

Performing Basic Setup

Now that your new server is up and running, it’s time to do some basic setup. First, use SSH with the server IP address to log in as the root user.

ssh root@104.248.157.172

You can find the server’s IP address on the dashboard or inside the server details. Once you’re inside the server, the first thing to do is create a new non-root user.

By default, every server comes with the root user only. The root user, as you may already know, is very mighty. If someone manages to hack your server and logs in as the root user, the hacker can wreak havoc. Disabling login for the root user can prevent such mishaps.

Also, logging in using a key-pair is more secure than logging in using a password, so password-based login should be disabled for all users.
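If your provider doesn’t handle this for you, you can enforce both rules later by setting PermitRootLogin no and PasswordAuthentication no in /etc/ssh/sshd_config and restarting the SSH service. Do this only after confirming that key-based login works for your non-root user, or you may lock yourself out.

sudo nano /etc/ssh/sshd_config
sudo systemctl restart ssh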

To create a new user from the terminal, execute the following command inside your server:

adduser nonroot

The name nonroot can be anything you want. I used nonroot to make it clear that this is a non-root user. The adduser program will ask for a password and several other pieces of information. Put a strong password and leave the rest empty.

After creating the user, you’ll have to add this new user to the sudo group. Otherwise, the nonroot user will be unable to execute commands using sudo.

usermod -aG sudo nonroot

In this command, sudo is the group name, and nonroot is the username. Now, if you try to log into this account, you’ll face a permission denied error.

It happens because most of the VPS providers disable login using a password when you add an SSH key to the server, and you haven’t configured the new user to use SSH key-pairs. One easy way to fix this is to copy the content of /root/.ssh directory to the /home/nonroot/.ssh directory. You can use the rsync program to do this.

rsync --archive --chown=nonroot:nonroot /root/.ssh /home/nonroot

The --archive option for rsync copies directories recursively, preserving symbolic links, user and group ownership, and timestamps. The --chown option sets the nonroot user as the owner in the destination. Now you should be able to log in as the new user using SSH.
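For example, using the same demo IP address that appears throughout this article (replace it with your own server’s address):

ssh nonroot@104.248.157.172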

After logging in as a non-root user, you should update the operating system, including all the installed programs on the server. To do so, execute the following command:

sudo apt update && sudo apt upgrade -y && sudo apt dist-upgrade -y

Downloading and installing the updates will take a few minutes. During this process, if you see a screen titled “Configuring openssh-server” asking about some file changes, select the “keep the local version currently installed” option and press enter.

After the update process finishes, reboot the server by executing the sudo reboot command. Wait a few minutes for the server to boot again and log back in as a non-root user.

Deploying Code on the Server

After completing the basic setup, the next thing you’ll tackle is deploying code on the server. I’ve seen people clone the repository somewhere on the production server and log into the server to perform a pull whenever there are new changes to the code.

There is a much better way of doing this. Instead of logging into the server to perform a pull, you can use the server itself as a repository and push code directly to it. You can also automate post-deployment steps like installing dependencies, running the migrations, and so on, which makes deploying Laravel to the server effortless. But before doing all that, you’ll first have to install PHP and Composer on the server.

Installing PHP

You can find a list of PHP packages required by Laravel on the official docs. To install all these packages, execute the following command on your server:

sudo apt install php7.4-fpm php7.4-bcmath php7.4-json php7.4-mbstring php7.4-xml -y

Depending on whether you’re using MySQL, PostgreSQL, or SQLite in your project, you’ll have to install one of the following packages:

sudo apt install php7.4-mysql php7.4-pgsql php7.4-sqlite3 -y

The following package provides support for the Redis in-memory databases:

sudo apt install php7.4-redis

Apart from these packages, you’ll also need php-curl, php-zip, zip, unzip, and curl utilities.

sudo apt install zip unzip php7.4-zip curl php7.4-curl -y

The question bank project uses MySQL as its database system and Redis for caching and running queues, so you’ll have to install the php7.4-mysql and the php7.4-redis packages.

Depending on the project, you may have to install more PHP packages. Projects that work with images, for example, usually depend on the php-gd package. Also, you don’t have to mention the PHP version with every package name. If you don’t specify a version number, APT will automatically install whatever version is the latest.

At the time of writing, PHP 7.4 is the latest version in Ubuntu’s package repositories, but considering that the question board project requires PHP 7.4 and PHP 8 may become the default in the future, I’ve specified the version number throughout this article.
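To verify that PHP and the FPM service are installed correctly, you can check the version and the service status (the service name below assumes the PHP 7.4 packages installed above):

php -v
sudo systemctl status php7.4-fpm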

Installing Composer

After installing PHP and all the required packages on the server, now you’re ready to install Composer. To do so, navigate to the official composer download page and follow the command-line installation instructions or execute the following commands:

php -r "copy('https://getcomposer.org/installer', 'composer-setup.php');"
sudo php composer-setup.php --install-dir /usr/local/bin --filename composer
php -r "unlink('composer-setup.php');"
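To confirm that Composer is now available globally, you can check its version:

composer --version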

Now that you’ve installed both PHP and Composer on your server, you’re ready to configure the automated deployment of your code.

Deploying Code Using Git

For automating code deployment on the server, log in as a non-root user and create a new directory under the /home/nonroot directory. You’ll use this directory as the repository and push production code to it.

mkdir -p /home/nonroot/repo/question-board.git

The -p option to the mkdir command will create any nonexistent parent directories. Next, cd into the newly created directory and initialize a new bare git repository.

cd /home/nonroot/repo/question-board.git
git init --bare

A bare repository is the same as a regular git repository, except it doesn’t have a working tree. The practical usage of such a repository is as a remote origin. Don’t worry if you don’t fully understand what I said just now. Things will become clearer as you keep going.

Assuming you’re still inside the /home/nonroot/repo/question-board.git directory, cd inside the hooks subdirectory and create a new file called post-receive.

cd hooks
touch post-receive

Files inside this directory are regular shell scripts that git invokes when some major event happens on a repository. Whenever you push some code, git will wait until all the code has been received and then call the post-receive script.

Assuming you’re still inside the hooks directory, open the post-receive script by executing the following command:

nano post-receive

Now update the script’s content as follows:

#!/bin/sh

sudo /sbin/deploy

As you may have already guessed, /sbin/deploy is another script you’ll have to create. The /sbin directory is mainly responsible for storing scripts that perform administrative tasks. Go ahead and touch the /sbin/deploy script and open it using the nano text editor.

sudo touch /sbin/deploy
sudo nano /sbin/deploy

Now update the script’s content as follows:

#!/bin/sh

git --work-tree=/srv/question-board --git-dir=/home/nonroot/repo/question-board.git checkout -f

As evidenced by the #!/bin/sh line, this is a shell script. After that line, the only line of code in this script checks out the content of the /home/nonroot/repo/question-board.git repository into the /srv/question-board directory.

Here, the --work-tree option specifies the destination directory, and the --git-dir option specifies the source repository. I like to use the /srv directory for storing files served by this server. If you want to use the /var/www directory instead, go ahead.

Save the file by hitting Ctrl + O and exit nano by hitting the Ctrl + X key combination. Make sure that both the deploy script and the post-receive hook have executable permission by executing the following commands:

sudo chmod +x /sbin/deploy
chmod +x /home/nonroot/repo/question-board.git/hooks/post-receive

The last step to make this process functional is creating the work tree or the destination directory. To do so, execute the following command:

sudo mkdir /srv/question-board

Now you have a proper work tree directory, a bare repository, and a post-hook that in turn calls the /sbin/deploy script with sudo. But, how would the post-receive hook invoke the /sbin/deploy script using sudo without a password?

Open the /etc/sudoers file on your server using the nano text editor and append the following line of code at the end of the file:

nonroot ALL=NOPASSWD: /sbin/deploy

This line means that the nonroot user will be able to execute the /sbin/deploy script with sudo on ALL hosts without being asked for a password (NOPASSWD). Save the file by pressing Ctrl + O and exit nano by pressing the Ctrl + X key combination.

Finally, you’re ready to push the project source code. Assuming that you’ve already forked and cloned the https://github.com/fhsinchy/guide-to-deploying-laravel-on-vps repository on your local system, open up your terminal on the project root and execute the following command:

git remote add production ssh://nonroot@104.248.157.172/home/nonroot/repo/question-board.git

Make sure to replace my IP address with the IP address of your server. Now, assuming that the stable code is on the master branch, you can push code to the server by executing the following command:

git push production master

After sending the code to the server, log back in as a non-root user and cd into the /srv/question-board directory. Use the ls command to list out the content, and you should see that git has successfully checked out your project code.

Automating Post Deployment Steps

Congratulations, you can now deploy a Laravel project to the server directly. But is that enough? What about the post-deployment steps? Tasks like installing or updating dependencies, migrating the database, caching the views, configs, and routes, restarting workers, and so on.

Honestly, automating these tasks is much easier than you may think. All you have to do is create a script that does all of this for you, set some permissions, and call that script from inside the post-receive hook.

Create another script called post-deploy inside the /sbin directory. After creating the file, open it inside the nano text editor.

sudo touch /sbin/post-deploy
sudo nano /sbin/post-deploy

Update the content of the post-deploy script as follows. Don’t worry if you don’t clearly understand everything. I’ll explain each line in detail.

#!/bin/sh

cd /srv/question-board

cp -n ./.env.example ./.env

COMPOSER_ALLOW_SUPERUSER=1 composer install --no-dev --optimize-autoloader
COMPOSER_ALLOW_SUPERUSER=1 composer update --no-dev --optimize-autoloader

The first line changes the working directory to the /srv/question-board directory. The second line copies the .env.example file to .env. The -n option makes sure that the cp command doesn’t overwrite a previously existing file.

The third and fourth commands will install all the necessary dependencies and update them if necessary. The COMPOSER_ALLOW_SUPERUSER environment variable disables a warning about running the composer binary as root.

Save the file by pressing Ctrl + O and exit nano by pressing Ctrl + X key combination. Make sure that the script has executable permission by executing the following command:

sudo chmod +x /sbin/post-deploy

Open the /home/nonroot/repo/question-board.git/hooks/post-receive script with nano and append the following line after the sudo /sbin/deploy script call:

sudo /sbin/post-deploy

Make sure that you call the post-deploy script after calling the deploy script. Save the file by pressing Ctrl + O and exit nano by pressing the Ctrl + X key combination.

Open the /etc/sudoers file on your server using the nano text editor once again and update the previously added line as follows:

nonroot ALL=NOPASSWD: /sbin/deploy, /sbin/post-deploy

Save the file by pressing Ctrl + O and exit nano by pressing the Ctrl + X key combination. You can add more post-deployment steps to this script if necessary.

To test the new post-deploy script, make some changes to your code, commit the changes, and push to the production remote’s master branch. This time you’ll see the composer package installation progress on the terminal (and, later on, output from any artisan calls you add to the script).

Once the deployment process finishes, log back into the server, cd into the /srv/question-board directory, and list the content by executing the following command:

ls -la

Among other files and folders, you’ll see a newly created vendor directory and a .env file. At this point, you can generate the application encryption key required by Laravel. To do so, execute the following command:

sudo php artisan key:generate

If you look at the content of the .env file using the nano text editor, you’ll see the APP_KEY value populated with a long string.
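Alternatively, a quick way to check the key without opening the editor, run from inside the /srv/question-board directory:

grep APP_KEY .env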

Installing and Configuring NGINX

Now that you’ve successfully pushed the source code to the server, the next step is to install a web server and configure it to serve your application. I’ll use NGINX in the article. If you want to use something else like Apache, you’ll be on your own.

This article will strictly focus on configuring the webserver for serving a Laravel application and will not discuss NGINX-related stuff in detail. NGINX itself is a very complex software, and if you wish to learn NGINX from the ground up, The NGINX Handbook is a solid resource.

To install NGINX on your Ubuntu server, execute the following command:

sudo apt install nginx -y

This command should install NGINX and register it as a systemd service. To verify, you can execute the following command:

sudo systemctl status nginx

You should see the service reported as active (running) in the output.

You can regain control of the terminal by hitting q on your keyboard. Now that NGINX is running, you should see the default NGINX welcome page if you visit the server IP address.
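You can also check from the terminal with curl; replace the IP address with your server’s:

curl -I http://104.248.157.172

A 200 OK response means NGINX is up and serving the default page.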

You’ll have to change the NGINX configuration to serve your Laravel application instead. To do so, create a new file /etc/nginx/sites-available/question-board and open the file using the nano text editor.

sudo touch /etc/nginx/sites-available/question-board
sudo nano /etc/nginx/sites-available/question-board

This file will contain the NGINX configuration code for serving the question board application. Configuring NGINX from scratch can be difficult, but the official Laravel docs have a pretty good configuration. The following is the configuration copied from the docs:

server {
    listen 80;
    server_name 104.248.157.172;
    root /srv/question-board/public;

    add_header X-Frame-Options "SAMEORIGIN";
    add_header X-Content-Type-Options "nosniff";

    index index.php;

    charset utf-8;

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    location = /favicon.ico { access_log off; log_not_found off; }
    location = /robots.txt  { access_log off; log_not_found off; }

    error_page 404 /index.php;

    location ~ \.php$ {
        fastcgi_pass unix:/var/run/php/php7.4-fpm.sock;
        fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
        include fastcgi_params;
    }

    location ~ /\.(?!well-known).* {
        deny all;
    }
}

You don’t have to make any changes to this code except the first few lines inside the server block. Make sure you’re using the IP address from your server as the server_name, and that root points to the correct directory. You’ll replace this IP address with a domain name in a later section.

Also, inside the location ~ \.php$ { } block, make sure that the fastcgi_pass directive is pointing to the correct PHP version. In this demonstration, I’m using PHP 7.4, so this configuration is correct. If you’re using a different version, like 8.0 or 8.1, update the code accordingly.

If you cd into the /etc/nginx directory and list out the content using the ls command, you’ll see two folders named sites-available and sites-enabled.

The sites-available folder holds all the different configuration files serving applications (yes, there can be multiple) from this server.

The sites-enabled folder, on the other hand, contains symbolic links to the active configuration files. So if you do not make a symbolic link of the /etc/nginx/sites-available/question-board file inside the sites-enabled folder, it’ll not work. To do so, execute the following command:

sudo ln -s /etc/nginx/sites-available/question-board /etc/nginx/sites-enabled/question-board
sudo rm /etc/nginx/sites-enabled/default

The second command gets rid of the default configuration file to avoid any unintended conflict. To test if the configuration code is okay or not, execute the following command:

sudo nginx -t

If everything’s alright, reload the NGINX configuration by executing the following command:

sudo nginx -s reload

If you visit your server IP address now, you’ll see that NGINX is passing requests to your application, but the application is throwing a 500 internal server error.

The error happens because the application is trying to write to the storage/logs folder but fails: the root user owns the /srv/question-board directory, while the NGINX process runs as the www-data user. To make the /srv/question-board/storage directory writable by the application, you’ll have to alter the directory permissions.
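If the browser doesn’t show the detailed error (for example, when APP_DEBUG is false), you can check the NGINX error log, which captures errors coming from PHP-FPM:

sudo tail -n 50 /var/log/nginx/error.log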

Configuring Directory Permissions

There are different ways of configuring directory permissions in a Laravel project, but I’ll show you the one I use. First, you’ll have to assign www-data, the user that runs the NGINX process, as the group owner of the /srv/question-board directory. To do so, execute the following command:

sudo chown -R :www-data /srv/question-board

Then, set the permissions of the /srv/question-board/storage directory to 775, which means full access for the owner and the group (including www-data) and read and execute access for everyone else, by executing the following command:

sudo chmod -R 775 /srv/question-board/storage

Finally, there is one more subdirectory that you have to make writable. That is the /srv/question-board/bootstrap/cache directory. To do so, execute the following command:

sudo chmod -R 775 /srv/question-board/bootstrap/cache

If you go back to the server IP address now and refresh, you should see that the application is working fine.

Installing and Configuring MySQL

Now that you’ve successfully installed and configured the NGINX web server, it’s time for you to install and configure MySQL. To do so, install the MySQL server by executing the following command:

sudo apt install mysql-server -y

After the installation process finishes, execute the following command to make your MySQL installation more secure:

sudo mysql_secure_installation

First, the script will ask if you want to use the validate password component or not. Input “Y” as the answer and hit enter. Then, you’ll have to set the desired level of password difficulty. I recommend setting it to high. Picking a hard-to-guess password every time you want to create a new user can be annoying, but for the sake of security, roll with it. In the next step, set a secure password for the root user. You can answer “Y” to the rest of the questions. Give the questions a read if you want to.

Now, before you can log into your database server as root, you’ll have to switch to the root user. To do so, execute the following command:

sudo su

Log into your database server as root by executing the following command:

mysql -u root

Once you’re in, create a new database for the question board application by executing the following SQL code:

CREATE DATABASE question_board;

Next, create a new database user by executing the following SQL code:

CREATE USER 'nonroot'@'localhost' IDENTIFIED BY 'password';

Again, I used the name nonroot to clarify that this is a non-root user. You can use whatever you want as the name. Also, replace the word password with something more secure.

After that, grant the newly created user full privileges on the question_board database by executing the following SQL code:

GRANT ALL PRIVILEGES ON question_board.* TO 'nonroot'@'localhost';

In this code, question_board.* means all the tables of the question_board database. Finally, quit the MySQL client by executing the \q command and exit the root shell by invoking the exit command.

Now, try logging in as the nonroot user by executing the following command:

mysql -u nonroot -p

The MySQL client will ask for the password. Use the password you put in when creating the nonroot user. If you manage to log in successfully, exit the MySQL client by executing the \q command.
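As an extra sanity check, you can confirm from the shell that the new user can see the question_board database (you’ll be prompted for the same password):

mysql -u nonroot -p -e "SHOW DATABASES;"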

Now that you have a working database server, it’s time to configure the question board project to make use of it. First, cd into the /srv/question-board directory and open the .env file using the nano text editor:

cd /srv/question-board
sudo nano .env

Update the database configuration as follows:

DB_CONNECTION=mysql
DB_HOST=localhost
DB_PORT=3306
DB_DATABASE=question_board
DB_USERNAME=nonroot
DB_PASSWORD=password

Make sure to replace the username and password with yours. Save the file by pressing Ctrl + O and exit nano by pressing Ctrl + X key combination. To test out the database connection, try migrating the database by executing the following command:

php artisan migrate --force

If everything goes fine, that means the database connection is working. The project comes with two seeder classes, one for seeding the admin user and another for the categories. Execute the following commands to run them:

php artisan db:seed --class=AdminUserSeeder
php artisan db:seed --class=CategoriesSeeder

Now, if you visit the server IP address and navigate to the /questions route, you’ll see the list of categories. You’ll also be able to log in as the admin user using the following credentials:

email: [email protected]
password: password

If you’ve been working with Laravel for a while, you may already know that it is common practice to add new migration files when there is a database change. To automate the process of running the migrations on every deployment, open the /sbin/post-deploy script using nano once again and append the following line at the end of the file:

php artisan migrate --force

The --force option will suppress the artisan warning about running migrations in a production environment. Unlike migrations, seeders should run only once. If you add new seeders in later deployments, you’ll have to run them manually.
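For reference, if you’ve followed along, your /sbin/post-deploy script should now look roughly like this:

#!/bin/sh

cd /srv/question-board

cp -n ./.env.example ./.env

COMPOSER_ALLOW_SUPERUSER=1 composer install --no-dev --optimize-autoloader
COMPOSER_ALLOW_SUPERUSER=1 composer update --no-dev --optimize-autoloader

php artisan migrate --force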

Configuring Laravel Horizon

The question board project comes with Laravel Horizon pre-installed and pre-configured. Horizon needs a running Redis server, so make sure Redis is installed on the server before you continue. Once Redis is up and running, you’re ready to start processing jobs.
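If Redis isn’t installed yet, you can get it from Ubuntu’s package repositories and confirm that it responds:

sudo apt install redis-server -y
redis-cli ping

The ping command should print PONG.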

The official docs suggest using the supervisor program for running Laravel Horizon on a production server. To install the program, execute the following command:

sudo apt install supervisor -y

Supervisor configuration files live within your server’s /etc/supervisor/conf.d directory. Create a new file /etc/supervisor/conf.d/horizon.conf and open it using the nano text editor:

sudo touch /etc/supervisor/conf.d/horizon.conf
sudo nano /etc/supervisor/conf.d/horizon.conf

Update the file’s content as follows:

[program:horizon]
process_name=%(program_name)s
command=php /srv/question-board/artisan horizon
autostart=true
autorestart=true
user=root
redirect_stderr=true
stdout_logfile=/var/log/horizon.log
stopwaitsecs=3600

Save the file by pressing Ctrl + O and exit nano by pressing the Ctrl + X key combination. Now, execute the following commands to update the supervisor configuration and start the horizon process:

sudo supervisorctl reread
sudo supervisorctl update
sudo supervisorctl start horizon
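You can also quickly confirm the process state from the terminal before opening the browser:

sudo supervisorctl status horizon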

To test out if Laravel Horizon is running or not, visit your server’s IP address and navigate to the /login page. Log in as the admin user and navigate to the /horizon route. You’ll see Laravel Horizon in the active state.

I’ve configured Laravel Horizon to only let the admin user in, so if you log in with some other user credential, you’ll see a 403 forbidden error message on the /horizon route.

One thing that catches many people off guard is that if you make changes to your jobs, you’ll have to restart Laravel Horizon to read those changes. I recommend adding a line to the /sbin/post-deploy script to reinitiate the Laravel Horizon process on every deployment.

To do so, open the /sbin/post-deploy using the nano text editor and append the following line at the end of the file:

sudo supervisorctl restart horizon

This command will stop and restart the Laravel Horizon process on every deployment.

Configuring a Domain Name With HTTPS

For this step to work, you’ll have to own a custom domain name of your own. I’ll use the questionboard.farhan.dev domain name for this demonstration.

Log into your domain name provider of choice and go to the DNS settings for your domain name. Whenever you want a domain name to point to a server’s IP address, you need to create a DNS record of type A.

To do so, add a new DNS record with the following attributes:

Type: A Record

Host: questionboard

Value: 104.248.157.172

Make sure to replace my IP address with yours. If you want your top-level domain to point to an IP address instead of a subdomain, just put a @ as the host.
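Depending on your DNS provider, the new record can take a few minutes to propagate. You can check whether the name already resolves to your server with dig (available in the dnsutils package on Ubuntu); replace the domain with your own:

dig +short questionboard.farhan.dev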

Now go back to your server and open the /etc/nginx/sites-available/question-board config file using the nano text editor. Remove the IP address from the server_name directive and write your domain name instead. Do not put http:// or https:// at the beginning.

You can put multiple domain names such as the top-level domain and the www subdomain separated by spaces. Save the configuration file by pressing Ctrl + O and Ctrl + X key combination. Reload NGINX configuration by executing the following command:

sudo nginx -s reload

Now you can visit your application using your domain name instead of the server’s IP address. To enable HTTPS on your application, you can use the certbot program.

To do so, install certbot by executing the following command:

sudo snap install --classic certbot

Certbot is a Python program that allows you to obtain free SSL certificates from Let’s Encrypt very easily. After installing it, execute the following command to get a new certificate:

sudo certbot --nginx

First, the program will ask for your email address. Next, it’ll ask if you agree with the terms and agreements or not.

Then, it’ll ask you about sharing your email address with the Electronic Frontier Foundation.

In the third step, the program will read the NGINX configuration file and extract the domain names from the server_name directive. Look at the domain names it shows and press enter if they are all correct. After deploying the new certificate, the program will congratulate you, and now you’ve got free HTTPS protection for 90 days.

Before the certificate expires, certbot will attempt to renew it automatically. To test the auto-renew feature, execute the following command:

sudo certbot renew --dry-run

If the simulation succeeds, you’re good to go.

Configuring a Firewall

Having a properly configured firewall is very important for the security of your server. In this article, I’ll show you how you can configure the popular UFW program.

UFW stands for Uncomplicated Firewall, and it comes installed by default on Ubuntu. You’ll configure UFW to, by default, allow all outgoing traffic from the server and deny all incoming traffic to the server. To do so, execute the following commands:

sudo ufw default deny incoming
sudo ufw default allow outgoing

Denying all incoming traffic means that no one, including you, will be able to access your server in any way. The next step is to allow incoming requests on three specific ports. They are as follows:

Port 80, used for HTTP traffic.

Port 443, used for HTTPS traffic.

Port 22, used for SSH traffic.

To do so, execute the following commands:

sudo ufw allow http
sudo ufw allow https
sudo ufw allow ssh

Finally, enable UFW by executing the following command:

sudo ufw enable

That’s pretty much it. Your server now only allows HTTP, HTTPS, and SSH traffic coming from the outside, making your server a bit more secure.
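You can verify the active rules at any time:

sudo ufw status verbose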

Laravel Post-deployment Optimizations

Your application is now almost ready to accept requests from all over the world. One last step that I would like to suggest is caching the Laravel configuration, views, and routes for better performance.

To do so, open the /sbin/post-deploy script using the nano text editor and append the following lines at the end of the file:

php artisan config:cache
php artisan route:cache
php artisan view:cache

Now, on every deployment, the caches will be cleared and rebuilt automatically. Also, make sure to set APP_ENV to production and APP_DEBUG to false inside the .env file. Otherwise, you may unintentionally expose sensitive information about your server.
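A quick way to double-check those two values on the server:

grep -E 'APP_ENV|APP_DEBUG' /srv/question-board/.env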

Conclusion

I would like to thank all Laravel developers for the time they’ve spent reading this article. I hope you’ve enjoyed it and have learned some handy stuff regarding application deployment. If you want to learn more about NGINX, consider checking out my open-source NGINX Handbook with tons of fun content and examples.

Also, if you want to broaden your knowledge of Laravel, you can check out the Laravel vs Symfony, Laravel Corcel, and Laravel Blockchain articles.

If you have any questions or confusion, feel free to reach out to me. I’m available on Twitter and LinkedIn and always happy to help. Till the next one, stay safe and keep on learning.

Laravel News Links

Synchronize Tables on the Same Server with pt-table-sync


It is a common use case to synchronize data between two tables inside MySQL servers. This blog post describes one specific case: how to synchronize data between two different tables on the same MySQL server. This could be useful, for example, if you test DML query performance and do not want to affect production data. After a few experiments, tables get out of sync and you may need to update the test one to continue working on improving your queries. There are other use cases when you may need to synchronize the content of two different tables on the same server, and this blog will show you how to do it.

Table Content Synchronization

The industry-standard tool for table content synchronization – pt-table-sync – is designed to synchronize data between different MySQL servers and does not support bulk synchronization between two different databases on the same server yet. If you try it, you will receive an error message:

$ pt-table-sync D=db1 D=db2 --execute --no-check-slave
You specified a database but not a table in D=db1.  Are you trying to sync only tables in the 'db1' database?  If so, use '--databases db1' instead.

However, it is possible to synchronize two individual tables on the same server by providing table names as DSN parameters:

$ pt-table-sync D=db1,t=foo D=db2,t=foo --execute --verbose
# Syncing D=db2,t=foo
# DELETE REPLACE INSERT UPDATE ALGORITHM START    END      EXIT DATABASE.TABLE
#      0       0      5      0 GroupBy   03:24:26 03:24:26 2    db1.foo

You may even synchronize two tables in the same database:

$ pt-table-sync D=db2,t=foo D=db2,t=bar --execute --verbose
# Syncing D=db2,t=bar
# DELETE REPLACE INSERT UPDATE ALGORITHM START    END      EXIT DATABASE.TABLE
#      0       0      5      0 GroupBy   03:25:34 03:25:34 2    db2.foo

We can use this feature to perform bulk synchronization.

First, we need to prepare a list of tables we want to synchronize:

$ mysql --skip-column-names -se "SHOW TABLES IN db2" > db1-db2.sync

$ cat db1-db2.sync
bar
baz
foo

Then we can invoke the tool as follows:

$ for i in `cat db1-db2.sync`; do pt-table-sync D=db1,t=$i D=db2,t=$i --execute --verbose; done
# Syncing D=db2,t=bar
# DELETE REPLACE INSERT UPDATE ALGORITHM START    END      EXIT DATABASE.TABLE
#      0       0      0      0 GroupBy   03:31:52 03:31:52 0    db1.bar
# Syncing D=db2,t=baz
# DELETE REPLACE INSERT UPDATE ALGORITHM START    END      EXIT DATABASE.TABLE
#      0       0      5      0 GroupBy   03:31:52 03:31:52 2    db1.baz
# Syncing D=db2,t=foo
# DELETE REPLACE INSERT UPDATE ALGORITHM START    END      EXIT DATABASE.TABLE
#      0       0      0      0 GroupBy   03:31:52 03:31:52 0    db1.foo

If you have multiple database pairs to sync, you can agree on a file naming convention and parse it before looping through table names. For example, if you use the pattern

SOURCE_DATABASE-TARGET_DATABASE.sync

you can use the following loop:

$ for tbls in `ls *.sync`
>   do dbs=`basename -s .sync $tbls`
>   source=${dbs%-*}
>   target=${dbs##*-}
>   for i in `cat $tbls`
>     do pt-table-sync D=$source,t=$i D=$target,t=$i --execute --verbose 
>   done
> done
# Syncing D=cookbook_copy,t=limbs
# DELETE REPLACE INSERT UPDATE ALGORITHM START    END      EXIT DATABASE.TABLE
#      0       0      4      0 GroupBy   04:07:07 04:07:07 2    cookbook.limbs
# Syncing D=cookbook_copy,t=limbs_myisam
# DELETE REPLACE INSERT UPDATE ALGORITHM START    END      EXIT DATABASE.TABLE
#      5       0      5      0 GroupBy   04:07:08 04:07:08 2    cookbook.limbs_myisam
# Syncing D=db2,t=bar
# DELETE REPLACE INSERT UPDATE ALGORITHM START    END      EXIT DATABASE.TABLE
#      0       0      5      0 GroupBy   04:07:08 04:07:08 2    db1.bar
# Syncing D=db2,t=baz
# DELETE REPLACE INSERT UPDATE ALGORITHM START    END      EXIT DATABASE.TABLE
#      5       0      5      0 GroupBy   04:07:08 04:07:08 2    db1.baz
# Syncing D=db2,t=foo
# DELETE REPLACE INSERT UPDATE ALGORITHM START    END      EXIT DATABASE.TABLE
#      5       0      0      0 GroupBy   04:07:08 04:07:08 2    db1.foo

Note that pt-table-sync synchronizes only tables that exist in both databases. It does not create tables that do not exist in the target database and does not remove those that do not exist in the source database. If your schema could be out of sync, you need to synchronize it first.
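If the target database doesn’t exist yet (or you’re fine with recreating its tables from scratch), a quick way to copy the schema, assuming your client credentials allow it, is to dump the source schema without data and apply it to the target:

mysqldump --no-data db1 | mysql db2

Keep in mind that mysqldump emits DROP TABLE statements by default, so this recreates the target tables empty; don’t run it against a target that already holds data you care about.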

I used the --verbose option in all my examples so you can see what the tool is doing. If you omit this option, the tool is still able to synchronize tables on the same server.


Planet MySQL

Larger Laravel Projects: 12 Things to Take Care Of


Probably the most difficult step in a dev career is to jump from simple CRUD-like projects in the early years into senior-level work with bigger architecture and a higher level of responsibility for code quality. So, in this article, I tried to list the questions (and some answers) to think about when working with large(r) Laravel projects.

This article will be full of external links to my own content and community resources, so feel free to check them out.

Disclaimer: What is a LARGE project?

First, I want to explain what I mean by “large”. Some people measure that in the number of database records, as in “a million rows in the users table is large”. Yes, but that’s a large database, not a large Laravel project itself.

What I mean by a larger project is mostly the number of entities to manage. In simple terms, how many Eloquent Models your project has. If you have many models, it usually means complexity. With that, as secondary measurement numbers, you may count the number of routes or public Controller methods.

Take, for example, the open-source Monica CRM project, which has 300+ lines of code in its routes/web.php file.

With the scope of work this big, there are usually multiple developers working on the project, which brings the complexity to manage the codebase.

Also, a third non-tech characteristic of a large project is the price of an error. I would like to emphasize those projects where your inefficient or broken code may cause real money to be lost: 30 minutes of downtime in an e-shop may easily cost the business $10,000, or a broken if-statement may prevent dozens of real people from placing their orders.

So yes, I’ll be talking about those large projects below.

1. Automated Tests

In smaller projects, there’s usually a smaller budget and a stronger push to launch “something” quicker, so automated tests are often ignored as a “bonus feature”.

In larger projects, you just cannot physically manually test all the features before releasing them. You could test your own code, yes, but you have no idea how it may affect the old code written by others. Heck, you may even have no idea how that other code or modules work because you’re focused on your parts of the application.

So, how else would you ensure that the released code doesn’t cause bugs? Quite often a new code is just a refactoring of the old code, so if you change something in the project structure, how would you be able to test that nothing is broken? Don’t fall into the mindset I call “fingers-crossed driven development“.

Also, getting back to the definition of a larger project – remember, the price of the bug is high. So, literally, your broken code may cause financial loss to the business. If that argument still doesn’t convince you to cover the code with tests, probably nothing else will.

Yes, I know that typical argument that “we don’t have time to write tests“. I have a full video about it.

But this is where you need to find that time. It involves some communication: evaluate the deadlines with the time to write tests in mind, and talk to the managers about what would happen if you don’t write tests. They will then understand and allow that extra time. If they don’t, it means they don’t care about quality that much, and maybe it’s time to find another company?

Now, I’m not necessarily talking about a mythical “100% test coverage”. If you are really pressured on time, pick the functions to test that are crucial for your app to work. As Matt Stauffer famously said, “first, write tests for features, which, if they break, would cause you to lose your job“. So, anything related to payments, user access, stability of the core most used functionality.

2. Architecture and Project Structure

Ahh, yes, a million-dollar question: how to structure a Laravel project? I even published a 5-hour course on that topic, back in 2019, and I still feel I only scratched the surface there.

There are many different models or ideas that you may follow: divide the project into modules, use the DDD approach, pick some design patterns, or just follow SOLID principles. It is all a personal preference.

The thing is, there’s no silver bullet or one-size-fits-all approach. No one can claim that, for example, all bigger Laravel projects should follow DDD. Even SOLID principles are sometimes challenged as not the best fit for certain cases.

But the problem is clear: as your project structure grows, you need to change something, and re-structure the files/folders/classes into something more manageable. So what are the essential things you should do?

First, move things into sub-folders and namespace everything accordingly. Again, the example from the Monica CRM is pretty good.

Then, make sure that your classes/methods are not too large. There’s no magic number to follow, but if you feel that you need to scroll up and down too often, or spend too much time figuring out what the class/method does, it’s time to refactor and move parts of the code somewhere else. The most common example of this is overly large Controller files.

These are just two pieces of advice, but just those two changes make your code massively more readable, maintainable, and even more testable.

And yes, sometimes it requires a big “risky” refactoring of classes, but hey, you probably have automated tests to check everything, right? Right?

3. “Fake Data” with Factories and Seeds

A topic related to the automated testing we’ve already talked about. If you want to stress-test your application features, you need a large amount of data. And factories+seeds are a perfect combination to achieve that pretty easily.

Just get into the habit of creating a factory and a seeder immediately whenever you create a new Eloquent model, from the very beginning. Then, whoever uses it in the future to generate some fake data will thank you very much.
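Artisan can scaffold all of this in one go. For example, for a hypothetical Question model, the following command creates the model along with a migration, factory, and seeder:

php artisan make:model Question -mfs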

But it’s not only about testing. Also, think about the fresh installation of your application. Large successful projects tend to grow only larger, so you would definitely have to onboard new developers. How much would they struggle with the installation process and getting up to speed, if they don’t have any sample data to work with?

You will also probably need to install your application multiple times on various servers – local, staging, some Docker-based environments, etc. You can customize the seeds to run under the condition of whether it’s a production or local environment.

4. Database structure

Although I mentioned in the beginning that database size is not the definition of a large Laravel project, the database structure is a hugely important thing for long-term performance and maintainability.

Which relationships to use? In Laravel terms, should it be a HasOne? HasMany? BelongsToMany? Polymorphic?

Also, other questions. One larger table or several smaller ones? ENUM field or a relationship? UUID or ID column? Of course, each case is individual, and I have a full course on structuring databases, but here is my main short tip.

Try to ask your “future self” about what potential SQL queries will there be on these DB tables, and try to write those queries first.

In other words, think about the end goal, and reverse engineer the structure from that. It would help you to “feel” the correct structure.

If you have factories and seeds ready (notice the pattern of how the topics in this article help each other?), you would be able to easily simulate the future usage, maybe even measure A vs B options, and decide on which is the correct one. This moment is actually very important: changing the DB structure in the future, with a large amount of live data, is probably one of the most complex/expensive/risky changes to make. So you better make a good decision up front.

That said, you shouldn’t be afraid to refactor the database if there’s a real need for that. Move some data into a separate less-used table, change HasMany into Polymorphic, choose other column types, etc.

Just make sure you don’t lose any customer data.

5. External Packages and Laravel Upgrades

When you choose what Laravel/PHP packages to include in your composer.json, in the very beginning it’s pretty easy: just use the latest versions of everything, and make sure the package is useful.

But later, when the project is alive for a year or two, there’s a need to upgrade the versions. Not only Laravel itself but also the packages, too.

Luckily, Laravel switched from a 6-month release schedule to a yearly one (and later moved the Laravel 9 release to be in sync with Symfony), so developers don’t have that headache every 6 months anymore.

Generally, the framework itself has a pretty stable core, and the upgrades to new versions are relatively easy, should take only a few hours. Also, a service called Laravel Shift is a huge helper for developers who want to save time on this.

But the problem arises from the packages you use.

Pretty typical scenario: you want to upgrade the project to a new Laravel version, but a few packages from your composer file haven’t released their new versions yet to support that Laravel upgrade. So, in my experience, project upgrades are happening at least a few months after the official Laravel release, when the package creators catch up.

And, there are worse scenarios: when the package creator doesn’t have time to release the upgrade (remember, most of them do it for free, in their spare time), or even abandon the package. What to do then?

First, of course, you can help the creator, and submit a Pull Request with the suggested upgrade (don’t forget to include automated tests). But even then, they need to review, test, and approve your PR, so I rarely see that happening in real life. The packages are either actively maintained, or close to abandoned status. So, the only reasonable solution then is to fork the package and use your own version in the future.

But an even better decision is to think deeper at the time of choosing which packages to use. Questions to ask are: “Do we REALLY need that package?” and “Does the package creator have a reputation for maintaining their packages?”

6. Performance of everything

If the project becomes successful, its database grows with more data, and the server needs to serve more users at a time. So then, the loading speed becomes an important factor.

Typically, in the Laravel community, we’re talking about performance optimization of Eloquent queries. Indeed, that’s the number one typical reason for performance issues.

But Eloquent and database are only one side of the story. There are other things you need to optimize for speed:

Queue mechanism: your users should not be waiting 5 minutes for the invoice email to arrive
Loading front-end assets: you shouldn’t serve 1 MB of CSS/JS if you can minimize it
Running the automated test suite: you can’t wait an hour to deploy new changes
Web server and PHP configuration: users shouldn’t be “waiting in line” while 10,000 other users are browsing the website
And so on.

Of course, each of those topics is a separate world to dive deep in, but the first thing you should do is set up a measurement and reporting system, so you would be notified if there’s a slow query somewhere, a spike in visitors at some time or your server is near CPU limit.

7. Deployment Process and Downtime

In a typical smaller project, you can deploy new changes by just SSHing to the server and running a few git and artisan commands manually.

But if you have bigger traffic and a larger team, you need to take care of two things:
Zero-downtime deployment: to avoid angry visitors who would see the “deploying changes…” screen, and collisions between pre- and post-deployment code. There’s the official Envoyer project for this, and a few alternatives.
Automatic deployments: not everyone on your team has (or should have) SSH access to production servers, so deployment should be a button somewhere, or happen automatically, triggered by some git action.

Also, remember automated tests? So yeah, you should automate their automation. Sounds meta, I know. What I mean is that tests should be automatically run before any deployment. Or, in fact, they should be run whenever new code is pushed to the staging/develop branch.

You can schedule to perform even more automated actions at that point. In general, automation of this build/deploy process is called Continuous Integration or Continuous Delivery (CI/CD). It reduces some stress when releasing new features.

Recently, the most popular tool to achieve that became Github Actions, here are a few resources about it:

Build, Test, and Deploy Your Laravel Application With GitHub Actions
How to create a CI/CD for a Laravel application using GitHub Actions

But it’s not only about setting up the software tools. The important thing is the human factor: every developer should know the deployment process and their responsibility in it. Everyone should know what branch to work on, how to commit code, and who closes the issues and how. Things like “don’t push directly to the master branch” or “don’t merge until the tests pass” should be followed at a subconscious level.

There are also social norms like “don’t deploy on Fridays”, but that is debatable, see the video below.

8. Hardware Infrastructure for Scaling

If your project reaches the stage of being very popular, it’s not enough to optimize the code performance. You need to scale it in terms of hardware, by putting up more server power as you need it, or even upsizing/downsizing based on some expected spikes in your visitor base, like in the case of Black Friday.

Also, it’s beneficial to have load balancing between multiple servers; it helps even if one of the servers goes down for whatever reason. You can use Laravel Forge to set this up.

Also, don’t forget the scaling of external services. There are separate infrastructure hardware solutions to power your File Storage, Queues, Elasticsearch/Algolia, Socket real-time stuff, Databases, etc. It would be a huge article on each of those areas.

There are so many various tools out there that I can’t really recommend one, in particular, everything depends individually on your project needs, your budget, and your familiarity with a certain ecosystem.

The obvious server-power leader of the world is Amazon with their AWS Ecosystem, but often it’s pretty hard to understand its documentation, there are even explanation websites like AWS in Plain English.

Also, there’s a relatively new “player” in town, called serverless. It became a thing in the Laravel world with the release of Laravel Vapor – a serverless deployment platform for Laravel, powered by AWS.

Probably the best resource to get deeper into this whole scaling world is the course Scaling Laravel.

9. Backups and Recovery Strategy

Everyone probably knows that you need to perform regular backups of your database. And, on the surface, it’s pretty easy to do with the Spatie Laravel Backup package.
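As a rough sketch of what the setup looks like (the backup destination and contents are controlled by the package’s config file, so adjust it to your needs):

composer require spatie/laravel-backup
php artisan backup:run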

And, of course, you need to automate it, like “set it and forget it”. But an important question is: have you tried recovering from that DB backup, at least once?

You need to actually test the scenario: what if your current DB server totally dies, or someone drops the whole production database, and all you have is that backup SQL? Try to actually run the import from it, and test that nothing breaks. If there’s a problem with backup recovery, you’d better know it before the disaster happens.
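For a MySQL-backed application, a recovery test can be as simple as restoring the dump into a scratch database and running a few smoke checks against it; the dump file name below is hypothetical and depends on how your backups are stored:

mysql -u root -p -e "CREATE DATABASE restore_test;"
mysql -u root -p restore_test < your-backup-dump.sql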

Also, it gets more complicated when you have multiple Database servers, replication, and also you want to not slow down your server while the backup is in progress. So you may tweak the process or use some database backup tools directly, even outside the Laravel world.

10. Bug Monitoring Process

Of course, the larger the codebase, the bigger probability of bugs happening. Also, when there are dozens of features, developers can’t test them all themselves, and even automated tests don’t catch all the possible scenarios and cases. Bugs happen to real users of the system, in the wild.

Your goal as a team is to monitor them and be informed when they happen. There are various tools to help with that, I personally use Bugsnag, but there’s also Flare, Sentry, Rollbar – all of them perform pretty much the same thing: notify you about the bugs, with all possible information that helps to trace and fix that bug.

But again, it’s not only about setting up the tool, it’s about the human factor, as well. The team needs to know the process of who reacts to what bug and how exactly: which bugs are urgent, which ones can be postponed, or totally ignored.

Also, the question “Who’s on duty today” is pretty relevant: if the bug tracking software notifies about something, who needs to get that message and via which channel? In our team, we use Slack notifications, and then ideally the situation should be fixed by the developer responsible for that part of the application which is buggy. Of course, in reality, it doesn’t happen all the time, but at least the team needs to know the time-to-react goals.

There’s also another part of the team: non-tech people. Developers need to stay in touch with customer support and with managers, informing them about the severity and status of the situation, so the “front-facing” people can talk to customers accordingly.

11. Security

This one is kinda obvious, so I won’t explain it in too much detail. In addition to generally avoiding getting hacked, probably the most important thing is to secure the personal data of your users – both from other users in multi-tenant systems and from the outside world.

I recommend reading this article: How to Protect Your Laravel Web Application Against the OWASP Top 10 Security Risks

Also, I recommend trying to hack yourself. No, I’m not kidding – ask some trusted friend or company from the outside to break into your app and do some damage. Heck, even pay for it – there are companies specializing in this area. Of course, you could try to do it yourself, but as the author of the code you’re kinda biased, and you probably wouldn’t try the unusual things a typical attacker would.

Finally, I’d like to express my happiness about the fact that we no longer need to explain the need for an SSL certificate: with the changes to browser warnings, and with free tools like Let’s Encrypt, there’s no excuse not to have https:// on your website.
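For reference, getting a certificate on a typical VPS takes only a couple of commands (a sketch assuming Ubuntu with Nginx and the placeholder domain example.com; adjust for your stack):

```bash
# Install certbot with the Nginx plugin, then request and install the certificate
sudo apt install certbot python3-certbot-nginx
sudo certbot --nginx -d example.com -d www.example.com

# Certbot configures automatic renewal; this just verifies that a renewal would succeed
sudo certbot renew --dry-run
```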

12. Docs for Onboarding New Devs

The final point in this big article is about people. If you didn’t work on the project from its first day, remember the day you were introduced to it. Do you remember the feeling of installing everything, reading the docs, playing around with test data, trying to understand how things work?

Now, imagine a new developer going through that on the current project, which is now much more complex. So, you need to help those poor folks as much as you can.

I would even suggest becoming that “new developer” for a day. When was the last time you tried to install your application from the ground up – on a new computer or a freshly reinstalled OS, for example? So yeah, try that; you may find a few unpleasant “surprises” to fix.

Things like installation instructions in the Readme (maybe even with Docker images), comments in the code, making the code “clickable” in the IDE, understandable messages in git commits – all of that should be taken care of. And remember when we talked about factories and seeds? Yes, that applies here, massively.
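Even a short “Installation” section in the Readme goes a long way. As an example (a sketch using the standard Laravel commands; the repository URL is only a placeholder):

```bash
# Clone the project and install PHP and JS dependencies
git clone https://github.com/your-org/your-app.git && cd your-app
composer install
npm install && npm run dev

# Set up the environment and the database, with seeded test data
cp .env.example .env
php artisan key:generate
php artisan migrate --seed
```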

By the way, there are tools to help you, like this Readme generator.

And it’s not only about totally new developers – the same may happen to any existing team member who needs to fix something in a module they haven’t seen before. Any help is appreciated.

Your Thoughts?

What do you think about these 12 questions? I tried to provide short comments and external links, but obviously, it’s just an overview. Would you add any more questions to this list? Or, maybe you have a particular question you want me to expand on, in future articles/videos? Shoot in the comments below.

Laravel News Links

Here Are All The Headlines The Babylon Bee Would Have Written If We Were Around In Bible Times

https://media.babylonbee.com/articles/article-9624-1.jpg

Sadly, The Babylon Bee has only been around for five years, which is 5,995 fewer years than the Earth has been around. Had we existed during Bible times, we definitely would have had some hilarious, scathing headlines to cover all the events that happened in ancient Israel and beyond.

But we wanted to bless you. We went back through the Bible archives and came up with our best headlines for what happened in the Bible. Here they are:

OLD TESTAMENT

Closed-Minded God Only Creates Two Genders

Crazy Young-Earth Creationist Adam Claims Earth Is Only 7 Days Old

Bigot Noah Only Allows Two Genders of Each Animal on Ark

Work On Tower Of Babel Near Completion, Grbizt Mcbkd Flimadpt Dipbdeth Swn

Friends Concerned for Job After Finding Him Sitting In A Cave Listening To Daniel Powter’s ‘Bad Day’ On Repeat

LGBTQ Community Beat: Things Heating Up In Sodom And Gomorrah

New Reality Show Follows Wild Misadventures Of Jacob, 2 Wives, And 13 Boys

Joseph Canceled For Wearing LGBTQ Coat Despite Being A Cishet Male

Angel Of Death Says Blood On Doorpost Booster May Be Necessary

Pharaoh Starting To Get Weird Feeling He Should Let Israelites Go

Moses Arrested As He Did Not Have A Permit For Parting Of Red Sea

Moses Accidentally Drops Tablet Containing 11th Commandment Saying ‘Thou Shalt Not Start A Social Media Company’

God Says We Can’t Go Out For Manna Because We Have Manna At Home

Manna Renamed To More Inclusive ‘Theyna’

Israelites Spend 40 Years Wandering In Desert After Moses Forgets To Update Apple Maps

Jericho Wall Collapse Blamed On Failure To Pass Infrastructure Bill

Goliath Identifies As Female To Compete In Women’s MMA

Results Of David And Goliath Bout Bankrupts Numerous Bookies

God Confirmed Libertarian After Warning Israel Against Having A King

Saul Throws Spear At David ‘Cause He Keeps Playing ‘Moves Like Jagger’

‘Real Housewives Of Solomon’s Harem’ Reality Show Announced

Breaking: King Solomon Diagnosed With Syphilis

Jonah Telling Crazy Stories Again

Israel Totally Going To Be Obedient And Follow God This T–Update: Never Mind They Blew It

Sources Confirm Ba’al Was Indeed On The Crapper While His Prophets Were Getting Owned

Bible Scholars Reveal: Lions Lost Appetite After Hearing Daniel’s Anti-Vax Conspiracy Rant

NEW TESTAMENT

Choir Of Heavenly Hosts Cited For Violating Bethlehem’s 8pm Noise Ordinance

King Herod Calls For Destroying Any Clumps Of Cells Less Than Two Years Old

Pharisee Wears Phylactery So Large He Can’t Lift His Head

Zacchaeus Sues Jesus For Not Following ADA Guidelines At Event

Pharisees Condemn Jesus’s Miraculous Healings As Unapproved Treatment For Leprosy

Jesus Totally Owns Pharisees By Turning Their Tears Into Wine

Jesus Heals Your Mom Of Obesity

CNN Reports Jesus Only Able To Walk On Water Because Of Climate Change

Jesus Hatefully Slut-Shames Woman At Well

Pontius Pilate Diagnosed With Germaphobia For Frequent Hand-Washing

Jesus Uncancels The Whole World

Local Stoner Named Saul Becomes Apostle

Apostle John Praised For Isolating, Social Distancing On Island Of Patmos



The Babylon Bee

A Picture from History: St. Valentine’s Day Massacre

https://www.pewpewtactical.com/wp-content/uploads/2021/09/2.-Al-Capone.jpeg

In 1929, alcohol was an illegal item throughout the United States.

But a thriving bootleg liquor business sprang up underground.

And in Chicago, nobody had as much influence in the trade as gangster Al Capone.

Al Capone

For Capone, business boomed. He pulled in roughly $85 million per year in 1920s money — close to $1.3 billion today.

There was only one problem…Bugs Moran.

Bugs Moran

Moran’s attempts at moving into the liquor business aggravated Capone’s South Side Gang, who wanted to operate throughout Chicago, not just a section of the city.

Capone wasn’t happy…and Moran was about to make him even less so.

Map of Chicago showing the location of the St. Valentine’s Day Massacre

Aside from attempting to assassinate Capone’s friend and mentor, Johnny Torrio, Bugs also sent hitmen after Capone.

John Torrio

But Moran took it further, targeting Capone’s chief hitman, “Machine Gun” Jack McGurn.

Jack McGurn

Bad blood built between the two and it culminated on Valentine’s Day 1929.

The Last Valentine’s Day

February 14, 1929 — seven of Moran’s men waited in a North Side garage for a shipment of bootlegged Canadian whiskey.

A police car pulled up and four men stepped out – two wearing police uniforms.

The police ordered Moran’s men up against a nearby wall, shoulder to shoulder. Thinking it was nothing more than a police raid, Bugs’ men complied.

Reenactment of the St. Valentine’s Day Massacre. (Photo: Chicago History Museum)

It would be the last thing they’d do.

Shots rang out from two Thompson submachine guns and a shotgun.

By the time the dust settled, all seven men lay dead on the ground.

The two Tommy guns used in the St. Valentine’s Day Massacre now reside in Berrien County, Michigan. (Photo: Chriss Lyon via Block Club Chicago)

Chicago Mourned

Public outcry was swift for what became known as the St. Valentine’s Day Massacre.

It proved to be a nightmare for Capone.

Before the shooting, he was seen as something of the common man’s hero — fighting against the system’s injustice.

Saint Valentine’s Day Massacre brick displayed at the National Museum of Crime & Punishment, Washington, D.C. (Photo: David via WikiCommons)

But after, Capone became a violent criminal in the public’s eye. In short, it was a public relations disaster.

Furthermore, the massacre brought down the entire strength of the federal government on Capone’s head.

Capone was in Miami during the shooting, but the blame instantly fell on him. (Though the case technically remains unsolved.)

Al Capone

Valentine’s Day 1929 brought Capone into the limelight, and investigators seized the opportunity to lock him away.

The famed gangster was later sentenced to 11 years in federal prison for tax evasion.

This is a new style of article for Pew Pew Tactical; if you liked it — let us know in the comments! If you didn’t enjoy it…well phooey. To catch up on previous Pictures from History, click on over to our History Category.

The post A Picture from History: St. Valentine’s Day Massacre appeared first on Pew Pew Tactical.

Pew Pew Tactical

MySQL: Our MySQL in 2010, a hiring interview question

https://isotopp.github.io/uploads/2021/09/mysql-2010-1.jpg

I ranted about hiring interviews, and the canned questions that people have to answer.
One of the interviews we do is a systems design interview, where we want to see how (senior) people use components and patterns to design a system for reliability and scaleout.

A sample question (based on a Twitter thread in German):

It is 2010, and the company has a database structure where a fixed number of front-end machines form a cell.
Reads and writes are already split:
Writes go to the primary of a replication tree, and are being replicated to the read instance of the database in each cell.
Reads go to the database instance that is a fixed part of the cell.

Read and write handles are split in the application. Clients write to a primary MySQL database, which then replicates to a database instance that is a fixed part of each cell. Clients from a cell read from this fixed replica.
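To make the application-level read/write split concrete (this is not part of the original question – just a sketch of how such a split typically looks in a Laravel-style database config, with illustrative host addresses):

```php
// config/database.php – illustrative sketch of an application-level read/write split
'mysql' => [
    'driver' => 'mysql',

    // Reads go to the replica that belongs to this cell
    'read' => [
        'host' => ['10.0.1.20'],
    ],

    // Writes always go to the primary of the replication tree
    'write' => [
        'host' => ['10.0.0.10'],
    ],

    // After a write, keep reading from the primary for the rest of the request
    'sticky' => true,

    'database' => env('DB_DATABASE'),
    'username' => env('DB_USERNAME'),
    'password' => env('DB_PASSWORD'),
    // ... charset, collation, and other options as usual
],
```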

Unfortunately, this is not very effective:
The data center has 10 cells, but when a cell overloads its database, spare capacity from other cells cannot be utilized.
Also, the data center is not redundant.

We want to:

  1. Load balance database queries.
  2. Extend the architecture to more than a single data center (or AZ).
  3. Optionally be resilient against the loss of individual databases or a full AZ.

Possible topics or annotations from a candidate:

  • What kind of strategies are available for load balancing database connections?
    • DNS, Anycast or L2 techniques
    • Proxy (but not a web proxy)
    • Zookeeper or another consensus system with modified clients
  • What are the advantages or disadvantages of this?
    • L2. Huh. Don’t do that.
    • DNS updates are slow and complicated. It can be made to work, but you will always have very little control over what is balanced, why, and how. DNS is better used for global load balancing of HTTP requests, not as a database load balancer.
    • Zookeeper could be used to do this with modified clients, but we would have to discuss how exactly that works. That’s an interesting subquestion of its own.
    • MySQL Router or ProxySQL are made for that, but have a lot of interesting subquestions of their own. See below.
  • What else may be different when load balancing database connections instead of HTTP?
    • Web proxies are not good database proxies. The database protocol is not HTTP, and it is a stateful protocol. This requires extra care when load balancing.
    • Database connections can be long-lived. Rebalancing a client onto a different server only ever happens on connect. If you disconnect and reconnect only every 100 web actions or so, the system can still rebalance, just slowly. On the other hand, if you are using TLS’ed connections, connection setup cost can be high, so longer-lived connections amortize better.
    • Database connections have a highly variable result set size. A single SELECT may return a single value from a single row, or an entire 35 TB table. If the proxy tries to be too intelligent and does things with the result as it passes through, it can die from running out of memory.
    • Proxies can become bottlenecks. Imagine 50 frontends talking to 10 databases via a single proxy on a typical 2010-box with a single (or two) 1 GBit/s network interface, and results contain BLOBs.
  • What else is there to know?
    • Replication scales only reads. As this is a shared nothing architecture, each instance eventually sees all writes. To scale writes, we have to split or shard the database. That is out of scope for this question.
    • In our specific scenario, the number of writes is not actually a problem. We can assume a few hundred writes per second.
  • Can we extend that to more than one AZ?
    • Yes, we can create an intermediate primary in each AZ, which takes the writes from the origin AZ into each sub-AZ. It then fans out to the local replicas. This saves on long distance data transfer. It also creates mildly interesting problems for measuring replication delay.
    • Because the replication tree is deeper, writes take longer to reach the leaves. Most applications should be able to accommodate that.
    • The alternative would be something with full Two-Phase-Commit, but that would be even slower, and would have scaling limits in the number of systems that participate in the 2PC.

This is usually how far we get in a single interview session, and even then only touching on some of these points.
Finding all of them is completely unrealistic, even for experienced people.
We would now reach a point where we discuss failure scenarios.

But it would be highly unusual to get this far, and that is not actually the goal in an interview.
I do in fact hardly care about the solution we end up with.
My goal is to have a useful discussion about databases, scaleout and resiliency, and about stateful systems and their limits.
When there are remarks such as “Our environment is smaller, but for us … works” or “We tried this: … but observed that often …”, that’s actual gold in an interview.

Even a remark such as “In HTTP one would do …, but I can imagine that with stateful systems that does not work because …” is already gold, because it shows a level of reflection and insight that is rare.

The objective is not to reinvent our 2021 setup. The objective is to use this clearly limited setup as a base for a common conversation about database problems.

“Database Reliability Engineer” is the hardest position to hire for in my environment, because it is an H-shaped qualification.

The concept of H-shaped people is a metaphor used in job recruitment to describe the abilities of individuals in (or outside) the workforce. Each vertical bar of the letter H represents depth of skills and expertise in a single field or discipline, whereas the horizontal bar is the ability to combine those two disciplines to create value in a way that was hitherto unknown.

The objective is to find a person that “Understands MySQL” and “Understands Python or Go” (“Understands any database” and “Understands a useful programming language”), so that I can throw them at our existing codebase and have them be useful within 3-6 months – ugh.

If I can find one person per year, I am very lucky.

Planet MySQL

How the Bark to Make Cork is Harvested from Cork Oak Trees

https://s3files.core77.com/blog/images/1212461_81_110177_xEfTPsELF.jpg

Nearly every task you can perform on a tree, whether it’s cutting it down, de-limbing it or stripping the bark, can be done by a machine. Indeed they often require one.

However, one tree-borne task for a specific type of tree resists automation and can still only be performed by humans using hand tools. That task is stripping the bark from a cork oak tree, for the purpose of turning it into cork. The bark has to be removed cleanly, so as to avoid damaging the tree; the bark will regenerate, but it will be another nine years before one can harvest it again. If you’re too heavy with the axe, you risk leaving a gash in the tree beneath the bark, which then becomes an entrance for insects that will destroy the valuable cash crop.

Here’s a look at the process in Portugal, the world’s largest producer of cork. And while it might seem like fun to cleanly peel a tree, the work and conditions look absolutely grueling:

When I watch stuff like this, I can’t believe I ever just tossed a cork in the garbage.

Core77

The 10 ‘Seinfeld’ Episodes to Watch If You’ve Never Seen It Before (Not That There’s Anything Wrong With That)

https://i.kinja-img.com/gawker-media/image/upload/c_fill,f_auto,fl_progressive,g_center,h_675,pg_1,q_80,w_1200/2f487958a272d040a2052a218d833fcf.png

Graphic: Elena Scotti (Photos: Getty Images)

More than 30 years ago, Larry David and Jerry Seinfeld brought an impressive innovation to the TV sitcom: Protagonists who are uniformly terrible people.

Sure, Married… with Children’s deplorable Bundys had been on air for a couple of years, but that series was on Fox—then a small upstart network—and an explicit parody of family sitcom tropes, while Seinfeld was, at least on the surface, a more traditionally structured show. It was also in the big leagues, airing on NBC, and the terribleness of its central characters was a lot more subtle: Jerry, George, Elaine, and Kramer made us love them while also reflecting some of our worst flaws, overreacting to small slights and petty annoyances in all the horrible ways we’d probably like to, if we thought we could get away with it.

After a rocky start in the ratings, it broke out in a big salad way, and was a ratings monster through the remainder of its nine-season run. It has remained a favorite in reruns (and, more recently, on streaming services) ever since—thanks in no small part to an all-time great cast. Its even darker spiritual sequel, Curb Your Enthusiasm (which featured a whole season revolving around a fictional Seinfeld reunion special), is still going strong. Oh, and Seinfeld’s coming to Netflix starting Oct. 1—the streaming giant having paid some $500 million to grab the rights away from rival Hulu—so you’ll be able to while away your fall visiting or revisiting the gang.

Maybe you’ve never watched the show, and are feeling left out of this particular pop culture moment, or maybe it has just been a couple decades and you need a refresher. If you don’t want to immediately dig into 180 episodes, you can take a representative sample and totally get the gist (not hard when the show is about nothing, am I right?). Like a lot of TV, this show peaked somewhere in the middle seasons, and despite some minor plot continuity and a few running gags, there’s no need to start at the beginning.

This isn’t a “best of” list. Rather, these are 10 episodes that represent the things that Seinfeld did particularly well—including a couple with iconic punchlines and catchphrases that will get you in with the cool crowd, circa 1995.

Lifehacker