How to update large data in Laravel using Commands, Chunking, and Database Transactions

https://42coders.com/storage/88/big_data_sets.jpg

Laravel

Max Hutschenreiter –

Sometimes you need to update data in your database. The easiest option is to just run an UPDATE directly in your MySQL database, but that doesn't always work, especially when you use model events or also want to update relations.

Commands

In that case, I recommend creating a command, even for one-time changes.

php artisan make:command YourCommandName
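
The generated class lands in app/Console/Commands. A minimal skeleton looks roughly like this (the signature and description are placeholders you'd adjust to your task):

<?php

namespace App\Console\Commands;

use Illuminate\Console\Command;

class YourCommandName extends Command
{
    // The name you'll type to run it: php artisan users:update
    protected $signature = 'users:update';

    protected $description = 'One-time update of user data';

    public function handle()
    {
        // Your update logic goes here.
        return 0;
    }
}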

Progress Bar

The first tip would be to use a progress bar. In long-running commands, it’s helpful to see that there is progress.

To show you how, I'll just copy the example from the Laravel Documentation.

$users = App\Models\User::all();

$bar = $this->output->createProgressBar(count($users));

$bar->start();

foreach ($users as $user) {
    $this->performTask($user);

    $bar->advance();
}

$bar->finish();

Chunking

This works fine up to a couple of hundred entries with simple changes. If you want to change more entries with more complexity, you should chunk the results.
The problem is that if you load everything into one Eloquent collection, your RAM becomes the limitation. To avoid that, you can use the built-in Laravel chunk function on your queries to iterate through the table in sequences.

App\Models\User::chunk(200, function ($users){
    foreach($users as $user){
        $user->name .= ' :)';
        $user->save();
    }
});

One important thing to understand about the chunk function is how its queries run. In this example, after the first 200 users have been iterated through, the base query is executed against the table again with a LIMIT and OFFSET.
Imagine you have this case:

App\Models\User::where('active', true)
    ->chunk(200, function ($users){
        foreach($users as $user){
            $user->active = false;
            $user->save();
        }
    });

In this code, the first iteration would go over 200 users, changing the active value to false. The second run would then ask the database again for the users which have active set to true. The problem is that since we just changed the active status of 200 users, the result list no longer contains them, yet the OFFSET would still skip past the first 200 rows. That means we would skip 200 users which we actually wanted to change.
Laravel has a function to overcome this problem; it's just important to understand when to use it. The solution in this situation would be:

App\Models\User::where('active', true)
    ->chunkById(200, function ($users){
        foreach($users as $user){
            $user->active = false;
            $user->save();
        }
    });
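
To see why this fixes the problem, it helps to compare the queries the two methods issue under the hood. Roughly (the exact SQL may differ by Laravel version):

// chunk() pages with LIMIT/OFFSET:
//   select * from users where active = 1 limit 200 offset 0
//   select * from users where active = 1 limit 200 offset 200  <-- skips rows that moved up
//
// chunkById() pages on the primary key instead:
//   select * from users where active = 1 and id > 0 order by id asc limit 200
//   select * from users where active = 1 and id > [last seen id] order by id asc limit 200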

Database Transactions

Now we are able to execute a lot of changes to our models while avoiding the problem of our Eloquent collections becoming too big.
But in our last example, we would still execute an UPDATE statement for every single user in the DB. To speed this up, I found it a good tactic to use transactions.
This allows us to reuse our chunks and commit the changes to the DB once per chunk.

App\Models\User::where('active', true)
    ->chunkById(200, function ($users){
        try {
            DB::beginTransaction();
            
            foreach($users as $user){
                $user->active = false;
                $user->save();
            }
           
            DB::commit();

        } catch (\Exception $e) {
            //handle your error (log ...)
            DB::rollBack();
        }
    });

In this code example, we combine chunkById with database transactions. This can save a lot of time when updating the DB. You can read more about database transactions in the Laravel Documentation.

Transactions can cause trouble if not used correctly. If you forget to commit or roll back, you will create nested transactions.
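
One way to avoid forgetting: Laravel also offers the DB::transaction() helper, which commits on success and rolls back automatically when the closure throws. A sketch of the same chunk logic using it:

use Illuminate\Support\Facades\DB;

App\Models\User::where('active', true)
    ->chunkById(200, function ($users) {
        // Commits after the closure finishes, rolls back on exceptions.
        DB::transaction(function () use ($users) {
            foreach ($users as $user) {
                $user->active = false;
                $user->save();
            }
        });
    });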

Combine it together

To finalize this code example, we can bring the progress bar back in.

$count = App\Models\User::where('active', true)->count();

$bar = $this->output->createProgressBar($count);
$bar->start();

App\Models\User::where('active', true)
    ->chunkById(200, function ($users) use ($bar) {
        try {
            DB::beginTransaction();

            foreach($users as $user){
                $user->active = false;
                $user->save();
                $bar->advance();
            }

            DB::commit();

        } catch (\Exception $e) {
            //handle your error (log ...)
            DB::rollBack();
        }
    });

$bar->finish();

So this is my strategy for handling updates on bigger data sets. You can adjust the chunk size to your needs and experiment to find what gets you good results. In my experience, something between 200 and 1,000 is fine.
Sometimes, especially when the calculation for a single entry is more complicated, I see the whole process getting slower with each chunk. It starts at around 2 seconds per bar advance and goes up to 30 or 40 seconds. Since I have experienced it across different commands, I am not sure if it's a general issue. If anyone has any info on it, let me know.

Hope this article helps you.

Laravel News Links

Kel-Tec CP33 Pistol – The CyberPunk Plinker You Now Must Own ~ VIDEO

https://www.ammoland.com/wp-content/uploads/2021/10/Shield-Plus-OR-THUMB2.jpg

AmmoLand News can’t get enough of the Kel-Tec CP33 Pistol and soon neither will you.

U.S.A. -(AmmoLand.com)- I always try to be objective in my reviews, but I was obsessed with the Kel-Tec CP33 the moment I saw it: from its Robocop pseudo-subgun/PDW appearance to its capacious 33-round magazine, it was everything the 18-year-old me ever wanted in a 22 handgun. But now that I've had a chance to fire nearly 1,000 rounds through the futuristic little gun, is the honeymoon over, or is the CP33 everything I've ever wanted?

Kel-Tec CP33 Pistol in .22lr

If you’ve read your fair share of gun reviews in the past, it will likely come as no surprise that the answer isn’t a simple yes or no. But if you’re sitting at the gun counter right now, money in hand, and wondering if you should buy one, I’d say go for it if you’re looking for a fun range toy. But if you have a different role in mind for the CP33, read on.

Kel-Tec CP33 Pistol
The Kel-Tec CP33 Pistol – American polymer sitting atop American steel. IMG Jim Grant

Before I get into the details of the review, let’s first take a look under the hood to see how the Kel-Tec works. First off, the CP33 is a standard direct blowback-operated, semi-automatic magazine-fed pistol chambered in .22lr. If you know anything about rimfire auto-loaders, this should come as no surprise. Virtually all semi-automatic rimfire guns are blowback-operated because it’s very simple to produce and generally less ammunition-sensitive than locked-breech firearms. So does this mean the Kel-Tec CP33 is no different than a more traditional-looking rimfire pistol like a Ruger MKIV or Browning Buckmark?

Absolutely not. It may share the same method of operation, but by that same measure, all bolt-action rifles are identical. It’s not the mechanics of the actual firearm that separate the Kel-Tec from other handguns, but rather its magazine.

Magical Magazine

It’s not just that the CP33’s magazine holds more rounds than virtually any other traditional rimfire handgun, but how the magazine accomplishes this that makes the new Kel-Tec pistol so interesting.

Most rimfire pistols utilize a single stack magazine to feed cartridges to the chamber. By this, I mean literally, a spring-loaded box that situates a straight row of rounds directly beneath one another, not unlike say an M1911. Higher-capacity centerfire pistols like the Glock utilize a staggered column of rounds inside of a magazine whose internal space is roughly 50% wider than the cartridges themselves, but this isn’t practical for rimfire rounds.

The Kel-Tec CP33 magazine is very unique both in function and appearance. IMG Jim Grant

Why? Because the rims themselves tend to snag on each other, leading to a malfunction referred to as rim-lock. This is why the Soviets utilized a pan magazine on their DP-28 LMG chambered in the rimmed 7.62x54r cartridge, and why the capacity of the British Bren gun is limited to 30-rounds. (Although the British did field a 100-round pan magazine like the Soviets in limited numbers.)

So how did Kel-Tec solve this issue? With a coffin-style, dual-staggered column magazine. It’s basically two staggered column magazines combined into one.

But wait, you just said rimfire rounds don’t play well with staggered column magazines!

Indeed I did. And the solution by the engineers at Kel-Tec was to add open side walls to the magazine to allow shooters to properly align any rounds that tend to work themselves into a rim lock situation.

If that seems like a bandaid solution to a much bigger issue, you’re not wrong. It definitely doesn’t completely prevent the issues of rimfire rounds in a staggered-column magazine, but it should allow a shooter to alleviate the problem before it becomes one.

But does it actually work?

Kel-Tec CP33 Dozer
Yes, this looks ridiculous, but isn’t that really what the Kel-Tec CP33 is going for anyway? IMG Jim Grant

When loaded properly, absolutely. But that’s a bigger caveat than it sounds. It’s very easy for an inexperienced shooter to load the magazine in such a way that it looks like it’s properly aligned, only to find out 20 rounds in that some of the lower rounds aren’t quite lined up. And because the alignment of one round affects all the rest, performing a standard tap-rack-bang malfunction clearing procedure will result in another failure to chamber. Truth be told, getting all the rounds perfectly lined up is more difficult than it looks, but with practice it becomes pretty simple. The best source for how to do so is the Kel-Tec CP33’s user manual, and spare Kel-Tec CP33 22LR 33rd Magazines are readily available.

But enough about the magazine, let’s get a rundown of all the CP33’s features.

Kel-Tec CP33 Handgun Ergonomics

Starting at the business end, the Kel-Tec CP33 ships with a 5.5-inch, 1/2×28 threaded stainless steel barrel. I tested this barrel with several muzzle devices, and everything from flash-hiders and linear compensators to my favorite new rimfire suppressor (the Rugged Suppressors Mustang 22 from SilencerShop) fit and ran flawlessly.

Behind the muzzle, the CP33 includes a set of fiber-optic super-low-profile post and notch iron sights that are clearly designed to get out of the way of any mounted optics. This is because the entire top of the CP33 features a monolithic Picatinny rail. I found that if a shooter isn’t running a brace, then a pistol optic like a Holosun HE507C or Trijicon RMR on the lowest possible mount made for the most natural-feeling setup.

Under the front sight, the Kel-Tec CP33 features an M-Lok slotted dust cover that appears to be the perfect length to not fit any of the M-Lok rail segments I had on hand. So I needed to modify one by cutting off one of the alignment notches and only using a single mounting bolt. This is something I wouldn’t normally advise since it does compromise the mounting strength of the rail. But since the CP33 is only chambered in .22lr, I took that risk and it paid off handsomely. The Streamlight TLR-10 Flashlight I mounted on the handgun never budged, and its laser held zero after a few hundred rounds.

CP33 Safety
The CP33 features a thumb safety that is easily actuated without shifting the firing grip. IMG Jim Grant

Alternatively, a shooter could simply buy a super short rail segment or an accessory that directly mounts to the M-Lok slot.

But be advised, a hand-stop or angled grip are completely fine, but a vertical grip can get you in hot water with the ATF if you don’t have a tax stamp for the little polymer pistol.

Behind the dust cover, the CP33 features the iconic Kel-Tec molded grip pattern on its oblong grip. Despite the grip’s appearance, it’s actually fairly comfortable to hold, and it positions the shooter’s hands perfectly to toggle the ambidextrous safety lever behind and above it. But there’s one thing conspicuously absent between the grip and the trigger – a magazine release.

That’s because the engineers at Kel-Tec decided to depart from the gun’s overall very futuristic appearance and incorporate an old-school European-style heel release at the bottom of the grip. (Not unlike the one found on the Walther PPK.) I’m not normally a fan of this setup, but given that the CP33 isn’t a combat pistol, it doesn’t bother me.

Kel-Tec CP33 Pistol Grip
The Kel-Tec CP33 Pistol’s grip features the iconic Kel-Tec molded panels, and the magazine release is on the heel of the grip. IMG Jim Grant

Above the grip, the CP33 features an ambi bolt-release that some shooters have reported issues with. But the example I reviewed – which wasn’t a T&E from the factory, but a gun I bought at a local shop – never had an issue with the release whatsoever.

At the very back of the handgun is the charging latch that takes more than a few notes from both the AR-15 and the HK MP7 PDW. It’s non-reciprocating, which is awesome, but it is made of very thin steel with polymer handles at the rear. And to be honest, its construction doesn’t inspire a tremendous amount of confidence. And if that really bothers you, there’s a cottage industry of aftermarket parts makers who now offer more robust all-aluminum charging latches.

Performance

Now that you know everything about the gun and its features, let’s talk about how the gun actually ran.

After 1,000 rounds of various types of .22lr ammo, including a half dozen different varieties of standard and high-velocity 22LR ammunition, the Kel-Tec CP33 encountered around 30 malfunctions in my testing. Half of these were first-round failures to chamber either during the first 200 rounds fired through the gun, or after a hundred or so rounds fired suppressed. The former is because the gun needs a little break-in period, while the latter is 100% due to excess carbon build-up from running the gun suppressed.

NVG CP33
I even tested the Kel-Tec CP33 with my PVS-14 and PEQ-15, and it was glorious. IMG Jim Grant

On an interesting side note, the gun never malfunctioned on the first round when using the bolt release.

Accuracy was good bordering on great, with the Kel-Tec CP33 easily capable of hitting targets out to 100 yards with a reflex sight attached. Though I suspect the gun would be infinitely more capable with a low-powered magnified optic and a stabilizing brace attached. But as it comes, the CP33 makes short work of tin cans, squirrels, and clay pigeons out to 50 yards.

CP33 Action
Something about the Kel-Tec CP33’s design just makes it practically beg to be suppressed with a quality can like this Rugged Suppressors Mustang 22 from SilencerShop.com. IMG Jim Grant

Kel-Tec CP33 Space Gat Verdict

So, is the futuristic polymer pistol worth a buy? With an MSRP of $475 (and in my experience, street prices are much lower), the Kel-Tec CP33 Pistol is a solid deal that, when babied a little bit, runs like a champ. Yes, the magazine can be problematic if not loaded properly, but with some practice, the CP33 makes a solid plinking pistol that would work well as a hiking gun or varmint pistol. Its looks might not appeal to everyone, but for those of us who dream of blasting cyborgs beneath neon signs in a rain-soaked Neo-Tokyo, the CP33 is pretty damn slick.


About Jim Grant

Jim is one of the elite editors for AmmoLand.com, who in addition to his mastery of prose, can wield a camera with expert finesse. He loves anything and everything guns but holds firearms from the Cold War in a special place in his heart.

When he’s not reviewing guns or shooting for fun and competition, Jim can be found hiking and hunting with his wife Kimberly, and their dog Peanut in the South Carolina low country.

Jim Grant

AmmoLand.com

Comic for October 13, 2021

https://assets.amuniversal.com/24447c80ff7f01397aa1005056a9545d

Dilbert Daily Strip

Comic for October 11, 2021

https://assets.amuniversal.com/1f02c2b0ff7f01397aa1005056a9545d

Dilbert Daily Strip

Comic for October 10, 2021

https://assets.amuniversal.com/4b9300d0f2400139769e005056a9545d

Dilbert Daily Strip

Led by founders who met at Microsoft, Chronosphere lands $200M, reaches unicorn status

https://cdn.geekwire.com/wp-content/uploads/2019/11/Chronosphere-Co-Founders-Martin-and-Rob-1260×945.jpeg

Chronosphere co-founders Martin Mao (left, CEO) and Rob Skillington (CTO). (Chronosphere Photo)

Chronosphere has reached unicorn status in less than three years.

The company this week announced a $200 million Series C round, propelling its valuation past $1 billion. It comes nine months after the startup raised a $43 million Series B round.

Founded in 2019 by former Uber and Microsoft engineers, Chronosphere offers “data observability” software that helps companies using cloud-native architecture monitor their data. Customers include DoorDash, Genius Sports, and Cudo. Its annual recurring revenue has grown by 9X in 2021.

Chronosphere CEO Martin Mao and CTO Rob Skillington first met in the Seattle area at Microsoft, where they worked on migrating Office to the cloud-based Office 365 format.

They both later spent time at Uber on engineering teams. Uber couldn’t find any products to meet its growing data demands, so Mao and Skillington helped the company build one. The result was M3, Uber’s open-source production metrics system, which is capable of storing and querying billions of data points per second.

With Chronosphere, Mao and Skillington are building an end-to-end solution on top of M3 that helps companies both gather and analyze their data in the cloud with the help of visualization and analytics tools. The product works across multiple cloud platforms, including AWS and Azure.

Chronosphere recently decided to be remote-first. Its largest hub is in New York City, and there are a handful of employees in Seattle, including Mao. The company has 80 total employees and expects to add another 35 people this year.

General Atlantic led the Series C round. Other backers include Greylock Partners; Lux Capital; Addition; Founders Fund; Spark Capital; and Glynn Capital. Total funding to date is $255 million.

“Sitting at the intersection of the major trends transforming infrastructure software – the rise of open-source and the shift to containers – Chronosphere has quickly become a transformative player in observability,” Anton Levy, managing director at General Atlantic, said in a statement.

GeekWire

Ben Cook: PyTorch DataLoader Quick Start

PyTorch comes with powerful data loading capabilities out of the box. But with great power comes great responsibility and that makes data loading in PyTorch a fairly advanced topic.

One of the best ways to learn advanced topics is to start with the happy path. Then add complexity when you find out you need it. Let’s run through a quick start example.

What is a PyTorch DataLoader?

The PyTorch DataLoader class gives you an iterable over a Dataset. It’s useful because it can parallelize data loading and automatically shuffle and batch individual samples, all out of the box. This sets you up for a very simple training loop.

PyTorch Dataset

But to create a DataLoader, you have to start with a Dataset, the class responsible for actually reading samples into memory. When you’re implementing a DataLoader, the Dataset is where almost all of the interesting logic will go.

There are two styles of Dataset class, map-style and iterable-style. Map-style Datasets are more common and more straightforward so we’ll focus on them but you can read more about iterable-style datasets in the docs.

To create a map-style Dataset class, you need to implement two methods: __getitem__() and __len__(). The __len__() method returns the total number of samples in the dataset and the __getitem__() method takes an index and returns the sample at that index.

PyTorch Dataset objects are very flexible — they can return any kind of tensor(s) you want. But supervised training datasets should usually return an input tensor and a label. For illustration purposes, let’s create a dataset where the input tensor is a 3×3 matrix with the index along the diagonal. The label will be the index.

It should look like this:

dataset[3]

# Expected result
# {'x': array([[3., 0., 0.],
#         [0., 3., 0.],
#         [0., 0., 3.]]),
#  'y': 3}

Remember, all we have to implement are __getitem__() and __len__():

from typing import Dict, Union

import numpy as np
import torch

class ToyDataset(torch.utils.data.Dataset):
    def __init__(self, size: int):
        self.size = size

    def __len__(self) -> int:
        return self.size

    def __getitem__(self, index: int) -> Dict[str, Union[int, np.ndarray]]:
        return dict(
            x=np.eye(3) * index,
            y=index,
        )

Very simple. We can instantiate the class and start accessing individual samples:

dataset = ToyDataset(10)
dataset[3]

# Expected result
# {'x': array([[3., 0., 0.],
#         [0., 3., 0.],
#         [0., 0., 3.]]),
#  'y': 3}

If you happen to be working with image data, __getitem__() may be a good place to put your TorchVision transforms.
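
For instance, a map-style dataset for images might decode and transform each file in __getitem__(). This is only a sketch; the paths and labels arguments are hypothetical stand-ins for however you track your samples:

from PIL import Image
import torch
from torchvision import transforms

class ImageDataset(torch.utils.data.Dataset):
    def __init__(self, paths, labels):
        self.paths = paths    # list of image file paths (hypothetical)
        self.labels = labels  # one integer label per path
        self.transform = transforms.Compose([
            transforms.Resize((224, 224)),
            transforms.ToTensor(),
        ])

    def __len__(self) -> int:
        return len(self.paths)

    def __getitem__(self, index):
        # Decode lazily, one sample at a time, then transform to a tensor
        image = Image.open(self.paths[index]).convert("RGB")
        return dict(x=self.transform(image), y=self.labels[index])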

At this point, a sample is a dict with "x" as a matrix with shape (3, 3) and "y" as a Python integer. But what we want are batches of data. "x" should be a PyTorch tensor with shape (batch_size, 3, 3) and "y" should be a tensor with shape (batch_size,). This is where DataLoader comes back in.

PyTorch DataLoader

To iterate through batches of samples, pass your Dataset object to a DataLoader:

torch.manual_seed(1234)

loader = torch.utils.data.DataLoader(
    dataset,
    batch_size=3,
    shuffle=True,
    num_workers=2,
)
for batch in loader:
    print(batch["x"].shape, batch["y"])

# Expected result
# torch.Size([3, 3, 3]) tensor([2, 1, 3])
# torch.Size([3, 3, 3]) tensor([6, 7, 9])
# torch.Size([3, 3, 3]) tensor([5, 4, 8])
# torch.Size([1, 3, 3]) tensor([0])

Notice a few things that are happening here:

  • Both the NumPy arrays and the Python integers are getting converted to PyTorch tensors.
  • Although we’re fetching individual samples in ToyDataset, the DataLoader is automatically batching them for us, with the batch size we request. This works even though the individual samples are in dict structures. This also works if you return tuples. (If you ever need to take over this step, there is a custom collate_fn sketch after this list.)
  • The samples are randomly shuffled. We maintain reproducibility by setting torch.manual_seed(1234).
  • The samples are read in parallel across processes. In fact, this code will fail if you run it in a Jupyter notebook. To get it to work, you need to put it under an if __name__ == "__main__": check in a Python script.
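
As a minimal sketch of that custom batching hook: the collate_fn argument lets you build the batch yourself. This version reproduces the default behavior for the ToyDataset above:

import torch

def collate(samples):
    # samples is a list of individual sample dicts from ToyDataset
    return dict(
        x=torch.stack([torch.as_tensor(s["x"]) for s in samples]),
        y=torch.tensor([s["y"] for s in samples]),
    )

loader = torch.utils.data.DataLoader(dataset, batch_size=3, collate_fn=collate)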

There’s one other thing that I’m not doing in this sample, but you should be aware of it. If you need to use your tensors on a GPU (and you probably do for non-trivial PyTorch problems), then you should set pin_memory=True in the DataLoader. This will speed things up by letting the DataLoader allocate space in page-locked memory. You can read more about it here.
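
As a quick sketch of how that looks in practice (assuming a CUDA device is available):

loader = torch.utils.data.DataLoader(
    dataset,
    batch_size=3,
    shuffle=True,
    num_workers=2,
    pin_memory=True,  # allocate batches in page-locked memory
)

for batch in loader:
    # non_blocking=True lets the host-to-GPU copy overlap with compute
    x = batch["x"].to("cuda", non_blocking=True)
    y = batch["y"].to("cuda", non_blocking=True)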

Summary

To review: the interesting part of custom PyTorch data loaders is the Dataset class you implement. From there, you get lots of nice features to simplify your data loop. If you need something more advanced, like custom batching logic, check out the API docs. Happy training!

The post PyTorch DataLoader Quick Start appeared first on Sparrow Computing.

Planet Python

Visualizing Ammo Cost Trends Across Nine Popular Calibers

https://www.thefirearmblog.com/blog/wp-content/uploads/2021/10/ammo-cost-trends-180×180.png

Ammo Cost Trends

It’s no secret that the ammunition market has been volatile (to say the least) since the Covid pandemic took hold, but some calibers seem to be easing off while others rise. Redditor Chainwaxologist owns a web-based hunting and fishing retail store, FoundryOutdoors.com, so to keep his ammo cost and supply competitive, he set out to […]

Read More …

The post Visualizing Ammo Cost Trends Across Nine Popular Calibers appeared first on The Firearm Blog.

The Firearm Blog

Step-By-Step Guide to Deploying Laravel Applications on Virtual Private Servers

https://production.ams3.digitaloceanspaces.com/2021/09/Social-Sharing-photo-Template-1-14.png

Developing modern full-stack web applications has become much easier thanks to Laravel, but deploying them on a real server is another story.

There are just so many options.

PaaS like Heroku or AWS Elastic Beanstalk, unmanaged virtual private servers, shared hosting, and so on.

Deploying a Laravel app on a shared server using cPanel is as easy as zipping up the source code along with all the dependencies and uploading it to the server. But on shared hosting, you don’t have much control over the server.

PaaS like Heroku or AWS Elastic Beanstalk strike a good balance between ease of use and control, but they can be expensive at times. A standard 1x dyno from Heroku, for example, costs $25 per month and comes with only 512MB of RAM.

Unmanaged virtual private servers are affordable and provide a lot of control over the server. You can get a server with 2GB of RAM, 20GB of SSD space, and 2TB of transfer bandwidth for only $15 per month.

Now the problem with unmanaged virtual private servers is that they are unmanaged. You’ll be responsible for installing all necessary software, configuring them, and keeping them updated.

In this article, I’ll guide you step by step through deploying a Laravel project on an unmanaged virtual private server (we’ll refer to it as a VPS from now on). If you want to check out the benefits of the framework first, go ahead and get an answer to the question of why use the Laravel framework. If you are ready, without any further ado, let’s jump in.

Prerequisites

The article assumes that you have previous experience working with the Linux command line. The server will use Ubuntu as its operating system, and you’ll have to perform all the necessary tasks from the terminal. The article also expects you to understand basic concepts like sudo, file permissions, the differences between a root and non-root user, and git.

Project Code and Deployment Plan

I’ve built a dummy project for this article. It’s a simple question board application where users can post a question, and others can answer that question. You can consider this a dumbed-down version of StackOverflow.

The project source code is available in the https://github.com/fhsinchy/guide-to-deploying-laravel-on-vps repository. Make a fork of this repository and clone it on your local computer.

Once you have a copy of the project on your computer, you’re ready to start the Laravel deployment process. You’ll start by provisioning a new VPS and setting up a way for pushing the source from your local computer to the server.

Provisioning a New Ubuntu Server

There are several VPS providers out there, such as DigitalOcean, Vultr, Linode, and Hetzner. Although working with an unmanaged VPS is more or less the same across providers, they don’t provide the same kind of services.

DigitalOcean, for example, provides managed database services. Linode and Vultr, on the other hand, don’t have such services. You don’t have to worry about these differences.

I’ll demonstrate only the unmanaged way of doing things. So regardless of the provider, you’re using, the steps should be identical.

Before provisioning a new server, you’ll have to generate SSH keys.

Generating New SSH Keys

According to Wikipedia – “Secure Shell (SSH) is a cryptographic network protocol for operating network services securely over an unsecured network.” It allows you to connect to a remote server using a password or a key-pair.

If you’re already familiar with SSH and have previously generated SSH key-pairs on your computer, you may skip this subsection. To generate a new key-pair on macOS, Linux, or Windows 10 machines, execute the following command:

ssh-keygen -t rsa

You’ll see several prompts on the terminal. You can go through them by pressing enter. You don’t have to put any password either. Once you’ve generated the key-pair, you’ll find a file named id_rsa.pub inside the ~/.ssh/ directory. You’ll need this file when provisioning a new VPS.
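
To print the public key so you can copy it in the next step, you can cat the file:

cat ~/.ssh/id_rsa.pub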

Provisioning a New VPS

I’ve already said there are some differences between the VPS service providers, so if you want to be absolutely in line with this article, use DigitalOcean.

A single virtual private server on DigitalOcean is known as a droplet. On Vultr, it’s called an instance, and on Linode, it’s called a linode. Log into your provider of choice and create a new VPS. Use Ubuntu 20.04 LTS as the operating system.

For size, pick the one with 1GB of RAM and 25GB of SSD storage. It should cost you around $5 per month. For the region, choose the one closest to your users. I live in Bangladesh, and most of my users are from here, so I deploy my applications in the Singapore region.

Under the SSH section, create a new SSH key. Copy the content from the ~/.ssh/id_rsa.pub file and paste it as the content. Put a descriptive name for the key and save.

You can leave the rest of the options untouched. Most of the providers come with an automatic backup service. For this demonstration, keep that option disabled. But in a real scenario, it can be a lifesaver. After the process finishes, you’ll be ready to connect to your new server using SSH.

Performing Basic Setup

Now that your new server is up and running, it’s time to do some basic setup. First, use SSH with the server IP address to log in as the root user.

ssh root@104.248.157.172

You can find the server’s IP address on the dashboard or inside the server details. Once you’re inside the server, the first thing to do is create a new non-root user.

By default, every server comes with the root user only. The root user, as you may already know, is very mighty. If someone manages to hack your server and logs in as the root user, the hacker can wreak havoc. Disabling login for the root user can prevent such mishaps.

Also, logging in using a key-pair is more secure than logging in using a password, so password login should be disabled for all users.

To create a new user from the terminal, execute the following command inside your server:

adduser nonroot

The name nonroot can be anything you want. I used nonroot as the name to make the fact clear that this is a non-root user. The adduser program will ask for a password and several other information. Put a strong password and leave the others empty.

After creating the user, you’ll have to add this new user to the sudo group. Otherwise, the nonroot user will be unable to execute commands using sudo.

usermod -aG sudo nonroot

In this command, sudo is the group name, and nonroot is the username. Now, if you try to log into this account, you’ll face a permission denied error.

It happens because most of the VPS providers disable login using a password when you add an SSH key to the server, and you haven’t configured the new user to use SSH key-pairs. One easy way to fix this is to copy the content of /root/.ssh directory to the /home/nonroot/.ssh directory. You can use the rsync program to do this.

rsync --archive --chown=nonroot:nonroot /root/.ssh /home/nonroot

The --archive option for rsync copies directories recursively, preserving symbolic links, user and group ownership, and timestamps. The --chown option sets the nonroot user as the owner in the destination. Now you should be able to log in as the new user using SSH.

After logging in as a non-root user, you should update the operating system, including all the installed programs on the server. To do so, execute the following command:

sudo apt update && sudo apt upgrade -y && sudo apt dist-upgrade -y

Downloading and installing the updates will take a few minutes. During this process, if you see a screen titled “Configuring openssh-server” asking about some file changes, select the “keep the local version currently installed” option and press enter.

After the update process finishes, reboot the server by executing the sudo reboot command. Wait a few minutes for the server to boot again and log back in as a non-root user.

Deploying Code on the Server

After completing the basic setups, the next thing you’ll tackle is deploying code on the server. I’ve seen people cloning the repository somewhere on the production server and logging into the server to perform a pull whenever there are some new changes to the code.

There is a much better way of doing this. Instead of logging into the server to perform a pull, you can use the server itself as a repository and push code directly to it. You can also automate post-deployment steps like installing dependencies, running the migrations, and so on, which makes deploying Laravel to the server effortless. But before doing all that, you’ll first have to install PHP and Composer on the server.

Installing PHP

You can find a list of PHP packages required by Laravel on the official docs. To install all these packages, execute the following command on your server:

sudo apt install php7.4-fpm php7.4-bcmath php7.4-json php7.4-mbstring php7.4-xml -y

Depending on whether you’re using MySQL, PostgreSQL, or SQLite in your project, you’ll have to install one of the following packages:

sudo apt install php7.4-mysql php7.4-pgsql php7.4-sqlite3 -y

The following package provides support for the Redis in-memory databases:

sudo apt install php7.4-redis

Apart from these packages, you’ll also need php-curl, php-zip, zip, unzip, and curl utilities.

sudo apt install zip unzip php7.4-zip curl php7.4-curl -y

The question bank project uses MySQL as its database system and Redis for caching and running queues, so you’ll have to install the php7.4-mysql and the php7.4-redis packages.

Depending on the project, you may have to install more PHP packages. Projects that work with images, for example, usually depend on the php-gd package. Also, you don’t have to mention the PHP version with every package name. If you don’t specify a version number, APT will automatically install the latest available version.

At the time of writing, PHP 7.4 is the latest version in Ubuntu’s package repositories, but considering that the question board project requires PHP 7.4 and PHP 8 may become the default in the future, I’ve specified the version number throughout this article.

Installing Composer

After installing PHP and all the required packages on the server, now you’re ready to install Composer. To do so, navigate to the official composer download page and follow the command-line installation instructions or execute the following commands:

php -r "copy('https://getcomposer.org/installer', 'composer-setup.php');"
sudo php composer-setup.php --install-dir /usr/local/bin --filename composer
php -r "unlink('composer-setup.php');"

Now that you’ve installed both PHP and Composer on your server, you’re ready to configure the automated deployment of your code.

Deploying Code Using Git

For automating code deployment on the server, log in as a non-root user and create a new directory under the /home/nonroot directory. You’ll use this directory as the repository and push production code to it.

mkdir -p /home/nonroot/repo/question-board.git

The -p option to the mkdir command will create any nonexistent parent directories. Next, cd into the newly created directory and initialize a new bare git repository.

cd /home/nonroot/repo/question-board.git
git init --bare

A bare repository is the same as a regular git repository, except that it doesn’t have a working tree. The practical use of such a repository is as a remote origin. Don’t worry if that doesn’t fully make sense just now; things will become clear as you keep going.

Assuming you’re still inside the /home/nonroot/repo/question-board.git directory, cd inside the hooks subdirectory and create a new file called post-receive.

cd hooks
touch post-receive

Files inside this directory are regular shell scripts that git invokes when some major event happens on a repository. Whenever you push some code, git will wait until all the code has been received and then call the post-receive script.

Assuming you’re still inside the hooks directory, open the post-receive script by executing the following command:

nano post-receive

Now update the script’s content as follows:

#!/bin/sh

sudo /sbin/deploy
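
As written, this hook deploys on every push to any branch. For reference, git feeds post-receive one line per updated ref on standard input (old SHA, new SHA, ref name), so a slightly more defensive variant, sketched here, could deploy only when master is updated:

#!/bin/sh

# git passes "<old-sha> <new-sha> <ref-name>" per updated ref on stdin
while read oldrev newrev refname; do
    if [ "$refname" = "refs/heads/master" ]; then
        sudo /sbin/deploy
    fi
done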

As you may have already guessed, /sbin/deploy is another script you’ll have to create. The /sbin directory is mainly responsible for storing scripts that perform administrative tasks. Go ahead and touch the /sbin/deploy script and open it using the nano text editor.

sudo touch /sbin/deploy
sudo nano /sbin/deploy

Now update the script’s content as follows:

#!/bin/sh

git --work-tree=/srv/question-board --git-dir=/home/nonroot/repo/question-board.git checkout -f

As the #!/bin/sh line indicates, this is a shell script. After that line, the only command in the script checks out the content of the /home/nonroot/repo/question-board.git repository into the /srv/question-board directory.

Here, the --work-tree option specifies the destination directory, and the --git-dir option specifies the source repository. I like to use the /srv directory for storing files served by this server. If you want to use the /var/www directory, go ahead.

Save the file by hitting Ctrl + O and exit nano by hitting the Ctrl + X key combination. Make sure that both the hook and the deploy script have executable permission by executing the following commands:

sudo chmod +x /home/nonroot/repo/question-board.git/hooks/post-receive
sudo chmod +x /sbin/deploy

The last step to make this process functional is creating the work tree or the destination directory. To do so, execute the following command:

sudo mkdir /srv/question-board

Now you have a proper work tree directory, a bare repository, and a post-hook that in turn calls the /sbin/deploy script with sudo. But, how would the post-receive hook invoke the /sbin/deploy script using sudo without a password?

Open the /etc/sudoers file on your server using the nano text editor and append the following line of code at the end of the file:

nonroot ALL=NOPASSWD: /sbin/deploy

This line of code means that the nonroot user will be able to execute the /sbin/deploy script with sudo on ALL hosts with NOPASSWD, or no password. Save the file by pressing Ctrl + O and exit nano by pressing the Ctrl + X key combination.

Finally, you’re ready to push the project source code. Assuming that you’ve already forked and cloned the https://github.com/fhsinchy/guide-to-deploying-laravel-on-vps repository on your local system, open up your terminal on the project root and execute the following command:

git remote add production ssh://nonroot@104.248.157.172/home/nonroot/repo/question-board.git

Make sure to replace my IP address with the IP address from your server. Now, assuming that the stable code is on the master branch, you can push code to the server by executing the following command:

git push production master

After sending the code to the server, log back in as a non-root user and cd into the /srv/question-board directory. Use the ls command to list out the content, and you should see that git has successfully checked out your project code.

Automating Post Deployment Steps

Congratulations, you’re now able to deploy a Laravel project to the server directly. But is that enough? What about the post-deployment steps? Tasks like installing or updating dependencies, migrating the database, caching the views, configs, and routes, restarting workers, and so on.

Honestly, automating these tasks is much easier than you may think. All you have to do is create a script that does all of this for you, set some permissions, and call that script from inside the post-receive hook.

Create another script called post-deploy inside the /sbin directory. After creating the file, open it inside the nano text editor.

sudo touch /sbin/post-deploy
sudo nano /sbin/post-deploy

Update the content of the post-deploy script as follows. Don’t worry if you don’t clearly understand everything. I’ll explain each line in detail.

#!/bin/sh

cd /srv/question-board

cp -n ./.env.example ./.env

COMPOSER_ALLOW_SUPERUSER=1 composer install --no-dev --optimize-autoloader
COMPOSER_ALLOW_SUPERUSER=1 composer update --no-dev --optimize-autoloader

The first command changes the working directory to the /srv/question-board directory. The second makes a copy of the .env.example file. The -n option makes sure that the cp command doesn’t override a previously existing file.

The third and fourth commands will install all the necessary dependencies and update them if necessary. The COMPOSER_ALLOW_SUPERUSER environment variable disables a warning about running the composer binary as root.

Save the file by pressing Ctrl + O and exit nano by pressing Ctrl + X key combination. Make sure that the script has executable permission by executing the following command:

sudo chmod +x /sbin/post-deploy

Open the /home/nonroot/repo/question-board.git/hooks/post-receive script with nano and append the following line after the sudo /sbin/deploy script call:

sudo /sbin/post-deploy

Make sure that you call the post-deploy script after calling the deploy script. Save the file by pressing Ctrl + O and exit nano by pressing the Ctrl + X key combination.

Open the /etc/sudoers file on your server using the nano text editor once again and update the previously added line as follows:

nonroot ALL=NOPASSWD: /sbin/deploy, /sbin/post-deploy

Save the file by pressing Ctrl + O and exit nano by pressing the Ctrl + X key combination. You can add more post-deploy steps to this script if necessary.

To test the new post-deploy script, make some changes to your code, commit the changes, and push to the production master branch. This time you’ll see the Composer package installation progress on the terminal, along with output from the other script steps.

Once the deployment process finishes, log back into the server, cd into the /srv/question-board directory, and list the content by executing the following command:

ls -la

Among other files and folders, you’ll see a newly created vendor directory and a .env file. At this point, you can generate the application encryption key required by Laravel. To do so, execute the following command:

sudo php artisan key:generate

If you look at the content of the .env file using the nano text editor, you’ll see the APP_KEY value populated with a long string.

Installing and Configuring NGINX

Now that you’ve successfully pushed the source code to the server, the next step is to install a web server and configure it to serve your application. I’ll use NGINX in the article. If you want to use something else like Apache, you’ll be on your own.

This article will strictly focus on configuring the webserver for serving a Laravel application and will not discuss NGINX-related stuff in detail. NGINX itself is a very complex software, and if you wish to learn NGINX from the ground up, The NGINX Handbook is a solid resource.

To install NGINX on your Ubuntu server, execute the following command:

sudo apt install nginx -y

This command should install NGINX and should also register as a systemd service. To verify, you can execute the following command:

sudo systemctl status nginx

You should see output indicating that the NGINX service is active (running). You can regain control of the terminal by hitting q on your keyboard. Now that NGINX is running, you should see the default welcome page of NGINX if you visit the server IP address.

You’ll have to change the NGINX configuration to serve your Laravel application instead. To do so, create a new file /etc/nginx/sites-available/question-board and open the file using the nano text editor.

sudo touch /etc/nginx/sites-available/question-board
sudo nano /etc/nginx/sites-available/question-board

This file will contain the NGINX configuration code for serving the question board application. Configuring NGINX from scratch can be difficult, but the official Laravel docs have a pretty good configuration. What follows is the code copied from the docs:

server {
    listen 80;
    server_name 104.248.157.172;
    root /srv/question-board/public;

    add_header X-Frame-Options "SAMEORIGIN";
    add_header X-Content-Type-Options "nosniff";

    index index.php;

    charset utf-8;

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    location = /favicon.ico { access_log off; log_not_found off; }
    location = /robots.txt  { access_log off; log_not_found off; }

    error_page 404 /index.php;

    location ~ \.php$ {
        fastcgi_pass unix:/var/run/php/php7.4-fpm.sock;
        fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
        include fastcgi_params;
    }

    location ~ /\.(?!well-known).* {
        deny all;
    }
}

You don’t have to make any changes to this code except the server_name and root lines. Make sure you’re using the IP address from your server as the server_name, and that the root is pointing to the correct directory. You’ll replace this IP address with a domain name in a later section.

Also, inside the location ~ \.php$ { } block, make sure that the fastcgi_pass directive is pointing to the correct PHP version. In this demonstration, I’m using PHP 7.4, so this configuration is correct. If you’re using a different version, like 8.0 or 8.1, update the code accordingly.

If you cd into the /etc/nginx directory and list out the content using the ls command, you’ll see two folders named sites-available and sites-enabled.

The sites-available folder holds all the different configuration files serving applications (yes, there can be multiple) from this server.

The sites-enabled folder, on the other hand, contains symbolic links to the active configuration files. So if you do not make a symbolic link of the /etc/nginx/sites-available/question-board file inside the sites-enabled folder, it’ll not work. To do so, execute the following command:

sudo ln -s /etc/nginx/sites-available/question-board /etc/nginx/sites-enabled/question-board
sudo rm /etc/nginx/sites-enabled/default

The second command gets rid of the default configuration file to avoid any unintended conflict. To test if the configuration code is okay or not, execute the following command:

sudo nginx -t

If everything’s alright, reload the NGINX configuration by executing the following command:

sudo nginx -s reload

If you visit your server IP address, you’ll see that NGINX is serving your application, but the application is throwing a 500 internal server error.

The application is trying to write to the storage/logs folder but fails. That happens because the root user owns the /srv/question-board directory, while the www-data user owns the NGINX process. To make the /srv/question-board/storage directory writable by the application, you’ll have to alter the directory permissions.

Configuring Directory Permissions

There are different ways of configuring directory permissions in a Laravel project, but I’ll show you the one I use. First, you’ll have to assign www-data, the group that owns the NGINX process, as the group of the /srv/question-board directory. To do so, execute the following command:

sudo chown -R :www-data /srv/question-board

Then, set the permissions of the /srv/question-board/storage directory to 775, which means read and execute access for all users and write access for the owner and the group, by executing the following command:

sudo chmod -R 775 /srv/question-board/storage

Finally, there is one more subdirectory that you have to make writable: the /srv/question-board/bootstrap/cache directory. To do so, execute the following command:

sudo chmod -R 775 /srv/question-board/bootstrap/cache

If you go back to the server IP address now and refresh, you should see that the application is working fine.

Installing and Configuring MySQL

Now that you’ve successfully installed and configured the NGINX web server, it’s time for you to install and configure MySQL. To do so, install the MySQL server by executing the following command:

sudo apt install mysql-server -y

After the installation process finishes, execute the following command to make your MySQL installation more secure:

sudo mysql_secure_installation

First, the script will ask if you want to use the validate password component or not. Input “Y” as the answer and hit enter. Then, you’ll have to set the desired level of password difficulty. I recommend setting it to high. Picking a hard-to-guess password every time you want to create a new user can be annoying, but for the sake of security, roll with it. In the next step, set a secure password for the root user. You can answer “Y” to the rest of the questions. Give the questions a read if you want to.

Now, before you can log into your database server as root, you’ll have to switch to the root user. To do so, execute the following command:

sudo su

Log into your database server as root by executing the following command:

mysql -u root

Once you’re in, create a new database for the question board application by executing the following SQL code:

CREATE DATABASE question_board;

Next, create a new database user by executing the following SQL code:

CREATE USER 'nonroot'@'localhost' IDENTIFIED BY 'password';

Again, I used the name nonroot to clarify that this is a non-root user. You can use whatever you want as the name. Also, replace the word password with something more secure.

After that, grant the newly created user full privileges on the question_board database by executing the following SQL code:

GRANT ALL PRIVILEGES ON question_board.* TO 'nonroot'@'localhost';

In this code, question_board.* means all the tables of the question_board database. Finally, quit the MySQL client by executing the \q command and exit the root shell by invoking the exit command.

Now, try logging in as the nonroot user by executing the following command:

mysql -u nonroot -p

The MySQL client will ask for the password. Use the password you put in when creating the nonroot user. If you manage to log in successfully, exit the MySQL client by executing the \q command.

Now that you have a working database server, it’s time to configure the question board project to make use of it. First, cd into the /srv/question-board directory and open the .env file using the nano text editor:

cd /srv/question-board
sudo nano .env

Update the database configuration as follows:

DB_CONNECTION=mysql
DB_HOST=localhost
DB_PORT=3306
DB_DATABASE=question_board
DB_USERNAME=nonroot
DB_PASSWORD=password

Make sure to replace the username and password with yours. Save the file by pressing Ctrl + O and exit nano by pressing Ctrl + X key combination. To test out the database connection, try migrating the database by executing the following command:

php artisan migrate --force

If everything goes fine, that means the database connection is working. The project comes with two seeder classes, one for seeding the admin user and another for the categories. Execute the following commands to run them:

php artisan db:seed --class=AdminUserSeeder
php artisan db:seed --class=CategoriesSeeder

Now, if you visit the server IP address and navigate to the /questions route, you’ll see the list of categories. You’ll also be able to log in as the admin user using the following credentials:

email: [email protected]
password: password

If you’ve been working with Laravel for a while, you may already know that it is common practice to add new migration files when there is a database change. To automate running the migrations on every deployment, open the /sbin/post-deploy script using nano once again and append the following line at the end of the file:

php artisan migrate --force

The --force option will suppress an artisan warning about running migrations in a production environment. Unlike migrations, seeders should run only once. If you add new seeders in later deployments, you’ll have to run them manually.

Configure Laravel Horizon

The question board project comes with Laravel Horizon pre-installed and pre-configured. Horizon processes queued jobs through Redis; note that the php7.4-redis package installed earlier is only the PHP extension, so the Redis server itself also needs to be running. Once Redis is up and running, you’re ready to start processing jobs.
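
If the Redis server isn’t already installed, it’s available from Ubuntu’s package repositories. This step isn’t covered above, so treat it as an assumed prerequisite:

sudo apt install redis-server -y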

The official docs suggest using the supervisor program for running Laravel Horizon on a production server. To install the program, execute the following command:

sudo apt install supervisor -y

Supervisor configuration files live within your server’s /etc/supervisor/conf.d directory. Create a new file /etc/supervisor/conf.d/horizon.conf and open it using the nano text editor:

sudo touch /etc/supervisor/conf.d/horizon.conf
sudo nano /etc/supervisor/conf.d/horizon.conf

Update the file’s content as follows:

[program:horizon]
process_name=%(program_name)s
command=php /srv/question-board/artisan horizon
autostart=true
autorestart=true
user=root
redirect_stderr=true
stdout_logfile=/var/log/horizon.log
stopwaitsecs=3600

Save the file by pressing Ctrl + O and exit nano by pressing the Ctrl + X key combination. Now, execute the following commands to update the supervisor configuration and start the horizon process:

sudo supervisorctl reread
sudo supervisorctl update
sudo supervisorctl start horizon

To test out if Laravel Horizon is running or not, visit your server’s IP address and navigate to the /login page. Log in as the admin user and navigate to the /horizon route. You’ll see Laravel Horizon in the active state.

I’ve configured Laravel Horizon to only let the admin user in, so if you log in with some other user credential, you’ll see a 403 forbidden error message on the /horizon route.

One thing that catches many people off guard is that if you make changes to your jobs, you’ll have to restart Laravel Horizon for it to pick up those changes. I recommend adding a line to the /sbin/post-deploy script to restart the Laravel Horizon process on every deployment.

To do so, open the /sbin/post-deploy using the nano text editor and append the following line at the end of the file:

sudo supervisorctl restart horizon

This command will stop and restart the Laravel Horizon process on every deployment.

Configuring a Domain Name With HTTPS

For this step to work, you’ll have to own a custom domain name of your own. I’ll use the questionboard.farhan.dev domain name for this demonstration.

Log into your domain name provider of choice and go to the DNS settings for your domain name. Whenever you want a domain name to point to a server’s IP address, you need to create a DNS record of type A.

To do so, add a new DNS record with the following attributes:

Type: A Record

Host: questionboard

Value: 104.248.157.172

Make sure to replace my IP address with yours. If you want your top-level domain to point to an IP address instead of a subdomain, just put a @ as the host.

Now go back to your server and open the /etc/nginx/sites-available/question-board config file using the nano text editor. Remove the IP address from the server_name directive and write your domain name. Do not put http:// or https:// at the beginning.

You can put multiple domain names such as the top-level domain and the www subdomain separated by spaces. Save the configuration file by pressing Ctrl + O and Ctrl + X key combination. Reload NGINX configuration by executing the following command:

sudo nginx -s reload

Now you can visit your application using your domain name instead of the server’s IP address. To enable HTTPS on your application, you can use the certbot program.

To do so, install certbot by executing the following command:

sudo snap install --classic certbot

It is a python program that allows you to use free SSL certificates very easily. After installing the program, execute the following command to get a new certificate:

sudo certbot --nginx

First, the program will ask for your email address. Next, it’ll ask if you agree with the terms and agreements or not.

Then, it’ll ask you about sharing your email address with the Electronic Frontier Foundation.

In the third step, the program will read the NGINX configuration file and extract the domain names from the server_name directive. Look at the domain names it shows and press enter if they are all correct. After deploying the new certificate, the program will congratulate you, and now you’ve got free HTTPS protection for 90 days.

After 90 days, the program will attempt to renew the certificate automatically. To test the auto-renew feature, execute the following command:

sudo certbot renew --dry-run

If the simulation succeeds, you’re good to go.

Configuring a Firewall

Having a properly configured firewall is very important for the security of your server. In this article, I’ll show you how you can configure the popular UFW program.

UFW stands for uncomplicated firewall, and it comes by default in Ubuntu. You’ll configure UFW to, by default, allow all outgoing traffic from the server and deny all incoming traffic to the server. To do so, execute the following command:

sudo ufw default deny incoming
sudo ufw default allow outgoing

Denying all incoming traffic means that no one, including you, will be able to access your server in any way. The next step is to allow incoming requests on three specific ports. They are as follows:

Port 80, used for HTTP traffic.

Port 443, used for HTTPS traffic.

Port 22, used for SSH traffic.

To do so, execute the following commands:

sudo ufw allow http
sudo ufw allow https
sudo ufw allow ssh

Finally, enable UFW by executing the following command:

sudo ufw enable
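
To verify that the rules are in place, you can print the firewall status:

sudo ufw status verbose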

That’s pretty much it. Your server now only allows HTTP, HTTPS, and SSH traffic coming from the outside, making your server a bit more secure.

Laravel Post-deployment Optimizations

Your application is now almost ready to accept requests from all over the world. One last step that I would like to suggest is caching the Laravel configuration, views, and routes for better performance.

To do so, open the /sbin/post-deploy script using the nano text editor and append the following lines at the end of the file:

php artisan config:cache
php artisan route:cache
php artisan view:cache

Now, on every deployment, the caches will be cleared and renewed automatically. Also, make sure to set APP_ENV to production and APP_DEBUG to false inside the .env file. Otherwise, you may unintentionally expose sensitive information about your server.
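
For reference, with every line this article has appended, the complete /sbin/post-deploy script now looks like this:

#!/bin/sh

cd /srv/question-board

cp -n ./.env.example ./.env

COMPOSER_ALLOW_SUPERUSER=1 composer install --no-dev --optimize-autoloader
COMPOSER_ALLOW_SUPERUSER=1 composer update --no-dev --optimize-autoloader

php artisan migrate --force

sudo supervisorctl restart horizon

php artisan config:cache
php artisan route:cache
php artisan view:cache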

Conclusion

I would like to thank all Laravel developers for the time they’ve spent reading this article. I hope you’ve enjoyed it and have learned some handy stuff regarding application deployment. If you want to learn more about NGINX, consider checking out my open-source NGINX Handbook with tons of fun content and examples.

Also, if you want to broaden your knowledge of Laravel, you can check out the Laravel vs Symfony, Laravel Corcel, and Laravel Blockchain articles.

If you have any questions or confusion, feel free to reach out to me. I’m available on Twitter and LinkedIn and always happy to help. Till the next one, stay safe and keep on learning.

Laravel News Links