Scott Adams for the win

According to CNN...

From 90 Miles From Tyranny

Gun Free Zone

So much unscrambling will need to be done

It’s a heated discussion in my history classroom every year. Shortly after the semester break, I share an excerpt from the classic sociological study by Robert and Helen Lynd, "Middletown." The snippet I use depicts the changes that come to the quintessential American town around the turn of the century, with the introduction of the automobile into daily life.

Not the Bee

How to Create Short URLs in Laravel

Introduction

Short URL is an open-source Laravel package that you can use to create short URLs for your web apps. It comes with different options for tracking the users who click your short URLs, and it only takes a couple of minutes to add to your Laravel project.

In this article, we’re going to step through how to install Short URL (ashallendesign/short-url) in your Laravel projects and then take a look at a few of the different customisation options that are available. If you’re interested in checking out the code for the package and seeing what other functionality the package provides, you can view it in the GitHub repository.

There’s also a really useful, quick review video by Povilas Korop (Laravel Daily) at the end of this article to show how you can use the package.

To get a better idea of what the package does, let’s take a quick look at a basic example. Imagine that you have a Laravel app hosted on https://my-web-app.com and you want to create a short URL to redirect the user to https://ashallendesign.co.uk. To do this, your code might look something like this:

use AshAllenDesign\ShortURL\Facades\ShortURL;

$shortUrl = ShortURL::destinationUrl('https://ashallendesign.co.uk')->make();

We can then imagine that this code would create a short URL similar to this: http://my-web-app.com/short/abc123. Now, if you were to navigate to this URL, you’d be redirected to https://ashallendesign.co.uk and your visit would be recorded in the database (if the tracking features are enabled).

In fact, if you read my monthly Round Up articles (such as “Round Up: March 2022”), you’ll likely know that I’m using Short URL as the basis for building a small privacy-first, open-source URL shortening service. The service is called Mango Two, and it’s something that I’m slowly building as a side project so that I can get some practice with using TypeScript to build a Chrome extension. If you’re interested in checking it out, there’s already an early version of the extension available that you can install in under 30 seconds! Any feedback on the Mango Two Chrome extension is greatly appreciated.

Installing the Package

To get started with the Short URL package, you’ll need to make sure that your Laravel app is using at least Laravel 8.0 and PHP 8.0.

You can install the package via Composer using the following command:

        

1composer require ashallendesign/short-url

After installing the package, you can then publish the package’s config file and database migrations by using the following command:

        

1php artisan vendor:publish --provider="AshAllenDesign\ShortURL\Providers\ShortURLProvider"

This package contains several migrations that add two new tables to the database: short_urls and short_url_visits. To run these migrations, simply run the following command:
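
php artisan migrate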

Congratulations, Short URL should now be installed in your Laravel app and ready to use!

Creating Short URLs

Now that we’ve installed Short URL, let’s take a look at how we can create our own short URLs.

The quickest way would be to use something similar to the snippet below. We simply need to choose the destination URL that visitors will be redirected to, and then use the make method to store the short URL in the database.

use AshAllenDesign\ShortURL\Facades\ShortURL;

$shortURLObject = ShortURL::destinationUrl('https://destination.com')->make();

$shortURL = $shortURLObject->default_short_url;

The make method returns an AshAllenDesign\ShortURL\Models\ShortURL model that extends the default Laravel Illuminate\Database\Eloquent\Model class. So, all of the usual methods that you’d typically call on your Laravel models can also be used here if you’d like.
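
For instance, here’s a quick sketch of ordinary Eloquent usage on the model (the url_key column name is an assumption based on the package’s migration; adjust it if your copy differs):

use AshAllenDesign\ShortURL\Models\ShortURL;

// Because ShortURL is a standard Eloquent model, the usual query
// builder methods apply. For example, look a short URL up by its
// key (assumed column name: url_key) and then delete it:
$shortURL = ShortURL::where('url_key', 'abc123')->first();

if ($shortURL) {
    $shortURL->delete();
}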

Using Custom Short URL Keys

By default, the shortened URL that is generated will contain a random key (the key is the unique identifier that is placed at the end of short URLs). For example, if a short URL is https://webapp.com/short/abc123, the key would be abc123.

Sometimes, you may wish to define a custom key for a URL that is more meaningful to your visitors than a randomly generated one. This is perfect if you’re using the short URLs for things like marketing or advertising campaigns.

To define a custom short URL key, you can use the urlKey() method, as in the example below:

use AshAllenDesign\ShortURL\Facades\ShortURL;

$shortUrl = ShortURL::destinationUrl('https://destination.com')
    ->urlKey('custom-key')
    ->make()
    ->default_short_url;

// $shortUrl will be equal to: "https://webapp.com/short/custom-key"

Tracking Visitors

Depending on what you’re using the short URLs for, you may want to track some data about the visitors that have used the short URL. This can be particularly useful for analytics.

By default, tracking is enabled and all of the available tracking fields are also enabled. You can toggle the default options for the different parts of the tracking in the package’s short-url.php config file that you published when installing the package.
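
For reference, the relevant section of the published config looks roughly like this (a sketch based on the package’s published config file; double-check the option names against your own copy):

config/short-url.php

'tracking' => [

    // Whether tracking is enabled by default for new short URLs.
    'default_enabled' => true,

    // Which fields are recorded when tracking is enabled.
    'fields' => [
        'ip_address' => true,
        'operating_system' => true,
        'operating_system_version' => true,
        'browser' => true,
        'browser_version' => true,
        'referer_url' => true,
        'device_type' => true,
    ],
],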

If you want to override the default from the config file and explicitly choose whether tracking is enabled when creating a shortened URL, you can use the trackVisits() method.

For example, if we wanted to force tracking to be enabled for the URL, our code might look something like this:

$shortURLObject = ShortURL::destinationUrl('https://destination.com')
    ->trackVisits()
    ->make();

Likewise, if we wanted to force tracking to be disabled for the URL, our code might look something like this:

$shortURLObject = ShortURL::destinationUrl('https://destination.com')
    ->trackVisits(false)
    ->make();

Enabling Tracking Fields

If tracking is enabled for a shortened URL, each time the link is visited, a new ShortURLVisit row will be created in the database. By default, the package will record the following fields for each visitor:

  • IP Address
  • Browser Name
  • Browser Version
  • Operating System Name
  • Operating System Version
  • Referer URL (the URL that the visitor originally came from)
  • Device Type (can be: desktop/mobile/tablet/robot)

Each of these fields can be toggled in the config file so that you only record the fields you need. However, if you want to override any of the default options, you can do so when creating your short URL.
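
If you later want to inspect the recorded visits, the package exposes them through a relationship on the model. Here’s a minimal sketch (the visits relationship name is taken from the package’s documentation; the url_key column name is an assumption):

use AshAllenDesign\ShortURL\Models\ShortURL;

$shortURL = ShortURL::where('url_key', 'abc123')->first();

// Each ShortURLVisit row holds the tracked fields listed above.
foreach ($shortURL->visits as $visit) {
    echo $visit->ip_address;
}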

For example, if we wanted to force all of the tracking fields to be enabled when creating our short URLs, our code might look something like this:

ShortURL::destinationUrl('https://destination.com')
    ->trackVisits()
    ->trackIPAddress()
    ->trackBrowser()
    ->trackBrowserVersion()
    ->trackOperatingSystem()
    ->trackOperatingSystemVersion()
    ->trackDeviceType()
    ->trackRefererURL()
    ->make();

It’s worth noting that each of the tracking methods also allows you to pass false as the argument to force a specific field not to be tracked. For example, if we wanted to force the IP address not to be tracked, our code could look something like this:

ShortURL::destinationUrl('https://destination.com')
    ->trackVisits()
    ->trackIPAddress(false)
    ->make();

Creating Single-use Short URLs

By default, all of the short URLs that you create can be visited for as long as you leave them available in your database. However, depending on how you’re using them in your applications, you may want to only allow access to a short URL once. This would mean that any subsequent visitors who visit the URL after it has already been viewed will get an HTTP 404 response.

To create a single-use shortened URL, you can use the ->singleUse() method.

The example below shows how to create a single-use shortened URL:

ShortURL::destinationUrl('https://destination.com')->singleUse()->make();

Setting Activation and Deactivation Times

By default, all short URLs that you create are active and accessible as soon as you create them and until you delete them from your database. However, the package provides functionality for you to set activation and deactivation times for your URLs when you’re creating them.

Doing this can be useful for things like marketing or advertising campaigns. For example, you may want to launch a new URL for a marketing campaign on a given date and then automatically deactivate that URL when the campaign comes to an end.

The example below shows how to create a short URL that will be active from this time tomorrow onwards:

ShortURL::activateAt(now()->addDay())->make();

The example below shows how to create a short URL that will be active from this time tomorrow onwards and then deactivated the day after:

ShortURL::activateAt(now()->addDay())
    ->deactivateAt(now()->addDays(2))
    ->make();

If a user were to visit a short URL before it was activated or after it was deactivated, they would receive an HTTP 404 response.

Customising the Short URL Prefix

The Short URL package comes with a route that you can use for your short URLs without any further setup. By default, this route is /short/{shortURLKey}.

You might want to keep using this default route but change the /short/ prefix to something else. To do this, you can change the prefix field in the config.

For example, if we wanted to change the default short URL prefix to /s, we could change the config value like so:

config/short-url.php

return [

    // ...

    'prefix' => 's',

    // ...

];

Likewise, you may also remove the prefix from the default route completely. For example, if you want your short URLs to be accessible via /{shortURLKey}, you could update the prefix config value to null like so:

config/short-url.php

return [

    // ...

    'prefix' => null,

    // ...

];

Using the Short URLs

Now that we know how to create the short URLs, let’s take a look at how to visit them in our applications.

The package makes using the short URLs super simple because it ships with its own route and controller that are automatically available without any setup.

Unless you changed the prefix field in the short-url.php config file, the package’s route is available at short/{urlKey}. This route uses the single-action controller found at \AshAllenDesign\ShortURL\Controllers\ShortURLController.

That’s it, there’s nothing more to it (as long as you want to use the package’s route)! You can start sharing your short URLs and they can be instantly accessed by your visitors.

Using a Custom Route

There may be times when you wish to use your own route or controller for your short URLs rather than the default one that the package provides.

If you want to use a different route but keep the same controller, you’ll just need to add your new route to your web.php file and point it to the controller like so:

web.php

Route::get('/custom/{shortURLKey}', '\AshAllenDesign\ShortURL\Controllers\ShortURLController');

It’s important to remember that your route must include a {shortURLKey} parameter.
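
If you’d rather handle the redirect yourself, a bare-bones sketch might look like this. Treat it as an illustration only: it reads the url_key and destination_url columns directly (names assumed from the package’s migration) and skips the package’s tracking, single-use, and activation logic:

web.php

use AshAllenDesign\ShortURL\Models\ShortURL;
use Illuminate\Support\Facades\Route;

Route::get('/custom/{shortURLKey}', function (string $shortURLKey) {
    // Look the short URL up by its key, or return a 404 if it doesn't exist.
    $shortURL = ShortURL::where('url_key', $shortURLKey)->firstOrFail();

    // Redirect straight to the destination.
    return redirect($shortURL->destination_url);
});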

If you do choose to use your own route or controller, you might want to disable the default route that the package provides. By doing this, any visitors who try to use the package’s default route (when you don’t want them to) will receive an HTTP 404 response. To disable the route, you can set the disable_default_route field in your short-url.php config file to true, like so:

config/short-url.php

return [

    // ...

    'disable_default_route' => true,

    // ...

];

Laravel Daily Review

Povilas Korop also made a quick review video of the Short URL package. So, you can see it in action here:

Conclusion

Hopefully, this post has shown you how you can use the Short URL package in your Laravel apps to create shortened URLs. If you’re interested in checking out the code for the Short URL package, you can view it in the GitHub repo.

If you enjoyed reading this post, I’d love to hear about it. Likewise, if you have any feedback to improve future posts, I’d love to hear that too.

If you’re interested in getting updated each time I publish a new post, feel free to sign up for my newsletter below.

Keep on building awesome stuff! 🚀

Laravel News Links

A Comprehensive Guide to Deploying Laravel Applications on AWS Elastic Beanstalk

When it comes to deploying web applications on the cloud, AWS Elastic Beanstalk is one of the most popular choices. It is a platform-as-a-service (PaaS) from Amazon that makes deploying web applications much easier. In this article, I’ll explain the entire step-by-step process of deploying a Laravel application on AWS Elastic Beanstalk using a practical example.

This article assumes that you have some familiarity with Amazon Web Services and know about the most common ones, such as Amazon EC2, Amazon RDS, and Amazon ElastiCache. Simply knowing what these services are used for will suffice. The article also assumes you’ve worked with common Laravel features, such as Queues, Cache, and Mail.

Project Code

This article comes with a reference Laravel project. It is a simple question board application where users can post questions and ask for answers.

Question Board Application

It’s like a dumbed-down version of StackOverflow but good enough for this article. The project can be found in the fhsinchy/guide-to-deploying-laravel-on-elastic-beanstalk repository. Make a fork of the repository and clone it to your local system. There are two branches: master and completed. You’ll work on the master branch throughout this article.
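
For example, after forking the repository under your own account, cloning it might look like this (replace <your-username> with your GitHub handle):

git clone https://github.com/<your-username>/guide-to-deploying-laravel-on-elastic-beanstalk.git
cd guide-to-deploying-laravel-on-elastic-beanstalk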

Instructions for running the project on your local system can be found in the repository’s README file.

Getting Started With AWS Elastic Beanstalk

According to the AWS Elastic Beanstalk overview page,

AWS Elastic Beanstalk is an easy-to-use service for deploying and scaling web applications and services developed with Java, .NET, PHP, Node.js, Python, Ruby, Go, and Docker on familiar servers such as Apache, Nginx, Passenger, and IIS.

It’s an orchestration service that controls how various resources running on Amazon’s servers work together to power your application. I’m assuming that you already have an AWS account set up; if you don’t, follow the instructions shown here to create a free one.

Log into your AWS account. The first thing to do after logging in is to ensure that you’re in the right region. You can find the list of regions in the top-right corner of the navigation bar. The correct region is the one closest to your application users. I live in Bangladesh and most of my users are from here, so I use Asia Pacific (Singapore) as my default region.

Select the correct region from the list of regions

Once you’ve changed the region, you’re ready to create your first application on AWS Elastic Beanstalk.

Select Elastic Beanstalk from the services list

Creating a New Application

Start by clicking the Services drop-down button on the top-left corner of the navigation bar and selecting Elastic Beanstalk from the menu. It should be listed under the Compute category.

If you do not have any previously created applications on AWS Elastic Beanstalk, you should land directly on the welcome page. Click the Create Application button on the right side to open up the application creation wizard.

Press the "Create Application" button on the welcome page

If you have previously created applications, however, you’ll land on the All environments page by default. To visit the welcome page, click on the bold Elastic Beanstalk link on the top of navigation pane on the left side of the page.

The bold Elastic Beanstalk link in the navigation pane

You can also land on the application creation wizard directly by following this link – https://console.aws.amazon.com/elasticbeanstalk/home#/gettingStarted

The Application name can be anything you want, but to be absolutely in line with this article, put laravel-on-beanstalk as the application name.

Put "laravel-on-beanstalk" as the "application name"

Leave the Application tags empty for now. Under the Platform section, select PHP as the Platform and PHP 7.4 running on 64bit Amazon Linux 2 as the Platform Branch because the reference project doesn’t work with PHP 8. Finally, for the Platform version, select whatever is the latest.

Select PHP 7.4 in the "platform" section

Select Sample code under the Application code section and click on the Create application button. The application creation process takes a while. Once it’s finished, you should land on a page that looks as follows:

Laravelonbeanstalk-env environment dashboard

This is the environment dashboard. Whenever you create a new application on AWS Elastic Beanstalk, it automatically creates a new environment with a similar name. An application can have multiple environments, but an environment can only be attached to one application. In the context of AWS EB, an environment is a collection of AWS resources. By default, every environment comes with the following resources:

  • Amazon EC2 Instance (for running your application)
  • Amazon EC2 Security Group (controls traffic flow from and to the instance)
  • Amazon S3 Bucket (holds your uploaded source code archives)
  • Domain Name (makes your application easily accessible)
  • Amazon CloudWatch Alarms (notifies you if resource usage exceeds a certain threshold)
  • AWS CloudFormation Stack (binds all these resources into a single unit for easier management)

You can even see the Amazon EC2 instance associated with this environment. To do so, click on the Services dropdown once again and select EC2. It’ll be listed under the Compute category. You can also use the search function on the navigation bar to quickly look for services. Just click on the search bar at the top or press the key combination Alt + S to activate it. Search for EC2, and it should show up as the first one on the list.

Search for EC2 in the search bar

On the EC2 management console, click on Instances from the Resources section or the navigation pane on the left side of the page.

List of instances on the EC2 management console

As you can see, AWS Elastic Beanstalk (we’ll call it AWS EB from now on) has created a new EC2 instance with the same name as the environment. Your application will actually run inside this virtual machine. AWS EB will just manage this instance as necessary. When you deploy on AWS EB, you do not pay for AWS EB itself. You pay for all these resources orchestrated by AWS EB to power your application. It’s an orchestration service.

Apart from the concept of an environment, AWS EB also has the concept of an Application. In the context of AWS EB, an application is the combination of the environments attached to it, a copy of your application source code, and several parameters commonly known as the environment configuration.

Consider that you have two different environments for your application. One is the master or production environment, and the other is the staging environment. The master environment is the one your users connect to, and the staging one is for testing. AWS EB makes it very easy to create such arrangements with multiple environments.

Don’t worry if you do not clearly understand these concepts. Everything will become much clearer as you keep going and start to work with the reference project.

Creating a New Environment

The default environment that came with the new application is okay, but I don’t like how it’s named. I usually name my environments following the <application name>-<source code branch> syntax. So, for an application named laravel-on-beanstalk, the environment will be laravel-on-beanstalk-master or lob-master for short.

I also don’t like the fact that it comes with Elastic Load Balancing (ELB) enabled by default. ELB is very useful when you’re running multiple instances of your application. However, for most of the small to medium-scale Laravel projects, a single instance should be enough in the beginning. Once you start getting a lot of users, you can add load balancing to your application by creating a new environment.

To create a new environment, navigate to the list of environments by clicking on Environments from the left sidebar. Start the environment creation wizard by clicking on the Create a new environment button on the top-right corner. You’ll create a Web server environment, so choose that and click Select.

The next step should look familiar to you. In the Application name input box, make sure to type the application name exactly as you did in the application creation step, which was laravel-on-beanstalk. If you write a different name, then AWS EB will create a new application instead of attaching this environment to the previous one.

The Environment name under the Environment information section should be lob-master, which follows the naming convention I taught you a few paragraphs above. I’ll leave the domain name field blank here and let AWS EB decide for me.

Put "laravel-on-beanstalk" as the application name and "lob-master" as the environment name

Under the Platform section, select PHP 7.4 like you did previously. Then, select Sample application under the Application code section and click the Configure more options button.

Select PHP 7.4 in the "Platform" section

For the next step, select Single instance (Free Tier eligible) under the Presets section and click on the Create environment button. Environment creation will take some time. Once it’s done, navigate to the application list by clicking on the Applications link from the navigation pane on the left side of the page.

Click on "Application" from the navigation pane

On the All applications page, you should see both the old and new environment listed beside the application name.

The list of applications

To get rid of the old environment, navigate to the environment dashboard by clicking on the Laravelonbeanstalk-env name. On the dashboard, click on the Actions drop-down button on the top-right corner and select Terminate environment.

Terminate the environment using the "Actions" menu

A confirmation modal will show up. Write the environment name in the input box and click on the Terminate button. Terminating an environment will remove all the resources associated with it. So, if you go back to the Amazon EC2 management console, you’ll see that the Laravelonbeanstalk-env instance has been terminated and a new instance named lob-master has shown up. From now on, you’ll work with the newly created lob-master environment.

Deploying an Application Version

Make sure you’ve cloned the project that comes with this article and that you’re on the master branch. To upload your application source code to AWS EB, you’ll have to put it in a zip archive. To do so, make sure your terminal is opened inside the project directory and execute the git archive -v -o deploy.zip --format=zip HEAD command. This command will create a new deploy.zip archive on your project directory with all the committed files from your currently active branch.

To upload this file to AWS EB, go back to the lob-master environment dashboard and click on the Upload and Deploy button.

Use the "Upload and Deploy" button to deploy a new version

On the next step, click on the Choose button and use the file browser to select your deploy.zip archive. Put something like v0.1 in the Version label field and hit Deploy.

Put a version label and press the "Deploy" button

The deployment process will take a few minutes. During the process, the health status will transition from Ok to Info. If the deployment process succeeds, the health status will go back to Ok, and if the process fails, it’ll transition to Degraded. A definitive list of all the statuses can be found in the official docs.

Once the deployment process has finished, you will see the status transition back to Ok. You will also see the Application version change from Sample Application to v0.1. If it doesn’t, perform a page refresh.

Now if you try to visit the application using the environment domain name, you’ll be presented with a 403 Forbidden error message from NGINX. The status should transition to Severe at this point, indicating the application is failing to respond.

This happens because NGINX on AWS EB is configured to use the /var/app/current directory as its document_root by default, but in case of a Laravel application, it has to be the /var/app/current/public directory. To solve this issue, click on Configuration from the navigation pane on the left side of the page.

Click on "Configuration" from the navigation pane

You’ll land on the environment configuration page. Edit the Software configuration, and under the Container Options section, put /public as the Document root.

Scroll down to the bottom until you see the Environment properties section. Here, you can define various environment variables that you usually define inside a Laravel project’s .env file.

Every Laravel application requires an application key to run. So, create a new property named APP_KEY and put a 32-character-long random string as the Value. You can use the CodeIgniter Encryption Keys from the RandomKeygen website.
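
Alternatively, if you have the project running locally, Laravel can generate a valid key for you; copy the command’s output into the APP_KEY property:

php artisan key:generate --show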

Apart from this application key, the question board application is configured to send a confirmation email to new users. If you try to register a new account without configuring a proper mail server, the application will crash. For the sake of simplicity, I’ll use MailTrap in this article. Navigate to https://mailtrap.io/ and create a free account. From the default inbox, note down the following configuration parameters:

Note down the configuration variables from your MailTrap inbox

Now create individual environment parameters on AWS using the names and values noted from your MailTrap inbox.

Add the MailTrap configuration variables in the "Environment properties" section

Now all the emails sent from your application will end up in the MailTrap inbox. Finally, click on the Apply button and wait until the environment update process is finished.

Once the environment has been updated, try visiting the application once again; this time, you should be welcomed by the question board application itself.

Question Board Welcome

However, the trouble doesn’t end there. If you try to navigate to some other page, such as /login, you’ll see a big 404 Not Found error from NGINX. This problem also has to do with the NGINX configuration, but sadly, it cannot be fixed from the AWS EB management console. To solve this problem, you’ll have to extend the AWS EB platform by adding some additional code as a part of your application source code.

Extending an AWS Elastic Beanstalk Platform

One of the biggest advantages of AWS EB is that it strikes a nice balance between accessibility and flexibility. By default, AWS EB platforms are intelligent enough to install all the necessary dependencies for your project and try to run it with the default settings. However, for a Laravel project, there are additional post-deployment steps, such as running database migrations, seeding the database (if needed), and setting proper directory permissions. To do all these, you’ll have to extend the AWS EB platform.

AWS EB allows you to extend the default platform in four ways:

  • Buildfile and Procfile (allows you to build your application before deployment)
  • Platform hooks (custom scripts or executable files that AWS EB executes during various stages of deployment)
  • Configuration files (allows you to configure various aspects of the environment such as the document_root and lets you execute simple commands such as php artisan migrate)
  • Reverse proxy configuration (allows you to include custom configuration for NGINX or Apache web server)

In this article, you’ll learn about the last three. Let’s begin with the reverse proxy configuration. Create a directory .platform/nginx/conf.d/elasticbeanstalk on the root of the project directory. In this new directory, create a new file called .platform/nginx/conf.d/elasticbeanstalk/laravel.conf (or anything else that you like) with the following content:

add_header X-Frame-Options "SAMEORIGIN";
add_header X-Content-Type-Options "nosniff";

index index.php;

charset utf-8;

location / {
    try_files $uri $uri/ /index.php?$query_string;
}

location = /favicon.ico { access_log off; log_not_found off; }
location = /robots.txt  { access_log off; log_not_found off; }

error_page 404 /index.php;

location ~ /\.(?!well-known).* {
    deny all;
}

During deployment, AWS EB will copy all the content of the .platform/nginx directory to the /etc/nginx directory of the EC2 instance. This custom config file will end up in the /etc/nginx/conf.d/elasticbeanstalk directory, and NGINX will automatically include this in the server context.

Now that you have a custom reverse proxy configuration in place, let me introduce you to the configuration files. Create another directory called .ebextensions on the root of your project directory, and inside that directory, create a new file called deploy.config or whatever you like with the following content:

option_settings:
    -
        namespace: aws:elasticbeanstalk:application:environment
        option_name: COMPOSER_HOME
        value: /root

    -
        namespace: aws:elasticbeanstalk:container:php:phpini
        option_name: document_root
        value: /public
    -
        namespace: aws:elasticbeanstalk:container:php:phpini
        option_name: memory_limit
        value: 256M

container_commands:
    00_install_composer_dependencies:
        command: "sudo php -d memory_limit=-1 /usr/bin/composer.phar install --no-dev --no-interaction --prefer-dist --optimize-autoloader"
        cwd: "/var/app/staging"

Although the file ends with a .config extension, it is a YAML file. You can use the option_settings section to configure the software environment. The first entry in this section defines a new environment variable named COMPOSER_HOME with the value of /root, setting /root as Composer’s home directory. The second entry configures the document_root to be /public by default, eliminating the need to set it manually in the AWS EB console. The third entry sets the memory limit for the PHP process. It’s not strictly necessary, but I’ve added it for demonstration purposes.

Under the container_commands section, you can define shell commands that will execute during application deployment. The 00_install_composer_dependencies command reinstalls the Composer dependencies without the dev ones, effectively removing the dev packages that AWS EB installs by default and keeping only the ones required for production. Commands defined in this section are executed in alphabetical order. Prepending 00 to this command’s name will cause it to execute first. The next command will begin with 01, followed by 02, etc.

When you deploy a new version of your application, AWS EB extracts the source code inside the /var/app/staging directory and then goes through several stages to make the application ready for production. This is why any command you run during deployment has to be run considering /var/app/staging as the current working directory.

Once all the deployment stages have completed, AWS EB puts the application inside the /var/app/current directory. Every time you perform a deployment, all the content of the /var/app/current directory will be lost. This is one of the reasons to never use the default storage disk when deploying on AWS EB. Make sure you always use an S3 bucket to upload files. Otherwise, you’ll lose all the data upon new deployments. The use of Amazon S3 as file storage is well documented in the official Laravel docs.
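
As a rough sketch, pointing Laravel at S3 usually comes down to a handful of environment properties (the property names below come from Laravel 8’s default config/filesystems.php; the region and bucket values are placeholders):

  • FILESYSTEM_DRIVER – s3
  • AWS_ACCESS_KEY_ID – < your access key >
  • AWS_SECRET_ACCESS_KEY – < your secret key >
  • AWS_DEFAULT_REGION – < your bucket's region, e.g., ap-southeast-1 >
  • AWS_BUCKET – < your bucket name >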

Apart from the custom NGINX configuration and the environment changes, Laravel also requires the storage and bootstrap/cache directories to be writable on the server. You can make these directories writable using AWS EB platform hooks. According to the official AWS EB docs,

These are custom scripts and other executable files that you deploy as part of your application’s source code, and Elastic Beanstalk runs during various instance provisioning stages.

There are three kinds of hooks:

  • prebuild – These files run after the AWS EB platform engine downloads and extracts the application source code, before it sets up and configures the application and web server.
  • predeploy – These files run after the AWS EB platform engine sets up and configures the application and web server, before it deploys them to their final runtime location. The predeploy files run after running commands found in the container_commands section of any configuration file.
  • postdeploy – These files run after the AWS EB platform engine deploys the application and proxy server. This is the last deployment workflow step.

In this article, you’ll work with prebuild and postdeploy hooks. Go back to the application source code. Create a new directory called .platform/hooks/postdeploy. This directory will contain the scripts to be executed after the application has been deployed. Create a new post deploy script .platform/hooks/postdeploy/make_directories_writable.sh and put the following content in it:

#!/bin/sh

# Laravel requires some directories to be writable.

sudo chmod -R 777 storage/
sudo chmod -R 777 bootstrap/cache/

The two chmod lines make the aforementioned directories writable by setting their permissions to 777, which means giving read, write, and execute permission to the owner, group, and public. This is enough, but since you’re already at it, create one more file .platform/hooks/postdeploy/optimize_laravel.sh and put the following content in it:

#!/bin/bash

# Optimizing configuration loading, route loading and view loading
# https://laravel.com/docs/8.x/deployment#optimization

php artisan config:cache

php artisan route:cache

php artisan view:cache

These three artisan commands will cache Laravel configs, routes, and views, making your application a tad bit faster. Before you commit these files, make sure they all have executable permission. To do so, execute the chmod +x .platform/hooks/prebuild/*.sh && chmod +x .platform/hooks/postdeploy/*.sh command on the root of your project directory.

Commit all the changes; if you don’t, the git archive command will not pick up the updated code. Then zip up the updated source code by executing the git archive -v -o deploy.zip --format=zip HEAD command. Redeploy on AWS EB like you did before. Use a different version label, such as v0.2, this time. Wait until the new version has been deployed, and then you should be able to navigate to other pages.

Setting Up a Database on Amazon RDS

Now that you’ve successfully configured NGINX to play well with your project, it’s time to set up a database. In this section, you’ll learn about setting up resources manually in other AWS services, troubleshooting deployment issues, and creating custom security groups.

There are two ways to set up a database on AWS EB. You can either go into the environment configuration page and set up an integrated database or create a database from the RDS management console separately. The integrated database is a good way to get started, but it ties the database with your environment lifecycle. If you delete the environment at some point in the future, the database will go down with it.

In this article, I’ll show you the second way. If you want to learn more about using an integrated database, it’s already well documented in the official AWS EB docs with an example Laravel project.

To create a new database, open the list of services once again and choose RDS. It should be listed under the Database section. Alternatively, you can use the search bar. RDS stands for Relational Database Service, and it lets you run relational database servers (e.g., MySQL and PostgreSQL) managed by AWS. Once you land on the RDS management console, click the Create database button to start the database creation wizard.

Press the "Create database" button

On the wizard, select Standard create as the creation method. Select the database engine you would like to use. In this case, it’s MySQL. Leave the version as it is, and under the Templates section, select Free tier for now. Try out the other templates once you’re a bit more experienced with AWS.

Under the Settings section, put lob-master as the DB instance identifier, clearly referring to my AWS EB environment name. This is the name of this RDS instance. You can leave the username as admin. Set a secure password as the Master password and make note of it; you’ll need this soon.

Under the DB instance class section, keep it as db.t2.micro to stay within the limitations of the free tier. Under the Storage section, uncheck the Enable storage autoscaling option. Autoscaling is useful when you have lots of users coming to your application. Leave the Availability & durability section as it is. Under the Connectivity section, if you enable Public access, the database will be accessible from anywhere in the world. Leave this as disabled for now, but I’ll show you how to control access using security groups very soon.

Leave the Database authentication section as it is, but make sure Password authentication is the selected method. Expand the Additional configuration section. Write a name inside the Initial database name input box. This is the name of the database that you’ll read from and write to. I usually put ebdb, which means Elastic Beanstalk DB.

Do not get confused by DB instance identifier and Initial database name. DB instance identifier is the name of this RDS instance, and Initial database name is the name of the database that AWS RDS will create inside this server.

Uncheck Enable automated backup for now. Automated backups are a must-have for real-world production applications. For this one, you won’t need it. Leave the remaining options unchanged and hit the Create database button. You’ll see a blue creating database notification at the top of the page.

You'll see a "Creating database lob-master" notification

Creating a database can take up to 10 minutes. In the meantime, let’s go back to AWS EB and update the environment variables to enable database connection. From the lob-master environment dashboard, go to Configuration and edit the Software configuration. Scroll down to the Environment properties section and add the following environment variables:

  • DB_CONNECTION – mysql
  • DB_HOST – < endpoint from RDS >
  • DB_PORT – 3306
  • DB_DATABASE – ebdb
  • DB_USERNAME – admin
  • DB_PASSWORD – < master password from RDS >

Use the master password you set during the database creation process as the value of the DB_PASSWORD property. You’ll have to fetch the value of the DB_HOST property from Amazon RDS. Go back to the Amazon RDS management console. If the database creation process is done, you’ll see the database instance status set as Available on the list of databases. Go to instance details by clicking on the lob-master name from the list.

The database endpoint

Under the Connectivity & port section, you’ll find the Endpoint. This is the value for the DB_HOST environment property. Copy the entire string and go back to AWS EB. Paste the copied string as the value of the DB_HOST environment property and hit the Apply button.

Set database-related environment properties

The environment will take a few minutes to update. Meanwhile, go back to the .ebextensions/deploy.config file on the root of your project directory and update its content as follows:

option_settings:
    -
        namespace: aws:elasticbeanstalk:application:environment
        option_name: COMPOSER_HOME
        value: /root

    -
        namespace: aws:elasticbeanstalk:container:php:phpini
        option_name: document_root
        value: /public

container_commands:
    00_install_composer_dependencies:
        command: "sudo php -d memory_limit=-1 /usr/bin/composer.phar install --no-dev --no-interaction --prefer-dist --optimize-autoloader"
        cwd: "/var/app/staging"

    02_run_migrations:
        command: "php artisan migrate --force"
        cwd: "/var/app/staging"
        leader_only: true

    03_run_admin_user_seeder:
        command: "php artisan db:seed --class=AdminUserSeeder --force"
        cwd: "/var/app/staging"
        leader_only: true

    04_run_categories_seeder:
        command: "php artisan db:seed --class=CategoriesSeeder --force"
        cwd: "/var/app/staging"
        leader_only: true

You’ve added three new container commands for running database migrations, as well as seeding the admin user and the default categories. Commit all the new changes and zip up the updated source code by executing the git archive -v -o deploy.zip --format=zip HEAD command.

Redeploy the newly created code archive with a version label of v0.3 or something similar. After a few minutes, the deployment should fail, but this is completely intentional. In the next subsection, you’ll learn about debugging deployment issues like this one.

Debugging Deployment Issues

Deployment failures are common, so learning how to troubleshoot is very important. Whenever one of your deployments fails, don’t panic. First, try to think of the changes you made to your code or environment that could cause this failure.

In this case, you’ve added three new container commands and some database-related environment properties. The environment properties shouldn’t cause any problems (at least they didn’t until you tried to deploy the updated code), so the issue is most likely in the code.

Also, have a look at the Recent events on the environment dashboard and try to find something that may indicate the cause of the problem.

Take a look at the "Recent events" list

The fourth entry says Unsuccessful command execution …, which leads me to assume that one of the recently added container commands may have failed. However, you need to obtain more information. To do so, click on Logs from the navigation pane on the left side of the page. On the logs page, click the Request Logs dropdown button on the top right corner and select Full Logs.

After a few seconds, a new log file will be ready for download. Hit the Download link and wait until your web browser finishes downloading the small zip archive. Extract the archive, and inside, you’ll find a copy of the /var/log directory from the EC2 instance.

Content of the log bundle archive

You will find all sorts of important log files in this archive. Open the cfn-init.log file in a text editor and scroll down to the bottom. You should see what went wrong.

2021-07-23 08:26:47,240 [INFO] -----------------------Build complete-----------------------
2021-07-23 08:26:53,317 [INFO] -----------------------Starting build-----------------------
2021-07-23 08:26:53,324 [INFO] Running configSets: Infra-EmbeddedPostBuild
2021-07-23 08:26:53,326 [INFO] Running configSet Infra-EmbeddedPostBuild
2021-07-23 08:26:53,329 [INFO] Running config postbuild_0_laravel_on_beanstalk
2021-07-23 08:26:55,207 [INFO] Command 00_install_composer_dependencies succeeded
2021-07-23 08:28:55,492 [ERROR] Command 02_run_migrations (php artisan migrate --force) failed
2021-07-23 08:28:55,492 [ERROR] Error encountered during build of postbuild_0_laravel_on_beanstalk: Command 02_run_migrations failed
Traceback (most recent call last):
  File "/usr/lib/python3.7/site-packages/cfnbootstrap/construction.py", line 573, in run_config
    CloudFormationCarpenter(config, self._auth_config).build(worklog)
  File "/usr/lib/python3.7/site-packages/cfnbootstrap/construction.py", line 273, in build
    self._config.commands)
  File "/usr/lib/python3.7/site-packages/cfnbootstrap/command_tool.py", line 127, in apply
    raise ToolError(u"Command %s failed" % name)
cfnbootstrap.construction_errors.ToolError: Command 02_run_migrations failed
2021-07-23 08:28:55,495 [ERROR] -----------------------BUILD FAILED!------------------------
2021-07-23 08:28:55,495 [ERROR] Unhandled exception during build: Command 02_run_migrations failed
Traceback (most recent call last):
  File "/opt/aws/bin/cfn-init", line 176, in <module>
    worklog.build(metadata, configSets)
  File "/usr/lib/python3.7/site-packages/cfnbootstrap/construction.py", line 135, in build
    Contractor(metadata).build(configSets, self)
  File "/usr/lib/python3.7/site-packages/cfnbootstrap/construction.py", line 561, in build
    self.run_config(config, worklog)
  File "/usr/lib/python3.7/site-packages/cfnbootstrap/construction.py", line 573, in run_config
    CloudFormationCarpenter(config, self._auth_config).build(worklog)
  File "/usr/lib/python3.7/site-packages/cfnbootstrap/construction.py", line 273, in build
    self._config.commands)
  File "/usr/lib/python3.7/site-packages/cfnbootstrap/command_tool.py", line 127, in apply
    raise ToolError(u"Command %s failed" % name)
cfnbootstrap.construction_errors.ToolError: Command 02_run_migrations failed

Now you’re sure that the 02_run_migrations container command has failed, but it’s still unclear what caused it to fail. Open the cfn-init-cmd.log file. Unlike the previous file that only contains the name of the command that has failed, this one contains the output from the commands. If you scroll to the bottom, you’ll see the output from the 02_run_migrations command.

2021-07-23 08:26:55,221 P26787 [INFO] ============================================================
2021-07-23 08:26:55,221 P26787 [INFO] Command 02_run_migrations
2021-07-23 08:28:55,491 P26787 [INFO] -----------------------Command Output-----------------------
2021-07-23 08:28:55,491 P26787 [INFO]  
2021-07-23 08:28:55,491 P26787 [INFO]  In Connection.php line 692:
2021-07-23 08:28:55,491 P26787 [INFO]                                                                                 
2021-07-23 08:28:55,491 P26787 [INFO]    SQLSTATE[HY000] [2002] Connection timed out (SQL: select * from information  
2021-07-23 08:28:55,491 P26787 [INFO]    _schema.tables where table_schema = ebdb and table_name = migrations and ta  
2021-07-23 08:28:55,491 P26787 [INFO]    ble_type = 'BASE TABLE')                                                     
2021-07-23 08:28:55,491 P26787 [INFO]                                                                                 
2021-07-23 08:28:55,491 P26787 [INFO]  
2021-07-23 08:28:55,491 P26787 [INFO]  In Connector.php line 70:
2021-07-23 08:28:55,491 P26787 [INFO]                                                 
2021-07-23 08:28:55,491 P26787 [INFO]    SQLSTATE[HY000] [2002] Connection timed out  
2021-07-23 08:28:55,492 P26787 [INFO]                                                 
2021-07-23 08:28:55,492 P26787 [INFO]  
2021-07-23 08:28:55,492 P26787 [INFO] ------------------------------------------------------------
2021-07-23 08:28:55,492 P26787 [ERROR] Exited with error code 1

So, the 02_run_migrations command has failed because your EC2 instance couldn’t connect to the RDS instance for some reason. At this point, people often think that maybe there’s something wrong with the environment properties. However, even if all your environment properties are correct, the deployment will still fail. The reason is that the AWS RDS instance you created doesn’t allow incoming traffic on port 3306.

Understanding Security Groups

While working with AWS resources, whenever you encounter a Connection timed out error, chances are that you haven’t configured the security groups properly. Security groups are like firewalls. You can set up inbound and outbound rules to control traffic coming into an instance or traffic going out from an instance. Here, an instance can be anything from an Amazon EC2 instance to an Amazon RDS or Amazon ElastiCache instance.

To create a new security group, you’ll have to navigate to the Amazon EC2 Management Console. You can do so by either using the search bar or the services menu. On the management console, click on Security Groups in the Resources section or on the navigation pane on the left side of the page.

Click on "Security groups"

On the security groups page, you’ll see all the currently available security groups.

List of security groups

As you can see, AWS EB has created a security group for the lob-master environment. If you go into the details of this security group, you’ll see it allows HTTP traffic on port 80. This is why you can access your application running inside the EC2 instance. You’ll have to create a similar security group for the AWS RDS instance.

To do so, click the Create security group button on the top-right corner of the security groups page. As the security group name, input rds-lob-master, where rds stands for Relational Database Service, and lob-master is the instance name. This is not mandatory; it’s just a naming convention that I personally follow. Put a description like Allows the lob-master EC2 instance to access the lob-master RDS instance and leave the VPC unchanged. VPCs are an advanced topic and out of the scope of this article. I’m assuming that you haven’t created any custom VPCs and are working with the default one.

In the Inbound rules section, press the Add rule button to create a new inbound rule. Select MYSQL/Aurora from the Type dropdown menu. The Protocol and Port range will be filled in automatically. Select Custom from the Source dropdown menu. There will be a small input box with a magnifying glass icon. Search for lob-master and select the one that has awseb-LONG_RANDOM_STRING in the name.

Search for the lob-master security group

Scroll down to the bottom and hit the Create security group button. What you just did is create a new security group that allows TCP traffic on port 3306 from any instance that has the lob-master security group assigned, allowing traffic to flow from the EC2 instance to the RDS instance freely. So, once you assign the newly created rds-lob-master security group to the RDS instance, your application will be able to connect without any issue.
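
If you prefer the command line, an equivalent sketch using the AWS CLI might look like this (assuming the default VPC; sg-XXXXXXXX is a placeholder for the ID of the environment's awseb security group):

# Create the security group for the RDS instance
aws ec2 create-security-group \
    --group-name rds-lob-master \
    --description "Allows the lob-master EC2 instance to access the lob-master RDS instance"

# Allow inbound MySQL traffic (TCP 3306) from the EC2 instance's security group
aws ec2 authorize-security-group-ingress \
    --group-name rds-lob-master \
    --protocol tcp \
    --port 3306 \
    --source-group sg-XXXXXXXX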

Navigate to the Amazon RDS management console, and from the list of instances, go into the details of the lob-master instance. By default, you’ll see it has the default security group assigned to it.

The default security group is active

To assign the new security group, click the Modify button at the top-right corner. Scroll down to the Connectivity section. Now select rds-lob-master security group from the Security group dropdown menu and deselect the default one.

Select the "rds-lob-master" security group from the list

Leave everything else unchanged. Scroll down to the bottom and click on the Continue button. For the next step, under the Scheduling of modification section, select Apply Immediately and hit the Modify DB instance button. It’ll take a few moments to modify. Once it’s done, you’ll see that the default security groups are now gone, and the rds-lob-master security group is active. The database server is now ready to take connections.

Retrying Failed Deployments

Go back to AWS EB, and on the lob-master environment dashboard, click on Application versions from the navigation pane on the left side of the page. You’ll be presented with a list of all the available versions of the application.

List of all application versions

Select the latest one, and from the Actions dropdown menu at the top-right corner, select Deploy. On the Deploy Application Version modal, make sure lob-master is selected under Environment and hit the Deploy button. Go back to the lob-master environment dashboard and wait with your fingers crossed. If you’ve set the environment properties properly, the deployment should be successful this time.

Testing Out The Database Connection

If the deployment succeeds, it means that the application can access the database now, and you should be able to log in as the admin user. To do so, visit your environment URL and go to the /login route. The admin user credentials are as follows:

  • email – farhan@laravel-on-beanstalk.site
  • password – password

Try logging in, and you should land on the dashboard.

Question Board Dashboard

If you’ve come this far, congratulations on making your deployment finally functional. However, there is still a lot to do.

Using Amazon ElastiCache As the Cache and Queue Driver

Now that you have a working database set up for your application, it’s time to set up a functional cache instance using Amazon ElastiCache. Amazon ElastiCache is a service similar to Amazon RDS, but it lets you run in-memory database servers, such as memcached or Redis, managed by AWS. Before creating a new Amazon ElastiCache instance, you’ll have to create a new security group.

Creating a New Security Group

To create a new security group, navigate to the EC2 management console. Go to security groups and click on the Create security group button like you did before. Put elasticache-lob-master as the Security group name and Allows lob-master EC2 instance to connect to the lob-master ElastiCache instance as the description.

Under the Inbound rules section, choose Custom TCP from the Type dropdown menu. Put 6379 in the Port range input box because the Redis server responds on port 6379 by default. Choose Custom from the Source dropdown menu and search for lob-master in the search bar. Select the one that has awseb-LONG_RANDOM_STRING in the name.

Search for the "lob-master" security group

Make sure to avoid selecting the rds-lob-master security group as the source by accident. Once the inbound rule is in place, scroll down to the bottom and hit the Create security group button.

Creating a New Amazon ElastiCache Redis Instance

Now go to the Amazon ElastiCache management console using the search bar or from the services list. Once you’re on the management console, click the Get Started Now button to start the cluster creation wizard.

Select Redis as the Cluster engine and make sure Cluster Mode Enabled is unchecked. Leave the Location section unchanged, and under the Redis settings section, put lob-master as the Name. Leave the Description, Engine version compatibility, Port, and Parameter group unchanged. Change the Node type to cache.t2.micro to stay within the limits of the free tier.

Select "cache.t2.micro" in node type

Change the Number of replicas to 0, which will disable the Multi-AZ option automatically. Under Advanced Redis settings, select Create new from the Subnet group dropdown menu. Put default-vpc-subnet-group as the Name and Default VPC Subnet Group as the description. Select all three subnets from the Subnets list. Like VPCs, Subnets and Subnet Groups are advanced topics and out of the scope of this article. I’m assuming that you haven’t created any custom VPCs and are working with the default one.

Under the Security section, change the Security groups by using the little pencil icon. Select the elasticache-lob-master security group and deselect the default one. Leave the encryption settings disabled. The security group will already prevent unauthorized traffic from reaching the instance. Using encryption can make things even more secure, but for the sake of simplicity, I’ll let it pass for now.

ElastiCache security settings

Leave the Logs and Import data to cluster sections unchanged. Under the Backup section, uncheck Enable automatic backups because, just like the RDS instance, you won’t need backups for this project. Finally, hit the Create button at the bottom-right corner.

Connecting to Amazon ElastiCache from AWS Elastic Beanstalk

The AWS ElastiCache instance creation process will take a few minutes. Meanwhile, navigate back to the AWS Elastic Beanstalk lob-master environment dashboard and add the following environment properties like you did before.

  • REDIS_HOST – < endpoint from ElastiCache >
  • REDIS_PORT – 6379

You’ll have to extract the value of REDIS_HOST from Amazon ElastiCache once your instance is up and running. Go back to the Amazon ElastiCache management console, and from the list of clusters, click on the lob-master name. Copy the entire string under Endpoint and paste it as the value of REDIS_HOST on AWS EB.

This is enough information to connect to the Redis cluster from your application, but you still have to configure the application to use Redis as the cache and queue driver. To do so, add the following environment properties to the AWS EB environment (the sketch after the list shows where these values land):

  • CACHE_DRIVER – redis
  • QUEUE_CONNECTION – redis
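For orientation, here's a minimal sketch of the stock Laravel configuration entries that pick these properties up; these ship with Laravel by default and are shown only to make the wiring explicit:

// config/database.php - the Redis connection reads the ElastiCache endpoint
'redis' => [
    'client' => env('REDIS_CLIENT', 'phpredis'),

    'default' => [
        'host' => env('REDIS_HOST', '127.0.0.1'),
        'password' => env('REDIS_PASSWORD'),
        'port' => env('REDIS_PORT', '6379'),
        'database' => env('REDIS_DB', '0'),
    ],
],

// config/cache.php - the default cache store
'default' => env('CACHE_DRIVER', 'file'),

// config/queue.php - the default queue connection
'default' => env('QUEUE_CONNECTION', 'sync'),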

Now your application should use Redis as the cache and queue driver. Initially, I wanted to use Amazon SQS (Simple Queue Service) to run queued jobs in Laravel. However, I quickly realized that Laravel Horizon is a lot better when it comes to monitoring and troubleshooting Laravel queues. The official Laravel docs recommend Supervisor for running Laravel Horizon, so in the next subsection, I'll show you how to install and configure Supervisor using AWS EB platform hooks.
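As an aside, Horizon itself reads its worker settings from config/horizon.php. A typical production block looks roughly like this (values illustrative, not necessarily the reference project's exact settings):

// config/horizon.php (excerpt)
'environments' => [
    'production' => [
        'supervisor-1' => [
            'connection' => 'redis',  // matches the QUEUE_CONNECTION set above
            'queue' => ['default'],
            'balance' => 'simple',
            'processes' => 3,
            'tries' => 1,
        ],
    ],
],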

Installing and Configuring Supervisor to Run Horizon

You’ve already worked with configuration files, reverse proxy configuration, and a postdeploy hook in a previous section. In this one, you’ll work with a prebuild hook for the first time and add some new scripts to the postdeploy hook.

Start by creating a new directory .platform/hooks/prebuild and a new file .platform/hooks/prebuild/install_supervisor.sh with the following content:

#!/bin/sh

# Installs supervisor from the EPEL repository
# http://supervisord.org/installing.html#installing-a-distribution-package
sudo amazon-linux-extras enable epel
sudo yum install -y epel-release
sudo yum -y update
sudo yum -y install supervisor

# Start supervisord now and on every boot
sudo systemctl start supervisord
sudo systemctl enable supervisord

# Put the Horizon program definition in place and load it
sudo cp .platform/files/supervisor.ini /etc/supervisord.d/laravel.ini
sudo supervisorctl reread
sudo supervisorctl update

This is a simple shell script that installs Supervisor on the EC2 instance and sets up the supervisord service to run on startup. The script also copies a file from .platform/files/supervisor.ini to the /etc/supervisord.d directory. This file contains the necessary configuration for starting Laravel Horizon.

Create a new directory .platform/files and a new file .platform/files/supervisor.ini with the following content:

[program:horizon]
process_name=%(program_name)s
command=php /var/app/current/artisan horizon
autostart=true
autorestart=true
user=root
redirect_stderr=true
stdout_logfile=/var/log/horizon.log
stopwaitsecs=3600

This configuration file is almost identical to the one from the official Laravel docs. Finally, create a script called .platform/hooks/postdeploy/restart_supervisorctl.sh with the following content:

#!/bin/bash

# Restarts all supervisor workers

sudo supervisorctl restart all

The script makes Supervisor restart all of its processes after each deployment. This is pretty much all you need to make Laravel Horizon work. Before you commit these files, make sure they all have executable permissions. To do so, execute the chmod +x .platform/hooks/prebuild/*.sh && chmod +x .platform/hooks/postdeploy/*.sh command at the root of your project directory.

Now commit all the changes and zip up the updated source code by executing the git archive -v -o deploy.zip --format=zip HEAD command. Deploy the updated source code on AWS EB with a newer version label, such as v0.4, and wait until the deployment finishes.

Testing Out Laravel Horizon

To test whether Laravel Horizon is working correctly, log into the application as the admin user and navigate to the /horizon route. If everything works fine, you’ll see Laravel Horizon in Active status.

Laravel Horizon activated

Laravel Horizon is configured to only allow the admin user to see this dashboard, so if you log in with some other account, you'll see a 403 Forbidden error. This kind of restriction is typically enforced through Horizon's authorization gate; a minimal sketch of what it might look like, assuming a hypothetical is_admin flag on the user model (the reference project may do this differently):
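// app/Providers/HorizonServiceProvider.php

use Illuminate\Support\Facades\Gate;

protected function gate(): void
{
    Gate::define('viewHorizon', function ($user) {
        // Hypothetical admin flag; adapt to however the app marks admins.
        return (bool) $user->is_admin;
    });
}

To fire a new job, open a private/incognito window and visit the /register route on the application. Register a new account, and you'll see a new VerifyEmailQueued job show up on Laravel Horizon.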

Laravel Horizon job details

This is the confirmation mail for new users. If you set up MailTrap properly in one of the previous sections, the mail should arrive in your MailTrap inbox.
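Given the VerifyEmailQueued name, the reference project presumably queues Laravel's built-in verification notification. A minimal sketch of that common pattern (not necessarily the project's exact code):

// app/Notifications/VerifyEmailQueued.php

namespace App\Notifications;

use Illuminate\Auth\Notifications\VerifyEmail;
use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;

// Queues the stock email-verification mail instead of sending it inline.
class VerifyEmailQueued extends VerifyEmail implements ShouldQueue
{
    use Queueable;
}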

Assigning a Domain Name to Your Elastic Beanstalk Application

Assigning a domain name to your Elastic Beanstalk application is one of the tasks that confuses a lot of beginners. The official AWS docs usually talk about using Amazon Route 53 for domains, but in this article, I'll show you how to use a third-party domain provider, such as NameCheap, with your AWS EB application.

I’ve already bought the laravel-on-beanstalk.site domain on NameCheap for this article. If you use some other provider, such as GoDaddy or CloudFlare, the process should be almost the same.

Log into your domain provider account and go into DNS configuration. In NameCheap, it’s called Advanced DNS. In there, remove any previous DNS records and add the following ones:

DNS records

As you can see, I've added one ALIAS record for the naked laravel-on-beanstalk.site domain and one CNAME record for the www subdomain, both pointing to my AWS EB application domain address. Some providers, such as CloudFlare, don't have ALIAS records; instead, they support CNAME flattening. In those cases, both records will be CNAME records.

This is one of the ways that you can configure your domain. You can also define only one CNAME record for the www subdomain and redirect the naked domain to it. As long as you know what you’re doing, follow whatever configuration you like.

Usually, providers say that it may take a few hours for the DNS settings to propagate, but I've found that they often work almost instantly. You should now be able to access your AWS EB application using your custom domain, but only over plain HTTP.

Provisioning a Free SSL Certificate From Let’s Encrypt Using Certbot

Assuming you've successfully set up your domain, go back to your application source code and create a new file .platform/hooks/prebuild/install_certbot.sh (a prebuild hook, like the Supervisor installer) with the following content:

#!/bin/sh

# Installs certbot from the EPEL repository
# https://certbot.eff.org/instructions
sudo amazon-linux-extras enable epel
sudo yum install -y epel-release
sudo yum -y update
sudo yum install -y certbot python2-certbot-nginx

This script only installs the certbot program from the Electronic Frontier Foundation, which makes obtaining free SSL certificates very easy. Now create another file .platform/hooks/postdeploy/get_ssl_certificate.sh with the following content:

#!/bin/sh

sudo certbot \
    -n \
    --nginx \
    --agree-tos \
    -d $(/opt/elasticbeanstalk/bin/get-config environment -k CERTBOT_DOMAINS) \
    --email $(/opt/elasticbeanstalk/bin/get-config environment -k CERTBOT_EMAIL)

This is a single shell command for the certbot program that automatically provisions an SSL certificate for the domain names passed after the -d option. I don't like hard-coding domain names and email addresses in scripts like this. Thankfully, Amazon provides a very nifty program called get-config that lets you extract environment properties from within platform hooks. This means you can define the domain names, along with the email address where you'll receive notifications about the SSL certificate, as AWS EB environment properties.

To do so, navigate to the lob-master environment dashboard on AWS EB and edit the Software configuration. Add the following environment properties under the Environment properties section and hit Apply.

Certbot domains and email environment properties

CERTBOT_DOMAINS has to be a comma-separated string with no spaces in between, for example laravel-on-beanstalk.site,www.laravel-on-beanstalk.site. If you have only one domain name, just write that. CERTBOT_EMAIL can be any email address that you check regularly.

This is enough to obtain a new SSL certificate, but certificates issued by Let’s Encrypt are only valid for 90 days. You’ll have to set up a cron job for renewing the certificate. Create one last file .platform/hooks/postdeploy/auto_renew_ssl_certificate.sh with the following content:

#!/bin/sh

# Entries in /etc/cron.d need a user field, hence "root" before the command.
echo "0 0 1 * * root certbot renew --no-self-upgrade" \
    | sudo tee /etc/cron.d/renew_ssl_cert_cron

This creates a new cron job under /etc/cron.d/renew_ssl_cert_cron that executes the certbot renew --no-self-upgrade command as root on the first day of every month. The certificates are valid for much longer than one month, but renewing them early causes no harm. If you're not comfortable writing cron expressions, the crontab guru website can help.

There is one more thing to do before you deploy. Recall that the security group assigned to the lob-master EC2 instance allows traffic on port 80, the default port for HTTP. HTTPS traffic, however, uses port 443. To allow HTTPS traffic into your EC2 instance, go to the Amazon EC2 management console and then to Security Groups like you did before.

From the list of security groups, select the lob-master security group and select Edit inbound rules from the Actions dropdown menu at the top-right corner. Press the Add rule button to add a new rule. Select HTTPS from the Type dropdown menu and Anywhere-IPv4 from the Source dropdown menu. Finally, hit the Save rules button.

As before, make sure all the new hook scripts have executable permissions by executing the chmod +x .platform/hooks/prebuild/*.sh && chmod +x .platform/hooks/postdeploy/*.sh command at the root of your project directory.

Now commit all the changes and zip up the updated source code by executing the git archive -v -o deploy.zip --format=zip HEAD command. Deploy the updated source code on AWS EB with a newer version label, such as v0.5, and wait until the deployment finishes. Try visiting your custom domain, and you should now have a valid SSL certificate and HTTPS.

Automating Deployments Using GitHub Actions

Manually deploying the application every time you make a change gets tedious. One way to automate the process is with GitHub Actions. In this section, you'll set up a very simple workflow that deploys the application automatically whenever you push new code to the master branch.

The first step in setting up this workflow is letting GitHub Actions access your AWS account on your behalf. To do so, you'll create a new AWS user with programmatic access to AWS EB and use it to perform the deployments.

Navigate to the AWS Identity and Access Management (IAM) service using the search bar or from the services menu. Click on Users on the navigation pane on the left side of the page. On the Users page, click the Add users button at the top-right corner. Put lob-github-actions as the User name in the Set user details section and select Programmatic access on the Select AWS access type section. Press the Next: Permissions button.

For the second step, select the Attach existing policies directly option. From the list of policies, search for AWSElasticBeanstalk and select AdministratorAccess-AWSElasticBeanstalk from the list.

Set permissions for the new user

You don’t have to do anything for steps 3 and 4. For step 5, you’ll receive an Access key ID and Secret access key from Amazon. Make note of them or press the Download .csv button.

Access Key ID and Secret access key from AWS

I hope that you’ve forked the reference project repository on your GitHub profile by now. Navigate to this fork and go to the Settings tab. Go to Secrets from the navigation pane on the left side of the page and use the New repository secret button to create two secrets: one for the access key ID and another for the secret access key. Name them AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY, respectively, with the values that you got from AWS IAM.

GitHub Repository Secrets

GitHub Actions workflows consist of jobs, and these jobs usually have multiple steps. To create a new workflow for your repository, go back to your application source code and create a new file .github/workflows/deploy.yml (workflow files must live inside the .github/workflows directory) with the following content:

name: Laravel Deploy

on:
  push:
    branches: [ master ]

jobs:
  deploy:
    runs-on: ubuntu-latest

    steps:

      - name: Checkout source code
        uses: actions/checkout@v2

      - name: Generate deployment package
        run: zip -r deploy.zip . -x '*.git*'

      - name: Deploy to AWS Elastic Beanstalk
        uses: einaregilsson/beanstalk-deploy@v17
        with:
          aws_access_key: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws_secret_key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          application_name: laravel-on-beanstalk
          environment_name: lob-master
          version_label: ${{ github.sha }}
          region: ap-southeast-1
          deployment_package: deploy.zip

This is a very simple workflow named Laravel Deploy. The workflow will be triggered whenever you push code to the master branch. There is only one job called deploy, and it has three steps.

The first step checks out your source code, making a copy of it available to the workflow. The second step creates a new archive, deploy.zip, by compressing the application source code. Finally, the third step deploys the generated archive to AWS EB. You can access repository secrets using the secrets.<secret name> syntax, and the github.sha value refers to the ID of the commit that triggered the workflow. Make sure to replace the region value with the one you're using. Commit this file and push it to the master branch.

On the GitHub repository page, switch to the Actions tab to see the running workflow. It may take around five minutes to finish. Once it's done, go to the lob-master environment dashboard on AWS EB, and you'll see that a new version has been deployed with a version label identical to the last commit ID.

Conclusion

That’s a wrap on this article. I’ve covered almost all the aspects of deploying a Laravel project on AWS Elastic Beanstalk, except for log aggregation. I’ve already said that every time you deploy a new version of your application, all the files inside the storage directory will be lost, including the log files and user uploads. Although it’s possible to stream Laravel logs to Amazon CloudWatch, people often use a third-party tool, such as Papertrail or the ELK Stack. If you want to learn more about log streaming to Amazon CloudWatch, check out the completed branch of the reference project. It contains the necessary configuration inside the .ebextensions/stream_logs_to_cloudwatch.config file. The branch also contains additional platform hooks for installing Node.js and compiling the static assets.

Thank you for being patient throughout the entire article. I hope that you’ve gained a solid understanding of deploying web applications on AWS Elastic Beanstalk. The techniques shown in this article can also be applied to other frameworks or languages. If you have any confusion regarding anything shown in this article, feel free to connect with me on Twitter, LinkedIn or GitHub. Till the next one, stay safe and keep on learning.


Laravel News Links

API Integrations using Saloon in Laravel

https://laravelnews.imgix.net/images/saloon-featured.png?ixlib=php-3.3.1

We have all been there: we want to integrate with a 3rd party API in Laravel, and we ask ourselves, "How should I do this?". When it comes to API integrations, I am no stranger, but each time I still wonder what the best way is going to be. Sam Carré built a package early in 2022 called Saloon that can make our API integrations amazing. This article, however, is going to be very different; this is going to be a walkthrough of how you can use it to build an integration, from scratch.

Like all great things, it starts with a laravel new and goes from there, so let's get started. When it comes to installing Laravel, you can use the Laravel installer or Composer; that part is up to you. I would recommend the installer if you can, though, as it provides easy options to do more than just create a project. Create a new project and open it in your code editor of choice. Once we are there, we can get started.

What are we going to build? I am glad you asked! We are going to build an integration with the GitHub API to get a list of workflows available for a repo. This could be super helpful if you, like me, spend a lot of time on the command line. You are working on an app, you push changes to a branch or create a PR, and it goes through a workflow that could be running tests, static analysis, or one of many other things. Knowing the status of this workflow sometimes has a huge impact on what you do next. Is that feature complete? Were there issues with our workflow run? Are our tests or static analysis passing? Usually, you would wait and check the repo on GitHub to see the status. This integration will let you run an artisan command, get a list of available workflows for a repo, and trigger a new workflow run.

So by now, Composer should have done its thing and installed the perfect starting point, a Laravel application. Next we need to install Saloon, but we want to make sure that we install the Laravel version, so run the following inside your terminal:

composer require sammyjo20/saloon-laravel

Just like that, we are a step closer to easier integrations already. If you have any issues at this stage, make sure that you check both the Laravel and PHP versions you are using, as Saloon requires at least Laravel 8 and PHP 8!

So, now that we have Saloon installed, we need to create a new class. In Saloon's terminology these are "Connectors", and all a connector does is provide an object-focused way to say: this API is connected through this class. There is a handy artisan command for creating these, so run the following to create a GitHub connector:

php artisan saloon:connector GitHub GitHubConnector

This command takes two arguments: the first is the integration you are creating, and the second is the name of the connector you want to create. This means you can create multiple connectors for an integration, which gives you a lot of control to connect in many different ways should you need to.

This will have created a new class for you under app/Http/Integrations/GitHub/GitHubConnector.php. Let's take a look at it for a moment and understand what is going on.

The first thing we see is that our connector extends SaloonConnector, which is what lets our connector work without a lot of boilerplate code. Then we inherit a trait called AcceptsJson. If we look at the Saloon documentation, we learn that this is a plugin; it adds a header to our requests telling the 3rd party API that we want JSON responses. The next thing we see is a method for defining the base URL for our connector, so let's add ours in:

public function defineBaseUrl(): string
{
    return 'https://api.github.com';
}

Nice and clean, but we could take this a little further so we have fewer loose strings hanging around in our application. Let's look at how we can do that. Inside your config/services.php file, add a new service record:

'github' => [
    'url' => env('GITHUB_API_URL', 'https://api.github.com'),
],

This allows us to override the URL in different environments, giving us a better and more testable solution. Locally, we could even mock the GitHub API using their OpenAPI specification and test against that to ensure it works. However, this tutorial is about Saloon, so I digress… Now let's refactor our base URL method to use the configuration:

public function defineBaseUrl(): string
{
    return (string) config('services.github.url');
}

As you can see, we are now fetching the newly added record from our configuration and casting it to a string for type safety; config() returns a mixed result, so we want to be strict here if we can.

Next we have default headers and default config. Right now I am not going to worry about the default headers, as we will approach auth on its own in a little while. The configuration, though, is where we can define the Guzzle options for our integration, as Saloon uses Guzzle under the hood. For now, let's set the timeout and move on, but feel free to spend some time configuring this as you see fit:

public function defaultConfig(): array
{
    return [
        'timeout' => 30,
    ];
}
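For reference, the headers hook has the same shape. Here's a sketch of what defaultHeaders could hold if you did want to pin a header at the connector level (GitHub's versioned media type is used as an illustrative example; we'll handle auth per request later):

public function defaultHeaders(): array
{
    return [
        // Ask GitHub explicitly for the v3 JSON representation.
        'Accept' => 'application/vnd.github.v3+json',
    ];
}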

We now have our connector configured as we need it for now; we can come back later if we find something we need to add. The next step is to start thinking about the requests we want to send. If we look at the API documentation for the GitHub Actions API, we have many options. We will start with listing the workflows for a particular repository: /repos/{owner}/{repo}/actions/workflows. Run the following artisan command to create a new request:

php artisan saloon:request GitHub ListRepositoryWorkflowsRequest

Again, the first argument is the integration, and the second argument is the name of the request we want to create. We need to name the integration the request belongs to so it lives in the right place, and then give the request a name. I called mine ListRepositoryWorkflowsRequest because I like a descriptive naming approach; feel free to adapt this to how you like to name things, as there is no real wrong way here. This will have created a new file for us to look at: app/Http/Integrations/GitHub/Requests/ListRepositoryWorkflowsRequest.php. Let's have a look at it now.

Again, we are extending a library class here, this time SaloonRequest, which is to be expected. We then have a connector property and a method property. We can change the method if we need to, but the default GET is what we need right now. Then we have a method for defining the endpoint. Refactor your request class to look like the example below:

class ListRepositoryWorkflowsRequest extends SaloonRequest
{
    protected ?string $connector = GitHubConnector::class;

    protected ?string $method = Saloon::GET;

    public function __construct(
        public string $owner,
        public string $repo,
    ) {}

    public function defineEndpoint(): string
    {
        return "/repos/{$this->owner}/{$this->repo}/actions/workflows";
    }
}

What we have done is add a constructor that accepts the owner and repo as arguments, which we can then use within our defineEndpoint method. We have also set the connector to the GitHubConnector we created earlier. Now that we have a request we know we can send, we can take a small step away from the integration and think about the console command instead.

If you haven’t created a console command in Laravel before, make sure you check out the documentation which is very good. Run the following artisan command to create the first command for this integration:

php artisan make:command GitHub/ListRepositoryWorkflows

This will have created the following file: app/Console/Commands/GitHub/ListRepositoryWorkflows.php. We can now start working with our command to make it send the request and get the data we care about. The first thing I always do when it comes to console commands is think about the signature. How do I want this to be called? It needs to explain what it is doing, but it also needs to be memorable. I am going to call mine github:workflows, as it explains the purpose quite well to me. We can also add a description to our console command so that the purpose is clear when browsing available commands: "Fetch a list of workflows from GitHub by the repository name."

Finally, we get to the handle method of our command, the part where we actually need to do something. In our case, we are going to send a request, get some data, and display that data in some way. Before we can do that, though, there is one thing we have not covered up to this point: authentication. With every API integration, authentication is one of the key aspects; we need the API to know not only who we are but also that we are actually allowed to make this request. If you go to your GitHub settings and click through to developer settings and personal access tokens, you will be able to generate your own token there. I would recommend using this approach instead of going for a full OAuth application. We do not need OAuth; we just need users to be able to access what they need.

Once you have your access token, we need to add it to our .env file and make sure we can pull it through our configuration.

GITHUB_API_TOKEN=ghp_loads-of-letters-and-numbers-here

We can now extend our service in config/services.php under github to add this token:

'github' => [
    'url' => env('GITHUB_API_URL', 'https://api.github.com'),
    'token' => env('GITHUB_API_TOKEN'),
],

Now that we have a good way of loading this token, we can get back to our console command. We need to amend our signature so we can accept the owner and repository as arguments:

class ListRepositoryWorkflows extends Command
{
    protected $signature = 'github:workflows
        {owner : The owner or organisation.}
        {repo : The repository we are looking at.}
    ';

    protected $description = 'Fetch a list of workflows from GitHub by the repository name.';

    public function handle(): int
    {
        return 0;
    }
}

Now we can turn our focus onto the handle method:

public function handle(): int
{
    $request = new ListRepositoryWorkflowsRequest(
        owner: $this->argument('owner'),
        repo: $this->argument('repo'),
    );

    return self::SUCCESS;
}

Here we are starting to build up our request by passing the arguments straight into the request itself. What we might also want to do is create some local variables so we can provide feedback on the console:

public function handle(): int
{
    $owner = (string) $this->argument('owner');
    $repo = (string) $this->argument('repo');

    $request = new ListRepositoryWorkflowsRequest(
        owner: $owner,
        repo: $repo,
    );

    $this->info(
        string: "Fetching workflows for {$owner}/{$repo}",
    );

    return self::SUCCESS;
}

Now we have some feedback for the user, which is always important when it comes to a console command. Next, we need to add our authentication token and actually send the request:

public function handle(): int
{
    $owner = (string) $this->argument('owner');
    $repo = (string) $this->argument('repo');

    $request = new ListRepositoryWorkflowsRequest(
        owner: $owner,
        repo: $repo,
    );

    $request->withTokenAuth(
        token: (string) config('services.github.token'),
    );

    $this->info(
        string: "Fetching workflows for {$owner}/{$repo}",
    );

    $response = $request->send();

    return self::SUCCESS;
}

If you amend the above to dd() the value of $response->json(), just for now, and then run the command:

php artisan github:workflows laravel laravel

This will dump a list of workflows for the laravel/laravel repo. Our command works with any public repo; if you wanted it to be more specific, you could build up an option list of repos to check against instead of accepting free-form arguments (a quick sketch of that alternative follows), but that part is up to you. For this tutorial, I am going to focus on the wider, more open use case.
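Here's a sketch of that alternative, using Laravel's built-in choice prompt with a hypothetical hard-coded repo list:

// Offer a fixed menu of repos instead of free-form arguments.
$repo = $this->choice(
    question: 'Which repository do you want to check?',
    choices: ['laravel/laravel', 'laravel/framework'],
    default: 0,
);

[$owner, $name] = explode('/', $repo);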

Now, the response we get back from the GitHub API is informative, but it requires transforming for display, and if we look at it in isolation, there is no context. Instead, we will add another plugin to our request that lets us transform responses into DTOs (Data Transfer Objects), which is a great way to handle this. It allows us to lose the flexible array we are used to getting from APIs and get something that is more contextually aware. Let's create a DTO for a workflow. Create a new file, app/Http/Integrations/GitHub/DataObjects/Workflow.php, and add the following code to it:

class Workflow
{
    public function __construct(
        public int $id,
        public string $name,
        public string $state,
    ) {}

    public static function fromSaloon(array $workflow): static
    {
        return new static(
            id: intval(data_get($workflow, 'id')),
            name: strval(data_get($workflow, 'name')),
            state: strval(data_get($workflow, 'state')),
        );
    }

    public function toArray(): array
    {
        return [
            'id' => $this->id,
            'name' => $this->name,
            'state' => $this->state,
        ];
    }
}

We have a constructor that contains the important parts of the workflow we want to display, a fromSaloon method that transforms an array from a Saloon response into a new DTO, and a toArray method for turning the DTO back into an array when we need it. Inside our ListRepositoryWorkflowsRequest, we need to inherit a new trait and add a new method:

class ListRepositoryWorkflowsRequest extends SaloonRequest
{
    use CastsToDto;

    protected ?string $connector = GitHubConnector::class;

    protected ?string $method = Saloon::GET;

    public function __construct(
        public string $owner,
        public string $repo,
    ) {}

    public function defineEndpoint(): string
    {
        return "/repos/{$this->owner}/{$this->repo}/actions/workflows";
    }

    protected function castToDto(SaloonResponse $response): Collection
    {
        return (new Collection(
            items: $response->json('workflows'),
        ))->map(function ($workflow): Workflow {
            return Workflow::fromSaloon(
                workflow: $workflow,
            );
        });
    }
}

We inherit the CastsToDto trait, which allows this request to call the dto method on a response, and then we add a castToDto method where we control how the response is transformed. We want this to return a new Collection, as there is more than one workflow, using the workflows part of the response body. We then map over each item in the collection and turn it into a DTO. Alternatively, we can build the collection from DTOs directly:

protected function castToDto(SaloonResponse $response): Collection
{
    return new Collection(
        items: $response->collect('workflows')->map(fn ($workflow) =>
            Workflow::fromSaloon(
                workflow: $workflow,
            ),
        ),
    );
}

You can choose what works best for you here. I prefer the first approach personally, as I like to step through and see the logic, but there is nothing wrong with either; the choice is yours. Back to the command now: we need to think about how we want to display this information:

public function handle(): int
{
    $owner = (string) $this->argument('owner');
    $repo = (string) $this->argument('repo');

    $request = new ListRepositoryWorkflowsRequest(
        owner: $owner,
        repo: $repo,
    );

    $request->withTokenAuth(
        token: (string) config('services.github.token'),
    );

    $this->info(
        string: "Fetching workflows for {$owner}/{$repo}",
    );

    $response = $request->send();

    if ($response->failed()) {
        throw $response->toException();
    }

    $this->table(
        headers: ['ID', 'Name', 'State'],
        rows: $response
            ->dto()
            ->map(fn (Workflow $workflow) =>
                $workflow->toArray()
            )->toArray(),
    );

    return self::SUCCESS;
}

So we create a table with headers, and for the rows we take the response DTOs and map over the collection, casting each DTO back to an array for display. This may seem counterintuitive, casting from a response array to a DTO and back to an array, but it enforces types so that the ID, name, and state are always there when expected, with no funny results. It gives us consistency where a plain response array may not have it, and if we wanted to, we could turn the DTO into a value object with behaviour attached. If we now run our command, we should see a nice table output, which is easier to read than a few lines of strings:

php artisan github:workflows laravel laravel

Fetching workflows for laravel/laravel
+----------+------------------+--------+
| ID       | Name             | State  |
+----------+------------------+--------+
| 12345678 | pull requests    | active |
| 87654321 | Tests            | active |
| 18273645 | update changelog | active |
+----------+------------------+--------+

Lastly, just listing out these workflows is great, but let's take it one step further in the name of science. Say you were running this command against one of your repos and wanted to run the update changelog workflow manually. Or maybe you want it triggered on a cron from your production server, or on any event you can think of; we could set the changelog workflow to run once a day at midnight so we get daily recaps, or anything else we might want. Let's create another request for creating a new workflow dispatch event:

php artisan saloon:request GitHub CreateWorkflowDispatchEventRequest

Inside the new file, app/Http/Integrations/GitHub/Requests/CreateWorkflowDispatchEventRequest.php, add the following code so we can walk through it:

class CreateWorkflowDispatchEventRequest extends SaloonRequest
{
    use HasJsonBody;

    protected ?string $connector = GitHubConnector::class;

    protected ?string $method = Saloon::POST;

    public function __construct(
        public string $owner,
        public string $repo,
        public string $workflow,
    ) {}

    public function defaultData(): array
    {
        return [
            'ref' => 'main',
        ];
    }

    public function defineEndpoint(): string
    {
        return "/repos/{$this->owner}/{$this->repo}/actions/workflows/{$this->workflow}/dispatches";
    }
}

We are setting the connector and inheriting the HasJsonBody trait so we can send data, and the method has been set to POST. Then we have a constructor that accepts the parts of the URL that build up the endpoint. Finally, we have some default data inside defaultData, which sets defaults for this POST request. The ref can be a branch or tag name, so I have set my default to main, as that is what I usually call my production branch. We can now hit this endpoint to dispatch a new workflow event, so let's create a console command to control it from our CLI:

php artisan make:command GitHub/CreateWorkflowDispatchEvent

Now let’s fill in the details and then we can walk through what is happening:

class CreateWorkflowDispatchEvent extends Command
{
    protected $signature = 'github:dispatch
        {owner : The owner or organisation.}
        {repo : The repository we are looking at.}
        {workflow : The ID of the workflow we want to dispatch.}
        {branch? : Optional: The branch name to run the workflow against.}
    ';

    protected $description = 'Create a new workflow dispatch event for a repository.';

    public function handle(): int
    {
        $owner = (string) $this->argument('owner');
        $repo = (string) $this->argument('repo');
        $workflow = (string) $this->argument('workflow');

        $request = new CreateWorkflowDispatchEventRequest(
            owner: $owner,
            repo: $repo,
            workflow: $workflow,
        );

        $request->withTokenAuth(
            token: (string) config('services.github.token'),
        );

        // hasArgument() only checks that the argument is defined in the
        // signature, so test for an actual value before overriding the ref.
        if ($this->argument('branch')) {
            $request->setData(
                data: ['ref' => $this->argument('branch')],
            );
        }

        $this->info(
            string: "Requesting a new workflow dispatch for {$owner}/{$repo} using workflow: {$workflow}",
        );

        $response = $request->send();

        if ($response->failed()) {
            throw $response->toException();
        }

        $this->info(
            string: 'Request was accepted by GitHub',
        );

        return self::SUCCESS;
    }
}

So, like before, we have a signature and a description; this time the signature has an optional branch argument in case we want to override the default ref in the request. In the handle method, we check whether a branch value was actually passed and, if so, set the data for the request (note that hasArgument alone wouldn't work here, since it only checks that the argument is defined in the signature). We then give a little feedback on the CLI, letting the user know what we are doing, and send the request. If all goes well, we output a message informing the user that GitHub accepted the request. If something goes wrong, we throw the specific exception, at least during development.
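For example, to dispatch a workflow on a repo you control (the owner, repo, workflow ID, and branch here are purely illustrative):

php artisan github:dispatch your-org your-repo 12345678 develop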

The main caveat with this last request is that the workflow itself must be set up to accept dispatch events, which you do by adding a new trigger to the on section of the workflow file:

on: workflow_dispatch

That is it! We are using Saloon and Laravel to not only list repository workflows but also, when configured correctly, trigger them to run on demand.

As I said at the beginning of this tutorial, there are many ways to approach API integrations, but one thing is certain: Saloon makes them clean and easy, and also quite delightful to use.

Laravel News

‘Stormgate’ is a new free-to-play RTS from the director of ‘Starcraft 2’

http://img.youtube.com/vi/lLMEIMCmS44/0.jpg

In 2020, Starcraft 2 production director Tim Morten left Blizzard to start Frost Giant Studios. At Summer Game Fest, he finally showed off what he and his team have been working on for the past two years. We got our first look at Stormgate, a new free-to-play real-time strategy game that runs on Unreal Engine 5. Morten didn’t share too many details on the project but said the game would feature two races at launch.  

Frost Giant features some serious talent. In addition to Morten, former Warcraft 3: The Frozen Throne campaign designer Tim Campbell is part of the team working on Stormgate. Frost Giant plans to begin beta testing the game next year. 

Engadget

Matt Layman: You Don’t Need JavaScript

https://www.mattlayman.com/img/2022/bbkPxxxCV6M.jpg

What If I Told You… You Don’t Need JavaScript. This talk explores why JavaScript is not a good fit for most web apps. I then show how most web apps can do dynamic things using htmx, an extension library that makes HTML markup more capable. I present examples of AJAX fetching and deletion, plus a dynamic search and how to implement infinite scrolling with a trivial amount of code.

Planet Python

You Thought ‘Bugdom’ and ‘Nanosaur’ Were Lost Forever

https://i.kinja-img.com/gawker-media/image/upload/c_fill,f_auto,fl_progressive,g_center,h_675,pg_1,q_80,w_1200/3f2bea8263f27df10227d92520762547.jpg

These days, the only pre-installed game you’ll find on Mac is an exciting, strategy-based war simulator pitting royal factions against one another. And by that, I mean Chess. But now you can get unique, fun, classic games like Bugdom, Nanosaur, and Cro-Mag Rally. I thought these titles were lost for good, but as it turns out, you can still play them.

I grew up with the iMac G3. To the outside world, it certainly wasn’t a gaming machine, but to me it was a premier PC. I was able to play the games I wanted to play, which were usually the Mac’s two Harry Potter ports (those soundtracks, though), but my favorite part of the G3 was the pre-installed titles: I didn’t have an N64, PlayStation, or GameCube, but I had Bugdom, Nanosaur, and Cro-Mag Rally. And that was alright with me.

What happened to Bugdom, Nanosaur, and Cro-Mag Rally?

In case you don’t have the fond memories of these titles, here’s a quick summary: Bugdom has you playing as a pill bug traversing 10 levels to save your world from an invasion of enemy ants. (It’s great, I promise.) In Nanosaur, you’re a dinosaur armed to the teeth, outrunning other dinosaurs in an effort to steal their eggs. (Again, it’s great.) And Cro-Mag Rally is a kart racer game that’s set in the ancient world, complete with “time-appropriate” karts and weapons.

These games wouldn’t be big sellers in 2022, but they pushed some boundaries for Mac gaming and 3D development back in the day.

It doesn’t end there, though: The iMac G5 also shipped with two unique titles: Nanosaur 2, a sequel to the original dino shooter (this time starring a murderous pterodactyl), and Marble Blast Gold, in which you controlled a marble through a series of progressively challenging race tracks to the finish line. To give credit where credit’s due, Pangea Software developed most of these games, plus plenty of other games you could purchase separately, while Marble Blast Gold was developed by GarageGames.

The problem with these old games is they were written for Mac hardware (PowerPC) that is no longer supported. The original game files exist, but if you download them to your Mac today, you won’t be able to open them. With the exception of mobile ports (which I’ll cover below), I thought most of these games were essentially lost forever. Luckily, there’s a way to replay them on your current hardware, through both mobile ports, as well as total rewrites of the games’ original code.

How to download classic Mac games, or play online

The two titles available as Mac downloads right now are Bugdom and Nanosaur. These games have been rewritten by developer Jorio for macOS, Windows, and Linux, which allows you to play the original games as they were on your current machine. To install these games on your computer, follow the links above, then choose your particular OS from the list of download links. It’s a nostalgia trip, for sure. Do I miss playing these things on that classic CRT display with the matching keyboard and hockey puck mouse? Sure. But after years of not being able to play Bugdom outside of my own memories, I’ll take it.

The easiest one to play, though, is Marble Blast Gold. The game and its sequel were ported by developer Vanilagy as web apps, meaning you can play right in your browser. Head to this site, then click the marble in the top left to choose Marble Blast Gold. You’ll find all levels already unlocked, plus over 2,000 custom levels designed by other players.

Cro-Mag Rally and Nanosaur 2 haven’t been rewritten for modern Macs, unfortunately, but you can play the games’ ports on iOS and iPadOS as a $1.99 download (there’s also a free version of Nanosaur 2 with ads). Nanosaur 2 is mostly how I remember it, but I’m a bit disappointed with the Cro-Mag Rally situation. Don’t get me wrong—I’m thrilled this game is ready to play in 2022 in any form, but this version isn’t the one I really want. Cro-Mag Rally on Mac OS X came with additional game modes, plus a settings pane that let you adjust the physics of the game. I’d love to experience those parts of Cro-Mag Rally again, but it doesn’t look like that’s happening anytime soon.

 

Lifehacker

One Reporter’s Road Trip Nightmare Proves the Electric Vehicle Skeptics Right

https://www.louderwithcrowder.com/media-library/image.png?id=29955272&width=980

As much as the environmentalist crowd and proponents of "green" renewable energy enjoy touting the newest technological innovations as something of a godsend, many Americans remain skeptical of the advances. Even the seemingly unstoppable climb in gas prices fails to move many who simply don’t believe their neighbor’s Prius is the solution to their fiscal woes. Perhaps it’s just intransigence. Maybe Americans aren’t prepared to adopt the new technology because we’re stuck in our ways; who doesn’t enjoy hearing the purr of a finely tuned engine or the roar as you stomp the gas at a light that has only just turned green?

But it may also be that people have weighed the pros and cons, looked into the capabilities, and made an educated choice based on all the relevant factors. If they haven’t, or if they are still thinking about making the move to an electric vehicle, the story of one journalist’s nightmare road trip might be the final bit of information they need to make a decision.

Writing for the Wall Street Journal, Rachel Wolfe prepared and planned for a recent trip with all the glee of a child counting down the days to Christmas the year before finding out Santa doesn’t actually exist. She is hopeful to the point of giddiness, unaware that the fantasy she’s been told is about to come crashing down in due time.

She’s responsible about the planning, outlining the entire itinerary and mapping out every stop to charge her rented Kia EV6. She’s so sure of her plan that she invites along her friend, who has a hard deadline to meet: a shift at work at the end of the trip.

What Wolfe and her friend Mack find out, however, is the truth.

The reality of the electric vehicle infrastructure immediately slaps the duo in the face. Chargers are apparently divided into quick chargers and, well, not, and even among the quick chargers there is a disconcerting caveat to the moniker: the label covers machines supplying anywhere from 24 to 350 kW. That range proves troublesome, translating into far longer charge times when the machine you’ve stopped at sits on the lower end of the spectrum, and worse still when it can’t even meet the minimum, like the machine Wolfe came across on the first leg of her trip.

From there, it only spirals. Suffice it to say, deficiencies in the charging infrastructure, as well as flaws in the vehicle itself (which suffers especially in inclement weather), deal blow after blow, headache after heartache, all the way to the end. What’s more, it would seem the universe was trying to warn the two women about their decision, as person after person along the way voiced apprehension, skepticism, and regret about their electric vehicle purchases.

At one point, Wolfe and her friend frantically work to cut power consumption to prevent a breakdown on the road in the middle of a storm: "To save power, we turn off the car’s cooling system and the radio, unplug our phones and lower the windshield wipers to the lowest possible setting while still being able to see. Three miles away from the station, we have one mile of estimated range."

Don’t worry. This isn’t about to turn into a horror movie where they break down in the middle of the night or something. They make it to the next charging station, but only just in the nick of time.

"At zero miles, we fly screeching into a gas-station parking lot. A trash can goes flying and lands with a clatter to greet us."

They also manage to make it back to Chicago in just enough time for an emotionally drained and physically exhausted Mack to walk into her shift at work. At least she didn’t miss it.

In the end, even Wolfe was forced to come to terms with the reality of the present state of EVs and their support, obviously coming to the conclusion that they aren’t all they’re cracked up to be.

"The following week, I fill up my Jetta at a local Shell station. Gas is up to $4.08 a gallon. I inhale deeply. Fumes never smelled so sweet."

While I’ve editorialized quite a bit and condensed her story down to just a few snippets, you can rest assured the full story bears it all out. And for those of us who have honestly kept an eye on the burgeoning electric vehicle industry, absolutely none of this comes as a shock. The tech is getting there, and I will even concede that it may well become something great and reliable in the future, but that future is not yet upon us. So, while politicians and celebrities laugh at Americans still driving gasoline-powered vehicles, pointing at skyrocketing gas prices and mocking those forced to pay them, the truth is that even those financially capable of switching to an EV will find themselves wrestling with the same issues Wolfe encountered on her brief trip.

Now, I just moved. The drive was about 350 miles one way, and I did it on a single tank of gas. And while even my trip suffered a few spats of rainy weather, I never had to stop, sacrifice my AC, unplug my phone, turn off the radio, or worry about whether my windshield wipers were going to suck up the last bit of fuel in the tank. If I had run low on fuel, I knew any gas station could fill me up. High prices notwithstanding, that’s a level of peace of mind no EV driver can claim to have; or rather, not if they want to pull out of the station in under an hour.

Perhaps what is necessary is not to force Americans to transition to EVs, which would only stress an already weak infrastructure, but to adopt more responsible policies that lower gasoline prices, make driving more affordable, and allow the requisite time to build that infrastructure if and when the transition occurs naturally.




Louder With Crowder