Laravel create/show Route Doesn’t Work? A Typical Mistake.

https://laraveldaily.com/storage/322/Wrong-Way.png

If you have a few similar GET routes, there’s a danger of one overriding another.

Example 1: Two GET Routes

routes/web.php:

Route::get('posts/{post}', [PostController::class, 'show']);

Route::get('posts/create', [PostController::class, 'create']);

The problem with this pair is that the literal segment "create" also matches the {post} wildcard, so if you load /posts/create in the browser, Laravel executes the first matching route, posts/{post}, assigning {post} == "create".

In that case, you need to change the order:

routes/web.php:

Route::get('posts/create', [PostController::class, 'create']);

Route::get('posts/{post}', [PostController::class, 'show']);

With the specific route registered first, both routes work as intended.
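Alternatively, if your post routes use numeric IDs, you can keep the original order and constrain the wildcard so it can never match "create". A sketch, assuming numeric post IDs:

```php
// routes/web.php: hypothetical alternative, assuming numeric post IDs.
use App\Http\Controllers\PostController;
use Illuminate\Support\Facades\Route;

// whereNumber() stops {post} from matching the literal "create",
// so the registration order no longer matters for this pair.
Route::get('posts/{post}', [PostController::class, 'show'])->whereNumber('post');

Route::get('posts/create', [PostController::class, 'create']);
```

This trades ordering discipline for an explicit constraint, which also gives you an automatic 404 for non-numeric IDs.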


Example 2: Resource Controller with Extra Method

routes/web.php:

Route::resource('posts', PostController::class);

Route::get('posts/export', [PostController::class, 'export']);

If you launch /posts/export in the browser, it will instead execute the show() method of the Controller, and not the export() method.

The reason: as a part of Resource Controller, there’s a show() method, which has a signature of posts/{post}. So it will match the same way as posts/export.

If you want to add extra methods to Resource Controller, you need to add them before the Route::resource():

routes/web.php:

Route::get('posts/export', [PostController::class, 'export']);

Route::resource('posts', PostController::class);

The general rule of thumb: place specific routes without parameters earlier in the routes file than the "wildcard" routes with parameters.

Laravel News Links

Setting up a WebSocket server for your Laravel app


This article has been cross-posted on the Knack Engineering blog.

Introduction

WebSockets are a ubiquitous tool for building real-time experiences for users. They provide a persistent, bidirectional connection between a client and a server. For example, say the frontend of your app needs to stay up-to-date within seconds of your backend to do things like deliver notifications, status updates, etc. You may have implemented this as a periodic check, or poll, of the backend via some API like REST or GraphQL. Now that you have more users, and/or your processes have become more complex, these requests can start to weigh on your infrastructure capacity. That’s a problem that WebSockets can help solve.

Laravel and its ecosystem play a significant role in our stack at Knack, and recently we embarked on a new WebSocket server implementation which would work closely with our Laravel-based core API. For the uninitiated, Laravel is a popular PHP framework and ecosystem that provides a rich set of tools and features for building modern web applications. We decided to use the swooletw/laravel-swoole library to build our server-side implementation of a custom subprotocol and I had the opportunity to document some of my experiences along the way. I’ll show how to get started with the library, provide some background information, some caveats to consider, and drop a few tips. Let’s get started!

WebSocket Server Solutions for Laravel

First, we are working with the assumption that you want, or need, to host your own WebSocket server, and that you want to do so in PHP. With a stateful protocol like WebSockets we need to be able to run PHP concurrently. There are basically two different approaches: language-level and extension-level.

Language-level libraries and frameworks implement concurrency in pure PHP. Examples include AMPHP and ReactPHP. In contrast, extensions are written in a lower-level language, typically C/C++, and expose APIs at the PHP-level to access the extension-level functionality. Swoole and OpenSwoole are examples of PHP extensions. In short, Swoole is an event-driven, asynchronous, multithreaded framework that makes it easy to build performant, concurrent servers.

Now let’s evaluate pre-packaged WebSocket server solutions that easily plug into a Laravel app. On the language-level side we have the seemingly most popular choice: beyondcode/laravel-websockets, a package by Beyond Code. It is built on Ratchet (which is built on ReactPHP), has a Pusher protocol implementation, and plugs well into the Laravel framework’s broadcasting API. While this supports a lot of use cases, and does so in a familiar "Laravel way", it is not very flexible when you need something like a custom subprotocol implementation, and it is inherently limited in performance because it is built on a PHP server core rather than a C/C++ one.

Stargazer Statistics

On the extension-level side we have two libraries that are built around Swoole: swooletw/laravel-swoole and hhxsv5/laravel-s. They are both established packages with a wide contributor base, but my colleagues and I ultimately concluded that the sandbox and safety features within laravel-swoole were a bit more mature and reliable in our testing.

There are a few blog posts out there on getting started with laravel-s but not as many for laravel-swoole, so I felt particularly motivated to write and publish this article. The default laravel-swoole WebSocket implementation is built with Socket.io in mind, whereas laravel-s is a bit more agnostic in its implementation. The packages have significant feature overlap but some differences as well; consider both for your project and evaluate the best fit!

Getting started with swooletw/laravel-swoole

First, let’s install either Swoole or OpenSwoole. I’ll be using Swoole for this example. Be sure to enable support for WebSockets on installation when prompted like so:

Enable Sockets Support

pecl install swoole

If you run into any issues during the Swoole build process I included a few troubleshooting steps in the appendix.

Now, create a new Laravel app and continue bootstrapping it with Laravel Jetstream. I used the Inertia template, but I believe the API feature should work with Livewire as well.

laravel new
composer require laravel/jetstream
php artisan jetstream:install inertia

Let’s enable Jetstream’s API feature by uncommenting the Features::api() line in config/jetstream.php:

'features' => [
    ...
    Features::api(),
    ...
],

Next, require the laravel-swoole package and publish its config files:

composer require swooletw/laravel-swoole
php artisan vendor:publish --tag=laravel-swoole

Here we’ll also want to adjust the config to enable the WebSocket server by default.

'websocket' => [
    'enabled' => env('SWOOLE_HTTP_WEBSOCKET', true),
],

Now, for this basic example, we’ll define our handlers inline as closures in the WebSocket "routes" file, routes/websocket.php. For anything more than a simple exploration, I recommend moving the handler code into controllers or Laravel Actions.

<?php

declare(strict_types=1);

use App\Models\User;
use Laravel\Sanctum\PersonalAccessToken;
use SwooleTW\Http\Websocket\Facades\Websocket;

Websocket::on('whoami', function (SwooleTW\Http\Websocket\Websocket $websocket) {
    $websocket->emit(
        'message',
        $websocket->getUserId() === null
            ? 'You are not authenticated'
            : "Your userID is {$websocket->getUserId()}"
    );
});

Websocket::on('login', function (SwooleTW\Http\Websocket\Websocket $websocket, mixed $data) {
    if (is_string($data)) {
        $tokenInstance = PersonalAccessToken::findToken($data);

        if (
            $tokenInstance instanceof PersonalAccessToken &&
            $tokenInstance->tokenable instanceof User
        ) {
            $websocket->loginUsing($tokenInstance->tokenable);
            $websocket->emit('message', $tokenInstance);
        }
    }
});

Now we can test it out! Let’s start the Swoole server.

Note: You will need to disable Xdebug before you can run any Swoole code. If you skip this step your terminal will be overwhelmed with warning logs when you run this next command and things might not behave properly. If you expect to be doing a lot of Swoole development then you may want to configure a tool or script to enable or disable loading the Xdebug extension. For macOS there’s an open source tool to add a toggle button in the menu bar called xDebugToggler.

php artisan swoole:http start

Try navigating to http://127.0.0.1:1215 and see if you can register and log in, just as if you were running php artisan serve. From there, head to the API Tokens section:

Jetstream API Tokens

Create a token and save it for the next step.

Create API Token

Now, let’s connect to the server via WebSocket and test our handlers! Open up your WebSocket client of choice. I’ll be using Postman here.

Let’s get connected and make sure everything works. Enter your hostname and port (likely 127.0.0.1:1215) and hit connect. You should see two messages come through and the status should still read CONNECTED.

Connect to WebSocket with Postman

If you’re wondering what those 1-2 digit numbers prefixing the server messages are, those are Socket.io packet type encodings. You can remove these and adjust anything about frame parsing by creating your own implementation of \SwooleTW\Http\Websocket\Parser.
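As a rough illustration of that extension point, a stripped-down parser that speaks plain JSON instead of Socket.io framing could look like the sketch below. This assumes the abstract Parser contract (encode/decode, as implemented by the bundled Socket.io parser); verify the method signatures against your installed version of the package:

```php
<?php

// app/Websocket/RawJsonParser.php: a hypothetical custom parser sketch.
// The class name and payload shape are assumptions of this example.

namespace App\Websocket;

use SwooleTW\Http\Websocket\Parser;

class RawJsonParser extends Parser
{
    // No Socket.io handshake/heartbeat strategies for raw JSON frames.
    protected $strategies = [];

    // Encode outgoing messages as plain JSON, with no type-code prefix.
    public function encode(string $event, $data)
    {
        return json_encode([$event, $data]);
    }

    // Decode incoming frames from a ["event", payload] JSON array.
    public function decode($frame)
    {
        $payload = json_decode($frame->data, true);

        return [
            'event' => $payload[0] ?? null,
            'data'  => $payload[1] ?? null,
        ];
    }
}
```

You would then point the parser option in the published config at this class so the server uses it for every frame.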

Let’s send a whoami payload using this JSON:

[
  "whoami"
]

WebSocket whoami test

As expected we get the unauthenticated message. So let’s log in! Refer to the following JSON payload:

[
  "login",
  "YOUR_API_TOKEN"
]

WebSocket login test

And we get the $tokenInstance we emitted in our routes/websocket.php handler for login, and the default parser was nice enough to JSON-encode it for us! We can send another whoami to check our work:

WebSocket whoami second test

We now have a WebSocket server and a way to authenticate connections! This example is very basic, but I hope it gives a background to build something great with.

Appendix

Notes on using Swoole and laravel-swoole

Try Using Swoole (or RoadRunner) to serve your app

Swoole benchmark comparison

If you end up using a Swoole-based solution for your WebSocket server, why not serve HTTP requests using Swoole as well? By default, Laravel’s request lifecycle is actually pretty redundant: every incoming request fully bootstraps the application from scratch, framework, service providers and all. There is a first-party solution to this: Laravel Octane, which uses your choice of Swoole (or OpenSwoole) or RoadRunner to serve many more requests by booting the application once and keeping it in memory. Laravel Octane with RoadRunner is very likely to be a drop-in replacement for Nginx/PHP-FPM for your Laravel app if you’d rather not add Swoole to your stack. Either way, it could be an easy win to dramatically increase your throughput for typical HTTP server needs using Laravel Octane, laravel-swoole, laravel-s, or another similar solution.

Choosing a WebSocket Room Driver

Be sure to choose your room driver, which associates WebSocket connections with users as well as membership in channels, appropriately for your use case. By default, the library will use a Swoole table to store these records. Swoole tables are an incredibly performant data store compared to Redis, as Cerwyn Cahyono concluded in May 2021. One consideration however is horizontal scaling: if you have more than one WebSocket server behind a load balancer, for instance, you need to consider that Swoole tables are only accessible by the Swoole server process and its forks.

If you need to have a common record among all of your WebSocket server instances with laravel-swoole then you may want to consider the provided Redis driver (\SwooleTW\Http\Websocket\Rooms\RedisRoom), or creating your own implementation. Be sure to add a prefix unique to the server for all records, that way you don’t end up sending a message intended for one WebSocket connection on Server A to another, unrelated WebSocket connection on Server B.

Implementing middleware for WebSocket frames/messages

You may also find that you’d like to have middleware for all incoming WebSocket messages. If all you need to do is interact with parts of the Frame then you can add this handling in a custom handler or parser. These are defined in config/swoole_websocket.php. If you need to get the user ID and interact with a WebSocket instead then you’ll have to override and extend the \SwooleTW\Http\Concerns\InteractsWithWebsocket trait to directly modify the Swoole onMessage handler.

Adding Swoole to an existing stack/system

If you have a system where a Swoole WebSocket server stands alongside other pieces like an HTTP server, queue worker, etc. then you will need to implement some kind of global pub/sub infrastructure that coordinates between the Swoole WebSocket server and everything else. Redis is one way to fill that need. You could have an async Redis client boot with the Swoole server and SUBSCRIBE to a given channel. The client could listen for a JSON payload which could simply have a user ID and data to emit to the authenticated WebSocket connection. That way, you could issue WebSocket messages from non-Swoole contexts by simply PUBLISHing the JSON payload to the corresponding channel. This does have the added overhead of having to establish a Redis connection for each emit from a non-Swoole context, but you get the flexibility of cooperating well with your existing system.
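The publish side of that pattern, from a non-Swoole context such as an HTTP controller or queued job, could look like this sketch. The channel name and payload shape are assumptions of this example, not part of laravel-swoole:

```php
<?php

// Hypothetical publish side of the pub/sub pattern described above.
// The "websocket:emit" channel and payload keys are assumed names.

use Illuminate\Support\Facades\Redis;

function emitToUser(int $userId, string $event, array $data): void
{
    // A subscriber booted alongside the Swoole server would look up the
    // user's WebSocket connection(s) and emit this event to them.
    Redis::publish('websocket:emit', json_encode([
        'user_id' => $userId,
        'event'   => $event,
        'data'    => $data,
    ]));
}
```

The subscriber process stays with the Swoole server, so only one side of the system needs direct access to the WebSocket connections.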

Troubleshooting installation of Swoole extension

When installing the Swoole extension on both ARM64 and x64 macOS machines I’ve run into a few issues and wanted to share how I resolved them.

fatal error: ‘pcre2.h’ file not found

If you get this error when installing Swoole then you need to be sure you have pcre2 installed. On macOS, you can do this using Brew:

brew install pcre2

Then you can use the Brew CLI to grep the location of the pcre2.h file and link it under your current PHP version’s include/php/ext/pcre/ directory.

brew list pcre2 | grep 'pcre2\.h$'

ln -s /opt/homebrew/Cellar/pcre2/xx.yy/include/pcre2.h /opt/homebrew/Cellar/php@x.y/x.y.z/include/php/ext/pcre/pcre2.h

fatal error: ‘openssl/ssl.h’ file not found

This guide doesn’t require building Swoole with OpenSSL support, but if you do you may need to set your OpenSSL directory during the build config. You can do so by first ensuring that you have OpenSSL installed locally, on macOS you can also do this using Brew:

brew install openssl

Then you can once again use the Brew CLI to get your OpenSSL directory and pass it in during the extension build configuration, right after executing pecl install. When prompted to enable OpenSSL support, type out “yes” and then add the --with-openssl-dir flag inline like so:

brew --prefix openssl
pecl install swoole


enable openssl support? [no] : yes --with-openssl-dir=/opt/homebrew/opt/openssl@3

Notes on WebSockets

Subprotocol gotcha

Be sure to pay attention to the Sec-WebSocket-Protocol HTTP header on the initial connection request. If the client request specifies a protocol in the header and the server doesn’t respond with that protocol, or any, then some browsers like Chrome will just drop the connection, which technically follows the WebSocket spec more closely, and others like Firefox and Safari will connect without hesitation.

Which WebSocket dev client to use?

For me this has been Firecamp, but I have gotten frustrated with the bugs and poor performance of the app. It has a lot of potential, so I’m definitely going to keep watching it! Insomnia just added WebSocket support, but it is still immature and lacking features. As of Jan 2023 I recommend using Postman, though note that even for them WebSockets support is somewhat of a beta.

Give your WebSocket server a heart(beat)

Be sure to implement some kind of heartbeat function for persistent WebSocket connections. It’s a good general practice, and helps when you have or want infrastructure configurations that close inactive connections. With many subprotocols the heartbeat is client-initiated.
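With laravel-swoole’s event-style handlers, a minimal client-initiated heartbeat can be as simple as answering a ping event, using the same Websocket facade as the earlier handlers. The "ping"/"pong" event names are an assumption of this sketch:

```php
// routes/websocket.php: hypothetical heartbeat handler; the "ping" and
// "pong" event names are assumptions, pick whatever your subprotocol uses.
use SwooleTW\Http\Websocket\Facades\Websocket;

Websocket::on('ping', function (SwooleTW\Http\Websocket\Websocket $websocket) {
    // Reply so the client (and any idle-connection reaper between you)
    // sees regular traffic on the connection.
    $websocket->emit('pong', now()->toIso8601String());
});
```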

Authentication Patterns

There are many patterns for authenticating WebSocket connections, but they can be categorized as happening either during the initial connection request (authentication-on-connection) or afterwards, within the established connection (authentication-after-connection). During the initial connection request, the two most common approaches are to use a cookie, or a token in the header or body. With the authentication-on-connection approach, the server typically closes the connection if the authentication check fails. With the authentication-after-connection approach, servers typically close connections that don’t authenticate within a given timeout. At Knack, we created a WebSocket backend that implements the graphql-transport-ws subprotocol, which the frontend library enisdenjo/graphql-ws supports.

For reference, laravel-swoole is configured by default for authentication and other middleware to be run on the initial connection request. While this is a valid approach, my impression is that the dominant WebSocket authentication pattern is to support authentication-after-connection. For this guide I implemented an authentication-after-connection flow as a barebones custom subprotocol. Before proceeding with your project be sure to consider what kinds of clients will be connecting to your WebSocket server and choose your approach accordingly.

Laravel News Links

HBO’s Mario Kart

https://theawesomer.com/photos/2023/02/hbo_mario_kart_pedro_pascal_too_t.jpg

HBO’s Mario Kart

Link

With the success of The Last of Us, HBO has proven that video games can be turned into quality television programming. During his guest spot on SNL, TLOU star Pedro Pascal joins the cast of another famous video game franchise turned prestige TV drama.

The Awesomer

‘Legend of Zelda: A Link to the Past’ Reverse-Engineered for Linux, Switch, Mac, and Windows

More than 30 years ago Nintendo released the third game in its Legend of Zelda series — appropriately titled, "A Link to the Past." This week Neowin called it "one of the most beloved video games of all time," reporting that it’s now been reverse-engineered by a GitHub user named Snesrev, "opening up the possibility of Link to the Past on other platforms, like Sega’s 32X or the Sony Playstation."
This reimplementation of Link to the Past is written in C and contains an astonishing 80,000 lines of code. This version is also content complete, with all the same levels, enemies, and puzzles that fans of the original game will remember. In its current state, the game requires the PPU and DSP libraries from LakeSNES, a fast SNES emulator with a number of speed optimizations that make the game run faster and smoother than ever before. Breaking from the LakeSNES dependency, which allows for compatibility on modern operating systems, would allow the code to be built for retro hardware.

It also offers one of the craziest features I have seen in a long time; the game can run the original machine code alongside the reverse-engineered C implementation. This works by creating a save-state on both versions of the game after every frame of gameplay, comparing their state and proving that the reimplementation works….

Snesrev now works alongside 19 other contributors. Despite the immense amount of work that went into this project, the result is brilliant. Not only does the game play just like the original, it also includes a number of new features that were not present in the original. For example, the game now supports pixel shaders, which allow for even more stunning visuals. It also supports widescreen aspect-ratios, giving players a wider field of view, making the game even more immersive on modern displays. Another new feature of this reimplementation is the higher quality world map. The new map is much more detailed and gives players a better sense of the world they are exploring….

The amount of time, effort, and talent that went into creating this is simply astonishing. Thanks to Slashdot reader segaboy81 for sharing the article.


Read more of this story at Slashdot.

Slashdot

Stockpiling SHTF Ammo – How To and How Much?

https://www.alloutdoor.com/wp-content/uploads/2023/01/m2-military-surplus-ammo-storage-container.jpg

Armed preppers were once derided as paranoid hoarders. Then, COVID happened. We saw empty shelves in grocery stores, lack of access to medications for chronic conditions, and mass civil unrest exacerbated by unemployment and politicking. In short, the pandemic gave us a taste of a real “SHTF” scenario. When it comes to preparing for natural or man-made disasters (or another end-of-days global infection), you need to invest in five things: food, water, shelter, power, and personal defense. That last one means having a stockpile of ammunition to feed your pistols, shotguns, and long rifles. Let’s dive into how to properly store SHTF ammo for the long haul, and take a look at how much you might want to keep around, just in case.

How to Store Ammo Long-Term

Three things determine whether ammo – once pulled out of storage after years – fires reliably or fizzles out: temperature, exposure, and moisture.

Ideal Temperature for Ammo Storage

Regardless of the caliber, casing, or powder, all ammo should be stored at 55 to 80 degrees (F). Colder temperatures can cause the sealant on primers to fail, and condensation can more easily form. The inverse is also true: Very high temperatures can exacerbate the effects of humidity and lead to rapid corrosion. Even in dry cold or heat, extreme temperatures can cause gunpowder to break down, resulting in misfires and unreliable cycling.

Keeping Ammo Dry

SHTF Ammo

Two words: Desiccant packs. You probably know these as “dehumidifiers.” If you’ve ever found a small, cloth packet of plastic beads inside some packaging, you’ve seen a desiccant pack. These little perforated pouches contain silica gel beads, which absorb moisture in the air. They’re excellent at preventing a container of ammo from accumulating moisture. It’s best to invest in properly sealed containers for your ammo. That means something made of decent polymer or coated steel, with a rubber gasket providing a proper seal. We can recommend a few cases to keep things simple.

The Best Ammo Storage Containers

US Surplus M2 Ammo Can

SHTF Ammo

The M2 Ammo Can is arguably the best ammo storage container available. It’s made from steel, it’s coated with a rustproof paint, it’s easy to carry, it can hold plenty of weight (up to 50 lbs.) and it has a reliable gasket seal with a sturdy clamp.

MTM .50-Cal Polymer Ammo Can

SHTF Ammo

The MTM Ammo Can is basically a polymer version of the M2. It’s similar in size, and it features a rubber gasket seal with a decent clamping lid, carry handle, and padlock holes for basic security. Plus, MTM containers are made in the USA.

Pelican 1200 Case

SHTF Ammo

The Pelican 1200 Case provides plenty of space for stacked rifle or pistol magazines, and its legendary toughness and pressure-equalizing seal make it a great choice for long-term ammo storage. This writer employs a few 1200s as his choice for ammo storage, having “tactically acquired” a few from his unit’s armory in prior years. Keep your ammo stored in any of these sealed cases with some dehumidifying packs, and it’s guaranteed to remain stable and ready for use for years, if not decades.

The Best Places to Store Ammo

First, let’s clear up where you shouldn’t store ammo. Avoid any location where your ammo containers are subjected to wild temperature swings. That means no attics, sheds, or garages. Basements without insulation or climate control should also be avoided. Beyond temperature concerns, these locations should be avoided for security reasons: most burglars attempt to forcibly enter a home through the garage or basement, and auxiliary buildings are easier targets since they’re physically separate from the main property.

Indoors, Away From Sunlight

Spare closets, empty spaces under bed frames, and unused kitchen cabinets make great spots. These places provide stable temperatures, they’re easily accessible, and they’re not in vulnerable locations.

Keep It Locked Up

Locking ammo is as important as locking up your guns. Keeping ammo indoors means curious children or wayward guests can stumble upon your rounds. Discourage prying eyes and small digits by slapping some locks on your ammo containers (all the containers we recommend can be secured with padlocks).

How Much Ammo Should I Stockpile?

You can never have too much — as long as it’s stored correctly, that is. You should consider the minimum number of rounds that’ll make you feel secure for the long haul in a true “SHTF” scenario. If Earth were hit by an X-class solar flare and civilization was sent back to the Stone Age, this writer would want enough ammo on tap to last the rest of his (probably shortened) life. So, how much ammo would one need to last, say, 20 to 40 years in a potentially high-conflict environment? We can answer this question – at least, we can ballpark it – with some real data.

Handgun Rounds Stockpile

SHTF Ammo

Data collected from law enforcement shootings with handguns reveal that, on average, it takes 13 to 14 rounds to incapacitate a single threat. In a “SHTF” scenario, you’d want to avoid the public and venturing beyond your safety zone as much as possible. But war-game the idea that you’d need to expose yourself at least a few times a month: assuming you’re in a high-threat environment, you’d want enough ammo to protect yourself from multiple threats. Given the assumption you’re in this doomsday situation for the long haul, some napkin math says you’d want over 1,000 rounds.

That’s about 20 boxes of ammo (most pistol cartridges come in packs of 50). That number of rounds easily fits in a single large container like the Pelican 1200. It’s also enough ammo to hone your marksmanship skills on a regular schedule, while still keeping more loaded mags ready to go than you could ever carry.

Rifle Rounds Stockpile

SHTF Ammo

If you’re like most preppers or survivalists, your mind automatically jumps to 5.56 NATO/.223 Remington, chambered in an AR-type rifle. Speaking from personal experience, it’s surprisingly easy to burn through 210 rounds (that’s seven 30-round magazines, the standard “battle rattle” load) when you’re in a real threat engagement.

You’d want at least enough ammo to replenish those seven mags through multiple threat encounters. Again, if we’re considering the potential for years-long conflict and severe social strife, at least 2,100 rounds of rifle ammo is a reasonable minimum. Stored in typical 20-round boxes, that number of rounds comfortably fits in two .50-cal ammo cans.

Hunting Rounds Stockpile

SHTF Ammo

Preppers lucky enough to survive on game in rural areas, rejoice. You probably need far fewer rounds to live comfortably, but you should still keep more ammo than you need for that trusty bolt gun. Data says that the average hunter expends between 3 and 7 rounds to take one deer. Assuming you’re living off game meat, you’ll want to take at least a few bucks or does to keep the fridge or salt locker stocked. It’s safe to say that you’ll want at least 500 to 800 rounds of hunting ammo. Speaking of hunting: having reliable ammo and an accurate rifle is just part of the equation. See our top tactics for ensuring a successful hunt when survival is on the line.

The post Stockpiling SHTF Ammo – How To and How Much? appeared first on AllOutdoor.com.

AllOutdoor.com

US Marines Outsmart AI Security Cameras by Hiding in a Cardboard Box

United States Marines outsmarted artificially intelligent (AI) security cameras by hiding in a cardboard box and standing behind trees. From a report: Former Pentagon policy analyst Paul Scharre has recalled the story in his upcoming book Four Battlegrounds: Power in the Age of Artificial Intelligence. In the book, Scharre recounts how the U.S. Army was testing AI monitoring systems and decided to use the Marines to help build the algorithms that the security cameras would use. They then attempted to put the AI system to the test and see if the squad of Marines could find new ways to avoid detection and evade the cameras. To train the AI, the security cameras, which were developed by Defense Advanced Research Projects Agency’s (DARPA) Squad X program, required data in the form of a squad of Marines spending six days walking around in front of them. After six days spent training the algorithm, the Marines decided to put the AI security cameras to the test. "If any Marines could get all the way in and touch this robot without being detected, they would win. I wanted to see, game on, what would happen," DARPA deputy director Phil Root tells Scharre in the book. Within a single day, the Marines had worked out the best way to sneak around an AI monitoring system and avoid detection by the cameras. Root says: "Eight Marines — not a single one got detected." According to Scharre’s book, a pair of marines "somersaulted for 300 meters" to approach the sensor and "never got detected" by the camera.


Read more of this story at Slashdot.

Slashdot

Building APIs in Laravel

https://laravelnews.s3.amazonaws.com/images/building-apis-in-laravel.png

Building APIs in Laravel is an art form. You must think beyond data access and wrapping your Eloquent Models in API endpoints.

The first thing you need to do is design your API; the best way to do this is to think about the purpose of your API. Why are you building this API, and what is the target use case? Once you have figured this out, you can effectively design your API based on how it should be integrated.

By focusing your perspective on how your API should be integrated, you can eliminate any potential pain points within your API before it is even released. This is why I always test integrating any APIs I build to ensure a smooth integration that covers all use cases I intend to have.

Let’s talk through an example to paint a picture. I am building a new bank, Laracoin. I need my users to be able to create accounts and create transactions for these accounts. I have an Account model, a Transaction model, and a Vendor model to which each transaction will belong. An example of this is:

Account -> Has Many -> Transaction -> Belongs To -> Vendor

 

Spending Account -> Lunch 11.50 -> Some Restaurant

So we have three main models that we need to focus on for our API. If we were to approach this without any design-led thinking, then we would create the following routes:

GET /accounts

POST /accounts

GET /accounts/{account}

PUT|PATCH /accounts/{account}

DELETE /accounts/{account}

 

GET /transactions

POST /transactions

GET /transactions/{transaction}

PUT|PATCH /transactions/{transaction}

DELETE /transactions/{transaction}

 

GET /vendors

POST /vendors

GET /vendors/{vendor}

PUT|PATCH /vendors/{vendor}

DELETE /vendors/{vendor}

However, what do these routes add? We are just creating JSON access for our Eloquent models, which works – but adds zero value, and from an integration perspective it makes things feel very robotic.

Instead, let’s think about the Design and Purpose of our API. Our API will likely be accessed by mostly internal mobile and web applications. We will focus on these use cases to start with. Knowing this means we can fine-tune our API to fit the user journeys in our applications. So typically, in these applications, we will see a list of accounts, as we can manage our accounts. We will also have to click through to an account to see a list of transactions. We will then have to click on a transaction to see more details. We would never really need to see the vendors directly, as they are there more for categorization than anything else. With that in mind, we can design our API around these use cases and principles:

GET       /accounts
POST      /accounts
GET       /accounts/{account}
PUT|PATCH /accounts/{account}
DELETE    /accounts/{account}

GET       /accounts/{account}/transactions
GET       /accounts/{account}/transactions/{transaction}

POST      /transactions

This lets us manage our accounts effectively while only fetching transactions through the account they belong to. For now, transactions should not be editable or deletable through the API: they are create-only, and any later changes should be made by an internal process.

Now that we know how our API is meant to be designed, we can focus on how to build this API to ensure it responds quickly and can scale in terms of its complexity.

Firstly, we will make the assumption that we are building an API-only Laravel application – so we will not need any api prefix. Let’s think about how we might register these routes, as this is often the first part of your application that sees problems. A busy routes file is hard to parse mentally, and the cognitive load is the first battle in any application.

If this API were going to be public facing, I would look into supporting a versioned API, in which case I would create a version directory and keep each main group in a dedicated file. However, we aren’t using versioning in this case, so we will organize the routes differently.

The first routes file we want to create is routes/api/accounts.php, which we can load from our routes/api.php:

Route::prefix('accounts')->as('accounts:')->middleware(['auth:sanctum', 'verified'])->group(
    base_path('routes/api/accounts.php'),
);

Each group loads its own routes and sets up the default middleware, prefix, and route-naming pattern. Our accounts route file will stay flat, with minimal grouping except where we want to look at sub-resources. This gives us a single place to look when trying to understand the routes, but it also means that anything and everything to do with accounts belongs in this file.

Route::get(
    '/',
    App\Http\Controllers\Accounts\IndexController::class,
)->name('index');

Our first route is the accounts index route, which shows all accounts for the authenticated user. Aside from the authentication routes, it is likely the first thing called through the API, so it is where I typically focus first. Tackling the most critical routes first is essential, both to unblock other teams and to flesh out the standards you want to follow within your application.

Now that we understand how we are routing our requests, we can think about how we want to process them. Where does the logic live, and how can we keep code duplication to a minimum?

I recently wrote a tutorial about how to use Eloquent effectively, which dives into query classes. This is my preferred approach, as it keeps code duplication to a minimum. I won’t repeat the reasoning here, since the previous tutorial covers it in detail, but I will walk through how to use the pattern in your application. Follow this approach only if it suits your needs.

The critical thing to remember is that the best way to get the most out of your API is to build it in a way that works for you and your team. Spending hours trying to adjust to a method that doesn’t feel natural will only slow you down in a way that won’t give you the benefit you are trying to achieve.

When creating a query class, you also create a corresponding interface and bind the implementation to it, so the contract can be injected into the controller. This isn’t a required step. However, it is me writing the tutorial – so what did you expect, really?

interface FilterForUserContract
{
    public function handle(Builder $query, string $user): Builder;
}

Then the implementation we want to use:

final class FilterAccountsForUser implements FilterForUserContract
{
    public function handle(Builder $query, string $user): Builder
    {
        return QueryBuilder::for(
            subject: $query,
        )->allowedIncludes(
            include: ['transactions'],
        )->where('user_id', $user)->getEloquentBuilder();
    }
}

This query class will get all accounts for the passed-through user, allowing you to include the transactions for each account optionally – then pass back the eloquent builder to add additional scopes where needed.
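The binding itself is not shown in this article, so here is a minimal sketch of how it might be registered in a service provider (the provider wiring is my assumption; the class names follow the examples above):

```php
// app/Providers/AppServiceProvider.php — hypothetical container wiring
// so controllers can type-hint FilterForUserContract in their constructors.
final class AppServiceProvider extends ServiceProvider
{
    public function register(): void
    {
        // Resolve the contract to our concrete query class.
        $this->app->bind(
            abstract: FilterForUserContract::class,
            concrete: FilterAccountsForUser::class,
        );
    }
}
```

With this in place, Laravel will inject FilterAccountsForUser wherever the contract is requested.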

We can then use this within our controller to query the accounts for the authenticated user, then return them within our response. Let’s look at how we might use this query to understand the available options.

final class IndexController
{
    public function __construct(
        private readonly Authenticatable $user,
        private readonly FilterForUserContract $query,
    ) {}

    public function __invoke(Request $request): Responsable
    {
        $accounts = $this->query->handle(
            query: Account::query()->latest(),
            user: $this->user->getAuthIdentifier(),
        );

        // return response here.
    }
}

At this point, our controller has an Eloquent builder that we will pass to the response, so make sure you call get or paginate when passing the data through. This leads us to the next point on my opinionated journey.

Responding is the primary responsibility of our API: we should respond quickly and efficiently so that the API feels fast and responsive to our users. How we respond can be split into two areas: the response class, and how the data is transformed for the response.

These two areas are Responses and API Resources. I will start with API Resources, as I care very much about them. API Resources abstract away your database structure and give you a way to transform the stored information into whatever shape is best consumed on the client side.

I use the JSON:API standard within my Laravel APIs, as it is excellent, well-documented, and widely used in the API community. Luckily, Tim MacDonald has created a fantastic package for creating JSON:API resources in Laravel, which I swear by in all of my Laravel applications. I recently wrote a tutorial on how to use this package, so I will only go into some detail here.

Let us start with the Account resource, which will have the relevant relationships and attributes set up. The package has been updated since my last tutorial, making relationships easier to define.

final class AccountResource extends JsonApiResource
{
    public $relationships = [
        'transactions' => TransactionResource::class,
    ];

    public function toAttributes(Request $request): array
    {
        return [
            'name' => $this->name,
            'balance' => $this->balance->getAmount(),
        ];
    }
}

We are keeping this super simple for now. We want to return the account name and balance, with an option to load in the transactions relationship.

Using these resources means that to access the name we would use data.attributes.name, which may take a while to get used to in your web or mobile applications, but you will get the hang of it soon enough. I like this approach, as we can separate relationships from attributes and extend each where needed.
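For illustration, a response for a single account under this structure might look roughly like the following (the id, values, and relationship payload are invented for the example):

```json
{
  "data": {
    "id": "account-id-here",
    "type": "accounts",
    "attributes": {
      "name": "Spending Account",
      "balance": 1150
    },
    "relationships": {
      "transactions": {
        "data": [
          { "id": "transaction-id-here", "type": "transactions" }
        ]
      }
    }
  }
}
```

The name lives under data.attributes.name, and related resources sit under data.relationships, which is what keeps attributes and relationships cleanly separated.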

Once our resources are filled out, we can focus on other areas, such as Authorization. This is a vital part of our API and should not be overlooked. Most of us have used Laravel's Gate before, via the Gate facade. However, I like injecting the Gate contract from the framework itself, mainly because I prefer dependency injection over facades when I get the chance. Let’s look at what this might look like in the StoreController for accounts.

final class StoreController
{
    public function __construct(
        private readonly Gate $access,
    ) {}

    public function __invoke(StoreRequest $request): Responsable
    {
        if (! $this->access->allows('store')) {
            // respond with an error.
        }

        // the rest of the controller goes here.
    }
}

Here we use the Gate contract exactly as we would the facade, as they are the same thing underneath. I use allows here, but you could use can or other methods. Focus on the authorization itself rather than how it is implemented; that is a minor detail for your application at the end of the day.
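The 'store' ability checked above has to be defined somewhere. The article does not show that definition, so here is a minimal hypothetical sketch, for example in a service provider's boot method (the rule itself is an invented placeholder):

```php
// Hypothetical gate definition backing the allows('store') check above.
Gate::define('store', static function (User $user): bool {
    // Placeholder rule: only verified users may create accounts.
    // Your real authorization logic depends on your domain.
    return $user->hasVerifiedEmail();
});
```

For anything model-specific you would more likely reach for a policy class, but a simple gate keeps this example small.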

So we know how we want the data to be represented in the API and how we want to authorize users in the application. Next, we can look at how we might handle write operations.

When it comes to our API, write operations are vital. We need to ensure they are as fast as they can be so that our API feels snappy.

You can write data in your API in many different ways, but my preferred approach is to dispatch background jobs and return quickly. This means you can worry about the logic around how things are created in your own time rather than your client's. The benefit is that your background jobs can still publish updates through websockets for a real-time feel.

Let’s look at the updated StoreController for accounts when we use this approach:

final class StoreController
{
    public function __construct(
        private readonly Gate $access,
        private readonly Authenticatable $user,
    ) {}

    public function __invoke(StoreRequest $request): Responsable
    {
        if (! $this->access->allows('store')) {
            // respond with an error.
        }

        dispatch(new CreateAccount(
            payload: NewAccount::from($request->validated()),
            user: $this->user->getAuthIdentifier(),
        ));

        // the rest of the controller goes here.
    }
}

We send our background job a payload as a Data Transfer Object, which will be serialized on the queue. We create this DTO from the validated request data and pass the user ID through as well, because we need to know who we are creating the account for.
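The NewAccount DTO itself is not shown in the article. A minimal sketch, assuming a plain readonly class with a hypothetical from() named constructor (packages such as spatie/laravel-data provide from() for you, in which case you would not write it by hand), might be:

```php
// Hypothetical DTO matching NewAccount::from($request->validated()) above.
final class NewAccount
{
    public function __construct(
        public readonly string $name,
    ) {}

    // Named constructor building the DTO from validated request input.
    public static function from(array $data): self
    {
        return new self(
            name: (string) $data['name'],
        );
    }

    // Used later when spreading the payload into Account::query()->create().
    public function toArray(): array
    {
        return ['name' => $this->name];
    }
}
```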

Following this approach, we pass valid, type-safe data through to create the model. In our endpoint tests, all we need to do is ensure the job is dispatched.

it('dispatches a background job for creation', function (string $string): void {
    Bus::fake();

    actingAs(User::factory()->create())->postJson(
        uri: action(StoreController::class),
        data: [
            'name' => $string,
        ],
    )->assertStatus(
        status: Http::ACCEPTED->value,
    );

    Bus::assertDispatched(CreateAccount::class);
})->with('strings');

We are testing here to ensure that we pass validation, get the correct status code back from our API, and then confirm that the right background job is dispatched.

After this, we can test the job in isolation, since it doesn’t need to be covered by our endpoint test. Now, how does this get written to the database? We use a Command class to write our data. I use this approach because using Action classes for everything gets messy: you end up with hundreds of action classes that are hard to scan when looking for a specific one in your directory.

As always, because I love to use Dependency Injection, we need to create the interface we will use to resolve our implementation.

interface CreateNewAccountContract
{
    public function handle(NewAccount $payload, string $user): Model;
}

We use the NewAccount DTO as the payload and pass the user ID through as a string. I typically pass it as a string because I use a UUID or ULID for the ID fields in my applications.

final class CreateNewAccount implements CreateNewAccountContract
{
    public function handle(NewAccount $payload, string $user): Model
    {
        return DB::transaction(
            callback: fn (): Model => Account::query()->create(
                attributes: [
                    ...$payload->toArray(),
                    'user_id' => $user,
                ],
            ),
        );
    }
}

We wrap our write in a database transaction so that we only commit to the database if the write succeeds, and can roll back and throw an exception if it fails.
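To connect the pieces, the CreateAccount job dispatched earlier can resolve the command out of the container and delegate to it. The job body below is my assumption (the article does not show it); the class names are the ones used throughout this tutorial:

```php
// Hypothetical queued job tying the dispatch in StoreController
// to the CreateNewAccount command class.
final class CreateAccount implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;

    public function __construct(
        public readonly NewAccount $payload,
        public readonly string $user,
    ) {}

    // Laravel injects the bound CreateNewAccountContract implementation
    // when the job is processed by a queue worker.
    public function handle(CreateNewAccountContract $command): void
    {
        $command->handle(
            payload: $this->payload,
            user: $this->user,
        );
    }
}
```

This keeps the job thin: serialization concerns live in the job, while the actual write stays in the command class, which can be tested in isolation.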

We have covered how to transform model data for our response, how to query and write data, as well as how we want to authorize users in the application. The final stage for building a solid API in Laravel is looking at how we respond as an API.

Most APIs fall short when it comes to responding, which is ironic, as it is perhaps the most essential part of an API. In Laravel there are multiple ways to respond, from helper functions to returning new instances of JsonResponse. I, however, like to build dedicated Response classes. These are similar to Query and Command classes: they reduce code duplication and are the most predictable way to return a response.

The first response class I create is a collection response, which I would use when returning the list of accounts owned by the authenticated user. I build a set of other responses in the same way, from single-model responses to empty responses and error responses.

class Response implements Responsable
{
    public function toResponse($request): JsonResponse
    {
        return new JsonResponse(
            data: $this->data,
            status: $this->status->value,
        );
    }
}

First we must create the base response that our response classes will extend, because they all respond in the same way: they return the data and the status code. So now, let us look at the collection response class itself.

final class CollectionResponse extends Response
{
    public function __construct(
        protected readonly JsonApiResourceCollection $data,
        protected readonly Http $status = Http::OK,
    ) {}
}

This is super clean and easy to implement moving forward, and you can widen the data property into a union type to be more flexible. Note that the properties are protected rather than private, so the parent Response class can read them.

final class CollectionResponse extends Response
{
    public function __construct(
        protected readonly Collection|JsonResource|JsonApiResourceCollection $data,
        protected readonly Http $status = Http::OK,
    ) {}
}

These are clean and easy to understand, so let us look at the final implementation for the IndexController for accounts.

final class IndexController
{
    public function __construct(
        private readonly Authenticatable $user,
        private readonly FilterForUserContract $query,
    ) {}

    public function __invoke(Request $request): Responsable
    {
        $accounts = $this->query->handle(
            query: Account::query()->latest(),
            user: $this->user->getAuthIdentifier(),
        );

        return new CollectionResponse(
            data: AccountResource::collection($accounts->paginate()),
        );
    }
}

Focusing on these critical areas allows you to scale your API in complexity without drowning in code duplication. They are also the areas I look at first when trying to figure out why a Laravel API is slow.

This is by no means an exhaustive tutorial or list of everything you need to focus on, but by following this somewhat short guide you can set yourself up for success moving forward.

Laravel News

The weapon that changed warfare

https://GunFreeZone.net/wp-content/uploads/2023/01/evan-matchlock-1.jpg

Matchlock .72 caliber . Shooting a .69 caliber round ball with a 180 grain charge of 1F.

It is a cumbersome animal: incredibly slow to load and shoot, and dangerous to manipulate, but so much fun to shoot. In the words of Leeloo: “Big Badaboom!”

 

Gun Free Zone

Adhesives

https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fbucketeer-e05bbc84-baa3-437e-9518-adb32be77984.s3.amazonaws.com%2Fpublic%2Fimages%2F15a5cb1e-d849-441e-8865-998c6b2d283d_400x400.webp

Once a week we’ll send out a page from Cool Tools: A Catalog of Possibilities. The tools might be outdated or obsolete, but the possibilities they inspire are new. Sign up here to get Tools for Possibilities a week early in your inbox.

Flexible epoxy

3M Scotch-Weld 3532

This is as close to “bombproof” as I have found a glue to be. It seems to stick to just about anything, although 3M says it’s for metals and plastics. I have used it for gluing D-rings – and other things – into my whitewater canoes.

The rings have been able to hold me boiling through big rapids, often upside-down. For this application the glue joint needs to be flexible and waterproof…and this stuff hasn’t ever failed me. How it is different from epoxy: Fills gaps. Flexes under stress without giving way. Sticks to smooth plastics like PVC or vinyl. Seems a LOT stronger than epoxies. You’ll have to find this in a specialty store or order it over the web. Shelf life is 1 year. — Fen Sartorius


Mixes up epoxy

3M Scotch-Weld EPX Applicator

I always used to buy epoxy locally in disposable dispensers that are supposed to dispense equal ratios of the components. The dispensers never work that well: one side always starts to move first, and then to get a reasonably equal mix I have to mix up a lot more than I need.

The 3M duo-pack adhesives are sold separately from the dispenser. Because the dispenser is not disposable, it can be a decently built tool, like a caulk gun for epoxy.

The way it works is that you slip on the adhesive cartridge. The applicator has a plunger that pushes up the adhesive cartridge. Think caulk gun. The epoxy comes in double tubes like a doubled tube of caulk. When an adhesive has a different mixing ratio the tubes in the cartridge have different diameters. And there is a different plunger that fits in the tube. The supported mixing ratios are 1:1, 1:2 and 1:10 because those are the ratios of adhesives available. When you buy the system you get the first two plungers, but the 1:10 plunger is sold separately as it is used only for DP-8005 and DP-8010, I think. Just like a caulk gun you can, but you need not remove the adhesive cartridge between uses. The gun stays clean. There is no need to clean it. (Unlike a caulk gun, the adhesive doesn’t leak out the back and get on the gun.)

In fact, if you’re not so worried about waste there’s even a further convenience: static mixing nozzles. These nozzles attach to the end of the epoxy tube and do all the mixing for you so that it really works like a caulk gun: what comes out is ready to use, completely mixed epoxy.

But even if you don’t use the somewhat wasteful mixing nozzles you can still use the gun to extrude the correct ratio mix of 3M adhesive products and then hand mix. I have been able to mix up just the amount of epoxy I need when with the old system I would have mixed ten times what I needed. (No exaggeration here.)

I first got this system because I was trying to glue zinc-plated magnets to polyethylene. I tried regular epoxy. It doesn’t stick well to either one of these materials. There are two adhesives that I think are of particular note in the 3M lineup.

The DP-190 (which I have only used a tiny bit) is supposed to stick to everything except the “low surface energy” plastics. I saw that it is recommended for use with the zinc-plated rare earth magnets (by the magnet sellers). The DP-8005 is designed to stick to low surface energy plastics. I got it for my application.

I also got a small mat made out of teflon because nothing is supposed to stick to that. This was great for repairs using epoxy. I repaired something and laid it on the teflon and it peeled right off after it was cured.

According to 3M, epoxy shelf life is less than a couple years, so you don’t want to buy a lifetime supply at any given time. The shelf life of DP-8005 is only 6 months. The shelf life of the Scotch-Weld Two Part Urethane is 1 year. — Adrian M.

McMaster-Carr sells a very similar product much cheaper, half the cost, for $23. It does not use 3M cartridges. I have had good experiences with Lord adhesives that this gun does use. — KK


Best source for magnets

SuperMagnetMan, supermagnetman.net

I have been buying Neodymium Iron Boron (NIB) super magnets for years. Back then, Wondermagnets was the only source for hobbyists, but things have changed. For the past five years, I have been ordering my magnets from “Mr. George the SuperMagnetMan,” unequivocally the best source today. His prices are the best on the net. His selection is vast: no one else has the stock he has or the variations in size of commonly available shapes. This is no exaggeration or hype. He’s got stuff you can’t get anywhere else and is constantly adding new items, like axially- and diametrically-magnetized NIB wedding rings and radially-magnetized ring magnets. He has magnets so large they are dangerous (fortunately he has put videos on YouTube that show you how to safely handle these monsters — with large leather welding gloves and a special wooden wedge and a 2×4!). He also sells magnetic hooks, pyramid shaped magnets, magnetic jewelry, teflon coated magnets, heart, star, and triangle magnets. You can even get powdered magnets that act like iron filings on steroids! You name it he’s got it. Most magnets are N45-N50 grade, the highest strength you can buy.

Some of the products I have ordered are the magnet powders, radially-magnetized ring magnet, various size sphere magnets, conical magnets, large rectangular magnets, cubes, and many others. Shipping charges are reasonable. Service is great. One time I ordered a bunch of stuff and never completely checked what I got. I went to use one of the magnets months later and found out it was the wrong size. He sent me the right size in the mail a few days after I emailed him.

Mr. George seems like a pretty cool dude, too. An electrical engineer, Mr. George develops magnet products himself and caters to other engineers, inventors, and hobbyists. He can have custom magnets made to order. He has also put up a series of educational videos on YouTube and has done a lot of work with kids. He has a saying, something like, “Give a kid a magnet and you have a friend for life.” — Laral


A strong hold on brake fluid

Seal-All Adhesive & Sealant

Like other adhesives, this one can be used on metals, glass, wood and leather, but it is the only household product I have ever used that will withstand constant exposure to gasoline and/or brake fluid. J-B-WELD will work in some cases, but you have to thoroughly clean and dry the surface or it will fail. Seal-All will seal a leak in a master cylinder-reservoir (non-pressure side) even if you apply it over brake fluid that has already wept out onto the surface. I have also used it to seal an old Coleman fuel tank, and also a weeping fuel fitting on the bottom of a gasoline tank on my bike. This stuff is not what I would consider a toolbox item, but I ride my bike far from home on occasion, and this is one of the items I like to keep in the “just-in-case” bag. — Jackie Gregory


Squeezes tubes dry

Tube-Grip Dispensing Plier

This Tube-Grip easily squeezes tubes of adhesive, caulk, sealant, etc. with more precision, less waste, and better finished results than other methods. The learning curve for starting and stopping applications is short. Tubes are squeezed beginning from the tube’s bottom seam, and 96% product-use efficiency is claimed. Very thrifty.

Mechanical advantage is claimed to be ten times greater than squeezing or pinching by hand. Less fatigue, more control. You can concentrate on product flow because less physical effort is used during application. Tube squeezers for toothpaste and art paint are a different category. Some caulking projects are too small for standard tubes of caulk, or are in confined areas where a large gun won’t fit.

Tent-seam sealing with drippy sealer is controlled better with whole-arm movements and a hand grip than with finger squeezing. I’ve used this 2” dispensing plier for at least 5 years and would not consider many squeeze-tube projects without it. A 2 1/2” model also exists. — David McKenzie

Cool Tools