Lunar
An open-source package that brings the power of modern headless e-commerce functionality to Laravel.
Laravel News Links
The Most Important MySQL Setting
If we had to select the single most important MySQL setting, that is, if we were given a freshly installed MySQL or Percona Server for MySQL and could tune only one variable, which one would it be?
It has always bothered me that “out-of-the-box” MySQL performance is subpar: if you install MySQL or Percona Server for MySQL on a new server and do not “tune it” (as in, change default values for configuration settings), it just won’t be able to make the best use of the server’s available resources, particularly memory.
To illustrate this, I ran the Sysbench-TPCC synthetic benchmark against two different GCP instances running a freshly installed Percona Server for MySQL 8.0.31 on CentOS 7, both of them spec’d with four vCPUs but with the second one (server B) having a tad over twice as much memory as the reference one (server A).
Sysbench ran on a third server, which I’ll refer to as the application server (APP). I’ve used a fourth instance to host a PMM server to monitor servers A and B and used the data collected by the PMM agents installed on the database servers to compare performance. The table below summarizes the GCP instances used for these tests:
| Server identifier | Machine type | vCPU | Memory (GB) |
|---|---|---|---|
| A | n1-standard-4 | 4 | 15 |
| B | n2-highmem-4 | 4 | 32 |
| APP | n1-standard-8 | 8 | 64 |
| PMM | e2-medium | 2 | 4 |
Sysbench-TPCC has been executed with the following main options:
- --threads=256
- --tables=10
- --scale=100
- --time=3600
It generated a dataset with the following characteristics:
mysql> SELECT
    ->   ROUND(SUM(data_length+index_length)/1024/1024/1024, 2) as Total_Size_GB,
    ->   ROUND(SUM(data_length)/1024/1024/1024, 2) as Data_Size_GB,
    ->   ROUND(SUM(index_length)/1024/1024/1024, 2) as Index_Size_GB
    -> FROM information_schema.tables
    -> WHERE table_schema='sbtest';
+---------------+--------------+---------------+
| Total_Size_GB | Data_Size_GB | Index_Size_GB |
+---------------+--------------+---------------+
|         92.83 |        77.56 |         15.26 |
+---------------+--------------+---------------+
One of the metrics measured by Sysbench is the number of queries per second (QPS), which is nicely represented by the MySQL Questions (roughly, “the number of statements executed by the server”) graph in PMM (given these servers are not processing anything other than the Sysbench benchmark and PMM monitoring queries):
[PMM “MySQL Questions” graphs: Server A (4 vCPU, 15G RAM) vs. Server B (4 vCPU, 32G RAM)]
Server A produced an average of 964 QPS for the one-hour period the test was run, while Server B produced an average of 1520 QPS. The throughput didn’t double but increased by 57%. Are these results good enough?
I’ll risk “adding insult to injury” and do the unthinkable of comparing apples to oranges. Here’s how the same test performed when running Percona Distribution for PostgreSQL 14 on these same servers:
| | Queries: reads | Queries: writes | Queries: other | Queries: total | Transactions | Latency (95th, ms) |
|---|---|---|---|---|---|---|
| MySQL (A) | 1584986 | 1645000 | 245322 | 3475308 | 122277 | 20137.61 |
| MySQL (B) | 2517529 | 2610323 | 389048 | 5516900 | 194140 | 11523.48 |
| PostgreSQL (A) | 2194763 | 2275999 | 344528 | 4815290 | 169235 | 14302.94 |
| PostgreSQL (B) | 2826024 | 2929591 | 442158 | 6197773 | 216966 | 9799.46 |
| | QPS (avg) | vs. MySQL (A) |
|---|---|---|
| MySQL (A) | 965 | 100% |
| MySQL (B) | 1532 | 159% |
| PostgreSQL (A) | 1338 | 139% |
| PostgreSQL (B) | 1722 | 178% |
For a user who does not understand how important it is to tune a database server, or doesn’t know how to do it, and just experiments with these two RDBMS offerings, PostgreSQL seems to have the edge when it comes to out-of-the-box performance. Why is that?
MySQL comes pre-configured to be conservative instead of making the most of the resources available in the server. That’s a heritage of the LAMP model, when the same server would host both the database and the web server.
To be fair, that is also true of PostgreSQL; it hasn’t been tuned either, and it can also perform much better. But, by default, PostgreSQL “squeezes” the juice out of the server harder than MySQL does, as the following table of server resource usage indicates:
[PMM resource usage graphs (CPU, Memory, IO) for MySQL and PostgreSQL on servers A and B]
Data caching
To ensure durability, the fourth and last property of ACID-compliant databases such as MySQL and PostgreSQL, data must be persisted to “disk” so it remains available after the server is restarted. But since retrieving data from disk is slow, databases tend to work with a caching mechanism to keep as much hot data (the bits and pieces that are most often accessed) as possible in memory.
In MySQL, considering the standard storage engine, InnoDB, the data cache is called Buffer Pool. In PostgreSQL, it is called shared buffers. A curious similarity is that both the Buffer Pool and the shared buffers are configured with 128M by default.
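You can confirm that default on a fresh install with a quick query on the MySQL side (the value is stored in bytes):

-- Inspect the current Buffer Pool size (bytes converted to MB):
SELECT @@innodb_buffer_pool_size / 1024 / 1024 AS buffer_pool_mb;

SHOW VARIABLES LIKE 'innodb_buffer_pool_size';

On a stock install, this reports 134217728 bytes, i.e., 128M.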
On the other hand, one of the big differences in their implementations stems from the fact that MySQL (InnoDB) can load data (pages) from disk straight into the Buffer Pool’s memory area. PostgreSQL’s architecture uses a different approach: as is the case for the majority of applications, it relies on the file system (FS) cache to load a page from disk to memory and then makes a copy of that page in the shared buffers’ memory area.
I have no intention of discussing the pros and cons of each RDBMS’s caching implementation; I’m explaining this only to highlight how, in practice, they are configured in opposite ways: when we tune MySQL, we tend to allocate most of the memory to the Buffer Pool (let’s simplify and say 80% of it), whereas on PostgreSQL we tend to do the inverse and allocate just a small portion of it (say, 20%). The reasoning is that since PostgreSQL relies on the FS cache, it pays off to let free memory be used naturally for the FS cache, which ends up working as a sort of second-level cache for PostgreSQL: there’s a good chance that a page evicted from the shared buffers can still be found in the FS cache, and copying a page from one memory area to another is super fast. This explains, in part, how PostgreSQL performed better out of the box for this test workload.
Now that I got your attention, I’ll return the focus to the main subject of this post. I’ll make sure to do a follow-up one for PostgreSQL.
Just increase the Buffer Pool size
I wrote above that “we tend to allocate most of the memory to the Buffer Pool (let’s simplify and say 80% of it)”. I didn’t make up that number; it’s in the MySQL manual. It’s also probably the most well-known MySQL rule-of-thumb. If you want to learn more about it, Jay Janssen wrote a nice blog post (innodb_buffer_pool_size – Is 80% of RAM the right amount?) dissecting it a few years ago. He started that post with the following sentence:
It seems these days if anyone knows anything about tuning InnoDB, it’s that you MUST tune your innodb_buffer_pool_size to 80% of your physical memory.
There you have it: if one could only tune a single MySQL variable, that must be innodb_buffer_pool_size. In fact, I once worked with a customer that had added a slider button to their product’s GUI to set the size of the Buffer Pool on the adjacent MySQL server and nothing else.
Realistically, this has been the number one parameter to tune on MySQL because increasing the data cache size makes a big difference for most workloads, including the one I’ve used for my tests here.
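On MySQL 8.0 this change can even be made online and persisted across restarts in a single statement; for example, to set the 10G value I eventually settled on for Server A:

-- Resize the Buffer Pool online; SET PERSIST also records the value
-- in mysqld-auto.cnf so it survives a restart (MySQL 8.0):
SET PERSIST innodb_buffer_pool_size = 10 * 1024 * 1024 * 1024;

(InnoDB performs the resize in chunks, so it may take a moment to complete on a busy server.)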
But the 80% rule just doesn’t fit all cases. On Server A, 80% of 14.52G is roughly 12G, and allocating that much memory to the Buffer Pool proved to be too much, with Linux’s Out-Of-Memory (OOM) killer terminating the mysqld process:
[Fri Mar 10 16:24:49 2023] Killed process 950 (mysqld), UID 27, total-vm:16970700kB, anon-rss:14226528kB, file-rss:0kB, shmem-rss:0kB
That’s the blacked-out mark in the graph of the table below. I had to settle for a Buffer Pool size of 10G (69% of memory), which left about 4.5G for the OS as well as other memory-consuming parts of MySQL (such as connections and temporary tables). That’s a good reminder that we don’t simply tune MySQL for the server it is running on; we need to take the workload being executed into (high) consideration too.
For Server B, I first tried a Buffer Pool size of 27G (84% of memory), but that also proved too much. I settled on 81%, which was good enough for the task at hand. The results are summarized in the table below.
[PMM graphs for MySQL on servers A and B, comparing the default vs. tuned Buffer Pool size]
As we can see above, throwing more memory (as in increasing the data cache size) just does not cut it beyond a certain point. For example, if the hot data can fit in 12G, then increasing the Buffer Pool to 26G won’t make much of a difference. Or, if we are hitting a limit in writes, we need to look at other areas of MySQL to tune.
Dedicated server
MySQL finally realized almost no one keeps a whole LAMP stack running on a single server anymore; we have long been surfing the virtualization wave (to keep it broad). Most production environments have MySQL running on its own dedicated server/VM/container, so it makes no sense to limit the Buffer Pool to only 128M by default anymore.
MySQL 8.0 introduced the variable innodb_dedicated_server, which configures not only the Buffer Pool size (innodb_buffer_pool_size) according to the server’s available memory but also the redo log space (now configured through innodb_redo_log_capacity), which is InnoDB’s transaction log and plays an important role in data durability and in the checkpointing process, which in turn influences… write throughput. Oh, and the InnoDB flush method (innodb_flush_method) as well.
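Note that innodb_dedicated_server itself is not dynamic: it has to be set in the configuration file (or on the command line) before the server starts. Once the server is up, a query along these lines shows what the automatic configuration chose:

-- Verify the values picked by innodb_dedicated_server
-- (innodb_redo_log_capacity requires MySQL/Percona Server 8.0.30+):
SELECT @@innodb_dedicated_server AS dedicated,
       @@innodb_buffer_pool_size / 1024 / 1024 / 1024 AS buffer_pool_gb,
       @@innodb_redo_log_capacity / 1024 / 1024 / 1024 AS redo_log_gb,
       @@innodb_flush_method AS flush_method;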
This option for Enabling Automatic Configuration for a Dedicated MySQL Server is a bit more sophisticated than a rule of thumb: it employs a simple algorithm to define the values for the Buffer Pool size and the redo log space. It configured my test servers as follows:
| | Server A | Server B |
|---|---|---|
| innodb_buffer_pool_size | 11G | 24G |
| innodb_redo_log_capacity | 9G | 18G |
| innodb_flush_method | O_DIRECT_NO_FSYNC | O_DIRECT_NO_FSYNC |
The default values for innodb_redo_log_capacity and innodb_flush_method being used so far were, respectively, 100M and FSYNC. Without further ado, here are the results for the three test rounds for each server side-by-side for easier comparison:
Server A:

[PMM resource usage graphs for Server A across the default, tuned Buffer Pool, and dedicated-server test rounds]

Server B:

[PMM resource usage graphs for Server B across the default, tuned Buffer Pool, and dedicated-server test rounds]
Note how CPU usage is now close to maximum usage (despite a lot of the time being spent in iowait due to the slow disk, particularly for the smaller server).
With the dedicated server (third “peak” in the graphs below) using very similar Buffer Pool values to my Buffer Pool-tuned test (second “peak”), the much larger redo log space coupled with the O_DIRECT flush method (with “no fsync”) allowed for much-improved write performance:
[PMM graphs for servers A and B showing the three test rounds as consecutive “peaks”]
It’s probably time to change the default configuration and consider every new MySQL server a dedicated one.
NOTE: I hit a “limitation” on my very first Sysbench run:
Running the test with following options:
Number of threads: 256
(...)
FATAL: error 1040: Too many connections
MySQL caps the number of concurrent connections it accepts; by default, the limit is 151. I could have run my tests with 128 Sysbench threads, but that would not have driven as much load into the database as I wanted, so I raised max_connections to 300.
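For the record, on MySQL 8.0 that is a one-statement change that also survives restarts:

SET PERSIST max_connections = 300;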
Technically, this means I cheated since I have modified two MySQL settings instead of one. In my defense, max_connections doesn’t influence the performance of MySQL; it just controls how many clients can connect at the same time, with the intent of limiting database activity somewhat. And if your application attempts to surpass that limit, you get a blatant error message like the one above.
BTW, I also had to increase the exact same setting (max_connections) on PostgreSQL to run my initial provocative test.
The goal of this post was to encourage you to tune MySQL, even if just one setting. But you shouldn’t stop there. If you need to get the most out of your database server, consider using Percona Monitoring and Management (PMM) to observe its performance and find ways to improve it.
Percona Monitoring and Management is a best-of-breed open source database monitoring solution. It helps you reduce complexity, optimize performance, and improve the security of your business-critical database environments, no matter where they are located or deployed.
Percona Database Performance Blog
Laravel: 9 Typical Mistakes Juniors Make
Some time ago I made a YouTube series called Code Reviews. From that series and other reviews, I’ve collected the 9 most common repeating mistakes Laravel beginners make.
Not all of these are really serious flaws; most of them are just not the most effective ways to code. Which raises the question: why use a framework like Laravel and not actually use its core features in full?
So, in no particular order…
Mistake 1. Not Using Route Groups
Where possible, combine routes into groups. For example, say you have routes like this:
Route::get('dashboard', [HomeController::class, 'index'])->name('dashboard')->middleware(['auth']);
Route::resource('donation', DonationController::class)->middleware(['auth']);
Route::resource('requisition', RequisitionController::class)->middleware(['auth']);
Route::name('admin.')->prefix('admin')->group(function () {
    Route::view('/', 'admin.welcome')->middleware(['auth', 'admincheck']);
    Route::resource('donor', DonorController::class)->middleware(['auth', 'admincheck']);
    Route::resource('details', OrganisationDetailController::class)->middleware(['auth', 'admincheck']);
});
Here, all routes have the auth middleware, and three routes also check whether the user is an admin. But those middlewares are repeated for each route.

It would be better to put all routes into a group that applies the auth middleware, and then, inside it, have another group for the admin routes. This way, when a developer opens the routes file, they will immediately know which routes are only for authenticated users.
Route::middleware('auth')->group(function () {
    Route::get('dashboard', [HomeController::class, 'index'])->name('dashboard');
    Route::resource('donation', DonationController::class);
    Route::resource('requisition', RequisitionController::class);

    Route::name('admin.')->prefix('admin')->middleware('admincheck')->group(function () {
        Route::view('/', 'admin.welcome');
        Route::resource('donor', DonorController::class);
        Route::resource('details', OrganisationDetailController::class);
    });
});
Read more
Mistake 2. Not Using Route Model Binding
Often I see beginners manually searching for data in the controller, even when Route Model Binding is specified correctly in the routes. For example, in your routes you have:
Route::resource('student', StudentController::class);
So here it’s even a resource route. But I still see some beginners write Controller code like this:
public function show($id)
{
    $student = Student::findOrFail($id);

    return view('dashboard/student/show', compact(['student']));
}
Instead, use Route Model Binding, and Laravel will find the Model for you:
public function show(Student $student)
{
    return view('dashboard/student/show', compact(['student']));
}
Read more
Mistake 3. Too Long Eloquent Create/Update Code
When saving data into the DB, I have seen people write code similar to this:
public function update(Request $request)
{
    $request->validate(['name' => 'required']);

    $user = Auth::user();
    $user->name = $request->name;
    $user->username = $request->username;
    $user->mobile = $request->mobile;
    // Some other fields...
    $user->save();

    return redirect()->route('profile.index');
}
Instead, it can be written more briefly, in at least two ways.

First, in this example, you don’t need to assign Auth::user() to a $user variable. The first option could be:
public function update(Request $request)
{
    $request->validate(['name' => 'required']);

    auth()->user()->update($request->only([
        'name',
        'username',
        'mobile',
        // Some other fields...
    ]));

    return redirect()->route('profile.index');
}
The second option is to put the validation into a Form Request class. Then, in the update() method, you just need to pass $request->validated().
public function update(ProfileRequest $request)
{
    auth()->user()->update($request->validated());

    return redirect()->route('profile.index');
}
See how much shorter the code is?
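For completeness, here is a minimal sketch of what that ProfileRequest class could look like; the exact rules are an assumption, since they aren’t shown above:

class ProfileRequest extends FormRequest
{
    public function authorize(): bool
    {
        return true;
    }

    public function rules(): array
    {
        // Hypothetical rules; match these to your actual profile fields.
        return [
            'name' => 'required',
            'username' => 'required',
            'mobile' => 'required',
        ];
    }
}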
If you want to dive deeper into Eloquent, I have a full course: Eloquent: The Expert Level.
Mistake 4. Not Naming Things Properly
Many times, beginners name things however they want, without thinking about the other developers who will read their code in the future.

For example, they shorten variable names: instead of $data, they call it $d. Always use proper naming. For example:
Route::get('/', [IndexController::class, 'show'])
->middleware(['dq'])
->name('index');
Route::get('/about', [IndexController::class, 'about'])
->middleware(['dq'])
->name('about');
Route::get('/dq', [IndexController::class, 'dq'])
->middleware(['auth'])
->name('dq');
What are this middleware and the Index Controller method called dq? Well, in this example, if we go into app/Http/Kernel.php to find this middleware, we find something like this:
class Kernel extends HttpKernel
{
    // ...

    protected $routeMiddleware = [
        'auth' => \App\Http\Middleware\Authenticate::class,
        'admin' => \App\Http\Middleware\EnsureAdmin::class,
        'dq' => \App\Http\Middleware\Disqualified::class,
        'inprogress' => \App\Http\Middleware\InProgress::class,
        // ...
    ];
}
It doesn’t matter what’s inside this middleware; from the name of the middleware file, it is Disqualified. So everywhere, instead of dq, it should be called disqualified. This way, if other developers join the project, they will have a better understanding.
Mistake 5. Too Big Controllers
Quite often I see juniors writing huge Controllers with all possible actions in one method:
- Validation
- Checking data
- Transforming data
- Saving data
- Saving more data in other tables
- Sending emails/notifications
- …and more
That could all be in one store() method, for example:
public function store(Request $request)
{
    $this->authorize('user_create');

    $userData = $request->validate([
        'name' => 'required',
        'email' => 'required|unique:users',
        'password' => 'required',
    ]);

    $userData['start_at'] = Carbon::createFromFormat('m/d/Y', $request->start_at)->format('Y-m-d');
    $userData['password'] = bcrypt($request->password);

    $user = User::create($userData);
    $user->roles()->sync($request->input('roles', []));

    Project::create([
        'user_id' => $user->id,
        'name' => 'Demo project 1',
    ]);
    Category::create([
        'user_id' => $user->id,
        'name' => 'Demo category 1',
    ]);
    Category::create([
        'user_id' => $user->id,
        'name' => 'Demo category 2',
    ]);

    MonthlyReport::where('month', now()->format('Y-m'))->increment('users_count');

    $user->sendEmailVerificationNotification();

    $admins = User::where('is_admin', 1)->get();
    Notification::send($admins, new AdminNewUserNotification($user));

    return response()->json([
        'result' => 'success',
        'data' => $user,
    ], 200);
}
It’s not necessarily wrong, but it becomes very hard for other developers to read quickly in the future. And what’s hard to read becomes hard to change and to fix bugs in.
Instead, Controllers should be shorter: they should just take the data from routes, call some methods, and return the result. All the logic for manipulating data should live in classes specifically suited for it (see the sketch after this list):
- Validation in Form Request classes
- Transforming data in Models and/or Observers
- Sending emails in events/listeners put into the queue
- etc.
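As a rough illustration (the StoreUserRequest and UserService names here are hypothetical, not from the original example), the giant store() method above could shrink to something like this:

public function store(StoreUserRequest $request)
{
    $this->authorize('user_create');

    // Validation lives in StoreUserRequest; the demo records, counters,
    // and notifications live in the service, observers, and queued listeners.
    $user = $this->userService->create($request->validated());

    return response()->json([
        'result' => 'success',
        'data' => $user,
    ], 200);
}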
I have an example of such transformation of a typical Controller method in this article: Laravel Structure: Move Code From Controller to… Where?
There are various approaches to structuring the code, but what should be avoided is one huge method responsible for everything.
Mistake 6. N+1 Eloquent Query Problem
By far the number one typical reason for the poor performance of a Laravel project is the structure of its Eloquent queries. Specifically, the N+1 query problem is the most common: running hundreds of SQL queries on one page definitely takes a lot of server resources.
And it’s relatively easy to spot this problem in simple examples like this:
// Controller not eager loading Users:
$projects = Project::all();

// Blade:
@foreach ($projects as $project)
    <li>{{ $project->name }} ({{ $project->user->name }})</li>
@endforeach
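For this simple case, the fix is to eager load the relationship in the controller, collapsing the N per-project user queries into one:

// Controller eager loading Users: 2 queries total instead of N+1.
$projects = Project::with('user')->get();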
But real-life examples get more complicated, and the number of queries may be “hidden” in accessors, package queries, and other unpredictable places.
Also, typical junior developers don’t spend enough time testing their application with a lot of data. It works for them with a few database records, so they don’t go the extra mile to simulate future scenarios where their code would actually cause performance issues.
Mistake 7. Breaking MVC Pattern: Logic in Blade
Whenever I see a @php directive in a Blade file, my heart starts beating faster.
See this example:
@php
    $x = 5;
@endphp
Except for very (VERY) rare scenarios, all the PHP code for getting the data should be executed before the data is shown in Blade.
MVC architecture was created for a reason: that separation of concerns between Model, View and Controller makes it much more predictable where to search for certain code pieces.
And while the M and C parts can be debated (whether to store the logic in the Model, in the Controller, or in separate Service/Action classes), the V layer of Views is kinda sacred. The golden rule: views should not contain logic. In other words, views are only for presenting the data, not for transforming or calculating it.
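A tiny sketch of that rule (the Order example here is invented for illustration): do the math in the controller and hand the view a ready value.

// Controller: calculate before rendering.
public function show(Order $order)
{
    $total = $order->items->sum(fn ($item) => $item->price * $item->quantity);

    return view('orders.show', compact('order', 'total'));
}

{{-- Blade: presentation only --}}
<p>Total: {{ $total }}</p>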
The origin of this comes from the fact that Views could be handed to a front-end HTML/CSS developer, who could make the necessary styling changes without needing to understand any PHP code. Of course, in real life that separation rarely happens in teams, but it’s a noble goal from a pattern that predates Laravel and even PHP.
For most mistakes in this list I have a “read more” list of links, but here I have nothing much to add. Just don’t store logic in Views, that’s it.
Mistake 8. Relationships: Not Creating Foreign Keys
Relationships between tables are created on two levels: you need to create the related field and then a foreign key. I see many juniors forget the second part.
Have you ever seen something like this in a migration?
$table->unsignedBigInteger('user_id');
On the surface, it looks ok. And it actually does the job, with no bugs. At first.
Let me show you what happens if you don’t put constrained() or references() in that migration file.
A foreign key is the mechanism for restricting related operations at the database level: when you delete the parent record, you may choose what happens to the children: delete them too, or restrict the parent deletion in the first place.
So, if you create just an unsignedBigInteger() field, without the foreign key, you’re allowing your users to delete the parent without any consequences. The children stay in the database even though their parent no longer exists.
You can also watch my video about it: Laravel Foreign Keys: How to Deal with Errors
Mistake 9. Not Reading The Documentation
I want to end this list with the elephant in the room. For many years, my most popular articles and tweets have been the ones with information taken from the documentation, almost word for word.
Over the years, I’ve realized that people don’t actually read the documentation in full, only the parts most relevant to them.
So many times, developers surprised me by not knowing the obvious features that were in the docs.
Junior developers learn mostly from ready-made tutorials or courses, which is fine, but reading the official docs should be a regular activity.
Laravel News Links
PlanetScale Database Migrations for Laravel
This community PlanetScale package for Laravel adds an artisan pscale:migrate command to your Laravel applications. The command helps you manage database migrations using the PlanetScale API, a process which varies slightly from using the built-in migrate command.

During a deployment, you’d run the following command instead of migrate; it does everything necessary to update your database’s schema:
php artisan pscale:migrate
Why is this needed?
You might wonder why this command is needed instead of directly using the migrate command.

According to the package’s readme, PlanetScale handles migrations in a different way than you’d typically see with databases:
PlanetScale has a lot of advantages when using it as your application’s production database. However, it handles your database and schema migrations in a somewhat unusual way.
It uses branches for your database. A branch can be production or development…
This package uses PlanetScale’s Public API to automate the process of creating a new development branch, connecting your app to the development branch, running your Laravel migrations on the development branch, merging that back into your production branch, and deleting the development branch.
To get started with this package, check out the package setup instructions on GitHub at x7media/laravel-planetscale.
Related:
Speaking of PlanetScale and databases, Aaron Francis published MySQL for Developers. We’d highly recommend you check that out to improve your database skills.
Laravel News
A Collection of Fun Databases For Programming Exploration
Longtime Slashdot reader Esther Schindler writes: When you learn a new tool/technology, you need to create a sample application, which cannot use real in-house data. Why not use something fun for the sample application’s data, such as a Star Wars API or a data collection about World Cup contests? Esther Schindler, Slashdot user #16185, assembled a groovy collection of datasets that may be useful but also may be a source of fascinating internet rabbit holes. For those interested in datasets, Esther also recommends the Data is Plural newsletter and the website ResearchBuzz, which shares dataset descriptions as well as archive-related news and tools.
"Google Research maintains a search site for test datasets, too, if you know what you’re looking for," adds Esther. There’s also, of course, Kaggle.com.
Read more of this story at Slashdot.
Slashdot
Everything You Can Test in Your Laravel Application
Christoph Rumpel has an excellent guide, Everything You Can Test in Your Laravel Application, of scenarios you’ll likely need to test on real applications.
The post Everything You Can Test in Your Laravel Application appeared first on Laravel News.
Join the Laravel Newsletter to get Laravel articles like this directly in your inbox.
Laravel News
7 Best Belly Band Holsters for Concealed Carry & Working Out
Belly bands exist in a weird space where they are not only holsters but an entire support system unto themselves.
In the last few years, belly bands have innovated and expanded outside of cheap neoprene fabric.
If you’ve used a belly band before and decided it’s not for you, I get it. I was there too. However, times and holsters have changed.
Today we will look at the most innovative, useful, and downright comfy belly bands on the market.
Belly bands may still not be your bag, but let’s take a peek at the best belly bands out there so you can make an informed decision about your carry options.
Summary of Our Top Picks

- Best Classic Belly Band: Galco Underwraps 2.0 – OG style, easy to use, lots of pockets for accessories, can be worn strong side IWB, AIWB, or cross-draw
- Best for EDC: Blackhawk Stache N.A.C.H.O. – Tons of pockets to stow EDC gear, can use your own holster, versatile
- Best Hybrid Holster: Crossbreed Modular Belly Band 2.0 – Material offers breathability and comfort even in hot months, Kydex shells keep the gun secure
- Best Integrated Holster: Alien Gear Low-Pro Belly Band – Uses an integrated polymer holster, low-profile, and stays in place pretty well
Why Are Belly Bands Useful?
Belly bands are useful in a number of situations. I like a good belly band when I’m in a situation where a belt isn’t viable. When I exercise, I typically turn to a belly band.
I can carry and conceal it comfortably with my running shorts and an ancient T-shirt.
There aren’t many situations where I can’t wear a belt, but a belly band is a way to go when I can’t. I say that as a dude, but the dudettes in the audience might have plenty of situations where their style of clothing clashes with a belt.
On top of that, there are deep concealment requirements that might depend on your choice of clothing. This includes formal wear and a dislike of tuckable holsters.
A belly band allows you to carry in numerous unconventional clothing options and in unusual and usual positions if necessary.
Belly bands might not be an everyday carry option for everyone, but they are an efficient and effective tool to keep in the box.
7 Best Belly Bands for CCW & Exercising
1. Galco Underwraps 2.0
The Underwraps 2.0 is a classic belly band design and one of the few classic designs I’d recommend. It’s almost more of a Batman utility belt than a belly band.
It’s chock full of pockets. First, it has a left and right-side leather holster pocket. You could carry two guns, but this seems more aimed at accommodating left and right-handed shooters than going akimbo.
The leather holster pockets come in different sizes, so ensure you have the right size pocket for your gun. They range from pocket pistols to the mighty CZ P-09.
Outside the leather holster pockets are two extra pockets for accessories. You can dump your spare magazines, a light, a knife, a tourniquet, or whatever else and carry it in the Underwraps 2.0.
These are elastic pockets and not leather, so they can fit a wider variety of items.
The main downside is that the leather pockets are not designed to accommodate guns with optics. You are stuck with iron sights, and if you’re like me, you might not have a purely iron sight carry gun anymore.
The Galco Underwraps 2.0 wraps your midsection in a wide band secured by rough-and-tough hook and loop. It’s a rugged design that can be positioned for strongside, cross-draw, or appendix carry.
If you want a classic design, the Galco Underwraps 2.0 is one of the best.
2. Blackhawk Stache N.A.C.H.O.
The Blackhawk N.A.C.H.O. combines a conventional belly band with a detached holster. It’s designed to work with the Blackhawk Stache series of concealment holsters and accessories, but it will likely work with any modern Kydex appendix rig.
N.A.C.H.O. stands for Non-Conventional Adaptive Carry Holster Option.
The N.A.C.H.O. has a 1.5-inch scuba webbing integral holster mounting section. You strap the clip of the holster over the 1.5-inch integral belt. Its design accommodates guns in the Glock 19 size range and smaller.
Since you bring your own holster, you get the retention, proper fit, and trigger coverage you want. This ensures the gun is properly tucked away and easy to draw.
Alongside the holster section, you get four elastic pockets. Two pockets are 3 inches wide, and two are 3.5 inches. You can pack everything in this belly band.

A wallet, knife, tourniquet, cell phone, and beyond. It goes from a holster to a means to carry your entire EDC.
Complaints-wise, I wish it had more of a Velcro backing. It’s somewhat short and doesn’t inspire much confidence if you’re carrying a full load and running around.
It will likely hold just fine, and I haven’t had issues, but it is a weak point worth addressing.
The N.A.C.H.O. gives you lots of room to stash your goods besides your gun and holster. It can be worn in nearly any position you want, and since you bring your own holster, it will accommodate both left- and right-handed shooters.
3. Crossbreed Modular Belly Band 2.0
Crossbreed is the king of the hybrid holster, and they’ve been making belly bands for years. The latest 2.0 model has some minor functional differences from the original, but they are decided improvements.
This belly band comes with a molded Kydex shell of your choosing. You must choose the right shell for your gun and accessories when you order. This is one of the few times I’ve ever seen a company ensure your belly band holster was weapon-light-ready.
I’m a big fan of the advantages a molded Kydex shell offers. This includes better retention, a consistent draw, and overall better safety.
The holster attaches via hook and loop and is secured even more when the band wraps around the holster. Shooters who carry different guns can order separate shells for an easier carry option.
On top of that, Crossbreed includes a large pocket for a wallet, phone, tourniquet, or whatever and two small pockets for magazines.
The band itself is made from an antimicrobial polyester jersey material for comfort and breathability. It will still be a little muggy, but there is an attempt for comfort and cooling.
Modularity rules, especially in the gun world. Changing the gun I’m carrying depending on my style of dress or situation is valuable, and the Crossbreed belly band makes that a reality.
Plus, I can carry with a red dot and light without much concern, either. That can take you a long way if need be.
4. Alien Gear Low-Pro Belly Band
Alien Gear holsters have produced belly bands for quite some time, but the Low-Pro is the first to integrate a kydex holster. A polymer rig molded specifically for a gun adds a higher degree of retention, safety, and an easy draw.
Beyond integrating a polymer rig into the belly band, they’ve taken a few steps to innovate the design.
The Alien Gear Low-Pro is a minimalist design with a thin belt to help reduce the overall size and weight of the rig. On top of that, the minimalist belt setup keeps things from getting hot while you wear it.
This would be the perfect rig for exercising at the gym and going for runs.
On top of the low-profile design, the holster also features an innovative cant design. It’s worn at an appendix position but positioned a bit more horizontally than vertically for an easy and quick draw. You have fewer cover garments to defeat for a quicker draw.
It’s a belt-free carry option that is perfect if you live in states where the temperature and humidity often make regular belly bands a sweaty mess. Sadly, the holster shells don’t offer light-bearing options and are not red dot friendly.
Alien Gear makes tons of holsters. They likely have an option for you. They have a long history and diverse shell options, so everyone has access to a little of everything.
5. Clip & Carry Strapt-Tac Belly Band
If you’re like me, you look at belly bands and the temperature outside and feel a resounding sense of dread. I live in Florida, and as a well-insulated American, I run hot.
Belly bands often only add to that heat with their large elastic designs.
The Clip & Carry Strapt-Tac Belly Band is different, though.
Instead of having several inches of the band to add a layer of insulation to your torso, it has a 2-inch-wide strap that makes up most of the band. Where the gun is supported, you have your typical much wider portion to protect the gun from you and you from the gun.
This rig requires you to bring your own holster and works with traditional kydex IWB designs.
A 1.5-inch or so integral strap lets you clip your holster into place, and boom, you’re ready to carry. The clip attaches to the strap and slides into a pocket for maximum retention.
There is a standard and an appendix version, the difference being that the appendix version has a longer pocket and strap to accommodate sidecar holsters. The belt portion is quite adjustable, and there are a number of size options. Due to the unique, minimalist design, you can carry in nearly any position.
Sure, you have strongside and cross-draw with a heaping of appendix. You can also carry Air Marshal style with the rig up and under your arm, almost like a shoulder holster.
Obviously, the downside is no reloads, no handcuffs, no flashlight, just a gun, but that’s all you need most of the time.
6. Desantis Sky Band II
Companies have long designed belly bands for the concealed carrier, and these are aimed at the armed citizen…mostly.
DeSantis produces the Sky Band, which, from the beginning, was aimed at offering law enforcement a deep-concealment option.
The Sky Band II mixes traditional belly band design with a modern holster take. Each one comes with a molded Kydex holster shell. This offers the best retention possible, the quickest draw, and the safest method of carrying with a belly band.
The band itself is 5 inches wide to support a duty-sized firearm, and it’s made from surgical-grade elastic. Its elastic offers lots of support and comfort for a heavy gun and a ton of accessories.
The Sky Band II comes with a handcuff pouch, an elastic pocket, and three magazine pouches. You can carry almost a full loadout in the Sky Band II, and the supportive design ensures it won’t sag, slip, slide, or go limp on you.
The downside is that 5 inches of wrap mean 5 inches of a hotter-than-average part of your body. You’ll get sweaty fast, especially if this is your summer carry option.
This is the way to go if you need to carry a full loadout.
The Sky Band II holds nothing back and provides a solid carry option for those who want more than a gun on hand.
7. PHLster Enigma
The folks at PHLster might not like me calling the Enigma a belly band because it’s really not, but it serves the same purpose. It’s a beltless carry option that does wrap around your body.
This system isn’t an elastic band but more of a belt that wraps around your body and is secured via something akin to a belt buckle.
The belt is made from nylon and is designed to be rugged and durable. It can fit up to a 46-inch waist and is easily adjustable. An optional Sports Belt expands the waist to 50 inches.
The Enigma then allows you to attach your holster of choice, although PHLster sells the Enigma Express with an included holster if you so choose.
PHLster’s holsters work with the Enigma, but so do JM Custom Kydex, Henry Holsters, Holsterco, KSG Holsters, Dark Star Gear, and more. If you want a light-bearing model, make sure you purchase the Light Bearing Enigma.
The Enigma comes with a leg leash that fits over your thigh to keep the Enigma centered and in place. This beltless design is aimed at appendix carry but gives the user more control over the holster’s placement.
With no belt holes, buttons, or snaps to contend with, you can change the angle and height easily to get it just right.
The PHLster Enigma is aptly named, as it’s not really a belly band, yet kind of is at the same time. It’s the model I’d suggest if you asked me to pick my absolute favorite.
It’s one of the best belt-free carry options out there.
Final Thoughts
Belly bands can be a very useful tool to have on hand. They tend to be fairly comfortable and easy to carry with in odd situations.
While they might not be ideal for everyday carry, they work great when you need a belt-free solution.
What’s your favorite belly band holster? Let us know in the comments below! For more on concealed carry, check out our Concealed Carry Guide.
The post 7 Best Belly Band Holsters for Concealed Carry & Working Out appeared first on Pew Pew Tactical.
Pew Pew Tactical
Managing Planetscale DB schema with Laravel migrations
LaravelPlanetScale
This package adds a php artisan pscale:migrate command to your Laravel app, which can be used instead of the normal php artisan migrate command when using a PlanetScale database.
Installation
Via Composer
composer require x7media/laravel-planetscale
Configuration & Usage
- Log in to your PlanetScale account and get your Service Token and Service Token ID from the organization settings. Also take note of your organization name and production branch name for the next steps.
- Add the following database-level permissions to your Service Token for your app’s database:
- create_branch – Create a database branch
- delete_branch – Delete a database branch
- connect_branch – Connect to, or create passwords and certificates for a database branch
- create_deploy_request – Create a database deploy request
- read_deploy_request – Read database deploy requests
- From the database settings screen on PlanetScale, click the checkbox to enable the “Automatically copy migration data” setting. Select “Laravel” from the migration framework dropdown, and it should fill in “migrations” for the migration table name. Then save the database settings. This will allow migration status to be synced across PlanetScale database branches.
- Set up the following environment variables in your app with the appropriate values:
PLANETSCALE_ORGANIZATION=
PLANETSCALE_PRODUCTION_BRANCH=
PLANETSCALE_SERVICE_TOKEN_ID=
PLANETSCALE_SERVICE_TOKEN=
Additionally, you’ll need to make sure your database name is set under:
DB_DATABASE=
OR
Optionally you can publish the config:
php artisan vendor:publish --tag=laravel-planetscale-config
Then customize the values in the config. NOTE: If you take this approach, we STRONGLY RECOMMEND that you still use environment variables or some other secrets storage, at least for your service token and service token ID, for security.
- Replace the php artisan migrate command in your deployment script or process with this:
php artisan pscale:migrate
NOTE: The pscale:migrate command supports the same options as Laravel’s built-in migration command, and it will pass those options along to it when it gets to that step in the process.
FAQs
Why is this necessary?
PlanetScale has a lot of advantages when using it as your application’s production database. However, it handles your database and schema migrations in a somewhat unusual way.
It uses branches for your database. A branch can be production or development. You’ll want to use a production branch for your app in production because that affords you extra features like automatic backups; however, you cannot perform schema changes directly against a production branch. Instead, you create a new development branch based on your production branch, perform your schema changes on it, and then merge it back into your production branch, just like you would do with your code in Git.
This package uses PlanetScale’s Public API to automate the process of creating a new development branch, connecting your app to the development branch, running your Laravel migrations on the development branch, merging that back into your production branch, and deleting the development branch.
Are there any notable limitations to PlanetScale’s branching?
Yes, there is one BIG caveat: branching and merging are for schema only. So you will need to separate your schema migrations from your data migrations. Use this package to run your schema migrations, and run your data migrations separately against your production branch.
An alternative method is to demote your production branch back to a development branch; then you can mix schema and data migrations. When that is finished, promote the branch back to a production branch. But that is currently a manual process. I have made a request to the PlanetScale team for a slight change to their API that would allow this demote-promote process to be automated, and if that change is made, I will update this package. However, I ultimately have no control over whether or when that will become possible.
Change log
Please see the changelog for more information on what has changed recently.
Testing
Contributing
Please see the contributing guidelines.
Security
If you discover any security related issues, please email info@x7media.com instead of using the issue tracker.
Credits
License
MIT. Please see the license file for more information.
Laravel News Links
Langdon Tactical Offers Completely Free Training Video Series
The importance of quality education and training when it comes to firearms ownership and operation is absolutely paramount. Many who are new to firearms find it daunting and have a lot of questions, particularly those who consider making everyday carry a part of their lifestyle. For this reason, we introduce Langdon Tactical and its video training series and comprehensive resource.
Langdon Tactical Change the Game
Quality, thorough, thoughtful, and well-designed training content to serve firearms owners across the full spectrum of experience is important. However, no single platform has truly hit the mark.
This is where Ernest and Aimee Langdon enter the scene with designs to change the game. They are the president and vice president of the renowned firearms customization outfit Langdon Tactical Technology (LTT).
The Backstory
Ernest’s background includes 12 years of active-duty service in the United States Marine Corps. Additionally, he has more than 30 years of competitive shooting experience. His competitive accolades include a Grand Master rating in USPSA (United States Practical Shooting Association). Likewise, he has accreditation as a Distinguished Master in IDPA (International Defensive Pistol Association). Finally, he has ten National Championships and two World Speed Shooting titles.
Aimee is a global business professional with more than 16 years of accomplished professional business development, sales, and marketing experience. She has podiumed three times at The Tactical Games—twice in first and once in second place at the 2021 Nationals. While leading Business Development and Operations for LTT, she trains at a grueling pace and is an energetic and dedicated mom.
This couple not only helms the day-to-day business at LTT but are also the masterminds and producers behind LTT Discover. LTT is renowned for customizing and converting exceptional stock firearms platforms into extraordinary competition-quality enhanced guns. LTT Discover is a groundbreaking firearms education and empowerment platform.
Free Video Series and Comprehensive Training Resource
As noted in the mission statement of the LTT Discover website, the completely free video series and comprehensive training resource is “…aimed at bringing forth great information and resources to better guide and educate those looking to own a firearm without intimidation or demanding perfection.”
The curriculum is guided, based on experience level and keynotes distilled from the unique backgrounds and perspectives of both Aimee and Ernest. Likewise, it draws insight and contributions from influential pillars and experts in the 2A community. As a result, the series serves as a very inclusive, welcoming, well-rounded resource.
I recently navigated the series and website myself. Having experience with what else is (and isn’t) out there, I feel LTT Discover is genuinely a refreshing new approach. Even the most well-trained and experienced in the firearms community should find great value throughout.
I recently had a chance to chat with the founders about their journey together. We discussed the inception of the LTT Discover platform and learned what is in store for the community. Specifically as it applies to the future of firearms education and training.
QUESTIONS & ANSWERS
How did you two meet each other?
AIMEE: We worked together at a robotics company for six years. We both started new chapters personally and professionally and found each other romantically.
What’s life like between the business, family, and training? How do you juggle it all?
ERNEST: Balance. We are constantly balancing family time, work time, and personal time and sometimes we fail all the way around. But fitness and nutrition are pivotal components of our family unit. We are active together and love to cook.
AIMEE: Outside of our personal shooting training, we attend other classes and courses with other instructors to continue to learn from other people.
I understand the Discover platform was partially developed because of your experiences training together. What’s that dynamic like at the range?
AIMEE: Have you ever tried to teach your spouse something? HA! We shoot together all the time and as a couple, manage it better than most.
ERNEST: In all the years we have been going to the range, we can count on one hand the number of times we silently packed up and left the range. Learning when and how to instruct as well as how to be a student while separating the relationship is something we have figured out.
AIMEE: It’s tough to teach and be corrected all the time. However, separating the emotion from the task often helps. And sometimes, he just needs to be reminded, “positive reinforcement helps too,” or “hey, can we just shoot today?”
ERNEST (laughing): Sometimes, I just need to keep my mouth shut.
What were some of the main motivations or “aha! moments” that led to you beginning to consider developing LTT Discover?
ERNEST: We began to develop Discover when we found a gap in the education and information available for people who are not tactical, LE, or related to the LE/MIL community. Being business owners, we are often asked for direction, information, and/or training. As we started to get bigger, more people were asking for assistance, and we were looking for places to send them.
AIMEE: Guilty by association, the assumption was that early on, I was just as proficient and experienced as Ernest was, and the reality is, at that time, I hadn’t even touched a handgun before we got together. The “Carry Journey” component of Discover was created based on many gun owners’ first-time experiences around firearms or in making the decision to own a firearm while also incorporating health and fitness as a more complete mindset.
Being in the tactical industry, it’s often assumed or impressed upon others that if you are going to own a firearm, you have to do things a certain way and if you don’t, then you shouldn’t even own a firearm. It’s very intimidating for millions of people and many of the gun owners today.
Who is LTT Discover for?
ERNEST: LTT Discover is for everyone—those thinking about firearms ownership to those who own firearms and who carry every day.
When it comes to the 2A space and industry, how important is community and community building to you, and why?
AIMEE: Community and community building is huge; it’s very important to us. We believe a community provides real and raw emotion, tied to being caring and helpful to individuals in a positive way.
A strong community provides a safe place where people can seek information and ask questions without the fear of being belittled or made fun of because they don’t understand and/or are new and don’t know things that some people consider common knowledge.
ERNEST: Being a part of a community makes people feel comfortable to ask a question, agree OR disagree, and be guided by trusted and real individuals from a real, raw perspective, position, or experience background.
What has the initial response to LTT Discover been since its launch?
AIMEE: We are blown away by the positive response from Discover. We have received so many thanks and “ah-ha’s” from men and women alike who feel like the information is proficient, straightforward, and not intimidating. Many enthusiasts who are gun owners have been able to use it as a tool for friends and loved ones to share information and thoughts from real individuals.
As a closing question, what are your personal hopes and dreams for the impact that LTT will have on the industry and the public at large?
AIMEE: As a brand, we hope Langdon Tactical will be a resource not only for products and training (as it is today) but also as an educational resource that provides a welcoming community guiding people to be more confident, self-reliant, and empowered as individuals.
Thank you, Aimee and Ernest, for sitting down to share this exciting new resource. I expect it will help grow the community in a much-needed way. And I’ll definitely be watching to see what comes next.
To learn more about Langdon Tactical Technology and to explore the LTT Discover platform, visit LangdonTactical.com and LTTDiscover.com.
This article was originally published in the Personal Defense World April/May 2022 issue. Subscription is available in print and digital editions at OutdoorGroupStore.com. Or call 1-800-284-5668, or email subscriptions@athlonmediagroup.com.
The post Langdon Tactical Offers Completely Free Training Video Series appeared first on Personal Defense World.
Personal Defense World
Watch 10 minutes of ‘Legend of Zelda: Tears of the Kingdom’ gameplay
As promised, Nintendo has showcased 10 minutes of The Legend of Zelda: Tears of the Kingdom gameplay, and it’s a useful preview if you’re wondering just how the developers will improve on Breath of the Wild‘s formula. Most notably, producer Eiji Aonuma notes that fusing objects plays an important role in the game. You can build stronger weapons and even craft vehicles like powered boats and hovercraft. Enemies can use fused weapons too, though, so you can’t assume that a favorite combat strategy will work.
The demo video also shows a way to reach the floating islands above Hyrule (by using a recall ability on an elevator stone), and what happens if you fall or jump off. You have full control all the way down, so you can glide to distant areas or plunge quickly toward the ground. Many mechanics appear familiar, so you won’t have to relearn the fundamentals.
And yes, Nintendo plans to cater to Legend of Zelda devotees with special edition hardware. The company is releasing a Tears of the Kingdom OLED Switch (shown below) for $360 on April 28th, weeks ahead of the game’s May 12th launch. You won’t get a copy of Tears, unfortunately, but you will get lavish artwork on the Switch itself, the Joy-Con controllers, and the dock. If you already have a Switch, you can also buy a Tears-edition Pro Controller ($75) or carrying case ($25).
This article originally appeared on Engadget at https://www.engadget.com/watch-10-minutes-of-legend-of-zelda-tears-of-the-kingdom-gameplay-145613610.html?src=rss
Engadget