Get real-time insights into the status of your PHP FPM with this convenient card for Laravel Pulse.
Example
Installation
Install the package using Composer:
composer require maantje/pulse-php-fpm
Enable PHP FPM status path
Configure your PHP FPM status path in your FPM configuration:
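FPM exposes its status page through the standard pm.status_path directive in the pool configuration; the /status value below is only an example and must match the status_path you give the recorder:

pm.status_path = /status

Reload PHP-FPM after changing the pool configuration so the status endpoint becomes available.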
Register the recorder
In your pulse.php configuration file, register the PhpFpmRecorder with the desired settings:
return [
    // ...
    'recorders' => [
        PhpFpmRecorder::class => [
            // Optionally set a server name, gethostname() is the default
            'server_name' => env('PULSE_SERVER_NAME', gethostname()),
            // Optionally set a status path, the current value is the default
            'status_path' => 'localhost:9000/status', // with a unix socket: unix:/var/run/php-fpm/web.sock/status
            // Optionally give datasets, these are the default values.
            // Omitting a dataset or setting the value to false will remove the line from the chart
            // You can also set a color as value that will be used in the chart
            'datasets' => [
                'active processes' => '#9333ea',
                'total processes' => 'rgba(147,51,234,0.5)',
                'idle processes' => '#eab308',
                'listen queue' => '#e11d48',
            ],
        ],
    ],
];
Welcome to the ultimate guide to debugging MySQL issues — your comprehensive checklist for tackling MySQL database environment challenges. This MySQL debugging checklist is as useful to a…
A quick configuration change may do the trick in improving the performance of your AWS RDS for MySQL instance. Here, we will discuss a notable new feature in Amazon RDS, the Dedicated Log Volume (DLV), that has been introduced to boost database performance. While this discussion primarily targets MySQL instances, the principles are also relevant to PostgreSQL and MariaDB instances.
What is a Dedicated Log Volume (DLV)?
A Dedicated Log Volume (DLV) is a specialized storage volume designed to house database transaction logs separately from the volume containing the database tables. This separation aims to streamline transaction write logging, improving efficiency and consistency. DLVs are particularly advantageous for databases with large allocated storage, high I/O per second (IOPS) requirements, or latency-sensitive workloads.
Who can benefit from DLV?
DLVs are currently supported for Provisioned IOPS (PIOPS) storage, with a fixed size of 1,000 GiB and 3,000 Provisioned IOPS. Amazon RDS extends support for DLVs across various database engines:
MariaDB: 10.6.7 and later v10 versions
MySQL: 8.0.28 and later v8 versions
PostgreSQL: 13.10 and later v13 versions, 14.7 and later v14 versions, and 15.2 and later v15 versions
Cost of enabling Dedicated Log Volumes (DLV) in RDS
The documentation doesn’t say much about additional charges for the Dedicated Log Volumes, but I reached out to AWS support, who responded exactly as follows:
Please note that there are no additional costs for enabling a dedicated log volume (DLV) on Amazon RDS. By default, to enable DLV, you must be using PIOPS storage, sized at 1,000 GiB with 3,000 IOPS, and you will be priced according to the storage type.
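To try DLV on an existing instance, it can be toggled through the console or the AWS CLI. The sketch below assumes the --dedicated-log-volume flag of modify-db-instance (verify it against your CLI version) and uses a placeholder instance identifier:

# Enable a dedicated log volume on an existing instance (PIOPS storage required)
aws rds modify-db-instance \
    --db-instance-identifier my-rds-instance \
    --dedicated-log-volume \
    --apply-immediately

As noted under the implementation considerations below, the change only takes effect after the instance reboots.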
Are DLVs effective for your RDS instance?
Using dedicated mounts for components such as the binlogs and the datadir is a standard recommended practice. Isolating logs and data on separate mounts makes the setup more manageable and efficient: it optimizes I/O, prevents potential bottlenecks, and improves overall system performance. Adopting this practice promotes a structured, efficient storage strategy and, ultimately, a more robust database environment.
Thus, using Dedicated Log Volumes (DLVs), though new in AWS RDS, has been one of the recommended best practices and is a welcome setup improvement for your RDS instance.
We performed a standard benchmarking test using the sysbench tool to compare the performance of a DLV instance vs a standard RDS MySQL instance, as shared in the following section.
Benchmarking AWS RDS DLV setup
Setup
2 RDS Single DB instances (one regular, one DLV-enabled): db.m6i.2xlarge, MySQL 8.0.31, 8 Core / 32G, data size 32G
1 EC2 instance (sysbench client): c5.2xlarge, CentOS 7, 8 Core / 16G
Default RDS configuration was used, with binlogs enabled and full ACID compliance settings.
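The exact sysbench invocation isn't listed here; a typical run of the built-in oltp_write_only workload against an RDS endpoint looks roughly like the sketch below, where the host, credentials, table sizing, and thread count are placeholders:

# Load the test tables first with `prepare`, then run the write-only workload
sysbench oltp_write_only \
    --db-driver=mysql \
    --mysql-host=<rds-endpoint> \
    --mysql-user=sbtest \
    --mysql-password=<password> \
    --mysql-db=sbtest \
    --tables=10 \
    --table-size=1000000 \
    --threads=64 \
    --time=300 \
    run

The same options with oltp_read_only and oltp_read_write cover the other two traffic patterns.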
Benchmark results for DLV-enabled instance vs. standard instance
Write-only traffic
Read-write traffic
Read-only traffic
Benchmarking analysis
For both read-only and read-write traffic, there is a consistent improvement in QPS as the number of threads increases.
For write-only traffic, QPS matches the standard RDS instance at lower thread counts, while at higher thread counts there is a drastic improvement.
DLV naturally affects write operations the most, so the write-only test should be given the most weight when comparing the DLV configuration against standard RDS.
Benchmarking outcome
Based on the sysbench results in the tested environment, enabling DLV on a standard RDS instance is strongly advised. DLV demonstrates superior performance across most sysbench workloads, with particularly notable gains in write-intensive scenarios.
Implementation considerations
When opting for DLVs, it’s crucial to be aware of the following considerations:
DLV activation requires a reboot: After modifying the DLV setting for a DB instance, a reboot is mandatory for the changes to take effect.
Recommended for larger configurations: While DLVs offer advantages across various scenarios, they are particularly recommended for database configurations of five TiB or greater. This recommendation underscores DLV’s effectiveness in handling substantial storage volumes.
Benchmark and test: It is always recommended to test and review performance with your own application traffic rather than relying solely on standard benchmarks driven by synthetic load.
DLV in Multi-AZ deployments
Amazon RDS seamlessly integrates DLVs with Multi-AZ deployments. Whether you’re modifying an existing Multi-AZ instance or creating a new one, DLVs are automatically created for both the primary and secondary instances. This ensures that the advantages of DLV extend to enhanced availability and reliability in Multi-AZ configurations.
DLV with read replicas
DLV support extends to read replicas. If the primary DB instance has DLV enabled, all read replicas created after DLV activation inherit this feature. However, it’s important to note that read replicas created before DLV activation will not have it enabled by default. Explicit modification is required for pre-existing read replicas to leverage DLV benefits.
Conclusion
Dedicated Log Volumes have emerged as a strong option for optimizing Amazon RDS performance. By segregating transaction logs and harnessing the power of dedicated storage, DLVs contribute to enhanced efficiency and consistency. Integrating DLVs into your database strategy will help you in your efforts to achieve peak performance and reliability.
How Percona can help
Percona is a trusted partner for many industry-leading organizations across the globe that rely on us for help in fully utilizing their AWS RDS environment. Here’s how Percona can enhance your AWS RDS experience:
Expert configuration: RDS works well out of the box, but having Percona’s expertise ensures optimal performance. Our consultants will configure your AWS RDS instances for the best possible performance, ensuring minimal TCO.
Decades of experience: Our consultants bring decades of experience in solving complex database performance issues. They understand your goals and objectives, providing unbiased solutions for your database environment.
Blog resources: Percona experts actively contribute to the community through knowledge sharing on forums and blogs.
Laravel Pulse is a lightweight application monitoring tool for Laravel. It was just
released today and I took a bit of time to create a custom card to
show outdated composer dependencies.
This is what the card looks like right now:
I was surprised at how easy the card was to create. Pulse has all of the infrastructure built out for:
storing data in response to events or on a schedule
retrieving your data back, aggregated or not
rendering your data into a view.
The hooks are all very well thought through.
There is no official documentation for custom cards yet, so much of this is subject to change. Everything I’m telling
you here I learned through diving into the source code.
Recording data
The first step is to create a Recorder that will record the data you’re looking to monitor. If you
open config/pulse.php you’ll see a list of recorders:
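It looks roughly like this (abridged; these recorders ship with Pulse, but treat the exact entries and options below as illustrative since they vary by version):

'recorders' => [
    Recorders\Servers::class => [
        'server_name' => env('PULSE_SERVER_NAME', gethostname()),
    ],
    Recorders\SlowQueries::class => [
        'threshold' => env('PULSE_SLOW_QUERIES_THRESHOLD', 1000),
    ],
    Recorders\Exceptions::class => [
        // ...
    ],
    // ...
],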
The recorders listen for application events. Pulse emits a SharedBeat event if your recorder needs to run on an
interval instead of in response to an application event.
For example, the Servers recorder records server stats every 15 seconds in response to the SharedBeat event:
class Servers
{
    public string $listen = SharedBeat::class;

    public function record(SharedBeat $event): void
    {
        if ($event->time->second % 15 !== 0) {
            return;
        }

        // Record server stats...
    }
}
But the Queue recorder listens for specific application events:
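Sketching from the shipped Queues recorder (the exact event list is from memory and may differ slightly between Pulse versions), its $listen property is an array of queue events rather than the beat event:

class Queues
{
    public array $listen = [
        JobQueued::class,
        JobProcessing::class,
        JobProcessed::class,
        JobReleasedAfterException::class,
        JobFailed::class,
    ];

    // ...
}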
In our case, we just need to check for outdated packages once a day on a schedule, so we’ll use the SharedBeat event.
Creating the recorder
The recorder is a plain PHP class with a record method. Inside of that method you’re given one of the events to which
you’re listening. You also have access to Pulse in the constructor.
class Outdated
{
    public string $listen = SharedBeat::class;

    public function __construct(
        protected Pulse $pulse,
        protected Repository $config
    ) {
        //
    }

    public function record(SharedBeat $event): void
    {
        //
    }
}
The SharedBeat event has a time property on it, which we can use to decide if we want to run or not.
class Outdated
{
    // ...

    public function record(SharedBeat $event): void
    {
        // Only run once per day
        if ($event->time !== $event->time->startOfDay()) {
            return;
        }
    }
}
Pulse will handle invoking the record method, we just need to figure out what to do there. In our case we’re going to
run composer outdated.
class Outdated
{
    // ...
    public function record(SharedBeat $event): void
    {
        // Only run once per day
        if ($event->time !== $event->time->startOfDay()) {
            return;
        }
        // Shell out via Illuminate\Support\Facades\Process (flags are a reasonable guess; adjust as needed)
        $output = Process::run('composer outdated --direct --format=json')->output();
    }
}
There is currently no documentation, but from what I can tell the pulse_aggregates table stores pre-computed rollups
of time-series data for better performance. The entries table stores individual events, like requests or exceptions.
The values table seems to be a simple "point in time" store.
We’re going to use the values table to stash the output of composer outdated. To do this, we use the pulse->set()
method.
class Outdated
{
    // ...
    public function record(SharedBeat $event): void
    {
        // Only run once per day
        if ($event->time !== $event->time->startOfDay()) {
            return;
        }
        $output = Process::run('composer outdated --direct --format=json')->output();
        // Stash the output in the values table (set() arguments assumed to be: type, key, value)
        $this->pulse->set('composer_outdated', 'result', $output);
    }
}
By default, our ComposerOutdated class extends Livewire’s Component class, but we’re going to change that to extend
Pulse’s Card class.
namespace App\Livewire;

-use Livewire\Component;
+use Laravel\Pulse\Livewire\Card;

-class ComposerOutdated extends Component
+class ComposerOutdated extends Card
{
    public function render()
    {
        return view('livewire.composer-outdated');
    }
}
To get our data back out of the Pulse data store, we can just use the Pulse facade. This is one of the things I’m
really liking about Pulse. I don’t have to add migrations, maintain tables, add new models, etc. I can just use their
data store!
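A minimal sketch of that read path, assuming the facade exposes a values() lookup for the type we stored in the recorder (the method name, arguments, and the ->value property are my assumptions, not something confirmed by Pulse documentation):

use Laravel\Pulse\Facades\Pulse;

// Fetch the stored `composer_outdated` value and decode the JSON we saved earlier
$entry = Pulse::values('composer_outdated', ['result'])->first();
$packages = $entry ? json_decode($entry->value, true) : [];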
Now, in our resources/views/vendor/pulse folder, we have a new dashboard.blade.php where we can add our custom card. This is what it looks like by default:
<x-pulse>
    <livewire:pulse.servers cols="full" />
    <livewire:pulse.usage cols="4" rows="2" />
    <livewire:pulse.queues cols="4" />
    <livewire:pulse.cache cols="4" />
    <livewire:pulse.slow-queries cols="8" />
    <livewire:pulse.exceptions cols="6" />
    <livewire:pulse.slow-requests cols="6" />
    <livewire:pulse.slow-jobs cols="6" />
    <livewire:pulse.slow-outgoing-requests cols="6" />
</x-pulse>
Adding our custom card
We can now add our new card wherever we want!
<x-pulse>
    <livewire:composer-outdated cols="1" rows="3" />
    <livewire:pulse.servers cols="full" />
    <livewire:pulse.usage cols="4" rows="2" />
    <livewire:pulse.queues cols="4" />
    <livewire:pulse.cache cols="4" />
    <livewire:pulse.slow-queries cols="8" />
    <livewire:pulse.exceptions cols="6" />
    <livewire:pulse.slow-requests cols="6" />
    <livewire:pulse.slow-jobs cols="6" />
    <livewire:pulse.slow-outgoing-requests cols="6" />
</x-pulse>
Community site
There is a lot to learn about Pulse, and I’ll continue to post here as I do. I’m working
on builtforpulse.com to showcase Pulse-related packages and articles, so make sure you stay
tuned over there!
30 years ago today, Doom "invented the modern PC games industry, as a place dominated by technologically advanced action shooters," remembers the Guardian:
In late August 1993, a young programmer named Dave Taylor walked into an office block… The carpets, he discovered, were stained with spilled soda, the ceiling tiles yellowed by water leaks from above. But it was here that a team of five coders, artists and designers were working on arguably the most influential action video game ever made. This was id Software. This was Doom… [W]hen Taylor met id’s charismatic designer and coder John Romero, he was shown their next project… "There were no critters in it yet," recalls Taylor of that first demo. "There was no gaming stuff at all. It was really just a 3D engine. But you could move around it really fluidly and you got such a sense of immersion it was shocking. The renderer was kick ass and the textures were so gritty and cool. I thought I was looking at an in-game cinematic. And Romero is just the consummate demo man: he really feeds off of your energy. So as my jaw hit the floor, he got more and more animated. Doom was amazing, but John was at least half of that demo’s impact on me."
[…] In late 1992, it had become clear that the 3D engine John Carmack was planning for Doom would speed up real-time rendering while also allowing the use of texture maps to add detail to environments. As a result, Romero’s ambition was to set Doom in architecturally complex worlds with multiple storeys, curved walls, moving platforms. A hellish Escher-esque mall of death… "Doom was the first to combine huge rooms, stairways, dark areas and bright areas," says Romero, "and lava and all that stuff, creating a really elaborate abstract world. That was never possible before…." [T]he way Doom combined fast-paced 3D action with elaborate, highly staged level design would prove hugely influential in the years to come. It’s there in every first-person action game we play today…
But Doom wasn’t just a single-player game. Carmack consumed an entire library of books on computer networking before working on the code that would allow players to connect their PCs via modem to a local area network (LAN) and play in the game together… Doom brought fast-paced, real-time action, both competitive and cooperative, into the gaming mainstream. Seeing your friends battling imps and zombie space marines beside you in a virtual world was an exhilarating experience… When Doom was launched on 10 December 1993, it became immediately clear that the game was all-consuming — id Software had chosen to make the abbreviated shareware version available via the FTP site of the University of Wisconsin-Madison, but that crashed almost immediately, bringing the institution’s network to its knees…
"We changed the rules of design," says Romero. "Getting rid of lives, which was an arcade holdover that every game had; getting rid of score because it was not the goal of the game. We wanted to make it so that, if the player died, they’d just start that level over — we were constantly pushing them forward. The game’s attitude was, I want you to keep playing. We wanted to get people to the point where they always needed more."
It was a unique moment in time. In the article, designer Sandy Petersen remembers that "I would sometimes get old dungeons I’d done for D&D and use them as the basis for making a map in Doom." Cheat codes had been included for debugging purposes — but were left in the game for players to discover. The article even includes a link to a half-hour video of a 1993 visit to id Software filmed by BBS owner Dan Linton.
And today on X, John Romero shared a link to the Guardian’s article, along with some appreciative words for anyone who’s ever played the game. "DOOM is still remembered because of the community that plays and mods it 30 years on. I’m grateful to be a part of that community and fortunate to have been there at its beginning."
The Guardian’s article notes that now Romero "is currently working on Sigil 2, a spiritual successor to the original Doom series."
Sheldon Ruston was hired to move the Elmwood Building in downtown Halifax off its foundations for renovations and development, but that was no easy task.
Arcade cabinets made of marble feature meticulously carved sculptures of iconic video game characters. Weighing 300 kg, each arcade machine boasts a high-quality Calacatta marble exterior, an LCD screen, and a joystick pad. These unique, limited-edition arcade cabinets feature Mario, Sonic, Alien, and other video game character sculptures carved on the sides. Juxtaposition of ancient-inspired marble with […] (Toxel.com)
FrankenPHP just hit a significant milestone this week, reaching a v1.0 release. A modern PHP application server written in Go, FrankenPHP gives you a production-grade PHP server with just one command.
It includes native support for Symfony, Laravel, WordPress, and more:
Production-grade PHP server, powered by Caddy
Easy deploy – package your PHP apps as a standalone, self-executable binary
Run only one service – no more separate PHP-FPM and Nginx processes
Extensible – compatible with PHP 8.2+, most PHP extensions, and all Caddy modules
Worker mode – boot your application once and keep it in memory
Real-time – events sent to the browser as JavaScript events
Which PHP modules are supported?
I tried looking for a definitive list, but from what I gather most popular PHP extensions should work. The documentation confirms that OPcache and Xdebug are natively supported by FrankenPHP.
You can get started with FrankenPHP at frankenphp.dev, and browse the documentation to learn about the worker mode, Docker images, and creating static binaries of your application.
If you want to experiment with your application, the easiest way to try it out is to run the following Docker command:
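Per the FrankenPHP quick-start (worth double-checking against the current docs), the command serves the current directory over ports 80 and 443:

docker run -v $PWD:/app/public -p 80:80 -p 443:443 dunglas/frankenphp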
GlowDaddy’s Velcro glow-in-the-dark cards can be used alone or attached to the hook-and-loop panels found on tactical bags. Each is precision cut from HyperGlow luminescent material, which can glow brightly for hours after exposure to direct sunlight or a UV light source. The card measures 3.37″ L x 2.1″ W x 0.12″ thick. (The Awesomer)