PHP FPM status card for Laravel Pulse


Get real-time insights into the status of your PHP FPM with this convenient card for Laravel Pulse.

Example

[Screenshot: the PHP FPM status card rendered on a Pulse dashboard]

Installation

Install the package using Composer:

composer require maantje/pulse-php-fpm

Enable PHP FPM status path

Configure your PHP FPM status path in your FPM configuration:
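The snippet itself is left to you; in a typical FPM pool configuration (for example /etc/php-fpm.d/www.conf or /etc/php/8.2/fpm/pool.d/www.conf, depending on your distribution) the relevant directive is:

pm.status_path = /status

Reload PHP FPM afterwards so the status endpoint is served.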

Register the recorder

In your pulse.php configuration file, register the PhpFpmRecorder with the desired settings:

return [
    // ...

    'recorders' => [
        PhpFpmRecorder::class => [
            // Optionally set a server name; gethostname() is the default
            'server_name' => env('PULSE_SERVER_NAME', gethostname()),
            // Optionally set a status path; the value below is the default
            // (with a unix socket: unix:/var/run/php-fpm/web.sock/status)
            'status_path' => 'localhost:9000/status',
            // Optionally configure datasets; these are the default values.
            // Omitting a dataset or setting its value to false removes the line from the chart.
            // You can also set a color as the value, which will be used in the chart.
            'datasets' => [
                'active processes' => '#9333ea',
                'total processes' => 'rgba(147,51,234,0.5)',
                'idle processes' => '#eab308',
                'listen queue' => '#e11d48',
            ],
        ],
    ],
];

Ensure you’re running the pulse:check command.
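If it isn't already running, start it with:

php artisan pulse:check

This recorder captures FPM status while that command runs.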

Add to your dashboard

Integrate the card into your Pulse dashboard by publishing the vendor view and then modifying the dashboard.blade.php file.
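The view is published with:

php artisan vendor:publish --tag=pulse-dashboard

Then add the card to dashboard.blade.php: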

<x-pulse>
    <livewire:pulse.servers cols="full" />

+   <livewire:fpm cols="full" />

    <livewire:pulse.usage cols="4" rows="2" />

    <livewire:pulse.queues cols="4" />

    <livewire:pulse.cache cols="4" />

    <livewire:pulse.slow-queries cols="8" />

    <livewire:pulse.exceptions cols="6" />

    <livewire:pulse.slow-requests cols="6" />

    <livewire:pulse.slow-jobs cols="6" />

    <livewire:pulse.slow-outgoing-requests cols="6" />

</x-pulse>

And that’s it! Enjoy enhanced visibility into your PHP FPM status on your Pulse dashboard.

Laravel News Links

Maximizing Performance of AWS RDS for MySQL with Dedicated Log Volumes


A quick configuration change may do the trick in improving the performance of your AWS RDS for MySQL instance. Here, we will discuss a notable new feature in Amazon RDS, the Dedicated Log Volume (DLV), that has been introduced to boost database performance. While this discussion primarily targets MySQL instances, the principles are also relevant to PostgreSQL and MariaDB instances.

What is a Dedicated Log Volume (DLV)?

A Dedicated Log Volume (DLV) is a specialized storage volume designed to house database transaction logs separately from the volume containing the database tables. This separation aims to streamline transaction write logging, improving efficiency and consistency. DLVs are particularly advantageous for databases with large allocated storage, high I/O per second (IOPS) requirements, or latency-sensitive workloads.

Who can benefit from DLV?

DLVs are currently supported for Provisioned IOPS (PIOPS) storage, with a fixed size of 1,000 GiB and 3,000 Provisioned IOPS. Amazon RDS extends support for DLVs across various database engines:

  • MariaDB: 10.6.7 and later v10 versions
  • MySQL: 8.0.28 and later v8 versions
  • PostgreSQL: 13.10 and later v13 versions, 14.7 and later v14 versions, and 15.2 and later v15 versions

Cost of enabling Dedicated Log Volumes (DLV) in RDS

The documentation doesn’t say much about additional charges for the Dedicated Log Volumes, but I reached out to AWS support, who responded exactly as follows: 

Please note that there are no additional costs for enabling a dedicated log volume (DLV) on Amazon RDS. By default, to enable DLV, you must be using PIOPS storage, sized at 1,000 GiB with 3,000 IOPS, and you will be priced according to the storage type. 

Are DLVs effective for your RDS instance?

Implementing dedicated mounts for components such as binlogs and the datadir is a recommended standard practice. Isolating logs and data on dedicated mounts makes them more manageable and efficient, facilitating optimized I/O operations, preventing potential bottlenecks, and enhancing overall system performance. Overall, adopting this practice promotes a structured and efficient storage strategy, fostering better performance, manageability, and, ultimately, a more robust database environment.

Thus, using Dedicated Log Volumes (DLVs), though new in AWS RDS, has been one of the recommended best practices and is a welcome setup improvement for your RDS instance.

We performed a standard benchmarking test using the sysbench tool to compare the performance of a DLV instance vs a standard RDS MySQL instance, as shared in the following section.

Benchmarking AWS RDS DLV setup

Setup

2 RDS Single DB instances (Regular and DLV-enabled):
  • Instance class: db.m6i.2xlarge (8 Core / 32G)
  • MySQL 8.0.31
  • Data Size: 32G

1 EC2 instance (sysbench client):
  • Instance class: c5.2xlarge (8 Core / 16G)
  • CentOS 7

Default RDS configuration was used, with binlogs enabled and full ACID-compliance settings.
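For reference, a typical sysbench invocation for the write-only scenario looks like the following (table counts, sizes, and thread counts are illustrative, not necessarily the exact parameters used in this benchmark):

# prepare test tables (values illustrative)
sysbench oltp_write_only --mysql-host=<rds-endpoint> --mysql-user=sbtest \
  --mysql-password=<secret> --tables=10 --table-size=1000000 prepare

# run the write-only workload, repeating with increasing --threads
sysbench oltp_write_only --mysql-host=<rds-endpoint> --mysql-user=sbtest \
  --mysql-password=<secret> --tables=10 --table-size=1000000 \
  --threads=64 --time=300 --report-interval=10 run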

Benchmark results for DLV-enabled instance vs. standard instance

Write-only traffic

AWS RDS for MySQL - DLV benchmarking

Read-write traffic

AWS RDS for MySQL - DLV benchmarking

Read-only traffic

AWS RDS for MySQL - DLV benchmarking

Benchmarking analysis

  • For both read-only and read-write traffic, there is a constant improvement in the QPS counters as the number of threads increases.
  • For write-only traffic, the QPS counters match the performance of standard RDS instances at lower thread counts, while at higher thread counts there is a drastic improvement.
  • The DLV, of course, affects the WRITE operations the most, and hence, the write-only test should be given the most consideration for the comparison of the DLV configuration vs. standard RDS.

Benchmarking outcome

Based on the sysbench benchmark results in the specified environment, it is strongly advised to employ DLV for a standard RDS instance. DLV demonstrates superior performance across most sysbench workloads, particularly showcasing notable enhancements in write-intensive scenarios.

Implementation considerations

When opting for DLVs, it’s crucial to be aware of the following considerations:

  1. DLV activation requires a reboot: After modifying the DLV setting for a DB instance, a reboot is mandatory for the changes to take effect (a sample CLI call follows this list).
  2. Recommended for larger configurations: While DLVs offer advantages across various scenarios, they are particularly recommended for database configurations of five TiB or greater. This recommendation underscores DLV's effectiveness in handling substantial storage volumes.
  3. Benchmark and test: It is always recommended to test and review the performance of your application traffic rather than relying solely on standard benchmarks driven by synthetic load.
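Recent AWS CLI versions expose a dedicated-log-volume flag on modify-db-instance (check your CLI version against the RDS CLI reference). Enabling DLV on an existing instance might look like this, with the instance identifier being yours:

aws rds modify-db-instance \
  --db-instance-identifier my-rds-instance \
  --dedicated-log-volume \
  --apply-immediately

Remember that the change triggers a reboot, so schedule it in a maintenance window.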

DLV in Multi-AZ deployments

Amazon RDS seamlessly integrates DLVs with Multi-AZ deployments. Whether you’re modifying an existing Multi-AZ instance or creating a new one, DLVs are automatically created for both the primary and secondary instances. This ensures that the advantages of DLV extend to enhanced availability and reliability in Multi-AZ configurations.

DLV with read replicas

DLV support extends to read replicas. If the primary DB instance has DLV enabled, all read replicas created after DLV activation inherit this feature. However, it’s important to note that read replicas created before DLV activation will not have it enabled by default. Explicit modification is required for pre-existing read replicas to leverage DLV benefits.

Conclusion

Dedicated Log Volumes have emerged as a strong option for optimizing Amazon RDS performance. By segregating transaction logs and harnessing the power of dedicated storage, DLVs contribute to enhanced efficiency and consistency. Integrating DLVs into your database strategy will help you toward your efforts in achieving peak performance and reliability.

How Percona can help

Percona is a trusted partner for many industry-leading organizations across the globe that rely on us for help in fully utilizing their AWS RDS environment. Here’s how Percona can enhance your AWS RDS experience:

Expert configuration: RDS works well out of the box, but having Percona’s expertise ensures optimal performance. Our consultants will configure your AWS RDS instances for the best possible performance, ensuring minimal TCO.

Decades of experience: Our consultants bring decades of experience in solving complex database performance issues. They understand your goals and objectives, providing unbiased solutions for your database environment.

Blog resources: Percona experts are actively contributing to the community through knowledge sharing via forums and blogs.

Discover how our expert support, services, and enterprise-grade open source database software can make your business run better.

Get in touch

Percona Database Performance Blog

Creating a custom Laravel Pulse card


Laravel Pulse is a lightweight application monitoring tool for Laravel. It was just
released today and I took a bit of time to create a custom card to
show outdated composer dependencies.

This is what the card looks like right now:

I was surprised at how easy the card was to create. Pulse has all of the infrastructure built out for:

  • storing data in response to events or on a schedule
  • retrieving your data back, aggregated or not
  • rendering your data into a view.

The hooks are all very well thought through.

There is no official documentation for custom cards yet, so much of this is subject to change. Everything I’m telling
you here I learned through diving into the source code.

Recording data

The first step is to create a Recorder that will record the data you’re looking to monitor. If you
open config/pulse.php you’ll see a list of recorders:

/*
|--------------------------------------------------------------------------
| Pulse Recorders
|--------------------------------------------------------------------------
|
| The following array lists the "recorders" that will be registered with
| Pulse, along with their configuration. Recorders gather application
| event data from requests and tasks to pass to your ingest driver.
|
*/

'recorders' => [
    Recorders\Servers::class => [
        'server_name' => env('PULSE_SERVER_NAME', gethostname()),
        'directories' => explode(':', env('PULSE_SERVER_DIRECTORIES', '/')),
    ],

    // more recorders ...
]

The recorders listen for application events. Pulse emits a SharedBeat event if your recorder needs to run on an
interval instead of in response to an application event.

For example, the Servers recorder records server stats every 15 seconds in response to the SharedBeat event:

class Servers
{
    public string $listen = SharedBeat::class;

    public function record(SharedBeat $event): void
    {
        if ($event->time->second % 15 !== 0) {
            return;
        }

        // Record server stats...
    }
}

But the Queues recorder listens for specific application events:

class Queues
{
    public array $listen = [
        JobReleasedAfterException::class,
        JobFailed::class,
        JobProcessed::class,
        JobProcessing::class,
        JobQueued::class,
    ];

    public function record(
        JobReleasedAfterException|JobFailed|JobProcessed|JobProcessing|JobQueued $event
    ): void {
        // Record the job...
    }
}

In our case, we just need to check for outdated packages once a day on a schedule, so we’ll use the SharedBeat event.

Creating the recorder

The recorder is a plain PHP class with a record method. Inside of that method you’re given one of the events to which
you’re listening. You also have access to Pulse in the constructor.

class Outdated
{
    public string $listen = SharedBeat::class;

    public function __construct(
        protected Pulse $pulse,
        protected Repository $config
    ) {
        //
    }

    public function record(SharedBeat $event): void
    {
        //
    }
}

The SharedBeat event has a time property on it, which we can use to decide if we want to run or not.

class Outdated
{
    // ...

    public function record(SharedBeat $event): void
    {
        // Only run once per day, at midnight. (Use a value comparison
        // here: startOfDay() returns a new instance, so a strict !==
        // check would always be true and the method would never run.)
        if ($event->time->notEqualTo($event->time->startOfDay())) {
            return;
        }
    }
}

Pulse handles invoking the record method; we just need to figure out what to do there. In our case, we're going to run composer outdated.

class Outdated
{
    // ...

    public function record(SharedBeat $event): void
    {
        // Only run once per day (value comparison; see note above)
        if ($event->time->notEqualTo($event->time->startOfDay())) {
            return;
        }

        // Run composer to get the outdated dependencies
        $result = Process::run("composer outdated -D -f json");

        if ($result->failed()) {
            throw new RuntimeException(
                'Composer outdated failed: ' . $result->errorOutput()
            );
        }

        // Just make sure it's valid JSON (JSON_THROW_ON_ERROR must be
        // passed as the flags argument, not as the second parameter)
        json_decode($result->output(), true, 512, JSON_THROW_ON_ERROR);
    }
}
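For context, composer outdated -D -f json returns an object with an installed array; its rough shape is sketched below (fields abridged, and the package versions are invented for illustration):

{
    "installed": [
        {
            "name": "laravel/framework",
            "version": "v10.30.0",
            "latest": "v10.35.1",
            "latest-status": "semver-safe-update",
            "description": "The Laravel Framework."
        }
    ]
}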

Writing to the Pulse tables

Pulse ships with three separate tables:

  • pulse_aggregates
  • pulse_entries
  • pulse_values

There is currently no documentation, but from what I can tell the pulse_aggregates table stores pre-computed rollups
of time-series data for better performance. The entries table stores individual events, like requests or exceptions.
The values table seems to be a simple "point in time" store.

We're going to use the values table to stash the output of composer outdated. To do this, we use the set() method on the injected Pulse instance.

class Outdated
{
    // ...

    public function record(SharedBeat $event): void
    {
        // Only run once per day (value comparison; see note above)
        if ($event->time->notEqualTo($event->time->startOfDay())) {
            return;
        }

        // Run composer to get the outdated dependencies
        $result = Process::run("composer outdated -D -f json");

        if ($result->failed()) {
            throw new RuntimeException(
                'Composer outdated failed: ' . $result->errorOutput()
            );
        }

        // Just make sure it's valid JSON
        json_decode($result->output(), true, 512, JSON_THROW_ON_ERROR);

        // Store it in one of the Pulse tables
        $this->pulse->set('composer_outdated', 'result', $result->output());
    }
}

Now our data is stored and will be updated once per day. Let’s move on to displaying that data!

(Note: You don’t have to create a recorder. Your card can pull data from anywhere!)

Displaying the data

Pulse is built on top of Laravel Livewire. To add a new Pulse card to your dashboard,
we’ll create a new Livewire component called ComposerOutdated.

php artisan livewire:make ComposerOutdated

# COMPONENT CREATED
# CLASS: app/Livewire/ComposerOutdated.php
# VIEW: resources/views/livewire/composer-outdated.blade.php

By default, our ComposerOutdated class extends Livewire’s Component class, but we’re going to change that to extend
Pulse’s Card class.

 namespace App\Livewire;

-use Livewire\Component;
+use Laravel\Pulse\Livewire\Card;

-class ComposerOutdated extends Component
+class ComposerOutdated extends Card
 {
     public function render()
     {
         return view('livewire.composer-outdated');
     }
 }

To get our data back out of the Pulse data store, we can just use the Pulse facade. This is one of the things I’m
really liking about Pulse. I don’t have to add migrations, maintain tables, add new models, etc. I can just use their
data store!

class ComposerOutdated extends Card
{
    public function render()
    {
        // Get the data out of the Pulse data store.
        $packages = Pulse::values('composer_outdated', ['result'])->first();

        $packages = $packages
            ? json_decode($packages->value, true, 512, JSON_THROW_ON_ERROR)['installed']
            : [];

        return View::make('composer-outdated', [
            'packages' => $packages,
        ]);
    }
}

Publishing the Pulse dashboard

To add our card to the Pulse dashboard, we must first publish the vendor view.

php artisan vendor:publish --tag=pulse-dashboard

Now, in our resources/views/vendor/pulse folder, we have a new dashboard.blade.php where we can add our custom card. This is what it looks like by default:

<x-pulse>
    <livewire:pulse.servers cols="full" />

    <livewire:pulse.usage cols="4" rows="2" />

    <livewire:pulse.queues cols="4" />

    <livewire:pulse.cache cols="4" />

    <livewire:pulse.slow-queries cols="8" />

    <livewire:pulse.exceptions cols="6" />

    <livewire:pulse.slow-requests cols="6" />

    <livewire:pulse.slow-jobs cols="6" />

    <livewire:pulse.slow-outgoing-requests cols="6" />
</x-pulse>

Adding our custom card

We can now add our new card wherever we want!

<x-pulse>
    <livewire:composer-outdated cols="1" rows="3" />

    <livewire:pulse.servers cols="full" />

    <livewire:pulse.usage cols="4" rows="2" />

    <livewire:pulse.queues cols="4" />

    <livewire:pulse.cache cols="4" />

    <livewire:pulse.slow-queries cols="8" />

    <livewire:pulse.exceptions cols="6" />

    <livewire:pulse.slow-requests cols="6" />

    <livewire:pulse.slow-jobs cols="6" />

    <livewire:pulse.slow-outgoing-requests cols="6" />
</x-pulse>

Community site

There is a lot to learn about Pulse, and I’ll continue to post here as I do. I’m working
on builtforpulse.com to showcase Pulse-related packages and articles, so make sure you stay
tuned over there!

GitHub Package

You can see this package at github.com/aarondfrancis/pulse-outdated.

YouTube Video

Laravel News Links

‘Doom’ at 30: What It Means, By the People Who Made It

30 years ago today, Doom "invented the modern PC games industry, as a place dominated by technologically advanced action shooters," remembers the Guardian:
In late August 1993, a young programmer named Dave Taylor walked into an office block… The carpets, he discovered, were stained with spilled soda, the ceiling tiles yellowed by water leaks from above. But it was here that a team of five coders, artists and designers were working on arguably the most influential action video game ever made. This was id Software. This was Doom… [W]hen Taylor met id's charismatic designer and coder John Romero, he was shown their next project… "There were no critters in it yet," recalls Taylor of that first demo. "There was no gaming stuff at all. It was really just a 3D engine. But you could move around it really fluidly and you got such a sense of immersion it was shocking. The renderer was kick ass and the textures were so gritty and cool. I thought I was looking at an in-game cinematic. And Romero is just the consummate demo man: he really feeds off of your energy. So as my jaw hit the floor, he got more and more animated. Doom was amazing, but John was at least half of that demo's impact on me." […]

In late 1992, it had become clear that the 3D engine John Carmack was planning for Doom would speed up real-time rendering while also allowing the use of texture maps to add detail to environments. As a result, Romero's ambition was to set Doom in architecturally complex worlds with multiple storeys, curved walls, moving platforms. A hellish Escher-esque mall of death… "Doom was the first to combine huge rooms, stairways, dark areas and bright areas," says Romero, "and lava and all that stuff, creating a really elaborate abstract world. That was never possible before…." [T]he way Doom combined fast-paced 3D action with elaborate, highly staged level design would prove hugely influential in the years to come. It's there in every first-person action game we play today…

But Doom wasn't just a single-player game. Carmack consumed an entire library of books on computer networking before working on the code that would allow players to connect their PCs via modem to a local area network (LAN) and play in the game together… Doom brought fast-paced, real-time action, both competitive and cooperative, into the gaming mainstream. Seeing your friends battling imps and zombie space marines beside you in a virtual world was an exhilarating experience…

When Doom was launched on 10 December 1993, it became immediately clear that the game was all-consuming — id Software had chosen to make the abbreviated shareware version available via the FTP site of the University of Wisconsin-Madison, but that crashed almost immediately, bringing the institution's network to its knees…

"We changed the rules of design," says Romero. "Getting rid of lives, which was an arcade holdover that every game had; getting rid of score because it was not the goal of the game. We wanted to make it so that, if the player died, they'd just start that level over — we were constantly pushing them forward. The game's attitude was, I want you to keep playing. We wanted to get people to the point where they always needed more."

It was a unique moment in time. In the article, designer Sandy Petersen remembers that "I would sometimes get old dungeons I'd done for D&D and use them as the basis for making a map in Doom." Cheat codes had been included for debugging purposes — but were left in the game for players to discover. The article even includes a link to a half-hour video of a 1993 visit to id Software filmed by BBS owner Dan Linton.
And today on X, John Romero shared a link to the Guardian’s article, along with some appreciative words for anyone who’s ever played the game. "DOOM is still remembered because of the community that plays and mods it 30 years on. I’m grateful to be a part of that community and fortunate to have been there at its beginning."
The Guardian’s article notes that now Romero "is currently working on Sigil 2, a spiritual successor to the original Doom series."


Read more of this story at Slashdot.

Slashdot

Marble Arcade Machines


Arcade cabinets made of marble feature meticulously carved sculptures of iconic video game characters. Weighing 300 kg, each arcade machine boasts a high-quality Calacatta marble exterior, an LCD screen, and a joystick pad. These unique, limited-edition arcade cabinets feature Mario, Sonic, Alien, and other video game character sculptures carved on the sides. Juxtaposition of ancient-inspired marble with […]

Toxel.com

FrankenPHP v1.0 is Here



FrankenPHP just hit a significant milestone this week, reaching a v1.0 release. A modern PHP application server written in Go, FrankenPHP gives you a production-grade PHP server with just one command.
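To give a sense of the configuration involved, a minimal Caddyfile for FrankenPHP looks roughly like this (the shape follows the FrankenPHP docs; adjust the site address and document root for your app):

{
    frankenphp
    order php_server before file_server
}

localhost {
    root * public/
    php_server
}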

It includes native support for Symfony, Laravel, WordPress, and more:

  • Production-grade PHP server, powered by Caddy
  • Easy deploy – package your PHP apps as a standalone, self-executable binary
  • Run only one service – no more separate PHP-FPM and Nginx processes
  • Extensible – compatible with PHP 8.2+, most PHP extensions, and all Caddy modules
  • Worker mode – boot your application once and keep it in memory
  • Real-time events sent to the browser as a JavaScript event
  • Zstandard and Gzip compression
  • Structured logging
  • Monitor Caddy with built-in Prometheus metrics
  • Native support for HTTPS, HTTP/2 and HTTP/3
  • Automatic HTTPS certificates and renewals
  • Graceful release – deploy your apps with zero downtime
  • Support for Early Hints

Is there support for FrankenPHP in Laravel Octane?
Not yet, but there is an active pull request to Add support for FrankenPHP to Laravel Octane.

Which PHP modules are supported?
I tried looking for a definitive list, but from what I gather most popular PHP extensions should work. The documentation confirms that OPcache and Xdebug are natively supported by FrankenPHP.

You can get started with FrankenPHP at frankenphp.dev, and browse the documentation to learn about worker mode, Docker images, and creating static binaries of your application.

If you want to experiment with your application, the easiest way to try it out is to run the following Docker command:

docker run -v $PWD:/app/public \
    -p 80:80 -p 443:443 \
    dunglas/frankenphp

For Laravel, you’ll need to run the following Docker command (the FrankenPHP Laravel docs have complete setup instructions):

docker run -p 443:443 -v $PWD:/app dunglas/frankenphp

You can also run the frankenphp binary on macOS and Linux if you'd rather not use Docker.
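Per the FrankenPHP docs' php-server example, serving an app with the standalone binary looks roughly like this (download the binary for your platform first; the document root is yours to set):

./frankenphp php-server -r public/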


The post FrankenPHP v1.0 is Here appeared first on Laravel News.

Join the Laravel Newsletter to get all the latest Laravel articles like this directly in your inbox.

Laravel News

Velcro Glow Patch

GlowDaddy's Velcro glow-in-the-dark cards can be used alone or attached to the hook-and-loop panels found on tactical bags. Each is precision cut from HyperGlow luminescent material, which can glow brightly for hours after exposure to direct sunlight or a UV light source. The card measures 3.37″ L x 2.1″ W x 0.12″ thick.

The Awesomer