https://media.notthebee.com/articles/657883b4a4a3b657883b4a4a3c.jpg
Christmas carols have officially peaked, you guys. Watch this video and tell me otherwise.
Not the Bee
https://percona.com/blog/wp-content/uploads/2023/12/benchmark-rds-mysql-dlv3-1024×629.png
A quick configuration change may be all it takes to improve the performance of your AWS RDS for MySQL instance. Here, we discuss a notable new Amazon RDS feature, the Dedicated Log Volume (DLV), introduced to boost database performance. While this discussion primarily targets MySQL instances, the principles also apply to PostgreSQL and MariaDB instances.
A Dedicated Log Volume (DLV) is a specialized storage volume designed to house database transaction logs separately from the volume containing the database tables. This separation aims to streamline transaction write logging, improving efficiency and consistency. DLVs are particularly advantageous for databases with large allocated storage, high I/O per second (IOPS) requirements, or latency-sensitive workloads.
DLVs are currently supported for Provisioned IOPS (PIOPS) storage, with a fixed size of 1,000 GiB and 3,000 Provisioned IOPS. Amazon RDS supports DLVs across several database engines, including MySQL, MariaDB, and PostgreSQL.
The documentation doesn’t say much about additional charges for the Dedicated Log Volumes, but I reached out to AWS support, who responded exactly as follows:
Please note that there are no additional costs for enabling a dedicated log volume (DLV) on Amazon RDS. By default, to enable DLV, you must be using PIOPS storage, sized at 1,000 GiB with 3,000 IOPS, and you will be priced according to the storage type.
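For reference (this isn't from the original post), DLV can be enabled from the console or the AWS CLI; the --dedicated-log-volume flag sketched below is the option AWS added alongside the feature, but treat the exact flag and its behavior as something to verify against your CLI version:

# Hedged sketch: enable DLV on an existing RDS instance ("mydb" is a placeholder).
# Requires PIOPS storage (1,000 GiB / 3,000 IOPS, per the requirements above);
# verify the flag name against your AWS CLI version before relying on it.
aws rds modify-db-instance \
    --db-instance-identifier mydb \
    --dedicated-log-volume \
    --apply-immediately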
Implementing dedicated mounts for components such as binlogs and the datadir is a recommended standard practice. Isolating logs and data on separate mounts makes the setup more manageable and efficient: it allows I/O to be optimized per volume, prevents potential bottlenecks, and improves overall system performance. Adopting this practice promotes a structured storage strategy and, ultimately, a more robust database environment.
Thus, although Dedicated Log Volumes (DLVs) are new to AWS RDS, they implement a long-recommended best practice and are a welcome setup improvement for your RDS instance.
We performed a standard benchmarking test using the sysbench tool to compare the performance of a DLV instance vs a standard RDS MySQL instance, as shared in the following section.
Setup
| 2 RDS single DB instances (Regular and DLV enabled) | 1 EC2 instance (sysbench) |
|---|---|
| db.m6i.2xlarge | c5.2xlarge |
| MySQL 8.0.31 | CentOS 7 |
| 8 cores / 32 GB | 8 cores / 16 GB |
| Data size: 32 GB | |
- The default RDS configuration was used, with binlogs enabled and full ACID-compliance settings.
Benchmarks were run for write-only, read-write, and read-only sysbench traffic.
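The post doesn't list the exact sysbench commands; as an illustration only, a write-only run against an RDS endpoint looks roughly like this (the endpoint, credentials, and sizing below are placeholders, not the benchmark's actual parameters), with oltp_read_write and oltp_read_only swapped in for the other two workloads:

# Illustrative sysbench invocation -- host, credentials, and sizing are placeholders.
sysbench oltp_write_only \
    --mysql-host=mydb.xxxxxxxx.us-east-1.rds.amazonaws.com \
    --mysql-user=sbtest --mysql-password=... \
    --mysql-db=sbtest \
    --tables=8 --table-size=10000000 \
    --threads=64 --time=300 --report-interval=10 \
    run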
Benchmarking analysis
Benchmarking outcome
Based on the sysbench benchmark results in the specified environment, it is strongly advised to employ DLV for a standard RDS instance. DLV demonstrates superior performance across most sysbench workloads, particularly showcasing notable enhancements in write-intensive scenarios.
When opting for DLVs, it’s crucial to be aware of the following considerations:
DLV in Multi-AZ deployments
Amazon RDS seamlessly integrates DLVs with Multi-AZ deployments. Whether you’re modifying an existing Multi-AZ instance or creating a new one, DLVs are automatically created for both the primary and secondary instances. This ensures that the advantages of DLV extend to enhanced availability and reliability in Multi-AZ configurations.
DLV with read replicas
DLV support extends to read replicas. If the primary DB instance has DLV enabled, all read replicas created after DLV activation inherit this feature. However, it’s important to note that read replicas created before DLV activation will not have it enabled by default. Explicit modification is required for pre-existing read replicas to leverage DLV benefits.
Dedicated Log Volumes have emerged as a strong option for optimizing Amazon RDS performance. By segregating transaction logs and harnessing the power of dedicated storage, DLVs contribute to enhanced efficiency and consistency. Integrating DLVs into your database strategy will support your efforts to achieve peak performance and reliability.
Percona is a trusted partner for many industry-leading organizations across the globe that rely on us for help in fully utilizing their AWS RDS environment. Here’s how Percona can enhance your AWS RDS experience:
Expert configuration: RDS works well out of the box, but having Percona’s expertise ensures optimal performance. Our consultants will configure your AWS RDS instances for the best possible performance, ensuring minimal TCO.
Decades of experience: Our consultants bring decades of experience in solving complex database performance issues. They understand your goals and objectives, providing unbiased solutions for your database environment.
Blog resources: Percona experts are actively contributing to the community through knowledge sharing via forums and blogs. For example, here are two blogs on this subject:
Discover how our expert support, services, and enterprise-grade open source database software can make your business run better.
Percona Database Performance Blog
Welcome to the ultimate guide to debugging MySQL issues — your comprehensive checklist for tackling MySQL database environment challenges. This checklist for MySQL debugging is as useful to a…
The post What do you need to debug MySQL issues – a checklist first appeared on Change Is Inevitable.
Planet MySQL
I recently purchased a new MacBook. Impressed as I was, I was equally amazed by how easy…
Laracasts
https://d8nrpaglj2m0a.cloudfront.net/0e3d71bd-1b15-4fd4-b522-1272db7b946e/images/articles/og-pulse.jpg
Laravel Pulse is a lightweight application monitoring tool for Laravel. It was just
released today and I took a bit of time to create a custom card to
show outdated composer dependencies.
This is what the card looks like right now:
I was surprised at how easy the card was to create. Pulse has all of the infrastructure built out for:
The hooks are all very well thought through.
There is no official documentation for custom cards yet, so much of this is subject to change. Everything I’m telling
you here I learned through diving into the source code.
The first step is to create a Recorder
that will record the data you’re looking to monitor. If you
open config/pulse.php
you’ll see a list of recorders:
/*
|--------------------------------------------------------------------------
| Pulse Recorders
|--------------------------------------------------------------------------
|
| The following array lists the "recorders" that will be registered with
| Pulse, along with their configuration. Recorders gather application
| event data from requests and tasks to pass to your ingest driver.
|
*/
'recorders' => [
Recorders\Servers::class => [
'server_name' => env('PULSE_SERVER_NAME', gethostname()),
'directories' => explode(':', env('PULSE_SERVER_DIRECTORIES', '/')),
],
// more recorders ...
]
The recorders listen for application events. Pulse emits a SharedBeat
event if your recorder needs to run on an
interval instead of in response to an application event.
For example, the Servers
recorder records server stats every 15 seconds in response to the SharedBeat
event:
class Servers
{
public string $listen = SharedBeat::class;
public function record(SharedBeat $event): void
{
if ($event->time->second % 15 !== 0) {
return;
}
// Record server stats...
}
}
But the Queues recorder listens for specific application events:
class Queues
{
public array $listen = [
JobReleasedAfterException::class,
JobFailed::class,
JobProcessed::class,
JobProcessing::class,
JobQueued::class,
];
public function record(
JobReleasedAfterException|JobFailed|JobProcessed|JobProcessing|JobQueued $event
): void
{
// Record the job...
}
}
In our case, we just need to check for outdated packages once a day on a schedule, so we’ll use the SharedBeat
event.
The recorder is a plain PHP class with a record
method. Inside of that method you’re given one of the events to which
you’re listening. You also have access to Pulse
in the constructor.
class Outdated
{
public string $listen = SharedBeat::class;
public function __construct(
protected Pulse $pulse,
protected Repository $config
) {
//
}
public function record(SharedBeat $event): void
{
//
}
}
The SharedBeat
event has a time property on it, which we can use to decide if we want to run or not.
class Outdated
{
    // ...

    public function record(SharedBeat $event): void
    {
        // Only run once per day. Note: a strict !== between two Carbon
        // instances compares object identity and is always true, so
        // compare the values instead.
        if ($event->time->notEqualTo($event->time->startOfDay())) {
            return;
        }
    }
}
Pulse will handle invoking the record method; we just need to figure out what to do there. In our case, we’re going to run composer outdated.
class Outdated
{
    // ...

    public function record(SharedBeat $event): void
    {
        // Only run once per day (value comparison, not strict identity)
        if ($event->time->notEqualTo($event->time->startOfDay())) {
            return;
        }

        // Run composer to get the outdated dependencies
        $result = Process::run("composer outdated -D -f json");

        if ($result->failed()) {
            throw new RuntimeException(
                'Composer outdated failed: ' . $result->errorOutput()
            );
        }

        // Just make sure it's valid JSON (JSON_THROW_ON_ERROR must be passed
        // as the flags argument, not the second argument)
        json_decode($result->output(), associative: true, flags: JSON_THROW_ON_ERROR);
    }
}
Pulse ships with three separate tables: pulse_aggregates, pulse_entries, and pulse_values.
There is currently no documentation, but from what I can tell the pulse_aggregates
table stores pre-computed rollups
of time-series data for better performance. The entries
table stores individual events, like requests or exceptions.
The values
table seems to be a simple "point in time" store.
We’re going to use the values table to stash the output of composer outdated. To do this, we use the pulse->set() method.
class Outdated
{
    // ...

    public function record(SharedBeat $event): void
    {
        // Only run once per day (value comparison, not strict identity)
        if ($event->time->notEqualTo($event->time->startOfDay())) {
            return;
        }

        // Run composer to get the outdated dependencies
        $result = Process::run("composer outdated -D -f json");

        if ($result->failed()) {
            throw new RuntimeException(
                'Composer outdated failed: ' . $result->errorOutput()
            );
        }

        // Just make sure it's valid JSON
        json_decode($result->output(), associative: true, flags: JSON_THROW_ON_ERROR);

        // Store it in one of the Pulse tables
        $this->pulse->set('composer_outdated', 'result', $result->output());
    }
}
Now our data is stored and will be updated once per day. Let’s move on to displaying that data!
(Note: You don’t have to create a recorder. Your card can pull data from anywhere!)
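If you do go the recorder route, it also needs to be registered in config/pulse.php alongside the built-in recorders. A minimal sketch (the App\Pulse\Recorders namespace and the empty options array are my own choices, not something Pulse dictates):

// config/pulse.php
'recorders' => [
    // ... the built-in recorders ...

    // Register the custom recorder; put the class wherever you like
    // and reference it here (namespace is an assumption).
    App\Pulse\Recorders\Outdated::class => [],
],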
Pulse is built on top of Laravel Livewire. To add a new Pulse card to your dashboard, we’ll create a new Livewire component called ComposerOutdated.
php artisan livewire:make ComposerOutdated
# COMPONENT CREATED
# CLASS: app/Livewire/ComposerOutdated.php
# VIEW: resources/views/livewire/composer-outdated.blade.php
By default, our ComposerOutdated
class extends Livewire’s Component
class, but we’re going to change that to extend
Pulse’s Card
class.
namespace App\Livewire;

use Livewire\Component;
use Laravel\Pulse\Livewire\Card;

- class ComposerOutdated extends Component
+ class ComposerOutdated extends Card
{
    public function render()
    {
        return view('livewire.composer-outdated');
    }
}
To get our data back out of the Pulse data store, we can just use the Pulse
facade. This is one of the things I’m
really liking about Pulse. I don’t have to add migrations, maintain tables, add new models, etc. I can just use their
data store!
class ComposerOutdated extends Card
{
    public function render()
    {
        // Get the data out of the Pulse data store.
        $packages = Pulse::values('composer_outdated', ['result'])->first();

        $packages = $packages
            ? json_decode($packages->value, associative: true, flags: JSON_THROW_ON_ERROR)['installed']
            : [];

        return View::make('livewire.composer-outdated', [
            'packages' => $packages,
        ]);
    }
}
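The card’s Blade view isn’t shown in the post; as a minimal sketch, using plain markup rather than Pulse’s styled card components and assuming composer’s JSON field names (name, version, latest), it could be as simple as:

{{-- resources/views/livewire/composer-outdated.blade.php (minimal sketch) --}}
<div>
    <h3 class="font-bold">Outdated Composer Packages</h3>

    <ul>
        @forelse ($packages as $package)
            <li>
                {{ $package['name'] }}:
                {{ $package['version'] }} &rarr; {{ $package['latest'] }}
            </li>
        @empty
            <li>Everything is up to date.</li>
        @endforelse
    </ul>
</div>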
To add our card to the Pulse dashboard, we must first publish the vendor view.
php artisan vendor:publish --tag=pulse-dashboard
Now, in our resources/views/vendor/pulse
folder, we have a new dashboard.blade.php
where we can add our custom card. This is what it looks like by default:
<x-pulse>
<livewire:pulse.servers cols="full" />
<livewire:pulse.usage cols="4" rows="2" />
<livewire:pulse.queues cols="4" />
<livewire:pulse.cache cols="4" />
<livewire:pulse.slow-queries cols="8" />
<livewire:pulse.exceptions cols="6" />
<livewire:pulse.slow-requests cols="6" />
<livewire:pulse.slow-jobs cols="6" />
<livewire:pulse.slow-outgoing-requests cols="6" />
</x-pulse>
We can now add our new card wherever we want!
<x-pulse>
<livewire:composer-outdated cols="1" rows="3" />
<livewire:pulse.servers cols="full" />
<livewire:pulse.usage cols="4" rows="2" />
<livewire:pulse.queues cols="4" />
<livewire:pulse.cache cols="4" />
<livewire:pulse.slow-queries cols="8" />
<livewire:pulse.exceptions cols="6" />
<livewire:pulse.slow-requests cols="6" />
<livewire:pulse.slow-jobs cols="6" />
<livewire:pulse.slow-outgoing-requests cols="6" />
</x-pulse>
There is a lot to learn about Pulse, and I’ll continue to post here as I do. I’m working
on builtforpulse.com to showcase Pulse-related packages and articles, so make sure you stay
tuned over there!
You can see this package at github.com/aarondfrancis/pulse-outdated.
Laravel News Links
https://media.notthebee.com/articles/6573842cb0c236573842cb0c24.jpg
Sheldon Ruston was hired to move the Elmwood Building in downtown Halifax off its foundations for renovations and development, but that was no easy task.
Not the Bee
30 years ago today, Doom "invented the modern PC games industry, as a place dominated by technologically advanced action shooters," remembers the Guardian:
In late August 1993, a young programmer named Dave Taylor walked into an office block… The carpets, he discovered, were stained with spilled soda, the ceiling tiles yellowed by water leaks from above. But it was here that a team of five coders, artists and designers were working on arguably the most influential action video game ever made. This was id Software. This was Doom… [W]hen Taylor met id’s charismatic designer and coder John Romero, he was shown their next project… "There were no critters in it yet," recalls Taylor of that first demo. "There was no gaming stuff at all. It was really just a 3D engine. But you could move around it really fluidly and you got such a sense of immersion it was shocking. The renderer was kick ass and the textures were so gritty and cool. I thought I was looking at an in-game cinematic. And Romero is just the consummate demo man: he really feeds off of your energy. So as my jaw hit the floor, he got more and more animated. Doom was amazing, but John was at least half of that demo’s impact on me." […]

In late 1992, it had become clear that the 3D engine John Carmack was planning for Doom would speed up real-time rendering while also allowing the use of texture maps to add detail to environments. As a result, Romero’s ambition was to set Doom in architecturally complex worlds with multiple storeys, curved walls, moving platforms. A hellish Escher-esque mall of death… "Doom was the first to combine huge rooms, stairways, dark areas and bright areas," says Romero, "and lava and all that stuff, creating a really elaborate abstract world. That was never possible before…." [T]he way Doom combined fast-paced 3D action with elaborate, highly staged level design would prove hugely influential in the years to come. It’s there in every first-person action game we play today…

But Doom wasn’t just a single-player game. Carmack consumed an entire library of books on computer networking before working on the code that would allow players to connect their PCs via modem to a local area network (LAN) and play in the game together… Doom brought fast-paced, real-time action, both competitive and cooperative, into the gaming mainstream. Seeing your friends battling imps and zombie space marines beside you in a virtual world was an exhilarating experience…

When Doom was launched on 10 December 1993, it became immediately clear that the game was all-consuming — id Software had chosen to make the abbreviated shareware version available via the FTP site of the University of Wisconsin-Madison, but that crashed almost immediately, bringing the institution’s network to its knees…

"We changed the rules of design," says Romero. "Getting rid of lives, which was an arcade holdover that every game had; getting rid of score because it was not the goal of the game. We wanted to make it so that, if the player died, they’d just start that level over — we were constantly pushing them forward. The game’s attitude was, I want you to keep playing. We wanted to get people to the point where they always needed more." It was a unique moment in time.

In the article, designer Sandy Petersen remembers that "I would sometimes get old dungeons I’d done for D&D and use them as the basis for making a map in Doom." Cheat codes had been included for debugging purposes — but were left in the game for players to discover. The article even includes a link to a half-hour video of a 1993 visit to id Software filmed by BBS owner Dan Linton.
And today on X, John Romero shared a link to the Guardian’s article, along with some appreciative words for anyone who’s ever played the game. "DOOM is still remembered because of the community that plays and mods it 30 years on. I’m grateful to be a part of that community and fortunate to have been there at its beginning."
The Guardian’s article notes that now Romero "is currently working on Sigil 2, a spiritual successor to the original Doom series."
Read more of this story at Slashdot.
Slashdot
https://www.toxel.com/wp-content/uploads/2023/12/marblearcade01.jpg
Arcade cabinets made of marble feature meticulously carved sculptures of iconic video game characters. Weighing 300 kg, each arcade machine boasts a high-quality Calacatta marble exterior, an LCD screen, and a joystick pad. These unique, limited-edition arcade cabinets feature Mario, Sonic, Alien, and other video game character sculptures carved on the sides. Juxtaposition of ancient-inspired marble with […]
Toxel.com
https://static.tildacdn.com/tild3163-3566-4430-b533-343665643032/-/empty/Add_a_heading-15.png
These metrics focus on the efficiency of various cache systems within the database, helping to identify potential bottlenecks and areas for optimization. They measure the hit rate and fragmentation of different cache types, such as thread, table, MyISAM, and InnoDB caches, to ensure that frequently accessed data is readily available and cache usage is optimized.
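For context (not from the article itself), the raw counters behind such hit-rate metrics come from MySQL's global status variables; a quick way to eyeball them:

# Not from the article: standard MySQL status counters commonly used to derive
# cache hit rates (variable names are standard; the article's exact metric
# definitions may differ).
mysql -e "SHOW GLOBAL STATUS LIKE 'Threads_created'"          # thread cache misses
mysql -e "SHOW GLOBAL STATUS LIKE 'Connections'"              # total connection attempts
mysql -e "SHOW GLOBAL STATUS LIKE 'Table_open_cache_%'"       # table cache hits/misses
mysql -e "SHOW GLOBAL STATUS LIKE 'Key_read%'"                # MyISAM key cache activity
mysql -e "SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read%'" # InnoDB buffer pool activity
# e.g. thread cache hit rate = 1 - Threads_created / Connections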
Laravel News Links
https://theawesomer.com/photos/2023/12/velcro_glow_patches_t.jpg
GlowDaddy’s Velcro glow-in-the-dark cards can be used alone or attached to the hook-and-loop panels found on tactical bags. Each is precision cut from HyperGlow luminescent material, which can glow brightly for hours after exposure to direct sunlight or a UV light source. The card measures 3.37″ L x 2.1″ W x 0.12″ thick.
The Awesomer