Deep sleep may buffer against Alzheimer’s memory loss

An older man with a white beard sleeps soundly in bed.

Deep sleep might help buffer against memory loss for older adults facing a heightened burden of Alzheimer’s disease, new research suggests.

Deep sleep, also known as non-REM slow-wave sleep, can act as a “cognitive reserve factor” that may increase resilience against a protein in the brain called beta-amyloid that is linked to memory loss caused by dementia. Disrupted sleep has previously been associated with faster accumulation of beta-amyloid protein in the brain.

“Think of deep sleep almost like a life raft that keeps memory afloat…”

However, the new research reveals that superior amounts of deep, slow-wave sleep can act as a protective factor against memory decline in those with existing high amounts of Alzheimer’s disease pathology—a potentially significant advance that experts say could help alleviate some of dementia’s most devastating outcomes.

“With a certain level of brain pathology, you’re not destined for cognitive symptoms or memory issues,” says Zsófia Zavecz, a postdoctoral researcher at the University of California, Berkeley’s Center for Human Sleep Science. “People should be aware that, despite having a certain level of pathology, there are certain lifestyle factors that will help moderate and decrease the effects.

“One of those factors is sleep and, specifically, deep sleep.”

Cognitive reserve factors

The research in the journal BMC Medicine is the latest in a large body of work aimed at finding a cure for Alzheimer’s disease and preventing it altogether.

As the most prevalent form of dementia, Alzheimer’s disease destroys memory pathways and, in advanced forms, interferes with a person’s ability to perform basic daily tasks. Roughly one in nine people over age 65 have the progressive disease—a proportion that is expected to grow rapidly as the baby boomer generation ages.

In recent years, scientists have probed the ways that deposits of beta-amyloid associate with Alzheimer’s disease and how such deposits also affect memory more generally. In addition to sleep being a foundational part of memory retention, the researchers previously discovered that a decline in a person’s deep sleep could act as a “crystal ball” to forecast a faster rate of future beta-amyloid buildup in the brain, after which dementia is more likely to set in.

Years of education, physical activity, and social engagement are widely believed to shore up a person’s resilience to severe brain pathology—essentially keeping the mind sharp, despite the decreased brain health. These are called cognitive reserve factors. However, most of them, such as past years of education or the size of one’s social network, cannot be easily changed or modified retroactively.

That idea of cognitive reserve became a compelling target for sleep researchers, says Matthew Walker, a professor of neuroscience and psychology and senior author of the study.

“If we believe that sleep is so critical for memory,” Walker says, “could sleep be one of those missing pieces in the explanatory puzzle that would tell us exactly why two people with the same amounts of vicious, severe amyloid pathology have very different memory?”

“If the findings supported the hypothesis, it would be thrilling, because sleep is something we can change,” he adds. “It is a modifiable factor.”

Filling in a missing puzzle piece

To test that question, the researchers recruited 62 older adults from the Berkeley Aging Cohort Study. Participants, who were healthy adults and not diagnosed with dementia, slept in a lab while researchers monitored their sleep waves with an electroencephalography (EEG) machine. Researchers also used a positron emission tomography (PET) scan to measure the amount of beta-amyloid deposits in the participants’ brains. Half of the participants had high amounts of amyloid deposits; the other half did not.

After they slept, the participants completed a memory task involving matching names to faces.

Those with high amounts of beta-amyloid deposits in their brain who also experienced higher levels of deep sleep performed better on the memory test than those with the same amount of deposits but who slept worse. This compensatory boost was limited to the group with amyloid deposits. In the group without pathology, deep sleep had no additional supportive effect on memory, which was understandable as there was no demand for resilience factors in otherwise intact cognitive function.

In other words, deep sleep bent the arrow of cognition upward, blunting the otherwise detrimental effects of beta-amyloid pathology on memory.

In their analysis, the researchers went on to control for other cognitive reserve factors, including education and physical activity, and sleep still demonstrated a marked benefit. This suggests that sleep, independent of these other factors, contributes to salvaging memory function in the face of brain pathology. These new discoveries, they say, indicate the importance of non-REM slow-wave sleep in counteracting some of the memory-impairing effects of beta-amyloid deposits.

Walker likened deep sleep to a rescue effort.

“Think of deep sleep almost like a life raft that keeps memory afloat, rather than memory getting dragged down by the weight of Alzheimer’s disease pathology,” Walker says. “It now seems that deep NREM sleep may be a new, missing piece in the explanatory puzzle of cognitive reserve. This is especially exciting because we can do something about it. There are ways we can improve sleep, even in older adults.”

Chief among those areas for improvement? Stick to a regular sleep schedule, stay mentally and physically active during the day, create a cool and dark sleep environment and minimize things like coffee late in the day and screen time before bed. A warm shower before turning in for the night has also been shown to increase the quality of deep, slow-wave sleep, Zavecz says.

With a small sample size of healthy participants, the study is simply an early step in understanding the precise ways sleep may forestall memory loss and the advance of Alzheimer’s, Zavecz says.

Still, it opens the door for potential longer-term experiments examining sleep-enhancement treatments that could have far-reaching implications.

“One of the advantages of this result is the application to a huge population right above the age of 65,” Zavecz says. “By sleeping better and doing your best to practice good sleep hygiene, which is easy to research online, you can gain the benefit of this compensatory function against this type of Alzheimer’s pathology.”

Source: UC Berkeley


Futurity

Dad Sits Down Son To Have ‘The Talk’ About The Star Wars Sequel Trilogy


BLUE SPRINGS, MO — A local father determined the time had come to sit his young son down and officially have “The Talk”…about the Star Wars sequel trilogy. The man reportedly knew he couldn’t avoid it any longer once the boy began to talk about how great The Last Jedi was.

“He’s at the age where he needs to know the truth,” Cody Callow said. “I mean, his name is Lucas, after all. He can’t be allowed to go on maturing without knowing how the world really works and why the sequel trilogy really isn’t very good. Letting him enter manhood under the impression The Last Jedi was an acceptable entry into Star Wars canon would be shirking my responsibility as a father.”

Cody said young Lucas Callow was coming dangerously close to feeling the sequel trilogy was the best representation of the Star Wars franchise, a philosophy that could lead him down a dark path in life. “Sure, today he ‘harmlessly’ enjoys Rian Johnson’s destruction of Luke Skywalker’s character,” Cody said, “but the next thing you know, he’ll be claiming Rey was actually ‘The Chosen One’ who brought balance to the Force. I can’t, in good conscience, let that happen. He’s my son!”

Later, Lucas was playing with his Rose Tico action figure when his father entered the room to start the difficult but important conversation by saying “Lucas, I am your father…”

At publishing time, the conversation was reported to have gone well, with the young man now fully understanding that the real Star Wars trilogy has been over since 1983.



Babylon Bee

‘Star Trek’ Fans Can Now Virtually Tour Every Starship Enterprise Bridge

A new web portal allows "Star Trek" fans to explore the iconic bridge of the starship Enterprise through 360-degree, 3D models and learn about its evolution throughout the franchise’s history. Smithsonian Magazine reports: The site features 360-degree, 3D models of the various versions of the Enterprise, as well as a timeline of the ship’s evolution throughout the franchise’s history. Fans of the show can also read detailed information about each version of the ship’s design, its significance to the "Star Trek" storyline and its production backstory. Developed in honor of the "Star Trek: Picard" series finale, which dropped late last month on Paramount+, the portal is a collaboration between the Roddenberry Estate, the Roddenberry Archive and the technology company OTOY. A group of well-known "Star Trek" artists — including Denise and Michael Okuda, Daren Dochterman, Doug Drexler and Dave Blass — also supported the project.
The voice of the late actress Majel Roddenberry, who played the Enterprise’s computer for years, will be added to the site in the future. Gene Roddenberry died in 1991, followed by Majel Roddenberry in 2008; the two had been married since 1969. The portal’s creators also released a short video, narrated by actor John de Lancie, exploring every version of the Enterprise’s bridge to date, "from its inception in Pato Guzman’s 1964 sketches, through its portrayal across decades of TV shows and feature films, to its latest incarnation on the Enterprise-G, as revealed in the final episode of ‘Star Trek: Picard,’" per the video description. Accompanying video interviews with "Star Trek" cast and crew — including William Shatner, who played Captain Kirk in the original series, and Terry Matalas, a showrunner for "Star Trek: Picard" — also explore the series’ legacy.



Slashdot

New ‘Double Dragon’ game trailer promises nostalgic beat-em-up thrills


The original Double Dragon basically invented co-op beat-em-up action in 1987, and now modern players are about to get a dose of nostalgic side-scrolling goodness thanks to a new franchise installment. Double Dragon Gaiden: Rise of the Dragons launches this fall for every major platform, including PC, Xbox consoles, PlayStation 4 and 5 and the Nintendo Switch.

What to expect from this installment? The trailer suggests a return to the tried-and-true beat-em-up formula. There’s a nice retro pixelated art style, 13 playable characters to choose from and, of course, two-player local co-op. The new title also includes a tag-team ability, so you actually play as two characters at once.

Developer Modus Games is teasing some roguelite elements, like a dynamic mission select feature that randomizes stage length, enemy number and difficulty. This is also a 2023 console game and not an arcade machine from the 1980s, so expect purchasable upgrades and some light RPG mechanics.

As for the plot, the years haven’t been kind to series protagonists Jimmy and Billy Lee. The sequel finds New York City devastated by nuclear war, which leads to gangs of hooligans roaming the radioactive streets. You know what happens next (you beat them up). It remains to be seen if your avatars can beat up that long nuclear winter.

Modus Games isn’t a well-known developer, but it has plenty of well-regarded indie titles under its belt, like Afterimage and Teslagrad 2. The trailer looks cool, so this is worth keeping an eye on, especially given that there hasn’t been a Double Dragon game since the long-ago days of 2016.

This article originally appeared on Engadget at https://www.engadget.com/new-double-dragon-game-trailer-promises-nostalgic-beat-em-up-thrills-175831891.html

Engadget

Laravel Filament: How To Upload Video Files


Filament admin panel has a File Upload field, but is it possible to upload video files with it? In this tutorial, I will demonstrate that and show the uploaded video on a custom view page using the basic HTML <video> element.

(Screenshot: viewing the video page)


Prepare Server for Large File Uploads

Before touching any code, we will first prepare the web server to accept larger uploads via the php.ini settings.

The default value for upload_max_filesize is 2 MB and 8 MB for post_max_size. We need to increase those values.

I’m using PHP 8.1. If yours is different, change the version to yours.

sudo nano /etc/php/8.1/fpm/php.ini

I will be uploading a 17MB video file so I will increase both upload_max_filesize and post_max_size to 20MB.

post_max_size = 20M

upload_max_filesize = 20M

Next, restart the PHP FPM service.

sudo service php8.1-fpm restart

Now, our PHP is ready to accept such files.


Uploading File

On the DB level, we will have one model Video with two string fields attachment and type.

app/Models/Video.php:

class Video extends Model
{
    protected $fillable = [
        'attachment',
        'type',
    ];
}
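For completeness, here is a minimal migration sketch for the videos table. The tutorial doesn’t show one, so the file name and column layout below are assumptions based on the model:

// database/migrations/xxxx_xx_xx_xxxxxx_create_videos_table.php (hypothetical)
Schema::create('videos', function (Blueprint $table) {
    $table->id();
    $table->string('attachment'); // path of the uploaded video file
    $table->string('type');       // mime type, e.g. video/mp4
    $table->timestamps();
});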

Next, Filament. When creating a Filament Resource, we also need to create a record view.

php artisan make:filament-resource Video --view

For the form, we will have a basic File Upload field. This field will be required, will have a max upload size of 20MB, and I will preserve the original filename. The last one is optional.

app/Filament/Resources/VideoResource.php:

class VideoResource extends Resource
{
    // ...

    public static function form(Form $form): Form
    {
        return $form
            ->schema([
                FileUpload::make('attachment')
                    ->required()
                    ->preserveFilenames()
                    ->maxSize(20000),
            ]);
    }

    // ...
}

Before uploading, we also need to set the max file size for Livewire. First, we need to publish the Livewire config.

php artisan livewire:publish --config

config/livewire.php:

return [

    // ...

    'temporary_file_upload' => [
        'disk' => null,         // Example: 'local', 's3' Default: 'default'
        'rules' => 'max:20000',
        'directory' => null,    // Example: 'tmp' Default 'livewire-tmp'
        'middleware' => null,   // Example: 'throttle:5,1' Default: 'throttle:60,1'
        'preview_mimes' => [    // Supported file types for temporary pre-signed file URLs.
            'png', 'gif', 'bmp', 'svg', 'wav', 'mp4',
            'mov', 'avi', 'wmv', 'mp3', 'm4a',
            'jpg', 'jpeg', 'mpga', 'webp', 'wma',
        ],
        'max_upload_time' => 5, // Max duration (in minutes) before an upload gets invalidated.
    ],

    // ...
];

Now the upload should be working. But before creating the record, we need to get the mime type of the file and save it into the DB.

app/Filament/Resources/VideoResource/Pages/CreateVideo.php:

<?php

namespace App\Filament\Resources\VideoResource\Pages;

use App\Filament\Resources\VideoResource;
use Filament\Resources\Pages\CreateRecord;
use Illuminate\Support\Facades\Storage;

class CreateVideo extends CreateRecord
{
    protected static string $resource = VideoResource::class;

    protected function mutateFormDataBeforeCreate(array $data): array
    {
        $data['type'] = Storage::disk('public')->mimeType($data['attachment']);

        return $data;
    }
}


Viewing Video

To view the video, we will use a basic HTML <video> tag. For this, in Filament we will need to make a basic custom view page.

First, let’s set the custom view path on the ViewRecord page.

app/Filament/Resources/VideoResource/Pages/ViewVideo.php:

class ViewVideo extends ViewRecord
{
    protected static string $resource = VideoResource::class;

    protected static string $view = 'filament.pages.view-video';
}

Now let’s create this view file and add a video player to it. The src and type bindings below are one way to wire it up, assuming the file was stored on the public disk, as in CreateVideo above.

resources/views/filament/pages/view-video.blade.php:

<x-filament::page>
    <video controls>
        {{-- assumes the "public" disk used in CreateVideo above --}}
        <source src="{{ Storage::disk('public')->url($this->record->attachment) }}" type="{{ $this->record->type }}">
        Your browser does not support the video tag.
    </video>
</x-filament::page>

After visiting the view page, you will see your uploaded video in the native browser video player.

(Screenshot: viewing the video page)


That’s it! As you can see, video files aren’t different from any other files; they just need different validation to allow for larger sizes. If you also want to restrict the field to video mime types, see the sketch below.
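As a small optional extra (not part of the original tutorial), Filament’s FileUpload field also lets you restrict the accepted mime types, so the field only takes video files; the exact list of types is up to you:

FileUpload::make('attachment')
    ->required()
    ->preserveFilenames()
    ->acceptedFileTypes(['video/mp4', 'video/quicktime', 'video/webm'])
    ->maxSize(20000),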

You can learn more tips on how to work with Filament, in my 2-hour course Laravel Filament Admin: Practical Course.

Laravel News Links

Laravel analytics – how and why I made my own analytics package


I’ve used Google Analytics for a couple of years, and it worked quite well for me. So why did I make my own analytics package? There were a couple of reasons:

  • Google Analytics has become quite complex and slow lately. Especially with the introduction of the new Google Analytics 4, it became more complex, and I realised that I don’t use even 0.1 percent of its capabilities. This blog and the other websites I developed as side projects only need simple things like visitor count per day in a specific period, and page views for the top visited pages. That’s it!
  • I wanted to get rid of third-party cookies as much as possible.
  • Third-party analytics tools are mostly blocked by ad blockers, so I see lower numbers than the real visitor counts.

Requirements

  • It needs to be a Laravel package, as I want to use it in a couple of projects
  • Keep it simple, only the basic functionality
    • Track page visits by uri, and also the relevant model ids if applicable (for example blog post id, or product id)
    • Save UserAgents for possible further analysis of visitor devices (desktop vs mobile) and to filter out bot traffic
    • Save IP address for a planned feature: segment users by countries and cities
    • “In house” solution: track the data in the application’s own database
    • Only backend functionality for tracking, no frontend tracking
    • Create chart for visitors in the last 28 days, and most visited pages in the same period
  • Build the MVP and push back any optional features, like
    • Aggregate the data into a separate table instead of querying the page_views table (I’ll build it when the queries become slow)
    • Add a GeoIP database, and save the user’s country and city based on their IP
    • Add possibility to change the time period shown on the charts

The database

As I mentioned earlier, the goal was to keep the whole thing very simple, so the database consists of only one table, called laravel_analytics_page_views, where the laravel_analytics_ prefix is configurable in the config file to prevent potential conflicts with the app’s own database tables.

The schema structure/migration looks like this:

$tableName = config('laravel-analytics.db_prefix') . 'page_views';

Schema::create($tableName, function (Blueprint $table) {
    $table->id();
    $table->string('session_id')->index();
    $table->string('path')->index();
    $table->string('user_agent')->nullable();
    $table->string('ip')->nullable();
    $table->string('referer')->nullable()->index();
    $table->string('country')->nullable()->index();
    $table->string('city')->nullable();
    $table->string('page_model_type')->nullable();
    $table->string('page_model_id')->nullable();
    $table->timestamp('created_at')->nullable()->index();
    $table->timestamp('updated_at')->nullable();

    $table->index(['page_model_type', 'page_model_id']);
});

We track unique visitors by session_id, which is of course not perfect and not 100% accurate, but it does the job. As an illustration, a visitors-per-day query over this table could look roughly like the sketch below.
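This is not part of the package code shown in this post, just a rough sketch of how distinct sessions per day could be counted with the query builder:

use Illuminate\Support\Facades\DB;

// Unique visitors (distinct session_id values) per calendar day, last 28 days.
$visitorsPerDay = DB::table(config('laravel-analytics.db_prefix') . 'page_views')
    ->selectRaw('DATE(created_at) as day, COUNT(DISTINCT session_id) as visitors')
    ->where('created_at', '>=', now()->subDays(28))
    ->groupBy('day')
    ->orderBy('day')
    ->pluck('visitors', 'day');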

We create a polymorphic relation with page_model_type and page_model_id: if there is a relevant model for the tracked page, we save its type and id to use in the future if necessary. We also created a combined index for these two fields, as they are mostly queried together when using polymorphic relations.
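The PageView model itself isn’t shown in this post; a minimal sketch of what it might look like, based on the migration above and the middleware below, would be something like this (names and structure are assumptions, not the package’s actual code):

use Illuminate\Database\Eloquent\Model;
use Illuminate\Database\Eloquent\Relations\MorphTo;

class PageView extends Model
{
    protected $fillable = [
        'session_id', 'path', 'user_agent', 'ip', 'referer',
    ];

    public function getTable()
    {
        return config('laravel-analytics.db_prefix') . 'page_views';
    }

    // The model (blog post, product, ...) this page view belongs to, if any.
    public function pageModel(): MorphTo
    {
        return $this->morphTo();
    }
}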

The middleware

I wanted a universal solution, so rather than adding the analytics to all the controllers, I created a middleware which handles the tracking. The middleware can be added to all routes or to specific group(s) of routes.

The middleware itself is quite simple: it tracks only GET requests and skips AJAX calls. As it doesn’t make sense to track bot traffic, I used the https://github.com/JayBizzle/Crawler-Detect package to detect crawlers and bots. When a crawler is detected, the middleware simply skips the tracking; this way we avoid having useless data in the table.

It was somewhat tricky to get the associated model for the URL in a universal way. The solution is, in the end, not totally universal, because it assumes that the app uses route model binding and that the first binding is relevant to that page. Again, it is not perfect, but it fits the minimalistic approach I followed while developing this package.

Here is the code of the middleware:

public function handle(Request $request, Closure $next)
{
    $response = $next($request);

    try {
        if (!$request->isMethod('GET')) {
            return $response;
        }

        if ($request->isJson()) {
            return $response;
        }

        $userAgent = $request->userAgent();

        if (is_null($userAgent)) {
            return $response;
        }

        /** @var CrawlerDetect $crawlerDetect */
        $crawlerDetect = app(CrawlerDetect::class);

        if ($crawlerDetect->isCrawler($userAgent)) {
            return $response;
        }

        /** @var PageView $pageView */
        $pageView = PageView::make([
            'session_id' => session()->getId(),
            'path' => $request->path(),
            'user_agent' => Str::substr($userAgent, 0, 255),
            'ip' => $request->ip(),
            'referer' => $request->headers->get('referer'),
        ]);

        $parameters = $request->route()?->parameters();
        $model = null;

        if (!is_null($parameters)) {
            $model = reset($parameters);
        }

        if (is_a($model, Model::class)) {
            $pageView->pageModel()->associate($model);
        }

        $pageView->save();

        return $response;
    } catch (Throwable $e) {
        report($e);
        return $response;
    }
}
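Registering the middleware might look roughly like this; the class name below is hypothetical, so use whatever middleware class the package actually exposes:

// app/Http/Kernel.php: apply the tracking middleware to all web routes
protected $middlewareGroups = [
    'web' => [
        // ... the default Laravel middleware ...
        \LaravelAnalytics\Middleware\TrackPageView::class, // hypothetical class name
    ],
];

// Alternatively, attach it only to a specific route group:
// Route::middleware(\LaravelAnalytics\Middleware\TrackPageView::class)->group(...);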

 

The routes

When developing Laravel packages, it is possible to set up the package service provider to tell the application to use the routes from the package. I usually don’t use this approach, because it doesn’t give the application much control over the routes: for example, you cannot add a prefix, put them in a group, or add middleware to them.

I like to create a class with a static method routes, where I define all the routes.

public static function routes()
{

    Route::get(
        'analytics/page-views-per-days',
        [AnalyticsController::class, 'getPageViewsPerDays']
    );

    Route::get(
        'analytics/page-views-per-path',
        [AnalyticsController::class, 'getPageViewsPerPaths']
    );
}

This way I could easily put the package routes under the /admin prefix in my application, for example, as shown in the sketch below.
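A rough sketch of that usage, assuming the class holding the static routes() method above is called Analytics (the real class name in the package may differ):

// routes/web.php
Route::prefix('admin')
    ->middleware(['web', 'auth'])
    ->group(function () {
        Analytics::routes(); // the static routes() method defined above
    });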

The frontend components

The frontend part consists of two Vue components: one for the visitor chart and one containing a simple table of the most visited pages. For the chart I used the vue-chartjs library (https://github.com/apertureless/vue-chartjs).

 
<template>
    <div>
        <div><strong>Visitors: </strong></div>
        <div>
            <LineChartGenerator
                :chart-options="chartOptions"
                :chart-data="chartData"
                :chart-id="chartId"
                :dataset-id-key="datasetIdKey"
                :plugins="plugins"
                :css-classes="cssClasses"
                :styles="styles"
                :width="width"
                :height="height"
            />
        </div>
    </div>
</template>

<script>


import { Line as LineChartGenerator } from 'vue-chartjs/legacy'
import {
    Chart as ChartJS,
    Title,
    Tooltip,
    Legend,
    LineElement,
    LinearScale,
    CategoryScale,
    PointElement
} from 'chart.js'

ChartJS.register(
    Title,
    Tooltip,
    Legend,
    LineElement,
    LinearScale,
    CategoryScale,
    PointElement
)

export default {
    name: 'VisitorsPerDays',
    components: { LineChartGenerator },
    props: {
        'initialData': Object,
        'baseUrl': String,
        chartId: {
            type: String,
            default: 'line-chart'
        },
        datasetIdKey: {
            type: String,
            default: 'label'
        },
        width: {
            type: Number,
            default: 400
        },
        height: {
            type: Number,
            default: 400
        },
        cssClasses: {
            default: '',
            type: String
        },
        styles: {
            type: Object,
            default: () => {}
        },
        plugins: {
            type: Array,
            default: () => []
        }
    },
    data() {
        return {
            chartData: {
                labels: Object.keys(this.initialData),
                datasets: [
                    {
                        label: 'Visitors',
                        backgroundColor: '#f87979',
                        data: Object.values(this.initialData)
                    }
                ]
            },
            chartOptions: {
                responsive: true,
                maintainAspectRatio: false,
                scales: {
                    y: {
                        ticks: {
                            precision: 0
                        }
                    }
                }
            }
        }
    },
    mounted() {
    },

    methods: {

    },
}
</script>

 

Conclusion

It was quite a fun and interesting project, and after using it for about a month and analysing the results, it seems to be working fine. If you are interested in the code, or would like to try the package, feel free to check it out on GitHub: https://github.com/wdev-rs/laravel-analytics

Laravel News Links

Save Money in AWS RDS: Don’t Trust the Defaults


Default settings can help you get started quickly – but they can also cost you performance and a higher cloud bill at the end of the month. Want to save money on your AWS RDS bill? I’ll show you some MySQL settings to tune to get better performance, and cost savings, with AWS RDS.

Recently I was engaged in a MySQL Performance Audit for a customer to help troubleshoot performance issues that led to downtime during periods of high traffic on their AWS RDS MySQL instances. During heavy loads, they would see messages about their InnoDB settings in the error logs:

[Note] InnoDB: page_cleaner: 1000ms intended loop took 4460ms. The settings might not be optimal. (flushed=140, during the time.)

This message is normally a side effect of a storage subsystem that is not capable of keeping up with the number of writes (e.g., IOPs) required by MySQL. This is “Hey MySQL, try to write less. I can’t keep up,” which is a common situation when innodb_io_capacity_max is set too high.

After some time of receiving these messages, they would eventually hit performance issues to the point that the server became unresponsive for a few minutes. After that, things went back to normal.

Let’s look at the problem and try to gather some context information.

Investigating AWS RDS performance issues

We had a db.m5.8xlarge instance type (32vCPU – 128GB of RAM) with a gp2 storage of 5TB, which should provide up to 10000 IOPS (this is the maximum capacity allowed by gp2), running MySQL 5.7. This is a pretty decent setup, and I don’t see many customers needing to write this many sustained IOPS.

The innodb_io_capacity_max parameter was set to 2000, so the hardware should be able to deliver that many IOPS without major issues. However, gp2 suffers from a tricky way of calculating credits and usage that may drive erroneous conclusions about the real capacity of the storage. Reviewing the CloudWatch graphics, we only had roughly 8-9k IOPS (reads and writes) used during spikes.

(CloudWatch charts: read and write operations on the AWS RDS MySQL instance)

While the IO utilization was quite high, there should be some room to get more IOPS, but we were still seeing errors. What caught my attention was the self-healing condition shown by MySQL after a few minutes.

The common solution, which was actually discussed during our kick-off call, was: “Well, there is always the chance to move to Provisioned IOPS, but that is quite expensive.” Yes, this is true; io2 volumes are expensive, and honestly, I think they should be used only where really high IO capacity at expected latencies is required, and this didn’t seem to be the case.

Otherwise, most environments can adapt to gp2/gp3 volumes; for that, you need to provision a big enough volume to get enough IOPS.

Finding the “smoking gun” with pt-mysql-summary

Not too long ago, my colleague Yves Trudeau and I worked on a series of posts debating how to configure an instance for write-intensive workloads. A quick look at the pt-mysql-summary output, taken outside the busy period of load, shows something really interesting:

# InnoDB #####################################################
                  Version | 5.7.38
         Buffer Pool Size | 93.0G
         Buffer Pool Fill | 100%
        Buffer Pool Dirty | 1%
           File Per Table | ON
                Page Size | 16k
            Log File Size | 2 * 128.0M = 256.0M
          Log Buffer Size | 8M
             Flush Method | O_DIRECT
      Flush Log At Commit | 1
               XA Support | ON
                Checksums | ON
              Doublewrite | ON
          R/W I/O Threads | 4 4
             I/O Capacity | 200
       Thread Concurrency | 0
      Concurrency Tickets | 5000
       Commit Concurrency | 0
      Txn Isolation Level | REPEATABLE-READ
        Adaptive Flushing | ON
      Adaptive Checkpoint | 
           Checkpoint Age | 78M
             InnoDB Queue | 0 queries inside InnoDB, 0 queries in queue

 

Wait, what? 256M of redo logs and a checkpoint age of only 78M? That is quite conservative considering the 93GB buffer pool; we would expect much bigger redo logs for such a big buffer pool. Bingo! We have a smoking gun here.

Additionally, full ACID features were enabled, that is, innodb_flush_log_at_trx_commit=1 and sync_binlog=1, which adds a lot of write overhead to every operation because, during the commit stage, everything is flushed to disk (or to gp2 in this case).

Considering a spike of load running a lot of writing queries, hitting the max checkpoint age in this setup is a very likely situation.

Basically, MySQL performs flushing operations at a certain rate depending on several factors. This rate is normally close to innodb_io_capacity (200 by default); if the checkpoint age starts to approach the max checkpoint age, the adaptive flushing algorithm will push the rate up to innodb_io_capacity_max (2000 by default) to try to keep the free space in the redo logs far from the max checkpoint age limit.

If we keep pushing, we can eventually reach the max checkpoint age, which drives the system into a synchronous state, meaning a burst of furious flushing beyond innodb_io_capacity_max during which all write operations are paused (freezing writes) until there is free room in the redo logs to keep writing.
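To see how close an instance is to this situation, a quick check of the relevant settings and the current checkpoint age can help; this is a generic sketch, not taken from the original audit:

-- Current redo log sizing and flushing-related settings
SHOW GLOBAL VARIABLES WHERE Variable_name IN
    ('innodb_log_file_size', 'innodb_log_files_in_group',
     'innodb_io_capacity', 'innodb_io_capacity_max');

-- The LOG section shows "Log sequence number" and "Last checkpoint at";
-- their difference is the current checkpoint age.
SHOW ENGINE INNODB STATUS\G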

This was exactly what was happening on this server. We calculated roughly how many writes were being performed per hour, and then we recommended increasing the redo logs to two files of 2GB each (4GB in total). In practical terms, it ended up being 3.7G due to some rounding that RDS does, so we got:

# InnoDB #####################################################
                  Version | 5.7.38
         Buffer Pool Size | 92.0G
         Buffer Pool Fill | 100%
        Buffer Pool Dirty | 2%
           File Per Table | ON
                Page Size | 16k
            Log File Size | 2 * 1.9G = 3.7G
          Log Buffer Size | 8M
             Flush Method | O_DIRECT

 

Then we also increased innodb_io_capacity_max to 4000, giving the adaptive flushing algorithm some more room to increase writes. The results in CloudWatch show we were right:

 

(CloudWatch chart: AWS RDS MySQL IOPS after the changes)

The reduction during the last couple of weeks is more than 50% of IOPS, which is pretty decent now, and we haven’t changed the hardware at all. Actually, it was possible to reduce the storage size to 3TB and avoid moving to expensive io2 (provisioned IOPS) storage.

Conclusions

RDS normally works very well out of the box; most of the configuration is properly set for the type of instance provisioned. Still, I find it silly that the RDS default size of the redo logs is this small, and people using a fully managed solution would expect not to have to worry about this kind of common tuning.

MySQL 8.0 implemented innodb_dedicated_server, which auto-sizes innodb_log_file_size and innodb_log_files_in_group (now replaced by innodb_redo_log_capacity) as a function of the InnoDB buffer pool size using a pretty simple, but effective, algorithm, and I guess it shouldn’t be hard for the AWS team to implement something similar. We’ve done some research, and it seems RDS is not pushing this logic into the 8.0 versions, which makes such a small default for innodb_redo_log_capacity even stranger.

In the meantime, checking how RDS MySQL is configured with default parameters is something we should all review to avoid the typical “throw more hardware at it” solution, and, by extension, spending more money.


Percona Database Performance Blog

How “Invisible” Metal Cuts Are Made



Metal objects like the Metmo Cube are fascinating because they feature parts that are so precisely cut that you can’t see where one piece begins and the other one ends. Science educator Steve Mould explains wire EDM machining, which enables the creation of such incredibly tight-fitting objects.

The Awesomer