MySQL Capacity Planning

https://www.percona.com/blog/wp-content/uploads/2023/08/MySQL-capacity-planning-150×150.jpg

As businesses grow and develop, the requirements that they have for their data platform grow along with them. As such, one of the more common questions I get from my clients is whether their system will be able to endure an anticipated load increase. Or worse yet, sometimes I get questions about regaining normal operations after a traffic increase has caused performance destabilization.

As the subject of this blog post suggests, this all comes down to proper capacity planning. Unfortunately, this topic is more of an art than a science, given that there is really no foolproof algorithm or approach that can tell you exactly where you might hit a bottleneck with server performance. But we can discuss common bottlenecks, how to assess them, and have a better understanding as to why proactive monitoring is so important when it comes to responding to traffic growth.

Hardware considerations

The first thing we have to consider here is the resources that the underlying host provides to the database. Let’s take a look at each common resource. In each case, I’ll explain why a 2x increase in traffic doesn’t necessarily mean you’ll have a 2x increase in resource consumption.

Memory

Memory is one of the easier resources to predict and forecast and one of the few places where an algorithm might help you, but for this, we need to know a bit about how MySQL uses memory.

MySQL has two main memory consumers: global caches, like the InnoDB buffer pool and MyISAM key cache, and session-level caches, like the sort buffer, join buffer, random read buffer, etc.

Global memory caches are static in size, as they are defined solely by the configuration of the database itself. What this means is that if you have a buffer pool set to 64GB, an increase in traffic isn’t going to make it any bigger or smaller. What changes is how session-level caches are allocated, which may result in larger memory consumption.

A tool that was popular at one time for calculating memory consumption was mysqlcalculator.com. Using this tool, you could enter your values for global and session variables and the number of max connections, and it would return the amount of memory that MySQL would consume. In practice, this calculation doesn’t really work, because caches like the sort buffer and join buffer aren’t allocated when a new connection is made; they are only allocated when a query is run, and only if MySQL determines that one or more of the session caches will be needed for that query. So idle connections don’t use much memory at all, and active connections may not use much more if they don’t require any of the session-level caches to complete their query.

The way I get around this is to estimate the amount of memory consumed on average by sessions as such…

({Total memory consumed by MySQL} – {sum of all global caches}) / {average number of active sessions}

Keep in mind that even this isn’t going to be super accurate, but at least it gives you an idea of what common session-level memory usage looks like. If you can figure out what the average memory consumption is per active session, then you can forecast what 2x the number of active sessions will consume.
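To make the estimate concrete, here is a minimal Python sketch of the calculation and the 2x forecast. All of the numbers (an 80GB MySQL footprint, a 64GB buffer pool, 128 active sessions) are hypothetical examples, not measurements:

```python
GIB = 1024 ** 3  # bytes in one gibibyte

def avg_session_memory(total_mysql_bytes, global_cache_bytes, active_sessions):
    """Average memory consumed per active session:
    (total memory consumed by MySQL - sum of global caches) / active sessions."""
    return (total_mysql_bytes - global_cache_bytes) / active_sessions

def forecast_total_memory(total_mysql_bytes, global_cache_bytes,
                          active_sessions, traffic_multiplier):
    """Forecast total MySQL memory if active sessions scale by a multiplier.
    Global caches stay fixed; only session-level memory scales."""
    per_session = avg_session_memory(total_mysql_bytes, global_cache_bytes,
                                     active_sessions)
    return global_cache_bytes + per_session * active_sessions * traffic_multiplier

# Hypothetical example: 80GB used by MySQL, 64GB buffer pool, 128 active sessions
per_session = avg_session_memory(80 * GIB, 64 * GIB, 128)          # 128MB/session
at_double_traffic = forecast_total_memory(80 * GIB, 64 * GIB, 128, 2)  # 96GB
```

As noted above, this only holds if query patterns stay the same; code changes that lean harder on session caches will shift the per-session average.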

This sounds simple enough, but in reality, there could be more to consider. Does your traffic increase come with updated code changes that change the queries? Do these queries use more caches? Will your increase in traffic mean more data, and if so, will you need to grow your global cache to ensure more data fits into it?

With the points above under consideration, we know that we can generally predict what MySQL will do with memory under a traffic increase, but unforeseen changes could alter the amount of memory that sessions use.

The solution is proactive monitoring using time-lapse metrics monitoring like what you would get with Percona Monitoring and Management (PMM). Keep an eye on your active session graph and your memory consumption graph and see how they relate to one another. Checking this frequently can help you get a better understanding of how session memory allocation changes over time and will give you a better understanding of what you might need as traffic increases.

CPU

When it comes to CPU, there’s obviously a large number of factors that contribute to usage. The most common is the queries that you run against MySQL itself. However, having a 2x increase in traffic may not lead to a 2x increase in CPU as, like memory, it really depends on the queries that are run against the database. In fact, the most common cause of massive CPU increase that I’ve seen isn’t traffic increase; it’s code changes that introduced inefficient revisions to existing queries or new queries. As such, a 0% increase in traffic can result in full CPU saturation.

This is where proactive monitoring comes into play again. Keep an eye on CPU graphs as traffic increases. In addition, you can collect full query profiles on a regular basis and run them through tools like pt-query-digest or look at the Query Analyzer (QAN) in PMM to keep track of query performance, noting where queries may be less performant than they once were, or when new queries have unexpected high load.

Disk space

A 2x increase in traffic doesn’t mean a 2x increase in disk space consumption. It may increase the rate at which disk space is accumulated, but that also depends on how much of the traffic increase is write-focused. If you have a 4x increase in reads and a 1.05x increase in writes, then you don’t need to be overly concerned about disk space consumption rates.

Once again, we look at proactive monitoring to help us. Using time-lapse metrics monitoring, we can monitor overall disk consumption and the rate at which consumption occurs and then predict how much time we have left before we run out of space.
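As a sketch of that kind of projection, here is a minimal Python example; the volume size, usage, and growth rate are hypothetical:

```python
def days_until_disk_full(disk_total_gb, disk_used_gb, growth_gb_per_day):
    """Linear projection of the time left before the disk fills.
    Real growth is rarely perfectly linear, so treat this as a rough
    estimate, not a guarantee."""
    if growth_gb_per_day <= 0:
        return float("inf")  # no measurable growth: no projected fill date
    return (disk_total_gb - disk_used_gb) / growth_gb_per_day

# Hypothetical example: 500GB volume, 380GB used, growing 2GB per day
remaining = days_until_disk_full(500, 380, 2)  # 60 days of headroom
```

In practice, you would feed this from your metrics system rather than static numbers, and re-check the growth rate as traffic changes.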

Disk IOPS

The amount of disk IOPS your system uses will be somewhat related to how much of your data can fit into memory. Keep in mind that the disk will still need to be used for background operations as well, including writing to the InnoDB redo log, persisting/checkpointing data changes to table spaces from the redo log, etc. But, for example, if you have a large traffic increase that’s read-dependent and all of the data being read is in the buffer pool, you may not see much of an IOPS increase at all.

Guess what we should do in this case? If you said “proactive monitoring,” you get a gold star. Keep an eye out for metrics related to IOPS and disk utilization as traffic increases.

Before we move on to the next section, consider how disk space and disk IOPS differ when things go wrong. When you saturate disk IOPS, your system is going to run slow. If you fill up your disk, your database will start throwing errors and may stop working completely. It’s important to understand the difference so you know how to act based on the situation at hand.

Database engine considerations

While resource utilization/saturation are very common bottlenecks for database performance, there are limitations within the engine itself. Row-locking contention is a good example, and you can keep an eye on row-lock wait time metrics in tools like PMM. But, much like any other software that allows for concurrent session usage, there are mutexes/semaphores in the code that are used to limit the number of sessions that can access shared resources. Information about this can be found in the semaphores section in the output of the “SHOW ENGINE INNODB STATUS” command.

Unfortunately, this is the single hardest bottleneck to predict and is based solely on the use case. I’ve seen systems running 25,000+ queries per second with no issue, and I’ve also seen systems running ~5,000 queries per second that ran into issues with mutex contention.

Keeping an eye on metrics for OS context switching will help with this a little bit, but unfortunately this is a situation where you normally don’t know where the wall is until you run right into it. Adjusting variables like innodb_thread_concurrency can help with this in a pinch, but when you get to this point, you really need to look at query efficiency and horizontal scaling strategies.

Another thing to consider is configurable hard limits like max_connections, where you can limit the upper bound of the number of connections that can connect to MySQL at any given time. Keep in mind that increasing this value can impact memory consumption as more connections will use more memory, so use caution when adjusting upward.

Conclusion

Capacity planning is not something you do once a year as part of a general exercise. It’s not something you do when management calls to let you know a big sale is coming up that will increase the load on the hosts. It’s part of the regular day-to-day activity of anyone operating in a database administrator role.

Proactive monitoring plays a big part in capacity planning. I’m not talking about alert-based monitoring that hits your pager when it’s already too late, but evaluating metrics usage on a regular basis to see what the data platform is doing, how it’s handling its current traffic, etc. In most cases, you don’t see massive increases in traffic all at once; typically, it’s gradual enough that you can monitor as it increases and adjust your system or processes to avoid saturation.

Tools like PMM and the Percona Toolkit play a big role in proactive monitoring, and both are open source and free to use. So if you don’t have tools like this in place, the price point makes integrating them an easy decision.

Also, if you still feel concerned about your current capacity planning, you can always reach out to Percona Managed Services for a performance review or query review that will give you a detailed analysis of the current state of your database along with recommendations to keep it as performant as possible.

Percona Monitoring and Management is a best-of-breed open source database monitoring solution. It helps you reduce complexity, optimize performance, and improve the security of your business-critical database environments, no matter where they are located or deployed.

 

Download Percona Monitoring and Management Today

Percona Database Performance Blog

TailwindCraft: Free and Open-Source Prebuilt UI Components

https://tailwindcraft.com/storage/photos/1/cover.png

Meticulously designed open-source UI components powered by Tailwind CSS. Streamlines web development with versatile prebuilt elements, seamless Tailwind CSS integration, and a vibrant open-source community.

Laravel News Links

5 Ways to Retrieve the Last Inserted ID in Laravel

https://laracoding.com/wp-content/uploads/2023/07/5-ways-to-retrieve-the-last-inserted-id-in-laravel_1093.png

In Laravel, after inserting data into a database table, you often need to retrieve the ID of the newly created record. This ID is essential for various tasks, like redirecting users to the newly created resource or performing further operations.

This blog post will guide you through several methods to get the last inserted ID in Laravel by using Eloquent models, the DB facade, or direct access to the PDO instance. Let’s explore the various approaches.

Method 1: Retrieving Insert ID Using Model::create

One common way to insert data into the database and get the last inserted ID is by using the create method on an Eloquent model. This method not only inserts the data but also automatically fetches the last inserted ID. You can then access it as a property: $user->id. Consider the following example:

$user = User::create([
    'name' => 'Steve Rogers',
    'email' => 'captain.america@marvel.com',
    'password' => bcrypt('password123'),
]);

$lastInsertedId = $user->id;

Note that when passing an array of attributes to the create method, make sure they are defined as fillable; otherwise, the values won’t be written into the database. In our example, this means the User class must properly define: protected $fillable = ['name', 'email', 'password'];

Method 2: Retrieving Insert ID Using new Model() and save()

Another approach to insert data and retrieve the last inserted ID is by creating a new instance of the model, setting the attributes, and then calling the save method. You can easily access the ID as a property with $user->id. Let’s consider the following example:

$user = new User();
$user->name = 'Natasha Romanoff';
$user->email = 'black.widow@marvel.com';
$user->password = bcrypt('password123');
$user->save();

$lastInsertedId = $user->id;

Method 3: Retrieving Insert ID Using DB Facade insertGetId()

When you need to insert data without using Eloquent models, you can utilize the insertGetId method provided by the DB facade.

$lastInsertedId = DB::table('users')->insertGetId([
    'name' => 'Bruce Banner',
    'email' => 'hulk@marvel.com',
    'password' => bcrypt('password123'),
    'created_at' =>  \Carbon\Carbon::now(),
    'updated_at' => \Carbon\Carbon::now(),
]);

Note that inserting records using this method will not fill your timestamp values automatically, even if they are defined in your Migration. For this reason we’ve included code to generate them manually using Carbon in the code above.

Method 4: Retrieving Insert ID Using Model::insertGetId

If you are inserting data directly through a model and want to retrieve the last inserted ID, you can use the insertGetId method on the model itself.

$lastInsertedId = User::insertGetId([
    'name' => 'Clint Barton',
    'email' => 'hawkeye@marvel.com',
    'password' => bcrypt('password123'),
    'created_at' =>  \Carbon\Carbon::now(),
    'updated_at' => \Carbon\Carbon::now(),
]);

Note that inserting records using this method will not fill your timestamp values automatically, even if they are defined in your Migration. For this reason we’ve included code to generate them manually using Carbon in the code above.

Method 5: Direct Access to PDO lastInsertId

You can also directly access the PDO instance to retrieve the last inserted ID.

DB::table('users')->insert([
    'name' => 'Peter Parker',
    'email' => 'spiderman@marvel.com',
    'password' => bcrypt('password123'),
    'created_at' =>  \Carbon\Carbon::now(),
    'updated_at' => \Carbon\Carbon::now(),
]);

$lastInsertedId = DB::getPdo()->lastInsertId();

Note that inserting records using this method will not fill your timestamp values automatically, even if they are defined in your Migration. For this reason, we’ve included code to generate them manually using Carbon in the code above.

Conclusion

By following the methods outlined in this blog post, you can easily obtain the last inserted ID using Eloquent models, the DB facade, or direct access to the PDO instance. Choose the method that suits your needs. However, it’s worth noting that method 1 and method 2 are the most commonly used in Laravel applications. Happy coding!

Laravel News Links

A Dive Into toRawSql()



Fly.io can build and run your Laravel apps globally, including your scheduled tasks. Deploy your Laravel application on Fly.io, you’ll be up and running in minutes!

In the recent past, we were able to dump out the SQL our query builder was generating like so:

$filter = 'wew, dogs';

// Using the `toSql()` helper
DB::table('foo')
  ->select(['id', 'col1', 'col2'])
  ->join('bar', 'foo.bar_id', 'bar.id')
  ->where('foo.some_column', $filter)
  ->toSql();

// SELECT id, col1, col2
//     FROM foo
//     INNER JOIN bar ON foo.bar_id = bar.id
//     WHERE foo.some_column = ?

This was useful for debugging complicated queries, but note how we didn’t get the value in our WHERE statement!
All we got was a pesky ? – a placeholder for whatever value we’re passing into the query.
The actual value is hidden from us.

“That’s not what I need”, you may have said to yourself. Assuming we aren’t debugging SQL syntax, our query bindings are what we likely care about the most.

Getting the Full Query

New to Laravel 10.15 is the ability to get the full SQL query! That’s much more useful for debugging.

$filter = 'wew, dogs';

// Using the `toRawSql()` helper
DB::table('foo')
  ->select(['id', 'col1', 'col2'])
  ->join('bar', 'foo.bar_id', 'bar.id')
  ->where('foo.some_column', $filter)
  ->toRawSql();

// SELECT "id", "col1", "col2"
//     FROM "foo"
//     INNER JOIN "bar" ON "foo"."bar_id" = "bar"."id"
//     WHERE "foo"."some_column" = 'wew, dogs'

Much better!

The trick to this is that PDO (the core library used to connect to databases) doesn’t just give this to us – we can only get the SQL with the binding placeholders (hence the old toSql() limitation).

So, we need to build the query with our values within the query ourselves! How’s that done?

How It’s Done

This is tricky business, as we’re dealing with user input – any crazy thing a developer (or their users)
might throw into a sql query needs to be properly escaped.

The new toRawSql() helper stuffs the important logic into a method named substituteBindingsIntoRawSql(). Here’s the PR, for reference.

If we dig into that method code a bit, we’ll
see what’s going on!

The first thing the function does is escape all of the values. This lets Laravel print out the query as a string without worrying about mis-aligned quotes or similar issues.

$bindings = array_map(fn ($value) => $this->escape($value), $bindings);

The call to $this->escape() goes down to a database connection object, and deeper into the underlying PDO object. PDO does the work of
actually escaping the query values in a safe way.

You can get a “Connection Refused” error using the toRawSql() method if your database connection isn’t configured or isn’t working.
That’s because the underlying code uses the “connection” object (PDO under the hood) to escape characters within the output.

Following the escaping-of-values, the method goes through the query character by character!

for ($i = 0; $i < strlen($sql); $i++) {
    $char = $sql[$i];
    $nextChar = $sql[$i + 1] ?? null;

    // and so on
}

The major supported databases use different conventions for escape characters. The code here attempts to find escaped characters and ignore them, lest it tries to
substitute something that looks like a query binding character but isn’t. This is the most fraught bit of code in this new feature.

$query = '';


for ($i = 0; $i < strlen($sql); $i++) {
    $char = $sql[$i];
    $nextChar = $sql[$i + 1] ?? null;

    // Single quotes can be escaped as '' according to the SQL standard while
    // MySQL uses \'. Postgres has operators like ?| that must get encoded
    // in PHP like ??|. We should skip over the escaped characters here.
    if (in_array($char.$nextChar, ["\'", "''", '??'])) {
        // We are building the query string back up - We ignore escaped characters
        // and append them to our rebuilt query string. Since we append
        // two characters, we `$i += 1` so the loop skips $nextChar in our `for` loop
        $query .= $char.$nextChar;
        $i += 1;
    } ...
}

The for loop is rebuilding the query string, but with values substituted in for their ? placeholders. The first check here
is looking for certain escape characters. It needs to know the current character AND the next one to know if it’s an escaped character
and therefore should not do any substitutions.

The next part of our conditional is this:

} elseif ($char === "'") { // Starting / leaving string literal...
    $query .= $char;
    $isStringLiteral = ! $isStringLiteral;
}

If we’re opening an unescaped quote, it means we’re at the start (or end) of a string literal. We set a flag for this case, which
is important in our next check.

elseif ($char === '?' && ! $isStringLiteral) { // Substitutable binding...
    $query .= array_shift($bindings) ?? '?';
}

Here’s the magic. If we’re NOT inside of a string literal, AND we’re not finding an escaped character, AND we find a ? character,
then we can assume it’s a query binding to be substituted with the actual value. We take our array of values and shift it – we remove the
first item in that array and append its value to our query string. (The values in the array appear in the same order as their ? placeholders in the query.)

Finally, if we just have a regular character that doesn’t have special meaning, we just append it to our query string:

else { // Normal character...
    $query .= $char;
}

Fly.io ❤️ Laravel

Fly your servers close to your users—and marvel at the speed of close proximity. Deploy globally on Fly in minutes!


Deploy your Laravel app!  

Here’s the whole method, as it stands as I write this:

public function substituteBindingsIntoRawSql($sql, $bindings)
{
    $bindings = array_map(fn ($value) => $this->escape($value), $bindings);

    $query = '';

    $isStringLiteral = false;

    for ($i = 0; $i < strlen($sql); $i++) {
        $char = $sql[$i];
        $nextChar = $sql[$i + 1] ?? null;

        // Single quotes can be escaped as '' according to the SQL standard while
        // MySQL uses \'. Postgres has operators like ?| that must get encoded
        // in PHP like ??|. We should skip over the escaped characters here.
        if (in_array($char.$nextChar, ["\'", "''", '??'])) {
            $query .= $char.$nextChar;
            $i += 1;
        } elseif ($char === "'") { // Starting / leaving string literal...
            $query .= $char;
            $isStringLiteral = ! $isStringLiteral;
        } elseif ($char === '?' && ! $isStringLiteral) { // Substitutable binding...
            $query .= array_shift($bindings) ?? '?';
        } else { // Normal character...
            $query .= $char;
        }
    }

    return $query;
}

That’s basically all there is to the story. We want the entire query to be available with our values!
Here are some (light) caveats we have to get this feature:

  1. We need to make a database connection (thanks to using the PDO library for safe escaping)
  2. The above code MAY contain the occasional bug depending on what values are used
  3. Very long (e.g. binary) values in a query would likely return a complete mess of a query string

Those trade-offs seem fine to me!

Laravel News Links

Flamethrower Tuba

https://theawesomer.com/photos/2023/08/flaming_tuba_t.jpg

Flamethrower Tuba

Link

There’s nothing inherently dangerous about playing a tuba. Sure, you might run out of breath, but that’s about it. YouTuber and maker MasterMilo has created the most dangerous brass instrument we’ve ever seen. His flamethrower tuba is powered by a chainsaw engine and spews a stream of flaming propane out of its bell.

The Awesomer

Automatically generate RSS feeds in a Laravel application

https://leopoletto.com/assets/images/how-to-generate-rss-feeds-in-a-laravel-application.png

One handy way of keeping users up-to-date on your content is creating an RSS feed. It allows them to subscribe using an RSS reader. The effort to implement this feature is worth it, because the website gains another content distribution channel.

Spatie is a company well known for creating hundreds of good packages for Laravel. One of them is laravel-feed. Let’s see how it works:

Installation

The first step is to install the package in your Laravel Application:

composer require spatie/laravel-feed

Then you must publish the config file:

php artisan vendor:publish --provider="Spatie\Feed\FeedServiceProvider" --tag="feed-config"

Usage

Let’s break down the possibilities when configuring a feed.

Creating feeds

The config file has a feeds key containing an array in which each item represents a new feed, and the key is the feed name.

Let’s create a feed for our Blog Posts:

config/feed.php

return [
    'feeds' => [
        'blog-posts' => [
            //...
        ],
        'another-feed' => [
            //...
        ]   
    ]
];

The key blog-posts is the name of the feed, and its value contains the configuration as an array. You can create more feeds if needed, but for the sake of this article, let’s focus on blog-posts.

That being said, for our model to work, we need to implement the Spatie\Feed\Feedable interface. It defines a single public method, toFeedItem, which must return an instance of Spatie\Feed\FeedItem.

Below is an example of how to create a FeedItem object:

app/Models/BlogPost.php

use Illuminate\Database\Eloquent\Model;
use Spatie\Feed\Feedable;
use Spatie\Feed\FeedItem;

class BlogPost extends Model implements Feedable
{
    //...
    public function toFeedItem(): FeedItem
    {
        return FeedItem::create()
            ->id($this->id)
            ->title($this->title)
            ->summary($this->summary)
            ->updated($this->updated_at)
            ->link(route('blog-posts.show', $this->slug))
            ->authorName($this->author->name)
            ->authorEmail($this->author->email);
    }
}

Now we must create a class with a static method that returns a collection of App\Models\BlogPost objects:

app/Feed/BlogPostFeed.php

namespace App\Feed;

use App\Models\BlogPost;
use Illuminate\Database\Eloquent\Collection;

class BlogPostFeed
{
    public static function getFeedItems(): Collection
    {
        return BlogPost::all();
    } 
}

Back to our config file, the first key for our feed configuration is items,
which defines where to retrieve the collection of posts.

config/feed.php

return [
    'feeds' => [
        'blog-posts' => [
            'items' => [App\Feed\BlogPostFeed::class, 'getFeedItems']
            //...
        ],
    ]
];

Then you have to define the URL:

config/feed.php

return [
    'feeds' => [
        'blog-posts' => [
            //'items' => [App\Feed\BlogPostFeed::class, 'getFeedItems'],
            'url' => '/posts', //https://domain.com/posts
            //...
        ],
    ]
];

Register the routes using the feeds macro included in the package:

routes/web.php

//...
Route::feeds();  //https://domain.com/posts

If you wish to add a prefix:

routes/web.php

//...
Route::feeds('rss'); //https://domain.com/rss/posts

Next, you must add a title, description, and language:

config/feed.php

return [
    'feeds' => [
        'blog-posts' => [
            //'items' => [App\Feed\BlogPostFeed::class, 'getFeedItems'],
            //'url' => '/posts',
            'title' => 'My feed',
            'description' => 'The description of the feed.',
            'language' => 'en-US',
            //...
        ],
    ]
];

You can also define the format of the feed and the view that will render it.
The acceptable values for format are atom, rss, or json:

config/feed.php

return [
    'feeds' => [
        'blog-posts' => [
            //'items' => [App\Feed\BlogPostFeed::class, 'getFeedItems'],
            //'url' => '/posts',
            //'title' => 'My feed',
            //'description' => 'The description of the feed.',
            //'language' => 'en-US',
            'format' => 'rss',
            'view' => 'feed::rss',
            //...
        ],
    ]
];

There are a few additional options:

 /*
 * The image to display for the feed. For Atom feeds, this is displayed as
 * a banner/logo; for RSS and JSON feeds, it's displayed as an icon.
 * An empty value omits the image attribute from the feed.
 */
'image' => '',

/*
 * The mime type to be used in the <link> tag. Set to an empty string to automatically
 * determine the correct value.
 */
'type' => '',

/*
 * The content type for the feed response. Set to an empty string to automatically
 * determine the correct value.
 */
'contentType' => '',

The final result of the config file should look like below:

config/feed.php

return [
    'feeds' => [
        'blog-posts' => [
            'items' => [App\Feed\BlogPostFeed::class, 'getFeedItems'],
            'url' => '/posts',
            'title' => 'My feed',
            'description' => 'The description of the feed.',
            'language' => 'en-US',
            'format' => 'rss',
            'view' => 'feed::rss',
            'image' => '',
            'type' => '',
            'contentType' => '',
        ],
    ]
];

Automatically generate feed links

Feed readers discover a feed by looking for a tag in the head section of your HTML documents:

<link rel="alternate" type="application/atom+xml" title="News" href="/rss/posts">

Add this to your <head>:

@include('feed::links')

Alternatively, use the available blade component:

<x-feed-links />

Conclusion

In this article,
you’ve learned how easy it is to add an RSS feed to your website using the laravel-feed package from Spatie.

If you have any comments,
you can share them in the discussion on Twitter.

Laravel News Links

PATH settings for Laravel

https://laravelnews.s3.amazonaws.com/images/path-featured.jpg

For Laravel development, we often find ourselves typing commands like ./vendor/bin/pest to run project-specific commands.

We don’t need to!

To help here, we can update our Mac (or Linux) $PATH variable.

What’s $PATH?

The $PATH variable sets the directories your system looks in when finding commands to run.

For example, we can type which <cmd> to find the path to any given command:

$ which git

/usr/local/bin/git

My system knew to find git in /usr/local/bin because /usr/local/bin is one directory set in my $PATH!

You can echo out your path right now:

# Output the whole path

echo $PATH

 

# For human-readability, split out each

# directory into a new line:

echo "$PATH" | tr ':' '\n'

Relative Directories in PATH

We can edit our $PATH variable to add in whatever directories we want!

One extremely handy trick is to set relative directories in your $PATH variable.

Two examples are adding ./vendor/bin and ./node_modules/.bin:

# In your ~/.zshrc, ~/.bashrc or, ~/.bash_profile or similar

# Each directory is separated by a colon

PATH=./vendor/bin:./node_modules/.bin:$PATH

Here we prepended our two new paths to the existing $PATH variable. Now, no matter what Laravel application we’ve cd’d into, we can run pest and know we’re running ./vendor/bin/pest, or phpunit to run ./vendor/bin/phpunit (and the same for any given Node command in ./node_modules/.bin).

We can also set the current directory . in our $PATH (if it’s not already set – it may be):

# In your ~/.zshrc, ~/.bashrc or, ~/.bash_profile or similar

# Each directory is separated by a colon

# Here we also set the current directory in our PATH

PATH=.:./vendor/bin:./node_modules/.bin:$PATH

This way we can type artisan instead of ./artisan or php artisan.

These are the settings I have in place in Chipper CI so users can run pest or phpunit without having to worry about where the command exists in their CI environments.

Notes

Order also matters in $PATH. When a command is being searched for, the earlier directories are searched first. The system uses the first command found – this means you can override a system command by placing a command of the same name in a directory earlier in $PATH. That’s why we prepend ./vendor/bin and ./node_modules/.bin to $PATH instead of appending them.
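The first-match-wins lookup can be sketched in Python; resolve_command is a hypothetical helper for illustration, not a real shell utility:

```python
import os

def resolve_command(name, path_string):
    """Mimic the shell's PATH lookup: scan directories left to right and
    return the first executable match, or None if nothing is found."""
    for directory in path_string.split(":"):
        candidate = os.path.join(directory, name)
        if os.path.isfile(candidate) and os.access(candidate, os.X_OK):
            return candidate
    return None

# With ./vendor/bin listed first, a project-local `pest` shadows any
# system-wide one, e.g.:
# resolve_command("pest", "./vendor/bin:/usr/local/bin:/usr/bin")
```

Because the scan stops at the first hit, whichever copy of a command lives in the earliest matching directory wins.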

You can find all locations of a command like this:

$ which -a git

 

git is /usr/local/bin/git

git is /usr/bin/git

Lastly, in all cases here, the commands should have executable permissions to work like this. This is something to keep in mind when creating your own commands, such as a custom bash script.

Laravel News

Scientists in Japan Develop Experimental Alzheimer’s Vaccine That Shows Promise in Mice

https://i.kinja-img.com/gawker-media/image/upload/c_fill,f_auto,fl_progressive,g_center,h_675,pg_1,q_80,w_1200/6c6ea977db6c4dd1d04bba37a0b2b576.jpg

Scientists in Japan may be at the start of a truly monumental accomplishment: a vaccine that can slow or delay the progression of Alzheimer’s disease. In preliminary research released this week, the vaccine appeared to reduce inflammation and other important biomarkers in the brains of mice with Alzheimer’s-like illness, while also improving their awareness. More research will be needed before this vaccine can be tested in humans, however.


The experimental vaccine is being developed primarily by scientists from Juntendo University in Japan.

It’s intended to work by training the immune system to go after certain senescent cells, aging cells that no longer divide to make more of themselves, but instead stick around in the body. These cells aren’t necessarily harmful, and some play a vital role in healing and other life functions. But they’ve also been linked to a variety of age-related diseases, including Alzheimer’s. The vaccine specifically targets senescent cells that produce high levels of something called senescence-associated glycoprotein, or SAGP. Other research has suggested that people with Alzheimer’s tend to have brains filled with these cells in particular.

The team tested their vaccine on mice bred to have brains that develop the same sort of gradual destruction seen in humans with Alzheimer’s. This damage is thought to be fueled by the accumulation of a misfolded form of amyloid-beta, a protein. The mice were divided into two groups, with only one group given the actual vaccine.

In the brains of the vaccinated mice, the team found signs of reduced inflammation and fewer amyloid deposits along with lower levels of SAGP-expressing cells. These mice also seemed to behave more like typical mice compared to controls. They continued to exhibit anxiety as they aged, for instance—a trait that tends to fade in people with late-stage Alzheimer’s. They also showed more awareness of their surroundings during maze tests.

The findings were presented over the weekend at the American Heart Association’s Basic Cardiovascular Sciences Scientific Sessions 2023. That means this research hasn’t been formally peer-reviewed yet, so it should be viewed with added caution. At the same time, the team’s vaccine appears to have met an important criterion that many past attempts have failed to reach.

“Earlier studies using different vaccines to treat Alzheimer’s disease in mouse models have been successful in reducing amyloid plaque deposits and inflammatory factors, however, what makes our study different is that our SAGP vaccine also altered the behavior of these mice for the better,” said lead author Chieh-Lun Hsiao, a post-doctoral fellow in the department of cardiovascular biology and medicine at Juntendo University, in a statement released by the American Heart Association.

Of course, mice studies are only the beginning of showing that an experimental drug or vaccine can possibly work as intended. It will take further studies to validate these results and to test the vaccine’s safety in humans before large-scale trials even enter the picture.

But there have been several recent, if modest, successes in Alzheimer’s treatment, and other experimental candidates—including vaccines—are already in clinical trials. With any luck, these newer and upcoming therapies might one day stop Alzheimer’s from being the incurable death sentence that it is today.

Gizmodo