‘This is a Left-Wing Cult’: Joe Rogan UNLOADS on Dishonest Media Coverage of Kyle Rittenhouse Trial

https://www.louderwithcrowder.com/media-library/image.png?id=27984877&width=980

Day Two of Rittenhouse jury deliberation has begun. As we wait, we reflect on what garbage human beings the mainstream media has been throughout this entire thing. The main reason he’s on trial is because of lies and untruths the media has spread. And this is just one man’s opinion, but I’m sure being attacked by the media is on the mind of at least a few jurors if they’re thinking about voting to acquit Rittenhouse. Joe Rogan is someone who knows firsthand how much the media lies and swears by it. Once people discover this rant by Dr. Joe, MD, the media will proclaim "who, us?" Brian Stelter is probably crying into his breakfast cheesecake as we speak.

"This information is not based on reality. This is a left-wing cult. They’re pumping stuff out and then they are confirming this belief. They are all getting together, and they are ignoring contrary evidence. They are ignoring any narrative that challenges their belief about what happened, and they are not looking at it realistically. They are only looking at it like you would if you were in a f*cking cult."

As an aside, what a cast of characters! Drew Hernandez, Tim Pool, Blaire White, AND Alex Jones.

More people need to be exposed to who the media is. They won’t just go after you if you hold a different opinion than them or if they think they can use you to advance their leftist narrative. They’ll go after you if they even assume you have a different opinion. Rittenhouse is only one of the most extreme examples of it.


SNL Propaganda Isn’t Even Trying Anymore | Louder With Crowder


Louder With Crowder

How Triggers May Significantly Affect the Amount of Memory Allocated to Your MySQL Server

https://www.percona.com/blog/wp-content/uploads/2021/11/Triggers-Affect-Memory-Allocated-to-Your-MySQL-Server-300×157.png

MySQL stores active table descriptors in a special memory buffer called the table open cache. This buffer is controlled by the configuration variables table_open_cache, which holds the maximum number of table descriptors MySQL should store in the cache, and table_open_cache_instances, which stores the number of table cache instances. With the default values of table_open_cache=4000 and table_open_cache_instances=16, MySQL will create 16 independent memory buffers that store 250 table descriptors each. These table cache instances can be accessed concurrently, allowing DML to use cached table descriptors without locking each other.
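
If you want to check how your own server is configured, you can query these variables directly (standard MySQL syntax; the output is omitted here because it depends on your configuration):

mysql> SHOW GLOBAL VARIABLES LIKE 'table_open_cache%';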

If you use only tables (without triggers), the table cache does not require a lot of memory because descriptors are lightweight, and even if you significantly increase the value of table_open_cache, the required amount of memory would not be high. For example, 4000 tables will take up to 4000 x 4K = 16MB in the cache, and 100,000 tables will take up to about 390MB (100,000 x 4 KiB ≈ 390 MB), which is still not a huge number for that many tables.

However, if your tables have triggers, it changes the game.

For the test I created a table with a single column and inserted a row into it:

mysql> CREATE TABLE tc_test( f1 INT);
Query OK, 0 rows affected (0,03 sec)

mysql> INSERT INTO tc_test VALUES(1);
Query OK, 1 row affected (0,01 sec)

Then I flushed the table cache and measured how much memory it uses:

mysql> FLUSH TABLES;
Query OK, 0 rows affected (0,02 sec)

mysql> SHOW GLOBAL STATUS LIKE 'Open_tables';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| Open_tables   |     2 |
+---------------+-------+
1 row in set (0,00 sec)

mysql> SELECT current_alloc FROM sys.memory_global_by_current_bytes 
    -> WHERE event_name='memory/sql/TABLE_SHARE::mem_root';
+---------------+
| current_alloc |
+---------------+
|     60.50 KiB |
+---------------+
1 row in set (0,00 sec)

mysql> SELECT current_alloc FROM sys.memory_global_by_current_bytes 
    -> WHERE event_name='memory/sql/sp_head::main_mem_root';
Empty set (0,00 sec)

Then I accessed the table to put it into the cache.

$ for i in `seq 1 1 16`; do mysql test -e "SELECT * FROM tc_test"; done
...

mysql> SHOW GLOBAL STATUS LIKE 'Open_tables';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| Open_tables   |    20 |
+---------------+-------+
1 row in set (0,00 sec)

mysql> SELECT current_alloc FROM sys.memory_global_by_current_bytes 
    -> WHERE event_name='memory/sql/TABLE_SHARE::mem_root';
+---------------+
| current_alloc |
+---------------+
|     75.17 KiB |
+---------------+
1 row in set (0,00 sec)

mysql> SELECT current_alloc FROM sys.memory_global_by_current_bytes 
    -> WHERE event_name='memory/sql/sp_head::main_mem_root';
Empty set (0,01 sec)

16 table descriptors took less than 16 KiB in the cache.

Now let’s create a trigger on this table and see if it changes anything.

mysql> \d |
mysql> CREATE TRIGGER tc_test_ai AFTER INSERT ON tc_test FOR EACH ROW 
    -> BEGIN 
    ->   SIGNAL SQLSTATE '45000' SET message_text='Very long string. 
    ->     MySQL stores table descriptors in a special memory buffer, called table open cache. 
    ->     This buffer could be controlled by configuration variables table_open_cache that 
    ->     holds how many table descriptors MySQL should store in the cache and table_open_cache_instances 
    ->     that stores the number of the table cache instances. So with default values of table_open_cache=4000 
    ->     and table_open_cache_instances=16, you will have 16 independent memory buffers that will store 250 
    ->     table descriptors each. These table cache instances could be accessed concurrently, allowing DML 
    ->     to use cached table descriptors without locking each other. If you use only tables, the table cache 
    ->     does not require a lot of memory, because descriptors are lightweight, and even if you significantly 
    ->     increased the value of table_open_cache, it would not be so high. For example, 4000 tables will take 
    ->     up to 4000 x 4K = 16MB in the cache, 100.000 tables will take up to 390MB that is also not quite a huge 
    ->     number for this number of open tables. However, if your tables have triggers, it changes the game.'; 
    -> END|
mysql> \d ;

Then let’s flush the table cache and test memory usage again.

Initial state:

mysql> FLUSH TABLES;
Query OK, 0 rows affected (0,00 sec)

mysql> SHOW GLOBAL STATUS LIKE 'Open_tables';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| Open_tables   |     2 |
+---------------+-------+
1 row in set (0,00 sec)

mysql> SELECT current_alloc FROM sys.memory_global_by_current_bytes 
    -> WHERE event_name='memory/sql/TABLE_SHARE::mem_root';
+---------------+
| current_alloc |
+---------------+
|     60.50 KiB |
+---------------+
1 row in set (0,00 sec)

mysql> SELECT current_alloc FROM sys.memory_global_by_current_bytes 
    -> WHERE event_name='memory/sql/sp_head::main_mem_root';
Empty set (0,00 sec)

After I put the table into the cache:

$ for i in `seq 1 1 16`; do mysql test -e "SELECT * FROM tc_test"; done
...

mysql> SHOW GLOBAL STATUS LIKE 'Open_tables';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| Open_tables   |    20 |
+---------------+-------+
1 row in set (0,00 sec)

mysql> SELECT current_alloc FROM sys.memory_global_by_current_bytes 
    -> WHERE event_name='memory/sql/TABLE_SHARE::mem_root';
+---------------+
| current_alloc |
+---------------+
|     75.17 KiB |
+---------------+
1 row in set (0,00 sec)

mysql> SELECT current_alloc FROM sys.memory_global_by_current_bytes 
    -> WHERE event_name='memory/sql/sp_head::main_mem_root';
+---------------+
| current_alloc |
+---------------+
|    611.12 KiB |
+---------------+
1 row in set (0,00 sec)

As a result, in addition to the 75.17 KiB in the table cache, 611.12 KiB is occupied by memory/sql/sp_head::main_mem_root, which is the "Mem root for parsing and representation of stored programs."

This means that each time the table is put into the table cache, all of its associated triggers are loaded into this memory buffer as well, which stores their parsed definitions.
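
If you want a rough idea of how much trigger text your own schema carries per table, you can sum the trigger definitions from information_schema. This is only a rough proxy: the actual sp_head::main_mem_root allocation for a cached table descriptor is larger than the raw definition length, and it is multiplied by the number of table cache instances the table lands in.

mysql> SELECT event_object_schema, event_object_table, COUNT(*) AS triggers,
    ->        SUM(LENGTH(action_statement)) AS definition_bytes
    -> FROM information_schema.triggers
    -> GROUP BY event_object_schema, event_object_table
    -> ORDER BY definition_bytes DESC;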

The FLUSH TABLES command clears the stored programs cache as well as the table cache:

mysql> FLUSH TABLES;
Query OK, 0 rows affected (0,01 sec)

mysql> SELECT current_alloc FROM sys.memory_global_by_current_bytes 
    -> WHERE event_name='memory/sql/sp_head::main_mem_root';
Empty set (0,00 sec)

The more triggers a table has, the more memory is used when its descriptor is put into the cache.

For example, if we create five more triggers and repeat our test, we will see the following numbers:

mysql> \d |
mysql> CREATE TRIGGER tc_test_bi BEFORE INSERT ON tc_test FOR EACH ROW 
    -> BEGIN 
    ->   SIGNAL SQLSTATE '45000' SET message_text='Very long string. MySQL stores table descriptors in a special memory buffer, called table open cache. 
    ->     This buffer could be controlled by configuration variables table_open_cache that holds how many table descriptors MySQL should store in the cache 
    ->     and table_open_cache_instances that stores the number of the table cache instances. So with default values of table_open_cache=4000 and 
    ->     table_open_cache_instances=16, you will have 16 independent memory buffers that will store 250 table descriptors each. These table cache instances 
    ->     could be accessed concurrently, allowing DML to use cached table descriptors without locking each other. If you use only tables, the table cache 
    ->     does not require a lot of memory, because descriptors are lightweight, and even if you significantly increased the value of table_open_cache, it 
    ->     would not be so high. For example, 4000 tables will take up to 4000 x 4K = 16MB in the cache, 100.000 tables will take up to 390MB that is also 
    ->     not quite a huge number for this number of open tables. However, if your tables have triggers, it changes the game.'; 
    -> END|
Query OK, 0 rows affected (0,01 sec)

mysql> CREATE TRIGGER tc_test_bu BEFORE UPDATE ON tc_test FOR EACH ROW 
    -> BEGIN 
    ->   SIGNAL SQLSTATE '45000' SET message_text='Very long string. MySQL stores table descriptors in a special memory buffer, called table open cache. 
    ->     This buffer could be controlled by configuration variables table_open_cache that holds how many table descriptors MySQL should store in the cache 
    ->     and table_open_cache_instances that stores the number of the table cache instances. So with default values of table_open_cache=4000 and 
    ->     table_open_cache_instances=16, you will have 16 independent memory buffers that will store 250 table descriptors each. These table cache instances 
    ->     could be accessed concurrently, allowing DML to use cached table descriptors without locking each other. If you use only tables, the table cache 
    ->     does not require a lot of memory, because descriptors are lightweight, and even if you significantly increased the value of table_open_cache, it 
    ->     would not be so high. For example, 4000 tables will take up to 4000 x 4K = 16MB in the cache, 100.000 tables will take up to 390MB that is also 
    ->     not quite a huge number for this number of open tables. However, if your tables have triggers, it changes the game.'; 
    -> END|
Query OK, 0 rows affected (0,02 sec)

mysql> CREATE TRIGGER tc_test_bd BEFORE DELETE ON tc_test FOR EACH ROW 
    -> BEGIN 
    ->   SIGNAL SQLSTATE '45000' SET message_text='Very long string. MySQL stores table descriptors in a special memory buffer, called table open cache. 
    ->     This buffer could be controlled by configuration variables table_open_cache that holds how many table descriptors MySQL should store in the cache 
    ->     and table_open_cache_instances that stores the number of the table cache instances. So with default values of table_open_cache=4000 and 
    ->     table_open_cache_instances=16, you will have 16 independent memory buffers that will store 250 table descriptors each. These table cache instances 
    ->     could be accessed concurrently, allowing DML to use cached table descriptors without locking each other. If you use only tables, the table cache 
    ->     does not require a lot of memory, because descriptors are lightweight, and even if you significantly increased the value of table_open_cache, it 
    ->     would not be so high. For example, 4000 tables will take up to 4000 x 4K = 16MB in the cache, 100.000 tables will take up to 390MB that is also 
    ->     not quite a huge number for this number of open tables. However, if your tables have triggers, it changes the game.'; 
    -> END|
Query OK, 0 rows affected (0,01 sec)

mysql> CREATE TRIGGER tc_test_au AFTER UPDATE ON tc_test FOR EACH ROW 
    -> BEGIN 
    ->   SIGNAL SQLSTATE '45000' SET message_text='Very long string. MySQL stores table descriptors in a special memory buffer, called table open cache. 
    ->     This buffer could be controlled by configuration variables table_open_cache that holds how many table descriptors MySQL should store in the cache 
    ->     and table_open_cache_instances that stores the number of the table cache instances. So with default values of table_open_cache=4000 and 
    ->     table_open_cache_instances=16, you will have 16 independent memory buffers that will store 250 table descriptors each. These table cache instances 
    ->     could be accessed concurrently, allowing DML to use cached table descriptors without locking each other. If you use only tables, the table cache 
    ->     does not require a lot of memory, because descriptors are lightweight, and even if you significantly increased the value of table_open_cache, it 
    ->     would not be so high. For example, 4000 tables will take up to 4000 x 4K = 16MB in the cache, 100.000 tables will take up to 390MB that is also 
    ->     not quite a huge number for this number of open tables. However, if your tables have triggers, it changes the game.'; 
    -> END|
Query OK, 0 rows affected (0,01 sec)

mysql> CREATE TRIGGER tc_test_ad AFTER DELETE ON tc_test FOR EACH ROW 
    -> BEGIN 
    ->   SIGNAL SQLSTATE '45000' SET message_text='Very long string. MySQL stores table descriptors in a special memory buffer, called table open cache. 
    ->     This buffer could be controlled by configuration variables table_open_cache that holds how many table descriptors MySQL should store in the cache 
    ->     and table_open_cache_instances that stores the number of the table cache instances. So with default values of table_open_cache=4000 and 
    ->     table_open_cache_instances=16, you will have 16 independent memory buffers that will store 250 table descriptors each. These table cache instances 
    ->     could be accessed concurrently, allowing DML to use cached table descriptors without locking each other. If you use only tables, the table cache 
    ->     does not require a lot of memory, because descriptors are lightweight, and even if you significantly increased the value of table_open_cache, it 
    ->     would not be so high. For example, 4000 tables will take up to 4000 x 4K = 16MB in the cache, 100.000 tables will take up to 390MB that is also 
    ->     not quite a huge number for this number of open tables. However, if your tables have triggers, it changes the game.'; 
    -> END|
Query OK, 0 rows affected (0,01 sec)

mysql> \d ;

mysql> SHOW GLOBAL STATUS LIKE 'Open_tables';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| Open_tables   |    35 |
+---------------+-------+
1 row in set (0,00 sec)

mysql> SELECT current_alloc FROM sys.memory_global_by_current_bytes 
    -> WHERE event_name='memory/sql/TABLE_SHARE::mem_root';
+---------------+
| current_alloc |
+---------------+
|    446.23 KiB |
+---------------+
1 row in set (0,00 sec)

mysql> SELECT current_alloc FROM sys.memory_global_by_current_bytes 
    -> WHERE event_name='memory/sql/sp_head::main_mem_root';
+---------------+
| current_alloc |
+---------------+
|      3.58 MiB |
+---------------+
1 row in set (0,00 sec)

The numbers for the memory/sql/sp_head::main_mem_root event differ by a factor of about six:

mysql> SELECT 3.58*1024/611.12;
+------------------+
| 3.58*1024/611.12 |
+------------------+
|         5.998691 |
+------------------+
1 row in set (0,00 sec)

Note that the length of the trigger definition affects the amount of memory allocated for memory/sql/sp_head::main_mem_root.

For example, if we define the triggers as follows:

mysql> DROP TABLE tc_test;
Query OK, 0 rows affected (0,02 sec)

mysql> CREATE TABLE tc_test( f1 INT);
Query OK, 0 rows affected (0,03 sec)

mysql> INSERT INTO tc_test VALUES(1);
Query OK, 1 row affected (0,01 sec)

mysql> \d |
mysql> CREATE TRIGGER tc_test_ai AFTER INSERT ON tc_test FOR EACH ROW BEGIN SIGNAL SQLSTATE '45000'
SET message_text='Short string';end |
Query OK, 0 rows affected (0,01 sec)

mysql> CREATE TRIGGER tc_test_au AFTER UPDATE ON tc_test FOR EACH ROW BEGIN SIGNAL SQLSTATE '45000'
SET message_text='Short string';end |
Query OK, 0 rows affected (0,04 sec)

mysql> CREATE TRIGGER tc_test_ad AFTER DELETE ON tc_test FOR EACH ROW BEGIN SIGNAL SQLSTATE '45000'
SET message_text='Short string';end |
Query OK, 0 rows affected (0,01 sec)

mysql> CREATE TRIGGER tc_test_bi BEFORE INSERT ON tc_test FOR EACH ROW BEGIN SIGNAL SQLSTATE '45000'
SET message_text='Short string';end |
Query OK, 0 rows affected (0,01 sec)

mysql> CREATE TRIGGER tc_test_bu BEFORE UPDATE ON tc_test FOR EACH ROW BEGIN SIGNAL SQLSTATE '45000'
SET message_text='Short string';end |
Query OK, 0 rows affected (0,02 sec)

mysql> CREATE TRIGGER tc_test_bd BEFORE DELETE ON tc_test FOR EACH ROW BEGIN SIGNAL SQLSTATE '45000'
SET message_text='Short string';end |
Query OK, 0 rows affected (0,01 sec)

mysql> \d ;
mysql> FLUSH TABLES;
Query OK, 0 rows affected (0,00 sec)

mysql> SHOW GLOBAL STATUS LIKE 'Open_tables';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| Open_tables   |     2 |
+---------------+-------+
1 row in set (0,01 sec)

mysql> SELECT current_alloc FROM sys.memory_global_by_current_bytes 
    -> WHERE event_name='memory/sql/TABLE_SHARE::mem_root';
+---------------+
| current_alloc |
+---------------+
|     60.50 KiB |
+---------------+
1 row in set (0,00 sec)

mysql> SELECT current_alloc FROM sys.memory_global_by_current_bytes 
    -> WHERE event_name='memory/sql/sp_head::main_mem_root';
Empty set (0,00 sec)

$ for i in `seq 1 1 16`; do mysql test -e "select * from tc_test"; done
...

mysql> SHOW GLOBAL STATUS LIKE 'Open_tables';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| Open_tables   |    35 |
+---------------+-------+
1 row in set (0,00 sec)

mysql> SELECT current_alloc FROM sys.memory_global_by_current_bytes 
    -> WHERE event_name='memory/sql/TABLE_SHARE::mem_root';
+---------------+
| current_alloc |
+---------------+
|    446.23 KiB |
+---------------+
1 row in set (0,00 sec)

mysql> SELECT current_alloc FROM sys.memory_global_by_current_bytes 
    -> WHERE event_name='memory/sql/sp_head::main_mem_root';
+---------------+
| current_alloc |
+---------------+
|      1.89 MiB |
+---------------+
1 row in set (0,00 sec)

The resulting amount of memory is 1.89 MiB instead of 3.58 MiB for the longer trigger definition.

Note that having a single table cache instance requires less memory to store trigger definitions. For example, for our six small triggers it will be 121.12 KiB instead of 1.89 MiB (with 16 instances, each instance holds its own copy: 121.12 KiB x 16 ≈ 1.89 MiB, which matches the earlier measurement):

mysql> SHOW GLOBAL VARIABLES LIKE 'table_open_cache_instances';
+----------------------------+-------+
| Variable_name              | Value |
+----------------------------+-------+
| table_open_cache_instances |     1 |
+----------------------------+-------+
1 row in set (0,00 sec)

mysql> FLUSH TABLES;
Query OK, 0 rows affected (0,00 sec)

mysql> SHOW GLOBAL STATUS LIKE 'Open_tables';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| Open_tables   |     2 |
+---------------+-------+
1 row in set (0,00 sec)

mysql> SELECT current_alloc FROM sys.memory_global_by_current_bytes 
    -> WHERE event_name='memory/sql/sp_head::main_mem_root';
Empty set (0,00 sec)

$ for i in `seq 1 1 16`; do mysql test -e "select * from tc_test"; done
...

mysql> SELECT current_alloc FROM sys.memory_global_by_current_bytes 
    -> WHERE event_name='memory/sql/sp_head::main_mem_root';
+---------------+
| current_alloc |
+---------------+
|    121.12 KiB |
+---------------+
1 row in set (0,00 sec)

Conclusion

When you access tables that have associated triggers, their definitions are put into the stored programs cache even if the triggers are never fired. This was reported as MySQL Bug #86821 and closed as “Not a Bug” by Oracle. It is, indeed, not a bug but a consequence of the table cache and stored routines cache design. Still, it is good to be prepared, so you are not surprised when you run out of memory faster than you expect, especially if you have many triggers with long definitions.

Percona Database Performance Blog

BREAKING: Mark McCloskey Argues with BLM Protestors While Rittenhouse Jury Deliberates [VIDEO]

https://cdn0.thetruthaboutguns.com/wp-content/uploads/2021/11/2021-11-16_17-02-30.png

Mark McCloskey Kenosha protestor
Mark McCloskey gets into it verbally with a protestor outside the courthouse in Kenosha. (Photo credit: Fox News)


While jurors deliberate inside the courthouse in Kenosha, Wisconsin in the Kyle Rittenhouse case – the outcome of which promises to be important for gun owners everywhere – Mark McCloskey is outside arguing with protestors. Why? Because there’s never a dull moment in what has been a circus of a two-week trial.

Fox News reports:

Mark McCloskey, the St. Louis lawyer who made national headlines last year when he carried a gun on his property near a social justice protest in his neighborhood, argued with a protester outside the Kenosha County Courthouse on Tuesday afternoon. 

“It really hurts me that you would have that much hatred,” the protester told McCloskey. 

“There is absolutely no hatred involved in what I did,” McCloskey responded. “They came in, storming through my gate, broke down my gate, stormed toward my house, and I was afraid for my life.”

If the jurors can’t reach a decision in the case, Judge Bruce Schroeder will be polling them to find out if they want to continue deliberating. Is McCloskey helping by grabbing another fifteen minutes of fame – or would that be infamy? – on the courthouse steps? Seems unlikely.

Here’s part of the exchange between McCloskey and a protestor.


The Truth About Guns

How to add Server Timing Header information for Laravel Application

https://postsrc.com/storage/images/snippets/how-to-add-server-timing-header-information-for-laravel-application.jpeg

To get server timing information and pass it in the response header of your Laravel application, you can make use of the “laravel-server-timing” package by beyondcode. This package reports how many milliseconds the application takes to bootstrap, execute your code, and run in total.

Add Server-Timing header information from within your Laravel apps.

Step 1: Install the package using Composer

First, install the package via Composer using the command below.

composer require beyondcode/laravel-server-timing

Step 2: Add the Middleware Class

To add Server-Timing header information, you need to add the \BeyondCode\ServerTiming\Middleware\ServerTimingMiddleware::class middleware to your HTTP Kernel. To get the most accurate results, register the middleware as the first one in the global middleware stack.

\BeyondCode\ServerTiming\Middleware\ServerTimingMiddleware::class
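
Here is a minimal sketch of what that can look like in app/Http/Kernel.php, assuming a standard Laravel 8 application skeleton (only the relevant part is shown; keep the rest of your existing global middleware below it):

<?php

namespace App\Http;

use Illuminate\Foundation\Http\Kernel as HttpKernel;

class Kernel extends HttpKernel
{
    /**
     * The application's global HTTP middleware stack.
     */
    protected $middleware = [
        // Register the package middleware first so the timing covers
        // as much of the request lifecycle as possible.
        \BeyondCode\ServerTiming\Middleware\ServerTimingMiddleware::class,

        // ...the rest of your global middleware goes here.
    ];
}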

Step 3: View the timing from the browser

To view the timings, open your browser’s developer tools, inspect the request in the Network panel, and look at its Timing tab, where the Server Timing entries are listed.

Server Timing tabs
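
Under the hood, the browser is simply reading the standard Server-Timing response header. An illustrative header looks roughly like this (the metric names and durations here are made up; the exact entries depend on your application and on what the package measures):

Server-Timing: bootstrap;dur=45.2, app;dur=87.3, total;dur=132.5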

Adding Additional Measurements

Sometimes you might also want to add additional measurements of your own by using the code below. By doing so, you will be able to see how long specific pieces of code take to execute and whether they require more optimization for speedy performance.

ServerTiming::start('Running expensive task');

// do something

ServerTiming::stop('Running expensive task');

Optional: Publish the configuration

You can also publish the configuration by running the command below.

php artisan vendor:publish --tag=server-timing-config

After publishing, you can adjust the configuration (timing.php) as follows so that the enabled flag syncs with an .env variable (for example, SERVER_TIMING=true):

<?php

return [
    'enabled' => env('SERVER_TIMING', false),
];

Laravel News Links

Brownells Launches the Interactive “How to Build an AR-15” Video Series

https://www.thefirearmblog.com/blog/wp-content/uploads/2021/11/unnamed-6-180×180.jpg

With increasing restrictions in regards to firearms on YouTube and other social media platforms, new gun owners and those who are interested in building out their first AR are finding it harder and harder to find reliable and simple instructions for their first AR-15 build. Brownells and the living legend Roy Hill are excited to […]

Read More …

The post Brownells Launches the Interactive “How to Build an AR-15” Video Series appeared first on The Firearm Blog.

The Firearm Blog

Database and Eloquent ORM: New features and improvements since the original Laravel 8 release (1/2)

https://protone.media/img/header_social.jpg


In this series, I show you new features and improvements to the Laravel framework since the original release of version 8. Last week, I wrote about the Collection class. This week is about the Database and Eloquent features in Laravel 8. The team added so many great improvements to the weekly versions that I split the Database and Eloquent features into two blog posts. Here is part one!

I got most code examples and explanations from the PRs and official documentation.

v8.5.0 Added crossJoinSub method to the query builder (#34400)

Add a subquery cross join to the query.

use Illuminate\Support\Facades\DB;

$totalQuery = DB::table('orders')->selectRaw('SUM(price) as total');

DB::table('orders')
    ->select('*')
    ->crossJoinSub($totalQuery, 'overall')
    ->selectRaw('(price / overall.total) * 100 AS percent_of_total')
    ->get();

v8.10.0 Added is() method to 1-1 relations for model comparison (#34693)

We can now do model comparisons between related models, without extra database calls!

// Before: foreign key is leaking from the post model
$post->author_id === $user->id;

// Before: performs extra query to fetch the user model from the author relation
$post->author->is($user);

// After
$post->author()->is($user);

v8.10.0 Added upsert to Eloquent and Base Query Builders (#34698, #34712)

If you would like to perform multiple “upserts” in a single query, you may use the upsert method instead of multiple updateOrCreate calls. The second argument lists the column(s) that uniquely identify records, and the third argument lists the columns that should be updated if a matching record already exists.

Flight::upsert([
    ['departure' => 'Oakland', 'destination' => 'San Diego', 'price' => 99],
    ['departure' => 'Chicago', 'destination' => 'New York', 'price' => 150]
], ['departure', 'destination'], ['price']);

v8.12.0 Added explain() to Query\Builder and Eloquent\Builder (#34969)

The explain() method allows you to receive the explanation from the builder (both Query and Eloquent).

User::where('name', 'Illia Sakovich')->explain();

User::where('name', 'Illia Sakovich')->explain()->dd();

v8.15.0 Added support of MorphTo relationship eager loading constraints (#35190)

If you are eager loading a morphTo relationship, Eloquent will run multiple queries to fetch each type of related model. You may add additional constraints to each of these queries using the MorphTo relation’s constrain method:

use Illuminate\Database\Eloquent\Builder;
use Illuminate\Database\Eloquent\Relations\MorphTo;

$comments = Comment::with(['commentable' => function (MorphTo $morphTo) {
    $morphTo->constrain([
        Post::class => function (Builder $query) {
            $query->whereNull('hidden_at');
        },
        Video::class => function (Builder $query) {
            $query->where('type', 'educational');
        },
    ]);
}])->get();

v8.17.2 Added BelongsToMany::orderByPivot() (#35455)

This method allows you to directly order the query results of a BelongsToMany relation:

class Tag extends Model
{
    public $table = 'tags';
}

class Post extends Model
{
    public $table = 'posts';

    public function tags()
    {
        return $this->belongsToMany(Tag::class, 'posts_tags', 'post_id', 'tag_id')
            ->using(PostTagPivot::class)
            ->withTimestamps()
            ->withPivot('flag');
    }
}

class PostTagPivot extends Pivot
{
    protected $table = 'posts_tags';
}

// Somewhere in a controller
public function getPostTags($id)
{
    return Post::findOrFail($id)->tags()->orderByPivot('flag', 'desc')->get();
}

The sole method will return the only record that matches the criteria. If no records are found, a RecordsNotFoundException will be thrown. If multiple records are found, a MultipleRecordsFoundException will be thrown.

DB::table('products')->where('ref', '#123')->sole();

v8.27.0 Allow adding multiple columns after a column (#36145)

The after method may be used to add columns after an existing column in the schema:

Schema::table('users', function (Blueprint $table) {
    $table->after('remember_token', function ($table){
        $table->string('card_brand')->nullable();
        $table->string('card_last_four', 4)->nullable();
    });
});

v8.37.0 Added anonymous migrations (#36906)

Laravel automatically assigns a class name to each generated migration. You may now return an anonymous class from your migration file instead:

use Illuminate\Database\Migrations\Migration;
use Illuminate\Database\Schema\Blueprint;
use Illuminate\Support\Facades\Schema;

return new class extends Migration {
    public function up()
    {
        Schema::table('people', function (Blueprint $table) {
            $table->string('first_name')->nullable();
        });
    }
};

v8.27.0 Add query builder map method (#36193)

The new chunkMap method is similar to the each query builder method, where it automatically chunks over the results:

return User::orderBy('name')->chunkMap(fn ($user) => [
    'id' => $user->id,
    'name' => $user->name,
], 25);

v8.28.0 ArrayObject + Collection Custom Casts (#36245)

Since the array cast returns a primitive type, it is not possible to mutate an offset of the array directly. To solve this, the AsArrayObject cast casts your JSON attribute to an ArrayObject class:

// Within model...
$casts = ['options' => AsArrayObject::class];

// Manipulating the options...
$user = User::find(1);

$user->options['foo']['bar'] = 'baz';

$user->save();

v8.40.0 Added Eloquent\Builder::withOnly() (#37144)

If you would like to override all items within the $with property for a single query, you may use the withOnly method:

class Product extends Model{
    protected $with = ['prices', 'colours', 'brand'];

    public function colours(){ ... }
    public function prices(){ ... }
    public function brand(){ ... }
}

Product::withOnly(['brand'])->get();

v8.41.0 Added cursor pagination (aka keyset pagination) (#37216, #37315)

Cursor-based pagination places a “cursor” string in the query string: an encoded value containing the location where the next paginated query should start paginating and the direction in which it should paginate. This method of pagination is particularly well-suited for large data sets and “infinite” scrolling user interfaces.

use App\Models\User;
use Illuminate\Support\Facades\DB;

$users = User::orderBy('id')->cursorPaginate(10);
$users = DB::table('users')->orderBy('id')->cursorPaginate(10);

v8.12.0 Added withMax, withMin, withSum and withAvg methods to QueriesRelationships (#34965, #35004)

In addition to the withCount method, Eloquent now provides withMin, withMax, withAvg, and withSum methods. These methods will place a {relation}_{function}_{column} attribute on your resulting models.

Post::withCount('comments');

Post::withMin('comments', 'created_at');
Post::withMax('comments', 'created_at');
Post::withSum('comments', 'foo');
Post::withAvg('comments', 'foo');

Under the hood, these methods use the withAggregate method:

Post::withAggregate('comments', 'created_at', 'distinct');
Post::withAggregate('comments', 'content', 'length');
Post::withAggregate('comments', 'created_at', 'custom_function');

Comment::withAggregate('post', 'title');
Post::withAggregate('comments', 'content');

v8.13.0 Added loadMax, loadMin, loadSum and loadAvg methods to Eloquent\Collection. Added loadMax, loadMin, loadSum, loadAvg, loadMorphMax, loadMorphMin, loadMorphSum and loadMorphAvg methods to Eloquent\Model (#35029)

In addition to the new with* methods above, new load* methods have been added to the Collection and Model classes.

// Eloquent/Collection
public function loadAggregate($relations, $column, $function = null) {...}
public function loadCount($relations) {...}
public function loadMax($relations, $column)  {...}
public function loadMin($relations, $column)  {...}
public function loadSum($relations, $column)  {...}
public function loadAvg($relations, $column)  {...}

// Eloquent/Model
public function loadAggregate($relations, $column, $function = null) {...}
public function loadCount($relations) {...}
public function loadMax($relations, $column) {...}
public function loadMin($relations, $column) {...}
public function loadSum($relations, $column) {...}
public function loadAvg($relations, $column) {...}

public function loadMorphAggregate($relation, $relations, $column, $function = null) {...}
public function loadMorphCount($relation, $relations) {...}
public function loadMorphMax($relation, $relations, $column) {...}
public function loadMorphMin($relation, $relations, $column) {...}
public function loadMorphSum($relation, $relations, $column) {...}
public function loadMorphAvg($relation, $relations, $column) {...}

v8.13.0 Modify QueriesRelationships::has() method to support MorphTo relations (#35050)

Add a polymorphic relationship count / exists condition to the query.

public function hasMorph($relation, ...)

public function orHasMorph($relation, ...)
public function doesntHaveMorph($relation, ...)
public function orDoesntHaveMorph($relation, ...)
public function whereHasMorph($relation, ...)
public function orWhereHasMorph($relation, ...)
public function whereDoesntHaveMorph($relation, ...)

Example with a closure to customize the relationship query:

// Retrieve comments associated to posts or videos with a title like code%...
$comments = Comment::whereHasMorph(
    'commentable',
    [Post::class, Video::class],
    function (Builder $query) {
        $query->where('title', 'like', 'code%');
    }
)->get();

// Retrieve comments associated to posts with a title not like code%...
$comments = Comment::whereDoesntHaveMorph(
    'commentable',
    Post::class,
    function (Builder $query) {
        $query->where('title', 'like', 'code%');
    }
)->get();

Laravel News Links

Explosive Admission at Rittenhouse Trial – Video

https://www.ammoland.com/wp-content/uploads/2021/11/Facepalm-by-prosecutor-Krautz-after-Grosskruetz-says-he-was-pointing-gun-at-Rittenhouse-when-he-is-shot-1000-500×368.jpg

U.S.A.-(AmmoLand.com)- On day six of the Kyle Rittenhouse trial in Kenosha, Wisconsin, November 8, 2021, the court video captured a particularly dramatic moment.

Gaige Grosskreutz has given testimony as a prosecution witness. He is being cross-examined by Corey Chirafisi, a defense attorney. It occurs about 2 hours and 27 minutes into this trial video on November 8, 2021. Over the next few minutes, there is this exchange:

Defense attorney Corey Chirafisi:

So, your hands are up, and at that point he (Rittenhouse) has not fired. Correct?

Gaige Grosskreutz:

No he has not.

Defense attorney Corey Chirafisi:

Do you agree at this point, you are dropping your hands, you are loading up your left foot, and you are moving toward Mr. Rittenhouse, at that point, True?

Gaige Grosskreutz:

Yes. 

Defense attorney Corey Chirafisi:

So, When you were shot; Can you bring up the photo? Do you agree, and now wait, how close were you, in the… How close were you, from the background.

Gaige Grosskreutz:

Three feet. If I was five feet before, so

Defense attorney Corey Chirafisi:

At this point, you are holding a loaded chambered Glock 27 in your right hand, Yes?

Gaige Grosskreutz:

That is correct, yes.

Defense attorney Corey Chirafisi:

You are advancing on Mr. Rittenhouse, who is seated on his butt, right?

Gaige Grosskreutz:

That is correct.

Defense attorney Corey Chirafisi:

 You are moving forward and your right hand drops down so your gun, your hands are no longer up, your hand has dropped down and now your gun is pointed in the direction, at Mr. Rittenhouse, agreed?  I will give you another  (exhibit?), and maybe that will help.

Defense attorney Corey Chirafisi:

So Mr. Grosskreutz, I am going to show you what has been marked as exhibit #67.

exhibit #67 from Rittenhouse Trial

That is a photo of you, Yes?

Gaige Grosskreutz:

Yes.

Defense attorney Corey Chirafisi:

That is Mr. Rittenhouse?

Gaige Grosskreutz:

Correct.

Defense attorney Corey Chirafisi:

Do you agree your firearm is pointed at Mr. Rittenhouse? Correct?

Gaige Grosskreutz:

 Yes.

Defense attorney Corey Chirafisi:

 Ok. And, Once your firearm is pointed at Mr. Rittenhouse, that’s when he fires, Yes?

Gaige Grosskreutz:

Yeah.

Defense attorney Corey Chirafisi:

Does this look like right when he was firing the shot?  (#67, moment of Rittenhouse’s shot)

Gaige Grosskreutz:

That looks like my bicep being vaporized, yes.

Defense attorney Corey Chirafisi:

And it was vaporized at the time you are pointing your gun directly at him?

Gaige Grosskreutz:

Yes.

Defense attorney Corey Chirafisi:

When you were standing 3-5 feet from him, with your arms up in the air, he never fired? Right?

Gaige Grosskreutz:

Correct.

Defense attorney Corey Chirafisi:

It wasn’t until you pointed your gun at him, advanced on him,  with your gun, now your hand is down, pointed at him, that he fired? Right?

Gaige Grosskreutz:

Correct. 

The camera is pointed at Gaige Grosskreutz, so we cannot see the prosecutor’s table. The camera then shows Kyle Rittenhouse for a few seconds. Then it shows the prosecutor’s table. A dramatic image is captured, one that will probably become iconic as an example of the kind of image you do not want presented in court.

Prosecutor’s table, Kraus, left, Binger with glasses, right

Of interest, Gaige Grosskreutz showed significant function in his right arm and hand. He was able to hold and raise a water bottle and the microphone easily, with considerable fine motor control in his fingers.

Becky Sullivan, a National Public Radio (NPR) reporter and producer who misreported information about a critical juncture of the shooting of Joseph Rosenbaum in the Rittenhouse trial, had this take on the Gaige Grosskreutz testimony. From NPR:


Updated November 8, 2021 at 3:27 PM ET

Gaige Grosskreutz, the only person who survived being shot by Kyle Rittenhouse last year at a chaotic demonstration in Kenosha, Wis., took the stand in a pivotal moment in Rittenhouse’s homicide trial. In three hours of dramatic testimony Monday, Grosskreutz, 27, acknowledged that he was armed with a pistol on the evening of Aug. 25, 2020, but said that his hands were raised when Rittenhouse raised his rifle at him and that he feared for his life.

Ms. Sullivan failed to mention Gaige Grosskreutz testified he was pointing his pistol at Kyle Rittenhouse when Kyle shot him.

The trial has had another day where the prosecution’s witnesses appeared to be defense witnesses.


Complete Live Trial Video:


About Dean Weingarten:

Dean Weingarten has been a peace officer, a military officer, was on the University of Wisconsin Pistol Team for four years, and was first certified to teach firearms safety in 1973. He taught the Arizona concealed carry course for fifteen years until the goal of Constitutional Carry was attained. He has degrees in meteorology and mining engineering and retired from the Department of Defense after a 30-year career in Army Research, Development, Testing, and Evaluation.

Dean Weingarten

AmmoLand.com