Mapping Sci-Fi Locations in Real Space

https://theawesomer.com/photos/2023/09/mapping_sci_fi_locations_in_space_t.jpg


Science fiction books, TV shows, and movies often set their stories in real locations in space. The Overview Effect put together a visualization that charts the settings of fictional works like Star Trek, Alien, and Dune, using actual places in the universe to illustrate their distances and relationships.

The Awesomer

How to Use FreedomGPT to Install Free ChatGPT Alternatives Locally on Your Computer

https://i.kinja-img.com/gawker-media/image/upload/c_fill,f_auto,fl_progressive,g_center,h_675,pg_1,q_80,w_1200/9ddb7f2adc0fe8bb0feb3673ef11a842.jpg

The number of ways you can chat with generative AI engines continues to grow, from ChatGPT to Claude to Google Bard to Bing AI, and FreedomGPT is one of the latest options you can add to your list of potential conversational partners. Here we’re going to take you through the key features that mark it out as being a little different and show you what it’s like to use.


There are two main reasons you might want to use FreedomGPT: First, it can run locally on your computer, without any internet access. Second, it’s completely uncensored, which may or may not be an advantage depending on where in the world you live, the restrictions placed on your web access, and what you want out of your AI bots.

FreedomGPT wants to offer something different.
Screenshot: FreedomGPT

In other words, the engine will produce some very controversial takes if prompted in the right way, and you should be prepared for that if you’re going to make use of it. As you may have noticed, ChatGPT will refuse to answer certain categories of questions—covering areas such as financial advice or illegal activities—but FreedomGPT has no such qualms.

It’s also free to use at this point, which combined with the local installation option, may make it worth your while to at least try out. There is a web version available too, which confusingly deals with a different set of AI models and can’t be accessed for free, but it’s the downloadable version that we’re going to be focusing on here.

You can set up FreedomGPT on your computer by heading to the FreedomGPT website and following the link for either the Windows or the macOS download. You’ll then be asked to pick the AI model you want to use with FreedomGPT, and whether you want the full (and slower) version or the fast (and less complete) version of the model.

You’ve got a selection of language models to pick from.
Screenshot: FreedomGPT

Your two options are LLaMA, as released publicly by Meta, and Alpaca, a version of LLaMA fine-tuned by Stanford researchers which is more ChatGPT-like in its behavior. You’ll also be shown the download size for each model, and the amount of RAM you need on your local machine.

With the downloading and the installing out of the way, you’re free to start experimenting with FreedomGPT on your Windows or macOS machine. We’ll give you a few ideas here for how you can use the software while staying well away from anything ethically or legally dubious—prompts that ChatGPT would flat-out refuse to respond to.

Using FreedomGPT

FreedomGPT will open up with a ChatGPT-style interface, but at the moment it’s not quite as friendly as the OpenAI-developed alternative. All of your chats are bunched together in the same conversation, and to start again from scratch you need to close down and restart the app, or choose View and Reload from the menus at the top.

At that point, all of your existing chats will be lost for good, and the AI bot isn’t great at remembering what you’ve already said to it either, which means it’s best suited for standalone questions rather than ongoing chats. These aren’t necessarily dealbreakers when it comes to using FreedomGPT, but it’s important to be aware of its limitations.

FreedomGPT works in a similar way to other AI bots.
Screenshot: FreedomGPT

Up in the top left of the interface, you can switch between the different AI models on offer, and download any that aren’t currently stored on your computer. You’ll also find links to the program’s Discord and GitHub locations online, and these are your best bets for getting help and support with FreedomGPT.

You can interact with FreedomGPT in all the ways you’ll be familiar with from other AI bots: Get explainers on difficult concepts, get ideas for particular projects and activities, hear the pros and cons of a decision you’re weighing up, set up outlines for research that you’re undertaking and so on. It’ll write poetry, come up with idea prompts, and take instructions about the style and tone of its responses.

We didn’t notice too much of a difference between the LLaMA and the Alpaca models, although the latter seemed more comprehensive and more conversational a lot of the time. You can switch between models in the same conversation if you need to, although the program doesn’t leave behind any indication of which model has answered which conversation, which can get confusing.

You can switch between models in the same conversation.
Screenshot: FreedomGPT

It’s worth bearing in mind that FreedomGPT’s offline mode gives you certain privacy protections: your conversations aren’t being monitored, which is something to be wary of when using similar cloud-based services. We tested FreedomGPT in fully offline mode and can confirm it works as normal, the benefit of having everything installed locally.

In the AI gold rush that we’re currently living through, it’s not clear exactly who the winners and the losers are going to be, but FreedomGPT certainly offers something different for the time being—and if you’ve got more than a passing interest in what AI can offer, it’s something to try out.

Gizmodo

Cut & Paste a User Creation Statement with MySQL 8

https://i0.wp.com/lefred.be/wp-content/uploads/2023/09/Selection_533-1.png?w=914&ssl=1

Sometimes it’s convenient to retrieve the user creation statement and to copy it to another server.

However, with caching_sha2_password, the default authentication method since MySQL 8.0, this can become a nightmare: the output is binary, and some bytes can be hidden or decoded differently depending on the terminal and font used.

Let’s have a look:

If we cut the CREATE USER statement and paste it into another server, what will happen?

We can see that we get the following error:

ERROR: 1827 (HY000): The password hash doesn't have the expected format.

How can we deal with that?

The solution, to be able to cut & paste the authentication string without any issue, is to convert it to a hexadecimal (binary) representation, like this:

And then replace the value in the user create statement:

The user creation succeeded, and now let’s test to connect to this second server using the same credentials:
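The statement-building step can also be scripted. Here is a minimal PHP sketch (the function name and sample values are ours, not from the original post) that wraps a hex-encoded hash in a portable CREATE USER statement:

```php
<?php

// Hypothetical helper: given the HEX() form of a caching_sha2_password hash
// (e.g. from SELECT HEX(authentication_string) FROM mysql.user WHERE user = ...),
// build a CREATE USER statement that survives cut & paste between terminals.
function buildPortableCreateUser(string $user, string $host, string $hexHash): string
{
    // The 0x hexadecimal literal avoids pasting raw binary bytes,
    // which terminals and fonts may hide or render differently.
    return sprintf(
        "CREATE USER '%s'@'%s' IDENTIFIED WITH 'caching_sha2_password' AS 0x%s;",
        $user,
        $host,
        $hexHash
    );
}
```

Running the resulting statement on the target server lets MySQL decode the 0x literal back into the original binary hash.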

Using MySQL Shell Plugins

I’ve updated the MySQL Shell Plugins available on GitHub to use the same technique to be able to cut & paste the user creation and the grants:

And for MySQL HeatWave on OCI?

Can we use the generated user creation statement and grants with MySQL HeatWave?

For the user creation there is no problem: it will work. However, for the grants there is a limitation, as some grants are not compatible with, or allowed in, MySQL HeatWave.

The list of grants allowed in HeatWave is available on this page.

Let’s try:

As you can see, some of the privileges are not allowed and the GRANT statements fail.

You can ask the MySQL Shell Plugin to strip the incompatible privileges by using the option ocimds=True:

Now you can use the generated SQL statements with a MySQL HeatWave instance:

Conclusion

As you can see, the default authentication plugin for MySQL 8.0 and 8.1 is more secure but can be complicated to cut and paste. But as we say, “if there is no solution, there is no problem!”, and in this case we also have a solution for copying and pasting the authentication string.

Enjoy MySQL !

Planet MySQL

Laravel Pagination: Display Continuous Numbering of Elements Per Page


A tutorial for displaying continuous numbering of collection items on all pagination pages in a Laravel project.

🌍 The French version of this publication : Pagination Laravel : Afficher une numérotation continue des éléments par page

Pagination in Laravel is a mechanism for dividing data into several pages to facilitate presentation and navigation when there are a large number of results.

Let’s consider a collection of publications or $posts that we retrieve and paginate in the controller’s index method to display 100 per page:

public function index()
{
    $posts = Post::paginate(100);
    return view("posts.index", compact('posts'));
}

On the view resources/views/posts/index.blade.php we can display 100 posts per page and present the links of the different pages of the pagination like this:

@extends('layouts.app')
@section('content')
<table>
        <thead>
            <tr>
                <th>No.</th>
                <th>Title</th>
            </tr>
        </thead>
        <tbody>
            @foreach ($posts as $post)
            <tr>
                <td>{{ $loop->iteration }}</td>
                <td>{{ $post->title }}</td>
            </tr>
            @endforeach
        </tbody>
    </table>

    {{ $posts->links() }}

@endsection

In this source code, $loop->iteration displays the iteration number inside the loop and $posts->links() displays the pagination links.

But notice that for each individual page the iteration restarts at 1. This means that if we’re on page 2, the first item of that page is numbered as if it were the first item of the whole collection.

If we want to display continuous numbering on all pages of a pagination, we can combine the number of elements per page, the current page number and the current iteration:

@foreach ($posts as $post)
<tr>
    <td>{{ $posts->perPage() * ($posts->currentPage() - 1) + $loop->iteration }}</td>
    <td>{{ $post->title }}</td>
</tr>
@endforeach

In this source code we have:

  • $posts->perPage(): the number of elements per page
  • $posts->currentPage(): the current page number

By multiplying the number of elements per page by the current page number minus one, we obtain the starting index for that page. By adding $loop->iteration, we obtain the continuous index for each element of the paginated collection.

So even if you go from page 1 to page 2, the numbering continues from the last index on the previous page.
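The formula can also be expressed as a tiny helper function; here is a minimal sketch (the function name is ours):

```php
<?php

// Compute the continuous (global) index of an item across pagination pages.
// $perPage: items per page; $currentPage: 1-based page number;
// $iteration: 1-based position within the current page ($loop->iteration).
function continuousIndex(int $perPage, int $currentPage, int $iteration): int
{
    // ($currentPage - 1) * $perPage items appear on the preceding pages.
    return $perPage * ($currentPage - 1) + $iteration;
}
```

For example, with 100 items per page, the first item on page 2 gets index 101.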

Take care! 😎

Laravel News Links

Unlocking Real-Time with WebSockets in Laravel with Soketi

https://fajarwz.com/blog/unlocking-real-time-with-websockets-in-laravel-with-soketi/featured-image_hu30557a156be601fd3510404f00108f87_1133773_1200x630_fill_q75_bgffffff_box_smart1_3.jpg

Imagine creating web applications that respond instantly, where data updates and interactions happen in the blink of an eye. Welcome to the world of real-time web development. In this article, we’ll build a simple example of using a WebSocket connection in a Laravel application with Soketi.

We’ll introduce Soketi, set up the necessary tools, and configure Laravel to work with WebSockets. By the end of this article, you’ll have a basic WebSocket system ready to go.

What is Soketi?

Soketi is a simple, fast, and resilient open-source WebSockets server. It simplifies the WebSocket setup process, allowing you to focus on building real-time features without the complexities of WebSocket server management.

Installation

You can read more about Soketi installation instructions in the Soketi official docs.

Before installing the CLI Soketi WebSockets server, make sure you have the required tools:

  • Python 3.x
  • GIT
  • The gcc compiler and the build dependencies

Read more about the CLI installation here.

Step 1: Install Soketi

Begin by installing Soketi globally via npm. Open your terminal and run the following command:

npm install -g @soketi/soketi

This command will install Soketi on your system, allowing you to run it from the command line.

Step 2: Start Soketi Server

With Soketi installed, start the WebSocket server using this command:

soketi start

Soketi will now serve as the WebSocket server for your Laravel application.

Step 3: Install Required Packages

We’ll use the Pusher protocol with Soketi. The pusher/pusher-php-server library provides a PHP interface for the Pusher API, which allows you to send and receive messages from Pusher channels.

composer require pusher/pusher-php-server

For receiving events on the client-side, you’ll also need to install two packages using NPM:

npm install --save-dev laravel-echo pusher-js

These steps will set us up for seamless real-time communication in our Laravel application.

Step 4: Configure Broadcasting

Next, configure broadcasting in Laravel. Open the config/broadcasting.php file and add or modify the Pusher configuration as follows:

'connections' => [

        'pusher' => [
            'driver' => 'pusher',
            'key' => env('PUSHER_APP_KEY'),
            'secret' => env('PUSHER_APP_SECRET'),
            'app_id' => env('PUSHER_APP_ID'),
            'options' => [
                'cluster' => env('PUSHER_APP_CLUSTER'),
                'host' => env('PUSHER_HOST') ?: 'api-'.env('PUSHER_APP_CLUSTER', 'mt1').'.pusher.com',
                'port' => env('PUSHER_PORT', 443),
                'scheme' => env('PUSHER_SCHEME', 'https'),
                'encrypted' => true,
                'useTLS' => env('PUSHER_SCHEME', 'https') === 'https',
            ],
            'client_options' => [
                // Guzzle client options: https://docs.guzzlephp.org/en/stable/request-options.html
            ],
        ],
        // ...

This configuration sets up Pusher as the broadcasting driver for WebSockets.


Step 5: Set Environment Variables

Now, configure the Pusher environment variables in your .env file. Replace the placeholders with your Pusher credentials:

BROADCAST_DRIVER=pusher

# other keys ...

PUSHER_APP_ID=app-id
PUSHER_APP_KEY=app-key
PUSHER_APP_SECRET=app-secret
PUSHER_HOST=127.0.0.1
PUSHER_PORT=6001
PUSHER_SCHEME=http
PUSHER_APP_CLUSTER=mt1

VITE_APP_NAME="${APP_NAME}"
VITE_PUSHER_APP_KEY="${PUSHER_APP_KEY}"
VITE_PUSHER_HOST="${PUSHER_HOST}"
VITE_PUSHER_PORT="${PUSHER_PORT}"
VITE_PUSHER_SCHEME="${PUSHER_SCHEME}"
VITE_PUSHER_APP_CLUSTER="${PUSHER_APP_CLUSTER}"

These variables are essential for Laravel to connect to the WebSocket server.

By default, when we start the Soketi server without additional configuration, it will run on 127.0.0.1:6001 and use the following application credentials:

  • App ID: app-id
  • App Key: app-key
  • Secret: app-secret

These credentials play a crucial role in authenticating your frontend and backend applications, enabling them to send and receive real-time messages. It’s important to note that for production use, you should strongly consider changing these default settings to enhance security and ensure the smooth operation of Soketi.

Step 6: Configure JavaScript for Laravel Echo

In your JavaScript file (typically resources/js/bootstrap.js), configure Laravel Echo to use Pusher:

import Echo from 'laravel-echo';

import Pusher from 'pusher-js';
window.Pusher = Pusher;

window.Echo = new Echo({
    broadcaster: 'pusher',
    key: import.meta.env.VITE_PUSHER_APP_KEY,
    cluster: import.meta.env.VITE_PUSHER_APP_CLUSTER ?? 'mt1',
    wsHost: import.meta.env.VITE_PUSHER_HOST ? import.meta.env.VITE_PUSHER_HOST : `ws-${import.meta.env.VITE_PUSHER_APP_CLUSTER}.pusher.com`,
    wsPort: import.meta.env.VITE_PUSHER_PORT ?? 80,
    wssPort: import.meta.env.VITE_PUSHER_PORT ?? 443,
    forceTLS: (import.meta.env.VITE_PUSHER_SCHEME ?? 'https') === 'https',
    enabledTransports: ['ws', 'wss'],
});

This JavaScript setup allows your Laravel frontend app to communicate with the WebSocket server.

Step 7: Create a Broadcast Event

Now, let’s create a Laravel event to broadcast using WebSockets. For this example, we’ll create a simple event called NewEvent in the app/Events directory, using the following command:

php artisan make:event NewEvent

Update app/Events/NewEvent.php with the following code:

<?php

namespace App\Events;

use Illuminate\Broadcasting\Channel;
use Illuminate\Broadcasting\InteractsWithSockets;
use Illuminate\Broadcasting\PresenceChannel;
use Illuminate\Broadcasting\PrivateChannel;
use Illuminate\Contracts\Broadcasting\ShouldBroadcast;
use Illuminate\Foundation\Events\Dispatchable;
use Illuminate\Queue\SerializesModels;

// make sure to implement the ShouldBroadcast interface. 
// This is needed so that Laravel knows to broadcast the event over a WebSocket connection
class NewEvent implements ShouldBroadcast 
{
    use Dispatchable, InteractsWithSockets, SerializesModels;

    public $message;

    public function __construct($message)
    {
        $this->message = $message;
    }

    public function broadcastOn(): array
    {
        return [
            // we'll broadcast the event on a public channel called new-public-channel.
            new Channel('new-public-channel'),
        ];
    }
}

This event will be broadcasted to the specified channel.

Step 8: Broadcast the Event

In your Laravel routes (e.g., routes/web.php), create a route that dispatches the event:

use App\Events\NewEvent;

Route::get('/event', function () {
    NewEvent::dispatch(request()->msg);

    return 'Message sent!';
});

This route will dispatch the NewEvent event when accessed, simulating a real-time event trigger.

Step 9: Listen for the Event in JavaScript

In your JavaScript, you can now listen for the broadcasted event and handle it accordingly. For example:

// resources/js/bootstrap.js

window.Echo = new Echo({
    // ...
});

window.Echo.channel("new-public-channel").listen("NewEvent", (e) => {
  console.log(e);
});

This JavaScript code listens for the NewEvent broadcast on the new-public-channel and logs the event data.


Step 10: Include the app.js in Our Laravel Frontend

To enable event reception, we need to include app.js, which imports bootstrap.js, in our Laravel frontend. For example, let’s include it in the welcome.blade.php file:

<html lang="{{ str_replace('_', '-', app()->getLocale()) }}">
    <head>
        <meta charset="utf-8">
        <meta name="viewport" content="width=device-width, initial-scale=1">

        <title>Laravel</title>

        <!-- Fonts -->
        <link rel="preconnect" href="https://fonts.bunny.net">
        <link href="https://fonts.bunny.net/css?family=figtree:400,600&display=swap" rel="stylesheet" />

        <!-- Styles -->
        <style>
            <!-- long css -->
        </style>
        <!-- include app.js -->
        @vite(['resources/js/app.js'])
    </head>

Step 11: Test the WebSocket

To test your WebSocket implementation, follow these steps:

  1. Visit the home page that serves welcome.blade.php (e.g., http://127.0.0.1:8000), then open the Developer Tools and navigate to the Console tab.
  2. Open a new tab in your web browser.
  3. Visit the /event route in your Laravel application.
  4. Add an additional query parameter, for example, /event?msg=itworks, to send an event with a message. This action will dispatch the NewEvent event and trigger the JavaScript listener, allowing you to test and verify your WebSocket functionality.
  5. Return to the home page tab and check the Dev Tools Console.

Congratulations! You’ve successfully set up the foundation for Laravel WebSockets using Soketi.

Conclusion

We’ve learned how to set up a WebSocket server, broadcast events, and make our app respond instantly to what users do. It’s like making your app come alive!

Real-time apps can do amazing things, like letting people chat instantly or work together smoothly. With Laravel and Soketi, you’ve got the tools to make these cool things happen. Happy coding!

The repository for this example can be found at fajarwz/blog-laravel-soketi.

Laravel News Links

Laravel 10 Gate And Policy Tutorial With Examples

https://www.laravelia.com/storage/logo.png

Gates and policies are two of the most important authorization topics in Laravel. Like roles and permissions, they let us handle user access control in a Laravel application. Gates are simply closures that determine whether a user is authorized to perform a given action, while policies are classes that organize authorization logic around a particular model or resource.

In this tutorial, we will look at both gates and policies with practical examples, using the latest Laravel 10 application. I will add a role column to the users table, create different types of roles, and finally show you how to use a gate to control user access based on that role.


First of all, let’s see the example of the Laravel Gates tutorial:

Step 1: Download Laravel

First of all, we need a completely fresh Laravel 10 application, so create one with this command:

composer create-project --prefer-dist laravel/laravel blog


Step 2: Create Migration 

Run the following command to create a migration that adds a role column to the users table:

php artisan make:migration add_role_column_to_users_table

 

Now open this migration file and update it with the code below.

<?php

use Illuminate\Database\Migrations\Migration;
use Illuminate\Database\Schema\Blueprint;
use Illuminate\Support\Facades\Schema;

return new class extends Migration
{
    /**
     * Run the migrations.
     */
    public function up(): void
    {
        Schema::table('users', function (Blueprint $table) {
            $table->enum('role', ['admin', 'staff'])->default('staff');
        });
    }

    /**
     * Reverse the migrations.
     */
    public function down(): void
    {
        Schema::table('users', function (Blueprint $table) {
            $table->dropColumn('role');
        });
    }
};

Step 3: Connect Database

Now configure the database connection, since we need to run the migration for this example:

.env

DB_CONNECTION=mysql
DB_HOST=127.0.0.1
DB_PORT=3306
DB_DATABASE=laravel
DB_USERNAME=root
DB_PASSWORD=password


Now run migrate to add this table.

php artisan migrate


Read also: Laravel 10 Roles And Permissions Without Package


Step 4:  Create Gates

Now it’s time to create gates for our application. Update the following file:

app/Providers/AuthServiceProvider.php

<?php

namespace App\Providers;

use Illuminate\Support\Facades\Gate;
use Illuminate\Foundation\Support\Providers\AuthServiceProvider as ServiceProvider;

class AuthServiceProvider extends ServiceProvider
{
    /**
     * The model to policy mappings for the application.
     *
     * @var array<class-string, class-string>
     */
    protected $policies = [
        //
    ];

    /**
     * Register any authentication / authorization services.
     */
    public function boot(): void
    {
        Gate::define('isAdmin', function ($user) {
            return $user->role == 'admin';
        });

        Gate::define('isStaff', function ($user) {
            return $user->role == 'staff';
        });
    }
}


Step 5:  Use Gates in Blade

Now our gate declaration is done. Next, let’s see how to use those gates in a Laravel Blade file. See the example:

resources/views/welcome.blade.php


@extends('layouts.app')
  
@section('content')
<div class="container">
    <div class="row justify-content-center">
        <div class="col-md-8">
            <div class="card">
                <div class="card-header">Dashboard</div>
   
                <div class="card-body">
                    @can('isAdmin')
                        <div class="btn btn-success btn-lg">
                          You have Admin Access
                        </div>
                    @endcan

                    @can('isStaff')
                        <div class="btn btn-primary btn-lg">
                          You have Staff Access
                        </div>
                    @else
                        <div class="btn btn-info btn-lg">
                          Default access
                        </div>
                    @endcan
  
                </div>
            </div>
        </div>
    </div>
</div>
@endsection 


Now let’s see how to use Laravel Gate in the controller:

App\Http\Controllers\TutorialController.php

<?php

namespace App\Http\Controllers;

use Illuminate\Support\Facades\Gate;

class TutorialController extends Controller
{
    public function __invoke()
    {
        if (Gate::allows('isAdmin')) {
            dd('Admin allowed');
        }

        abort(403);
    }
}


We can also use the gate in our route file like below:

routes/web.php

<?php

use Illuminate\Support\Facades\Route;
use App\Http\Controllers\HomeController;

Route::get('/home', [HomeController::class, 'index'])->middleware('can:isAdmin')->name('home');


Step 6: Create Policy

Now time to talk about Laravel policy. There is an artisan command make:policy to create policy in Laravel. So now create a policy using the below command:

php artisan make:policy PostPolicy

//Yu may specify a --model when executing the command:

php artisan make:policy PostPolicy --model=Post

 

Step 7: Register Policies

In this step, we need to register our policy before using it. Register it in the following file:

app/Providers/AuthServiceProvider.php

<?php

namespace App\Providers;

use App\Models\Post;
use App\Policies\PostPolicy;
use Illuminate\Foundation\Support\Providers\AuthServiceProvider as ServiceProvider;

class AuthServiceProvider extends ServiceProvider
{
    protected $policies = [
        Post::class => PostPolicy::class,
    ];

    public function boot(): void
    {
        $this->registerPolicies();
    }
}


Once the policy has been registered, we may define methods for each action it authorizes in the PostPolicy class like below.

app/Policies/PostPolicy.php

<?php

namespace App\Policies;

use App\Models\Post;
use App\Models\User;
use Illuminate\Auth\Access\Response;

class PostPolicy
{
    /**
     * Determine whether the user can view any models.
     */
    public function viewAny(User $user): bool
    {
        //
    }

    /**
     * Determine whether the user can view the model.
     */
    public function view(User $user, Post $post): bool
    {
        //
    }

    /**
     * Determine whether the user can create models.
     */
    public function create(User $user): bool
    {
        //
    }

    /**
     * Determine whether the user can update the model.
     */
    public function update(User $user, Post $post): bool
    {
        //
    }

    /**
     * Determine whether the user can delete the model.
     */
    public function delete(User $user, Post $post): bool
    {
        //
    }

    /**
     * Determine whether the user can restore the model.
     */
    public function restore(User $user, Post $post): bool
    {
        //
    }

    /**
     * Determine whether the user can permanently delete the model.
     */
    public function forceDelete(User $user, Post $post): bool
    {
        //
    }
}


The above is the default policy scaffold. Now update the update method so that only the post’s owner can update it:

    /**
     * Determine whether the user can update the model.
     */
    public function update(User $user, Post $post): bool
    {
        return $user->id === $post->user_id;
    }


Now let’s see how to use this policy in our controller update method:

App\Http\Controllers\TutorialController.php

<?php

namespace App\Http\Controllers;

use App\Models\Post;
use Illuminate\Support\Facades\Gate;

class TutorialController extends Controller
{
    public function update(Post $post)
    {
        if (auth()->user()->can('update', $post)) {
            //user is authorized to perform this action
        }
        return view('welcome');
    }
}


Or you can authorize the user like this:

App\Http\Controllers\TutorialController.php

<?php

namespace App\Http\Controllers;

use App\Models\Post;
use Illuminate\Support\Facades\Gate;

class TutorialController extends Controller
{
    public function update(Post $post)
    {
        $this->authorize('update', $post);
    }
}


We can also use policy in Laravel routes like below:

routes/web.php

<?php

use Illuminate\Support\Facades\Route;
use App\Http\Controllers\TutorialController;

Route::get('post/{post}', TutorialController::class)->middleware('can:update,post');


Now, in the final part, we will see how to use a Laravel policy in a Blade file:

@can('update', $post)
    <!-- The current user can update the post... -->
@elsecan('create', App\Models\Post::class)
    <!-- The current user can create new posts... -->
@endcan

@cannot('update', $post)
    <!-- The current user cannot update the post... -->
@elsecannot('create', App\Models\Post::class)
    <!-- The current user cannot create new posts... -->
@endcannot


Read also: Laravel 10 PayPal Payment And Dive Into Details


Conclusion

Now you know how to use gates and policies in a Laravel application. I hope this tutorial helps you handle user authorization.

Laravel News Links

All the ways to handle null values in PHP

https://www.amitmerchant.com/cdn/all-the-ways-to-handle-null-values-in-php.png

Null is a special data type in PHP that represents a variable with no value. A variable is considered null if:

  • it has been assigned the constant null.
  • it has not been set to any value yet.
  • it has been unset().

In this article, we’ll go through all the ways you can handle null values in PHP.

The is_null() function

The is_null() function is used to check whether a variable is null or not. It returns true if the variable is null and false otherwise.

$foo = null;

if (is_null($foo)) {
    echo '$foo is null';
}

// $foo is null

The is_null() function cannot tell you whether a variable exists: calling it on an undefined variable raises a warning (while still returning true). So, use the isset() function to check whether a variable is set before passing it to is_null().
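A quick sketch of the difference between the two checks:

```php
<?php

$a = null;

// isset() treats a null variable the same as an undefined one,
// while is_null() specifically tests for the null value.
var_dump(isset($a));   // bool(false): null counts as "not set" for isset()
var_dump(is_null($a)); // bool(true)

$b = 0;
var_dump(isset($b));   // bool(true): 0 is set, merely falsy
var_dump(is_null($b)); // bool(false)
```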

The null coalescing operator

The null coalescing operator (??) is used to check whether a variable is null or not. It returns the value of the variable if it’s not null and returns the value of the second operand if the variable is null.

The operator was introduced in PHP 7.0.

$foo = null;

echo $foo ?? 'bar';
// bar

You can also chain the null coalescing operator to check multiple variables.

$foo = null;

echo $foo ?? $bar ?? 'baz';

// baz

The one gotcha here is that the null coalescing operator returns the value of the second operand even if the variable is not set or undefined, and it does so without raising a warning.

echo $bar ?? 'foo';

// foo

The null coalescing assignment operator

The null coalescing assignment operator (??=) is used to assign a value to a variable if it’s null. It assigns the value of the right-hand operand to the left-hand operand if the left-hand operand is null.

$foo = null;

$foo ??= 'bar';

echo $foo;
// bar

The nullsafe operator

The nullsafe operator (?->) safely calls a method or accesses a property on a variable only if it’s not null. It returns the result of the call if the variable is not null, and returns null otherwise instead of throwing an error.

The operator was introduced in PHP 8.0.

class User
{
    public function getName()
    {
        return 'John Doe';
    }
}

$user = null;
echo $user?->getName();
// null

$user = new User;
echo $user?->getName();
// John Doe
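
The operator also short-circuits an entire chain: as soon as one link is null, the whole expression evaluates to null (a contrived Profile example for illustration):

```php
class Profile
{
    public string $avatar = 'avatar.png';
}

class Account
{
    public ?Profile $profile = null;
}

$account = new Account;

var_dump($account->profile?->avatar); // NULL, the chain stops at the null profile

$account->profile = new Profile;
var_dump($account->profile?->avatar); // string(10) "avatar.png"
```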

The ternary operator

You can still use the good old ternary operator to fall back to a default. But keep in mind that it checks truthiness rather than null, so falsy values such as '' and 0 also fall through to the second branch, and it raises a warning on undefined variables. It’s also more verbose and less readable than the null coalescing operator.

$foo = null;

echo $foo ? $foo : 'bar';
// bar
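
The short ternary (?:) behaves the same way; a sketch contrasting it with ??:

```php
$name = '';

echo $name ?: 'anonymous'; // anonymous, because '' is falsy
echo $name ?? 'anonymous'; // prints the empty string, because '' is not null
```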

Nullable types

Sometimes, you might want to explicitly declare a function parameter or a return type to be nullable. You can do so by using the ? operator before the type declaration.

function foo(?string $bar): ?string
{
    return $bar;
}

echo foo(null);
// null

echo foo('bar');
// bar

This way, you can explicitly tell the function that the parameter $bar can be null and the function can return null as well.

Nullable types have existed since PHP 7.1.
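
Since PHP 8.0, the same thing can also be written as a union type; ?string and string|null are equivalent:

```php
function greet(string|null $name): string|null
{
    return $name === null ? null : "Hello, {$name}";
}

var_dump(greet(null));   // NULL
var_dump(greet('Amit')); // string(11) "Hello, Amit"
```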

Null as a standalone type

From PHP 8.2 onwards, you can use null as a standalone type.

This means you can declare a variable to be of type null and it will only accept null as a value.

Similarly, you can also set the return type of a function to be null and it will only return null as a value.

class Nil
{
    public null $nil = null;
 
    public function isNull(null $nil): null 
    { 
        return null;
    }
}

$nil = new Nil;

$nil->isNull(null);
// returns null

$nil->isNull('foo');
// PHP Fatal error:  Uncaught TypeError: Nil::isNull(): 
// Argument #1 ($nil) must be of type null, string given

One thing to keep in mind when type-hinting methods and properties with null: if you try to mark null itself as nullable (?null), it will result in a compile-time error, in line with PHP’s rules against redundant type declarations.

Laravel News Links

Unorthodox Eloquent

https://muhammedsari.me/open-graph/blog/unorthodox-eloquent.png

Eloquent is a razor-sharp tool that is adored by many. It allows you to carry out database operations with ease while maintaining an easy-to-use API. Implementing the Active Record (AR) pattern, as described by Fowler in PoEAA, it is one of the best AR implementations available today.

In this blog post, I’d like to go over a few tips and tricks I have learned along the way while experimenting with different options. For example, have you ever considered sharing your eager loads one way or another? No? Well, then I’m pretty sure you’ll learn at least a thing or two so make sure you stick around until the end!

Like every tool in existence, Eloquent comes with its own set of trade-offs. As responsible developers, we should always be aware of the things we are trading off against. If you’d like to learn more about AR and its design philosophy, I highly recommend this article by Shawn McCool.

Quick navigation

Tappable scopes

Traditionally, reusable query scopes have always been defined within the target model itself using the magical scopeXXX syntax, macros, or a dedicated Builder class. The problem with the first two approaches is that they both rely on opaque runtime magic, making it (almost) impossible to receive IDE assistance. Even worse, naming clashes can occur in the case of macro registrations. However, there is a fourth, and in my opinion superior, approach: tappable scopes. Tappable scopes are one of those hidden gems that are extremely valuable, but at the same time completely unknown to the masses because they are not documented anywhere.

Explained by example

Let’s take the following Controller method as an example:

public function index(): Collection
{
    return Company::query()
        ->oldest()
        ->whereNull('user_id')
        ->whereNull('verified_at')
        ->get();
}

We can see that it applies some conditions to the Builder instance, and returns the result as-is without any transformation. While this is a perfectly valid way to write a query, it leaks internals and the where clauses do not tell us anything about the domain language. Perhaps the requirement was: “Create an endpoint that returns the oldest orphaned & unverified companies”. Orphaned in this case means that a company was abandoned during registration and it is the notion that our domain experts use. Thus we can do much better:

public function index(): Collection
{
    return Company::query()
        ->oldest()
        ->tap(new Orphan())
        ->tap(new Unverified())
        ->get();
}

This almost reads like the requirement itself, right? Now, if we quickly take a gander at one of these tappable scopes:

final readonly class Orphan
{
    public function __invoke(Builder $builder): void
    {
        $builder->whereNull('user_id');
    }
}

That’s it! This simplicity allows us to compose queries in any way, shape or form we’d like and not be restricted to using a particular trait or something that pollutes our Eloquent models.

Now imagine that a new requirement comes in that demands a brand new endpoint that should list orphaned members that belong to a certain company. At this point we might start sweating because there is no commonality between a Company and a Member, but don’t fret! Tappable scopes to the rescue! Let’s just reuse the scope and call it a day:

public function index(Request $request): Collection
{
    return Member::query()
        ->whereBelongsTo($request->company)
        ->tap(new Orphan())
        ->get();
}

This is the power of tappable scopes. I think it should also be mentioned that this example requires both models to possess the concept of being “orphaned”, which is the state of abandonment after an initiated registration process, as signalled by a nullable user_id column. The registration is complete once a user is linked to all models. Needless to say, you can’t just go and use any scope with any model. It must be supported by the backing database table.

Specification pattern note

Have you ever heard about the Specification pattern and tried applying it dogmatically? As you may know, dogma is the root of all evil. This way of applying query constraints offers the best of many worlds.

Are you a package author?

Tappable scopes are especially useful for package authors who’d like to share reusable scopes alongside their packages. Let’s take laravel-adjacency-list as an example. scopeIsRoot could be refactored as follows:

final readonly class IsRoot
{
    public function __invoke(Builder $builder): void
    {
        $builder->whereNull('parent_id');
    }
}

This approach also solves the issue of clashing method and scope names, thanks to simply avoiding the ancient magics that are still available in the framework from the early days. All in all, the utilization of tappable scopes yields a net positive for 90% of use cases out there.

Not-so-global global scopes

I know that the title doesn’t make a lot of sense when read out of context, but please bear with me. Every now and then a couple of posts regarding global scopes appear on my X (formerly Twitter) timeline. The general sentiment is always along the lines of “global scopes are bad, local scopes are good“. The reason for this is that the Laravel documentation on global scopes does them a huge disservice by making it look like the documented way of applying global scopes is the only way, but nothing could be further from the truth.

A while ago, I was thinking about this common complaint when it struck me: what happens if you flout this convention? I took a gander at the global scopes API and quickly realized that it is in fact not a requirement to declare the scopes within your Eloquent models’ booted lifecycle method. As a matter of fact, there are no restrictions at all! They can be applied in a ServiceProvider, Middleware, Job, etc.; the possibilities are endless. The best use, however, is—in my opinion—when combined with middleware. So let’s take a look at an example.

Explained by example: country restriction

Imagine that you are working on an application like IMDb that has a front-facing public website and an internal administration panel. One of the requirements might be that certain movies should only be shown to users if the user’s country is available in a certain whitelist, otherwise it should be as if the movie doesn’t exist at all. Long story short, you have to partition the data based on the origin country. However, this restriction should only be applied to the public website, not the internal administration panel. An effortless way to implement this requirement is by leveraging not-so-global global scopes.

First, create your global scope like you always would:

final readonly class CountryRestrictionScope implements Scope
{
    public function __construct(private Country $country) {}

    public function apply(Builder $builder, Model $model): void
    {
        // pseudocode: do the actual country-based filtering here
        $builder->isAvailableIn($this->country);
    }
}

Next, create an HTTP middleware whose responsibility is going to be applying the scope to the relevant models:

final readonly class RestrictByCountry
{
    public const NAME = 'country.restrict';

    public function __construct(private Repository $geo) {}

    public function handle(Request $request, Closure $next): mixed
    {
        $scope = new CountryRestrictionScope($this->geo->get());

        Movie::addGlobalScope($scope);
        Rating::addGlobalScope($scope);
        Review::addGlobalScope($scope);

        return $next($request);
    }
}

Note: Repository in this example can be anything that returns the user’s country, like laravel-geo.

Finally, open up your web.php routes file and apply the middleware to the relevant group:

$router->group([
    'middleware' => ['web', RestrictByCountry::NAME],
], static function ($router) {
    $router->resource('movies', Site\MovieController::class);
    $router->resource('ratings', Site\RatingController::class);
    $router->resource('reviews', Site\ReviewController::class);
	
    // Front-facing public website routes...
});

$router->group([
    'middleware' => ['web', 'auth'],
    'prefix' => 'admin',
], static function ($router) {
    $router->resource('movies', Admin\MovieController::class);
    $router->resource('ratings', Admin\RatingController::class);
    $router->resource('reviews', Admin\ReviewController::class);
	
    // Admin routes...
});

Pay close attention to the fact that the middleware only gets applied to the public website routes. This has the following implications:

  • Whenever a user visits any of the public website routes, the content will be automatically filtered based on country. This might result in harmless 404 pages.
  • Whenever new routes need to be added due to new feature requests, the developers do not have to remember the fact that each model should be filtered based on the user’s country. This has been dealt with already and there’s no way this restriction can be bypassed unless done deliberately.
  • Whenever developers use a REPL like tinker, they won’t be caught off-guard because a hidden, nasty global scope was altering the queries. Remember, our global scopes are not-so-global.
  • Whenever an administrator visits the internal administration panel, they are always going to see all content regardless of their origin. This is exactly what we want.

To put it succinctly, if we embrace the global nature of global scopes while thinking about the precise placement and its area of effect, we can actually create joyful development experiences while eliminating potential frustrations in the future. It doesn’t have to be infuriating!

There is nothing stopping us from doing things like this:

final readonly class FileScope implements Scope
{
    public function __invoke(Builder $builder): void
    {
        $this->apply($builder, File::make());
    }

    /** @param File $model */
    public function apply(Builder $builder, Model $model): void
    {
        $builder
            ->where($model->qualifyColumn('model_type'), 'directory')
            ->where($model->qualifyColumn('collection_name'), 'file');
    }
}

__invoke is meant to make the Scope tappable, and apply is to adhere to the Scope contract which is a requirement for (not-so-global) global scopes.

  • You’d like to use the scope as a truly global scope in certain contexts? Check.
  • You’d like to apply the scope to certain queries using the tappable way? Also check.

Phantom properties

In a fairly recent project I worked on, I had to display a huge amount of markers on an interactive map like Google Maps, Leaflet or Mapbox. These interactive maps accept a list of geometry types according to the GeoJSON spec. A Point type, which is exactly what I needed, must provide a coordinates property with a tuple as its value representing (lon,lat) respectively. The problem here is that coordinates represents a composite value, whereas the data is flattened as addresses:id,latitude,longitude in the database. The table was designed that way because of the chosen administration panel: Laravel Nova. It’s much easier to handle record creations in Nova if you keep the structure as flat as possible. I could have simply dealt with this problem in an Eloquent Resource (aka a transformer), but the curious programmer in me told me that there ought to be a better way. The inner me was definitely right: there is a better way thanks to—what I call—Phantom properties.

Explained by example: Coordinates

To solve the problem at hand, we first need to create the ValueObject that will represent address Coordinates:

final readonly class Coordinates implements JsonSerializable
{
    private const LATITUDE_MIN  = -90;
    private const LATITUDE_MAX = 90;
    private const LONGITUDE_MIN = -180;
    private const LONGITUDE_MAX = 180;

    public float $latitude;

    public float $longitude;

    public function __construct(float $latitude, float $longitude)
    {
        Assert::greaterThanEq($latitude, self::LATITUDE_MIN);
        Assert::lessThanEq($latitude, self::LATITUDE_MAX);
        Assert::greaterThanEq($longitude, self::LONGITUDE_MIN);
        Assert::lessThanEq($longitude, self::LONGITUDE_MAX);

        $this->latitude  = $latitude;
        $this->longitude = $longitude;
    }

    public function jsonSerialize(): array
    {
        // GeoJSON positions are ordered [longitude, latitude].
        return [$this->longitude, $this->latitude];
    }

    public function __toString(): string
    {
        return "({$this->latitude},{$this->longitude})";
    }
}

Next, we should define our AsCoordinates object cast:

final readonly class AsCoordinates implements CastsAttributes
{
    public function get(
        $model, 
        string $key,
        $value, 
        array $attributes,
    ): Coordinates {
        return new Coordinates(
            (float) $attributes['latitude'], 
            (float) $attributes['longitude'],
        );
    }

    public function set(
        $model, 
        string $key,
        $value, 
        array $attributes,
    ): array {
        return $value instanceof Coordinates ? [
            'latitude' => $value->latitude,
            'longitude' => $value->longitude,
        ] : throw new InvalidArgumentException('Invalid value.');
    }
}

Finally, we should assign it as a cast in our Address model:

final class Address extends Model
{
    protected $casts = [
        'coordinates' => AsCoordinates::class,
    ];
}

Now, we can just simply use it in our Eloquent Resource:

/** @mixin \App\Models\Address */
final class FeatureResource extends JsonResource
{
    public function toArray($request): array
    {
        return [
            'geometry' => [
                'type' => 'Point',
                'coordinates' => $this->coordinates,
            ],
            'properties' => $this->only('name', 'phone'),
            'type' => 'Feature',
        ];
    }
}
  • Coordinates is now fully responsible for the concept of Coordinates (representing a two-dimensional point on earth).
  • We don’t have to call jsonSerialize() by ourselves because of the implemented interface. It’ll be taken care of for us due to Laravel calling json_encode somewhere in the call stack.
  • If something were to change about Coordinates, it’ll be trivial to find where the concept of Coordinates is being used.

This is what we ended up with as expected:

{
    "geometry": {
        "type": "Point",
        "coordinates": [4.5, 51.5]
    },
    "properties": {
        "name": "Acme Ltd.",
        "phone": "123 456 789 0"
    },
    "type": "Feature"
}

Explained by another example: rendering address lines

Another handy way of using phantom properties is to help you with template rendering. Normally, if you wanted to render the address as HTML, you’d have to do something like this:

<address>
  <span>{{ $address->line_one }}</span>
  @if($two = $address->line_two)<span>{{ $two }}</span>@endif
  @if($three = $address->line_three)<span>{{ $three }}</span>@endif
</address>

As you can see, it can get out of hand really quickly. While I do recognize this is a rather contrived example, as addresses are generally rendered differently based on country, it helps with painting the picture that it can become a mess rather fast. What if we could do something like this:

<address>
  @foreach($address->lines as $line)
  <span>{{ $line }}</span>
  @endforeach
</address>

Much nicer, right? Our template is no longer concerned with how complex it can suddenly become due to different rules in different nations. It does what it does best: rendering. The phantom property responsible for this could look as follows:

final readonly class AsLines implements CastsAttributes
{
    public function get(
        $model, 
        string $key,
        $value, 
        array $attributes,
    ): array {
        return array_filter([
            $attributes['line_one'],
            $attributes['line_two'],
            $attributes['line_three'],
        ]);
    }

    public function set(
        $model, 
        string $key,
        $value, 
        array $attributes,
    ): never {
        throw new RuntimeException('Set the lines explicitly.');
    }
}

If we had to swap line_two and line_three for Antarctica for example, we can do the adjustment in AsLines and won’t have to adjust the blade template at all. Thinking out-of-the-box can greatly simplify how we can render UIs and prevent us from creating overly smart ones which is generally frowned upon and considered an anti-pattern.
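
The core trick in AsLines is plain array_filter(), which, with no callback, drops null and other falsy entries; a framework-free sketch of what the cast’s get() does:

```php
$attributes = [
    'line_one' => '221B Baker Street',
    'line_two' => null,
    'line_three' => 'London',
];

$lines = array_filter([
    $attributes['line_one'],
    $attributes['line_two'],
    $attributes['line_three'],
]);

// Keys are preserved: [0 => '221B Baker Street', 2 => 'London'],
// which is harmless in a @foreach but worth knowing about.
```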

A note on what’s available in the docs

These properties do not directly map to a single database column, and are comparable to the Accessors & Mutators combination. The documentation calls it Value Object Casting, but I think that’s heavily misleading, as it’s not a requirement to cast to a ValueObject with this approach. Besides the examples I gave above, another use case might be the generation of product numbers that are comprised of segments. You’d want to generate and persist a value like CA‑01‑00001 but actually save it in three distinct columns (number_country, number_department, number_sequence) for much easier querying:

final readonly class AsProductNumber implements CastsAttributes
{
    public function get(
        $model, 
        string $key,
        $value, 
        array $attributes
    ): string {
        return implode('-', [
            $attributes['number_country'],
            Str::padLeft($attributes['number_department'], 2, '0'),
            Str::padLeft($attributes['number_sequence'], 5, '0'),
        ]);
    }

    public function set(
        $model, 
        string $key,
        $value, 
        array $attributes,
    ): array {
        [$country, $dept, $sequence] = explode('-', $value, 3);

        return [
            'number_country' => $country,
            'number_department' => (int) $dept,
            'number_sequence' => (int) $sequence,
        ];
    }
}

Needless to say, you should also create a unique, composite index that spans these three columns. The phantom property responsible for this creates the composed string value CA‑01‑00001 and not a ValueObject, hence why the name in the docs is misleading.
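
The split-and-compose round trip itself is plain PHP; a sketch using str_pad in place of Laravel’s Str::padLeft:

```php
// Decompose the composed value into its three column parts.
[$country, $dept, $sequence] = explode('-', 'CA-01-00001', 3);

// ...and compose it back, zero-padding the numeric segments.
$composed = implode('-', [
    $country,
    str_pad((string) (int) $dept, 2, '0', STR_PAD_LEFT),
    str_pad((string) (int) $sequence, 5, '0', STR_PAD_LEFT),
]);

echo $composed; // CA-01-00001
```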

Fluent query objects

I already mentioned custom Builder classes in the first section on tappable scopes. While they’re a good first step towards more readable and maintainable queries, I think they quickly fall apart when a lot of custom constraints need to be added: they just tend to become another type of God object. It’s also a lot of hassle to get the IDE to the point where it starts helping you with suggestions. You could also create a dedicated Repository for your models, but I dislike that option heavily. You see, to my eye, a Repository and Eloquent are mutually exclusive. (Before you take out the pitchforks: I know that’s not strictly true. However, if you know why Active Record exists and why a Repository exists, then you’ll understand where I’m coming from.) You can read more on that here.

Another alternative is the utilization of what is known as a QueryObject. It is an object that’s responsible for the composition and execution of a single query kind. While this doesn’t fully conform to the definition of Martin Fowler in PoEAA, it is close enough and I think that borrowing the notion for this particular purpose is fine. If you have a Laracasts subscription, you might have already seen the lesson that’s available on this subject. Even though the philosophy and the way of thinking are identical, I’d like to present an alternate API that’s much, much nicer to use: fluent query objects.

Explained by example: notification center

Imagine that we have an SPA, powered by an HTTP JSON API, that has a notification bell at the top. The back-end exposes an endpoint that we can use to retrieve unread notifications of the logged in user. The Controller method responsible for retrieving unread notifications might look as follows:

public function index(Request $request): AnonymousResourceCollection
{
    $notifications = Notification::query()
        ->whereBelongsTo($request->user())
        ->latest()
        ->whereNull('read_at')
        ->get();

    return NotificationResource::collection($notifications);
}

This was all straightforward until a new feature request came in that required us to create a dedicated page to manage all of the notifications: read, unread, type of notification, etc. To make the lives of our front-end developers easier, we decided to create a dedicated endpoint per view type. One of them, which is responsible for retrieving read notifications, might look as follows:

public function index(Request $request): AnonymousResourceCollection
{
    $notifications = Notification::with('notifiable')
        ->whereBelongsTo($request->user())
        ->latest()
        ->whereNotNull('read_at')
        ->get();

    return NotificationResource::collection($notifications);
}

Eagle-eyed readers may already have noticed that everything in this snippet is the same as the previous one except the whereNotNull clause and the eager loading of the notifiable relation. Now we have to repeat this process for the other types as well:

public function index(Request $request): AnonymousResourceCollection
{
    $notifications = Notification::query()
        ->whereBelongsTo($request->user())
        ->latest()
        ->where('data->type', '=', $request->type)
        ->get();

    return NotificationResource::collection($notifications);
}

I think you get the gist of it. This is too much repetition, and something must be done about it. Enter fluent query objects.

First, we’re going to create the query class that will be responsible for “getting my notifications”:

final readonly class GetMyNotifications
{
}

Next, we’re going to move the base query (the conditions that’ll need to be applied at all times) to the constructor of our brand new, shiny object:

final readonly class GetMyNotifications
{
    private Builder $builder;

    private function __construct(User $user)
    {
        $this->builder = Notification::query()
            ->whereBelongsTo($user)
            ->latest();
    }

    public static function query(User $user): self
    {
        return new self($user);
    }
}

Now, we need to tap into the prowess of composition by utilizing the ForwardsCalls trait:

/** @mixin \Illuminate\Database\Eloquent\Builder */
final readonly class GetMyNotifications
{
    use ForwardsCalls;

    // omitted for brevity

    public function __call(string $name, array $arguments): mixed
    {
        return $this->forwardDecoratedCallTo(
            $this->builder, 
            $name, 
            $arguments,
        );
    }
}

Observations:

  • ForwardsCalls allows us to treat the class as if it’s a part of the “base class” \Illuminate\Database\Eloquent\Builder even though there is no inheritance in place. I love composition.
  • The @mixin annotation will help the IDE to provide us with useful autocompletion suggestions.
  • You could also choose to add Conditionable to have an even more fluent API, but it’s not really necessary here due to our design choice (a distinct endpoint per view type).
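
The decorated forwarding itself is a plain PHP trick: unknown methods are proxied to an inner object via __call, and whenever the inner call is fluent (returns the inner object), the wrapper hands back itself instead, which is essentially what forwardDecoratedCallTo does. A framework-free sketch with hypothetical classes:

```php
// Stand-in for the wrapped builder; records calls and is fluent.
final class InnerQuery
{
    public array $calls = [];

    public function latest(): static
    {
        $this->calls[] = 'latest';
        return $this;
    }
}

final class Wrapper
{
    public function __construct(private InnerQuery $inner) {}

    public function __call(string $name, array $arguments): mixed
    {
        $result = $this->inner->{$name}(...$arguments);

        // Preserve fluency: swap the inner object back for the wrapper,
        // so chained calls keep hitting the wrapper's own methods too.
        return $result === $this->inner ? $this : $result;
    }
}

$wrapper = new Wrapper(new InnerQuery);

var_dump($wrapper->latest() instanceof Wrapper); // bool(true)
```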

The only things remaining now are the custom query constraints, so let’s add them:

/** @mixin \Illuminate\Database\Eloquent\Builder */
final readonly class GetMyNotifications
{
    // omitted for brevity
	
    public function ofType(NotificationType ...$types): self
    {
        return $this->whereIn('data->type', $types);
    }

    public function read(): self
    {
        return $this->whereNotNull('read_at');
    }

    public function unread(): self
    {
        return $this->whereNull('read_at');
    }
	
    // omitted for brevity
}

Awesome. We now have everything that’s needed to properly implement the feature “get my notifications” aka the notification center. So with everything stitched together:

/** @mixin \Illuminate\Database\Eloquent\Builder */
final readonly class GetMyNotifications
{
    use ForwardsCalls;

    private Builder $builder;

    private function __construct(User $user)
    {
        $this->builder = Notification::query()
            ->whereBelongsTo($user)
            ->latest();
    }

    public static function query(User $user): self
    {
        return new self($user);
    }

    public function ofType(NotificationType ...$types): self
    {
        return $this->whereIn('data->type', $types);
    }

    public function read(): self
    {
        return $this->whereNotNull('read_at');
    }

    public function unread(): self
    {
        return $this->whereNull('read_at');
    }

    public function __call(string $name, array $arguments): mixed
    {
        return $this->forwardDecoratedCallTo(
            $this->builder, 
            $name, 
            $arguments,
        );
    }
}

Let’s refactor one of the previous Controllers and see how it looks:

public function index(Request $request): AnonymousResourceCollection
{
    $notifications = GetMyNotifications::query($request->user())
        ->read()
        ->with('notifiable')
        ->get();

    return NotificationResource::collection($notifications);
}

I don’t know about you, but this piece of elegant code is just beautiful to look at. You can keep using all of the regular Illuminate\Database\Eloquent\Builder methods while also having the ability to call enhanced, specific methods dedicated to this distinct query alone. Takeaways:

  • Key concepts are encapsulated behind meaningful interfaces.
  • Adheres to SRP
  • Groks the framework and its tools instead of fighting against it like using a Repository
  • Easily reusable in multiple places
  • Easily testable

There is nothing stopping us from doing things like this:

final readonly class GetMyNotifications
{
    // omitted for brevity
	
    public function ofType(NotificationType ...$types): self
    {
        return $this->tap(new InType(...$types));
    }

    public function read(): self
    {
        return $this->tap(new Read());
    }

    public function unread(): self
    {
        return $this->tap(new Unread());
    }
	
    // omitted for brevity
}

Perhaps we needed to create these scopes for a not-so-global use case. It makes a lot of sense to reuse them here in order to maintain consistency. Your imagination is the only limiting factor here.

Note on using a Pipeline

There are plenty of tutorials on YouTube that show how we could be using a Pipeline to logically split a chain of operations, or do some complex filtering. Some readers might think that it’s a waste of time to deal with your own QueryObject. The thing is, though, I don’t believe that a Pipeline and a QueryObject are mutually exclusive. They can be complementary to each other and help one another complete the task more efficiently. Instead of type-hinting Builder in the pipes, we can type-hint our custom QueryObjects. Essentially, we’re building our own laravel-query-builder, but with a more specific API.

The Pipeline could look as follows:

$orders = Pipeline::send(
    GetMyOrders::query($request->user())
)->through([
    Filter\Cancelled::class,
    Filter\Delayed::class,
    Filter\Shipped::class,
])->thenReturn()->get();

A Pipe could look as follows:

final readonly class Cancelled
{
    public function __construct(private Request $request) {}

    public function handle(GetMyOrders $query, Closure $next): mixed
    {
        if ($this->request->boolean('cancelled')) {
            $query->cancelled();
        }

        return $next($query);
    }
}

There’s nothing wrong with amalgamating different concepts to reach your end goal. Just make sure it makes sense for your current context and that you’re not introducing accidental complexity.

Sharing eager loads

This is a concise but, for me at least, indispensable one. At some point you probably found yourself wondering how you could share eager loads, especially those that apply some additional refining, but nevertheless ended up just copy-pasting the relevant code. While copy-paste is a perfectly valid option, there are in fact better ways to solve this issue. Repeating these kinds of eager loads quickly becomes cumbersome precisely because of the additional query constraints they apply. This could be the case when using Spatie’s phantasmagorical package laravel-medialibrary, for example.

Imagine that you have 10 different models. Each model defines multiple, distinct MediaCollections and each model also defines a thumbnail for display purposes on the storefront. For various reasons, Controller code etc. cannot be shared (you shouldn’t anyway). The package works with one big media relation that loads in all attached Media objects and uses Collection magic in the background in order to partition them. Eager loading the entire media relation can quickly become a problem on an index page that lists a model with tons of MediaCollections. After all, the only thing we need on an index page is the model’s thumbnail. In order to solve this problem, we can apply a query constraint like so:

public function index(): View
{
    $products = Product::with([
        'categories',
        'media' => static function (MorphMany $query) {
            $query->where('collection_name', 'thumbnail');
        },
        'variant.media' => static function (MorphMany $query) {
            $query->where('collection_name', 'thumbnail');
        },
    ])->tap(new Available())->get();

    return $this->view->make('products.index', compact('products'));
}

While this does solve the problem of overfetching, it is not pretty to look at. Now repeat this 9 more times. Yuck! Actually, it is very straightforward to tackle this problem properly. First, think about what you want to eager load. Got it? LoadThumbnail. Then, create a class that represents this constraint:

final readonly class LoadThumbnail implements Arrayable
{
    public function __invoke(MorphMany $query): void
    {
        $query->where('collection_name', 'thumbnail');
    }

    public function toArray(): array
    {
        return ['media' => $this];
    }
}

Now simply use it:

public function index(): View
{
    $products = Product::with([
        'categories',
        'media' => new LoadThumbnail(),
        'variant.media' => new LoadThumbnail(),
    ])->tap(new Available())->get();

    return $this->view->make('products.index', compact('products'));
}

Amazing, right? You may have also noticed the toArray method at the bottom. It comes in handy if you’d like to define eager loads one relation at a time with consecutive with((new LoadThumbnail)->toArray()) calls. This technique is so simple to execute it’s almost unfair. Please don’t overfetch; make sure minimal data is returned over the wire from the database. Laziness is no excuse!
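To make the toArray route concrete, here is a minimal sketch of what chaining constraint objects one relation at a time could look like (the Available tap and the model names are carried over from the earlier examples):

```php
// Hypothetical controller snippet: each Arrayable constraint object
// contributes its own ['relation' => $constraint] entry to with().
$products = Product::query()
    ->with((new LoadThumbnail())->toArray())
    ->with('categories')
    ->tap(new Available())
    ->get();
```

Because with() accepts an array, the constraint object decides for itself which relation it constrains, so call sites never need to repeat the relation name.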

Invokable accessors

We’ve already talked about techniques like Phantom properties. If you haven’t read that section yet, please read it first and come back to this one. Anyway, the biggest flaws with Phantom properties are that they require us to define a set (inbound) cast even if we never use it, like the $address->lines example, and that they have no mechanism that automatically memoizes the computed results. It’s a bummer that there’s no CastsOutboundAttributes, but that’s where invokable accessors shine. The main benefits are:

  • Memoization
  • No model clutter
  • Testable units
  • Composable, like any other object

An example definition (Attribute::get is real, by the way):

final class File extends Model
{
    protected function stream(): Attribute
    {
        return Attribute::get(new StreamableUrl($this));
    }
}

That’s all it takes to define an invokable accessor. Take note of the constructor argument, because that’s a recurring pattern with invokable accessors. It’s necessary to gain access to the model that’s being used; otherwise we won’t be able to gather the contextual information necessary to carry out our tasks. In this example, StreamableUrl is responsible for—you guessed it—generating streamable URLs. We could have inlined the logic the classic Closure way, but that would start filling up our model rather quickly. The actual model this snippet comes from has fourteen other accessors (!). A sneak peek at this particular invokable accessor:

final readonly class StreamableUrl
{
    private const S3_PROTOCOL = 's3://';

    public function __construct(private File $file) {}

    public function __invoke(): string
    {
        $basePath = UuidPathGenerator::getInstance()
            ->getPath($this->file);

        if ($this->file->supports('s3')) {
            return self::S3_PROTOCOL 
                . $this->file->bucket 
                . DIRECTORY_SEPARATOR 
                . $basePath 
                . $this->file->basename;
        }

        return Storage::disk($this->file->disk)
            ->url($basePath . rawurlencode($this->file->basename));
    }
}

The exact details are not that important, but the takeaway is that it properly encapsulates the logic for generating optimized streamable URLs. Returning direct s3:// paths is much more efficient for streaming files from S3.

The main point is: imagine if we had defined this code inside a traditional Closure within the accessor on the model itself, and done something similar for the other thirteen accessors. Our model would have become overstuffed very quickly. Invokable accessors let us extract such code and keep our models clean and tidy.

Multiple read models for the same table

This is one of those rare moments where I actually appreciated the limitations of Laravel Nova, which is normally notoriously difficult to customize compared to other administration panel solutions. It allowed me to discover a novel use case for read-only models.

Recently, a feature request came in that required us to create a fully-fledged file explorer in order to share files with third parties and customers. Rolling our own file management solution was out of the question because 1. it is a solved problem and 2. it is a complex and edge-case-ridden problem. We decided to go with laravel-medialibrary (as any other sane person would, thanks Spatie!), but there was a huge hurdle to overcome. We had to create a UX-friendly interface in Nova under the Directory resource, which would house the Files belonging to that particular Directory, and it had to be sortable. While the default Media model did its job well, it was incompatible with Nova’s most popular sorting library (also from Spatie). We had to come up with an original solution. That’s when it suddenly struck me to create a read-only model for the media table and put the theory to the test:

final class File extends Model implements Sortable
{
    use SortableTrait;

    public array $sortable = [
        'order_column_name' => 'order_column',
    ];

    protected $table = 'media';

    public function buildSortQuery(): Builder
    {
        return $this->newQuery()->where($this->only('model_id'));
    }
}

While this was already looking promising, there was another hurdle to overcome: this model could be used to query anything in the media table, which could have resulted in unexpected data loss. This was not acceptable, of course, because this model specifically represents a File, which is actually a Media object that fulfills two criteria:

  • It must belong to the model Directory
  • The collection_name must be file

I decided to create a true global scope and register it in a ServiceProvider to enforce these rules at all times (another rare moment where a truly global scope actually makes sense):

final class FileScope implements Scope
{
    /** @param File $model */
    public function apply(Builder $builder, Model $model): void
    {
        $builder
            ->where($model->qualifyColumn('model_type'), 'directory')
            ->where($model->qualifyColumn('collection_name'), 'file');
    }
}
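The post mentions registering this scope in a ServiceProvider. A minimal sketch of what that registration could look like (the provider class name is hypothetical; addGlobalScope is the standard Eloquent mechanism):

```php
// Hypothetical provider: attaching the scope here, rather than in the
// model's booted() hook, keeps the read-only model itself untouched.
final class FileServiceProvider extends ServiceProvider
{
    public function boot(): void
    {
        // Every query built through File now carries FileScope's
        // model_type and collection_name constraints automatically.
        File::addGlobalScope(new FileScope());
    }
}
```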

Models that did not fit these criteria were no longer returned; queries for them started throwing ModelNotFoundExceptions instead. This was exactly what we wanted. Perfect, but we couldn’t declare victory just yet. The Nova interface required a bunch of information that was simply not possible to extract from the default Media model. But then it struck me again: since this was our custom model, I could do whatever I wanted! I could even declare it as a relation in the Directory model:

public function files(): HasMany
{
    return $this->hasMany(File::class, 'model_id')->ordered();
}

Did you notice something “weird”? No? Take a look at the relationship type. If you know how MediaLibrary works, you’d know that the media table actually uses a MorphMany relationship. But since we defined a global FileScope that always refines queries on model_type, we can simply use the HasMany relation type and everything just works. This is when my mind was blown. Calling $directory->files would now return a Collection of File objects, not Media objects. Long story short, File now possessed everything needed to serve a FileSharing context. We didn’t need to alter any configuration or anything else—nothing. Just some cleverness and a novel approach. The end result was pure excellence.

For example, I could also add a bunch of (invokable) accessors to fulfill the UI needs:

// other accessors omitted, there are simply too many

protected function realpath(): Attribute
{
    return Attribute::get(new Realpath($this));
}

protected function stream(): Attribute
{
    return Attribute::get(new StreamableUrl($this));
}

protected function extension(): Attribute
{
    return Attribute::get(fn () => $this->type->extension());
}

protected function type(): Attribute
{
    return Attribute::get(fn () => FileType::from($this->mime));
}

Takeaways

  • You should use a read-only model when things get complex on the UI side.
  • Global scopes are not always bad.
  • These models allow for fine-tuning according to the use cases that they need to support.
  • This approach can also be used if a package does not allow you to override the “base model” that it uses. Simply create your own model that references the package’s table and start solving problems.

WithoutRelations for queue performance

Last but not least, I’d like to talk about the mysterious WithoutRelations attribute and the withoutRelations method. Avid and eagle-eyed source divers of Laravel packages may have already noticed some usage while browsing through the source code. In fact, it is used in a Livewire component by Laravel Jetstream. The reason it’s used there, though, is to prevent too much information from leaking to the client side, which—while completely valid—is not the use case I’d like to talk about.

As you may already know, you should use the SerializesModels trait if you’d like to enqueue a Job that houses Eloquent models. (Its purpose is briefly described in the documentation, so I’m not going to repeat that.) But there is a catch that a lot of developers are not aware of: SerializesModels also remembers which relationships were loaded at serialization time and uses that information to reload all of them when the models are deserialized. An example payload:

{
    "user": {
        "class": "App\\Models\\User",
        "id": 10269,
        "relations": ["company", "orders", "likes"],
        "connection": "mysql",
        "collectionClass": null
    }
}

As you can see, the relations property contains three relationships. These will be eagerly loaded upon the deserialization of this Job. Relationships like likes and orders can potentially pull in hundreds or even thousands of records, hurting the Job’s performance tremendously. Even worse, the Job from which I grabbed this snapshot didn’t even need any of these relations to carry out its main task.
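Conceptually, restoring such a payload behaves roughly like the following sketch (simplified; the real trait also handles missing models, custom collections, and the stored connection):

```php
// Simplified sketch of deserialization: re-fetch the model by its
// key on the remembered connection, then reload every remembered
// relation. The load() call is the expensive part.
$user = User::on('mysql')->findOrFail(10269);
$user->load(['company', 'orders', 'likes']);
```

Seen this way, it is obvious why an empty relations array makes the Job cheaper: the load() step simply disappears.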

Method option

An easy way to fix this problem is by—you guessed it—using the withoutRelations method before passing your Eloquent models to the Job’s constructor. An example:

final class AccessIdentitySubscriber
{
    public function subscribe(Dispatcher $events): void
    {
        $events->listen(
            Registered::class, 
            $this->whenRegistered(...),
        );
    }

    private function whenRegistered(Registered $event): void
    {
        CreateProspect::dispatch($event->user->withoutRelations());
    }
}

This event subscriber is responsible for creating a new prospect in a separate CRM system whenever a new user registers in our application. Before dispatching CreateProspect, withoutRelations is called to make sure no useless relationships are serialized beyond this point, ensuring optimal performance. If we now inspect the serialized payload, we can see that the array has been emptied:

{
    "user": {
        "class": "App\\Models\\User",
        "id": 10269,
        "relations": [],
        "connection": "mysql",
        "collectionClass": null
    }
}

Perfect.

Attribute option

While preparing this blog post, I realized that a fellow Laravel developer contributed a brand new #[WithoutRelations] attribute that automatically takes care of stripping away all model relations upon Job serialization:

#[WithoutRelations]
final class CreateProspect implements ShouldQueue
{
    use Dispatchable;
    use InteractsWithQueue;
    use SerializesModels;

    public function __construct(private User $user) {}

    public function handle(Gateway $crm): void
    {
        // omitted for brevity
    }
}

This will definitely be my new default way of defining Jobs. I don’t know about you, but I have had zero use cases where I said to myself, “Darn it, I should have left the relations alone.” In my experience, this behavior introduces more hidden bugs than anything else. Most of the time, lazy loading does the job just fine. Remember, there are no bad tools. There are only bad tools within a particular context. That’s why I’m not a huge fan of the newish Model::preventLazyLoading feature. Sorry, namesake.
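For readers unfamiliar with the feature being criticized here, Model::preventLazyLoading is a one-line toggle, conventionally placed in a service provider’s boot method:

```php
// Laravel's strict-mode toggle: when enabled, any lazy-loaded
// relationship access throws a LazyLoadingViolationException
// instead of silently running an extra query. Commonly enabled
// everywhere except production.
Model::preventLazyLoading(! app()->isProduction());
```

Whether that trade-off is worth it is, as the paragraph above argues, a matter of context.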

Wrap‑up

At this point my fingers are numb, but I think it was worth it. Curiosity makes you a better programmer, so get out of tutorial hell and start experimenting. Trust me, the worst thing that can happen is you learning. And please… don’t forget to read up on the trade‑offs of Active Record. The worst thing that can happen is—again—you learning.

Join the discussion on X (formerly Twitter)! I’d love to know what you thought about this blog post.

Thanks for reading!

Laravel News Links

iPhone 15 Pro Max versus iPhone 12 Pro Max — Specs and features, compared

https://photos5.appleinsider.com/gallery/56413-114570-ippm15v12-xl.jpg


Most iPhone owners don’t upgrade every year. If history is any indication, there will be a lot of iPhone 12 Pro Max owners looking to upgrade to the iPhone 15 Pro Max. Here’s what has changed in the top model over three years.

The amount of time an iPhone owner waits between upgrades is lengthening, with surveys pointing to a three-year wait between purchases. That’s up from the previous typical upgrade cycle of every two years.

Under a three-year cycle, that would mean owners of the iPhone 12 collection will be seeking out replacement devices. For the iPhone 12 Pro Max, the obvious upgrade is to the modern-day equivalent, the iPhone 15 Pro Max.

A lot of change can happen over three years. Here’s what those iPhone 12 Pro Max owners will discover when they compare their daily driver against Apple’s top-of-the-range option.

iPhone 15 Pro Max vs iPhone 12 Pro Max – Specifications

Specification | iPhone 12 Pro Max | iPhone 15 Pro Max
Dimensions (inches) | 6.33 x 3.07 x 0.29 | 6.29 x 3.02 x 0.32
Weight (ounces) | 8.03 | 7.81
Processor | A14 Bionic | A17 Pro
Storage | 128GB, 256GB, 512GB | 256GB, 512GB, 1TB
Display type | 6.7-inch Super Retina XDR | 6.7-inch Super Retina XDR, ProMotion, always-on display
Resolution | 2,778 x 1,284 at 458ppi | 2,796 x 1,290 at 460ppi
True Tone | Yes | Yes
Biometrics | Face ID | Face ID
Connectivity | 5G (Sub-6GHz and mmWave), Gigabit-class LTE, Wi-Fi 6, Bluetooth 5.0, Ultra Wideband, NFC, Lightning | 5G (Sub-6GHz and mmWave), Gigabit-class LTE, Wi-Fi 6E, Bluetooth 5.3, Ultra Wideband Gen 2, NFC, Emergency SOS via Satellite, Roadside Assistance via Satellite, USB-C
Rear Cameras | 12MP Wide, 12MP Ultra Wide, 12MP Telephoto with 2.5x optical zoom | 48MP Wide, 12MP Ultra Wide, 12MP Telephoto with 5x optical zoom
Video | 4K 60fps, 4K 60fps HDR with Dolby Vision, 1080p 240fps Slo-Mo | 4K 60fps, 4K 60fps HDR with Dolby Vision, 1080p 240fps Slo-Mo, ProRes 4K 60fps with external recording, Cinematic Mode, Action Mode
Front Camera | 12MP TrueDepth with Autofocus | 12MP TrueDepth with Autofocus
Battery (video playback) | Up to 20 hours | Up to 29 hours
Colors | Pacific Blue, Gold, Graphite, Silver | Black Titanium, White Titanium, Blue Titanium, Natural Titanium

iPhone 15 Pro Max vs iPhone 12 Pro Max – Physical Dimensions

The first thing that should strike the iPhone 12 Pro Max owner is that there’s not really that much difference in the physical nature of the two devices. They are similarly styled, with a glass sandwich around a central metal chassis, and a glass camera bump with the lenses all in the same place.

But even here, there are subtle changes.

For a start, the materials used to make the smartphones have changed, with the 12 using a stainless steel chassis while the 15 has a titanium one. There’s also the Ring/Silent switch on the side of the iPhone 12 Pro Max, which has been swapped for the multi-functional Action Button on the iPhone 15 Pro Max.

At the bottom, the Lightning port from the 12 has been switched out for the 15’s USB-C.

Looking at the dimensions, the iPhone 12 Pro Max has a larger footprint than the iPhone 15 Pro Max, at 6.33 inches tall by 3.07 inches wide versus 6.29 inches by 3.02 inches. Apple did say that it had worked on shrinking the chassis of the iPhone 15 Pro Max without cutting the screen size down.

The thickness is also on the iPhone 12 Pro Max’s side, with it being a svelte 0.29 inches thin, against the latest model’s 0.32-inch thickness.

The Titanium comes into play with weight, with the older model at 8.03 ounces to the modern 7.81 ounces.

These are all small externally-visible changes, but it’s the components that count more.

iPhone 15 Pro Max vs iPhone 12 Pro Max – Displays

Apple hasn’t changed the physical size of the 6.7-inch Super Retina XDR display over the years, but it has made some changes to the component.

Firstly, there’s a tiny bit more resolution on the latest edition, at 2,796 by 1,290 pixels versus 2,778 by 1,284 pixels on the iPhone 12 Pro Max. That’s a change from a pixel density of 458 pixels per inch to 460ppi.

iPhone 15 Pro Max vs iPhone 12 Pro Max – notch versus Dynamic Island

Another missing feature on the older model is the Dynamic Island, Apple’s answer to the ever-complained-about notch that sits at the top of the three-year-old smartphone. That’s before you bring in ProMotion, which Apple introduced in the iPhone 13 Pro models, and continues to use in its latest Pro releases.

The iPhone 15 Pro Max is also equipped with an always-on display, which the iPhone 12 Pro Max simply doesn’t have.

Other elements continue to be present in both models, such as HDR support, Wide Color (P3), Haptic Touch, True Tone, and a 2 million to one contrast ratio.

Then there’s brightness. The iPhone 12 Pro manages 800 nits of typical brightness, and 1,200 nits for peak brightness for HDR content.

These are high figures, but the iPhone 15 Pro Max now handles 1,000 nits of max brightness for typical usage and 1,600 nits of peak HDR brightness. It even goes up to 2,000 nits of peak brightness for outdoor usage.

Suffice it to say, Apple did a lot with the display while still keeping it looking similar to its older counterpart.

iPhone 15 Pro Max vs iPhone 12 Pro Max – Cameras

The camera arrangement hasn’t changed for the Pro Max line in three years. But the cameras certainly have.

The iPhone 12 Pro Max has three LiDAR-assisted 12-megapixel cameras, with an f/1.6 aperture Main, f/2.4 aperture Ultra Wide, and an f/2.2 aperture Telephoto. It also used Sensor-Shift optical image stabilization.

The iPhone 15 Pro Max still uses three cameras and LiDAR, but the Main is now a 48-megapixel shooter with an f/1.78 aperture, the Ultra Wide is 12MP with an f/2.2 aperture, and the Telephoto is a 12MP camera with an f/2.8 aperture. Sensor-Shift has also been moved to the second generation.

The Main camera’s massive resolution does allow for higher-resolution shots to be taken, but also for a virtual fourth camera sensor range to be used. By cropping to the sensor’s center, Apple creates another 12MP camera that offers an “optical zoom” midway between the Main and Telephoto options.

That Telephoto on the iPhone 15 Pro Max has an extra trick in the form of a tetraprism lens. The system reflects light and increases the distance it must travel through the lenses, allowing Apple to push the optical zoom level further.

In effect, the iPhone 12 Pro Max has optical zooms of 0.5x (Ultra Wide), 1x (Main), and 2.5x (Telephoto). The iPhone 15 Pro Max offers 0.5x (Ultra Wide), 1x (Main), 2x (Cropped Main), and 5x (Telephoto).

iPhone 15 Pro Max vs iPhone 12 Pro Max – camera bumps

Computational photography continues to be alive and well, with Deep Fusion on the 12 accompanied by the Photonic Engine on the 15. Both offer Portrait Mode with Depth Control, Portrait Lighting, Night Mode, and Apple ProRAW, but the newer model also deals with Photographic Styles and macro photography.

On to video, and both can do 4K video at 60fps, including HDR recording with Dolby Vision, and 1080p 240fps Slo-Mo, among other features. The iPhone 15 goes further in offering the 4K HDR 30fps Cinematic Mode, Action Mode, ProRes video at 4K 60fps with external recording, Log video recording, Academy Color Encoding System support and macro video.

The TrueDepth camera hasn’t changed that much over the years, with Apple still using a 12MP sensor, albeit with an f/1.9 aperture in the newer model against f/2.2 in the older one. Deep Fusion is used by both, though there’s the Photonic Engine on the 15.

iPhone 15 Pro Max vs iPhone 12 Pro Max – Processing Performance

There have been three years of chip updates in the Pro line, with the 12 using the A14 Bionic and the 15 equipped with the latest A17 Pro.

Both chips use the time-tested arrangement of two high-performance cores and four efficiency cores for the CPU. The GPU moves from a 4-core version in the 12 to a 6-core GPU with hardware-accelerated ray tracing and a dedicated AV1 decoder for streaming video.

The Neural Engine is still a 16-core component, but the newest edition in the A17 Pro can handle almost 17 trillion operations per second. That’s against 11 trillion operations per second for the A14 Bionic.

We know that there is a three-generation jump to consider, which isn’t easy to spell out considering that no one has been able to use the A17 Pro in the real world yet, but it’s going to be pretty obvious that there will be a bit of a difference.

According to Geekbench, the iPhone 12 Pro scores 2,048 for its single-core test and 4,667 for the multi-core result. For Metal, it manages a respectable 16,009.

The nearest equivalent we can look at for the moment would be the A16 Bionic, the A17 Pro’s predecessor. It scores 2,521 for the single-core, 6,376 for the multi-core, and 22,287 for the Metal score.

There is an obvious difference between the chips, but consider that the A17 Pro should be faster than the A16. Apple says the CPU should be 10 percent faster, the GPU should see a 20-percent gain, and the Neural Engine should be twice as fast as the A16’s version.

iPhone 15 Pro Max vs iPhone 12 Pro Max – Connectivity

There hasn’t really been any change on the cellular connectivity side of things, with Apple supporting sub-6GHz and mmWave in both of its models, as well as Gigabit LTE.

On local wireless connectivity, Apple has moved from Wi-Fi 6 to Wi-Fi 6E in three years, with Bluetooth 5.0 switched out for Bluetooth 5.3. Unless you happen to have hardware that uses these technologies, you’re not really going to see much difference in everyday life.

Ultra Wideband is present in both, but the iPhone 15 Pro Max uses a second-generation chip, enabling range and directional information about other iPhone 15 handsets with the same chip when searching in Find My. There’s also Thread support in the iPhone 15 Pro Max, which is missing from the iPhone 12 Pro Max.

NFC is present in both cases, enabling Apple Pay to function.

If it wasn’t for Lightning, many would think this iPhone 12 Pro Max was the iPhone 15 version at first glance.

Turning to extended range communications, and the iPhone 15 Pro Max is capable of communicating with satellites, with both Emergency SOS via satellite and Roadside Assistance via Satellite functional on the model. There’s no satellite communications for the iPhone 12 Pro Max.

The physical connectivity has changed significantly, with Lightning in the iPhone 12 Pro Max switched out for USB-C. This change does mean there are more power connections open to iPhone 15 Pro Max users, who can even use chargers for many Android devices that already use the connection.

There’s also the bonus of data transfers, as Lightning is limited to USB 2.0 speeds, namely 480Mbps. The USB-C in the iPhone 15 Pro models can transfer at up to 10Gbps, and if you’re using the camera, you can record video directly to an external drive.

iPhone 15 Pro Max vs iPhone 12 Pro Max – Power and Battery

As time has marched on, battery technology and internal hardware designs have improved, allowing Apple to include higher capacities in its devices. That, coupled with Apple’s work to improve efficiency, tends to result in hardware slowly gaining battery life over time.

This also holds true for the iPhone 12 Pro Max and iPhone 15 Pro Max.

According to Apple, the iPhone 12 Pro Max has a 20-hour battery life for watching locally-stored video, or up to 12 hours for streamed video, and up to 80 hours of audio playback.

The iPhone 15 Pro Max soundly beats those figures, with 29 hours for local video playback, 25 hours for streamed video, and up to 95 hours for audio.

Getting power into the iPhones hasn’t changed much, with MagSafe and Qi support as well as wired charging, albeit with differing connectors.

To get to a 50% charge, Apple says it can take about 30 minutes using a 20W or higher power adapter for both models.

iPhone 15 Pro Max vs iPhone 12 Pro Max – Other Features

Apple maintains an IP68 rating for both models, meaning they can survive a water depth of up to 6 meters (19.7 feet) for 30 minutes.

The iPhone 15 Pro Max is the only one of the two to have Crash Detection, a feature where the iPhone’s sensors are used to determine if the user has been in a car accident. If detected, the iPhone will attempt to get assistance, unless the user stops it.

One advantage for the iPhone 12 Pro Max is that it supports dual SIMs, specifically one physical nano-SIM and one eSIM. Apple does offer the iPhone 15 Pro Max with the same nano-SIM and eSIM combo in many territories, but in some it only allows dual eSIMs and doesn’t offer physical SIM support at all.

iPhone 15 Pro Max vs iPhone 12 Pro Max – Capacity, Color, and Pricing

At the time of release, the iPhone 12 Pro Max started from $1,099, with a choice of three capacities: 128GB, 256GB, and 512GB. It can be found on Apple’s Certified Refurbished page, starting from $679.

iPhone 15 Pro Max vs iPhone 12 Pro Max – color selections

The iPhone 15 Pro Max starts at $1,199 for a 256GB capacity, the same cost as the 256GB capacity iPhone 12 Pro Max at launch. The 15 is also available in 512GB and 1TB capacities, priced at $1,399 and $1,599 respectively.

Apple sold the iPhone 12 Pro Max in a choice of four colors: Graphite, Silver, Gold, and Pacific Blue. Likewise, the iPhone 15 Pro Max is available in four colors: Natural Titanium, Blue Titanium, Black Titanium, and White Titanium.

iPhone 15 Pro Max vs iPhone 12 Pro Max – Worth the Upgrade?

It’s fair to say that there has been a lot of change in just three years for the top of Apple’s iPhone product range.

In that time, the iPhone 15 Pro Max has seen its processing performance grow significantly, its cameras improve in resolution and zoom capability, its video features blossom into something videographers and other creatives can get their teeth into, and its battery life extend considerably.

All this, while keeping the basic styling of the Pro Max model fairly static over time. With small exceptions such as the Action Button and the switch from Lightning to USB-C, there’s little visible change at all.

Under the hood, where it matters, is where the big alterations have taken place.

Owners of the iPhone 12 Pro Max will naturally want to see big changes in their next smartphone, and while the iPhone 15 Pro Max is a very familiar package, it certainly offers a lot to potential upgraders.

Where to buy the iPhone 15 Pro Max

The iPhone 15 Pro Max is available to order, though shipping delays are already substantial. Shipments start on September 22.

AppleInsider News

Clorox Products In Short Supply After Cyberattack

An anonymous reader quotes a report from CNN: A cyberattack at Clorox is causing wide-scale disruption of the company’s operations, hampering its ability to make its cleaning materials, Clorox said Monday. Clorox said some of its products are now in short supply as it has struggled to meet consumer demand during the disruption. Clorox didn’t specify which of its products are affected. The company on Monday revealed in a regulatory filing that it detected unauthorized activity in some of its information technology systems in August. Clorox said it immediately took action to stop the attack, including reducing its operations. It now believes the attack has been contained. Still, Clorox has not been able to get its manufacturing operations back up to full speed. The company said it is fulfilling and processing orders manually. The company doesn’t expect to begin the process of returning to normal operations until next week. "Clorox has already resumed production at the vast majority of its manufacturing sites and expects the ramp up to full production to occur over time," the company said. "At this time, the company cannot estimate how long it will take to resume fully normalized operations." The company said the cyberattack and the delays will hurt its current-quarter financial results materially, although Clorox said determining any longer-term impact would be premature, "given the ongoing recovery."

Read more of this story at Slashdot.

Slashdot