How to Store JSON Data in Database in Laravel (with example)


Storing JSON data in a Laravel database provides a flexible solution for managing dynamic attributes or unstructured data. In this tutorial, we will walk through the process of storing JSON data in a Laravel database, using a practical example of storing product attributes. By the end of this tutorial, you will have a clear understanding of how to store and retrieve JSON data efficiently using Laravel.

Step 1: Setting up a Laravel Project

Create a new Laravel project using the following command:

laravel new json-data-storage

Step 2: Creating the Products Table Migration

Generate a migration file to create the products table using the command:

php artisan make:migration create_products_table --create=products

Inside the generated migration file (database/migrations/YYYY_MM_DD_HHMMSS_create_products_table.php), define the table schema with a JSON column for the attributes:

<?php

use Illuminate\Database\Migrations\Migration;
use Illuminate\Database\Schema\Blueprint;
use Illuminate\Support\Facades\Schema;

return new class extends Migration
{
    /**
     * Run the migrations.
     */
    public function up(): void
    {
        Schema::create('products', function (Blueprint $table) {
            $table->id();
            $table->json('attributes');
            $table->timestamps();
        });
    }

    /**
     * Reverse the migrations.
     */
    public function down(): void
    {
        Schema::dropIfExists('products');
    }
};

Step 3: Running the Migration

Run the migration using the following command:

php artisan migrate

Step 4: Creating the Product Model

Generate a Product model using the command:

php artisan make:model Product

In the Product model (app/Models/Product.php), add the $casts property to specify that the attributes column should be cast to and from JSON:

<?php

namespace App\Models;

use Illuminate\Database\Eloquent\Factories\HasFactory;
use Illuminate\Database\Eloquent\Model;

class Product extends Model
{
    use HasFactory;

    protected $casts = [
        'attributes' => 'json',
    ];
}

Step 5: Storing Product Attributes as JSON

To store product attributes as JSON, we can simply use Laravel’s Eloquent model. Create a new Product instance, set the desired attributes as an array, and save it:

use App\Models\Product;

$product = new Product;
$product->attributes = [
    'color' => 'red',
    'size' => 'medium',
    'weight' => 0.5,
];
$product->save();
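
If you prefer mass assignment instead of setting properties one by one, the column must be whitelisted on the model. A minimal sketch, assuming you add protected $fillable = ['attributes']; to the Product model (not part of the original tutorial):

use App\Models\Product;

// Works because the 'json' cast serializes the array when saving.
$product = Product::create([
    'attributes' => ['color' => 'red', 'size' => 'medium'],
]);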

Step 6: Retrieving JSON Data

To retrieve the attributes of a product, you can access the attributes property on the model:

$product = Product::find(1);
$attributes = $product->attributes;

If you want to access a specific attribute within the JSON data, you can do so by using array access:

$color = $product->attributes['color'];

By accessing the attributes property, you can retrieve the JSON data associated with the product and access specific attributes as needed.
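
Laravel can also filter on JSON columns directly in a query using the -> operator, which is supported by MySQL, PostgreSQL, and other drivers with native JSON columns. A brief sketch:

use App\Models\Product;

// Fetch every product whose attributes JSON has color = red.
$redProducts = Product::where('attributes->color', 'red')->get();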

Step 7: Manipulating JSON Data (Full Array)

To update all the attributes of the product at once, you can assign a new array with all the key-value pairs and use Eloquent’s save() method to update the record directly.

$product = \App\Models\Product::find(1);
$product->attributes = [
    'color' => 'green',
    'size' => 'large',
    'weight' => 2.5,
];
$product->save();

Step 8: Manipulating JSON Data (One Value)

Updating a single value within the JSON data requires a slightly different approach in Laravel and has one important caveat. Directly modifying the attribute like $product->attributes['weight'] = 1.0 and saving the product will result in an ErrorException: “Indirect modification of overloaded property App\Models\Product::$attributes has no effect.”

To overcome this issue, you can follow the solution below:

$product = \App\Models\Product::find(1);
$attributes = $product->attributes; // create a copy of the array
$attributes['weight'] = 0.6; // modify the value in the copied array
$product->attributes = $attributes; // assign the copied array back to $product->attributes
$product->save();
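
Alternatively, the query builder can update a single key inside the JSON column using the -> syntax, bypassing the model (and its mass-assignment rules) entirely. This sketch assumes a database with native JSON support, such as MySQL 5.7+ or PostgreSQL:

use Illuminate\Support\Facades\DB;

// Update only the weight key inside the attributes JSON column.
DB::table('products')
    ->where('id', 1)
    ->update(['attributes->weight' => 0.6]);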

Conclusion

Storing JSON data in a database using Laravel provides flexibility and convenience when working with dynamic or unstructured data. By following the steps outlined in this tutorial, you have learned how to create the necessary migrations and models, store and retrieve JSON data, and manipulate the JSON data efficiently. This knowledge equips you with the tools to handle various use cases in your Laravel applications and opens up possibilities for efficiently managing complex data structures.

Start implementing JSON data storage in your Laravel projects and unlock the full potential of your application’s data management capabilities. Happy coding!

Laravel News Links

Dynamic Database Config


README


This Laravel package helps you dynamically set additional database configurations through the .env file or the database.


STEPS TO INSTALL

composer require ikechukwukalu/dynamicdatabaseconfig

Introduction

The need for this package came up when I once took over an existing project that, due to certain constraints, ran 9 databases, one for each country in which the application was used. The application also had a central database shared by every country.

The config/database.php file wasn’t pretty. I’d prefer to have all configurations within the .env file only. The big question was: what if the number of databases grew to 19? These were the problems, both pending and existing, that needed a clean hack/solution.

Middlewares

  • env.database.config
  • dynamic.database.config

Env.database.config Middleware

This middleware fetches database configurations from the .env file using a postfix such as ONE and dynamically declares an additional database connection for your Laravel application.

DB_HOST_ONE=127.0.0.1
DB_PORT_ONE=3306
DB_DATABASE_ONE=second_db
DB_USERNAME_ONE=root
DB_PASSWORD_ONE=
  • Sample middleware implementation
use Illuminate\Http\Request;
use Illuminate\Support\Facades\Route;

/**
 * mysql is the type of relational database connection being replicated - $database
 * mysql_1 is the new connection name - $name
 * ONE is the postfix - $postfix
 */

Route::middleware(['env.database.config:mysql,mysql_1,ONE'])->group(function () {
    Route::post('/user', function (Request $request) {
        /**
         * $request->_db_connection === 'mysql_1'
         */
        return \App\Models\User::on('mysql_1')->find(1);
    });
});

Route::post('/user', function (Request $request) {
        /**
         * $request->_db_connection === 'mysql_1'
         */
        return \App\Models\User::on('mysql_1')->find(1);
})->middleware('env.database.config:mysql,mysql_1,ONE');

You do not need to pass the postfix parameter (ONE) to the middleware for the $postFix variable if you set the session value session(config('dynamicdatabaseconfig.session_postfix')). When a postfix parameter has been set, however, it will be used instead of the session value.
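
For illustration, setting that session value might look like the following sketch (this assumes the middleware reads the postfix from the session key defined in the package's config; check the package source to confirm):

// Somewhere before the request reaches the middleware:
session([config('dynamicdatabaseconfig.session_postfix') => 'ONE']);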

Dynamic.database.config Middleware

This middleware fetches database configurations from the database_configurations table within the primary migration database. It utilises a unique $ref variable, which should ideally be human-readable; that makes it easier to run the package’s console commands for migrations. This middleware will also dynamically declare an additional database connection for your Laravel application.

use Ikechukwukalu\Dynamicdatabaseconfig\Models\DatabaseConfiguration;

protected $hidden = [
        'ref',
        'name',
        'database',

        /**
         * Accepts only arrays
         */
        'configuration'
];
  • Sample eloquent database insert
$countries = ['nigeria', 'ghana', 'togo', 'kenya'];
$config = \Config::get('database.connections.mysql');

foreach ($countries as $country) {
    $config['database'] = $country . '_db';
    DatabaseConfiguration::firstOrCreate(
    ['ref' => $country],
    [
        'ref' => $country,
        'name' => 'mysql_' . $country,
        'database' => 'mysql',
        'configuration' => $config
    ]);
}
  • Sample middleware implementation
use Illuminate\Http\Request;
use Illuminate\Support\Facades\Route;

/**
 * nigeria is $ref value
 */

Route::middleware(['dynamic.database.config:nigeria'])->group(function () {
    Route::post('/user', function (Request $request) {
        /**
         * $request->_db_connection === 'mysql_nigeria'
         */
        return \App\Models\User::on('mysql_nigeria')->find(1);
    });
});

Route::post('/user', function (Request $request) {
        /**
         * $request->_db_connection === 'mysql_nigeria'
         */
        return \App\Models\User::on('mysql_nigeria')->find(1);
})->middleware('dynamic.database.config:nigeria');

As with the postfix above, you do not need to pass the ref parameter (nigeria) to the middleware for the $ref variable if you set the session value session(config('dynamicdatabaseconfig.session_ref')). When a ref parameter has been set, it will be used instead of the session value.

By default, the values stored within the configuration field will be hashed, but you can adjust this from the .env file by setting DB_CONFIGURATIONS_HASH=false.

Migration

It’s compulsory to first migrate Laravel’s initial database.

Other Migrations

  • Default migrations
  • Isolated migrations

Default Migrations

This will only migrate files within Laravel’s default migration path, database/migrations.

php artisan env:migrate mysql mysql_1 ONE

php artisan dynamic:migrate nigeria

Isolated Migrations

This will only migrate files within the specified migration path, database/migrations/folder.

php artisan env:migrate mysql mysql_1 ONE --path=database/migrations/folder

php artisan dynamic:migrate nigeria --path=database/migrations/folder

Both Migrations

Running the migrations as displayed below will result in the respective database having the migrated data from migrations within database/migrations and database/migrations/folder.

php artisan env:migrate mysql mysql_1 ONE
php artisan env:migrate mysql mysql_1 ONE --path=database/migrations/folder

php artisan dynamic:migrate nigeria
php artisan dynamic:migrate nigeria --path=database/migrations/folder

Database Seeding

php artisan env:migrate mysql mysql_1 ONE --seed
php artisan env:migrate mysql mysql_1 ONE --seeder=DatabaseSeederOne
php artisan env:migrate mysql mysql_1 ONE --seeder=DatabaseSeederOne  --path=database/migrations/folder

php artisan dynamic:migrate nigeria --seed
php artisan dynamic:migrate nigeria --seeder=DatabaseSeederNigeria
php artisan dynamic:migrate nigeria --seeder=DatabaseSeederNigeria  --path=database/migrations/folder

Re-running Migrations Afresh

php artisan env:migrate mysql mysql_1 ONE --fresh
php artisan env:migrate mysql mysql_1 ONE --fresh --seed
php artisan env:migrate mysql mysql_1 ONE --fresh --seeder=DatabaseSeederOne
php artisan env:migrate mysql mysql_1 ONE --path=database/migrations/folder --fresh
php artisan env:migrate mysql mysql_1 ONE --path=database/migrations/folder --fresh --seeder=DatabaseSeederOne

php artisan dynamic:migrate nigeria --fresh
php artisan dynamic:migrate nigeria --fresh --seed
php artisan dynamic:migrate nigeria --fresh --seeder=DatabaseSeederNigeria
php artisan dynamic:migrate nigeria --path=database/migrations/folder --fresh
php artisan dynamic:migrate nigeria --path=database/migrations/folder --fresh --seeder=DatabaseSeederNigeria

Refreshing Migrations

php artisan env:migrate mysql mysql_1 ONE --refresh
php artisan env:migrate mysql mysql_1 ONE --refresh --seed
php artisan env:migrate mysql mysql_1 ONE --refresh --seeder=DatabaseSeederOne
php artisan env:migrate mysql mysql_1 ONE --path=database/migrations/folder --refresh
php artisan env:migrate mysql mysql_1 ONE --path=database/migrations/folder --refresh --seeder=DatabaseSeederOne

php artisan dynamic:migrate nigeria --refresh
php artisan dynamic:migrate nigeria --refresh --seed
php artisan dynamic:migrate nigeria --refresh --seeder=DatabaseSeederNigeria
php artisan dynamic:migrate nigeria --path=database/migrations/folder --refresh
php artisan dynamic:migrate nigeria --path=database/migrations/folder --refresh --seeder=DatabaseSeederNigeria

Rolling Back Migrations

php artisan env:migrate mysql mysql_1 ONE --rollback
php artisan env:migrate mysql mysql_1 ONE --path=database/migrations/folder --rollback

php artisan dynamic:migrate nigeria --rollback
php artisan dynamic:migrate nigeria --path=database/migrations/folder --rollback

NOTE

  • A primary database is needed before any other database can be migrated.
  • A database will be created if it does not exist.
  • Each database will retain its own independent migration table.
  • It’s recommended that you do not publish the package’s migration file, unless you want the database_configurations table to be migrated into every extra database created when running Default migrations.

PUBLISH MIGRATIONS

  • php artisan vendor:publish --tag=ddc-migrations

PUBLISH CONFIG

  • php artisan vendor:publish --tag=ddc-config

LICENSE

The DDC package is an open-sourced software licensed under the MIT license.

Laravel News Links

The implications of this technology are staggering


I was astonished to read of the wide-ranging implications of a new laser weeding technology now available to farmers.

Carbon Robotics is now shipping its LaserWeeder to farms around the United States; the machine uses the power of lasers and robotics to rid fields of weeds … The LaserWeeder can eliminate over 200,000 weeds per hour and offer up to 80% cost savings in weed control. 

. . .

The LaserWeeder is a 20-foot-wide unit comprised of three rows of 10 lasers that are pulled behind a tractor.

Thirty lasers are at work as the unit travels across a field destroying weeds "with millimeter accuracy, skipping the plant and killing the weed," said Mikesell. 

The LaserWeeder "does the equivalent work of about 70 people," he continued.

. . .

The technology "makes for a much more consistent growing process and adds a bunch of health to your yield. You get big yield improvements because you’re not damaging the crops with herbicides."

There’s more at the link.

Here’s a publicity video from Carbon Robotics showing the LaserWeeder in action.

The economic implications for farmers and farm workers are mind-boggling.

  • The workers normally hired to manage weeds in crops won’t be needed any more – or, at any rate, far fewer of them.  That’s a huge money-saver for farmers, but how many workers will end up unemployed, with no jobs available to replace those they’ve lost?  What will that do to the unemployment rate overall?
  • I’ve no idea how much per acre farmers normally spend on herbicides, but it’s got to add up.  It probably varies from region to region.  If those expenses are no longer needed, the robotic/laser technology of the LaserWeeder becomes that much more affordable.
  • What will this mean for fertilizers and other input costs?  If crops are no longer threatened by weed incursion, will farmers still need as much fertilizer to obtain high yields, or will the absence of weeds – and the saving of time and money through not having to fight them – mean that less fertilizer can be used, because overall crop productivity will be higher even without it?
  • Can this technology be scaled according to the size of farm and type of crop?  The video above shows a big machine in a big field.  Can a smaller machine be made at a lower cost?  Can smaller farms use it cost-effectively?  Can the technology be adapted to (say) market gardening in greenhouses, rather than fields?  These things may not be possible now, but if they become feasible, they may make even the small-scale, backyard growing of fruit and vegetables much easier and cheaper.  Might we be able to grow a certain proportion of our own food, more practically and affordably than before, thereby reducing our dependence on "Big Ag"?
  • Do these input cost savings mean that farmers (and Big Ag in particular) can/will accept lower prices for their produce, because they’ll have lower input costs to grow them?
  • You might see groups of neighbors hiring or buying such technology to share among themselves, at home or in allotments.
  • Over time, this technology may revolutionize the production of food, thereby addressing some of the "woke" or "green" concerns about modern farming practices.  There’s a lot of concern about the over-use of farm chemicals and resultant pollution problems (see, for example, the so-called "dead zone" in the Gulf of Mexico, caused by such chemicals draining down the Mississippi River and out to sea).  Could such technology help reduce that problem, by needing less fertilizer and/or herbicides?

Just the thought of no longer having to spend hours weeding in the back yard is enormously tempting.  This will bear watching.

Peter

Bayou Renaissance Man

This Rig Turns Walls/Windows Into Animated Lite-Brites


Interior/exterior decoration, 2020s-style: Govee Curtain Lights are essentially a curtain of hanging LED-embedded strips. Measuring 1.5m (5′) wide and 2m (6.6′) tall, this provides a grid of 520 evenly-spaced LEDs that you can program via an app, allowing you to turn walls or windows into gigantic animated Lite-Brites.

You can either choose from stock animations provided by the company, or create your own. The company also boasts of a "Music Mode" that makes the lights move in accordance with music.

If this is your jam, startup Govee paid the YouTuber below to produce this video on setting up and using the product.

All I can think is how dated this is going to look in a few years; I imagine if this style of decoration catches on, the consumer will demand a much higher level of resolution.

Core77

Data School: Make your own *private* GPT with Python 🔒


ChatGPT is amazing, but its knowledge is limited to the data on which it was trained.

Wouldn't it be great if you could use the power of Large Language Models (LLMs) to interact with your own private documents, without uploading them to the web?

The great news is that you can do this TODAY! Let me show you how…

privateGPT is an open source project that allows you to parse your own documents and interact with them using an LLM. You ask it questions, and the LLM will generate answers from your documents.

All using Python, all 100% private, all 100% free!

Below, I'll walk you through how to set it up. (Note that this will require some familiarity with the command line.)


1️⃣ Clone or download the repository

If git is installed on your computer, then navigate to an appropriate folder (perhaps "Documents") and clone the repository (git clone https://github.com/imartinez/privateGPT.git). That will create a "privateGPT" folder, so change into that folder (cd privateGPT).

Alternatively, you could download the repository as a zip file (using the green "Code" button), move the zip file to an appropriate folder, and then unzip it. It will create a folder called "privateGPT-main", which you should rename to "privateGPT". You'll then need to navigate to that folder using the command line.


2️⃣ Create and activate a new environment

I highly recommend setting up a virtual environment for this project. My tool of choice is conda, which is available through Anaconda (the full distribution) or Miniconda (a minimal installer), though many other tools are available.

If you're using conda, create an environment called "gpt" that includes the latest version of Python using conda create -n gpt python. Then, activate the environment using conda activate gpt. Use conda list to see which packages are installed in this environment.

(Note: privateGPT requires Python 3.10 or later.)


3️⃣ Install the packages listed in requirements.txt

First, make sure that "privateGPT" is your working directory using pwd. Then, make sure that "gpt" is your active environment using conda info.

Once you've done that, use pip3 install -r requirements.txt to install all of the packages listed in that file into the "gpt" environment. This will take at least a few minutes.

Use conda list to see the updated list of which packages are installed.

(Note: The System Requirements section of the README may be helpful if you run into an installation error.)


4️⃣ Download the LLM model

In the Environment Setup section of the README, there's a link to an LLM. Currently, that LLM is ggml-gpt4all-j-v1.3-groovy.bin. Download that file (3.5 GB).

Then, create a subfolder of the "privateGPT" folder called "models", and move the downloaded LLM file to "models".


5️⃣ Copy the environment file

In the "privateGPT" folder, there&aposs a file named "example.env". Make a copy of that file named ".env" using cp example.env .env. Use ls -a to check that it worked.

(Note: This file has nothing to do with your virtual environment.)


6️⃣ Add your documents

Add your private documents to the "source_documents" folder, which is a subfolder of the "privateGPT" folder. Here's a list of the supported file types.

I recommend starting with a small number of documents so that you can quickly verify that the entire process works. (The "source_documents" folder already contains a sample document, "state_of_the_union.txt", so you can actually just start with this document if you like.)


7️⃣ Ingest your documents

Once again, make sure that "privateGPT" is your working directory using pwd.

Then, run python ingest.py to parse the documents. This may run quickly (< 1 minute) if you only added a few small documents, but it can take a very long time with larger documents.

Once this process is done, you'll notice that there's a new subfolder of "privateGPT" called "db".


8️⃣ Interact with your documents

Run python privateGPT.py to start querying your documents! Once it has loaded, you will see the text Enter a query:

Type in your question and hit enter. After a minute, it will answer your question, followed by a list of source documents that it used for context.

(Keep in mind that the LLM has "knowledge" far outside your documents, so it can answer questions that have nothing to do with the documents you provided to it.)

When you're done asking questions, just type exit.


Troubleshooting

This project is less than two months old, and it depends on other libraries which are also quite new! Thus it's highly likely that you will run into bugs, unexplained errors, and crashes.

For example, if you get an "unknown token" error after asking a question, my experience has been that you can ignore the error and you will still get an answer to your question.

On the other hand, if you get a memory-related error, you will need to end the process by hitting "Ctrl + C" on your keyboard. (Then, just restart it by running python privateGPT.py.)

You might be able to find a workaround to a particular problem by searching the Issues in the privateGPT repository.

If you post your own GitHub issue, please be kind! This is an open source project being run by one person in his spare time (for free)!


Usage tips

Want to query more documents? Add them to the "source_documents" folder and re-run python ingest.py.

Want to start over? Delete the "db" folder, and a new "db" folder will be created the next time you ingest documents.

Want to hide the source documents for each answer? Run python privateGPT.py -S instead of python privateGPT.py.

Want to try a different LLM? Download a different LLM to the "models" folder and reference it in the ".env" file.

Want to use the latest version of the code? Given its popularity, it's likely that this project will evolve rapidly. If you want to use the latest version of the code, run git pull origin main.


Disclaimer

I think it's worth repeating the disclaimer listed at the bottom of the repository:

This is a test project to validate the feasibility of a fully private solution for question answering using LLMs and Vector embeddings. It is not production-ready, and it is not meant to be used in production. The models selection is not optimized for performance, but for privacy; but it is possible to use different models and vector stores to improve performance.

Planet Python

Laravel Schema Rules


Automatically generate basic Laravel validation rules based on your database table schema!
Use these as a starting point to fine-tune and optimize your validation rules as needed.

Installation

You can install the package via composer:

composer require laracraft-tech/laravel-schema-rules --dev

Then publish the config file with:

php artisan vendor:publish --tag="schema-rules-config"


Usage

Let’s say you’ve migrated this fictional table:

Schema::create('persons', function (Blueprint $table) {
    $table->id();
    $table->string('first_name', 100);
    $table->string('last_name', 100);
    $table->string('email');
    $table->foreignId('address_id')->constrained();
    $table->text('bio')->nullable();
    $table->enum('gender', ['m', 'f', 'd']);
    $table->date('birth');
    $table->year('graduated');
    $table->float('body_size');
    $table->unsignedTinyInteger('children_count')->nullable();
    $table->integer('account_balance');
    $table->unsignedInteger('net_income');
    $table->boolean('send_newsletter')->nullable();
});

Generate rules for a whole table

Now if you run:

php artisan schema:generate-rules persons

You’ll get:

Schema-based validation rules for table "persons" have been generated!
Copy & paste these to your controller validation or form request or where ever your validation takes place:
[
    'first_name' => ['required', 'string', 'min:1', 'max:100'],
    'last_name' => ['required', 'string', 'min:1', 'max:100'],
    'email' => ['required', 'string', 'min:1', 'max:255'],
    'address_id' => ['required', 'exists:addresses,id'],
    'bio' => ['nullable', 'string', 'min:1'],
    'gender' => ['required', 'string', 'in:m,f,d'],
    'birth' => ['required', 'date'],
    'graduated' => ['required', 'integer', 'min:1901', 'max:2155'],
    'body_size' => ['required', 'numeric'],
    'children_count' => ['nullable', 'integer', 'min:0', 'max:255'],
    'account_balance' => ['required', 'integer', 'min:-2147483648', 'max:2147483647'],
    'net_income' => ['required', 'integer', 'min:0', 'max:4294967295'],
    'send_newsletter' => ['nullable', 'boolean']
]

As you may have noticed, the float column body_size just gets generated as ['required', 'numeric'].
Proper rules for float, decimal, and double columns are not yet implemented!
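
Since the output is meant as a starting point, you would typically tighten such rules by hand. A hypothetical refinement (the bounds are invented for illustration):

'body_size' => ['required', 'numeric', 'between:0,300'],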

Generate rules for specific columns

You can also explicitly specify the columns:

php artisan schema:generate-rules persons --columns first_name,last_name,email

Which gives you:

Schema-based validation rules for table "persons" have been generated!
Copy & paste these to your controller validation or form request or where ever your validation takes place:
[
    'first_name' => ['required', 'string', 'min:1', 'max:100'],
    'last_name' => ['required', 'string', 'min:1', 'max:100'],
    'email' => ['required', 'string', 'min:1', 'max:255']
]

Generate Form Request Class

Optionally, you can add a --create-request or -c flag,
which will create a form request class with the generated rules for you!

# creates app/Http/Requests/StorePersonRequest.php (store request is the default)
php artisan schema:generate-rules persons --create-request 

# creates/overwrites app/Http/Requests/StorePersonRequest.php
php artisan schema:generate-rules persons --create-request --force
 
# creates app/Http/Requests/UpdatePersonRequest.php
php artisan schema:generate-rules persons --create-request --file UpdatePersonRequest

# creates app/Http/Requests/Api/V1/StorePersonRequest.php
php artisan schema:generate-rules persons --create-request --file Api\\V1\\StorePersonRequest

# creates/overwrites app/Http/Requests/Api/V1/StorePersonRequest.php (using shortcuts)
php artisan schema:generate-rules persons -cf --file Api\\V1\\StorePersonRequest

Supported Drivers

Currently, the supported database drivers are MySQL, PostgreSQL, and SQLite.

Please note, since each driver supports different data types and range specifications,
the validation rules generated by this package may vary depending on the database driver you are using.

Testing

Changelog

Please see CHANGELOG for more information on what has changed recently.

Contributing

Please see CONTRIBUTING for details.

Security Vulnerabilities

Please review our security policy on how to report security vulnerabilities.

Credits

License

The MIT License (MIT). Please see License File for more information.

Laravel News Links

MySQL or PostgreSQL: Which is Better?


For more than a quarter of a century, people have been discussing “Which is better, MySQL or PostgreSQL?” — with no resolution. When people ask me which is better, I have to ask them what they want to do and how they want to do it. 

I’ll explain using a bad analogy: 

What type of car is best? This depends on your needs. If you want to go fast, a top fuel dragster will set you back close to a million dollars by the time you buy the chassis, spare engines, tooling, and transporter. That is for a car that goes a quarter-mile at a time. If you want fast, it will do more than 300 miles per hour for about 1,000 feet. But you cannot parallel park it or use it to run down to the store for some chips and beer.

A small economy car is better for that store run and is operated at a fraction of the cost of the dragster. You do not need to wear a flame-resistant suit to drive it or repack the parachutes each time you want to stop after a trip. For most purposes, this is enough of a car for most people. If you are a drag racer, that car will not be competitive. 

Both MySQL and PostgreSQL do the basics very well

From a high level, one relational database management system is pretty much like every other relational database management system. Pick either PostgreSQL or MySQL, and you can be happy, leading a fulfilling and satisfying life full of joy. Both store data more than adequately and will do at least eighty percent of what you want to do with ease.

However, if you have certain data processing needs, budget requirements, limitations in the skill level of support staff, or infrastructure issues, then you need to be a little pickier.  

MySQL was criticized for years for doing dumb things with data such as allowing bad calendar dates, truncating data without warning, and other idiosyncrasies that its users learned to avoid. These problems have been rectified, but the old reputation lives on in the annals of the Internet and the memories of critics. It does not follow the SQL Standard as closely as other databases, and some useful functions such as MERGE() are missing.

PostgreSQL has enjoyed a reputation of being the open source database closest to the SQL Standard, but close might not be good enough for you. When you look at the multi-version concurrency control (MVCC) design that PostgreSQL uses and compare it to the designs of other databases, it looks chaotic. Dealing with dead tuples can be tricky but is much improved with automatic vacuuming, as long as the automated process runs well. And frankly, index bloat seems like something that should have been fixed years ago.

Neither is perfect, and each has its peccadillos that must be honored. Those will be covered a little later herein, but first, in honest fashion, you should ask yourself: “What do I need in a database?”

Determining whether MySQL or PostgreSQL better suits your needs

The vast majority of databases being used today do not get close to using all of their capabilities. Most of the work is CRUD — Create, Read (SELECT), Update, and Delete queries — which will probably never scratch the surface of the advanced features found in that database. Both of the databases being considered here do this type of work exceedingly well.

If you need a feature found in one but not the other, such as JSON_TABLE() in MySQL or MERGE() in PostgreSQL, then your choice has been made for you, maybe. JSON_TABLE() may make it into PG 17 in 2024 after being dropped at the last moment from PG 15. And if you process a lot of JSON-formatted data, PostgreSQL gives you two choices, JSON and JSONB, each with its own quirks, while MySQL has a single JSON datatype.

The question becomes this: Is either database good enough for your needs? And these: Does it have the functions you need? Window Functions are great for analytics, but is that something you are going to use? Both MySQL and PostgreSQL have Window Functions, but PostgreSQL is a little more elaborate in its offerings. Both have various ways of replicating data to other servers, but the implementation details of one particular approach might be unpalatable for you. The more exacting your needs, the easier it becomes to identify your choice.

If at this point you realize that you could conceivably use either database, at least in some abstract theoretical way, then we should make the next big step — to the care and feeding of your database.

Care and feeding — which database is easier to manage?

Databases are the toddlers of the software world. While other products can be installed and ignored without worry, databases need attention, constant attention, and lots of it. Ignoring your database can be disastrous.

MySQL is easier to take care of and administer in most cases. MySQL distributions, like Percona’s, tend to be one-stop offerings. The server, client, connectors, etc., are usually offered in one place.

With PostgreSQL, it can be harder to get all the requisite components because you might have to visit several websites. There are several options for connection poolers, load balancers, and replication packages from various vendors. Installing extensions to the server is easy, but does that new extension work with other parts of your server? In this case, you have to do the testing, whereas MySQL tests its components as a group.

PostgreSQL has benefitted from a lot of engineering in the past several years, which helps it perform overhead tasks much more easily. MySQL has automated most of those overhead tasks so you do not have to worry about them. For example, in PostgreSQL, copies of outdated rows in a table must be vacuumed separately to avoid bloat, while InnoDB in MySQL handles this automatically. That alone makes MySQL easier to administer than PostgreSQL.

There are differences in connecting to the server to submit a query. MySQL uses a pool of threads, which is much less work for the server than PostgreSQL’s approach of forking off a process for each connection. That puts a higher load on the server, but it can be rectified by using a connection pooler.

Backups are a necessary part of owning a database. Both databases have many backup tools, which again requires you to make another choice. In the MySQL arena, Percona XtraBackup is the best tool hands down, and I am not just saying that as a Percona employee, but as someone who has used the product. 

PostgreSQL has many options, but no one product stands head and shoulders above the rest. If your database instance is in the cloud, then you can peruse the feature set of your cloud vendor’s offering. But I advise you to make copies of your backups off-premise or off-cloud, no matter your choice.

I used to be the Certification Manager for MySQL AB (and Sun Microsystems and Oracle) and spoke frequently to hiring managers who routinely told me it was very hard to find qualified MySQL DBAs. And they said it was impossible to find qualified PostgreSQL DBAs. If you and your staff have experience and skill in one of the databases, then you will probably skew your criteria in that direction.

MySQL’s InnoDB Cluster is the best thought-out and easiest-to-implement replication architecture. PostgreSQL is playing catchup, as its alternatives are not as simple to implement. Both do logical replication well, but Oracle’s product is more polished.

PostgreSQL is a richer environment with more data types and more operators, and it’s closer to the SQL standard implementation. I am a big fan of the MERGE() function, as I spent part of my career in the processing of cash register transaction logs where this function shines. This might seem like a trivial thing unless you are processing similar data and then it becomes of major importance. PostgreSQL has an almost embarrassing number of index types and the ability to index only some values in a column.  

The PostgreSQL and MySQL communities

Both MySQL and PostgreSQL have large, thriving communities. There are meetups, conferences, mailing lists, slack channels, and tutorials galore for both. One big difference is that PostgreSQL is pretty much developed by contributors using mailing lists while MySQL is mainly produced by Oracle’s MySQL Engineers. The difference is also notable in that Oracle determines the future of upstream MySQL, while PostgreSQL is vendor-neutral.

In both cases, a few hundred individuals work on the main server code. The main difference in the development is that PostgreSQL’s new functionality is open for observation (if you are on the right mailing list) while Oracle often provides little or no notice of something new. 

Conclusion: Do I choose PostgreSQL or MySQL?

Right now, both PostgreSQL and MySQL are great choices for a database. MySQL is easier to implement and run but might lack the features you need. PostgreSQL is feature-rich but needs more care to configure and operate.

Another option is Percona software for MySQL, which has enterprise features such as data masking, at-rest encryption, RocksDB, and an improved connection pooler. The software is also freely available. Percona software for PostgreSQL is also a high-quality offering with many of the most popular extensions already available, making it easier to run PostgreSQL in your production and mission-critical environments.

Learn more about Percona software for MySQL

 

Learn more about Percona software for PostgreSQL

Percona Database Performance Blog

Migrate passwords from a legacy PHP application to Laravel


Migrating a legacy PHP application to Laravel will probably require a custom hashing driver.

This happens because Laravel’s default hashing driver is bcrypt, with argon as another built-in option, while MD5, SHA-1, SHA-256, and SHA-512 were and still are widely used, especially in applications that do not rely on a modern framework.

Considering that we already have a table storing the hashed passwords, we need to make Laravel use the correct hash algorithm to compare the users’ raw passwords when authenticating.

Create a custom hash driver in Laravel

It should implement the Illuminate\Contracts\Hashing\Hasher
interface and extend the Illuminate\Hashing\AbstractHasher class:

app/Hashing/Md5Hasher.php

namespace App\Hashing;

use Illuminate\Contracts\Hashing\Hasher;
use Illuminate\Hashing\AbstractHasher;

class Md5Hasher extends AbstractHasher implements Hasher
{
    public function make($value, array $options = []): string
    {
        return md5($value . config('hashing.md5.salt'));
    }

    public function check($value, $hashedValue, array $options = []): bool
    {
        return $this->make($value) === $hashedValue;
    }

    public function needsRehash($hashedValue, array $options = []): bool
    {
        return false;
    }
}

Register the new driver in your application

Register it in the boot method of the following class:

app/Providers/AuthServiceProvider.php

namespace App\Providers;

use App\Hashing\Md5Hasher;
use Illuminate\Support\Facades\Hash;
use Illuminate\Support\ServiceProvider;

class AuthServiceProvider extends ServiceProvider
{
    // ...

    public function boot(): void
    {
        // ...

        Hash::extend('md5', static function () {
            return new Md5Hasher();
        });
    }
}
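
Once registered, the driver resolves through the Hash facade like any built-in driver. A quick sanity check, e.g. in php artisan tinker, might look like this:

use Illuminate\Support\Facades\Hash;

$hash = Hash::driver('md5')->make('secret');  // md5('secret' . salt)
Hash::driver('md5')->check('secret', $hash);  // true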

Define the hashing SALT (Optional)

Your legacy application may have concatenated a salt with the password before hashing it. We can define the salt in the config and delegate its value to the .env file.
If your legacy application does not use a salt, you won’t need to add it to the .env file.

config/hashing.php

return [
    // ...

    'md5' => [
        'salt' => env('MD5_SALT'),
    ],
];

.env

MD5_SALT=my_salt

Update the passwords

To rehash the password, we can intercept users’ login attempts and check whether the MD5-hashed password matches the one in the database.
We can do that by listening to the Illuminate\Auth\Events\Attempting event.

php artisan make:listener UpdateMd5Password

app/Providers/EventServiceProvider.php

class EventServiceProvider extends ServiceProvider
{
    //...

    protected $listen = [

        //...

        \Illuminate\Auth\Events\Attempting::class => [
            \App\Listeners\UpdateMd5Password::class,
        ],
    ];

    //...
}

The following implementation checks whether the credentials match the legacy algorithm (MD5) and, if so, rehashes the password with the new one.
The authentication flow then continues, and the user is successfully authenticated using the default driver (bcrypt).

app/Listeners/UpdateMd5Password.php

namespace App\Listeners;

use App\Models\User;
use Illuminate\Support\Facades\Hash;

class UpdateMd5Password
{
    public function handle(object $event): void
    {
        $user = User::where('email', $event->credentials['email'])->first();

        $md5Password = Hash::driver('md5')->make($event->credentials['password']);

        if ($user && $user->getAuthPassword() === $md5Password) {
            $user->password = Hash::make($event->credentials['password']);
            $user->save();
        }
    }
}

In closing

Your legacy PHP application may use another hashing algorithm. You can make the necessary changes to achieve the same behavior.
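
For example, if the legacy system used salted SHA-1, only the hash function inside make() changes. A minimal sketch mirroring the Md5Hasher above (the hashing.sha1.salt config key is an assumption, analogous to the md5 one):

namespace App\Hashing;

use Illuminate\Contracts\Hashing\Hasher;
use Illuminate\Hashing\AbstractHasher;

// Register with Hash::extend('sha1', ...) just like the md5 driver above.
class Sha1Hasher extends AbstractHasher implements Hasher
{
    public function make($value, array $options = []): string
    {
        return sha1($value . config('hashing.sha1.salt'));
    }

    public function check($value, $hashedValue, array $options = []): bool
    {
        return $this->make($value) === $hashedValue;
    }

    public function needsRehash($hashedValue, array $options = []): bool
    {
        return false;
    }
}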

Join the discussion on Twitter.

Laravel News Links