https://media.notthebee.com/articles/649f422861c4a649f422861c4b.jpg
BOOM:
Not the Bee
https://theawesomer.com/photos/2023/06/mandalorian_a_team_t.jpg
The Mandalorian has some great action. So did The A-Team. So Nebulous Bee thought it would be fun to combine the two. This edit reimagines the credits for the Star Wars series in the style of the 1980s hit series, with Bo-Katan, Paz Vizsla, The Armorer, and Din Djarin standing in for Templeton Peck, B.A. Baracus, Hannibal Smith, and Howling Mad Murdock.
The Awesomer
https://www.louderwithcrowder.com/media-library/image.png?id=34222424&width=980
JK Rowling has been the head transphobe in charge for quite some time now. You know how leftist sh*tc*nts get when you express a different opinion on the internet. They go right to the -isms and -phobias they accuse you of. In Rowling’s case, she also gets people threatening her life. All for the thought-crime of, four years ago, defending someone who was fired for saying boys are boys and girls are girls.
If death threats don’t faze her, what do you think telling her to put her mouth on your wiener is going to do? One loser found this out the hard way, and if he has any friends, I hope they’re all laughing at him now.
So how did things escalate this quickly? It starts with Maya Forstater, the woman who got fired from her job for "publishing 'offensive' tweets questioning government proposals to allow people to self-identify as the opposite sex." Defending this woman is what started Rowling on her road to TERFdom. Forstater had just won a settlement when the organisation "was found to have engaged in unlawful discrimination in its decision not to offer her an employment contract or to renew her visiting fellowship."
Turns out that having a common belief about sex and gender does NOT equal bigotry.
Rowling offered Maya her congratulations.
Which led to Joshua D’silva telling Rowling to suck his dick. This WAS the link to the tweet. It was literally deleted while I was working on this post, after Rowling informed the world that D’silva allegedly had a penis so small it was barely detectable. How embarrassing.
The whole Rowling thing still cracks me up. She is in no way, shape, or form on our side politically AT ALL. When leftists were "resisting" Trump in 2015-16, they would identify themselves as sects from Hogwarts. We would dare them to read another book.
All it took was one tweet and one single opinion for the left to turn on JK Rowling as if she were literally Voldemort. It’s hilarious. Especially knowing Rowling is sleeping on a giant pile of money as she responds to each of her haters.
><><><><><><
Brodigan is Grand Poobah of this here website and when he isn’t writing words about things enjoys day drinking, pro-wrestling, and country music. You can find him on the Twitter too.
Facebook doesn’t want you reading this post or any others lately. Their algorithm hides our stories and shenanigans as best it can. The best way to stick it to Zuckerface? Bookmark LouderWithCrowder.com and check us out throughout the day! Also, follow us on Instagram and Twitter.
Louder With Crowder
This Laravel package helps you dynamically set additional database configurations through the .env file or the database.
composer require ikechukwukalu/dynamicdatabaseconfig
The need for this package came up when I once handled an existing project that, due to certain constraints, had nine databases, one for each country where the application was used. The application also had a central database shared by every country.
The config/database file wasn’t pretty. I’d prefer to have all configurations within the .env file only. The big question was: what if the number of databases grew to 19? These were the problems, both existing and pending, that needed a clean solution.
The package provides two middleware: env.database.config and dynamic.database.config.
Env.database.config middleware

This middleware fetches database configurations from the .env file using postfixes like ONE. This dynamically declares an additional database connection for your Laravel application.
DB_HOST_ONE=127.0.0.1
DB_PORT_ONE=3306
DB_DATABASE_ONE=second_db
DB_USERNAME_ONE=root
DB_PASSWORD_ONE=
use Illuminate\Http\Request;
use Illuminate\Support\Facades\Route;

/**
 * mysql is the type of relational database connection being replicated - $database
 * mysql_1 is the new connection name - $name
 * ONE is the postfix - $postfix
 */
Route::middleware(['env.database.config:mysql,mysql_1,ONE'])->group(function () {
    Route::post('/user', function (Request $request) {
        // $request->_db_connection === 'mysql_1'
        return \App\Models\User::on('mysql_1')->find(1);
    });
});

Route::post('/user', function (Request $request) {
    // $request->_db_connection === 'mysql_1'
    return \App\Models\User::on('mysql_1')->find(1);
})->middleware('env.database.config:mysql,mysql_1,ONE');
You would not need to add a postfix parameter (ONE) to the middleware for the $postFix variable if you simply set the session value session(config('dynamicdatabaseconfig.session_postfix')). When a postfix parameter has been set, it will be used instead of the session value.
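For illustration, setting the session value first might look like this. This is a hypothetical sketch based on the session key mentioned above; the exact payload the package expects is not shown in its docs, so treat the bare-string value as an assumption:

// Assumption: the package reads the bare postfix string from its session key.
session([config('dynamicdatabaseconfig.session_postfix') => 'ONE']);

// The middleware could then be registered without the postfix parameter:
Route::middleware(['env.database.config:mysql,mysql_1'])->group(function () {
    // routes...
});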
Dynamic.database.config middleware

This middleware fetches database configurations from the database_configurations table within the primary migration database. It utilises a unique $ref variable, which should ideally be human-readable; that way it becomes easier to run the package’s console commands for running migrations. This will also dynamically declare an additional database connection for your Laravel application.
use Ikechukwukalu\Dynamicdatabaseconfig\Models\DatabaseConfiguration;

protected $hidden = [
    'ref',
    'name',
    'database',
    /**
     * Accepts only arrays
     */
    'configuration'
];
$countries = ['nigeria', 'ghana', 'togo', 'kenya'];
$config = \Config::get('database.connections.mysql');

foreach ($countries as $country) {
    $config['database'] = $country . '_db';

    DatabaseConfiguration::firstOrCreate(
        ['ref' => $country],
        [
            'ref' => $country,
            'name' => 'mysql_' . $country,
            'database' => 'mysql',
            'configuration' => $config,
        ]
    );
}
use Illuminate\Http\Request;
use Illuminate\Support\Facades\Route;

/**
 * nigeria is the $ref value
 */
Route::middleware(['dynamic.database.config:nigeria'])->group(function () {
    Route::post('/user', function (Request $request) {
        // $request->_db_connection === 'mysql_nigeria'
        return \App\Models\User::on('mysql_nigeria')->find(1);
    });
});

Route::post('/user', function (Request $request) {
    // $request->_db_connection === 'mysql_nigeria'
    return \App\Models\User::on('mysql_nigeria')->find(1);
})->middleware('dynamic.database.config:nigeria');
You would not need to add a ref parameter (nigeria) to the middleware for the $ref variable if you simply set the session value session(config('dynamicdatabaseconfig.session_ref')). When a ref parameter has been set, it will be used instead of the session value.
By default, the values stored within the configuration field will be hashed, but you can adjust this from the .env file by setting DB_CONFIGURATIONS_HASH=false.
It’s compulsory to first migrate Laravel’s initial database.
This will only migrate files within Laravel’s default migration path database/migrations:

php artisan env:migrate mysql mysql_1 ONE
php artisan dynamic:migrate nigeria
This will only migrate files within the specified migration path database/migrations/folder:

php artisan env:migrate mysql mysql_1 ONE --path=database/migrations/folder
php artisan dynamic:migrate nigeria --path=database/migrations/folder
Running the migrations as displayed below will result in the respective database having the migrated data from migrations within both database/migrations and database/migrations/folder:

php artisan env:migrate mysql mysql_1 ONE
php artisan env:migrate mysql mysql_1 ONE --path=database/migrations/folder
php artisan dynamic:migrate nigeria
php artisan dynamic:migrate nigeria --path=database/migrations/folder
php artisan env:migrate mysql mysql_1 ONE --seed
php artisan env:migrate mysql mysql_1 ONE --seeder=DatabaseSeederOne
php artisan env:migrate mysql mysql_1 ONE --seeder=DatabaseSeederOne --path=database/migrations/folder
php artisan dynamic:migrate nigeria --seed
php artisan dynamic:migrate nigeria --seeder=DatabaseSeederNigeria
php artisan dynamic:migrate nigeria --seeder=DatabaseSeederNigeria --path=database/migrations/folder
php artisan env:migrate mysql mysql_1 ONE --fresh
php artisan env:migrate mysql mysql_1 ONE --fresh --seed
php artisan env:migrate mysql mysql_1 ONE --fresh --seeder=DatabaseSeederOne
php artisan env:migrate mysql mysql_1 ONE --path=database/migrations/folder --fresh
php artisan env:migrate mysql mysql_1 ONE --path=database/migrations/folder --fresh --seeder=DatabaseSeederOne
php artisan dynamic:migrate nigeria --fresh
php artisan dynamic:migrate nigeria --fresh --seed
php artisan dynamic:migrate nigeria --fresh --seeder=DatabaseSeederNigeria
php artisan dynamic:migrate nigeria --path=database/migrations/folder --fresh
php artisan dynamic:migrate nigeria --path=database/migrations/folder --fresh --seeder=DatabaseSeederNigeria
php artisan env:migrate mysql mysql_1 ONE --refresh
php artisan env:migrate mysql mysql_1 ONE --refresh --seed
php artisan env:migrate mysql mysql_1 ONE --refresh --seeder=DatabaseSeederOne
php artisan env:migrate mysql mysql_1 ONE --path=database/migrations/folder --refresh
php artisan env:migrate mysql mysql_1 ONE --path=database/migrations/folder --refresh --seeder=DatabaseSeederOne
php artisan dynamic:migrate nigeria --refresh
php artisan dynamic:migrate nigeria --refresh --seed
php artisan dynamic:migrate nigeria --refresh --seeder=DatabaseSeederNigeria
php artisan dynamic:migrate nigeria --path=database/migrations/folder --refresh
php artisan dynamic:migrate nigeria --path=database/migrations/folder --refresh --seeder=DatabaseSeederNigeria
php artisan env:migrate mysql mysql_1 ONE --rollback
php artisan env:migrate mysql mysql_1 ONE --path=database/migrations/folder --rollback
php artisan dynamic:migrate nigeria --rollback
php artisan dynamic:migrate nigeria --path=database/migrations/folder --rollback
The package also ships migrations, including a database_configurations table, which can be migrated into every extra database created when running the default migrations. Publish them with:

php artisan vendor:publish --tag=ddc-migrations
Publish the config file with:

php artisan vendor:publish --tag=ddc-config
The DDC package is an open-sourced software licensed under the MIT license.
Laravel News Links
https://laracoding.com/wp-content/uploads/2023/06/how-to-store-json-data-in-database-in-laravel-with-example_841.png
Storing JSON data in a Laravel database provides a flexible solution for managing dynamic attributes or unstructured data. In this tutorial, we will walk through the process of storing JSON data in a Laravel database, using a practical example of storing product attributes. By the end of this tutorial, you will have a clear understanding of how to store and retrieve JSON data efficiently using Laravel.
Create a new Laravel project using the following command:
laravel new json-data-storage
Generate a migration file to create the products table using the command:
php artisan make:migration create_products_table --create=products
Inside the generated migration file (database/migrations/YYYY_MM_DD_create_products_table.php), define the table schema with a JSON column for the attributes:
<?php

use Illuminate\Database\Migrations\Migration;
use Illuminate\Database\Schema\Blueprint;
use Illuminate\Support\Facades\Schema;

return new class extends Migration
{
    /**
     * Run the migrations.
     */
    public function up(): void
    {
        Schema::create('products', function (Blueprint $table) {
            $table->id();
            $table->json('attributes');
            $table->timestamps();
        });
    }

    /**
     * Reverse the migrations.
     */
    public function down(): void
    {
        Schema::dropIfExists('products');
    }
};
Run the migration using the following command:
php artisan migrate
Generate a Product model using the command:
php artisan make:model Product
In the Product model (app/Models/Product.php), add the $casts property to specify that the attributes attribute should be treated as JSON:
<?php

namespace App\Models;

use Illuminate\Database\Eloquent\Factories\HasFactory;
use Illuminate\Database\Eloquent\Model;

class Product extends Model
{
    use HasFactory;

    protected $casts = [
        'attributes' => 'json',
    ];
}
To store product attributes as JSON, we can simply use Laravel’s Eloquent model. Create a new Product instance, set the desired attributes as an array, and save it:
use App\Models\Product;

$product = new Product;
$product->attributes = [
    'color' => 'red',
    'size' => 'medium',
    'weight' => 0.5,
];
$product->save();
To retrieve the attributes of a product, you can access the attributes property on the model:
$product = Product::find(1);
$attributes = $product->attributes;
If you want to access a specific attribute within the JSON data, you can do so by using array access:
$color = $product->attributes['color'];
By accessing the attributes property, you can retrieve the JSON data associated with the product and access specific attributes as needed.
To update all the attributes of the product at once, you can assign a new array with all the key-value pairs and use Eloquent’s save() method to update the record directly.
$product = \App\Models\Product::find(1);
$product->attributes = [
    'color' => 'green',
    'size' => 'large',
    'weight' => 2.5,
];
$product->save();
Updating a single value within the JSON data requires a slightly different approach in Laravel and has one important caveat. Directly modifying the attribute like $product->attributes['weight'] = 1.0 and saving the product will result in an ErrorException: "Indirect modification of overloaded property App\Models\Product::$attributes has no effect." To overcome this issue, you can follow the solution below:
$product = \App\Models\Product::find(1);
$attributes = $product->attributes; // create a copy of the array
$attributes['weight'] = 0.6; // modify the value in the copied array
$product->attributes = $attributes; // assign the copied array back to $product->attributes
$product->save();
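Though not covered in the steps above, it is worth knowing that Laravel’s query builder can also filter on values inside a JSON column, using the -> path syntax and whereJsonContains() (both standard Laravel features). A brief sketch using the Product model from this tutorial; the 'tags' key is hypothetical and not part of the example data:

use App\Models\Product;

// Find products whose JSON 'color' key equals 'green'
$greenProducts = Product::where('attributes->color', 'green')->get();

// whereJsonContains is useful when a JSON key holds an array of values
// (assumes a hypothetical 'tags' key containing an array)
$saleProducts = Product::whereJsonContains('attributes->tags', 'sale')->get();

Note that JSON path queries require a database that supports JSON columns natively, such as MySQL 5.7+ or PostgreSQL.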
Storing JSON data in a database using Laravel provides flexibility and convenience when working with dynamic or unstructured data. By following the steps outlined in this tutorial, you have learned how to create the necessary migrations and models, store and retrieve JSON data, and manipulate the JSON data efficiently. This knowledge equips you with the tools to handle various use cases in your Laravel applications and opens up possibilities for efficiently managing complex data structures.
Start implementing JSON data storage in your Laravel projects and unlock the full potential of your application’s data management capabilities. Happy coding!
Laravel News Links
https://www.dataschool.io/content/images/2023/06/ai.jpeg
ChatGPT is amazing, but its knowledge is limited to the data on which it was trained.
Wouldn't it be great if you could use the power of Large Language Models (LLMs) to interact with your own private documents, without uploading them to the web?
The great news is that you can do this TODAY! Let me show you how…
privateGPT is an open source project that allows you to parse your own documents and interact with them using a LLM. You ask it questions, and the LLM will generate answers from your documents.
All using Python, all 100% private, all 100% free!
Below, I'll walk you through how to set it up. (Note that this will require some familiarity with the command line.)
If git is installed on your computer, then navigate to an appropriate folder (perhaps "Documents") and clone the repository (git clone https://github.com/imartinez/privateGPT.git). That will create a "privateGPT" folder, so change into that folder (cd privateGPT).
Alternatively, you could download the repository as a zip file (using the green "Code" button), move the zip file to an appropriate folder, and then unzip it. It will create a folder called "privateGPT-main", which you should rename to "privateGPT". You'll then need to navigate to that folder using the command line.
I highly recommend setting up a virtual environment for this project. My tool of choice is conda, which is available through Anaconda (the full distribution) or Miniconda (a minimal installer), though many other tools are available.
If you're using conda, create an environment called "gpt" that includes the latest version of Python using conda create -n gpt python. Then, activate the environment using conda activate gpt. Use conda list to see which packages are installed in this environment.
(Note: privateGPT requires Python 3.10 or later.)
First, make sure that "privateGPT" is your working directory using pwd. Then, make sure that "gpt" is your active environment using conda info.
Once you've done that, use pip3 install -r requirements.txt to install all of the packages listed in that file into the "gpt" environment. This will take at least a few minutes. Use conda list to see the updated list of installed packages.
(Note: The System Requirements section of the README may be helpful if you run into an installation error.)
In the Environment Setup section of the README, there's a link to an LLM. Currently, that LLM is ggml-gpt4all-j-v1.3-groovy.bin. Download that file (3.5 GB).
Then, create a subfolder of the "privateGPT" folder called "models", and move the downloaded LLM file to "models".
In the "privateGPT" folder, there's a file named "example.env". Make a copy of that file named ".env" using cp example.env .env. Use ls -a to check that it worked.
(Note: This file has nothing to do with your virtual environment.)
Add your private documents to the "source_documents" folder, which is a subfolder of the "privateGPT" folder. Here's a list of the supported file types.
I recommend starting with a small number of documents so that you can quickly verify that the entire process works. (The "source_documents" folder already contains a sample document, "state_of_the_union.txt", so you can actually just start with this document if you like.)
Once again, make sure that "privateGPT" is your working directory using pwd. Then, run python ingest.py to parse the documents. This may run quickly (< 1 minute) if you only added a few small documents, but it can take a very long time with larger documents.
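Conceptually, ingestion splits each document into overlapping chunks, computes a vector embedding for each chunk, and stores those vectors in the "db" folder. Here's a toy illustration of just the chunking idea (this is not privateGPT's actual code, and the parameter values are made up for the example):

```python
def chunk_text(text: str, chunk_size: int = 500, overlap: int = 50) -> list[str]:
    """Split text into fixed-size chunks that overlap, so that sentences
    straddling a chunk boundary still appear intact in at least one chunk."""
    if chunk_size <= overlap:
        raise ValueError("chunk_size must exceed overlap")
    step = chunk_size - overlap  # how far the window advances each time
    chunks = []
    for start in range(0, len(text), step):
        chunk = text[start:start + chunk_size]
        if chunk:
            chunks.append(chunk)
    return chunks

# A 1200-character text with 500-char chunks and 50-char overlap
chunks = chunk_text("a" * 1200, chunk_size=500, overlap=50)
print(len(chunks))  # → 3
```

Each chunk would then be embedded and indexed, so that questions can later be matched against the most relevant pieces of your documents.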
Once this process is done, you'll notice that there's a new subfolder of "privateGPT" called "db".
Run python privateGPT.py to start querying your documents! Once it has loaded, you will see the text Enter a query:
Type in your question and hit enter. After a minute, it will answer your question, followed by a list of source documents that it used for context.
(Keep in mind that the LLM has "knowledge" far outside your documents, so it can answer questions that have nothing to do with the documents you provided to it.)
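Under the hood, answering a question roughly means embedding your query, retrieving the chunks whose stored embeddings are most similar, and handing those chunks to the LLM as context. A toy sketch of the retrieval step (not privateGPT's actual code; the "embeddings" here are made-up 3-dimensional vectors, whereas real embedding models produce hundreds of dimensions):

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy "vector store": chunk text -> pretend embedding
index = {
    "The capital of France is Paris.": [0.9, 0.1, 0.0],
    "Bread is made from flour.":       [0.1, 0.8, 0.2],
}

def top_chunk(query_vec: list[float]) -> str:
    """Return the stored chunk whose embedding is most similar to the query."""
    return max(index, key=lambda chunk: cosine(query_vec, index[chunk]))

# A query vector close to the "Paris" embedding retrieves that chunk
print(top_chunk([0.85, 0.05, 0.1]))  # → The capital of France is Paris.
```

In the real system, the retrieved chunks are what you see listed as "source documents" after each answer.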
When you're done asking questions, just type exit.
This project is less than two months old, and it depends on other libraries which are also quite new! Thus it's highly likely that you will run into bugs, unexplained errors, and crashes.
For example, if you get an "unknown token" error after asking a question, my experience has been that you can ignore the error and you will still get an answer to your question.
On the other hand, if you get a memory-related error, you will need to end the process by hitting "Ctrl + C" on your keyboard. (Then, just restart it by running python privateGPT.py.)
You might be able to find a workaround to a particular problem by searching the Issues in the privateGPT repository.
If you post your own GitHub issue, please be kind! This is an open source project being run by one person in his spare time (for free)!
Want to query more documents? Add them to the "source_documents" folder and re-run python ingest.py.
Want to start over? Delete the "db" folder, and a new "db" folder will be created the next time you ingest documents.
Want to hide the source documents for each answer? Run python privateGPT.py -S instead of python privateGPT.py.
Want to try a different LLM? Download a different LLM to the "models" folder and reference it in the ".env" file.
Want to use the latest version of the code? Given its popularity, this project is likely to evolve rapidly; to stay current, run git pull origin main.
I think it's worth repeating the disclaimer listed at the bottom of the repository:
This is a test project to validate the feasibility of a fully private solution for question answering using LLMs and Vector embeddings. It is not production-ready, and it is not meant to be used in production. The models selection is not optimized for performance, but for privacy; but it is possible to use different models and vector stores to improve performance.
Planet Python
https://s3files.core77.com/blog/images/1414093_81_124043_JrOgOXAJJ.jpg
Interior/exterior decoration, 2020s-style: Govee Curtain Lights are essentially a curtain of hanging LED-embedded strips. Measuring 1.5m (5′) wide and 2m (6.6′) tall, this provides a grid of 520 evenly-spaced LEDs that you can program via an app, allowing you to turn walls or windows into gigantic animated Lite-Brites.
You can either choose from stock animations provided by the company, or create your own. The company also boasts of a "Music Mode" that makes the lights move in accordance with music.
If this is your jam, startup Govee paid the YouTuber below to produce this video on setting up and using the product.
All I can think is how dated this is going to look in a few years; I imagine if this style of decoration catches on, the consumer will demand a much higher level of resolution.
Core77
http://img.youtube.com/vi/vSPhhw-2ShI/0.jpg
I was astonished to read of the wide-ranging implications of a new laser weeding technology now available to farmers.
Carbon Robotics is now shipping its LaserWeeder to farms around the United States; the machine uses the power of lasers and robotics to rid fields of weeds … The LaserWeeder can eliminate over 200,000 weeds per hour and offer up to 80% cost savings in weed control.
. . .
The LaserWeeder is a 20-foot-wide unit comprised of three rows of 10 lasers that are pulled behind a tractor.
Thirty lasers are at work as the unit travels across a field destroying weeds "with millimeter accuracy, skipping the plant and killing the weed," said Mikesell.
The LaserWeeder "does the equivalent work of about 70 people," he continued.
. . .
The technology "makes for a much more consistent growing process and adds a bunch of health to your yield. You get big yield improvements because you’re not damaging the crops with herbicides."
There’s more at the link.
Here’s a publicity video from Carbon Robotics showing the LaserWeeder in action.
The economic implications for farmers and farm workers are mind-boggling.
Just the thought of no longer having to spend hours weeding in the back yard is enormously tempting. This will bear watching.
Peter
Bayou Renaissance Man
https://opengraph.githubassets.com/f6873d9fcbab9cd1f811527eeed98ab28e08f5ef751780aa0ad435c91f426320/laracraft-tech/laravel-schema-rules
Automatically generate basic Laravel validation rules based on your database table schema!
Use these as a starting point to fine-tune and optimize your validation rules as needed.
You can install the package via composer:
composer require laracraft-tech/laravel-schema-rules --dev
Then publish the config file with:
php artisan vendor:publish --tag="schema-rules-config"
Let’s say you’ve migrated this fictional table:
Schema::create('persons', function (Blueprint $table) {
    $table->id();
    $table->string('first_name', 100);
    $table->string('last_name', 100);
    $table->string('email');
    $table->foreignId('address_id')->constrained();
    $table->text('bio')->nullable();
    $table->enum('gender', ['m', 'f', 'd']);
    $table->date('birth');
    $table->year('graduated');
    $table->float('body_size');
    $table->unsignedTinyInteger('children_count')->nullable();
    $table->integer('account_balance');
    $table->unsignedInteger('net_income');
    $table->boolean('send_newsletter')->nullable();
});
Now if you run:
php artisan schema:generate-rules persons
You’ll get:
Schema-based validation rules for table "persons" have been generated!
Copy & paste these to your controller validation or form request or where ever your validation takes place:
[
'first_name' => ['required', 'string', 'min:1', 'max:100'],
'last_name' => ['required', 'string', 'min:1', 'max:100'],
'email' => ['required', 'string', 'min:1', 'max:255'],
'address_id' => ['required', 'exists:addresses,id'],
'bio' => ['nullable', 'string', 'min:1'],
'gender' => ['required', 'string', 'in:m,f,d'],
'birth' => ['required', 'date'],
'graduated' => ['required', 'integer', 'min:1901', 'max:2155'],
'body_size' => ['required', 'numeric'],
'children_count' => ['nullable', 'integer', 'min:0', 'max:255'],
'account_balance' => ['required', 'integer', 'min:-2147483648', 'max:2147483647'],
'net_income' => ['required', 'integer', 'min:0', 'max:4294967295'],
'send_newsletter' => ['nullable', 'boolean']
]
As you may have noticed, the float column body_size just gets generated to ['required', 'numeric']. Proper rules for float, decimal, and double are not yet implemented!
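Once generated, the rules drop straight into any standard Laravel validation call. A typical usage sketch (the controller method is illustrative, not from the package's docs), using the rules generated above:

public function store(Request $request)
{
    $validated = $request->validate([
        'first_name' => ['required', 'string', 'min:1', 'max:100'],
        'last_name'  => ['required', 'string', 'min:1', 'max:100'],
        'email'      => ['required', 'string', 'min:1', 'max:255'],
        // ...remaining generated rules...
    ]);

    // $validated now contains only the fields that passed validation
}

Since the generated rules are a plain array, you can freely tighten them afterwards, for example swapping the email rule for ['required', 'email', 'max:255'].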
You can also explicitly specify the columns:
php artisan schema:generate-rules persons --columns first_name,last_name,email
Which gives you:
Schema-based validation rules for table "persons" have been generated!
Copy & paste these to your controller validation or form request or where ever your validation takes place:
[
'first_name' => ['required', 'string', 'min:1', 'max:100'],
'last_name' => ['required', 'string', 'min:1', 'max:100'],
'email' => ['required', 'string', 'min:1', 'max:255']
]
Optionally, you can add a --create-request or -c flag, which will create a form request class with the generated rules for you!
# creates app/Http/Requests/StorePersonRequest.php (store request is the default)
php artisan schema:generate-rules persons --create-request

# creates/overwrites app/Http/Requests/StorePersonRequest.php
php artisan schema:generate-rules persons --create-request --force

# creates app/Http/Requests/UpdatePersonRequest.php
php artisan schema:generate-rules persons --create-request --file UpdatePersonRequest

# creates app/Http/Requests/Api/V1/StorePersonRequest.php
php artisan schema:generate-rules persons --create-request --file Api\\V1\\StorePersonRequest

# creates/overwrites app/Http/Requests/Api/V1/StorePersonRequest.php (using shortcuts)
php artisan schema:generate-rules persons -cf --file Api\\V1\\StorePersonRequest
Currently, the supported database drivers are MySQL, PostgreSQL, and SQLite.
Please note that since each driver supports different data types and range specifications, the validation rules generated by this package may vary depending on the database driver you are using.
Please see CHANGELOG for more information on what has changed recently.
Please see CONTRIBUTING for details.
Please review our security policy on how to report security vulnerabilities.
The MIT License (MIT). Please see License File for more information.
Laravel News Links
Learn how to manage your S3 bucket for Laravel.
Laravel News Links