MySQL is an outstanding database for online transaction processing. With suitable hardware, it is easy to execute more than 1M queries per second and handle tens of thousands of simultaneous connections. Many of the most demanding web applications on the planet are built on MySQL. With capabilities like that, why would MySQL users need anything else?
Well, analytic queries for starters. Analytic queries answer important business questions like finding the number of unique visitors to a website over time or figuring out how to increase online purchases. They scan large volumes of data and compute aggregates, including sums, averages, and much more complex calculations besides. The results are invaluable but can bog down online transaction processing on MySQL.
Fortunately, there’s ClickHouse: a powerful analytic database that pairs well with MySQL. Altinity is working closely with our partner Percona to help users add ClickHouse easily to existing MySQL applications. You can read more about our partnership in our recent press release as well as about our joint MySQL-to-ClickHouse solution.
This article provides tips on how to recognize when MySQL is overburdened with analytics and can benefit from ClickHouse’s unique capabilities. We then show three important patterns for integrating MySQL and ClickHouse. The result is more powerful, cost-efficient applications that leverage the strengths of both databases.
Let’s start by digging into some obvious signs that your MySQL database is overburdened with analytics processing.
Tables that drive analytics tend to be very large, rarely have updates, and may also have many columns. Typical examples are web access logs, marketing campaign events, and monitoring data. If you see a few outlandishly large tables of immutable data mixed with smaller, actively updated transaction processing tables, it’s a good sign your users may benefit from adding an analytic database.
Analytic processing produces aggregates, which are numbers that summarize large datasets to help users identify patterns. Examples include unique site visitors per week, average page bounce rates, or counts of web traffic sources. MySQL may take minutes or even hours to compute such values. To improve performance it is common to add complex batch processes that precompute aggregates. If you see such aggregation pipelines, it is often an indication that adding an analytic database can reduce the labor of operating your application as well as deliver faster and more timely results for users.
A final clue is the in-depth questions you don’t ask about MySQL-based applications because it is too hard to get answers. Why don’t users complete purchases on eCommerce sites? Which strategies for in-game promotions have the best payoff in multi-player games? Answering these questions directly from MySQL transaction data often requires substantial time and external programs. It’s sufficiently difficult that most users simply don’t bother. Coupling MySQL with a capable analytic database may be the answer.
MySQL is an outstanding database for transaction processing. Yet the features that make MySQL work well for transactions (storing data in rows, single-threaded queries, and optimization for high concurrency) are exactly the opposite of those needed to run analytic queries that compute aggregates on large datasets.
ClickHouse on the other hand is designed from the ground up for analytic processing. It stores data in columns, has optimizations to minimize I/O, computes aggregates very efficiently, and parallelizes query processing. ClickHouse can answer complex analytic questions almost instantly in many cases, which allows users to sift through data quickly. Because ClickHouse calculates aggregates so efficiently, end users can pose questions in many ways without help from application designers.
These are strong claims. To understand them it is helpful to look at how ClickHouse differs from MySQL. Here is a diagram that illustrates how each database pulls in data for a query that reads all values of three columns of a table.
MySQL stores table data by rows. It must read the whole row to get data for just three columns. MySQL production systems also typically do not use compression, as it has performance downsides for transaction processing. Finally, MySQL uses a single thread for query processing and cannot parallelize work.
By contrast, ClickHouse reads only the columns referenced in queries. Storing data in columns enables ClickHouse to compress data at levels that often exceed 90%. Finally, ClickHouse stores tables in parts and scans them in parallel.
The amount of data you read, how greatly it is compressed, and the ability to parallelize work make an enormous difference. Here’s a picture that illustrates the reduction in I/O for a query reading three columns.
MySQL and ClickHouse give the same answer. To get it, MySQL reads 59 GB of data, whereas ClickHouse reads only 21 MB. That’s close to 3000 times less I/O, hence far less time to access the data. ClickHouse also parallelizes query execution very well, further improving performance. It is little wonder that analytic queries run hundreds or even thousands of times faster on ClickHouse than on MySQL.
ClickHouse also has a rich set of features to run analytic queries quickly and efficiently. These include a large library of aggregation functions, the use of SIMD instructions where possible, the ability to read data from Kafka event streams, and efficient materialized views, just to name a few.
There is a final ClickHouse strength: excellent integration with MySQL. ClickHouse speaks the MySQL wire protocol, can attach entire remote MySQL schemas through its MySQL database engine, and can query individual MySQL tables with the mysql() table function.
For all of these reasons, ClickHouse is a natural choice to extend MySQL capabilities for analytic processing.
Just as ClickHouse can add useful capabilities to MySQL, it is important to see that MySQL adds useful capabilities to ClickHouse. ClickHouse is outstanding for analytic processing, but there are a number of things it does not do well: fast single-row updates and deletes, high-concurrency transactional workloads, and the ACID guarantees that transaction processing depends on.
In fact, MySQL and ClickHouse are highly complementary. Users get the most powerful applications when ClickHouse and MySQL are used together.
There are three main ways to integrate MySQL data with ClickHouse analytic capabilities. They build on each other.
ClickHouse can run queries on MySQL data using the MySQL database engine, which makes MySQL data appear as local tables in ClickHouse. Enabling it is as simple as executing a single SQL command like the following on ClickHouse:
CREATE DATABASE sakila_from_mysql ENGINE = MySQL('mydb:3306', 'sakila', 'user', 'password')
Here is a simple illustration of the MySQL database engine in action.
The MySQL database engine makes it easy to explore MySQL tables and make copies of them in ClickHouse. ClickHouse queries on remote data may even run faster than in MySQL! This is because ClickHouse can sometimes parallelize queries even on remote data. It also offers more efficient aggregation once it has the data in hand.
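For example, once the remote database is attached, you can query its tables directly or snapshot one into a local table. A quick sketch against the standard sakila sample schema (the local table name is illustrative):

SELECT rating, count() AS films
FROM sakila_from_mysql.film
GROUP BY rating;

CREATE TABLE film_local
ENGINE = MergeTree
ORDER BY film_id
AS SELECT * FROM sakila_from_mysql.film;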
Migrating large tables with immutable records permanently to ClickHouse can give vastly accelerated analytic query performance while simultaneously unloading MySQL. The following diagram illustrates how to migrate a table containing web access logs from MySQL to ClickHouse.
On the ClickHouse side, you’ll normally use the MergeTree table engine or one of its variants such as ReplicatedMergeTree. MergeTree is the go-to engine for big data on ClickHouse. Three features in particular will help you get the most out of it: partitioning (PARTITION BY) to prune irrelevant data at query time, a well-chosen ORDER BY sort key that acts as a sparse index, and column compression codecs that shrink I/O even further, as shown in the sketch below.
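Here is a sketch of a MergeTree table for web access logs that uses all three features; the column names and codec choices are illustrative:

CREATE TABLE access_logs
(
    time    DateTime CODEC(Delta, ZSTD),
    user_id UInt64,
    url     String,
    status  UInt16
)
ENGINE = MergeTree
PARTITION BY toYYYYMM(time)  -- prune whole months at query time
ORDER BY (user_id, time);    -- the sort key doubles as a sparse index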
These features can make an enormous difference in performance. We cover them, along with more performance tips, in Altinity videos as well as blog articles.
The ClickHouse MySQL database engine can also be very useful in this scenario. It enables ClickHouse to “see” and select data from remote transaction tables in MySQL. Your ClickHouse queries can join local tables on transaction data whose natural home is MySQL. Meanwhile, MySQL handles transactional changes efficiently and safely.
Migrating tables to ClickHouse generally proceeds along these lines: create a matching MergeTree schema in ClickHouse, copy the data across (for example, with a fast dump/load tool like mydumper or a direct SELECT through the MySQL database engine), verify row counts and query results, and then point the application’s analytic queries at ClickHouse. We’ll use the example of the access log shown above.
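With the MySQL database engine from the previous section attached (here as a hypothetical logs_from_mysql database), the copy step can be a single statement, assuming compatible column types:

INSERT INTO access_logs
SELECT * FROM logs_from_mysql.access_logs;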
Migration can take as little as a few days but it’s more common to take weeks to a couple of months in large systems. This helps ensure that everything is properly tested and the roll-out proceeds smoothly.
The other way to extend MySQL is to mirror the data in ClickHouse and keep it up to date using replication. Mirroring allows users to run complex analytic queries on transaction data without (a) changing MySQL and its applications or (b) affecting the performance of production systems.
Here are the working parts of a mirroring setup.
ClickHouse has a built-in way to handle mirroring: the experimental MaterializedMySQL database engine, which reads binlog records directly from the MySQL primary and propagates data into ClickHouse tables. The approach is simple but is not yet recommended for production use. It may eventually be important for 1-to-1 mirroring cases but needs additional work before it can be widely used.
Altinity has developed a new approach to replication using Debezium, Kafka-compatible event streams, and the Altinity Sink Connector for ClickHouse. The mirroring configuration looks like the following.
The externalized approach has a number of advantages. They include working with current ClickHouse releases, taking advantage of fast dump/load programs like mydumper or direct SELECT using MySQL database engine, support for mirroring into replicated tables, and simple procedures to add new tables or reset old ones. Finally, it can extend to multiple upstream MySQL systems replicating to a single ClickHouse cluster.
ClickHouse can mirror data from MySQL thanks to the unique capabilities of the ReplacingMergeTree table engine. It has an efficient method of dealing with inserts, updates, and deletes that is ideally suited for use with replicated data. As mentioned already, ClickHouse cannot update individual rows easily, but it inserts data extremely quickly and has an efficient process for merging rows in the background. ReplacingMergeTree builds on these capabilities to handle changes to data in a “ClickHouse way.”
Replicated table rows use version and sign columns to represent the version of changed rows as well as whether the change is an insert or delete. The ReplacingMergeTree will only keep the last version of a row, which may in fact be deleted. The sign column lets us apply another ClickHouse trick to make those deleted rows inaccessible. It’s called a row policy. Using row policies we can make any row where the sign column is negative disappear.
Here’s an example of ReplacingMergeTree in action that combines the effect of the version and sign columns to handle mutable data.
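A minimal sketch of such a table and its row policy follows; the table name, column names, and policy name are illustrative:

CREATE TABLE events_mirror
(
    id       UInt64,
    message  String,
    _version UInt64,  -- incremented on every replicated change
    _sign    Int8     -- positive for insert/update, negative for delete
)
ENGINE = ReplacingMergeTree(_version)
ORDER BY id;

-- Hide rows whose latest version is a delete:
CREATE ROW POLICY delete_mask ON events_mirror
FOR SELECT USING _sign > 0 TO ALL;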
Mirroring data into ClickHouse may appear more complex than migration but in fact is relatively straightforward because there is no need to change MySQL schema or applications and the ClickHouse schema generation follows a cookie-cutter pattern. The implementation process consists of the following steps.
At this point, users are free to start running analytics or build additional applications on ClickHouse whilst changes replicate continuously from MySQL.
MySQL to ClickHouse migration is an area of active development both at Altinity as well as the ClickHouse community at large. Improvements fall into three general categories.
Dump/load utilities – Altinity is working on a new utility that reduces schema creation and data transfer to a single command. We will have more to say on this in a future blog article.
Replication – Altinity is sponsoring the Sink Connector for ClickHouse, which automates high-speed replication, including monitoring as well as integration into Altinity.Cloud. Our goal is similarly to reduce replication setup to a single command.
ReplacingMergeTree – Currently users must include the FINAL keyword on table names to force the merging of row changes. It is also necessary to add a row policy to make deleted rows disappear automatically. There are pull requests in progress to add a MergeTree property to add FINAL automatically in queries as well as make deleted rows disappear without a row policy. Together they will make handling of replicated updates and deletes completely transparent to users.
We are also watching carefully for improvements on MaterializedMySQL as well as other ways to integrate ClickHouse and MySQL efficiently. You can expect further blog articles in the future on these and related topics. Stay tuned!
ClickHouse is a powerful addition to existing MySQL applications. Large tables with immutable data, complex aggregation pipelines, and unanswered questions on MySQL transactions are clear signs that integrating ClickHouse is the next step to provide fast, cost-efficient analytics to users.
Depending on your application, it may make sense to mirror data onto ClickHouse using replication or even migrate some tables into ClickHouse. ClickHouse already integrates well with MySQL and better tooling is arriving quickly. Needless to say, all Altinity contributions in this area are open source, released under Apache 2.0 license.
The most important lesson is to think in terms of MySQL and ClickHouse working together, not as one being a replacement for the other. Each database has unique and enduring strengths. The best applications will build on these to provide users with capabilities that are faster and more flexible than using either database alone.
Percona, well-known experts in open source databases, partners with Altinity to deliver robust analytics for MySQL applications. If you would like to learn more about MySQL integration with ClickHouse, feel free to contact us or leave a message on our forum at any time.
Percona Database Performance Blog
A Laravel package to sign documents and optionally generate certified PDFs associated with an Eloquent model.

Laravel Sign Pad requires PHP 8.0 or 8.1 and Laravel 8 or 9.
You can install the package via composer:
composer require creagia/laravel-sign-pad
Publish the config and the migration files and migrate the database
php artisan sign-pad:install
Publish the .js assets:
php artisan vendor:publish --tag=sign-pad-assets
This will copy the package assets inside the public/vendor/sign-pad/ folder.
In the published config file config/sign-pad.php, you’ll be able to configure many important aspects of the package, like the route name where users will be redirected after signing the document, or where you want to store the signed documents. Notice that the redirect_route_name route will receive a $uuid parameter with the UUID of the signature model in the database.
Add the RequiresSignature trait and implement the CanBeSigned contract on the model you would like to make signable:
<?php

namespace App\Models;

use Creagia\LaravelSignPad\Concerns\RequiresSignature;
use Creagia\LaravelSignPad\Contracts\CanBeSigned;
use Illuminate\Database\Eloquent\Model;

class MyModel extends Model implements CanBeSigned
{
    use RequiresSignature;
}
If you want to generate PDF documents with the signature, you should implement the ShouldGenerateSignatureDocument interface. Define your document template with the getSignatureDocumentTemplate method.
<?php

namespace App\Models;

use Creagia\LaravelSignPad\Concerns\RequiresSignature;
use Creagia\LaravelSignPad\Contracts\CanBeSigned;
use Creagia\LaravelSignPad\Contracts\ShouldGenerateSignatureDocument;
use Creagia\LaravelSignPad\Templates\BladeDocumentTemplate;
use Creagia\LaravelSignPad\Templates\PdfDocumentTemplate;
use Illuminate\Database\Eloquent\Model;

class MyModel extends Model implements CanBeSigned, ShouldGenerateSignatureDocument
{
    use RequiresSignature;

    public function getSignatureDocumentTemplate(): SignatureDocumentTemplate
    {
        return new SignatureDocumentTemplate(
            signaturePage: 1,
            signatureX: 20,
            signatureY: 25,
            outputPdfPrefix: 'document', // optional
            // template: new BladeDocumentTemplate('pdf/my-pdf-blade-template'), // Uncomment for Blade template
            // template: new PdfDocumentTemplate(storage_path('pdf/template.pdf')), // Uncomment for PDF template
        );
    }
}
A $model object will be automatically injected into the Blade template, so you will be able to access all the needed properties of the model.
At this point, all you need is to create the form with the sign pad canvas in your template. For the route of the form, you have to call the method getSignatureUrl() from the instance of the model you prepared before:
@if (!$myModel->hasBeenSigned())
    <form action="{{ $myModel->getSignatureUrl() }}" method="POST">
        @csrf
        <div style="text-align: center">
            <x-creagia-signature-pad />
        </div>
    </form>
    {{-- The asset path below assumes the assets published in the vendor:publish step --}}
    <script src="{{ asset('vendor/sign-pad/sign-pad.min.js') }}"></script>
@endif
You can retrieve your model’s signature using the Eloquent relation $myModel->signature. After that, you can use the getSignatureImagePath() method on the relation to get the signature image, and the getSignedDocumentPath() method to get the generated PDF document:

echo $myModel->signature->getSignatureImagePath();
echo $myModel->signature->getSignedDocumentPath();
From the same template, you can change the look of the component by passing some properties:
An example with an app using Tailwind would be:
<x-creagia-signature-pad
    border-color="#eaeaea"
    pad-classes="rounded-xl border-2"
    button-classes="bg-gray-100 px-4 py-2 rounded-xl mt-4"
    clear-name="Clear"
    submit-name="Submit"
/>
To certify your signature with TCPDF, you will have to create your own SSL certificate with OpenSSL. Otherwise, you can find the TCPDF demo certificate here: TCPDF Demo Certificate.
To create your own certificate, use this command:
cd storage/app
openssl req -x509 -nodes -days 365000 -newkey rsa:1024 -keyout certificate.crt -out certificate.crt
More information is available in the TCPDF documentation.
After generating the certificate, you’ll have to change the value of the certify_documents variable in the config/sign-pad.php file and set it to true.
When certify_documents is set to true, the package will look for the file at the certificate_file path to sign the documents. Feel free to modify the location or the name of the certificate file by changing this value.
Inside the same config/sign-pad.php, we encourage you to fill in all the fields of the certificate_info array to make the certificate more specific.
Finally, you can change the certificate type by modifying the value of the cert_type variable (2 by default). You can find more information about certificate types in the TCPDF setSignature reference.
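Putting the certification settings together, the relevant part of config/sign-pad.php might look like the following sketch. Only the keys named above are shown, the values are illustrative, and the certificate_info fields are assumed to mirror TCPDF’s setSignature info array:

'certify_documents' => true,
'certificate_file' => storage_path('app/certificate.crt'),
'certificate_info' => [
    'Name' => 'Your name',
    'Location' => 'Your office',
    'Reason' => 'Document signature',
    'ContactInfo' => 'you@example.com',
],
'cert_type' => 2,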
Laravel News Links
Updated on July 29th, 2020.
When a new user clicks on the Sign up button of an app, he or she usually gets a confirmation email with an activation link (see examples here). This is needed to make sure that the user owns the email address entered during the sign-up. After the click on the activation link, the user is authenticated for the app.
From the user’s standpoint, the email verification process is quite simple. From the developer’s perspective, things are much trickier unless your app is built with Laravel. Those who use Laravel 5.7+ have the user email verification available out-of-the-box. For earlier releases of the framework, you can use a dedicated package to add email verification to your project. In this article, we’ll touch upon each solution you can choose.
Since email verification requires one to send emails in Laravel, let’s create a basic project with all the stuff needed for that. Here is the first command to begin with:
composer create-project --prefer-dist laravel/laravel app
Now, let’s create a database using the mysql client and then configure the .env file accordingly:
DB_CONNECTION=mysql
DB_HOST=127.0.0.1
DB_PORT=3306
DB_DATABASE=DB-laravel
DB_USERNAME=root
DB_PASSWORD=root
Run the migrate command to create tables for users, password resets, and failed jobs:
php artisan migrate
Since our Laravel app will send a confirmation email, we need to set up the email configuration in the .env file.
For email testing purposes, we’ll use Mailtrap Email Sandbox, which captures SMTP traffic from staging and allows developers to debug emails without the risk of spamming users.
The Email Sandbox works through Laravel’s standard SMTP driver. All you need to do is sign up and add your credentials to .env, as follows:
MAIL_MAILER=smtp
MAIL_HOST=smtp.mailtrap.io
MAIL_PORT=2525
MAIL_USERNAME=<********> //Your Mailtrap username
MAIL_PASSWORD=<********> //Your Mailtrap password
MAIL_ENCRYPTION=tls
For more on Mailtrap features and functions, read the Mailtrap Getting Started Guide.
In Laravel, you can scaffold the UI for registration, login, and forgot password using the php artisan make:auth command. However, it was removed in Laravel 6. In the latest releases of the framework, a separate package called laravel/ui is responsible for the login and registration scaffolding, with React, Vue, jQuery, and Bootstrap layouts. After you install the package, you can use the php artisan ui vue --auth command to scaffold UI with Vue, for example.
The MustVerifyEmail contract

The MustVerifyEmail contract is a feature that allows you to send email verification in Laravel by adding a few lines of code to the following files:
Implement the MustVerifyEmail contract in the User model:
<?php
namespace App;
use Illuminate\Notifications\Notifiable;
use Illuminate\Contracts\Auth\MustVerifyEmail;
use Illuminate\Foundation\Auth\User as Authenticatable;
class User extends Authenticatable implements MustVerifyEmail
{
use Notifiable;
protected $fillable = [
'name', 'email', 'password',
];
protected $hidden = [
'password', 'remember_token',
];
}
Add the email/verify and email/resend routes to the app:
Route::get('/', function () {
return view('welcome');
});
Auth::routes(['verify' => true]);
Route::get('/home', 'HomeController@index')->name('home');
Add the verified and auth middlewares:
<?php
namespace App\Http\Controllers;
use Illuminate\Http\Request;
class HomeController extends Controller
{
public function __construct()
{
$this->middleware(['auth','verified']);
}
public function index()
{
return view('home');
}
}
Now you can test the app.
And that’s what you’ll see in the Mailtrap Demo inbox:
In the screenshots above, the default name of the app, Laravel, is used as the sender’s name. You can update the name in the .env file:
APP_NAME=<Name of your app>
To customize notifications, you need to override the sendEmailVerificationNotification method of the App\User class. It is a default method which calls the notify method to notify the user after the sign-up.
For more on sending notifications in Laravel, read our dedicated blog post.
To override sendEmailVerificationNotification, create a custom Notification and pass it as a parameter to $this->notify() within sendEmailVerificationNotification in the User model, as follows:
public function sendEmailVerificationNotification()
{
$this->notify(new \App\Notifications\CustomVerifyEmail);
}
Now, in the created notification, CustomVerifyEmail, define the way to handle the verification. For example, you can use a custom route to send the email.
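A minimal sketch of such a notification extends Laravel’s built-in VerifyEmail and points the signed link at a custom route (the custom.verify route name is hypothetical; you would define it yourself):

<?php

namespace App\Notifications;

use Illuminate\Auth\Notifications\VerifyEmail;
use Illuminate\Support\Facades\URL;

class CustomVerifyEmail extends VerifyEmail
{
    // Build the signed verification URL from a custom route
    // instead of the default verification.verify route.
    protected function verificationUrl($notifiable)
    {
        return URL::temporarySignedRoute(
            'custom.verify', // hypothetical route name
            now()->addMinutes(60),
            ['id' => $notifiable->getKey(), 'hash' => sha1($notifiable->getEmailForVerification())]
        );
    }
}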
The MustVerifyEmail contract is a great thing to use. However, you may need to take over control and manually verify email addresses without sending emails. Why would anyone do so? Reasons may include a need to create and add system users that have no accessible email addresses, to import a list of already-verified email addresses into a migrated app, and others.

Otherwise, each manually created user will be shown the email verification notice when signing in.
The problem lies in the timestamp in the email verification column (email_verified_at) of the users table. When creating users manually, you need to validate them by setting a valid timestamp; in this case, there will be no email verification requests. Here is how you can do this:
The markEmailAsVerified() method allows you to verify the user after it’s been created. Check out the following example:
$user = User::create([
'name' => 'John Doe',
'email' => 'john.doe@example.com',
'password' => Hash::make('password')
]);
$user->markEmailAsVerified();
The forceCreate() method can do the same but in a slightly different way:
$user = User::forceCreate([
'name' => 'John Doe',
'email' => 'john.doe@example.com',
'password' => Hash::make('password'),
'email_verified_at' => now() //Carbon instance
]);
The most obvious way is to set a valid timestamp in the email_verified_at column. To do this, you need to add the column to the $fillable array in the User model. For example, like this:
protected $fillable = [
'name', 'email', 'password', 'email_verified_at',
];
After that, you can use the email_verified_at value within the create method when creating a user:
$user = User::create([
'name' => 'John Doe',
'email' => 'john.doe@example.com',
'password' => Hash::make('password'),
'email_verified_at' => now() //Carbon instance
]);
The idea of queuing is to defer the processing of particular tasks, in our case email sending, until a later time. This can speed up processing if your app sends large amounts of emails. It would be useful to implement email queues for the built-in Laravel email verification feature. The simplest way to do that is as follows:
Create a new notification, CustomVerifyEmailQueued, which extends the existing VerifyEmail. The new notification should also implement the ShouldQueue contract. This will enable queuing. Here is how it looks:

<?php

namespace App\Notifications;
use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Auth\Notifications\VerifyEmail;
class CustomVerifyEmailQueued extends VerifyEmail implements ShouldQueue
{
use Queueable;
}
Then, override sendEmailVerificationNotification in the User model to send the queued notification:

public function sendEmailVerificationNotification()
{
$this->notify(new \App\Notifications\CustomVerifyEmailQueued);
}
We did not touch upon configuration of the queue driver here, which is “sync” by default without actual queuing. If you need some insight on that, check out this Guide to Laravel Email Queues.
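For reference, a minimal non-sync setup with the database queue driver is a sketch like this: set the driver in .env, create the jobs table, and run a worker (all standard Laravel commands; see the guide above for details):

QUEUE_CONNECTION=database

php artisan queue:table
php artisan migrate
php artisan queue:work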
The laravel-confirm-email package

The laravel-confirm-email package is an alternative way to set up email verification in Laravel 5.8 and older versions. However, it also works with the newest releases. You’re likely to go with it if you want to customize email verification in Laravel. For example, the package allows you to set up your own confirmation messages and change all possible redirect routes. Let’s see how it works.
Install the laravel-confirm-email package, as follows:
composer require beyondcode/laravel-confirm-email
You also need to add two fields to your users table: confirmed_at and confirmation_code. For this, publish the migration and the configuration file, as follows:
php artisan vendor:publish --provider="BeyondCode\EmailConfirmation\EmailConfirmationServiceProvider"
Run the migrations after:
php artisan migrate
We need to replace the default traits with those provided by laravel-confirm-email in the following files:
In app\Http\Controllers\Auth\LoginController.php, replace the default trait

use Illuminate\Foundation\Auth\AuthenticatesUsers;

with the laravel-confirm-email trait:

use BeyondCode\EmailConfirmation\Traits\AuthenticatesUsers;

In app\Http\Controllers\Auth\RegisterController.php, replace the default trait

use Illuminate\Foundation\Auth\RegistersUsers;

with the laravel-confirm-email trait:

use BeyondCode\EmailConfirmation\Traits\RegistersUsers;

In app\Http\Controllers\Auth\ForgotPasswordController.php, replace the default trait

use Illuminate\Foundation\Auth\SendsPasswordResetEmails;

with the laravel-confirm-email trait:

use BeyondCode\EmailConfirmation\Traits\SendsPasswordResetEmails;
Add the routes to app/routes/web.php:
Route::name('auth.resend_confirmation')->get('/register/confirm/resend', 'Auth\RegisterController@resendConfirmation');
Route::name('auth.confirm')->get('/register/confirm/{confirmation_code}', 'Auth\RegisterController@confirm');
To set up flash messages that show up after a user clicks on the verification link, append the code to the following files:
resources\views\auth\login.blade.php
@if (session('confirmation'))
<div class="alert alert-info" role="alert">
{!! session('confirmation') !!}
</div>
@endif
@if ($errors->has('confirmation'))
<div class="alert alert-danger" role="alert">
{!! $errors->first('confirmation') !!}
</div>
@endif
resources\views\auth\passwords\email.blade.php
@if ($errors->has('confirmation'))
<div class="alert alert-danger" role="alert">
{!! $errors->first('confirmation') !!}
</div>
@endif
Update the resources/lang/vendor/confirmation/en/confirmation.php file if you want to use custom error/confirmation messages:
<?php
return [
'confirmation_subject' => 'Email verification',
'confirmation_subject_title' => 'Verify your email',
'confirmation_body' => 'Please verify your email address in order to access this website. Click on the button below to verify your email.',
'confirmation_button' => 'Verify now',
'not_confirmed' => 'The given email address has not been confirmed. <a href=":resend_link">Resend confirmation link.</a>',
'not_confirmed_reset_password' => 'The given email address has not been confirmed. To reset the password you must first confirm the email address. <a href=":resend_link">Resend confirmation link.</a>',
'confirmation_successful' => 'You successfully confirmed your email address. Please log in.',
'confirmation_info' => 'Please confirm your email address.',
'confirmation_resent' => 'We sent you another confirmation email. You should receive it shortly.',
];
You can modify all possible redirect routes (the default value is route('login')) in the registration controller. Keeping in mind that the app was automatically bootstrapped, the registration controller is at app/Http/Controllers/Auth/RegisterController.php. Just include the following values either as properties or as methods returning the route/URL string (see the sketch after this list):

redirectConfirmationTo – opened after the user completes the confirmation (opens the link from the email)
redirectAfterRegistrationTo – opened after the user submits the registration form (it’s the one with “Go and verify your email now”)
redirectAfterResendConfirmationTo – opened when the user asks to resend the email

By redefining the redirect routes, you can change not only the flash message but also the status page shown to the user.
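For example, set as properties on the registration controller (a sketch; the route values are illustrative):

protected $redirectConfirmationTo = '/login';
protected $redirectAfterRegistrationTo = '/registration-complete';

Or as a method returning the route string:

protected function redirectAfterResendConfirmationTo()
{
    return route('login');
}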
The laravel-email-verification package

The laravel-email-verification package has been deemed obsolete since the release of MustVerifyEmail. Nevertheless, you can still use the package to handle email verification in older Laravel versions (starting from 5.4).
Install the package, as follows:
composer require josiasmontag/laravel-email-verification
Register the service provider in the configuration file (config/app.php):
'providers' => [
Lunaweb\EmailVerification\Providers\EmailVerificationServiceProvider::class,
],
In Laravel 5.5, this should have been done automatically, but it did not work for us (version 5.5.48).
You need to update the users table with a verified column. For this, run the package migration:
php artisan migrate --path="/vendor/josiasmontag/laravel-email-verification/database/migrations"
If you want to customize the migration, use the following command:
php artisan vendor:publish --provider="Lunaweb\EmailVerification\Providers\EmailVerificationServiceProvider" --tag="migrations"
And run the migrations after:
php artisan migrate
CanVerifyEmail is a trait to be used in the User model. You can customize this trait to change the activation email address.
use Illuminate\Foundation\Auth\User as Authenticatable;
use Lunaweb\EmailVerification\Traits\CanVerifyEmail;
use Lunaweb\EmailVerification\Contracts\CanVerifyEmail as CanVerifyEmailContract;
class User extends Authenticatable implements CanVerifyEmailContract
{
use CanVerifyEmail;
// ...
}
VerifiesEmail is a trait for RegisterController. To let authenticated users access the verify routes, update the middleware exception:
use Lunaweb\EmailVerification\Traits\VerifiesEmail;
class RegisterController extends Controller
{
use RegistersUsers, VerifiesEmail;
public function __construct()
{
$this->middleware('guest', ['except' => ['verify', 'showResendVerificationEmailForm', 'resendVerificationEmail']]);
$this->middleware('auth', ['only' => ['showResendVerificationEmailForm', 'resendVerificationEmail']]);
}
// ...
}
The package listens for the Illuminate\Auth\Events\Registered event and sends the verification email. Therefore, you don’t have to override register(). If you want to disable this behavior, use the listen_registered_event setting.
Add the IsEmailVerified middleware to app/Http/Kernel.php:
protected $routeMiddleware = [
    // …
    'isEmailVerified' => \Lunaweb\EmailVerification\Middleware\IsEmailVerified::class,
];
And apply it in routes/web.php:
<?php
Route::group(['middleware' => ['web', 'auth', 'isEmailVerified']], function () {
// Verification
Route::get('register/verify', 'App\Http\Controllers\Auth\RegisterController@verify')->name('verifyEmailLink');
Route::get('register/verify/resend', 'App\Http\Controllers\Auth\RegisterController@showResendVerificationEmailForm')->name('showResendVerificationEmailForm');
Route::post('register/verify/resend', 'App\Http\Controllers\Auth\RegisterController@resendVerificationEmail')->name('resendVerificationEmail')->middleware('throttle:2,1');
});
To customize the verification email, override sendEmailVerificationNotification() of the User model. For example:
class User implements CanVerifyEmailContract
{
use CanVerifyEmail;
/**
* Send the email verification notification.
*
* @param string $token The verification mail reset token.
* @param int $expiration The verification mail expiration date.
* @return void
*/
public function sendEmailVerificationNotification($token, $expiration)
{
$this->notify(new MyEmailVerificationNotification($token, $expiration));
}
}
To customize the resend form, use the following command:
php artisan vendor:publish --provider="Lunaweb\EmailVerification\Providers\EmailVerificationServiceProvider" --tag="views"
Path to the template: resources/views/vendor/emailverification/resend.blade.php
To customize messages and the language used, use the following command:
php artisan vendor:publish --provider="Lunaweb\EmailVerification\Providers\EmailVerificationServiceProvider" --tag="translations"
Path to the files: resources/lang/
Sending a verification email is the most reliable way to check the validity of an email address. The tutorials above will help you implement this feature in your Laravel app. At the same time, if you need to validate a large number of existing addresses, you do not have to send a test email to each of them. There are plenty of online email validators that will do the job for you. In the most extreme case, you can validate an email address manually with mailbox pinging. For more on this, read How to Verify Email Address Without Sending an Email.
Laravel News Links
Hosting web servers on the internet can be very challenging for a first-timer without a proper guide. Cloud service providers have provided numerous ways to easily spin up servers of any kind in the cloud.
AWS is one of the biggest and most reliable cloud-based options for deploying servers. Here’s how you can get your Linux-based server running in the cloud with AWS EC2.
Amazon Elastic Cloud Compute (EC2) is one of the most popular web services offered by Amazon. With EC2, you can create virtual machines in the cloud with different operating systems and resizable compute capacity. This is very useful for launching secure web servers and making them available on the internet.
The AWS web console provides an easy-to-navigate interface that allows you to launch an instance without the use of any scripts or code. Here’s a step-by-step guide to launching a Linux-based EC2 instance on AWS. You’ll also learn how to connect to it securely via the console.
Sign in to your existing AWS account or head over to portal.aws.amazon.com to sign up for a new one. Then, search and navigate to the EC2 dashboard.
Locate the Launch instances button in the top-right corner of the screen and click it to launch the EC2 launch wizard.
The first required step is to enter a name for your instance; next, you choose the operating system image and version (Amazon Machine Image, or AMI) of the Linux distribution you wish to use. You’re free to explore recommended Linux server operating systems other than Ubuntu.
The different EC2 instance types are made up of various combinations of CPU, memory, storage, and networking power. There are up to 10 different instance types you can pick from, depending on your requirements. For demonstration, we’ll go with the default (t2.micro) instance type.
AWS has an article on choosing the right instance type for your EC2 virtual machine, which you can use as a reference.
In most cases, at least for development and debugging purposes, you might need to access your instance via SSH, and to do this securely, you need a key pair. Key pairs are technically optional, but since you will likely connect to your instance over SSH later, you should add one.
You can either use an existing key pair or create a new one. To create a new one, click on Create new key pair, and you will see the popup screen below.
Give your key pair a name and choose an encryption type (RSA is the most popular and recommended option, as it is supported across multiple platforms). You also need to choose a file format (PEM or PPK) for the private key, which will be downloaded to your local machine, depending on the SSH client you use.
The Network settings for your EC2 instance come up next. By default, you need to create a new security group to define firewall rules to restrict access to only specific ports on your instance.
It is recommended to restrict SSH connection to only your IP address to reduce the chances of your server getting hacked. You should also allow HTTP traffic if you’ve created the instance to be a web server.
You can always go back to edit your security group rules to add or remove inbound and outbound rules. For instance, adding inbound rules for HTTPS traffic when you set up an SSL certificate for secure HTTP connections.
By default, EC2 will allocate storage based on the instance type selected. But you have an option to attach an Amazon Elastic Block Storage volume (which acts like an external storage disk) to your instance.
This isn’t mandatory, but if you want a virtual disk that you can use across multiple instances or move around with ease, you should consider it. You can now review your instance configuration to be sure everything is set up correctly, then click on the Launch Instance button to create your Linux virtual machine.
You will be redirected to a screen where you have the View Instances button. Click it to see your newly launched instance.
Now that the virtual machine is up and running, you can set up a web server in it. It could be an Apache server, a Node.js server, or whatever server you want to use. There are four different ways to connect to an EC2 instance: EC2 Instance Connect, Session Manager, SSH client, and EC2 serial console.
The most common methods of connection are EC2 instance connect and SSH Client. EC2 instance connect is the quickest and easiest way to connect to your EC2 instance and perform your desired operations on it.
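If you choose the SSH client method instead, a typical session from your local machine looks like this, assuming an Ubuntu AMI, the key pair saved earlier as my-key.pem, and your instance’s public DNS name in place of the placeholder:

chmod 400 my-key.pem
ssh -i my-key.pem ubuntu@<your-instance-public-dns>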
To connect to your Linux instance via EC2 instance connect, select it on the dashboard and click Connect.
Select the EC2 instance connect tab and click on the Connect button. This would automatically open up a screen that looks like a command-line interface.
This confirms a successful login to your Linux machine, and you may now begin to set it up for your web server needs. For instance, to create a simple Apache web server, run the following commands:
sudo apt-get update -y
sudo apt-get install apache2 -y
sudo systemctl start apache2.service
To verify that everything went fine and the Apache server is up and running, check the status using sudo systemctl status apache2.service. If everything is okay, the status should read active (running).
Finally, you can test the server by copying the Public IPv4 DNS from the instance properties tab and pasting it into your browser. You should see the Apache demo page.
Congratulations on successfully setting up your Linux server in the AWS cloud. You may now build and deploy your applications to production with it.
Now you can easily set up a Linux web server in the cloud with Amazon EC2. While Ubuntu is the most-used operating system for Linux servers, the process to create an EC2 instance is the same for just about any other Linux distribution.
You could also set up different kinds of web servers such as Node.js, Git, Golang, or a Docker container. All you have to do is connect to your instance and carry out the steps to set up your preferred application server.
MUO – Feed
In web development, data integrity and accuracy are important. Therefore, we need to be sure that we are writing code that securely stores, updates, and deletes data in our databases. In this article, we’ll take a look at what database transactions are, why they’re important, and how to get started using them in Laravel. We will also look at typical problems associated with queued jobs and database transactions.
Before we get started with transactions in Laravel, let’s take a look at what they are and how they are useful.
A transaction wraps a group of database queries so they succeed or fail as a unit. It protects your data through the all-or-nothing principle.
Let’s say you transfer money from one account to another. In the application, this looks like several operations:
UPDATE `wallets` SET `amount` = `amount` - 100 WHERE `id` = 1;
UPDATE `wallets` SET `amount` = `amount` + 100 WHERE `id` = 2;
What if one request succeeds and the other fails? Then the integrity of the data will be violated. To avoid such situations, DBMSs introduced the concept of a transaction: an atomic operation on data that transfers the database from one consistent state to another. In other words, we include several queries in the transaction, all of which must be executed; if even one fails, then none of the queries in the transaction take effect. This is the all-or-nothing principle.
Now that we have an idea about transactions, let’s look at how to use them in Laravel.
First, let’s see what we have in the wallets table:
| id | amount |
|----|--------|
| 1  | 1000   |
| 2  | 0      |
I intentionally made a mistake in the transfer method to see the consequences of a data violation.
public function transfer()
{
Wallet::where('id', 1)->decrement('amount', 100);
Wallet::where('id_', 2)->increment('amount', 100);
}
After executing the code, check the database:

| id | amount |
|----|--------|
| 1  | 900    |
| 2  | 0      |
The first query passed, but the second one failed. In the end, the funds left the first account but never arrived in the second. Data integrity has been violated. To prevent this from happening, you need to use transactions.
It’s very easy to get started with transactions in Laravel thanks to the transaction() method, which we can access through the DB facade. Based on the previous code example, let’s look at how to use transactions in Laravel.
use Illuminate\Support\Facades\DB;
public function transfer()
{
DB::transaction(function(){
Wallet::where('id', 1)->decrement('amount', 100);
Wallet::where('id_', 2)->increment('amount', 100); // <-- left an error
});
}
Let’s run the code. But now both queries are inside a transaction; therefore, if either fails, neither should be applied.
| id | amount |
|----|--------|
| 1  | 1000   |
| 2  | 0      |
An error occurred while executing the second request. Because of this, the transaction as a whole failed. The amounts on the wallets have not changed.
Let’s fix the transfer method and run the code:
use Illuminate\Support\Facades\DB;
public function transfer()
{
DB::transaction(function(){
Wallet::where('id', 1)->decrement('amount', 100);
Wallet::where('id', 2)->increment('amount', 100);
});
}
After executing the code, check the database:

| id | amount |
|----|--------|
| 1  | 900    |
| 2  | 100    |
All requests were completed without errors, so the transaction was successful. The amounts on the wallets have changed.
This was a simple example using a closure. But what if you rely on third-party services whose responses should determine whether the transaction completes? Not all services throw exceptions; some just return a boolean. For such cases, Laravel has several methods for manually controlling transactions:
DB::beginTransaction() – starts a transaction
DB::commit() – executes all queries issued after DB::beginTransaction()
DB::rollBack() – cancels all queries issued after DB::beginTransaction()
Let’s consider them with an example. We have a wallet with a balance of $100 and a card with a balance of $50, and we want to use both balances to transfer $150 to another wallet.
use App\Services\ThirdPartyService;
use Illuminate\Support\Facades\DB;
private ThirdPartyService $thirdPartyService;
public function __construct(ThirdPartyService $thirdPartyService)
{
$this->thirdPartyService = $thirdPartyService;
}
public function transfer()
{
DB::transaction(function(){
Wallet::where('id', 1)->decrement('amount', 100);
$this->thirdPartyService->withdrawal(50); // <-- returns false
Wallet::where('id', 2)->increment('amount', 150);
});
}
Data integrity has been violated. The service does not throw an exception that would abort the transaction; it only returns false, and the code keeps running. As a result, we credit 150 to the second wallet without ever deducting 50 from the card.
Now let’s use the methods above to control the transaction manually:
use App\Services\ThirdPartyService;
use Illuminate\Support\Facades\DB;
private ThirdPartyService $thirdPartyService;
public function __construct(ThirdPartyService $thirdPartyService)
{
$this->thirdPartyService = $thirdPartyService;
}
public function transfer()
{
DB::beginTransaction();
Wallet::where('id', 1)->decrement('amount', 100);
if(!$this->thirdPartyService->withdrawal(50)) {
DB::rollBack();
return;
}
Wallet::where('id', 2)->increment('amount', 150);
DB::commit();
}
Thus, if a third-party service returns false to us, then by calling DB::rollBack() we prevent the queries from taking effect and preserve the integrity of the data.
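When a failure surfaces as an exception rather than a boolean, the same manual methods are commonly wrapped in try/catch. A sketch:

use Illuminate\Support\Facades\DB;

public function transfer()
{
    DB::beginTransaction();

    try {
        Wallet::where('id', 1)->decrement('amount', 100);
        Wallet::where('id', 2)->increment('amount', 100);

        DB::commit(); // every query succeeded, so make the changes permanent
    } catch (\Throwable $e) {
        DB::rollBack(); // any failure cancels all queries in the transaction

        throw $e;
    }
}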
Laravel News Links
If you use Gates in the Laravel project for roles/permissions, you can add one condition to override any gates, making a specific user a super-admin. Here’s how.
Let’s imagine you have this line in app/Providers/AuthServiceProvider.php, as per the documentation:
public function boot()
{
Gate::define('update-post', function (User $user, Post $post) {
return $user->id === $post->user_id;
});
}
And in this way, you define more gates like create-post, delete-post, and others.
But then, you want some User with, let’s say, users.role_id == 1
to be able to do ANYTHING with the posts. And with other features, too. In other words, a super-admin.
All you need to do is, within the same boot() method, add these lines:
Gate::before(function($user, $ability) {
if ($user->role_id == 1) {
return true;
}
});
Depending on your logic of roles/permissions, you may change the condition, like this, for example:
Gate::before(function($user, $ability) {
if ($user->hasPermission('root')) {
return true;
}
});
In other words, for any $ability, you return true if the user has a certain role or permission. Then Laravel wouldn’t even check the Gate logic and would just grant that user access.
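With that in place, every ability check passes for the super-admin. For example, using the update-post gate defined above (a quick sketch):

use Illuminate\Support\Facades\Gate;

if (Gate::forUser($user)->allows('update-post', $post)) {
    // Users with role_id == 1 reach this branch for any post;
    // everyone else only for posts they own.
}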
Of course, be careful with that, because one wrong condition may grant access to someone who is not a super-admin.
You can read more about Gates and permissions in the official documentation.
Laravel News Links
With New York Comic Con underway this weekend, Paramount shared a new trailer for the final season of Star Trek: Picard. After the previous teasers mostly played up the nostalgia of the principal cast of The Next Generation returning to the franchise, the new trailer finally offers a look at season three’s story. And judging from the clip, Picard will end with a bang.
The trailer opens with Starfleet facing an entirely new threat in the form of an alien vessel called the Shrike. What follows is a fun series of scenes that sees Admiral Jean-Luc Picard recruit his old friends, some of them a little less than willing, to face a villain played by Pulp Fiction’s Amanda Plummer. There’s something satisfying about seeing how characters like Worf have changed in unexpected ways in their later years. Even more unexpected are the two cameos at the end of the trailer. Daniel Davis is back as the holographic Professor Moriarty, while Brent Spiner will play Data’s evil android twin, Lore.
Alongside a new trailer for Star Trek: Picard, Paramount also shared fresh teasers for season five of Discovery and the midseason return of Prodigy. The latter will debut on October 27th, while the former is expected to arrive sometime next year. The final season of Picard will begin streaming on February 16th, 2023.
Engadget
Far-left commentators earlier this week tried to cancel Matt Walsh over some comments he made years ago about unwed pregnancies and the historical ages at which women used to begin childbearing.
Not the Bee
The Federal Bureau of Investigation continues to tarnish its own reputation by vastly downplaying — by a factor of more than 10 — the number of incidents in which armed Americans stop spree killers. According to the FBI, the same people who can find no evidence of crime on Hunter Biden’s laptop, only 4.4% of these incidents were stopped by a good guy “civilian” with a gun. Analysis by the Crime Prevention Research Center shows the actual number is closer to 50% or more in some instances.
Not only that, but with each passing year, the numbers of spree killings cut short by everyday Americans carrying firearms continues to steadily grow. That shouldn’t surprise anyone as more and more Americans get concealed carry licenses, to say nothing of the half of the nation now living under constitutional carry laws where good guys don’t need a permission slip to carry.
What’s more, in non-“gun-free” zones where good guys aren’t prohibited from carrying lawfully, the number of mass murders interrupted is over 50%.
The CPRC, John Lott’s group, took the time to do the research and what they found is truly appalling. Example: the FBI claimed the would-be murder spree in a White Settlement, Texas church wasn’t stopped by a civilian good guy. Instead, the FBI massaged that case and sorted it as a “security guard” stopping the attack.
How did the Fibbies’ galaxy brains steer that away from a “good guy with a gun” description? They claimed that because Jack Wilson volunteered as church security, he was a “security guard.”
You be the judge. Was that Mr. Wackenhut or Ms. Securitas who took down this killer, or was it an everyday good guy with a gun?
Then again, this is the same FBI that took weeks to determine that the San Bernardino mass killers were jihadists. Ditto for the Pulse Nightclub killer.
The Crime Prevention Research Center has the details . . .
The shooting that killed three people and injured another at a Greenwood, Indiana, mall on July 17 drew broad national attention because of how it ended – when 22-year-old Elisjsha Dicken, carrying a licensed handgun, fatally shot the attacker.
While Dicken was praised for his courage and skill – squeezing off his first shot 15 seconds after the attack began, from a distance of 40 yards – much of the immediate news coverage drew from FBI-approved statistics to assert that armed citizens almost never stop such attackers: “Rare in US for an active shooter to be stopped by bystander” (Associated Press); “Rampage in Indiana a rare instance of armed civilian ending mass shooting” (Washington Post); and “After Indiana mall shooting, one hero but no lasting solution to gun violence” (New York Times).
Evidence compiled by the Crime Prevention Research Center shows that the sources the media relied on undercounted the number of instances in which armed citizens have thwarted such attacks by an order of more than ten, saving untold numbers of lives. Of course, law-abiding citizens stopping these attacks are not rare. What is rare is national news coverage of those incidents. Although those many news stories about the Greenwood shooting also suggested that the defensive use of guns might endanger others, there is no evidence that these acts have harmed innocent victims.
The FBI reports that armed citizens only stopped 11 of the 252 active shooter incidents it identified for the period 2014-2021. The FBI defines active shooter incidents as those in which an individual actively kills or attempts to kill people in a populated, public area. But it does not include those it deems related to other criminal activity, such as a robbery or fighting over drug turf.
An analysis by my organization identified a total of 360 active shooter incidents during that period and found that an armed citizen stopped 124. A previous report looked at only instances when armed civilians stopped what likely would have been mass public shootings. There were another 24 cases that we didn’t include where armed civilians stopped armed attacks, but the suspect didn’t fire his gun. Those cases are excluded from our calculations, though it could be argued that a civilian also stopped what likely could have been an active shooting event.
The FBI reported that armed citizens thwarted 4.4% of active shooter incidents, while the CPRC found 34.4%.
As usual with John Lott’s research, there’s a ton of details and background information at the link. Go read it.
Two factors explain this discrepancy – one, misclassified shootings; and two, overlooked incidents. Regarding the former, the CPRC determined that the FBI reports had misclassified five shootings: In two incidents, the Bureau notes in its detailed write-up that citizens possessing valid firearms permits confronted the shooters and caused them to flee the scene. However, the FBI did not list these cases as being stopped by armed citizens because police later apprehended the attackers. In two other incidents, the FBI misidentified armed civilians as armed security personnel. Finally, the FBI failed to mention citizen engagement in one incident.
For example, the Bureau’s report about the Dec. 29, 2019 attack on the West Freeway Church of Christ in White Settlement, Texas, that left two men dead does not list this as an incident of “civic engagement.” Instead, the FBI lists this attack as being stopped by a security guard. A parishioner, who had volunteered to provide security during worship, fatally shot the perpetrator. That man, Jack Wilson, told Dr. John Lott that he was not a security professional. He said that 19 to 20 members of the congregation were armed that day, and they didn’t even keep track of who was carrying a concealed weapon.
As for the second factor — overlooked cases — the FBI, more significantly, missed 25 incidents identified by CPRC where what would likely have been a mass public shooting was thwarted by armed civilians. There were another 83 active shooting incidents that they missed.
It’s almost as if the Biden administration and Merrick Garland’s FBI have been working hard to smother the facts showing that good guys with guns do indeed stop bad people with evil in their hearts.
And that’s despite the fact that most of these shootings intentionally occur in “gun-free” zones, where only the good guys are disarmed and the bad guys know they’ll find defenseless victims. Because of this, we, as the armed citizenry, have one hand tied behind our back when it comes to these statistics. Even without the .gov’s stats massaging trickery.
Americans aren’t stupid though.
The Truth About Guns