As I teach students how to create tables in MySQL Workbench, it’s always important to review the meaning of the checkbox keys. Then, I need to remind them that every table requires a natural key from our prior discussion on normalization. I explain that a natural key is a compound candidate key (made up of two or more column values), and that it naturally defines uniqueness for each row in a table.
Then, we discuss surrogate keys, which are typically ID columns. I explain that surrogate keys are driven by sequences in the database. While a number of databases expose the names of their sequences, MySQL treats the sequence as an attribute of the table. In Object-Oriented Analysis and Design (OOAD), that makes the sequence a member of the table by composition rather than aggregation. Surrogate keys are also unique in the table, but they should never be used to determine uniqueness the way the natural key does. Surrogate keys are candidate keys in the same way a VIN uniquely identifies a vehicle.
In a well-designed table you always have two candidate keys: one describes the unique row and the other assigns a number to it. While you can perform joins using either candidate key, you should always use the surrogate key in join statements. This means you elect, or choose, the surrogate candidate key as the primary key. Then, you build a unique index on the natural key, which lets you query any unique row with human-decipherable words.
The column attribute checkboxes in MySQL Workbench are:
PK – Designates a primary key column.
NN – Designates a not-null column constraint.
UQ – Designates that a column contains a unique value for every row.
BIN – Designates a VARCHAR data type column whose values are stored in a case-sensitive fashion. You can't apply this constraint to other data types.
UN – Designates that a column contains an unsigned numeric data type. The possible values run from 0 to the maximum of the data type, such as integer, float, or double. The value 0 is never assigned when you also select the PK and AI checkboxes, because the column automatically increments from 1 toward the maximum value of the data type.
ZF – Designates zero fill, which pads any number data type with leading zeros until all space is consumed; it acts like a left-pad function with zeros.
AI – Designates AUTO_INCREMENT and should only be checked for a surrogate primary key column.
All surrogate key columns should have the PK, NN, UN, and AI checkboxes checked. The default behavior checks only the PK and NN checkboxes and leaves the UN and AI boxes unchecked. The AI checkbox enables AUTO_INCREMENT behavior, and the UN checkbox ensures you have the maximum range of integers before you would need to migrate the column to a larger data type.
Active tables grow quickly, and using a signed int means you run out of rows more quickly. This is an important design consideration because using a signed int adds a maintenance task later. That maintenance task requires changing the data type of all dependent foreign key columns before changing the primary key column's data type. Assuming your design uses referential integrity constraints, implemented as foreign keys, you will need to (see the sketch after this list):
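For reference, checking those four boxes on a surrogate key column generates DDL along these lines (the table and column names here are only illustrative):
CREATE TABLE example
( example_id    INT UNSIGNED NOT NULL AUTO_INCREMENT  -- PK, NN, UN, and AI checked
, example_name  VARCHAR(64)  NOT NULL                 -- the natural key in this sketch
, PRIMARY KEY (example_id)
, UNIQUE INDEX example_nk (example_name));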
Remove any foreign key constraints before changing the referenced primary key and dependent foreign key column’s data types.
Change the primary and foreign key column’s data types.
Add back foreign key constraints after changing the referenced primary key and dependent foreign key column’s data types.
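A minimal sketch of that maintenance task, assuming a parent_table/child_table pair with illustrative constraint and column names, looks like this:
ALTER TABLE child_table DROP FOREIGN KEY child_table_parent_fk;
ALTER TABLE parent_table MODIFY parent_table_id INT UNSIGNED NOT NULL AUTO_INCREMENT;
ALTER TABLE child_table  MODIFY parent_table_id INT UNSIGNED NOT NULL;
ALTER TABLE child_table
  ADD CONSTRAINT child_table_parent_fk FOREIGN KEY (parent_table_id)
  REFERENCES parent_table (parent_table_id);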
While fixing a less optimal design is a relatively simple scripting exercise for most data engineers, you can avoid this maintenance task altogether: implement all surrogate primary key columns and foreign key columns with an unsigned int as their initial data type.
The following small ERD displays a multi-language lookup table, which is preferable to a monolingual enum data type:
A design uses a lookup table when there are known lists of selections to make, and such lists occur in most, if not all, business applications. Maintaining a list of values is an application setup task and requires the development team to build an entry and update form to input and maintain the lists.
Some MySQL examples demonstrate these types of lists with the MySQL enum data type. However, the enum type doesn't support multilingual implementations, isn't readily portable to other relational databases, and has a number of other limitations.
A lookup table is the better solution to using an enum data type. It typically follows this pattern:
Identify the target table and column where a list is useful. Use the table_name and column_name columns as a super key to identify the location where the list belongs.
Identify a unique type identifier for the list. Store the unique type value in the type column of the lookup table.
Use a lang column to enable multilingual lists.
The combination of the table_name, column_name, type, and lang columns lets you identify unique sets. You can find a monolingual implementation in two of my older blog posts.
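A minimal sketch of such a lookup table, assuming the column names from the pattern above and illustrative column sizes, looks like this:
CREATE TABLE lookup
( lookup_id    INT UNSIGNED NOT NULL AUTO_INCREMENT
, table_name   VARCHAR(64)  NOT NULL
, column_name  VARCHAR(64)  NOT NULL
, type         VARCHAR(30)  NOT NULL
, lang         VARCHAR(30)  NOT NULL
, meaning      VARCHAR(60)  NOT NULL
, PRIMARY KEY (lookup_id)
, UNIQUE INDEX lookup_nk (table_name, column_name, type, lang));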
The column view of the lookup table shows the appropriate design checkboxes:
While most foreign keys are copies of surrogate keys, there are instances when you copy the natural key value from another table rather than the surrogate key. You do this when your application will frequently query the dependent lookup table without a join to the language table, which means the foreign key should be a human-friendly value that works as a super key.
A super key is a column or set of columns that uniquely identifies rows in the scope of a relation. For this example, the lang column identifies the rows that belong to a language in a multilingual data model. Belonging to a language is the relation between the lookup and language tables. The lang column also acts as a key when filtering rows with a specific lang value from the lookup table.
You navigate to the foreign key tab to create a lookup_fk foreign key constraint, like:
With this type of foreign key constraint, you copy the lang value from the language table when inserting the lookup table values. Then, your HTML forms can use the lookup table’s meaning column in any of the supported languages, like:
SELECT lookup_id
, type
, meaning
FROM lookup
WHERE table_name = 'some_table_name'
AND column_name = 'some_column_name'
AND lang = 'some_lang_name';
The type column value isn’t used in the WHERE clause to filter the data set because it is unique within the relation of the table_name, column_name, and lang column values. It is always non-unique when you exclude the lang column value, and potentially non-unique for another combination of the table_name and column_name column values.
If I've left any questions unanswered, let me know. Otherwise, I hope this helps clarify a best design practice.
In my previous post, I explained how to use Performance_Schema and Sys to identify candidates for query optimization and to understand the workload on the database.
In this article, we will see how we can create an OCI Fn Application that will generate a slow query log from our MySQL Database Service instance and store it to Object Storage.
The creation of the function and its use is similar to the one explained in the previous post about creating a logical dump of a MySQL instance to Object Storage.
The Application
We need to create an application and two functions, one to extract the data in JSON format and one in plain text. We also need to deploy an API Gateway that will allow us to call those functions from anywhere (publicly):
Let’s start by creating the application in OCI console:
We need to use the Public Subnet of our VCN to be able to reach it later from our API Gateway:
After we click on Create, we can see our new Application created:
We then need to follow the statement displayed on the rest of the page. We use Cloud Shell:
This looks like this:
fdescamp@cloudshell:~ (us-ashburn-1)$ fn update context registry iad.ocir.io/i***********j/lefred
Current context updated registry with iad.ocir.io/i***********j/lefred
fdescamp@cloudshell:~ (us-ashburn-1)$ docker login -u 'idinfdw2eouj/fdescamp' iad.ocir.io
Password: **********
WARNING! Your password will be stored unencrypted in /home/fdescamp/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
fdescamp@cloudshell:~ (us-ashburn-1)$ fn list apps
NAME ID
slow_query_log ocid1.fnapp.oc1.iad.xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxq
After that in Cloud Shell, we initialize our two new functions:
fdescamp@cloudshell:~ (us-ashburn-1)$ fn init --runtime python mysql_slowlog_txt
Creating function at: ./mysql_slowlog_txt
Function boilerplate generated.
func.yaml created.
fdescamp@cloudshell:~ (us-ashburn-1)$ fn init --runtime python mysql_slowlog
Creating function at: ./mysql_slowlog
Function boilerplate generated.
func.yaml created.
Both functions are initialized; we will start with the one that dumps the queries in JSON format.
Function mysql_slowlog
As this is the first function of our application, we will define a Dockerfile and the requirements in a file (requirements.txt) in the folder of the function:
fdescamp@cloudshell:~ (us-ashburn-1)$ cd mysql_slowlog
fdescamp@cloudshell:mysql_slowlog (us-ashburn-1)$ ls
Dockerfile func.py func.yaml requirements.txt
We need to add the following content in the Dockerfile:
Now we go to the ../mysql_slowlog_txt directory and modify the func.yaml file to increase the memory to the same amount as the previous function (2048).
fdescamp@cloudshell:mysql_slowlog (us-ashburn-1)$ cd ../mysql_slowlog_txt/
fdescamp@cloudshell:mysql_slowlog_txt (us-ashburn-1)$ vi func.yaml
fdescamp@cloudshell:mysql_slowlog_txt (us-ashburn-1)$ vi func.py
fdescamp@cloudshell:mysql_slowlog_txt (us-ashburn-1)$ fn -v deploy --app slow_query_log
Variables
Our application requires some variables to work. Some will be sent every time the function is called, like which MySQL instance to use, the user's credentials, and which Object Storage bucket to write to. Others will be "hardcoded" so we don't have to specify them each time (like the tenancy, the OCI user, and so on).
We use Cloud Shell again to set those that won't be specified on each call:
We also need to provide an OCI key as a string. The content of the string can be generated with the base64 command-line program:
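Assuming the OCI API private key sits at ~/.oci/oci_api_key.pem (adjust the path to your own setup), the string can be produced like this:
base64 -w 0 ~/.oci/oci_api_key.pem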
And then we add it in Cloud Shell like this:
fn config app slow_query_log oci_key '<THE CONTENT OF THE STRING ABOVE>'
Testing
If the security list is configured correctly (the Private Subnet accepting connections on port 3306 from the Public Subnet) and an Object Storage bucket is ready (with the name configured earlier), we can already test our functions directly from Cloud Shell:
MySQL is an outstanding database for online transaction processing. With suitable hardware, it is easy to execute more than 1M queries per second and handle tens of thousands of simultaneous connections. Many of the most demanding web applications on the planet are built on MySQL. With capabilities like that, why would MySQL users need anything else?
Well, analytic queries for starters. Analytic queries answer important business questions like finding the number of unique visitors to a website over time or figuring out how to increase online purchases. They scan large volumes of data and compute aggregates, including sums, averages, and much more complex calculations besides. The results are invaluable but can bog down online transaction processing on MySQL.
Fortunately, there’s ClickHouse: a powerful analytic database that pairs well with MySQL. Altinity is working closely with our partner Percona to help users add ClickHouse easily to existing MySQL applications. You can read more about our partnership in our recent press release as well as about our joint MySQL-to-ClickHouse solution.
This article provides tips on how to recognize when MySQL is overburdened with analytics and can benefit from ClickHouse’s unique capabilities. We then show three important patterns for integrating MySQL and ClickHouse. The result is more powerful, cost-efficient applications that leverage the strengths of both databases.
Signs that indicate MySQL needs analytic help
Let’s start by digging into some obvious signs that your MySQL database is overburdened with analytics processing.
Huge tables of immutable data mixed in with transaction tables
Tables that drive analytics tend to be very large, rarely have updates, and may also have many columns. Typical examples are web access logs, marketing campaign events, and monitoring data. If you see a few outlandishly large tables of immutable data mixed with smaller, actively updated transaction processing tables, it’s a good sign your users may benefit from adding an analytic database.
Complex aggregation pipelines
Analytic processing produces aggregates, which are numbers that summarize large datasets to help users identify patterns. Examples include unique site visitors per week, average page bounce rates, or counts of web traffic sources. MySQL may take minutes or even hours to compute such values. To improve performance it is common to add complex batch processes that precompute aggregates. If you see such aggregation pipelines, it is often an indication that adding an analytic database can reduce the labor of operating your application as well as deliver faster and more timely results for users.
MySQL is too slow or inflexible to answer important business questions
A final clue is the in-depth questions you don’t ask about MySQL-based applications because it is too hard to get answers. Why don’t users complete purchases on eCommerce sites? Which strategies for in-game promotions have the best payoff in multi-player games? Answering these questions directly from MySQL transaction data often requires substantial time and external programs. It’s sufficiently difficult that most users simply don’t bother. Coupling MySQL with a capable analytic database may be the answer.
Why is ClickHouse a natural complement to MySQL?
MySQL is an outstanding database for transaction processing. Yet the features of MySQL that make it work well–storing data in rows, single-threaded queries, and optimization for high concurrency–are exactly the opposite of those needed to run analytic queries that compute aggregates on large datasets.
ClickHouse on the other hand is designed from the ground up for analytic processing. It stores data in columns, has optimizations to minimize I/O, computes aggregates very efficiently, and parallelizes query processing. ClickHouse can answer complex analytic questions almost instantly in many cases, which allows users to sift through data quickly. Because ClickHouse calculates aggregates so efficiently, end users can pose questions in many ways without help from application designers.
These are strong claims. To understand them it is helpful to look at how ClickHouse differs from MySQL. Here is a diagram that illustrates how each database pulls in data for a query that reads all values of three columns of a table.
MySQL stores table data by rows. It must read the whole row to get data for just three columns. MySQL production systems also typically do not use compression, as it has performance downsides for transaction processing. Finally, MySQL uses a single thread per query and cannot parallelize the work within it.
By contrast, ClickHouse reads only the columns referenced in queries. Storing data in columns enables ClickHouse to compress data at levels that often exceed 90%. Finally, ClickHouse stores tables in parts and scans them in parallel.
The amount of data you read, how greatly it is compressed, and the ability to parallelize work make an enormous difference. Here’s a picture that illustrates the reduction in I/O for a query reading three columns.
MySQL and ClickHouse give the same answer. To get it, MySQL reads 59 GB of data, whereas ClickHouse reads only 21 MB. That’s close to 3000 times less I/O, hence far less time to access the data. ClickHouse also parallelizes query execution very well, further improving performance. It is little wonder that analytic queries run hundreds or even thousands of times faster on ClickHouse than on MySQL.
ClickHouse also has a rich set of features to run analytic queries quickly and efficiently. These include a large library of aggregation functions, the use of SIMD instructions where possible, the ability to read data from Kafka event streams, and efficient materialized views, just to name a few.
There is a final ClickHouse strength: excellent integration with MySQL. Here are a few examples.
ClickHouse can ingest mysqldump and CSV data directly into ClickHouse tables.
ClickHouse can perform remote queries on MySQL tables, which provides another way to explore as well as ingest data quickly.
The ClickHouse query dialect is similar to MySQL, including system commands like SHOW PROCESSLIST, for example.
ClickHouse even supports the MySQL wire protocol on port 3306.
For all of these reasons, ClickHouse is a natural choice to extend MySQL capabilities for analytic processing.
Why is MySQL a natural complement to ClickHouse?
Just as ClickHouse can add useful capabilities to MySQL, it is important to see that MySQL adds useful capabilities to ClickHouse. ClickHouse is outstanding for analytic processing but there are a number of things it does not do well. Here are some examples.
Transaction processing – ClickHouse does not have full ACID transactions. You would not want to use ClickHouse to process online orders. MySQL does this very well.
Rapid updates on single rows – Selecting all columns of a single row is inefficient in ClickHouse, as you must read many files. Updating a single row may require rewriting large amounts of data. You would not want to put eCommerce session data in ClickHouse. It is a standard use case for MySQL.
Large numbers of concurrent queries – ClickHouse queries are designed to use as many resources as possible, not share them across many users. You would not want to use ClickHouse to hold metadata for microservices, but MySQL is commonly used for such purposes.
In fact, MySQL and ClickHouse are highly complementary. Users get the most powerful applications when ClickHouse and MySQL are used together.
Introducing ClickHouse to MySQL integration
There are three main ways to integrate MySQL data with ClickHouse analytic capabilities. They build on each other.
View MySQL data from ClickHouse. MySQL data is queryable from ClickHouse using native ClickHouse SQL syntax. This is useful for exploring data as well as joining on data where MySQL is the system of record.
Move data permanently from MySQL to ClickHouse. ClickHouse becomes the system of record for that data. This unloads MySQL and gives better results for analytics.
Mirror MySQL data in ClickHouse. Make a snapshot of data into ClickHouse and keep it updated using replication. This allows users to ask complex questions about transaction data without burdening transaction processing.
Viewing MySQL data from ClickHouse
ClickHouse can run queries on MySQL data using the MySQL database engine, which makes MySQL data appear as local tables in ClickHouse. Enabling it is as simple as executing a single SQL command like the following on ClickHouse:
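A hedged sketch of that command, with placeholder host, schema, and credentials, creates a ClickHouse database whose tables are proxies for the remote MySQL tables:
CREATE DATABASE mysql_db ENGINE = MySQL('mysql-host:3306', 'production_db', 'ch_reader', 'secret');
After that, SHOW TABLES FROM mysql_db lists the remote tables and you can query them with ordinary ClickHouse SELECT statements.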
Here is a simple illustration of the MySQL database engine in action.
The MySQL database engine makes it easy to explore MySQL tables and make copies of them in ClickHouse. ClickHouse queries on remote data may even run faster than in MySQL! This is because ClickHouse can sometimes parallelize queries even on remote data. It also offers more efficient aggregation once it has the data in hand.
Moving MySQL data to ClickHouse
Migrating large tables with immutable records permanently to ClickHouse can give vastly accelerated analytic query performance while simultaneously unloading MySQL. The following diagram illustrates how to migrate a table containing web access logs from MySQL to ClickHouse.
On the ClickHouse side, you’ll normally use MergeTree table engine or one of its variants such as ReplicatedMergeTree. MergeTree is the go-to engine for big data on ClickHouse. Here are three important features that will help you get the most out of ClickHouse.
Partitioning – MergeTree divides tables into parts using a partition key. Access logs and other big data tend to be time ordered, so it's common to divide data by day, week, or month. For best performance, it's advisable to pick a partitioning scheme that results in 1,000 parts or fewer.
Ordering – MergeTree sorts rows and constructs an index on rows to match the ordering you choose. It’s important to pick a sort order that gives you large “runs” when scanning data. For instance, you could sort by tenant followed by time. That would mean a query on a tenant’s data would not need to jump around to find rows related to that tenant.
Compression and codecs – ClickHouse uses LZ4 compression by default but also offers ZSTD compression as well as codecs. Codecs reduce column data before turning it over to compression.
These features can make an enormous difference in performance. We cover them and add more performance tips in Altinity videos (look here and here.) as well as blog articles.
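To make the three features concrete, here is a hedged sketch of an access-log table; the column names, partition key, and codecs are assumptions chosen for illustration:
CREATE TABLE access_log
( tenant_id  UInt32,
  ts         DateTime CODEC(Delta, ZSTD),
  url        String,
  status     UInt16,
  bytes_sent UInt64
)
ENGINE = MergeTree
PARTITION BY toYYYYMM(ts)
ORDER BY (tenant_id, ts);
Partitioning by month keeps the part count manageable, and ordering by tenant then time gives queries on a single tenant long contiguous runs of data.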
The ClickHouse MySQL database engine can also be very useful in this scenario. It enables ClickHouse to “see” and select data from remote transaction tables in MySQL. Your ClickHouse queries can join local tables on transaction data whose natural home is MySQL. Meanwhile, MySQL handles transactional changes efficiently and safely.
Migrating tables to ClickHouse generally proceeds as follows. We’ll use the example of the access log shown above.
Create a matching schema for the access log data on ClickHouse.
Dump/load data from MySQL to ClickHouse using any of the following tools:
Mydumper – A parallel dump/load utility that handles mysqldump and CSV formats.
MySQL Shell – A general-purpose utility for managing MySQL that can import and export tables.
Copy data using SELECT on MySQL database engine tables.
Native database commands – Use MySQL SELECT ... INTO OUTFILE to dump data to CSV and read it back using ClickHouse INSERT ... SELECT FROM file(). ClickHouse can even read mysqldump format.
Check performance with suitable queries; make adjustments to the schema and reload if necessary.
Adapt front end and access log ingest to ClickHouse.
Run both systems in parallel while testing.
Cut over from MySQL alone to MySQL + ClickHouse.
Migration can take as little as a few days but it’s more common to take weeks to a couple of months in large systems. This helps ensure that everything is properly tested and the roll-out proceeds smoothly.
Mirroring MySQL Data in ClickHouse
The other way to extend MySQL is to mirror the data in ClickHouse and keep it up to date using replication. Mirroring allows users to run complex analytic queries on transaction data without (a) changing MySQL and its applications or (b) affecting the performance of production systems.
Here are the working parts of a mirroring setup.
ClickHouse has a built-in way to handle mirroring: the experimental MaterializedMySQL database engine, which reads binlog records directly from the MySQL primary and propagates data into ClickHouse tables. The approach is simple but is not yet recommended for production use. It may eventually be important for 1-to-1 mirroring cases but needs additional work before it can be widely used.
The externalized approach has a number of advantages. They include working with current ClickHouse releases, taking advantage of fast dump/load programs like mydumper or direct SELECT using MySQL database engine, support for mirroring into replicated tables, and simple procedures to add new tables or reset old ones. Finally, it can extend to multiple upstream MySQL systems replicating to a single ClickHouse cluster.
ClickHouse can mirror data from MySQL thanks to the unique capabilities of the ReplacingMergeTree table. It has an efficient method of dealing with inserts, updates, and deletes that is ideally suited for use with replicated data. As mentioned already, ClickHouse cannot update individual rows easily, but it inserts data extremely quickly and has an efficient process for merging rows in the background. ReplacingMergeTree builds on these capabilities to handle changes to data in a "ClickHouse way."
Replicated table rows use version and sign columns to represent the version of changed rows as well as whether the change is an insert or delete. The ReplacingMergeTree will only keep the last version of a row, which may in fact be deleted. The sign column lets us apply another ClickHouse trick to make those deleted rows inaccessible. It’s called a row policy. Using row policies we can make any row where the sign column is negative disappear.
Here’s an example of ReplacingMergeTree in action that combines the effect of the version and sign columns to handle mutable data.
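A hedged sketch of such a table and its row policy, with illustrative names and a version/sign column pair:
CREATE TABLE orders_mirror
( order_id  UInt64,
  customer  String,
  amount    Decimal(10, 2),
  _version  UInt64,
  _sign     Int8
)
ENGINE = ReplacingMergeTree(_version)
ORDER BY order_id;

CREATE ROW POLICY orders_mirror_live ON orders_mirror
FOR SELECT USING _sign > 0 TO ALL;
Queries that need fully merged results can add the FINAL keyword, for example SELECT * FROM orders_mirror FINAL.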
Mirroring data into ClickHouse may appear more complex than migration but in fact is relatively straightforward because there is no need to change MySQL schema or applications and the ClickHouse schema generation follows a cookie-cutter pattern. The implementation process consists of the following steps.
Create schema for replicated tables in ClickHouse.
Configure and start replication from MySQL to ClickHouse.
Dump/load data from MySQL to ClickHouse using the same tools as migration.
At this point, users are free to start running analytics or build additional applications on ClickHouse whilst changes replicate continuously from MySQL.
Tooling improvements are on the way!
MySQL to ClickHouse migration is an area of active development both at Altinity as well as the ClickHouse community at large. Improvements fall into three general categories.
Dump/load utilities – Altinity is working on a new utility to move data that reduces schema creation and data transfer to a single command. We will have more to say on this in a future blog article.
Replication – Altinity is sponsoring the Sink Connector for ClickHouse, which automates high-speed replication, including monitoring as well as integration into Altinity.Cloud. Our goal is similarly to reduce replication setup to a single command.
ReplacingMergeTree – Currently users must include the FINAL keyword on table names to force the merging of row changes. It is also necessary to add a row policy to make deleted rows disappear automatically. There are pull requests in progress to add a MergeTree property to add FINAL automatically in queries as well as make deleted rows disappear without a row policy. Together they will make handling of replicated updates and deletes completely transparent to users.
We are also watching carefully for improvements on MaterializedMySQL as well as other ways to integrate ClickHouse and MySQL efficiently. You can expect further blog articles in the future on these and related topics. Stay tuned!
Wrapping up and getting started
ClickHouse is a powerful addition to existing MySQL applications. Large tables with immutable data, complex aggregation pipelines, and unanswered questions on MySQL transactions are clear signs that integrating ClickHouse is the next step to provide fast, cost-efficient analytics to users.
Depending on your application, it may make sense to mirror data onto ClickHouse using replication or even migrate some tables into ClickHouse. ClickHouse already integrates well with MySQL and better tooling is arriving quickly. Needless to say, all Altinity contributions in this area are open source, released under Apache 2.0 license.
The most important lesson is to think in terms of MySQL and ClickHouse working together, not as one being a replacement for the other. Each database has unique and enduring strengths. The best applications will build on these to provide users with capabilities that are faster and more flexible than using either database alone.
Percona, well-known experts in open source databases, partners with Altinity to deliver robust analytics for MySQL applications. If you would like to learn more about MySQL integration with ClickHouse, feel free to contact us or leave a message on our forum at any time.
When a new user clicks on the Sign up button of an app, he or she usually gets a confirmation email with an activation link (see examples here). This is needed to make sure that the user owns the email address entered during the sign-up. After the click on the activation link, the user is authenticated for the app.
From the user’s standpoint, the email verification process is quite simple. From the developer’s perspective, things are much trickier unless your app is built with Laravel. Those who use Laravel 5.7+ have the user email verification available out-of-the-box. For earlier releases of the framework, you can use a dedicated package to add email verification to your project. In this article, we’ll touch upon each solution you can choose.
Basic project to be used as an example
Since email verification requires one to send emails in Laravel, let’s create a basic project with all the stuff needed for that. Here is the first command to begin with:
Run the migrate command to create tables for users, password resets, and failed jobs:
php artisan migrate
Since our Laravel app will send a confirmation email, we need to set up the email configuration in the .env file.
For email testing purposes, we’ll use Mailtrap Email Sandbox, which captures SMTP traffic from staging and allows developers to debug emails without the risk of spamming users.
The Email Sandbox is one of the SMTP drivers in Laravel. All you need to do is sign up and add your credentials to .env, as follows:
In Laravel, you can scaffold the UI for registration, login, and forgot password using the php artisan make:auth command. However, it was removed from Laravel 6. In the latest releases of the framework, a separate package called laravel/ui is responsible for the login and registration scaffolding with React, Vue, jQuery and Bootstrap layouts. After you install the package, you can use the php artisan ui vue --auth command to scaffold UI with Vue, for example.
Set up email verification in Laravel 5.7+ using the MustVerifyEmail contract
The MustVerifyEmail contract is a feature that allows you to send email verification in Laravel by adding a few lines of code to the following files:
App/User.php:
Implement the MustVerifyEmail contract in the User model:
<?php
namespace App;
use Illuminate\Notifications\Notifiable;
use Illuminate\Contracts\Auth\MustVerifyEmail;
use Illuminate\Foundation\Auth\User as Authenticatable;
class User extends Authenticatable implements MustVerifyEmail
{
use Notifiable;
protected $fillable = [
'name', 'email', 'password',
];
protected $hidden = [
'password', 'remember_token',
];
}
routes/web.php
Add routes such as email/verify and email/resend to the app:
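With the default laravel/ui auth scaffolding, this is typically done by enabling the built-in verification routes (a minimal sketch; adjust it to your own scaffolding):
Auth::routes(['verify' => true]);
Then protect the routes that require a verified email with the verified middleware, as the HomeController below does: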
<?php
namespace App\Http\Controllers;
use Illuminate\Http\Request;
class HomeController extends Controller
{
public function __construct()
{
$this->middleware(['auth','verified']);
}
public function index()
{
return view('home');
}
}
Now you can test the app.
And that’s what you’ll see in the Mailtrap Demo inbox:
Customization
On screenshots above, the default name of the app, Laravel, is used as a sender’s name. You can update the name in the .env file:
APP_NAME=<Name of your app>
To customize notifications, you need to override the sendEmailVerificationNotification method of the App\User class. This default method calls the notify method to notify the user after sign-up.
To override sendEmailVerificationNotification, create a custom Notification and pass it as a parameter to $this->notify() within sendEmailVerificationNotification in the User Model, as follows:
public function sendEmailVerificationNotification()
{
$this->notify(new \App\Notifications\CustomVerifyEmail);
}
Now, in the created Notification, CustomVerifyEmail, define the way to handle the verification. For example, you can use a custom route to send the email.
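A minimal sketch of such a notification, assuming it extends the framework's VerifyEmail and only customizes the outgoing mail (the subject and wording are illustrative):
<?php

namespace App\Notifications;

use Illuminate\Auth\Notifications\VerifyEmail;
use Illuminate\Notifications\Messages\MailMessage;

class CustomVerifyEmail extends VerifyEmail
{
    public function toMail($notifiable)
    {
        // Reuse the signed verification URL built by the parent class.
        $verificationUrl = $this->verificationUrl($notifiable);

        return (new MailMessage)
            ->subject('Please confirm your email address')
            ->line('Click the button below to verify your email address.')
            ->action('Verify Email Address', $verificationUrl);
    }
}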
How can I manually verify users?
The MustVerifyEmail contract is convenient. However, you may need to take control and manually verify email addresses without sending emails. Why would anyone do so? Reasons include the need to create system users that have no accessible email addresses, importing a list of already-verified email addresses into a migrated app, and others.
So, each manually created user will see the following message when signing in:
The problem lies in the timestamp in the Email Verification Column (email_verified_at) of the user table. When creating users manually, you need to validate them by setting a valid timestamp. In this case, there will be no email verification requests. Here is how you can do this:
markEmailAsVerified()
The markEmailAsVerified() method allows you to verify the user after it’s been created. Check out the following example:
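A minimal sketch, assuming a manually created user (the names and values are illustrative):
$user = User::create([
    'name' => 'System user',
    'email' => 'system@example.com',
    'password' => Hash::make('a-strong-password'),
]);

$user->markEmailAsVerified();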
The most obvious way is to set a valid timestamp in the email_verified_at column. To do this, you need to add the column to the $fillable array in the user model. For example, like this:
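A minimal sketch of that approach, with illustrative values; first 'email_verified_at' joins the $fillable array of App\User, then the timestamp is set on creation:
protected $fillable = [
    'name', 'email', 'password', 'email_verified_at',
];

User::create([
    'name' => 'Imported user',
    'email' => 'imported@example.com',
    'password' => Hash::make('a-strong-password'),
    'email_verified_at' => now(),
]);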
The idea of queuing is to defer the processing of particular tasks, in our case email sending, until a later time. This can speed up processing if your app sends large amounts of emails. It would be useful to implement email queues for the built-in Laravel email verification feature. The simplest way to do that is as follows:
Create a new notification, e.g., CustomVerifyEmailQueued, which extends the existing one, VerifyEmail. Also, the new notification should implement the ShouldQueue contract. This will enable queuing. Here is how it looks:
namespace App\Notifications;
use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Auth\Notifications\VerifyEmail;
class CustomVerifyEmailQueued extends VerifyEmail implements ShouldQueue
{
use Queueable;
}
Then, override sendEmailVerificationNotification() in the User model so it sends the queued notification:
public function sendEmailVerificationNotification()
{
$this->notify(new \App\Notifications\CustomVerifyEmailQueued);
}
We did not touch upon configuration of the queue driver here, which is “sync” by default without actual queuing. If you need some insight on that, check out this Guide to Laravel Email Queues.
Set up email verification in Laravel using the laravel-confirm-email package
The laravel-confirm-email package is an alternative way to set up email verification in Laravel 5.8 and older versions, although it also works for the newest releases. You're likely to choose it if you want to customize email verification in Laravel. For example, the package allows you to set up your own confirmation messages and change all possible redirect routes. Let's see how it works.
Installation
Install the laravel-confirm-email package, as follows:
composer require beyondcode/laravel-confirm-email
You also need to add two fields to your users table: confirmed_at and confirmation_code. For this, publish the migration and the configuration file, as follows:
Update the resources/lang/vendor/confirmation/en/confirmation.php file if you want to use custom error/confirmation messages:
<?php
return [
'confirmation_subject' => 'Email verification',
'confirmation_subject_title' => 'Verify your email',
'confirmation_body' => 'Please verify your email address in order to access this website. Click on the button below to verify your email.',
'confirmation_button' => 'Verify now',
'not_confirmed' => 'The given email address has not been confirmed. <a href=":resend_link">Resend confirmation link.</a>',
'not_confirmed_reset_password' => 'The given email address has not been confirmed. To reset the password you must first confirm the email address. <a href=":resend_link">Resend confirmation link.</a>',
'confirmation_successful' => 'You successfully confirmed your email address. Please log in.',
'confirmation_info' => 'Please confirm your email address.',
'confirmation_resent' => 'We sent you another confirmation email. You should receive it shortly.',
];
You can modify all possible redirect routes (the default value is route('login')) in the registration controller. Keeping in mind that the app was automatically bootstrapped, the registration controller is at app/Http/Controllers/Auth/RegisterController.php. Just include the following values either as properties or as methods returning the route/URL string:
redirectConfirmationTo – opened after the user completes the confirmation (opens the link from the email)
redirectAfterRegistrationTo – opened after the user submits the registration form (the page that says "Go and verify your email now")
redirectAfterResendConfirmationTo – opened when the user asks to resend the email
By redefining the redirect routes you can change not only the flash message but also the status page which you show to the user.
Set up email verification in Laravel 5.4-5.6 using the laravel-email-verification package
The laravel-email-verification package has been deemed an obsolete solution due to the release of MustVerifyEmail. Nevertheless, you can still use the package to handle email verification in older Laravel versions (starting from 5.4).
CanVerifyEmail is a trait to be used in the User model. You can customize this trait to change the activation email.
use Illuminate\Foundation\Auth\User as Authenticatable;
use Lunaweb\EmailVerification\Traits\CanVerifyEmail;
use Lunaweb\EmailVerification\Contracts\CanVerifyEmail as CanVerifyEmailContract;
class User extends Authenticatable implements CanVerifyEmailContract
{
use CanVerifyEmail;
// ...
}
VerifiesEmail is a trait for RegisterController. To let the authenticated users access the verify routes, update the middleware exception:
use Lunaweb\EmailVerification\Traits\VerifiesEmail;
class RegisterController extends Controller
{
use RegistersUsers, VerifiesEmail;
public function __construct()
{
$this->middleware('guest', ['except' => ['verify', 'showResendVerificationEmailForm', 'resendVerificationEmail']]);
$this->middleware('auth', ['only' => ['showResendVerificationEmailForm', 'resendVerificationEmail']]);
}
// ...
}
The package listens for the Illuminate\Auth\Events\Registered event and sends the verification email. Therefore, you don’t have to override register(). If you want to disable this behavior, use the listen_registered_event setting.
Middleware
Add the IsEmailVerified middleware to the app/Http/Kernel.php:
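Per the package's documentation, the registration looks roughly like the following entry in the $routeMiddleware array (verify the class path against the version you install):
protected $routeMiddleware = [
    // ...
    'isEmailVerified' => \Lunaweb\EmailVerification\Middleware\IsEmailVerified::class,
];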
To customize the verification email, override sendEmailVerificationNotification() of the User model. For example:
class User implements CanVerifyEmailContract
{
use CanVerifyEmail;
/**
* Send the email verification notification.
*
* @param string $token The verification mail reset token.
* @param int $expiration The verification mail expiration date.
* @return void
*/
public function sendEmailVerificationNotification($token, $expiration)
{
$this->notify(new MyEmailVerificationNotification($token, $expiration));
}
}
To customize the resend form, use the following command:
Sending a verification email is the most reliable way to check the validity of an email address. The tutorials above will help you implement this feature in your Laravel app. At the same time, if you need to validate a large number of existing addresses, you do not have to send a test email to each of them. There are plenty of online email validators that will do the job for you. In the most extreme case, you can validate an email address manually with mailbox pinging. For more on this, read How to Verify Email Address Without Sending an Email.
A Laravel package to sign documents and optionally generate certified PDFs associated with an Eloquent model.
Support us
Requirements
Laravel pad signature requires PHP 8.0 or 8.1 and Laravel 8 or 9.
Installation
You can install the package via composer:
composer require creagia/laravel-sign-pad
Publish the config and the migration files and migrate the database
php artisan sign-pad:install
Publish the .js assets:
php artisan vendor:publish --tag=sign-pad-assets
This will copy the package assets inside the public/vendor/sign-pad/ folder.
Configuration
In the published config file config/sign-pad.php you'll be able to configure many important aspects of the package, like the route name where users will be redirected after signing the document, or where you want to store the signed documents.
Notice that the redirect_route_name will receive the parameter $uuid with the uuid of the signature model in the database.
Preparing your model
Add the RequiresSignature trait and implement the CanBeSigned contract in the model you would like to be signed.
If you want to generate PDF documents with the signature, you should implement the ShouldGenerateSignatureDocument contract. Define your document template with the getSignatureDocumentTemplate method.
<?php

namespace App\Models;

use Creagia\LaravelSignPad\Concerns\RequiresSignature;
use Creagia\LaravelSignPad\Contracts\CanBeSigned;
use Creagia\LaravelSignPad\Contracts\ShouldGenerateSignatureDocument;
use Creagia\LaravelSignPad\SignatureDocumentTemplate;
use Creagia\LaravelSignPad\Templates\BladeDocumentTemplate;
use Creagia\LaravelSignPad\Templates\PdfDocumentTemplate;
use Illuminate\Database\Eloquent\Model;

class MyModel extends Model implements CanBeSigned, ShouldGenerateSignatureDocument
{
    use RequiresSignature;

    public function getSignatureDocumentTemplate(): SignatureDocumentTemplate
    {
        return new SignatureDocumentTemplate(
            signaturePage: 1,
            signatureX: 20,
            signatureY: 25,
            outputPdfPrefix: 'document', // optional
            // template: new BladeDocumentTemplate('pdf/my-pdf-blade-template'), // Uncomment for a Blade template
            // template: new PdfDocumentTemplate(storage_path('pdf/template.pdf')), // Uncomment for a PDF template
        );
    }
}
A $model object will be automatically injected into the Blade template, so you will be able to access all the needed properties of the model.
Usage
At this point, all you need is to create the form with the sign pad canvas in your template. For the route of the form, you have to call the method getSignatureUrl() from the instance of the model you prepared before:
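A minimal sketch of such a form, assuming a $myModel instance and the Blade component name shipped with the package (double-check the component tag against the installed version):
<form action="{{ $myModel->getSignatureUrl() }}" method="POST">
    @csrf
    <x-creagia-signature-pad />
</form>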
To certify your signature with TCPDF, you will have to create your own SSL certificate with OpenSSL. Otherwise, you can find the TCPDF demo certificate here: TCPDF Demo Certificate.
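One way to generate a self-signed certificate that TCPDF can use (the file name and validity period are just examples):
openssl req -x509 -nodes -days 365000 -newkey rsa:2048 -keyout certificate.crt -out certificate.crt
Writing the key and the certificate into the same .crt file lets TCPDF read both from a single path.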
After generating the certificate, you’ll have to change the value of the variable certify_documents in the config/sign-pad.php file and set it to true.
When certify_documents is set to true, the package will look for the file at the certificate_file path to sign the documents. Feel free to change the location or the name of the certificate file by modifying that value.
Inside the same config/sign-pad.php, we encourage you to fill in all the fields of the certificate_info array to be more specific about the certificate.
Finally, you can change the certificate type by modifying the value of the cert_type variable (2 by default). You can find more information about certificate types in the TCPDF setSignature reference.
Hosting web servers on the internet can be very challenging for a first-timer without a proper guide. Cloud service providers have provided numerous ways to easily spin up servers of any kind in the cloud.
AWS is one of the biggest and most reliable cloud-based options for deploying servers. Here’s how you can get your Linux-based server running in the cloud with AWS EC2.
What Is Amazon EC2?
Amazon Elastic Cloud Compute (EC2) is one of the most popular web services offered by Amazon. With EC2, you can create virtual machines in the cloud with different operating systems and resizable compute capacity. This is very useful for launching secure web servers and making them available on the internet.
How to Create a Linux EC2 Instance
The AWS web console provides an easy-to-navigate interface that allows you to launch an instance without the use of any scripts or code. Here’s a step-by-step guide to launching a Linux-based EC2 instance on AWS. You’ll also learn how to connect to it securely via the console.
Sign in to your existing AWS account or head over to portal.aws.amazon.com to sign up for a new one. Then, search and navigate to the EC2 dashboard.
Locate the Launch instances button in the top-right corner of the screen and click it to launch the EC2 launch wizard.
The first required step is to enter a name for your instance; next, choose the operating system image and version (Amazon Machine Image, or AMI) of the Linux distribution you wish to use. You're free to explore recommended Linux server operating systems other than Ubuntu.
Choose an Instance Type
The different EC2 instance types are made up of various combinations of CPU, memory, storage, and networking capacity, so there are many to pick from depending on your requirements. For demonstration, we'll go with the default (t2.micro) instance type.
In most cases, at least for development and debugging purposes, you'll need to access your instance via SSH, and to do this securely you require a key pair. Selecting one is technically optional, but because you'll likely connect to your instance via SSH later, you should add a key pair.
You can either use an existing key pair or create a new one. To create a new one, click on Create new key pair, and you will see the popup screen below.
Give your key pair a name and choose an encryption type (RSA is the most popular and recommended option, as it is supported across multiple platforms). You also need to choose a file format (PEM or PPK) for the private key, which will be downloaded to your local machine, depending on the SSH client you use.
The Network settings for your EC2 instance come up next. By default, you need to create a new security group to define firewall rules to restrict access to only specific ports on your instance.
It is recommended to restrict SSH connection to only your IP address to reduce the chances of your server getting hacked. You should also allow HTTP traffic if you’ve created the instance to be a web server.
You can always go back and edit your security group to add or remove inbound and outbound rules. For instance, you might add an inbound rule for HTTPS traffic when you set up an SSL certificate for secure HTTP connections.
Storage Settings
By default, EC2 will allocate storage based on the instance type selected, but you have the option to attach an Amazon Elastic Block Store (EBS) volume, which acts like an external storage disk, to your instance.
This isn’t mandatory, but if you want a virtual disk that you can use across multiple instances or move around with ease, you should consider it. You can now review your instance configuration to be sure everything is set up correctly, then click on the Launch Instance button to create your Linux virtual machine.
You will be redirected to a screen where you have the View Instances button. Click it to see your newly launched instance.
How to Connect to a Linux EC2 Instance
Now that the virtual machine is up and running, you can set up a web server on it. It could be an Apache server, a Node.js server, or whatever server you want to use. There are four different ways to connect to an EC2 instance, namely:
EC2 instance connect
Session manager
SSH Client
EC2 serial console
The most common methods of connection are EC2 instance connect and SSH Client. EC2 instance connect is the quickest and easiest way to connect to your EC2 instance and perform your desired operations on it.
To connect to your Linux instance via EC2 instance connect, select it on the dashboard and click Connect.
Select the EC2 instance connect tab and click on the Connect button. This would automatically open up a screen that looks like a command-line interface.
This confirms a successful login to your Linux machine, and you may now begin to set it up for your web server needs. For instance, to create a simple Apache web server, run the following commands:
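Assuming an Ubuntu AMI (as used earlier in this guide), the commands are:
sudo apt update
sudo apt install -y apache2
sudo systemctl start apache2
sudo systemctl enable apache2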
To verify that everything went fine and the Apache server is up and running, check the status using sudo systemctl status apache2.service. If everything is okay, you should have an output similar to the one below:
Finally, you can test the server by copying the Public IPv4 DNS from the instance properties tab and pasting it into your browser. You should see the Apache demo page.
Congratulations on successfully setting up your Linux server in the AWS cloud. You may now build and deploy your applications to production with it.
Deploying Applications in the Cloud With AWS
Now you can easily set up a Linux web server in the cloud with Amazon EC2. While Ubuntu is the most-used operating system for Linux servers, the process to create an EC2 instance is the same for just about any other Linux distribution.
You could also set up different kinds of web servers such as Node.js, Git, Golang, or a Docker container. All you have to do is connect to your instance and carry out the steps to set up your preferred application server.
In web development, data integrity and accuracy are important. Therefore, we need to be sure that we are writing code that securely stores, updates, and deletes data in our databases. In this article, we’ll take a look at what database transactions are, why they’re important, and how to get started using them in Laravel. We will also look at typical problems associated with queued jobs and database transactions.
What are database transactions
Before we get started with transactions in Laravel, let’s take a look at what they are and how they are useful.
A transaction is a wrapper around a group of database queries. It protects your data thanks to the all-or-nothing principle.
Let's say you transfer money from one account to another. In the application, this looks like several operations.
What if one query succeeds and the other fails? Then the integrity of the data will be violated. To avoid such situations, DBMSs introduced the concept of a transaction: an atomic change to data that transfers the database from one consistent state to another. In other words, we include several queries in the transaction, and they must all be executed; if at least one fails, then none of the queries included in the transaction take effect. This is the all-or-nothing principle.
Using database transactions in Laravel
Now that we have an idea about transactions, let’s look at how to use them in Laravel.
First, let’s see what we have in the wallets table
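Transferring funds without a transaction looks something like this (a reconstruction, assuming a Wallet Eloquent model; the second query intentionally uses a wrong column name):
use App\Models\Wallet;

public function transfer()
{
    // No transaction: each query is applied as soon as it runs.
    Wallet::where('id', 1)->decrement('amount', 100);
    Wallet::where('id_', 2)->increment('amount', 100); // <-- wrong column name, this query fails
}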
The first query passed, but the second one failed. In the end, the funds left the first account but never arrived in the second one. Data integrity has been violated. To prevent this from happening, you need to use transactions.
It’s very easy to get started with transactions in Laravel thanks to the transaction() method, which we can access from the DB facade. Based on the previous code example, let’s look at how to use transactions in Laravel.
use Illuminate\Support\Facades\DB;

public function transfer()
{
    DB::transaction(function () {
        Wallet::where('id', 1)->decrement('amount', 100);
        Wallet::where('id_', 2)->increment('amount', 100); // <-- left an error
    });
}
Let's run the code. Both queries are now inside a transaction, so when the second query fails, neither change is applied and the balances stay intact.
If we fix the error and run the code again, all queries complete without errors, the transaction commits, and the amounts on the wallets change.
This was a simple example using a closure. But what if you rely on third-party services whose responses should affect the flow of the code? Not all services throw exceptions; some just return a boolean. For these cases, Laravel has several methods for manually handling transactions:
DB::beginTransaction() – to start a transaction
DB::commit() – to execute all queries after DB::beginTransaction()
DB::rollBack() – to cancel all queries after DB::beginTransaction()
Let's look at them with an example. We have a wallet with a balance of $100 and a card with a balance of $50, and we want to use both balances to transfer $150 to another wallet.
Data integrity has been violated: the service doesn't throw an exception that would abort the transaction, it only returns false, and the code keeps running. As a result, we credit the destination wallet with $150 without deducting $50 from the card.
Now let's use the methods above to handle the transaction manually:
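A minimal sketch, assuming a hypothetical card service that only returns a boolean (the service name and amounts are illustrative):
use Illuminate\Support\Facades\DB;

public function transfer()
{
    DB::beginTransaction();

    Wallet::where('id', 1)->decrement('amount', 100);

    // Hypothetical third-party call: returns false on failure instead of throwing.
    $cardCharged = $this->cardService->withdraw(50);

    if (! $cardCharged) {
        DB::rollBack();   // cancel the wallet decrement as well
        return;
    }

    Wallet::where('id', 2)->increment('amount', 150);

    DB::commit();         // apply both queries atomically
}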
Thus, if a third-party service returns false, calling DB::rollBack() prevents the queries from being applied and preserves the integrity of the data.
If you use Gates in the Laravel project for roles/permissions, you can add one condition to override any gates, making a specific user a super-admin. Here’s how.
Let’s imagine you have this line in app/Providers/AuthServiceProvider.php, as per documentation:
public function boot()
{
Gate::define('update-post', function (User $user, Post $post) {
return $user->id === $post->user_id;
});
}
And in this way, you define more gates like create-post, delete-post, and others.
But then, you want some User with, let’s say, users.role_id == 1 to be able to do ANYTHING with the posts. And with other features, too. In other words, a super-admin.
All you need to do is, within the same boot() method, add these lines:
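A minimal sketch of that override, using Laravel's Gate::before hook (the role_id value is whatever marks your super-admin):
Gate::before(function (User $user, string $ability) {
    if ($user->role_id == 1) {
        return true; // super-admin: every gate check passes
    }

    return null; // fall through to the individual gate definitions
});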