https://media.notthebee.com/articles/634db34ab88c5634db34ab88c6.jpg
Amen to this!
Not the Bee
https://blog.mclaughlinsoftware.com/wp-content/uploads/2022/10/lookup_erd.png
As I teach students how to create tables in MySQL Workbench, it’s always important to review the meaning of the checkbox keys. Then, I need to remind them that every table requires a natural key from our prior discussion on normalization. I explain that a natural key is a compound candidate key (made up of two or more column values), and that it naturally defines uniqueness for each row in a table.
Then, we discuss surrogate keys, which are typically ID column keys. I explain that surrogate keys are driven by sequences in the database. While a number of databases expose the names of their sequences, MySQL treats the sequence as an attribute of the table. In Object-Oriented Analysis and Design (OOAD), that makes the sequence a member of the table by composition rather than aggregation. Surrogate keys are also unique in the table but should never be used to determine uniqueness the way the natural key does. Surrogate keys are also candidate keys, much as a VIN uniquely identifies a vehicle.
In a well-designed table you always have two candidate keys: one describes the unique row and the other assigns a number to it. While you can perform joins by using either candidate key, you should always use the surrogate key for join statements. This means you elect, or choose, the surrogate candidate key as the primary key. Then, you build a unique index for the natural key, which lets you query any unique row with human-decipherable words.
The column attribute table for MySQL Workbench is:
| Key | Meaning |
|---|---|
| PK | Designates a primary key column. |
| NN | Designates a not-null column constraint. |
| UQ | Designates a column contains a unique value for every row. |
| BIN | Designates a VARCHAR data type column so that its values are stored in a case-sensitive fashion. You can’t apply this constraint to other data types. |
| UN | Designates that a column contains an unsigned numeric data type. The possible values run from 0 to the maximum value of the data type, like integer, float, or double. The value 0 isn’t possible when you also select the PK and AI checkboxes, which make the column increment automatically up to the maximum value of its data type. |
| ZF | Designates zero fill, which pads the front of any numeric value with zeros until the column width is consumed, acting like a left-pad function with zeros. |
| AI | Designates AUTO_INCREMENT and should only be checked for a surrogate primary key value. |
All surrogate key columns should have the PK, NN, UN, and AI checkboxes checked. The default behavior checks only the PK and NN checkboxes and leaves the UN and AI boxes unchecked, so you need to check the UN and AI checkboxes yourself for all surrogate key columns. The AI checkbox enables AUTO_INCREMENT behavior, and the UN checkbox ensures you have the maximum range of integer values before you would need to migrate the table to a double-precision number.
Active tables grow quickly, and using a signed int means you run out of rows more quickly. This is an important design consideration because using a signed int adds a maintenance task later. The maintenance task will require changing the data type of all dependent foreign key columns before changing the primary key column’s data type. Assuming your design uses referential integrity constraints, implemented as foreign keys, you will need to alter every dependent foreign key column before you can alter the primary key column itself.
While fixing a less optimal design is a relatively simple scripting exercise for most data engineers, you can avoid the maintenance task altogether: implement all surrogate primary key columns and foreign key columns with an unsigned int as their initial data type.
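As a reference point, here is a hedged sketch of what those recommendations look like in DDL. The table and column names are invented for illustration; the surrogate key column carries the PK, NN, UN, and AI settings, while the compound natural key gets its own unique index:
CREATE TABLE example
( example_id   INT UNSIGNED  NOT NULL AUTO_INCREMENT  -- PK, NN, UN, and AI checkboxes
, code         VARCHAR(30)   NOT NULL                 -- natural key, part 1
, lang         VARCHAR(30)   NOT NULL                 -- natural key, part 2
, description  VARCHAR(255)  NOT NULL
, PRIMARY KEY (example_id)                            -- surrogate key elected as primary key
, UNIQUE INDEX natural_key (code, lang)               -- unique index over the natural key
) ENGINE=InnoDB;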
The following small ERD displays a multi-language lookup table, which is preferable to a monolingual enum data type:

A design uses a lookup table when there are known lists of selections to make, and such lists occur in most, if not all, business applications. Maintaining that list of values is an application setup task that requires the development team to build an entry and update form to input and maintain the lists.
Some MySQL examples demonstrate these types of lists by using the MySQL enum data type. However, the MySQL enum type doesn’t support multilingual implementations, isn’t readily portable to other relational databases, and has a number of other limitations.
A lookup table is a better solution than an enum data type. It typically follows this pattern:
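The ERD itself isn’t reproduced here, so the following is only a hedged sketch of that pattern; the column names follow the description below, while the data types and sizes are assumptions:
CREATE TABLE lookup
( lookup_id    INT UNSIGNED  NOT NULL AUTO_INCREMENT  -- surrogate key
, table_name   VARCHAR(64)   NOT NULL
, column_name  VARCHAR(64)   NOT NULL
, type         VARCHAR(30)   NOT NULL
, lang         VARCHAR(30)   NOT NULL                 -- natural key value copied from the language table
, meaning      VARCHAR(255)  NOT NULL                 -- human-readable value shown in forms
, PRIMARY KEY (lookup_id)
, UNIQUE INDEX natural_key (table_name, column_name, type, lang)
) ENGINE=InnoDB;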
The combination of the table_name, column_name, type, and lang lets you identify unique sets. You can find a monolingual implementation in these two older blog posts:
The column view of the lookup table shows the appropriate design checkboxes:

While most foreign keys use copies of surrogate keys, there are instances when you copy the natural key value from another table rather than the surrogate key. This is done when your application will frequently query the dependent lookup table without a join to the lang table, which means the foreign key value should be a human-friendly value that works as a super key.
A super key is a column or set of columns that uniquely identifies rows within the scope of a relation. For this example, the lang column identifies rows that belong to a language in a multilingual data model. Belonging to a language is the relation between the lookup and language tables. It is also a key when filtering rows with a specific lang value from the lookup table.
You navigate to the foreign key tab to create a lookup_fk foreign key constraint, like:

With this type of foreign key constraint, you copy the lang value from the language table when inserting the lookup table values. Then, your HTML forms can use the lookup table’s meaning column in any of the supported languages, like:
SELECT lookup_id
, type
, meaning
FROM lookup
WHERE table_name = 'some_table_name'
AND column_name = 'some_column_name'
AND lang = 'some_lang_name';
The type column value isn’t used in the WHERE clause to filter the data set because it is unique within the relation of the table_name, column_name, and lang column values. It is always non-unique when you exclude the lang column value, and potentially non-unique for another combination of the table_name and column_name column values.
If I’ve left any questions unanswered, let me know. Otherwise, I hope this helps clarify a best design practice.
Planet MySQL
https://i0.wp.com/lefred.be/wp-content/uploads/2022/10/fn_workflow.png?w=758&ssl=1
In my previous post, I explained how to deal with Performance_Schema and Sys to identify the candidates for Query Optimization but also to understand the workload on the database.
In this article, we will see how we can create an OCI Fn Application that will generate a slow query log from our MySQL Database Service instance and store it to Object Storage.

The creation of the function and its use is similar to the one explained in the previous post about creating a logical dump of a MySQL instance to Object Storage.
We need to create an application and two functions, one to extract the data in JSON format and one in plain text. We also need to deploy an API Gateway that will allow us to call those functions from anywhere (publicly):

Let’s start by creating the application in OCI console:

After we click on Create, we can see our new Application created:

We then need to follow the statement displayed on the rest of the page. We use Cloud Shell:

This looks like this:
fdescamp@cloudshell:~ (us-ashburn-1)$ fn update context registry iad.ocir.io/i***********j/lefred
Current context updated registry with iad.ocir.io/i***********j/lefred
fdescamp@cloudshell:~ (us-ashburn-1)$ docker login -u 'idinfdw2eouj/fdescamp' iad.ocir.io
Password: **********
WARNING! Your password will be stored unencrypted in /home/fdescamp/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
fdescamp@cloudshell:~ (us-ashburn-1)$ fn list apps
NAME ID
slow_query_log ocid1.fnapp.oc1.iad.xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxq
After that in Cloud Shell, we initialize our two new functions:
fdescamp@cloudshell:~ (us-ashburn-1)$ fn init --runtime python mysql_slowlog_txt
Creating function at: ./mysql_slowlog_txt
Function boilerplate generated.
func.yaml created.
fdescamp@cloudshell:~ (us-ashburn-1)$ fn init --runtime python mysql_slowlog
Creating function at: ./mysql_slowlog
Function boilerplate generated.
func.yaml created.
Both functions are initialized; we will start with the one that dumps the queries in JSON format.
As this is the first function of our application, we will define a Dockerfile and the requirements in a file (requirements.txt) in the folder of the function:
fdescamp@cloudshell:~ (us-ashburn-1)$ cd mysql_slowlog
fdescamp@cloudshell:mysql_slowlog (us-ashburn-1)$ ls
Dockerfile func.py func.yaml requirements.txt
We need to add the following content in the Dockerfile:
FROM fnproject/python:3.9-dev as build-stage
WORKDIR /function
ADD requirements.txt /function/
RUN pip3 install --target /python/ --no-cache --no-cache-dir -r requirements.txt && rm -fr ~/.cache/pip /tmp* requirements.txt func.yaml Dockerfile .venv && chmod -R o+r /python
ADD . /function/
RUN rm -fr /function/.pip_cache
FROM fnproject/python:3.9
WORKDIR /function
COPY --from=build-stage /python /python
COPY --from=build-stage /function /function
RUN chmod -R o+r /function && mkdir -p /home/fn && chown fn /home/fn
ENV PYTHONPATH=/function:/python
ENTRYPOINT ["/python/bin/fdk", "/function/func.py", "handler"]
The requirements.txt file needs to contain the following lines:
fdk>=0.1.48
oci
mysql-connector-python
We also need to modify the content of func.yaml file to increase the memory to 2048:
memory: 2048
All the magic of the function resides in the Python file func.py.
Modify the content of the file with the code of the file linked above.
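The linked func.py isn’t reproduced in this post, so the snippet below is only a rough sketch of the general shape such a function can take, not the actual code: it reads the per-call parameters from the invocation payload, pulls statement digests from the sys schema, and writes the JSON result to Object Storage. Every name, query, and object-name format in it is an assumption.
import base64
import io
import json

import mysql.connector
import oci
from fdk import response


def handler(ctx, data: io.BytesIO = None):
    cfg = dict(ctx.Config())            # app-level config set with `fn config app ...`
    body = json.loads(data.getvalue())  # per-call parameters (mds_host, mds_user, ...)

    # Pull the slowest statement digests recorded by the sys schema
    # (assumption: the real function reads more columns and formats them differently).
    cnx = mysql.connector.connect(
        host=body["mds_host"], port=int(body["mds_port"]),
        user=body["mds_user"], password=body["mds_password"])
    cur = cnx.cursor(dictionary=True)
    cur.execute("SELECT * FROM sys.statement_analysis ORDER BY avg_latency DESC LIMIT 100")
    rows = cur.fetchall()
    cnx.close()

    # Build an OCI config from the application variables and upload the result.
    oci_cfg = {
        "user": cfg["oci_user"], "tenancy": cfg["oci_tenancy"],
        "fingerprint": cfg["oci_fingerprint"], "region": cfg["oci_region"],
        "key_content": base64.b64decode(cfg["oci_key"]).decode(),
    }
    object_storage = oci.object_storage.ObjectStorageClient(oci_cfg)
    object_name = "slow_{}.json".format(body["mds_name"])
    object_storage.put_object(cfg["namespace"], cfg["bucket"], object_name,
                              json.dumps(rows, default=str))

    return response.Response(
        ctx,
        response_data=json.dumps({"message": "MySQL Slow Log saved: " + object_name}),
        headers={"Content-Type": "application/json"})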
Once done, we can deploy the function:
fdescamp@cloudshell:mysql_slowlog (us-ashburn-1)$ fn -v deploy --app slow_query_log
Deploying mysql_slowlog to app: slow_query_log
Bumped to version 0.0.1
Using Container engine docker
Building image iad.ocir.io/i**********j/lefred/mysql_slowlog:0.0.1
Dockerfile content
-----------------------------------
FROM fnproject/python:3.9-dev as build-stage
WORKDIR /function
ADD requirements.txt /function/
RUN pip3 install --target /python/ --no-cache --no-cache-dir -r requirements.txt && rm -fr ~/.cache/pip /tmp* requirements.txt func.yaml Dockerfile .venv && chmod -R o+r /python
ADD . /function/
RUN rm -fr /function/.pip_cache
FROM fnproject/python:3.9
WORKDIR /function
COPY --from=build-stage /python /python
COPY --from=build-stage /function /function
RUN chmod -R o+r /function && mkdir -p /home/fn && chown fn /home/fn
ENV PYTHONPATH=/function:/python
ENTRYPOINT ["/python/bin/fdk", "/function/func.py", "handler"]
-----------------------------------
FN_REGISTRY: iad.ocir.io/i**********j/lefred
Current Context: us-ashburn-1
Sending build context to Docker daemon 9.728kB
Step 1/13 : FROM fnproject/python:3.9-dev as build-stage
---> 808c3fde4a95
Step 2/13 : WORKDIR /function
---> Using cache
---> 7953c328cf0e
Step 3/13 : ADD requirements.txt /function/
---> Using cache
---> 5d44308f3376
Step 4/13 : RUN pip3 install --target /python/ --no-cache --no-cache-dir -r requirements.txt && rm -fr ~/.cache/pip /tmp* requirements.txt func.yaml Dockerfile .venv && chmod -R o+r /python
---> Using cache
---> 608ec9527aca
Step 5/13 : ADD . /function/
---> ae85dfe7245e
Step 6/13 : RUN rm -fr /function/.pip_cache
---> Running in 60421dfa5e4d
Removing intermediate container 60421dfa5e4d
---> 06de6b9b1860
Step 7/13 : FROM fnproject/python:3.9
---> d6c82f055722
Step 8/13 : WORKDIR /function
---> Using cache
---> b6bf41dd40e4
Step 9/13 : COPY --from=build-stage /python /python
---> Using cache
---> c895f3bb74f7
Step 10/13 : COPY --from=build-stage /function /function
---> b397ec7769a1
Step 11/13 : RUN chmod -R o+r /function && mkdir -p /home/fn && chown fn /home/fn
---> Running in 5af6a775d055
Removing intermediate container 5af6a775d055
---> fac578e4290a
Step 12/13 : ENV PYTHONPATH=/function:/python
---> Running in fe0bb2f24d6e
Removing intermediate container fe0bb2f24d6e
---> c0460b0ca6f9
Step 13/13 : ENTRYPOINT ["/python/bin/fdk", "/function/func.py", "handler"]
---> Running in 0ed370d1b391
Removing intermediate container 0ed370d1b391
---> 6907b3653dac
Successfully built 6907b3653dac
Successfully tagged iad.ocir.io/i************j/lefred/mysql_slowlog:0.0.1
Parts: [iad.ocir.io i*************j lefred mysql_slowlog:0.0.1]
Using Container engine docker to push
Pushing iad.ocir.io/i********j/lefred/mysql_slowlog:0.0.1 to docker registry...The push refers to repository [iad.ocir.io/i**********j/lefred/mysql_slowlog]
50019643244c: Pushed
b2b65f9f6bdd: Pushed
4ae76999236e: Layer already exists
9dbf415302a5: Layer already exists
fcc297df3f46: Layer already exists
79b7117c006c: Layer already exists
05dc728e5e49: Layer already exists
0.0.1: digest: sha256:e0a693993c7470557fac557cba9a2a4d3e828fc2d21789afb7ebe6163f4d4c14 size: 1781
Updating function mysql_slowlog using image iad.ocir.io/i**********j/lefred/mysql_slowlog:0.0.1...
Now we go into the ../mysql_slowlog_txt directory and modify the func.yaml file to increase the memory to the same amount as the previous function (2048).
Then we copy the content of the linked file into func.py.
When done, we can deploy that function too:
fdescamp@cloudshell:mysql_slowlog (us-ashburn-1)$ cd ../mysql_slowlog_txt/
fdescamp@cloudshell:mysql_slowlog_txt (us-ashburn-1)$ vi func.yaml
fdescamp@cloudshell:mysql_slowlog_txt (us-ashburn-1)$ vi func.py
fdescamp@cloudshell:mysql_slowlog_txt (us-ashburn-1)$ fn -v deploy --app slow_query_log
Our application requires some variables to work. Some will be sent every time the function is called, like which MySQL instance to use, the user’s credentials, which Object Storage bucket to use, … Others will be “hardcoded” so we don’t have to specify them each time (like the tenancy, OCI user, …).
We again use Cloud Shell to set those that won’t be specified on each call:
fn config app slow_query_log oci_fingerprint "fe:XX:XX:XX:XX:XX:XX:XX:XX:XX:XX:XX:XX:XX:XX:3d"
fn config app slow_query_log oci_tenancy 'ocid1.tenancy.oc1..xxxxx'
fn config app slow_query_log oci_user "ocid1.user.oc1..xxxxxx"
fn config app slow_query_log namespace "i********j"
fn config app slow_query_log bucket "lefred-bucket"
fn config app slow_query_log oci_region "us-ashburn-1"
We also need to provide an OCI key as a string. The content of the string can be generated using the base64 command-line program:
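For example, something like this, assuming the API private key lives at the default path used by the OCI CLI (adjust the path to wherever your key file actually is):
base64 -w 0 ~/.oci/oci_api_key.pem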

And then we add it in Cloud Shell like this:
fn config app slow_query_log oci_key '<THE CONTENT OF THE STRING ABOVE>'
If we have the security list configured correctly (the Private Subnet accepting connections on port 3306 from the Public Subnet) and an Object Storage bucket ready (with the name configured earlier), we can already test our functions directly from Cloud Shell:
fdescamp@cloudshell:~ (us-ashburn-1)$ echo -n '{"mds_host": "10.0.1.127",
"mds_user": "admin", "mds_port": "3306", "mds_password": "Passw0rd!",
"mds_name": "lefred-mysql"}' | fn invoke slow_query_log mysql_slowlog
{"message": "MySQL Slow Log saved: slow_lefred-mysql_202210132114.json"}
fdescamp@cloudshell:~ (us-ashburn-1)$ echo -n '{"mds_host": "10.0.1.127",
"mds_user": "admin", "mds_port": "3306", "mds_password": "Passw0rd!",
"mds_name": "lefred-mysql"}' | fn invoke slow_query_log mysql_slowlog_txt
{"message": "MySQL Slow Log saved: slow_lefred-mysql_202210132124.log"}
We can see the files in Object Storage:
In the next article, we will see how to configure the API Gateway to call our application and store the statements on Object Storage.
And finally, we will see the content of those files and how to use them.
Stay tuned!
Planet MySQL
https://assets.amuniversal.com/984b0ea0226b013bc746005056a9545d
Dilbert Daily Strip
https://www.percona.com/blog/wp-content/uploads/2022/10/altinity-6.png
MySQL is an outstanding database for online transaction processing. With suitable hardware, it is easy to execute more than 1M queries per second and handle tens of thousands of simultaneous connections. Many of the most demanding web applications on the planet are built on MySQL. With capabilities like that, why would MySQL users need anything else?
Well, analytic queries for starters. Analytic queries answer important business questions like finding the number of unique visitors to a website over time or figuring out how to increase online purchases. They scan large volumes of data and compute aggregates, including sums, averages, and much more complex calculations besides. The results are invaluable but can bog down online transaction processing on MySQL.
Fortunately, there’s ClickHouse: a powerful analytic database that pairs well with MySQL. Altinity is working closely with our partner Percona to help users add ClickHouse easily to existing MySQL applications. You can read more about our partnership in our recent press release as well as about our joint MySQL-to-ClickHouse solution.
This article provides tips on how to recognize when MySQL is overburdened with analytics and can benefit from ClickHouse’s unique capabilities. We then show three important patterns for integrating MySQL and ClickHouse. The result is more powerful, cost-efficient applications that leverage the strengths of both databases.
Let’s start by digging into some obvious signs that your MySQL database is overburdened with analytics processing.
Tables that drive analytics tend to be very large, rarely have updates, and may also have many columns. Typical examples are web access logs, marketing campaign events, and monitoring data. If you see a few outlandishly large tables of immutable data mixed with smaller, actively updated transaction processing tables, it’s a good sign your users may benefit from adding an analytic database.
Analytic processing produces aggregates, which are numbers that summarize large datasets to help users identify patterns. Examples include unique site visitors per week, average page bounce rates, or counts of web traffic sources. MySQL may take minutes or even hours to compute such values. To improve performance it is common to add complex batch processes that precompute aggregates. If you see such aggregation pipelines, it is often an indication that adding an analytic database can reduce the labor of operating your application as well as deliver faster and more timely results for users.
A final clue is the in-depth questions you don’t ask about MySQL-based applications because it is too hard to get answers. Why don’t users complete purchases on eCommerce sites? Which strategies for in-game promotions have the best payoff in multi-player games? Answering these questions directly from MySQL transaction data often requires substantial time and external programs. It’s sufficiently difficult that most users simply don’t bother. Coupling MySQL with a capable analytic database may be the answer.
MySQL is an outstanding database for transaction processing. Yet the features of MySQL that make it work well–storing data in rows, single-threaded queries, and optimization for high concurrency–are exactly the opposite of those needed to run analytic queries that compute aggregates on large datasets.
ClickHouse on the other hand is designed from the ground up for analytic processing. It stores data in columns, has optimizations to minimize I/O, computes aggregates very efficiently, and parallelizes query processing. ClickHouse can answer complex analytic questions almost instantly in many cases, which allows users to sift through data quickly. Because ClickHouse calculates aggregates so efficiently, end users can pose questions in many ways without help from application designers.
These are strong claims. To understand them it is helpful to look at how ClickHouse differs from MySQL. Here is a diagram that illustrates how each database pulls in data for a query that reads all values of three columns of a table.
MySQL stores table data by rows. It must read the whole row to get data for just three columns. MySQL production systems also typically do not use compression, as it has performance downsides for transaction processing. Finally, MySQL uses a single thread for query processing and cannot parallelize work.

By contrast, ClickHouse reads only the columns referenced in queries. Storing data in columns enables ClickHouse to compress data at levels that often exceed 90%. Finally, ClickHouse stores tables in parts and scans them in parallel.
The amount of data you read, how greatly it is compressed, and the ability to parallelize work make an enormous difference. Here’s a picture that illustrates the reduction in I/O for a query reading three columns.

MySQL and ClickHouse give the same answer. To get it, MySQL reads 59 GB of data, whereas ClickHouse reads only 21 MB. That’s close to 3000 times less I/O, hence far less time to access the data. ClickHouse also parallelizes query execution very well, further improving performance. It is little wonder that analytic queries run hundreds or even thousands of times faster on ClickHouse than on MySQL.
ClickHouse also has a rich set of features to run analytic queries quickly and efficiently. These include a large library of aggregation functions, the use of SIMD instructions where possible, the ability to read data from Kafka event streams, and efficient materialized views, just to name a few.
There is a final ClickHouse strength: excellent integration with MySQL. Here are a few examples.
For all of these reasons, ClickHouse is a natural choice to extend MySQL capabilities for analytic processing.
Just as ClickHouse can add useful capabilities to MySQL, it is important to see that MySQL adds useful capabilities to ClickHouse. ClickHouse is outstanding for analytic processing but there are a number of things it does not do well. Here are some examples.
In fact, MySQL and ClickHouse are highly complementary. Users get the most powerful applications when ClickHouse and MySQL are used together.
There are three main ways to integrate MySQL data with ClickHouse analytic capabilities. They build on each other.
ClickHouse can run queries on MySQL data using the MySQL database engine, which makes MySQL data appear as local tables in ClickHouse. Enabling it is as simple as executing a single SQL command like the following on ClickHouse:
CREATE DATABASE sakila_from_mysql
ENGINE = MySQL('mydb:3306', 'sakila', 'user', 'password')
Here is a simple illustration of the MySQL database engine in action.
The MySQL database engine makes it easy to explore MySQL tables and make copies of them in ClickHouse. ClickHouse queries on remote data may even run faster than in MySQL! This is because ClickHouse can sometimes parallelize queries even on remote data. It also offers more efficient aggregation once it has the data in hand.
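For example (using table and column names from the standard sakila sample schema, purely as an illustration), you can aggregate directly over the remote tables or snapshot one of them into a local MergeTree table:
-- Aggregate over the remote MySQL data.
SELECT rating, count() AS films
FROM sakila_from_mysql.film
GROUP BY rating
ORDER BY films DESC;

-- Make a local ClickHouse copy of the same table.
CREATE TABLE film_local
ENGINE = MergeTree ORDER BY film_id
AS SELECT * FROM sakila_from_mysql.film;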

Migrating large tables with immutable records permanently to ClickHouse can give vastly accelerated analytic query performance while simultaneously unloading MySQL. The following diagram illustrates how to migrate a table containing web access logs from MySQL to ClickHouse.
On the ClickHouse side, you’ll normally use the MergeTree table engine or one of its variants such as ReplicatedMergeTree. MergeTree is the go-to engine for big data on ClickHouse. Here are three important features that will help you get the most out of ClickHouse.

These features can make an enormous difference in performance. We cover them and add more performance tips in Altinity videos (look here and here.) as well as blog articles.
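As an illustration only (the columns are hypothetical), an access-log table on the ClickHouse side typically declares a partition key, an ORDER BY that matches the common filters, and optionally a TTL to expire old data:
CREATE TABLE access_log
(
    event_time  DateTime,
    user_id     UInt64,
    url         String,
    status      UInt16,
    bytes_sent  UInt64
)
ENGINE = MergeTree
PARTITION BY toYYYYMM(event_time)   -- monthly partitions
ORDER BY (user_id, event_time)      -- sorted to match common query filters
TTL event_time + INTERVAL 12 MONTH; -- drop data older than a year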
The ClickHouse MySQL database engine can also be very useful in this scenario. It enables ClickHouse to “see” and select data from remote transaction tables in MySQL. Your ClickHouse queries can join local tables on transaction data whose natural home is MySQL. Meanwhile, MySQL handles transactional changes efficiently and safely.
Migrating tables to ClickHouse generally proceeds as follows. We’ll use the example of the access log shown above.
Migration can take as little as a few days but it’s more common to take weeks to a couple of months in large systems. This helps ensure that everything is properly tested and the roll-out proceeds smoothly.
The other way to extend MySQL is to mirror the data in ClickHouse and keep it up to date using replication. Mirroring allows users to run complex analytic queries on transaction data without (a) changing MySQL and its applications or (b) affecting the performance of production systems.
Here are the working parts of a mirroring setup.
ClickHouse has a built-in way to handle mirroring: the experimental MaterializedMySQL database engine, which reads binlog records directly from the MySQL primary and propagates data into ClickHouse tables. The approach is simple but is not yet recommended for production use. It may eventually be important for 1-to-1 mirroring cases but needs additional work before it can be widely used.

Altinity has developed a new approach to replication using Debezium, Kafka-compatible event streams, and the Altinity Sink Connector for ClickHouse. The mirroring configuration looks like the following.
The externalized approach has a number of advantages. They include working with current ClickHouse releases, taking advantage of fast dump/load programs like mydumper or direct SELECT using MySQL database engine, support for mirroring into replicated tables, and simple procedures to add new tables or reset old ones. Finally, it can extend to multiple upstream MySQL systems replicating to a single ClickHouse cluster.

ClickHouse can mirror data from MySQL thanks to the unique capabilities of the ReplacingMergeTree table engine. It has an efficient method of dealing with inserts, updates, and deletes that is ideally suited for use with replicated data. As mentioned already, ClickHouse cannot update individual rows easily, but it inserts data extremely quickly and has an efficient process for merging rows in the background. ReplacingMergeTree builds on these capabilities to handle changes to data in a “ClickHouse way.”
Replicated table rows use version and sign columns to represent the version of changed rows as well as whether the change is an insert or delete. The ReplacingMergeTree will only keep the last version of a row, which may in fact be deleted. The sign column lets us apply another ClickHouse trick to make those deleted rows inaccessible. It’s called a row policy. Using row policies we can make any row where the sign column is negative disappear.
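As a hedged sketch only (the real schema is generated by the replication tooling and the names here are invented), a mirrored table and its row policy can look like this:
CREATE TABLE orders_mirror
(
    id          UInt64,
    amount      Decimal(10, 2),
    updated_at  DateTime,
    version     UInt64,   -- incremented for every replicated change
    sign        Int8      -- positive for insert/update, negative for delete
)
ENGINE = ReplacingMergeTree(version)  -- keeps only the latest version of each row
ORDER BY id;

-- Hide rows whose last replicated change was a delete.
CREATE ROW POLICY hide_deleted ON orders_mirror
FOR SELECT USING sign > 0 TO ALL;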
Here’s an example of ReplacingMergeTree in action that combines the effect of the version and sign columns to handle mutable data.

Mirroring data into ClickHouse may appear more complex than migration but in fact is relatively straightforward because there is no need to change MySQL schema or applications and the ClickHouse schema generation follows a cookie-cutter pattern. The implementation process consists of the following steps.
At this point, users are free to start running analytics or build additional applications on ClickHouse whilst changes replicate continuously from MySQL.
MySQL to ClickHouse migration is an area of active development both at Altinity as well as the ClickHouse community at large. Improvements fall into three general categories.
Dump/load utilities – Altinity is working on a new utility to move data that reduces schema creation and transfer of data to a single command. We will have more to say on this in a future blog article.
Replication – Altinity is sponsoring the Sink Connector for ClickHouse, which automates high-speed replication, including monitoring as well as integration into Altinity.Cloud. Our goal is similarly to reduce replication setup to a single command.
ReplacingMergeTree – Currently users must include the FINAL keyword on table names to force the merging of row changes. It is also necessary to add a row policy to make deleted rows disappear automatically. There are pull requests in progress to add a MergeTree property to add FINAL automatically in queries as well as make deleted rows disappear without a row policy. Together they will make handling of replicated updates and deletes completely transparent to users.
We are also watching carefully for improvements on MaterializedMySQL as well as other ways to integrate ClickHouse and MySQL efficiently. You can expect further blog articles in the future on these and related topics. Stay tuned!
ClickHouse is a powerful addition to existing MySQL applications. Large tables with immutable data, complex aggregation pipelines, and unanswered questions on MySQL transactions are clear signs that integrating ClickHouse is the next step to provide fast, cost-efficient analytics to users.
Depending on your application, it may make sense to mirror data onto ClickHouse using replication or even migrate some tables into ClickHouse. ClickHouse already integrates well with MySQL and better tooling is arriving quickly. Needless to say, all Altinity contributions in this area are open source, released under Apache 2.0 license.
The most important lesson is to think in terms of MySQL and ClickHouse working together, not as one being a replacement for the other. Each database has unique and enduring strengths. The best applications will build on these to provide users with capabilities that are faster and more flexible than using either database alone.
Percona, well-known experts in open source databases, partners with Altinity to deliver robust analytics for MySQL applications. If you would like to learn more about MySQL integration with ClickHouse, feel free to contact us or leave a message on our forum at any time.
Percona Database Performance Blog
https://opengraph.githubassets.com/cfb4ebaddf85fbe6c915f7eac293d618d78d86d4a2034bf48dc6bf7935d72066/creagia/laravel-sign-pad
A Laravel package to sign documents and optionally generate certified PDFs associated with an Eloquent model.
Laravel Sign Pad requires PHP 8.0 or 8.1 and Laravel 8 or 9.
You can install the package via composer:
composer require creagia/laravel-sign-pad
Publish the config and the migration files and migrate the database
php artisan sign-pad:install
Publish the .js assets:
php artisan vendor:publish --tag=sign-pad-assets
This will copy the package assets inside the public/vendor/sign-pad/ folder.
In the published config file config/sign-pad.php you’ll be able to configure many important aspects of the package, like the route name where users will be redirected after signing the document or where you want to store the signed documents.
Notice that the redirect_route_name will receive the parameter $uuid with the uuid of the signature model in the database.
Add the RequiresSignature trait and implement the CanBeSigned interface in the model you would like to be signed.
<?php

namespace App\Models;

use Creagia\LaravelSignPad\Concerns\RequiresSignature;
use Creagia\LaravelSignPad\Contracts\CanBeSigned;

class MyModel extends Model implements CanBeSigned
{
    use RequiresSignature;
}
If you want to generate PDF documents with the signature, you should implement the ShouldGenerateSignatureDocument interface. Define your document template with the getSignatureDocumentTemplate method.
<?php

namespace App\Models;

use Creagia\LaravelSignPad\Concerns\RequiresSignature;
use Creagia\LaravelSignPad\Contracts\CanBeSigned;
use Creagia\LaravelSignPad\Contracts\ShouldGenerateSignatureDocument;
use Creagia\LaravelSignPad\Templates\BladeDocumentTemplate;
use Creagia\LaravelSignPad\Templates\PdfDocumentTemplate;

class MyModel extends Model implements CanBeSigned, ShouldGenerateSignatureDocument
{
    use RequiresSignature;

    public function getSignatureDocumentTemplate(): SignatureDocumentTemplate
    {
        return new SignatureDocumentTemplate(
            signaturePage: 1,
            signatureX: 20,
            signatureY: 25,
            outputPdfPrefix: 'document', // optional
            // template: new BladeDocumentTemplate('pdf/my-pdf-blade-template'), // Uncomment for Blade template
            // template: new PdfDocumentTemplate(storage_path('pdf/template.pdf')), // Uncomment for PDF template
        );
    }
}
A $model object will be automatically injected into the Blade template, so you will be able to access all the needed properties of the model.
At this point, all you need is to create the form with the sign pad canvas in your template. For the route of the form, you have to call the method getSignatureUrl() from the instance of the model you prepared before:
@if (!$myModel->hasBeenSigned())
    <form action="{{ $myModel->getSignatureUrl() }}" method="POST">
        @csrf
        <div style="text-align: center">
            <x-creagia-signature-pad />
        </div>
    </form>
    <script src=""></script>
@endif
You can retrieve your model signature using the Eloquent relation $myModel->signature. After that, you can use the getSignatureImagePath() method on the relation to get the signature image, and the getSignedDocumentPath() method to get the generated PDF document.
echo $myModel->signature->getSignatureImagePath();
echo $myModel->signature->getSignedDocumentPath();
From the same template, you can change the look of the component by passing some properties:
An example with an app using Tailwind would be:
<x-creagia-signature-pad border-color="#eaeaea" pad-classes="rounded-xl border-2" button-classes="bg-gray-100 px-4 py-2 rounded-xl mt-4" clear-name="Clear" submit-name="Submit" />
To certify your signature with TCPDF, you will have to create your own SSL certificate with OpenSSL. Otherwise, you can find the TCPDF demo certificate here: TCPDF Demo Certificate.
To create your own certificate, use this command:
cd storage/app
openssl req -x509 -nodes -days 365000 -newkey rsa:1024 -keyout certificate.crt -out certificate.crt
More information in the TCPDF documentation
After generating the certificate, you’ll have to change the value of the variable certify_documents in the config/sign-pad.php file and set it to true.
When the variable certify_documents is set to true, the package will look for the file located at the certificate_file path to sign the documents. Feel free to modify the location or the name of the certificate file by changing that value.
Inside the same config/sign-pad.php we encourage you to fill all the fields of the array certificate_info to be more specific with the certificate.
Finally, you can change the certificate type by modifying the value of the variable cert_type (by default 2). You can find more information about certificate types in the TCPDF setSignature reference.
Laravel News Links
https://mailtrap.io/wp-content/uploads/2021/05/mailtrap_home-2.png
Updated on July 29th, 2020.
When a new user clicks on the Sign up button of an app, he or she usually gets a confirmation email with an activation link (see examples here). This is needed to make sure that the user owns the email address entered during the sign-up. After the click on the activation link, the user is authenticated for the app.
From the user’s standpoint, the email verification process is quite simple. From the developer’s perspective, things are much trickier unless your app is built with Laravel. Those who use Laravel 5.7+ have the user email verification available out-of-the-box. For earlier releases of the framework, you can use a dedicated package to add email verification to your project. In this article, we’ll touch upon each solution you can choose.
Since email verification requires one to send emails in Laravel, let’s create a basic project with all the stuff needed for that. Here is the first command to begin with:
composer create-project --prefer-dist laravel/laravel app
Now, let’s create a database using the mysql client and then configure the .env file accordingly:
DB_CONNECTION=mysql
DB_HOST=127.0.0.1
DB_PORT=3306
DB_DATABASE=DB-laravel
DB_USERNAME=root
DB_PASSWORD=root
Run the migrate command to create tables for users, password resets, and failed jobs:
php artisan migrate
Since our Laravel app will send a confirmation email, we need to set up the email configuration in the .env file.
For email testing purposes, we’ll use Mailtrap Email Sandbox, which captures SMTP traffic from staging and allows developers to debug emails without the risk of spamming users.
The Email Sandbox is one of the SMTP drivers in Laravel. All you need to do is sign up and add your credentials to .env, as follows:
MAIL_MAILER=smtp
MAIL_HOST=smtp.mailtrap.io
MAIL_PORT=2525
MAIL_USERNAME=<********> //Your Mailtrap username
MAIL_PASSWORD=<********> //Your Mailtrap password
MAIL_ENCRYPTION=tls
For more on Mailtrap features and functions, read the Mailtrap Getting Started Guide.
In Laravel, you can scaffold the UI for registration, login, and forgot password using the php artisan make:auth command. However, that command was removed in Laravel 6. In the latest releases of the framework, a separate package called laravel/ui is responsible for the login and registration scaffolding with React, Vue, jQuery and Bootstrap layouts. After you install the package, you can use the php artisan ui vue --auth command to scaffold UI with Vue, for example.
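For example, on Laravel 6 and later the scaffolding commands look like this (the vue preset is just one of the available options):
composer require laravel/ui
php artisan ui vue --auth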
MustVerifyEmail contract
The MustVerifyEmail contract is a feature that allows you to send email verification in Laravel by adding a few lines of code to the following files:
Implement the MustVerifyEmail contract in the User model:
<?php
namespace App;
use Illuminate\Notifications\Notifiable;
use Illuminate\Contracts\Auth\MustVerifyEmail;
use Illuminate\Foundation\Auth\User as Authenticatable;
class User extends Authenticatable implements MustVerifyEmail
{
use Notifiable;
protected $fillable = [
'name', 'email', 'password',
];
protected $hidden = [
'password', 'remember_token',
];
}
Add such routes as email/verify and email/resend to the app:
Route::get('/', function () {
return view('welcome');
});
Auth::routes(['verify' => true]);
Route::get('/home', 'HomeController@index')->name('home');
Add the verified and auth middlewares:
<?php
namespace App\Http\Controllers;
use Illuminate\Http\Request;
class HomeController extends Controller
{
public function __construct()
{
$this->middleware(['auth','verified']);
}
public function index()
{
return view('home');
}
}
Now you can test the app.
And that’s what you’ll see in the Mailtrap Demo inbox:
On screenshots above, the default name of the app, Laravel, is used as a sender’s name. You can update the name in the .env file:
APP_NAME=<Name of your app>
To customize notifications, you need to override the sendEmailVerificationNotification method of the App\User class. It is a default method, which calls the notify method to notify the user after the sign-up.
For more on sending notifications in Laravel, read our dedicated blog post.
To override sendEmailVerificationNotification, create a custom Notification and pass it as a parameter to $this->notify() within sendEmailVerificationNotification in the User Model, as follows:
public function sendEmailVerificationNotification()
{
$this->notify(new \App\Notifications\CustomVerifyEmail);
}
Now, in the created Notification, CustomVerifyEmail, define the way to handle the verification. For example, you can use a custom route to send the email.
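As a minimal sketch (the route name custom.verification.verify is hypothetical, and the real notification can do much more), the custom notification can extend the framework’s VerifyEmail and override the URL it sends:
<?php

namespace App\Notifications;

use Illuminate\Auth\Notifications\VerifyEmail;
use Illuminate\Support\Facades\URL;

class CustomVerifyEmail extends VerifyEmail
{
    /**
     * Build the verification URL included in the email.
     */
    protected function verificationUrl($notifiable)
    {
        // Hypothetical named route that handles the verification request.
        return URL::temporarySignedRoute(
            'custom.verification.verify',
            now()->addMinutes(60),
            ['id' => $notifiable->getKey(), 'hash' => sha1($notifiable->getEmailForVerification())]
        );
    }
}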
The MustVerifyEmail class is a great thing to use. However, you may need to take over the control and manually verify email addresses without sending emails. Why would anyone do so? Reasons may include a need to create and add system users that have no accessible email addresses, import a list of email addresses (verified) to a migrated app, and others.
So, each manually created user will see the following message when signing in:
The problem lies in the timestamp in the Email Verification Column (email_verified_at) of the user table. When creating users manually, you need to validate them by setting a valid timestamp. In this case, there will be no email verification requests. Here is how you can do this:
The markEmailAsVerified() method allows you to verify the user after it’s been created. Check out the following example:
$user = User::create([
'name' => 'John Doe',
'email' => 'john.doe@example.com',
'password' => Hash::make('password')
]);
$user->markEmailAsVerified();
The forceCreate() method can do the same but in a slightly different way:
$user = User::forceCreate([
'name' => 'John Doe',
'email' => 'john.doe@example.com',
'password' => Hash::make('password'),
'email_verified_at' => now() //Carbon instance
]);
The most obvious way is to set a valid timestamp in the email_verified_at column. To do this, you need to add the column to the $fillable array in the user model. For example, like this:
protected $fillable = [
'name', 'email', 'password', 'email_verified_at',
];
After that, you can use the email_verified_at value within the create method when creating a user:
$user = User::create([
'name' => 'John Doe',
'email' => 'john.doe@example.com',
'password' => Hash::make('password'),
'email_verified_at' => now() //Carbon instance
]);
The idea of queuing is to defer the processing of particular tasks, in our case email sending, until a later time. This can speed up processing if your app sends large amounts of emails. It would be useful to implement email queues for the built-in Laravel email verification feature. The simplest way to do that is as follows:
Create a new notification, CustomVerifyEmailQueued, which extends the existing one, VerifyEmail. The new notification should also implement the ShouldQueue contract. This will enable queuing. Here is how it looks:
namespace App\Notifications;
use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Auth\Notifications\VerifyEmail;
class CustomVerifyEmailQueued extends VerifyEmail implements ShouldQueue
{
use Queueable;
}
Then, override sendEmailVerificationNotification in the User model so it sends the queued notification:
public function sendEmailVerificationNotification()
{
$this->notify(new \App\Notifications\CustomVerifyEmailQueued);
}
We did not touch upon configuration of the queue driver here, which is “sync” by default without actual queuing. If you need some insight on that, check out this Guide to Laravel Email Queues.
laravel-confirm-email package
The laravel-confirm-email package is an alternative way to set up email verification in 5.8 and older versions of Laravel. It also works, however, for the newest releases. You’re likely to go with it if you’re looking to customize email verification in Laravel. For example, the package allows you to set up your own confirmation messages and change all possible redirect routes. Let’s see how it works.
Install the laravel-confirm-email package, as follows:
composer require beyondcode/laravel-confirm-email
You also need to add two fields to your users table: confirmed_at and confirmation_code. For this, publish the migration and the configuration file, as follows:
php artisan vendor:publish --provider="BeyondCode\EmailConfirmation\EmailConfirmationServiceProvider"
Run the migrations after:
php artisan migrate
We need to replace the default traits with those provided by laravel-confirm-email in the following files:
app\Http\Controllers\Auth\LoginController.php
use Illuminate\Foundation\Auth\AuthenticatesUsers;
Replace it with the laravel-confirm-email trait:
use BeyondCode\EmailConfirmation\Traits\AuthenticatesUsers;
app\Http\Controllers\Auth\RegisterController.php
use Illuminate\Foundation\Auth\RegistersUsers;
Replace it with the laravel-confirm-email trait:
use BeyondCode\EmailConfirmation\Traits\RegistersUsers;
app\Http\Controllers\Auth\ForgotPasswordController.php
use Illuminate\Foundation\Auth\SendsPasswordResetEmails;
Replace it with the laravel-confirm-email trait:
use BeyondCode\EmailConfirmation\Traits\SendsPasswordResetEmails;
Add the routes to app/routes/web.php:
Route::name('auth.resend_confirmation')->get('/register/confirm/resend', 'Auth\RegisterController@resendConfirmation');
Route::name('auth.confirm')->get('/register/confirm/{confirmation_code}', 'Auth\RegisterController@confirm');
To set up flash messages that show up after a user clicks on the verification link, append the code to the following files:
resources\views\auth\login.blade.php
@if (session('confirmation'))
<div class="alert alert-info" role="alert">
{!! session('confirmation') !!}
</div>
@endif
@if ($errors->has('confirmation') > 0 )
<div class="alert alert-danger" role="alert">
{!! $errors->first('confirmation') !!}
</div>
@endif
resources\views\auth\passwords\email.blade.php
@if ($errors->has('confirmation') > 0 )
<div class="alert alert-danger" role="alert">
{!! $errors->first('confirmation') !!}
</div>
@endif
Update the resources/lang/vendor/confirmation/en/confirmation.php file if you want to use custom error/confirmation messages:
<?php
return [
'confirmation_subject' => 'Email verification',
'confirmation_subject_title' => 'Verify your email',
'confirmation_body' => 'Please verify your email address in order to access this website. Click on the button below to verify your email.',
'confirmation_button' => 'Verify now',
'not_confirmed' => 'The given email address has not been confirmed. <a href=":resend_link">Resend confirmation link.</a>',
'not_confirmed_reset_password' => 'The given email address has not been confirmed. To reset the password you must first confirm the email address. <a href=":resend_link">Resend confirmation link.</a>',
'confirmation_successful' => 'You successfully confirmed your email address. Please log in.',
'confirmation_info' => 'Please confirm your email address.',
'confirmation_resent' => 'We sent you another confirmation email. You should receive it shortly.',
];
You can modify all possible redirect routes (the default value is route('login')) in the registration controller. Keeping in mind that the app was automatically bootstrapped, the registration controller is at app/Http/Controllers/Auth/RegisterController.php. Just include the following values either as properties or as methods returning the route/URL string:
redirectConfirmationTo – is opened after the user completed the confirmation (opened the link from the email)
redirectAfterRegistrationTo – is opened after the user submitted the registration form (it’s the one where “Go and verify your email now”)
redirectAfterResendConfirmationTo – is opened when you ask to resend the email
By redefining the redirect routes you can change not only the flash message but also the status page which you show to the user.
laravel-email-verification package
The laravel-email-verification package has been deemed an obsolete solution due to the release of MustVerifyEmail. Nevertheless, you can still use the package to handle email verification in older Laravel versions (starting from 5.4).
Install the package, as follows:
composer require josiasmontag/laravel-email-verification
Register the service provider in the configuration file (config/app.php):
'providers' => [
Lunaweb\EmailVerification\Providers\EmailVerificationServiceProvider::class,
],
In Laravel 5.5, this should have been done automatically, but it did not work for us (version 5.5.48).
You need to update the users table with a verified column. For this, you can run the package’s migration:
php artisan migrate --path="/vendor/josiasmontag/laravel-email-verification/database/migrations"
If you want to customize the migration, use the following command:
php artisan vendor:publish --provider="Lunaweb\EmailVerification\Providers\EmailVerificationServiceProvider" --tag="migrations"
And run the migrations after:
php artisan migrate
CanVerifyEmail is a trait to be implemented in the User Model. You can customize this trait to change the activation email address.
use Illuminate\Foundation\Auth\User as Authenticatable;
use Lunaweb\EmailVerification\Traits\CanVerifyEmail;
use Lunaweb\EmailVerification\Contracts\CanVerifyEmail as CanVerifyEmailContract;
class User extends Authenticatable implements CanVerifyEmailContract
{
use CanVerifyEmail;
// ...
}
VerifiesEmail is a trait for RegisterController. To let the authenticated users access the verify routes, update the middleware exception:
use Lunaweb\EmailVerification\Traits\VerifiesEmail;
class RegisterController extends Controller
{
use RegistersUsers, VerifiesEmail;
public function __construct()
{
$this->middleware('guest', ['except' => ['verify', 'showResendVerificationEmailForm', 'resendVerificationEmail']]);
$this->middleware('auth', ['only' => ['showResendVerificationEmailForm', 'resendVerificationEmail']]);
}
// ...
}
The package listens for the Illuminate\Auth\Events\Registered event and sends the verification email. Therefore, you don’t have to override register(). If you want to disable this behavior, use the listen_registered_event setting.
Add the IsEmailVerified middleware to the app/Http/Kernel.php:
protected $routeMiddleware = [
// …
'isEmailVerified' => \Lunaweb\EmailVerification\Middleware\IsEmailVerified::class,
];
And apply it in routes/web.php:
<?php
Route::group(['middleware' => ['web', 'auth', 'isEmailVerified']], function () {
// Verification
Route::get('register/verify', 'App\Http\Controllers\Auth\RegisterController@verify')->name('verifyEmailLink');
Route::get('register/verify/resend', 'App\Http\Controllers\Auth\RegisterController@showResendVerificationEmailForm')->name('showResendVerificationEmailForm');
Route::post('register/verify/resend', 'App\Http\Controllers\Auth\RegisterController@resendVerificationEmail')->name('resendVerificationEmail')->middleware('throttle:2,1');
});
To customize the verification email, override sendEmailVerificationNotification() of the User model. For example:
class User implements CanVerifyEmailContract
{
use CanVerifyEmail;
/**
* Send the email verification notification.
*
* @param string $token The verification mail reset token.
* @param int $expiration The verification mail expiration date.
* @return void
*/
public function sendEmailVerificationNotification($token, $expiration)
{
$this->notify(new MyEmailVerificationNotification($token, $expiration));
}
}
To customize the resend form, use the following command:
php artisan vendor:publish --provider="Lunaweb\EmailVerification\Providers\EmailVerificationServiceProvider" --tag="views"
Path to the template: resources/views/vendor/emailverification/resend.blade.php
To customize messages and the language used, use the following command:
php artisan vendor:publish --provider="Lunaweb\EmailVerification\Providers\EmailVerificationServiceProvider" --tag="translations"
Path to the files: resources/lang/
Sending a verification email is the most reliable way to check the validity of an email address. The tutorials above will help you implement this feature in your Laravel app. At the same time, if you need to validate a large number of existing addresses, you do not have to send a test email to each of them. There are plenty of online email validators that will do the job for you. In the most extreme case, you can validate an email address manually with mailbox pinging. For more on this, read How to Verify Email Address Without Sending an Email.
Laravel News Links
https://static1.makeuseofimages.com/wordpress/wp-content/uploads/2022/10/Linux_server_in_the_cloud_cover-1.jpg
Hosting web servers on the internet can be very challenging for a first-timer without a proper guide. Cloud service providers have provided numerous ways to easily spin up servers of any kind in the cloud.
AWS is one of the biggest and most reliable cloud-based options for deploying servers. Here’s how you can get your Linux-based server running in the cloud with AWS EC2.
Amazon Elastic Compute Cloud (EC2) is one of the most popular web services offered by Amazon. With EC2, you can create virtual machines in the cloud with different operating systems and resizable compute capacity. This is very useful for launching secure web servers and making them available on the internet.
The AWS web console provides an easy-to-navigate interface that allows you to launch an instance without the use of any scripts or code. Here’s a step-by-step guide to launching a Linux-based EC2 instance on AWS. You’ll also learn how to connect to it securely via the console.
Sign in to your existing AWS account or head over to portal.aws.amazon.com to sign up for a new one. Then, search and navigate to the EC2 dashboard.
Locate the Launch instances button in the top-right corner of the screen and click it to launch the EC2 launch wizard.
The first required step is to enter a name for your instance; next, you choose the operating system image and version (Amazon Machine Image, or AMI) of the Linux distribution you wish to use. You’re also free to explore recommended Linux server operating systems other than Ubuntu.
The different EC2 instance types are made up of various combinations of CPU, memory, storage, and networking power. There are up to 10 different instance types you can pick from, depending on your requirements. For demonstration, we’ll go with the default (t2.micro) instance type.
AWS has an article on choosing the right instance type for your EC2 virtual machine, which you can use as a reference.
In most cases, at least for development and debugging purposes, you might need to access your instance via SSH, and to do this securely, you require a key pair. It is an optional configuration, but because you might connect to your instance via SSH later, you should add a key pair.
You can either use an existing key pair or create a new one. To create a new one, click on Create new key pair, and you will see the popup screen below.
Give your key pair a name, and choose an encryption type (RSA is the most popular and recommended option, as it is supported across multiple platforms). You also need to choose a file format (PEM or PPK) for the private key that will be downloaded to your local machine, depending on the SSH client you use.
The Network settings for your EC2 instance come up next. By default, you need to create a new security group to define firewall rules to restrict access to only specific ports on your instance.
It is recommended to restrict SSH connection to only your IP address to reduce the chances of your server getting hacked. You should also allow HTTP traffic if you’ve created the instance to be a web server.
You can always go back to edit your security group rules to add or remove inbound and outbound rules. For instance, adding inbound rules for HTTPS traffic when you set up an SSL certificate for secure HTTP connections.
By default, EC2 will allocate storage based on the instance type selected. But you have an option to attach an Amazon Elastic Block Storage volume (which acts like an external storage disk) to your instance.
This isn’t mandatory, but if you want a virtual disk that you can use across multiple instances or move around with ease, you should consider it. You can now review your instance configuration to be sure everything is set up correctly, then click on the Launch Instance button to create your Linux virtual machine.
You will be redirected to a screen where you have the View Instances button. Click it to see your newly launched instance.
Now that the virtual machine is up and running, you can set up a web server in it. It could be an Apache server, Node.js server, or whatever server you want to use. There are up to four different ways to connect to an EC2 instance.
The most common methods of connection are EC2 instance connect and SSH Client. EC2 instance connect is the quickest and easiest way to connect to your EC2 instance and perform your desired operations on it.
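If you prefer the SSH client route, the connection uses the key pair downloaded earlier; the key file name below is only an example, and the default user is ubuntu on Ubuntu AMIs (ec2-user on Amazon Linux):
chmod 400 my-key.pem
ssh -i my-key.pem ubuntu@<your-public-ipv4-dns>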
To connect to your Linux instance via EC2 instance connect, select it on the dashboard and click Connect.
Select the EC2 instance connect tab and click on the Connect button. This would automatically open up a screen that looks like a command-line interface.
This confirms a successful login to your Linux machine, and you may now begin to set it up for your web server needs. For instance, to create a simple Apache web server, run the following commands:
sudo apt-get update -y
sudo apt-get install apache2 -y
sudo systemctl start apache2.service
To verify that everything went fine and the Apache server is up and running, check the status using sudo systemctl status apache2.service. If everything is okay, you should have an output similar to the one below:
Finally, you can test the server by copying the Public IPv4 DNS from the instance properties tab and pasting it into your browser. You should see the Apache demo page.
Congratulations on successfully setting up your Linux server in the AWS cloud. You may now build and deploy your applications to production with it.
Now you can easily set up a Linux web server in the cloud with Amazon EC2. While Ubuntu is the most-used operating system for Linux servers, the process to create an EC2 instance is the same for just about any other Linux distribution.
You could also set up different kinds of web servers such as Node.js, Git, Golang, or a Docker container. All you have to do is connect to your instance and carry out the steps to set up your preferred application server.
MUO – Feed
https://kongulov.dev/assets/images/posts/database-transactions-in-laravel.png
In web development, data integrity and accuracy are important. Therefore, we need to be sure that we are writing code that securely stores, updates, and deletes data in our databases. In this article, we’ll take a look at what database transactions are, why they’re important, and how to get started using them in Laravel. We will also look at typical problems associated with queued jobs and database transactions.
Before we get started with transactions in Laravel, let’s take a look at what they are and how they are useful.
A transaction is a wrapper around a group of database queries. It protects your data thanks to the all-or-nothing principle.
Let's say you transfer money from one account to another. In the application, this looks like several operations:
UPDATE `wallets` SET `amount` = `amount` - 100 WHERE `id` = 1;
UPDATE `wallets` SET `amount` = `amount` + 100 WHERE `id` = 2;
What if one query succeeds and the other fails? The integrity of the data would be violated. To avoid such situations, the DBMS introduced the concept of a transaction: an atomic change to the data that moves the database from one consistent state to another. In other words, we group several queries into a transaction, and either all of them are executed or, if even one of them fails, none of them take effect. This is the all-or-nothing principle.
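At the SQL level, the same transfer wrapped in a transaction might look like the sketch below (MySQL syntax; on failure you would issue ROLLBACK instead of COMMIT):
START TRANSACTION;
UPDATE `wallets` SET `amount` = `amount` - 100 WHERE `id` = 1;
UPDATE `wallets` SET `amount` = `amount` + 100 WHERE `id` = 2;
COMMIT; -- or ROLLBACK; if either update fails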
Now that we have an idea about transactions, let’s look at how to use them in Laravel.
First, let's see what we have in the wallets table:
| id | amount |
|---|---|
| 1 | 1000 |
| 2 | 0 |
I have intentionally introduced a mistake in the transfer method so we can see the consequences of violating data integrity:
public function transfer()
{
    Wallet::where('id', 1)->decrement('amount', 100);
    Wallet::where('id_', 2)->increment('amount', 100);
}
After executing the code, check the database:
| id | amount |
|---|---|
| 1 | 900 |
| 2 | 0 |
The first query succeeded, but the second one failed. As a result, the funds left the first wallet but never arrived in the second. Data integrity has been violated. To prevent this from happening, you need to use transactions.
It's very easy to get started with transactions in Laravel thanks to the transaction() method on the DB facade. Let's rework the previous example to use it.
use Illuminate\Support\Facades\DB;

public function transfer()
{
    DB::transaction(function () {
        Wallet::where('id', 1)->decrement('amount', 100);
        Wallet::where('id_', 2)->increment('amount', 100); // <-- the intentional error is still here
    });
}
Let's run the code again. This time both queries are inside a transaction, so if either of them fails, neither change should be applied.
| id | amount |
|---|---|
| 1 | 1000 |
| 2 | 0 |
An error occurred while executing the second query, so the transaction as a whole was rolled back. The wallet balances have not changed.
Let's fix the transfer method and run the code again:
use Illuminate\Support\Facades\DB;

public function transfer()
{
    DB::transaction(function () {
        Wallet::where('id', 1)->decrement('amount', 100);
        Wallet::where('id', 2)->increment('amount', 100);
    });
}
After executing the code, check the database:
| id | amount |
|---|---|
| 1 | 900 |
| 2 | 100 |
All queries completed without errors, so the transaction was committed. The wallet balances have changed as expected.
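As a side note (not covered in the original example), DB::transaction() also accepts an optional second argument: the number of times to attempt the transaction when a deadlock occurs.
DB::transaction(function () {
    // ...
}, 5); // attempt the transaction up to 5 times if a deadlock occurs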
This was a simple example using a closure. But what if you depend on third-party services whose responses should determine whether the transaction goes through? Not all services throw exceptions; some simply return a boolean. For such cases, Laravel provides several methods for handling transactions manually.
- DB::beginTransaction() – starts a transaction
- DB::commit() – applies all queries executed after DB::beginTransaction()
- DB::rollBack() – cancels all queries executed after DB::beginTransaction()
Let's consider them with an example. We have a wallet with a balance of $100 and a card with a balance of $50, and we want to use both balances to transfer $150 to another wallet.
use App\Services\ThirdPartyService;
use Illuminate\Support\Facades\DB;

private ThirdPartyService $thirdPartyService;

public function __construct(ThirdPartyService $thirdPartyService)
{
    $this->thirdPartyService = $thirdPartyService;
}

public function transfer()
{
    DB::transaction(function () {
        Wallet::where('id', 1)->decrement('amount', 100);
        $this->thirdPartyService->withdrawal(50); // <-- returns false
        Wallet::where('id', 2)->increment('amount', 150);
    });
}
Data integrity has been violated again. The service does not throw an exception that would abort the transaction; it only returns false, so the code keeps running. As a result, we credit the destination wallet with $150 without ever deducting the $50 from the card.
Now let's use the methods above to handle the transaction manually:
use App\Services\ThirdPartyService;
use Illuminate\Support\Facades\DB;

private ThirdPartyService $thirdPartyService;

public function __construct(ThirdPartyService $thirdPartyService)
{
    $this->thirdPartyService = $thirdPartyService;
}

public function transfer()
{
    DB::beginTransaction();

    Wallet::where('id', 1)->decrement('amount', 100);

    if (!$this->thirdPartyService->withdrawal(50)) {
        DB::rollBack();

        return;
    }

    Wallet::where('id', 2)->increment('amount', 150);

    DB::commit();
}
Thus, if the third-party service returns false, calling DB::rollBack() discards the pending queries and preserves the integrity of the data.
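One detail worth noting (a sketch, not part of the original example): if anything between beginTransaction() and commit() can throw an exception, you would normally wrap the manual calls in try/catch so the transaction is always rolled back; the closure-based DB::transaction() does this for you automatically.
public function transfer()
{
    DB::beginTransaction();

    try {
        Wallet::where('id', 1)->decrement('amount', 100);

        if (!$this->thirdPartyService->withdrawal(50)) {
            DB::rollBack();

            return;
        }

        Wallet::where('id', 2)->increment('amount', 150);

        DB::commit();
    } catch (\Throwable $e) {
        // Any exception also discards the pending changes
        DB::rollBack();

        throw $e;
    }
}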
Laravel News Links
https://laraveldaily.com/storage/117/laravel-gates-override-superadmin.png
If you use Gates in the Laravel project for roles/permissions, you can add one condition to override any gates, making a specific user a super-admin. Here’s how.
Let's imagine you have this gate defined in app/Providers/AuthServiceProvider.php, as per the documentation:
public function boot()
{
    Gate::define('update-post', function (User $user, Post $post) {
        return $user->id === $post->user_id;
    });
}
And in this way, you define more gates like create-post, delete-post, and others.
But then, you want some User with, let’s say, users.role_id == 1 to be able to do ANYTHING with the posts. And with other features, too. In other words, a super-admin.
All you need to do is, within the same boot() method, add these lines:
Gate::before(function ($user, $ability) {
    if ($user->role_id == 1) {
        return true;
    }
});
Note that when the condition isn't met, the closure returns nothing (null), which tells Laravel to fall back to the normal gate checks rather than denying access. Depending on your roles/permissions logic, you may change the condition, for example:
Gate::before(function ($user, $ability) {
    if ($user->hasPermission('root')) {
        return true;
    }
});
In other words, for any $ability you return true if the User has a certain role or permission.
Then, Laravel wouldn’t even check the Gate logic, and would just grant that user access.
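To see the override in action, the usual authorization checks stay exactly the same; only the outcome changes for the super-admin. A quick sketch, reusing the gate and Post model from the example above:
use Illuminate\Support\Facades\Gate;

$post = Post::first();

// For the currently authenticated user:
Gate::allows('update-post', $post); // true for a role_id == 1 user, even if they don't own the post

// Or for a specific user instance:
$user->can('update-post', $post);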
Of course, be careful with this, because one wrong condition may grant access to someone who is not a super-admin.
You can read more about Gates and permissions in the official documentation.
Laravel News Links