A MyRocks Use Case


I wrote this post on MyRocks because I believe it is the most interesting new MySQL storage engine to have appeared over the last few years. Although MyRocks is very efficient for writes, I chose a more generic workload that will provide a different MyRocks use case.

The use case is the TPC-C benchmark, executed not on a high-end server but on a lower-spec virtual machine that is I/O limited, as is common with AWS EBS volumes. I decided to use a virtual machine with two CPU cores, four GB of memory, and storage limited to a maximum of 1,000 IOPS of 16KB. The storage device has performance characteristics pretty similar to an AWS gp2 EBS volume of about 330 GB in size. I emulated these limits using the KVM iotune settings in my lab.


MyRocks and RocksDB

If you wonder what is the difference between MyRocks and RocksDB, consider MyRocks as the piece of code, or the glue, that allows MySQL to store data in RocksDB. RocksDB is a very efficient key-value store based on LSM trees. MyRocks stores table rows in RocksDB using an index id value concatenated with the primary key as the key and then the internal MySQL binary row representation as the value. MyRocks handles indexes in a similar fashion. There are obviously tons of details but that is the main principle behind MyRocks. Inside MyRocks, there is an embedded instance of RocksDB running.
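That key-value mapping can be sketched roughly in Python. The 4-byte index id prefix and the dict standing in for RocksDB are illustrative assumptions, not the actual MyRocks on-disk format:

```python
import struct

def myrocks_key(index_id: int, primary_key: bytes) -> bytes:
    # MyRocks prefixes every key with an index id so that all rows of
    # one table/index sit adjacent in the LSM tree. A fixed 4-byte,
    # big-endian prefix is assumed here for illustration.
    return struct.pack(">I", index_id) + primary_key

def myrocks_put(kv_store: dict, index_id: int, primary_key: bytes, row: bytes) -> None:
    # The value is the internal binary row image; RocksDB itself only
    # ever sees opaque key and value bytes.
    kv_store[myrocks_key(index_id, primary_key)] = row

store = {}
myrocks_put(store, 256, b"\x00\x00\x00\x01", b"<binary row image>")
```

Secondary indexes follow the same scheme with their own index id, which is why lookups by index id prefix are efficient range scans in RocksDB.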



I used a TPC-C dataset with a scale factor of 200. As seen in the figure below, the dataset sizes are very different between InnoDB and MyRocks. While with InnoDB the size is 20GB, it is only 4.3GB with MyRocks. This is a tribute to the efficient compression capabilities of MyRocks.

InnoDB and MyRocks dataset sizes


A keen observer will quickly realize the compressed dataset size with MyRocks is roughly the same as the amount of memory of the virtual machine. This is not an accident; it is on purpose. I want to illustrate, using an admittedly obvious use case, that you can’t apply general rules like “InnoDB is faster for reads” or “MyRocks is only good for writes”. A careful answer would be: “it depends…”


TPC-C on MyRocks

In order to be able to run the sysbench TPC-C script, you need to use a binary collation and the read-committed isolation level. You must also avoid foreign key constraints. A typical sysbench invocation would look like this:

./tpcc.lua --mysql-host= --mysql-user=sysbench --mysql-password=sysbench --mysql-ssl=off \
   --mysql-db=sysbench --threads=4 --scale=200 --use_fk=0 --mysql_storage_engine=rocksdb \
   --mysql_table_options="COLLATE latin1_bin" --trx_level=RC --report-interval=10 --time=3600 run

I used a rocksdb_block_cache_size of 512MB. I wanted most of the memory to be available for the file cache, where the compressed SST files will be cached. The block cache just needs to be large enough to keep the index and filter blocks in memory. In terms of compression, the relevant settings in the column family options are:
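A representative configuration for this layout would look something like the following my.cnf fragment (illustrative values, not necessarily the exact ones used in the benchmark):

```ini
# Small block cache: just enough for index and filter blocks;
# the filesystem cache holds the compressed SST files.
rocksdb_block_cache_size=512M
# LZ4 for the upper levels, ZStd for the bottommost level.
rocksdb_default_cf_options=compression=kLZ4Compression;bottommost_compression=kZSTD
```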


MyRocks uses ZStd:1 compression for the bottom level and LZ4 for the upper levels. The bottom-level compression is really critical as it contains most of the data.

Being an LSM-type storage engine, RocksDB must frequently perform level compactions. Level compactions consume IOPS, and in environments where IOPS are scarce, they impact performance. Fortunately, RocksDB has the variable rocksdb_rate_limiter_bytes_per_sec to limit the impact of compactions: the IO bandwidth used by the background compaction threads is capped by this parameter. The following figure illustrates the impacts.


As the filesystem cache and the block cache warm up, the TPC-C transaction rate rises from 50 to around 175/s. After roughly 500s, the need for compaction arises and the performance drops. With no rate limit (0), the background threads consume too many IOPS and the compaction adversely affects the workload. With lower values of rocksdb_rate_limiter_bytes_per_sec, the impacts are reduced and the compactions are spread over longer periods of time.

For this environment, a rate limit of 4 MB/s yields the smallest performance drops. Once warmed, the performance level never fell below 100 Trx/s. If you set rocksdb_rate_limiter_bytes_per_sec too low, like at 1MB/s, compaction cannot keep up and processing has to stall for some time. You should allocate enough bandwidth for compaction to avoid these stalls.
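In configuration terms, the rate that worked best here corresponds to the following setting (the variable takes bytes per second):

```ini
# 4 MB/s compaction write rate limit: 4 * 1024 * 1024 bytes
rocksdb_rate_limiter_bytes_per_sec=4194304
```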

Long term stability

Over time, as data accumulates in the RocksDB LSM tree, performance can degrade. Using the 2 MB/s rate limiter, I pushed the runtime to 10 hours and observed very little degradation as shown in the following figure.

MyRocks performance stability

There are of course many compaction events but the performance baseline remains stable.


MyRocks Vs InnoDB

Now, how does this workload perform on InnoDB? InnoDB is more IO bound than MyRocks, essentially because the 20GB dataset is large relative to the 3GB buffer pool.

MyRocks Vs InnoDB

The compaction event diminishes MyRocks performance but even then, the transaction rate stays well above the InnoDB one. Over the course of one hour, InnoDB executed 125k transactions while MyRocks achieved in excess of 575k transactions. Even if InnoDB uses compression (CMP8k), the performance level is still much lower.


I hope this post has raised your interest in the MyRocks storage engine. If you are paying too much for cloud-based storage and IOPs, make sure you evaluate MyRocks as it has super compression capabilities and is IO efficient.

Note: all the raw results, scripts, and configuration files used for this post can be found on GitHub.

Planet MySQL

Making Black Powder


Warning: these procedures create a low explosive. You can hurt yourself or others if you do something stupid.

When I was in a situation where I had lost my home, I had to move into temporary housing for a bit. I could not take my firework chemicals with me and needed to dispose of them. The method I chose was to simply ignite the burnables and dispose of them that way.

I would move across the road to a safe location, pour a pound or more of BP on the ground. Add a slow fuse. Weigh the fuse down with rocks to keep it stretched out. Light the end of the fuse and move away. 30 to 60 seconds later the pile would go up with a POOF and a cloud of smoke along with a flash of light. No BOOM. Very safe.

The guy I was working with asked to do a pile. I kept working on what I needed to do. Even though he had watched me weigh the fuse down to stretch it out, he just stuck the fuse in the pile like a birthday candle. When he lit the fuse sparks from the fuse landed on the pile and it went off.

He suffered major burns. His sunglasses melted. All exposed skin was blistered. EMS was called. He was transported to the local hospital and from there a life flight took him to the regional burn center.


There are only three components that go into black powder:

  • 75% Potassium Nitrate (KNO3)
  • 10% Sulfur
  • 15% Carbon

All percentages are given BY WEIGHT. While KNO3 makes up 75% of the BP mixture it is not the largest by volume. That goes to carbon.
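For a batch of any total weight, the component weights follow directly from those percentages. A quick sanity check (grams are just an example unit):

```python
def batch_weights(total_grams: float) -> dict:
    # Black powder ratios by weight: 75% KNO3, 10% sulfur, 15% carbon.
    return {
        "KNO3": total_grams * 0.75,
        "sulfur": total_grams * 0.10,
        "carbon": total_grams * 0.15,
    }

# A 10 g test batch works out to 7.5 g KNO3, 1.0 g sulfur, 1.5 g carbon.
print(batch_weights(10))
```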

You will need to purchase some of these, but you can make the carbon yourself. The cheapest choice is to buy the KNO3 in pellet form, which is used for many things, one of which is making fuel from vegetable oils. If you wanted, you could get a 55 gallon drum of the stuff delivered to you.

Pure sulfur is also easy to purchase. It is used in many processes.

Making carbon is actually making charcoal. Not “charcoal briquettes” but actual charcoal. To make charcoal you need a good hardwood, a heat source, and an airtight container.

What you are going to do, in essence, is to cook your hardwood into charcoal. Start by turning your hardwood into small chunks. You want as much surface area as possible without making chips or sawdust. Once this is done you need to cook it.

Find an airtight metal can. I purchased an empty paint can from the hardware store. Poke one small hole in the center of the lid. Fill the container with your hardwood. Then put the lid with hole back on.

As the wood cooks it will emit gasses. The hole allows those gasses to escape. The gasses also displace all of the oxygen inside the can. One of the cool things you can do is actually light the gasses that are escaping on fire.

You cook your wood until no more gas is escaping.

If you did everything correctly you should have no ash in the can and lots of charcoal. You might still have some wood inside that charcoal.

Now that you have these, you need to process them into something usable. For that you will need a mill. The best option is to purchase a ball mill. You want a ball mill that is non-sparking. That means you need to avoid plastics that might build up an electrical charge. Remember the life flight above, but add to it a BOOM when a spark sets off your mixture while it is contained.

For the best black powder you want a homogeneous mixture. The smaller the particles of the mixture the more homogeneous the mixture will be when mixed properly.

To this end we want to turn our three components into a fine powder. In general, at this stage you want a powder that will pass through a #100 sieve. The following is an example of grading sieves. There are other options that are cheaper. You will need grading sieves at different sizes.

Gilson ASTM 3″ diameter Round Brass-Stainless Test Sieves meet the requirements of ASTM E 11. Brass Frame with Stainless Steel Cloth is a popular choice that offers extended service and…

One of the nice things about the type of sieve listed above is that you can put your material at the top and sift it. Each finer mesh will stop your material, and you end up with your powder properly graded.

To make this powder you have to mill the KNO3, Sulfur, and Carbon. There are different ways of doing this. I’m only going to describe the method using the ball mill as I feel this is the safest method.

You need to put your material into the ball mill with some sort of media. You might be tempted to use lead balls. DON’T. At this stage you are best served with a non-sparking material that is hard. I chose to use stainless steel balls. You need a mix of balls from about 1/4″ to 3/4″ in diameter. You can find these for sale on Amazon and other sources.

The amount of raw material and media is dependent on the size of your ball mill.

Once loaded and sealed, run your mill until the raw material is a fine powder. Pass it through your #100 sieve. Anything that doesn’t pass through goes back into the mill for another run. Once all your material passes through your sieve carefully package it in an airtight container. You don’t want it to absorb water from the air.

Now wash your ball mill and media. Make sure there is no residue left behind and then let it dry. You do not want the different chemicals to mix while milling.

Repeat the process for the other two chemicals. When you are milling the charcoal you might find bits of wood that have not carbonized. Just return them to your charcoal can to wait for your next run.

Remember to wear a mask while working with powders this fine. They will get into your lungs if you don’t.

Now that you have the three powders you need to measure them carefully by weight.

I chose to use a triple beam scale. This is accurate to 0.1 grams. Our reloading scales are normally good to 0.1gr. There are about 15.4 grains per gram, so 0.1gr is roughly 0.0065 grams.

This means that your reloading scale is more than accurate enough. What might be an issue is the total amount that you can weigh on your scale or the volume you can hold in your scale. Just be aware.

If you are using any type of scale, make sure you tare your scale and container.

You should now have 3 airtight containers full of powdered chemicals. You should have a spotlessly clean ball mill.

Take your stainless steel media and put it in a safe place. Think of this as removing ammunition when you are working with a firearm and don’t want an accidental boom.

Now you need to mix the three chemicals. Use your scale and measure out 7.5g of your powdered KNO3, 1.0g of sulfur, and 1.5g of carbon/charcoal into your mill.

Add your non-sparking media to the ball mill. If you use hard lead balls, you will turn your KNO3 gray, which means lead has transferred. I don’t like the lead ball method. Brass works very well and does not spark, but it is expensive. The one most people use is ceramic. It should not spark, but there are arguments within the fireworks community as to the truth of this. Finally, there is non-sparking stainless steel. The preferred stainless steel alloys for this are 304 and 316.

Remember, if there is a spark in your ball mill at this point, it will go boom.

Now the safety part of the next step.

Get yourself a long extension cord; 100ft is best. Run it out 100ft from the power source, away from all buildings and people. Make sure that the cord is NOT energized. Do NOT plug in the extension cord. Put your jar on the ball mill drive and turn on the ball mill.


If the ball mill starts up, turn it off and go unplug the extension cord.

Now that the mill is on but not running, go back to the other end of the extension cord and plug it in. This should turn the mill on. You might be able to hear it running. Hopefully you don’t see it running.

Remember all those videos of idiots putting tannerite inside things and then shooting said things only to find that there is stuff flying at high speeds towards them? You just filled a jar with an explosive and projectiles. If it goes boom things WILL fly. Don’t be where said speeding things can hit you.

Let the mill run for about an hour. You want a good homogeneous mixture.

This mixture is very flammable. If you put a spark to it, it will flash. Don’t do it!

If you want to test a small amount make sure it is a small amount and you use something that keeps you at a distance when you light the powder.

This is NOT gunpowder; this is BP “meal”! There are a couple more steps.

The meal must be turned into actual gunpowder. This is done by pressing it into pucks and then processing the pucks.

Take your black powder meal and add a small amount of water to it. You want just enough water to be able to press it into pucks. If you put in enough water that it looks wet, you’ve added too much.

No, I can’t tell you how much.

KNO3 is water soluble. This means that as you add water to your BP meal the KNO3 will dissolve into the water. When you press your puck any excess water will be squeezed out and this will carry away some of your KNO3 which changes the ratios of your BP.

One method used is to spritz a fine mist over the powder. One spritz might be enough for this amount of BP meal.

Now you need to make your puck.

You need a container to hold the puck. I used a piece of 2″ PVC pipe that was about 2.5 inches long. I put this on a piece of aluminum, 1/4″ thick and about 5in square. I put a small round piece of wood inside the pipe at the bottom and then added my BP meal on top until there was about 3/4 of an inch to an inch of powder there.

Now I put another wooden round over the top that fits snugly in the pipe.

Today, because I have a machine shop, I would take a 1/2 sheet of aluminum and mill it to have a boss in the center that exactly fit the pipe. I would make an aluminum plug that would exactly fit the pipe and use that instead of working with wood.

Now press the pipe. I used a big C-Clamp the first time. Today I would use my arbor press. You want to squeeze this hard enough that it sticks together on its own. You can use something like this cheese press to compress your puck.

This is a fancy press that is designed to provide a constant pressure. You don’t need anything that fancy. You just need a long lever and a single down rod to press into the top plate of the puck mold.

Because the BP meal is damp it is MOSTLY safe from sparks. This is fairly safe as things go.

Now you need to dry your pucks. Place them on a screen to sun dry. You want both the top and bottom exposed to air and you want to do this in a location where there is no chance of a spark. I’ve used a furnace filter but an actual window screen is better.

Now you have a bunch of very dry and hard pucks of BP. And it is actual Black Powder now.

But it isn’t really usable yet; what you need now is to create granules that can be used as you want.

Take one of your pucks and put it in a spark proof baggy to control where stuff goes. Now using non-sparking equipment hammer that puck lightly until it breaks up.

I use a wooden mallet and zip lock bags on an aluminum block.

Hammer until you have grains of black powder that are about what you want.

Your Black Powder is sorted into different grades:

  • Whaling – 4 mesh (4.74 mm)
  • Cannon – 6 mesh (3.35 mm)
  • Saluting – 10 mesh (2 mm)
  • Fg – 12 mesh (1.7 mm)
  • FFg – 16 mesh (1.18 mm)
  • FFFg – 20 mesh (.85 mm)
  • FFFFg – 40 mesh (.47 mm)
  • FFFFFg – 75 mesh (.149 mm)

To make our meal we used 100 mesh. For FFFFg you will need a 40 mesh and a 75 mesh sieve. The grains of BP that pass through the 40 mesh but not through the 75 mesh are FFFFg black powder.

You need two sieves in order to properly grade your powder.

The smaller the size of your powder, the faster it burns.

And there you have it. How to make black powder.

When last I did this I purchased my KNO3 from a company that was selling equipment and supplies for bio-diesel. His issue was that 10#s was a small amount. Other than that, no issues. I picked up the sulfur from someplace, it was no big deal. I grabbed the hardwood from the firewood pile to make it.

It took about 3 days to go through the entire process. Once I was done I used the powder to make BP rockets and a couple of BP salutes, types of fireworks.

Be safe if you try this. You are the responsible person. You shouldn’t take advice from randos on the web.

Gun Free Zone

Impact of DDL Operations on Aurora MySQL Readers


Recently I came across an interesting investigation about long-running transactions getting killed on an Aurora Reader instance. In this article, I will explain why it is advisable to avoid long-running transactions on Aurora readers when executing frequent DDL operations on the Writer, or at least be aware of how a DDL can impact your Aurora readers.

Aurora uses a shared volume often called a cluster volume that manages the data for all the DB instances which are part of the cluster. Here DB instances could be a single Aurora instance or multiple instances (Writer and Aurora Read Replicas) within a cluster.

Aurora replicas connect to the same storage volume as the primary DB instance and support only read operations. So if you add a new Aurora replica it would not make a new copy of the table data and instead will connect to the shared cluster volume which contains all the data.

This could lead to an issue on replica instances when handling the DDL operations.

Below is one such example.

Check the Aurora version on the reader:

mysql> SELECT AURORA_VERSION();
| aurora_version() |
| 3.02.2           |
1 row in set (0.22 sec)


Start a transaction on the reader:

mysql> SELECT connection_id();
| connection_id() |
|               21|
1 row in set (0.27 sec)

mysql> SELECT * FROM t WHERE old_column not like '%42909700340-70078987867%';


While the transaction is ongoing on the reader, execute any DDL against the same table on the writer:

mysql> ALTER TABLE t ADD COLUMN new_column VARCHAR(32);


Check the status on the reader; the transaction will have been terminated forcefully:

mysql> SELECT * FROM t WHERE old_column not like '%42909700340-70078987867%';
ERROR 2013 (HY000): Lost connection to MySQL server during query

mysql> SELECT connection_id();
ERROR 2006 (HY000): MySQL server has gone away
No connection.
Trying to reconnect...
Connection id:    22
Current database: db
| connection_id() |
|              22 |
1 row in set (3.19 sec)


Now, let’s see what happens when there is a backup happening from a reader node and the writer receives a DDL for that particular table that is being backed up.

Take a logical backup of a table using mydumper:

mydumper --success-on-1146 --outputdir=/backups/ --verbose=3 --host=aurora-reader --ask-password --tables-list=db.t

While the backup is ongoing on the reader, execute any DDL against the same table on the writer.

mysql> ALTER TABLE t ADD COLUMN new_column VARCHAR(32);

Check the status of the backup

** Message: 16:04:51.108: Thread 1 dumping data for `db`.`t`          into /backups/db.t.00000.sql| Remaining jobs: 6
** Message: 16:04:51.941: Waiting threads to complete
** Message: 16:04:51.941: Thread 2 shutting down
** (mydumper:44955): CRITICAL **: 16:04:55.268: Could not read data from db.t: Lost connection to MySQL server during query

So what is the issue?

As stated above, Aurora does not use binary log-based replication to replicate data to the readers. The underlying storage is the same for all the instances (writer+readers) within a cluster and Aurora handles it with let’s say “magic”.

Now, because of this “magic” in Aurora, when you perform any DDL operation on the writer instance, the reader instances are forced to terminate any long-running transactions so that the metadata lock can be acquired and the DDL operation can continue on the writer instance.

Hence, if you are using Aurora replicas for logical backups (mysqldump/mydumper) or if you are running some long-running jobs on the reader instance you may encounter the issue mentioned above.

To understand this better let’s see what happens when we perform any DDL operation in a binary log-based replication environment and in the Aurora replication environment. Following are the high-level steps when any DDL gets executed.

Binary log-based replication:

  • On the primary, ALTER TABLE will try to acquire the metadata lock
  • Once the lock is acquired the ALTER TABLE progresses
  • Once the ALTER TABLE operation completes, the DDL statement will be written to the binary log
  • On the replicas, the IO thread will copy this event to the local relay log
  • The SQL thread will apply the query from the relay log
  • On the replica, it will also acquire the global metadata lock
  • Once the lock is acquired, the ALTER TABLE starts execution on the replica

Aurora replication:

  • On the writer, the ALTER TABLE will try to acquire the metadata lock
  • At the same time, it will check if there is any open transaction in any of the reader nodes, if so it will kill those transactions forcefully
  • Once the metadata lock is acquired, the ALTER TABLE progresses
  • After the ALTER TABLE completes, the modified structure will be visible to the replicas because of the same underlying storage
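The behavioral difference between the two models can be caricatured in a few lines of Python. This is a toy mental model only, not Aurora internals:

```python
def run_ddl_on_binlog_replica(replica_open_transactions):
    # Binary log replication: the replicated ALTER TABLE simply waits
    # behind any transaction holding the table's metadata lock.
    if replica_open_transactions:
        return "waiting for table metadata lock"
    return "ddl applied"

def run_ddl_on_aurora_writer(reader_open_transactions):
    # Aurora: open reader transactions are killed first, so the writer
    # can take the metadata lock and run the DDL on the shared volume.
    killed = list(reader_open_transactions)
    reader_open_transactions.clear()
    return "ddl applied", killed

readers = ["mydumper backup", "long report query"]
status, killed = run_ddl_on_aurora_writer(readers)
print(status, killed)
```

In other words, a binlog replica makes the DDL wait; an Aurora reader makes the transactions die.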

What are the issues?

  1. If you are performing frequent DDL operations in your database, it is not recommended to take logical backups from Aurora Reader.
  2. If transactions are running for a long time they may get killed.

What is the solution?

Create an external replica of the Aurora cluster using binary log-based replication. This replica can be used to take logical backups or to execute some long-running queries that will not be interrupted by the DDL operation on the Aurora writer instance.

You may follow the Percona blog to create an external replica from Aurora using MyDumper or review the AWS documentation page.

Percona Database Performance Blog

A Guide To Command-Line Data Manipulation


Allow me to preface this article by saying that I’m not a terminal person. I don’t use Vim. I find sed, grep, and awk convoluted and counter-intuitive. I prefer seeing my files in a nice UI. Despite all that, I got into the habit of reaching for command-line interfaces (CLIs) when I had small, dedicated tasks to complete. Why? I’ll explain all of that below. In this article, you’ll also learn how to use a CLI tool named Miller to manipulate data from CSV, TSV and/or JSON files.

Why Use The Command Line?

Everything that I’m showing here can be done with regular code. You can load the file, parse the CSV data, and then transform it using regular JavaScript, Python, or any other language. But there are a few reasons why I reach for command-line interfaces (CLIs) whenever I need to transform data:

  • Easier to read.
    It is faster (for me) to write a script in JavaScript or Python for my usual data processing. But, a script can be confusing to come back to. In my experience, command-line manipulations are harder to write initially but easier to read afterward.
  • Easier to reproduce.
    Thanks to package managers like Homebrew, CLIs are much easier to install than they used to be. No need to figure out the correct version of Node.js or Python, the package manager takes care of that for you.
  • Ages well.
    Compared to modern programming languages, CLIs are old. They change a lot more slowly than languages and frameworks.

What Is Miller?

The main reason I love Miller is that it’s a standalone tool. There are many great tools for data manipulation, but every other tool I found was part of a specific ecosystem. The tools written in Python required knowing how to use pip and virtual environments; for those written in Rust, it was cargo, and so on.

On top of that, it’s fast. The data files are streamed, not held in memory, which means that you can perform operations on large files without freezing your computer.

As a bonus, Miller is actively maintained; John Kerl really keeps on top of PRs and issues. As a developer, I always get a satisfying feeling when I see a neat and maintained open-source project with great documentation.

Installing Miller


  • Linux: apt-get install miller or Homebrew.
  • macOS: brew install miller using Homebrew.
  • Windows: choco install miller using Chocolatey.

That’s it, and you should now have the mlr command available in your terminal.

Run mlr help topics to see if it worked. This will give you instructions to navigate the built-in documentation. You shouldn’t need it, though; that’s what this tutorial is for!

How mlr Works

Miller commands work the following way:

mlr [input/output file formats] [verbs] [file]

Example: mlr --csv filter '$color != "red"' example.csv

Let’s deconstruct:

  • --csv specifies the input file format. It’s a CSV file.
  • filter is what we’re doing on the file, called a “verb” in the documentation. In this case, we’re filtering every row that doesn’t have the field color set to "red". There are many other verbs like sort and cut that we’ll explore later.
  • example.csv is the file that we’re manipulating.
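For comparison, here is what the same filter might look like as a small Python script, one reason I reach for Miller instead. The inline sample data is made up for illustration:

```python
import csv
import io

# Stand-in for example.csv from the command above.
EXAMPLE = """color,size
red,small
blue,large
"""

# Keep every row whose "color" field is not "red" -- the equivalent of:
#   mlr --csv filter '$color != "red"' example.csv
rows = [r for r in csv.DictReader(io.StringIO(EXAMPLE)) if r["color"] != "red"]
print(rows)
```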

Operations Overview

We can use those verbs to run specific operations on your data. There’s a lot we can do. Let’s explore.


I’ll be using a dataset of IMDb ratings for American TV dramas created by The Economist. You can download it here or find it in the repo for this article.

Note: For the sake of brevity, I’ve renamed the file from IMDb_Economist_tv_ratings.csv to tv_ratings.csv.

Above, I mentioned that every command contains a specific operation or verb. Let’s learn our first one, called head. What it does is show you the beginning of the file (the “head”) rather than print the entire file in the console.

You can run the following command:

mlr --csv head ./tv_ratings.csv

And this is the output you’ll see:

titleId,seasonNumber,title,date,av_rating,share,genres
tt2879552,1,11.22.63,2016-03-10,8.489,0.51,"Drama,Mystery,Sci-Fi"
tt3148266,1,12 Monkeys,2015-02-27,8.3407,0.46,"Adventure,Drama,Mystery"
tt3148266,2,12 Monkeys,2016-05-30,8.8196,0.25,"Adventure,Drama,Mystery"
tt3148266,3,12 Monkeys,2017-05-19,9.0369,0.19,"Adventure,Drama,Mystery"
tt3148266,4,12 Monkeys,2018-06-26,9.1363,0.38,"Adventure,Drama,Mystery"
tt1837492,1,13 Reasons Why,2017-03-31,8.437,2.38,"Drama,Mystery"
tt1837492,2,13 Reasons Why,2018-05-18,7.5089,2.19,"Drama,Mystery"

This is a bit hard to read, so let’s make it easier on the eye by adding --opprint.

mlr --csv --opprint head ./tv_ratings.csv

The resulting output will be the following:

titleId   seasonNumber title            date          av_rating   share   genres
tt2879552      1       11.22.63         2016-03-10    8.489       0.51    Drama,Mystery,Sci-Fi
tt3148266      1       12 Monkeys       2015-02-27    8.3407      0.46    Adventure,Drama,Mystery
tt3148266      2       12 Monkeys       2016-05-30    8.8196      0.25    Adventure,Drama,Mystery
tt3148266      3       12 Monkeys       2017-05-19    9.0369      0.19    Adventure,Drama,Mystery
tt3148266      4       12 Monkeys       2018-06-26    9.1363      0.38    Adventure,Drama,Mystery
tt1837492      1       13 Reasons Why   2017-03-31    8.437       2.38    Drama,Mystery
tt1837492      2       13 Reasons Why   2018-05-18    7.5089      2.19    Drama,Mystery
tt0285331      1       24               2002-02-16    8.5641      6.67    Action,Crime,Drama
tt0285331      2       24               2003-02-09    8.7028      7.13    Action,Crime,Drama
tt0285331      3       24               2004-02-09    8.7173      5.88    Action,Crime,Drama

Much better, isn’t it?

Note: Rather than typing --csv --opprint every time, we can use the --c2p option, which is a shortcut.


That’s where the fun begins. Rather than run multiple commands, we can chain the verbs together by using the then keyword.

Remove columns

You can see that there’s a titleId column that isn’t very useful. Let’s get rid of it using the cut verb.

mlr --c2p cut -x -f titleId then head ./tv_ratings.csv

It gives you the following output:

seasonNumber  title            date         av_rating   share    genres
     1      11.22.63          2016-03-10    8.489       0.51     Drama,Mystery,Sci-Fi
     1      12 Monkeys        2015-02-27    8.3407      0.46     Adventure,Drama,Mystery
     2      12 Monkeys        2016-05-30    8.8196      0.25     Adventure,Drama,Mystery
     3      12 Monkeys        2017-05-19    9.0369      0.19     Adventure,Drama,Mystery
     4      12 Monkeys        2018-06-26    9.1363      0.38     Adventure,Drama,Mystery
     1      13 Reasons Why    2017-03-31    8.437       2.38     Drama,Mystery
     2      13 Reasons Why    2018-05-18    7.5089      2.19     Drama,Mystery
     1      24                2002-02-16    8.5641      6.67     Action,Crime,Drama
     2      24                2003-02-09    8.7028      7.13     Action,Crime,Drama
     3      24                2004-02-09    8.7173      5.88     Action,Crime,Drama

Fun Fact

This is how I first learned about Miller! I was playing with a CSV dataset for https://details.town/ that had a useless column, and I looked up “how to remove a column from CSV command line.” I discovered Miller, loved it, and then pitched an article to Smashing magazine. Now here we are!

Filter rows


This is the verb that I first showed earlier. We can remove all the rows that don’t match a specific expression, letting us clean our data with only a few characters.

If we only want the rating of the first seasons of every series in the dataset, this is how you do it:

mlr --c2p filter '$seasonNumber == 1' then head ./tv_ratings.csv

Sort rows


We can sort our data based on a specific column like it would be in a UI like Excel or macOS Numbers. Here’s how you would sort your data based on the series with the highest rating:

mlr --c2p sort -nr av_rating then head ./tv_ratings.csv

The resulting output will be the following:

titleId   seasonNumber title                         date         av_rating  share   genres
tt0098887      1       Parenthood                    1990-11-13   9.6824     1.68    Comedy,Drama
tt0106028      6       Homicide: Life on the Street  1997-12-05   9.6        0.13    Crime,Drama,Mystery
tt0108968      5       Touched by an Angel           1998-11-15   9.6        0.08    Drama,Family,Fantasy
tt0903747      5       Breaking Bad                  2013-02-20   9.554      18.95   Crime,Drama,Thriller
tt0944947      6       Game of Thrones               2016-05-25   9.4943     15.18   Action,Adventure,Drama
tt3398228      5       BoJack Horseman               2018-09-14   9.4738     0.45    Animation,Comedy,Drama
tt0103352      3       Are You Afraid of the Dark?   1994-02-23   9.4349     2.6     Drama,Family,Fantasy
tt0944947      4       Game of Thrones               2014-05-09   9.4282     11.07   Action,Adventure,Drama
tt0976014      4       Greek                         2011-03-07   9.4        0.01    Comedy,Drama
tt0090466      4       L.A. Law                      1990-04-05   9.4        0.1     Drama

We can see that Parenthood, from 1990, has the highest rating on IMDb — who knew!

Saving Our Operations

By default, Miller only prints your processed data to the console. If we want to save it to another CSV file, we can use the shell’s > redirection operator.

If we wanted to save our sorted data to a new CSV file, this is what the command would look like:

mlr --csv sort -nr av_rating ./tv_ratings.csv > sorted.csv

Convert CSV To JSON

Most of the time, you don’t use CSV data directly in your application. You convert it to a format that is easier to read or doesn’t require additional dependencies, like JSON.

Miller gives you the --c2j option to convert your data from CSV to JSON. Here’s how to do this for our sorted data:

mlr --c2j sort -nr av_rating ./tv_ratings.csv > sorted.json
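If you ever need the same transformation inside a program rather than the shell, the sort-and-convert step is easy to sketch with Python’s standard library. This is only an illustration of the same logic; the helper name is hypothetical, and only the av_rating column comes from the dataset above:

```python
import csv
import io
import json

def csv_to_sorted_json(csv_text: str, key: str) -> str:
    """Parse CSV text, sort rows by a numeric column (descending),
    and serialize the result as a JSON array of objects."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    rows.sort(key=lambda row: float(row[key]), reverse=True)
    return json.dumps(rows, indent=2)

# Illustrative two-row sample using the dataset's av_rating column:
sample = "title,av_rating\nParenthood,9.6824\n24,8.5641\n"
print(csv_to_sorted_json(sample, "av_rating"))
```

The point of Miller, of course, is that you get all of this in one shell command with no script to maintain.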

Case study: Top 5 Athletes With Highest Number Of Medals In Rio 2016

Let’s apply everything we learned above to a real-world use case. Say you have a detailed dataset of every athlete who participated in the 2016 Olympic Games in Rio, and you want to know which five athletes won the most medals.

First, download the athlete data as a CSV, then save it in a file named athletes.csv.

Let’s take a first look at the file:

mlr --c2p head ./athletes.csv

The resulting output will be something like the following:

id        name                nationality sex    date_of_birth height weight sport      gold silver bronze info
736041664 A Jesus Garcia      ESP         male   1969-10-17    1.72    64     athletics    0    0      0      -
532037425 A Lam Shin          KOR         female 1986-09-23    1.68    56     fencing      0    0      0      -
435962603 Aaron Brown         CAN         male   1992-05-27    1.98    79     athletics    0    0      1      -
521041435 Aaron Cook          MDA         male   1991-01-02    1.83    80     taekwondo    0    0      0      -
33922579  Aaron Gate          NZL         male   1990-11-26    1.81    71     cycling      0    0      0      -
173071782 Aaron Royle         AUS         male   1990-01-26    1.80    67     triathlon    0    0      0      -
266237702 Aaron Russell       USA         male   1993-06-04    2.05    98     volleyball   0    0      1      -
382571888 Aaron Younger       AUS         male   1991-09-25    1.93    100    aquatics     0    0      0      -
87689776  Aauri Lorena Bokesa ESP         female 1988-12-14    1.80    62     athletics    0    0      0      -

Optional: Clean Up The File

The CSV file has a few fields we don’t need. Let’s clean it up by removing the info, id, weight, and date_of_birth columns.

mlr --csv -I cut -x -f id,info,weight,date_of_birth athletes.csv

Now we can move to our original problem: we want to find who won the highest number of medals. We know how many of each medal (bronze, silver, and gold) each athlete won, but not the total number of medals per athlete.

Let’s compute a new value called medals, which corresponds to this total (bronze, silver, and gold added together).

mlr --c2p put '$medals=$bronze+$silver+$gold' then head ./athletes.csv

It gives you the following output:

name                 nationality   sex      height  sport        gold silver bronze medals
A Jesus Garcia       ESP           male     1.72    athletics      0    0      0      0
A Lam Shin           KOR           female   1.68    fencing        0    0      0      0
Aaron Brown          CAN           male     1.98    athletics      0    0      1      1
Aaron Cook           MDA           male     1.83    taekwondo      0    0      0      0
Aaron Gate           NZL           male     1.81    cycling        0    0      0      0
Aaron Royle          AUS           male     1.80    triathlon      0    0      0      0
Aaron Russell        USA           male     2.05    volleyball     0    0      1      1
Aaron Younger        AUS           male     1.93    aquatics       0    0      0      0
Aauri Lorena Bokesa  ESP           female   1.80    athletics      0    0      0      0
Ababel Yeshaneh      ETH           female   1.65    athletics      0    0      0      0

Next, sort by the number of medals by adding a sort operation:

mlr --c2p put '$medals=$bronze+$silver+$gold' \
    then sort -nr medals \
    then head ./athletes.csv

The resulting output will be the following:

name              nationality  sex     height  sport       gold silver bronze medals
Michael Phelps    USA          male    1.94    aquatics      5    1      0      6
Katie Ledecky     USA          female  1.83    aquatics      4    1      0      5
Simone Biles      USA          female  1.45    gymnastics    4    0      1      5
Emma McKeon       AUS          female  1.80    aquatics      1    2      1      4
Katinka Hosszu    HUN          female  1.75    aquatics      3    1      0      4
Madeline Dirado   USA          female  1.76    aquatics      2    1      1      4
Nathan Adrian     USA          male    1.99    aquatics      2    0      2      4
Penny Oleksiak    CAN          female  1.86    aquatics      1    1      2      4
Simone Manuel     USA          female  1.78    aquatics      2    2      0      4
Alexandra Raisman USA          female  1.58    gymnastics    1    2      0      3

Restrict to the top 5 by adding -n 5 to your head operation.

mlr --c2p put '$medals=$bronze+$silver+$gold' \
    then sort -nr medals \
    then head -n 5 ./athletes.csv

You will end up with the following output:

name             nationality  sex      height  sport        gold silver bronze medals
Michael Phelps   USA          male     1.94    aquatics       5     1      0      6
Katie Ledecky    USA          female   1.83    aquatics       4     1      0      5
Simone Biles     USA          female   1.45    gymnastics     4     0      1      5
Emma McKeon      AUS          female   1.80    aquatics       1     2      1      4
Katinka Hosszu   HUN          female   1.75    aquatics       3     1      0      4

As a final step, let’s convert this into a JSON file with the --c2j option.

Here is our final command:

mlr --c2j put '$medals=$bronze+$silver+$gold' \
    then sort -nr medals \
    then head -n 5 ./athletes.csv > top5.json

With a single command, we’ve computed new data, sorted the result, truncated it, and converted it to JSON.

[
  {
    "name": "Michael Phelps",
    "nationality": "USA",
    "sex": "male",
    "height": 1.94,
    "weight": 90,
    "sport": "aquatics",
    "gold": 5,
    "silver": 1,
    "bronze": 0,
    "medals": 6
  }
  // Other entries omitted for brevity.
]
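As a side note, the same compute-sort-truncate logic can be sketched in a few lines of Python. This is only an illustration of what the Miller pipeline does, not how Miller works internally, and the sample rows are made up for the example:

```python
import csv
import io

def top_medalists(csv_text: str, n: int = 5) -> list[dict]:
    """Add a computed 'medals' column, sort by it descending, keep the top n."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    for row in rows:
        row["medals"] = int(row["bronze"]) + int(row["silver"]) + int(row["gold"])
    rows.sort(key=lambda r: r["medals"], reverse=True)
    return rows[:n]

# Illustrative sample using the medal columns from athletes.csv:
sample = (
    "name,gold,silver,bronze\n"
    "Michael Phelps,5,1,0\n"
    "A Lam Shin,0,0,0\n"
    "Katie Ledecky,4,1,0\n"
)
for row in top_medalists(sample, n=2):
    print(row["name"], row["medals"])
```

Compare that to the single mlr command above, and you can see why reaching for Miller first saves you from writing yet another throwaway script.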

Bonus: If you wanted to show the top 5 women, you could add a filter.

mlr --c2p put '$medals=$bronze+$silver+$gold' then sort -nr medals then filter '$sex == "female"' then head -n 5 ./athletes.csv

You would end up with the following output:

name              nationality   sex       height   sport        gold silver bronze medals
Katie Ledecky     USA           female    1.83     aquatics       4    1      0      5
Simone Biles      USA           female    1.45     gymnastics     4    0      1      5
Emma McKeon       AUS           female    1.80     aquatics       1    2      1      4
Katinka Hosszu    HUN           female    1.75     aquatics       3    1      0      4
Madeline Dirado   USA           female    1.76     aquatics       2    1      1      4


I hope this article showed you how versatile Miller is and gave you a taste of the power of command-line tools. Feel free to scour the internet for the right CLI next time you find yourself writing yet another random script.


Smashing Magazine

3 Seconds of Every TNG Episode Is a Wild Way to Experience Star Trek


There’s a lot of Star Trek, and a lot of it is very watchable—especially in the iconic sophomore series The Next Generation, which helped truly catapult the franchise into the pop culture stratosphere. Asking a Trek neophyte to dive into over five days of TV is a daunting task, however. So why not just give them an appetizer of everything?

This incredible mash-up of all 178 episodes of Star Trek: The Next Generation by Sentinel of Something condenses the seven seasons of boldly going done by Captain Jean-Luc Picard and the crew of the U.S.S. Enterprise into a little over nine minutes… by giving you three random seconds of every episode. It’s unhinged and it’s perfect.

3 Seconds of Every Star Trek: TNG Episode

There’s an artistry to the consideration here. Do you pick an iconic visual, a perfect but short enough line of dialogue, a joke, a sad moment, or a shot of action? Just how do you distill an entire episode of TNG, from the very best to the very worst, in just three seconds? The answer is to not take Star Trek too seriously, so what you get is three manic seconds of out-of-context weirdness, 178 times in a row.

Okay, it’s probably not helpful to a Star Trek newbie looking to shave some time off of a marathon. But for TNG fans, it’s a delightfully zany whirlwind trip through one of the best sci-fi TV shows of all time.

Want more io9 news? Check out when to expect the latest Marvel, Star Wars, and Star Trek releases, what’s next for the DC Universe on film and TV, and everything you need to know about the future of Doctor Who.


If You’ve Ever Used LastPass, You Should Change All Your Passwords Now


Personal details and password vaults containing the sign-in credentials of millions of users are now in the hands of criminals. If you’ve ever used the password manager, LastPass, you should change all of your passwords for everything, now. And you should immediately take further measures to protect yourself.

What Happened in the 2022 LastPass Data Breach?

LastPass is a password management service which operates on a "freemium" model. Users can store all of their passwords and logins for online services with LastPass, and access them through the web interface, through browser add-ons, and through dedicated smartphone apps.

Passwords are stored in "vaults", which are protected by a single master password.

In August 2022, LastPass announced that criminals had used a compromised developer account to access the LastPass development environment, source code, and technical information.

Further details were released in November 2022, when LastPass added that some customer data had been disclosed.

The true severity of the breach was revealed on December 22, when a LastPass blog post noted that criminals had used some of the information obtained in the earlier attack to steal backup data including customer names, addresses and phone numbers, email addresses, IP addresses, and partial credit card numbers. Additionally, they managed to steal user password vaults containing unencrypted website URLs and site names, as well as encrypted usernames and passwords.

Is It Difficult for Criminals to Crack Your LastPass Master Password?

Theoretically, yes, hackers should find it difficult to crack your master password. The LastPass blog post notes that if you use their default recommended settings, "it would take millions of years to guess your master password using generally-available password-cracking technology."

LastPass requires the master password to be a minimum of 12 characters, and recommends "that you never reuse your master password on other websites."

However, LastPass is unique among password management services in that it allows users to set a password hint to remind them of their master password should they lose it.

Effectively, this encourages users to use dictionary words and phrases as part of their password, rather than a truly random strong password. No password hint is going to help if your password is "lVoT=.N]4CmU".

The LastPass password vaults have been in the hands of criminals for some time now, and even though they’re encrypted, they will eventually be subject to brute force attacks.

Attackers will find their work easier thanks to the existence of massive databases of commonly used passwords. You can download a 17GB password list comprising the 613 million most common passwords from haveibeenpwned, for instance. Other password and credential lists are available on the dark web.

Trying each of those 613 million most common passwords against an individual vault would take minutes, and although relatively few of them meet the required 12 characters, it’s likely that cybercriminals will be able to easily break into a good proportion of vaults.
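The back-of-the-envelope arithmetic is easy to check. The guess rate below is purely an assumption for illustration; real rates depend on the attacker’s hardware and on the KDF iteration count the vault was saved with:

```python
def crack_time_seconds(num_guesses: float, guesses_per_second: float) -> float:
    """Worst-case time to try every candidate against one vault."""
    return num_guesses / guesses_per_second

# 613 million entries in the haveibeenpwned common-password list.
COMMON_PASSWORDS = 613_000_000
# Assumed guess rate for illustration only.
GUESSES_PER_SECOND = 1_000_000

minutes = crack_time_seconds(COMMON_PASSWORDS, GUESSES_PER_SECOND) / 60
print(f"Entire common-password list: ~{minutes:.0f} minutes")

# A truly random 12-character password over the 94 printable ASCII
# characters is a different story entirely:
random_keyspace = 94 ** 12
years = crack_time_seconds(random_keyspace, GUESSES_PER_SECOND) / (3600 * 24 * 365)
print(f"Random 12-char password, worst case: ~{years:.1e} years")
```

At this assumed rate the entire common-password list falls in around ten minutes, while a truly random 12-character password holds out for billions of years — which is exactly why the quality of the master password matters so much.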

Add to that the fact that computing power increases year on year, and that motivated criminals can use distributed networks to help with the effort; “millions of years” doesn’t seem plausible for the majority of accounts.

Does the LastPass Breach Just Affect Passwords?

While the headline news is that criminals can take their time to break into your LastPass vault, they can take advantage of you in other ways by using your name, address, phone number, email address, IP address, and partial credit card number.

These can be used for a number of nefarious purposes including spearphishing attacks against you and your contacts, identity theft, taking out credit and loans in your name, and SIM swap attacks.

How Can You Protect Yourself After the LastPass Data Breaches?

You should assume that within a few years, your master password will be compromised and all the passwords contained within will be known to criminals. You should change them now, and use unique passwords you have never used before, and which aren’t in any of the commonly used password lists.

With regard to the other data criminals obtained from LastPass, you should freeze your credit, and engage a credit monitoring service to monitor any new card or loan applications in your name. If you’re able to change your phone number without too much inconvenience, you should do that too.

Take Responsibility for Your Own Security

It’s easy to blame LastPass for the data breaches which saw your password vaults and personal details fall into the hands of criminals, but password managers that store your credentials and help you generate unique passwords are still the best way to secure your online life.

One way to make it more difficult for would-be thieves to get hold of your vital data is to host a password manager on your own hardware. It’s cheap, easy to do, and some solutions, such as VaultWarden, can even be deployed on a Raspberry Pi Zero.

MUO – Feed

A quick guide on why you should use Laravel Octane to scale your app


Laravel has gained a lot of attention in recent years and has long been a favorite of product owners for small-scale web applications. However, developing a large-scale application with Laravel still seems like a hard nut to crack. In this blog post, we cover the latest addition to the Laravel framework, Laravel Octane, and the buzz it has created around filling the scalability gap that product owners have experienced with Laravel.


Laravel possesses a simple and decent syntax that makes web application development easier. Its main drawback has been scalability, which is why Laravel is not among developers’ most preferred frameworks for large projects. After the introduction of Octane, this perception shifted slightly in favor of Laravel and its community. But before we move to using Octane to scale Laravel applications, let us first understand what Laravel Octane is and how it works.

What is Laravel Octane?

Laravel Octane is an open-source package introduced by Taylor Otwell in his Laracon Online talk and released on May 12, 2021. It serves your Laravel application through the high-performance application servers Swoole and RoadRunner to improve performance: the application is initialized with the first request, kept in memory, and not reinitialized on subsequent requests. At the time of writing, it has 3.3k stars and 234 forks on GitHub.

Octane Key Features

Octane has several features that allow it to outperform a traditional Laravel setup; here are a few of them:

  • It makes your application extremely fast and efficient
  • It saves resources compared to conventional Laravel applications.

How does Laravel Octane work?

Laravel Octane is built on top of traditional Laravel and requires PHP version 8.0 or above. It is not a separate framework but an addition to standard Laravel. Octane runs the application on its own servers via Swoole or RoadRunner, to which a traditional Apache or Nginx front end should proxy incoming traffic.

Octane Open Swoole

Open Swoole is a software development kit that allows product owners to create efficient and scalable web applications. It is a fork of the Swoole framework, initiated in late 2021 following internal conflicts within the Swoole developer community.

Octane Roadrunner

RoadRunner is an open-source, high-performance PHP application server that can effectively replace traditional Nginx-based setups, delivering better performance and flexibility.

Note that although Swoole and RoadRunner still use worker processes for all requests, only the first request boots the framework (and its dependency containers); the remaining requests are served from the framework’s bootstrapped version. Collectively, this makes your application perform impressively faster than your existing setup.

Laravel Octane caches the bootstrapped Laravel framework in RAM on the first request. After that, all requests use the already-bootstrapped framework from RAM instead of re-reading files from the hard disk and re-bootstrapping the framework, making the Laravel application faster and improving its performance to a significant extent.

Laravel with Apache vs Laravel Octane Benchmarks

According to the creator of Laravel, Taylor Otwell, after the introduction of Octane there is almost no reason to choose Lumen for new projects. Lumen is a Laravel-based micro-framework intentionally devoid of a few components, making it fast; but Laravel served by Octane is now the better option. To understand that better, let us look at a few benchmarks.

This benchmark compares a total of three modes:

  • Laravel with Octane
  • Laravel with Apache Web Server
  • Laravel with its in-built server.

For the benchmarking, we used the tool wrk with the following configuration:

wrk -t1 -c50 URL

The first test is performed on a simple “Hello World” HTML page, and the results we received are as follows:

Mode                              Requests Handled in 10 Seconds   Requests Per Second (RPS)
Laravel with Octane               2667                             266 rps
Laravel with Apache               1210                             121 rps
Laravel with its in-built server  705                              70 rps
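As a sanity check, the RPS column is simply the number of requests handled divided by the 10-second test window:

```python
def requests_per_second(handled: int, window_seconds: int = 10) -> int:
    """Throughput over the test window, truncated to whole requests."""
    return handled // window_seconds

# Request counts copied from the table above.
for mode, handled in [("Octane", 2667), ("Apache", 1210), ("Built-in server", 705)]:
    print(f"{mode}: {requests_per_second(handled)} rps")
```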

These Laravel Octane benchmark metrics are enough to show that Octane is far faster than the traditional Apache-served Laravel.

Why Scale your Laravel Application with Octane?

Octane caches your data and objects in memory, which can improve the performance of your Laravel application by up to 30%. This improvement comes from eliminating the disk reads, process forks, and various other overheads incurred whenever the framework boots up on every request. It allows the Laravel application to manage thousands of connections quickly, helping it handle increased traffic or high-availability requirements, and it is a productive method at little expense.

Additionally, in many cases little or no change to the codebase is required, though this depends on how your application is structured.

What to Look for Before Scaling your Laravel Application with Octane (Prerequisites)

Octane can improve your web application’s performance to a great extent, but that doesn’t mean it is a drop-in fix for every application or that it alone gives you the freedom to scale.

There are prerequisites: PHP version 8.0 or above, and adopting stateful programming conventions. Therefore, before you finalize using Octane, the first step is to update and install your PHP version. Though it seems a simple task, the serious hiccup is that numerous websites built on Laravel are running an older version of Laravel that does not support Octane, and hence need to be updated to the latest Laravel version first.

Also, many developers, whether experienced or new, are not familiar with stateful frameworks, which again requires effort to understand before deciding to implement Laravel Octane to scale a web application.

Other Necessary Factors

In addition to the prerequisites above, a few other factors must be considered to make a scalable web application. Let’s look at them below:

Refactoring Implications

For applications using session state or certain middleware types, some prior changes to the code are required before using Octane. You should add support for Swoole coroutine-based sessions and remove middleware that isn’t compatible with Swoole.

The rest of the changes involve the dependency injection container. You can refer to the official Laravel documentation for more details on how to do it.

Handling Code Changes

Traditional Laravel requires you to restart the entire application instance whenever a change is made. With Octane, by contrast, you get a convenient way to monitor your codebase automatically and restart your Octane server without manual intervention, which lets you develop easily by altering the code in one place instead of restarting the whole application instance every time the codebase is updated.

Are there any other Options to Scale Laravel applications?

Yes, there is always another solution to a problem. Though Laravel Octane is a great tool for scaling a traditional Laravel application, it is not the only option; plenty of other approaches can meet your scaling needs. Let us look at a few of them:

Horizontal Scaling

First comes horizontal scaling, a simple yet least-recommended method to scale your Laravel application. The reason is that, though efficient at first, it becomes increasingly complex and expensive as running costs keep growing, and more difficult as you migrate to different servers. Further, it is a limited solution, as you cannot scale on demand with this setup.

Load Balancing (with AWS or Without AWS)

For scaling your Laravel application on AWS, you can automatically scale to a limited extent by running multiple application servers against the same MySQL database. These servers then share a single Amazon RDS (Amazon Relational Database Service) instance.

Though this seems a more viable option than plain horizontal scaling, the hidden costs involved are not transparent, and it is less effective than Octane. Still, a significant benefit is that it allows you to scale without any alteration to your existing codebase, which Laravel Octane may require.

Laravel Vapor and Octane

Laravel Vapor is an auto-scaling, serverless deployment platform for Laravel. Vapor provides the scalability and simplicity of serverless. When combined with Octane, the Laravel Octane Vapor combination provides autoscaling and faster load times at any scale. To understand it even better, let us take an example from the official Laravel documentation.

Consider a Vapor project in the us-west-1 region of AWS. This Vapor application is configured with 1 GB of RAM and an RDS MySQL instance (db.t2.micro) with 1 vCPU and 1 GiB of RAM.

Now, look at an API endpoint that fetches a user from the database. With Octane, this endpoint becomes 7x faster and uses 44% less memory:

Before Vapor’s Octane integration

Time Taken by Request: 39.40 ms, Memory Usage: 169 MB
Time Taken by Request: 40.20 ms, Memory Usage: 169 MB
Time Taken by Request: 37.71 ms, Memory Usage: 169 MB
Time Taken by Request: 42.16 ms, Memory Usage: 169 MB
Time Taken by Request: 40.60 ms, Memory Usage: 169 MB
Time Taken by Request: 45.75 ms, Memory Usage: 169 MB

After Vapor’s Octane integration

Time Taken by Request: 6.78 ms, Memory Usage: 112 MB
Time Taken by Request: 6.64 ms, Memory Usage: 112 MB
Time Taken by Request: 6.67 ms, Memory Usage: 112 MB
Time Taken by Request: 6.38 ms, Memory Usage: 112 MB
Time Taken by Request: 6.75 ms, Memory Usage: 112 MB
Time Taken by Request: 6.47 ms, Memory Usage: 112 MB

Meanwhile, for the “login” route, which renders a static template, Octane makes the endpoint almost 3x faster, using 35% less memory:

Before Vapor’s Octane integration

Time Taken by Request: 11.32 ms, Memory Usage: 165 MB
Time Taken by Request: 11.35 ms, Memory Usage: 165 MB
Time Taken by Request: 11.29 ms, Memory Usage: 165 MB
Time Taken by Request: 11.29 ms, Memory Usage: 165 MB
Time Taken by Request: 11.36 ms, Memory Usage: 165 MB
Time Taken by Request: 11.43 ms, Memory Usage: 165 MB

After Vapor’s Octane integration

Time Taken by Request: 4.89 ms, Memory Usage: 108 MB
Time Taken by Request: 4.89 ms, Memory Usage: 108 MB
Time Taken by Request: 4.83 ms, Memory Usage: 108 MB
Time Taken by Request: 4.66 ms, Memory Usage: 108 MB
Time Taken by Request: 4.79 ms, Memory Usage: 108 MB
Time Taken by Request: 4.91 ms, Memory Usage: 108 MB
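Averaging the per-request samples gives a quick sanity check of these claims (the latencies are copied from the listings above; the exact factor depends on which samples you average):

```python
from statistics import mean

def speedup(before_ms: list[float], after_ms: list[float]) -> float:
    """Ratio of mean latency before Octane to mean latency after."""
    return mean(before_ms) / mean(after_ms)

# Per-request latencies copied from the listings above.
api_before = [39.40, 40.20, 37.71, 42.16, 40.60, 45.75]
api_after = [6.78, 6.64, 6.67, 6.38, 6.75, 6.47]
login_before = [11.32, 11.35, 11.29, 11.29, 11.36, 11.43]
login_after = [4.89, 4.89, 4.83, 4.66, 4.79, 4.91]

print(f"API endpoint: {speedup(api_before, api_after):.1f}x faster")
print(f"Login route:  {speedup(login_before, login_after):.1f}x faster")
```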

Thus, we can infer that Octane decreases both request duration and memory usage. Additionally, AWS applies 1 ms billing granularity on Lambda, so you pay comparatively less for your HTTP function. Hence, the combination of Laravel Octane and Vapor is a great choice for building a scalable Laravel web application.


This is what we have on scaling your Laravel application using Laravel Octane. It shows significant performance improvement over a traditional Laravel application, but its incompatibility with the existing codebases of many traditional Laravel applications is a setback. It can be a good choice if you wish to improve the performance of your application; but if you plan on saving an already struggling application, Octane is not the best option. Do thorough research before making the final call.

Additionally, once you have opted for Octane, make sure the packages you add to your project are Octane-compatible to get an application that works well with it. If you are still hesitant about making the final call on scaling your Laravel application, a Laravel development company like Bacancy can assist you; our expert professionals are always available to guide and walk alongside you through your entire web application development process.

Laravel News