Dry eye changes how injured cornea heals itself

A close-up of a person's pupil.

A new study with mice finds that proteins made by stem cells that regenerate the cornea may be new targets for treating and preventing injuries.

People with a condition known as dry eye disease are more likely than those with healthy eyes to suffer injuries to their corneas.

Dry eye disease occurs when the eye can’t provide adequate lubrication with natural tears. People with the common disorder use various types of drops to replace missing natural tears and keep the eyes lubricated, but when eyes are dry, the cornea is more susceptible to injury.

“We have drugs, but they only work well in about 10% to 15% of patients,” says senior investigator Rajendra S. Apte, professor in the department of ophthalmology and visual sciences at Washington University in St. Louis.

“In this study involving genes that are key to eye health, we identified potential targets for treatment that appear different in dry eyes than in healthy eyes.

“Tens of millions of people around the world—with an estimated 15 million in the United States alone—endure eye pain and blurred vision as a result of complications and injury associated with dry eye disease, and by targeting these proteins, we may be able to more successfully treat or even prevent those injuries.”

For the study in the Proceedings of the National Academy of Sciences, the researchers analyzed genes expressed by the cornea in several mouse models—not only of dry eye disease, but also of diabetes and other conditions. They found that in mice with dry eye disease, the cornea activated expression of the gene SPARC. They also found that higher levels of SPARC protein were associated with better healing.

“We conducted single-cell RNA sequencing to identify genes important to maintaining the health of the cornea, and we believe that a few of them, particularly SPARC, may provide potential therapeutic targets for treating dry eye disease and corneal injury,” says first author Joseph B. Lin, an MD/PhD student in Apte’s lab.

“These stem cells are important and resilient and a key reason corneal transplantation works so well,” Apte explains. “If the proteins we’ve identified don’t pan out as therapies to activate these cells in people with dry eye syndrome, we may even be able to transplant engineered limbal stem cells to prevent corneal injury in patients with dry eyes.”

The National Eye Institute, the National Institute of Diabetes and Digestive and Kidney Diseases, and the National Institute of General Medical Sciences of the National Institutes of Health supported the work. Additional funding came from the Jeffrey T. Fort Innovation Fund, a Centene Corp. contract for the Washington University-Centene ARCH Personalized Medicine Initiative, and Research to Prevent Blindness.

Source: Washington University in St. Louis

A MyRocks Use Case


I wrote this post on MyRocks because I believe it is the most interesting new MySQL storage engine to have appeared over the last few years. Although MyRocks is very efficient for writes, I chose a more generic workload to highlight a different MyRocks use case.

The use case is the TPC-C benchmark, executed not on a high-end server but on a lower-spec virtual machine that is I/O limited, as is typical with AWS EBS volumes. I decided to use a virtual machine with two CPU cores, four GB of memory, and storage limited to a maximum of 1,000 IOPS of 16 KB each. The storage device has performance characteristics pretty similar to an AWS gp2 EBS volume of about 330 GB in size. I emulated these limits using the KVM iotune settings in my lab:

<iotune>
     <total_iops_sec>1000</total_iops_sec>
     <total_bytes_sec>16384000</total_bytes_sec>
</iotune>

MyRocks and RocksDB

If you wonder what the difference is between MyRocks and RocksDB, consider MyRocks the piece of code, or the glue, that allows MySQL to store data in RocksDB. RocksDB is a very efficient key-value store based on LSM trees. MyRocks stores table rows in RocksDB using an index id value concatenated with the primary key as the key, and the internal MySQL binary row representation as the value. MyRocks handles indexes in a similar fashion. There are obviously tons of details, but that is the main principle: inside MyRocks, there is an embedded instance of RocksDB running.
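
To make this concrete, here is a minimal sketch (the table and column names are made up) showing that using MyRocks is just a matter of picking the storage engine, after verifying it is available:

SHOW ENGINES;   -- look for ROCKSDB with Support = YES or DEFAULT

CREATE TABLE t_rocks (
  id INT NOT NULL,
  payload VARCHAR(32),
  PRIMARY KEY (id)
) ENGINE=ROCKSDB;

Internally, the rows of this table end up in RocksDB under keys built from the table's index id and the primary key value.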


Dataset

The TPC-C dataset I used was at a scale of 200. As seen in the figure below, the dataset sizes are very different with InnoDB vs MyRocks. While with InnoDB the size is 20 GB, it is only 4.3 GB with MyRocks. This is a testament to the efficient compression capabilities of MyRocks.

InnoDB and MyRocks dataset sizes

A keen observer will quickly realize that the compressed dataset size with MyRocks is roughly the same as the amount of memory of the virtual machine. This is not an accident; it is on purpose. I want to illustrate, with a perhaps obvious use case, that you can’t rely on general rules like “InnoDB is faster for reads” or “MyRocks is only good for writes”. A careful answer would be: “it depends…”
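
If you want to compare apparent dataset sizes on your own server, information_schema gives a quick approximation (a sketch; the sysbench schema name is an assumption, and on-disk size is best confirmed on the filesystem):

SELECT table_name, engine,
       ROUND((data_length + index_length) / 1024 / 1024) AS size_mb
  FROM information_schema.tables
 WHERE table_schema = 'sysbench';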


TPC-C on MyRocks

In order to run the sysbench TPC-C script against MyRocks, you need to use a binary collation and the READ COMMITTED isolation level. You must also avoid foreign key constraints. A typical sysbench invocation looks like this:

./tpcc.lua --mysql-host=10.0.4.112 --mysql-user=sysbench --mysql-password=sysbench --mysql-ssl=off \
   --mysql-db=sysbench --threads=4 --scale=200 --use_fk=0 --mysql_storage_engine=rocksdb \
   --mysql_table_options="COLLATE latin1_bin" --trx_level=RC --report-interval=10 --time=3600 run

I used a rocksdb_block_cache_size of 512 MB. I wanted most of the memory to be available for the filesystem cache, where the compressed SST files will be cached. The block cache just needs to be large enough to keep the index and filter blocks in memory. In terms of compression, the relevant settings in the column family options are:

compression_per_level=kLZ4Compression;bottommost_compression=kZSTD;compression_opts=-14:1:0

With these settings, MyRocks uses ZSTD (level 1) compression for the bottom level and LZ4 for the upper levels. The bottom-level compression is really critical, as that level contains most of the data.

Being an LSM-type storage engine, RocksDB must frequently perform level compactions. Level compactions consume IOPS, and in environments where IOPS are scarce, they hurt performance. Fortunately, RocksDB has the variable rocksdb_rate_limiter_bytes_per_sec to limit the impact of compaction: the IO bandwidth used by the background compaction threads is capped at this value. The following figure illustrates the impact.

TPC-C transaction rate under different compaction rate limits

As the filesystem cache and the block cache warm up, the TPC-C transaction rate rises from 50 to around 175 trx/s. After roughly 500 seconds, the need for compaction arises and performance drops. With no rate limit (0), the background threads consume too many IOPS and compaction adversely affects the workload. With lower values of rocksdb_rate_limiter_bytes_per_sec, the impact is reduced and the compactions are spread over longer periods of time.

For this environment, a rate limit of 4 MB/s yields the smallest performance drops. Once warmed, the performance level never fell below 100 trx/s. If you set rocksdb_rate_limiter_bytes_per_sec too low, like 1 MB/s, compaction cannot keep up and processing has to stall for some time. You should allocate enough bandwidth for compaction to avoid these stalls.
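
For reference, this is how such a limit could be configured (a sketch; on builds where the variable is not dynamic, it has to go in my.cnf and requires a restart):

[mysqld]
# cap the IO bandwidth of background compaction at 4 MB/s
rocksdb_rate_limiter_bytes_per_sec = 4194304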

Long term stability

Over time, as data accumulates in the RocksDB LSM tree, performance can degrade. Using the 2 MB/s rate limiter, I pushed the runtime to 10 hours and observed very little degradation, as shown in the following figure.

MyRocks performance stability

There are of course many compaction events, but the performance baseline remains stable.


MyRocks Vs InnoDB

Now, how does this workload perform on InnoDB? InnoDB is more IO-bound than MyRocks; essentially, the 20 GB dataset is large relative to the 3 GB buffer pool.

MyRocks Vs InnoDB

The compaction events diminish MyRocks performance, but even then the transaction rate stays well above InnoDB's. Over the course of one hour, InnoDB executed 125k transactions while MyRocks achieved in excess of 575k transactions. Even when InnoDB uses compression (CMP8k), the performance level is still much lower.

Conclusion

I hope this post has raised your interest in the MyRocks storage engine. If you are paying too much for cloud-based storage and IOPS, make sure you evaluate MyRocks, as it has excellent compression capabilities and is IO efficient.

Note: all the raw results, scripts, and configuration files used for this post can be found on GitHub.


Making Black Powder


Warning: these procedures create a low explosive. You can hurt yourself or others if you do something stupid.

When I lost my home, I had to move into temporary housing for a while, and I could not take my firework chemicals with me. I needed to dispose of them, and the method I chose was to simply ignite the burnables and dispose of them that way.

I would move across the road to a safe location and pour a pound or more of BP on the ground. I would add a slow fuse, weighing it down with rocks to keep it stretched out, then light the end of the fuse and move away. Thirty to sixty seconds later the pile would go up with a POOF, a cloud of smoke, and a flash of light. No BOOM. Very safe.

The guy I was working with asked to do a pile while I kept working on what I needed to do. Even though he had watched me weigh the fuse down to stretch it out, he just stuck the fuse in the pile like a birthday candle. When he lit it, sparks from the fuse landed on the pile and it went off.

He suffered major burns. His sunglasses melted. All exposed skin was blistered. EMS was called. He was transported to the local hospital and from there a life flight took him to the regional burn center.

THIS STUFF IS DANGEROUS. BE CAREFUL.

There are only three components that go into black powder:

  • 75% Potassium Nitrate (KNO3)
  • 10% Sulfur
  • 15% Carbon

All percentages are given BY WEIGHT. While KNO3 makes up 75% of the BP mixture by weight, it is not the largest component by volume; that distinction goes to the carbon.

You will need to source the first two, but you can make the carbon yourself. The cheapest choice is to buy the KNO3 in pellet form. It is used for many things, one of which is making fuel from vegetable oils; if you wanted, you could have a 55-gallon drum of the stuff delivered to you.

Pure sulfur is also easy to purchase. It is used in many processes.

Making carbon is actually making charcoal. Not “charcoal briquettes” but actual charcoal. To make charcoal you need a good hardwood, a heat source, and an airtight container.

What you are going to do, in essence, is cook your hardwood into charcoal. Start by cutting your hardwood into small chunks; you want as much surface area as possible without making chips or sawdust. Once this is done, you need to cook it.

Find an airtight metal can; I purchased an empty paint can from the hardware store. Poke one small hole in the center of the lid, fill the container with your hardwood, then put the lid with the hole back on.

As the wood cooks it will emit gasses. The hole allows those gasses to escape, and the gasses also displace all of the oxygen inside the can. One of the cool things you can do is light the escaping gasses on fire.

You cook your wood until no more gas is escaping.

If you did everything correctly you should have no ash in the can and lots of charcoal. You might still have some wood inside that charcoal.

Now that you have these, you need to process them into something usable. For that you will need a mill, and the best option is to purchase a ball mill. You want a ball mill that is non-sparking, which means avoiding plastics that might build up an electrical charge. Remember the life flight above, but add to it a BOOM when a spark sets off your mixture while it is contained.

For the best black powder you want a homogeneous mixture. The smaller the particles of the mixture the more homogeneous the mixture will be when mixed properly.

To this end, we want to turn our three components into a fine powder. In general, at this stage you want a powder that will pass through a #100 sieve. The following is an example of grading sieves; there are other, cheaper options. You will need grading sieves in several sizes.

Gilson ASTM 3″ diameter Round Brass-Stainless Test Sieves meet the requirements of ASTM E 11. Brass Frame with Stainless Steel Cloth is a popular choice that offers extended service and…

One of the nice things about the type of sieve listed above is that you can put your material at the top and sift it. Each finer mesh stops your material, and you end up with your powder properly graded.

To make this powder you have to mill the KNO3, sulfur, and carbon separately. There are different ways of doing this; I’m only going to describe the ball mill method, as I feel it is the safest.

You need to put your material into the ball mill with some sort of media. You might be tempted to use lead balls. DON’T. At this stage you are best served by a hard, non-sparking material; I chose stainless steel balls. You need a mix of balls from about 1/4″ to 3/4″ in diameter. You can find these for sale on Amazon and other sources.

The amount of raw material and media is dependent on the size of your ball mill.

Once loaded and sealed, run your mill until the raw material is a fine powder. Pass it through your #100 sieve. Anything that doesn’t pass through goes back into the mill for another run. Once all your material passes through your sieve carefully package it in an airtight container. You don’t want it to absorb water from the air.

Now wash your ball mill and media. Make sure there is no residue left behind and then let it dry. You do not want the different chemicals to mix while milling.

Repeat the process for the other two chemicals. When you are milling the charcoal you might find bits of wood that have not carbonized; just return them to your charcoal can to wait for your next run.

Remember to wear a mask while working with powders this fine. They will get into your lungs if you don’t.

Now that you have the three powders you need to measure them carefully by weight.

I chose to use a triple beam scale, which is accurate to 0.1 grams. Our reloading scales are normally good to 0.1 gr, and at about 15.4 grains per gram, 0.1 gr works out to roughly 0.0065 g.

This means that your reloading scale is more than accurate enough. What might be an issue is the total amount you can weigh on your scale or the volume its pan can hold. Just be aware.

If you are using any type of scale, make sure you tare your scale and container.

You should now have 3 airtight containers full of powdered chemicals. You should have a spotlessly clean ball mill.

Take your stainless steel media and put it in a safe place. Think of this as removing ammunition when you are working with a firearm and don’t want an accidental boom.

Now you need to mix the three chemicals. Use your scale and measure out 7.5 g of your powdered KNO3, 1.0 g of sulfur, and 1.5 g of carbon/charcoal into your mill; that is a 10-gram batch at the 75/10/15 ratio.

Add your non-sparking media to the ball mill. If you use hard lead balls you will turn your KNO3 gray, which means lead is being transferred; I don’t like the lead ball method. Brass works very well and does not spark, but it is expensive. The one most people use is ceramic; it should not spark, though there are arguments within the fireworks community as to the truth of this. Finally, there is non-sparking stainless steel. The preferred stainless steel alloys for this are 304 and 316.

Remember, if there is a spark in your ball mill at this point, it will go boom.

Now the safety part of the next step.

Get yourself a long extension cord; 100 ft is best. Run it out the full 100 ft from the power source, away from all buildings and people. Make sure that the cord is NOT energized. Do NOT plug in the extension cord. Put your jar on the ball mill drive and turn on the ball mill’s switch.

NOTHING SHOULD HAPPEN

If the ball mill starts up, turn it off and go unplug the extension cord.

Now that the mill is on but not running, go back to the other end of the extension cord and plug it in. This should turn the mill on. You might be able to hear it running. Hopefully you don’t see it running.

Remember all those videos of idiots putting Tannerite inside things and then shooting said things, only to find stuff flying at high speed toward them? You just filled a jar with an explosive and projectiles. If it goes boom, things WILL fly. Don’t be where those speeding things can hit you.

Let the mill run for about an hour. You want a good homogeneous mixture.

This mixture is very flammable. If you put a spark to it, it will flash. Don’t do it!

If you want to test a small amount make sure it is a small amount and you use something that keeps you at a distance when you light the powder.

This is NOT gunpowder; this is BP “meal”! There are a couple more steps.

The meal must be turned into actual gunpowder. This is done by pressing it into pucks and then processing the pucks.

Take your black powder meal and add a small amount of water to it. You want just enough water to be able to press it into pucks. If you put in enough water that it looks wet, you’ve added too much.

No, I can’t tell you how much.

KNO3 is water soluble. This means that as you add water to your BP meal the KNO3 will dissolve into the water. When you press your puck any excess water will be squeezed out and this will carry away some of your KNO3 which changes the ratios of your BP.

One method used is to spritz a fine mist over the powder. One spritz might be enough for this amount of BP meal.

Now you need to make your puck.

You need a container to hold the puck. I used a piece of 2″ PVC pipe about 2.5 inches long, set on a piece of 1/4″-thick aluminum about 5 inches square. I put a small round piece of wood inside the pipe at the bottom and then added my BP meal on top until there was about 3/4 of an inch to an inch of powder.

Then I put another wooden round, one that fits snugly in the pipe, over the top.

Today, because I have a machine shop, I would take a 1/2″ aluminum plate and mill it to have a boss in the center that exactly fits the pipe. I would make an aluminum plug that exactly fits the pipe and use that instead of working with wood.

Now press the pipe. I used a big C-clamp the first time; today I would use my arbor press. You want to squeeze hard enough that the puck sticks together on its own. You can use something like this cheese press to compress your puck.

This is a fancy press that is designed to provide a constant pressure. You don’t need all that fancy. You just need a long lever and a single down rod to press into the top plate of the puck mold.

Because the BP meal is damp, it is MOSTLY safe from sparks. This is fairly safe, as these things go.

Now you need to dry your pucks. Place them on a screen to sun dry. You want both the top and bottom exposed to air and you want to do this in a location where there is no chance of a spark. I’ve used a furnace filter but an actual window screen is better.

Now you have a bunch of very dry and hard pucks of BP. And it is actual Black Powder now.

But it isn’t really usable yet; what you need now is to create granules sized for your intended use.

Take one of your pucks and put it in a spark-proof baggie to control where the pieces go. Then, using non-sparking equipment, hammer that puck lightly until it breaks up.

I use a wooden mallet and zip lock bags on an aluminum block.

Hammer until you have grains of black powder that are about the size you want.

Black powder is sorted into different grades:

  • Whaling – 4 mesh (4.74 mm)
  • Cannon – 6 mesh (3.35 mm)
  • Saluting – 10 mesh (2.0 mm)
  • Fg – 12 mesh (1.7 mm)
  • FFg – 16 mesh (1.18 mm)
  • FFFg – 20 mesh (0.85 mm)
  • FFFFg – 40 mesh (0.47 mm)
  • FFFFFg – 75 mesh (0.149 mm)

To make our meal we used a 100 mesh sieve. For FFFFg you will need a 40 mesh and a 75 mesh sieve. The grains of BP that pass through the 40 mesh but not through the 75 mesh are FFFFg black powder.

You need two sieves in order to properly grade your powder.

The smaller the size of your powder, the faster it burns.

And there you have it. How to make black powder.

When I last did this, I purchased my KNO3 from a company selling equipment and supplies for biodiesel; his only issue was that 10 lbs was a small order. Other than that, no issues. I picked up the sulfur someplace; it was no big deal. I grabbed the hardwood from the firewood pile to make the charcoal.

It took about three days to go through the entire process. Once I was done, I used the powder to make BP rockets and a couple of BP salutes (types of fireworks).

Be safe if you try this. You are the responsible person. You shouldn’t take advice from randos on the web.


Impact of DDL Operations on Aurora MySQL Readers


Recently I came across an interesting investigation about long-running transactions getting killed on an Aurora reader instance. In this article, I will explain why it is advisable to avoid long-running transactions on Aurora readers when executing frequent DDL operations on the writer, or at least to be aware of how a DDL can impact your Aurora readers.

Aurora uses a shared volume, often called a cluster volume, that manages the data for all the DB instances that are part of the cluster. Here, DB instances could be a single Aurora instance or multiple instances (a writer and Aurora read replicas) within a cluster.

Aurora replicas connect to the same storage volume as the primary DB instance and support only read operations. So if you add a new Aurora replica, it will not make a new copy of the table data; instead, it connects to the shared cluster volume which contains all the data.

This could lead to an issue on replica instances when handling the DDL operations.

Below is one such example.

mysql> SELECT AURORA_VERSION();
+------------------+
| AURORA_VERSION() |
+------------------+
| 3.02.2           |
+------------------+
1 row in set (0.22 sec)


Start a transaction on the reader:

mysql> SELECT connection_id();
+-----------------+
| connection_id() |
+-----------------+
|              21 |
+-----------------+
1 row in set (0.27 sec)

mysql> SELECT * FROM t WHERE old_column not like '%42909700340-70078987867%';


While the transaction is ongoing on the reader, execute any DDL against the same table on the writer:

mysql> ALTER TABLE t ADD COLUMN new_column VARCHAR(32);


Check the status on the reader; the transaction will have been terminated forcefully:

mysql> SELECT * FROM t WHERE old_column not like '%42909700340-70078987867%';
ERROR 2013 (HY000): Lost connection to MySQL server during query

mysql> SELECT connection_id();
ERROR 2006 (HY000): MySQL server has gone away
No connection. Trying to reconnect...
Connection id:    22
Current database: db
+-----------------+
| connection_id() |
+-----------------+
|              22 |
+-----------------+
1 row in set (3.19 sec)


Now, let’s see what happens when a backup is running from a reader node and the writer receives a DDL for the particular table being backed up.

Take a logical backup of a table using mydumper:

mydumper --success-on-1146 --outputdir=/backups/ --verbose=3 --host=aurora-reader --ask-password --tables-list=db.t

While the backup is ongoing on the reader, execute any DDL against the same table on the writer.

mysql> ALTER TABLE t ADD COLUMN new_column VARCHAR(32);

Check the status of the backup:

** Message: 16:04:51.108: Thread 1 dumping data for `db`.`t`          into /backups/db.t.00000.sql| Remaining jobs: 6
..
..
** Message: 16:04:51.941: Waiting threads to complete
** Message: 16:04:51.941: Thread 2 shutting down
** (mydumper:44955): CRITICAL **: 16:04:55.268: Could not read data from db.t: Lost connection to MySQL server during query

So what is the issue?

As stated above, Aurora does not use binary log-based replication to replicate data to the readers. The underlying storage is the same for all the instances (writer and readers) within a cluster, and Aurora handles it with, let’s say, “magic”.

Now, because of this “magic” in Aurora, when you perform any DDL operation on the writer instance, the reader instances are forced to terminate any long-running transactions so that the metadata lock can be acquired and the DDL operation can continue on the writer instance.

Hence, if you are using Aurora replicas for logical backups (mysqldump/mydumper), or if you are running long jobs on a reader instance, you may encounter the issue mentioned above.
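
If you need to run DDL while readers are busy, one way to avoid surprises is to first check for long-running transactions on the reader (a sketch using the standard information_schema view, which is also available on Aurora MySQL):

SELECT trx_mysql_thread_id, trx_started, trx_query
  FROM information_schema.innodb_trx
 ORDER BY trx_started;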

To understand this better, let’s see what happens when we perform a DDL operation in a binary log-based replication environment versus the Aurora replication environment. The following are the high-level steps when any DDL gets executed.

Binary log-based replication:

  • On the primary, ALTER TABLE will try to acquire the metadata lock
  • Once the lock is acquired the ALTER TABLE progresses
  • Once the ALTER TABLE operation completes, the DDL statement will be written to the binary log
  • On the replicas, the IO thread will copy this event to the local relay log
  • The SQL thread will apply the query from the relay log
  • On the replica, it will also acquire the global metadata lock
  • Once the lock is acquired, the ALTER TABLE starts executing on the replica

Aurora replication:

  • On the writer, the ALTER TABLE will try to acquire the metadata lock
  • At the same time, it will check if there are any open transactions on any of the reader nodes; if so, it will kill those transactions forcefully
  • Once the metadata lock is acquired, the ALTER TABLE progresses
  • After the ALTER TABLE completes, the modified structure will be visible to the replicas because of the same underlying storage

What are the issues?

  1. If you are performing frequent DDL operations in your database, it is not recommended to take logical backups from an Aurora reader.
  2. If transactions run for a long time, they may get killed.

What is the solution?

Create an external replica of the Aurora cluster using binary log-based replication. This replica can be used to take logical backups or to execute long-running queries that will not be interrupted by DDL operations on the Aurora writer instance.

You may follow the Percona blog to create an external replica from Aurora using MyDumper or review the AWS documentation page.
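
Before pointing an external replica at the cluster, it is worth confirming that binary logging is actually enabled on the writer (a sketch; on Aurora, binlog_format is set through the DB cluster parameter group and a change requires a writer reboot):

SHOW VARIABLES LIKE 'log_bin';        -- must be ON for an external replica
SHOW VARIABLES LIKE 'binlog_format';  -- ROW is generally the safest choice
SHOW MASTER STATUS;                   -- gives the binlog file/position to start from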


A Guide To Command-Line Data Manipulation


Allow me to preface this article by saying that I’m not a terminal person. I don’t use Vim. I find sed, grep, and awk convoluted and counter-intuitive. I prefer seeing my files in a nice UI. Despite all that, I got into the habit of reaching for command-line interfaces (CLIs) when I had small, dedicated tasks to complete. Why? I’ll explain all of that below. In this article, you’ll also learn how to use a CLI tool named Miller to manipulate data from CSV, TSV and/or JSON files.

Why Use The Command Line?

Everything that I’m showing here can be done with regular code. You can load the file, parse the CSV data, and then transform it using regular JavaScript, Python, or any other language. But there are a few reasons why I reach out for command-line interfaces (CLIs) whenever I need to transform data:

  • Easier to read.
    It is faster (for me) to write a script in JavaScript or Python for my usual data processing. But, a script can be confusing to come back to. In my experience, command-line manipulations are harder to write initially but easier to read afterward.
  • Easier to reproduce.
    Thanks to package managers like Homebrew, CLIs are much easier to install than they used to be. No need to figure out the correct version of Node.js or Python; the package manager takes care of that for you.
  • Ages well.
    Compared to modern programming languages, CLIs are old. They change a lot more slowly than languages and frameworks.

What Is Miller?

The main reason I love Miller is that it’s a standalone tool. There are many great tools for data manipulation, but every other tool I found was part of a specific ecosystem. The tools written in Python required knowing how to use pip and virtual environments; for those written in Rust, it was cargo, and so on.

On top of that, it’s fast. The data files are streamed, not held in memory, which means that you can perform operations on large files without freezing your computer.

As a bonus, Miller is actively maintained; John Kerl really keeps on top of PRs and issues. As a developer, I always get a satisfying feeling when I see a neat and maintained open-source project with great documentation.

Installation

  • Linux: apt-get install miller or Homebrew.
  • macOS: brew install miller using Homebrew.
  • Windows: choco install miller using Chocolatey.

That’s it, and you should now have the mlr command available in your terminal.

Run mlr help topics to see if it worked. This will give you instructions to navigate the built-in documentation. You shouldn’t need it, though; that’s what this tutorial is for!

How mlr Works

Miller commands work the following way:

mlr [input/output file formats] [verbs] [file]

Example: mlr --csv filter '$color != "red"' example.csv

Let’s deconstruct:

  • --csv specifies the input file format. It’s a CSV file.
  • filter is what we’re doing on the file, called a “verb” in the documentation. In this case, we keep only the rows where the field color is not set to "red". There are many other verbs, like sort and cut, that we’ll explore later.
  • example.csv is the file that we’re manipulating.

Operations Overview

We can use these verbs to run specific operations on our data. There’s a lot we can do. Let’s explore.

Data

I’ll be using a dataset of IMDb ratings for American TV dramas created by The Economist. You can download it here or find it in the repo for this article.

Note: For the sake of brevity, I’ve renamed the file from IMDb_Economist_tv_ratings.csv to tv_ratings.csv.

Above, I mentioned that every command contains a specific operation or verb. Let’s learn our first one, called head. What it does is show you the beginning of the file (the “head”) rather than print the entire file in the console.

You can run the following command:

mlr --csv head ./tv_ratings.csv

And this is the output you’ll see:

titleId,seasonNumber,title,date,av_rating,share,genres
tt2879552,1,11.22.63,2016-03-10,8.489,0.51,"Drama,Mystery,Sci-Fi"
tt3148266,1,12 Monkeys,2015-02-27,8.3407,0.46,"Adventure,Drama,Mystery"
tt3148266,2,12 Monkeys,2016-05-30,8.8196,0.25,"Adventure,Drama,Mystery"
tt3148266,3,12 Monkeys,2017-05-19,9.0369,0.19,"Adventure,Drama,Mystery"
tt3148266,4,12 Monkeys,2018-06-26,9.1363,0.38,"Adventure,Drama,Mystery"
tt1837492,1,13 Reasons Why,2017-03-31,8.437,2.38,"Drama,Mystery"
tt1837492,2,13 Reasons Why,2018-05-18,7.5089,2.19,"Drama,Mystery"
tt0285331,1,24,2002-02-16,8.5641,6.67,"Action,Crime,Drama"
tt0285331,2,24,2003-02-09,8.7028,7.13,"Action,Crime,Drama"
tt0285331,3,24,2004-02-09,8.7173,5.88,"Action,Crime,Drama"

This is a bit hard to read, so let’s make it easier on the eye by adding --opprint.

mlr --csv --opprint head ./tv_ratings.csv

The resulting output will be the following:

titleId   seasonNumber title            date          av_rating   share   genres
tt2879552      1       11.22.63         2016-03-10    8.489       0.51    Drama,Mystery,Sci-Fi
tt3148266      1       12 Monkeys       2015-02-27    8.3407      0.46    Adventure,Drama,Mystery
tt3148266      2       12 Monkeys       2016-05-30    8.8196      0.25    Adventure,Drama,Mystery
tt3148266      3       12 Monkeys       2017-05-19    9.0369      0.19    Adventure,Drama,Mystery
tt3148266      4       12 Monkeys       2018-06-26    9.1363      0.38    Adventure,Drama,Mystery
tt1837492      1       13 Reasons Why   2017-03-31    8.437       2.38    Drama,Mystery
tt1837492      2       13 Reasons Why   2018-05-18    7.5089      2.19    Drama,Mystery
tt0285331      1       24               2002-02-16    8.5641      6.67    Action,Crime,Drama
tt0285331      2       24               2003-02-09    8.7028      7.13    Action,Crime,Drama
tt0285331      3       24               2004-02-09    8.7173      5.88    Action,Crime,Drama

Much better, isn’t it?

Note: Rather than typing --csv --opprint every time, we can use the --c2p option, which is a shortcut.
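
For example, the command above becomes:

mlr --c2p head ./tv_ratings.csv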

Chaining

That’s where the fun begins. Rather than run multiple commands, we can chain the verbs together by using the then keyword.

Remove columns

You can see that there’s a titleId column that isn’t very useful. Let’s get rid of it using the cut verb.

mlr --c2p cut -x -f titleId then head ./tv_ratings.csv

It gives you the following output:

seasonNumber  title            date         av_rating   share    genres
     1      11.22.63          2016-03-10    8.489       0.51     Drama,Mystery,Sci-Fi
     1      12 Monkeys        2015-02-27    8.3407      0.46     Adventure,Drama,Mystery
     2      12 Monkeys        2016-05-30    8.8196      0.25     Adventure,Drama,Mystery
     3      12 Monkeys        2017-05-19    9.0369      0.19     Adventure,Drama,Mystery
     4      12 Monkeys        2018-06-26    9.1363      0.38     Adventure,Drama,Mystery
     1      13 Reasons Why    2017-03-31    8.437       2.38     Drama,Mystery
     2      13 Reasons Why    2018-05-18    7.5089      2.19     Drama,Mystery
     1      24                2002-02-16    8.5641      6.67     Action,Crime,Drama
     2      24                2003-02-09    8.7028      7.13     Action,Crime,Drama
     3      24                2004-02-09    8.7173      5.88     Action,Crime,Drama

Fun Fact

This is how I first learned about Miller! I was playing with a CSV dataset for https://details.town/ that had a useless column, and I looked up “how to remove a column from CSV command line.” I discovered Miller, loved it, and then pitched an article to Smashing Magazine. Now here we are!

Filter

This is the verb that I first showed earlier. We can remove all the rows that don’t match a specific expression, letting us clean our data with only a few characters.

If we only want the ratings of the first season of every series in the dataset, this is how you do it:

mlr --c2p filter '$seasonNumber == 1' then head ./tv_ratings.csv

Sorting

We can sort our data based on a specific column, just as we would in a UI like Excel or macOS Numbers. Here’s how you would sort the data to find the series with the highest rating:

mlr --c2p sort -nr av_rating then head ./tv_ratings.csv

The resulting output will be the following:

titleId   seasonNumber title                         date         av_rating  share   genres
tt0098887      1       Parenthood                    1990-11-13   9.6824     1.68    Comedy,Drama
tt0106028      6       Homicide: Life on the Street  1997-12-05   9.6        0.13    Crime,Drama,Mystery
tt0108968      5       Touched by an Angel           1998-11-15   9.6        0.08    Drama,Family,Fantasy
tt0903747      5       Breaking Bad                  2013-02-20   9.554      18.95   Crime,Drama,Thriller
tt0944947      6       Game of Thrones               2016-05-25   9.4943     15.18   Action,Adventure,Drama
tt3398228      5       BoJack Horseman               2018-09-14   9.4738     0.45    Animation,Comedy,Drama
tt0103352      3       Are You Afraid of the Dark?   1994-02-23   9.4349     2.6     Drama,Family,Fantasy
tt0944947      4       Game of Thrones               2014-05-09   9.4282     11.07   Action,Adventure,Drama
tt0976014      4       Greek                         2011-03-07   9.4        0.01    Comedy,Drama
tt0090466      4       L.A. Law                      1990-04-05   9.4        0.1     Drama

We can see that Parenthood, from 1990, has the highest rating on IMDb — who knew!

Saving Our Operations

By default, Miller only prints your processed data to the console. If we want to save it to another CSV file, we can use the > operator.

If we wanted to save our sorted data to a new CSV file, this is what the command would look like:

mlr --csv sort -nr av_rating ./tv_ratings.csv > sorted.csv

Convert CSV To JSON

Most of the time, you don’t use CSV data directly in your application. You convert it to a format that is easier to read or doesn’t require additional dependencies, like JSON.

Miller gives you the --c2j option to convert your data from CSV to JSON. Here’s how to do this for our sorted data:

mlr --c2j sort -nr av_rating ./tv_ratings.csv > sorted.json
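
Based on the sorted output we saw earlier, the first record of sorted.json should look roughly like this (a sketch; exact numeric formatting may vary):

[
  {
    "titleId": "tt0098887",
    "seasonNumber": 1,
    "title": "Parenthood",
    "date": "1990-11-13",
    "av_rating": 9.6824,
    "share": 1.68,
    "genres": "Comedy,Drama"
  }
]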

Case study: Top 5 Athletes With Highest Number Of Medals In Rio 2016

Let’s apply everything we learned above to a real-world use case. Say you have a detailed dataset of every athlete who participated in the 2016 Olympic Games in Rio, and you want to know which five athletes won the most medals.

First, download the athlete data as a CSV, then save it in a file named athletes.csv.

Let’s open up the following file:

mlr --c2p head ./athletes.csv

The resulting output will be something like the following:

id        name                nationality sex    date_of_birth height weight sport      gold silver bronze info
736041664 A Jesus Garcia      ESP         male   1969-10-17    1.72    64     athletics    0    0      0      -
532037425 A Lam Shin          KOR         female 1986-09-23    1.68    56     fencing      0    0      0      -
435962603 Aaron Brown         CAN         male   1992-05-27    1.98    79     athletics    0    0      1      -
521041435 Aaron Cook          MDA         male   1991-01-02    1.83    80     taekwondo    0    0      0      -
33922579  Aaron Gate          NZL         male   1990-11-26    1.81    71     cycling      0    0      0      -
173071782 Aaron Royle         AUS         male   1990-01-26    1.80    67     triathlon    0    0      0      -
266237702 Aaron Russell       USA         male   1993-06-04    2.05    98     volleyball   0    0      1      -
382571888 Aaron Younger       AUS         male   1991-09-25    1.93    100    aquatics     0    0      0      -
87689776  Aauri Lorena Bokesa ESP         female 1988-12-14    1.80    62     athletics    0    0      0      -

Optional: Clean Up The File

The CSV file has a few fields we don’t need. Let’s clean it up by removing the info, id, weight, and date_of_birth columns.

mlr --csv -I cut -x -f id,info,weight,date_of_birth athletes.csv

Now we can move to our original problem: we want to find who won the highest number of medals. We have how many of each medal (bronze, silver, and gold) the athletes won, but not the total number of medals per athlete.

Let’s compute a new value called medals which corresponds to this total number (bronze, silver, and gold added together).

mlr --c2p put '$medals=$bronze+$silver+$gold' then head ./athletes.csv

It gives you the following output:

name                 nationality   sex      height  sport        gold silver bronze medals
A Jesus Garcia       ESP           male     1.72    athletics      0    0      0      0
A Lam Shin           KOR           female   1.68    fencing        0    0      0      0
Aaron Brown          CAN           male     1.98    athletics      0    0      1      1
Aaron Cook           MDA           male     1.83    taekwondo      0    0      0      0
Aaron Gate           NZL           male     1.81    cycling        0    0      0      0
Aaron Royle          AUS           male     1.80    triathlon      0    0      0      0
Aaron Russell        USA           male     2.05    volleyball     0    0      1      1
Aaron Younger        AUS           male     1.93    aquatics       0    0      0      0
Aauri Lorena Bokesa  ESP           female   1.80    athletics      0    0      0      0
Ababel Yeshaneh      ETH           female   1.65    athletics      0    0      0      0

Sort by the highest number of medals by adding a sort.

mlr --c2p put '$medals=$bronze+$silver+$gold' \
    then sort -nr medals \
    then head ./athletes.csv

The resulting output will be the following:

name              nationality  sex     height  sport       gold silver bronze medals
Michael Phelps    USA          male    1.94    aquatics      5    1      0      6
Katie Ledecky     USA          female  1.83    aquatics      4    1      0      5
Simone Biles      USA          female  1.45    gymnastics    4    0      1      5
Emma McKeon       AUS          female  1.80    aquatics      1    2      1      4
Katinka Hosszu    HUN          female  1.75    aquatics      3    1      0      4
Madeline Dirado   USA          female  1.76    aquatics      2    1      1      4
Nathan Adrian     USA          male    1.99    aquatics      2    0      2      4
Penny Oleksiak    CAN          female  1.86    aquatics      1    1      2      4
Simone Manuel     USA          female  1.78    aquatics      2    2      0      4
Alexandra Raisman USA          female  1.58    gymnastics    1    2      0      3

Restrict to the top 5 by adding -n 5 to your head operation.

mlr --c2p put '$medals=$bronze+$silver+$gold' \
    then sort -nr medals \
    then head -n 5 ./athletes.csv

You will end up with the following file:

name             nationality  sex      height  sport        gold silver bronze medals
Michael Phelps   USA          male     1.94    aquatics       5     1      0      6
Katie Ledecky    USA          female   1.83    aquatics       4     1      0      5
Simone Biles     USA          female   1.45    gymnastics     4     0      1      5
Emma McKeon      AUS          female   1.80    aquatics       1     2      1      4
Katinka Hosszu   HUN          female   1.75    aquatics       3     1      0      4

As a final step, let’s convert this into a JSON file with the --c2j option.

Here is our final command:

mlr --c2j put '$medals=$bronze+$silver+$gold' \
    then sort -nr medals \
    then head -n 5 ./athletes.csv > top5.json

With a single command, we’ve computed new data, sorted the result, truncated it, and converted it to JSON.

[
  {
    "name": "Michael Phelps",
    "nationality": "USA",
    "sex": "male",
    "height": 1.94,
    "weight": 90,
    "sport": "aquatics",
    "gold": 5,
    "silver": 1,
    "bronze": 0,
    "medals": 6
  }
  // Other entries omitted for brevity.
]

Bonus: If you wanted to show the top 5 women, you could add a filter.

mlr --c2p put '$medals=$bronze+$silver+$gold' then sort -nr medals then filter '$sex == "female"' then head -n 5 ./athletes.csv

You would end up with the following output:

name              nationality   sex       height   sport        gold silver bronze medals
Katie Ledecky     USA           female    1.83     aquatics       4    1      0      5
Simone Biles      USA           female    1.45     gymnastics     4    0      1      5
Emma McKeon       AUS           female    1.80     aquatics       1    2      1      4
Katinka Hosszu    HUN           female    1.75     aquatics       3    1      0      4
Madeline Dirado   USA           female    1.76     aquatics       2    1      1      4

Conclusion

I hope this article showed you how versatile Miller is and gave you a taste of the power of command-line tools. Feel free to scour the internet for the best CLI next time you find yourself writing yet another random script.
