Creating multi-worksheet Excel files with Simple Excel

https://www.csrhymes.com/img/multi-worksheet-hero.jpg

Published: Oct 13, 2021 by C.S. Rhymes

Recently I had to create a large data export for a project. I like using Spatie's Simple Excel package for this, as it is very simple to use and works well when exporting large amounts of data to a CSV or Excel file, with the ability to stream a download to the browser. This particular project had an additional requirement, though: exporting multiple worksheets of data at once. Luckily, this package allows you to do that too.

The writer object

The Simple Excel package uses the box/spout package under the hood. In the readme it states that you can get to the underlying writer using ->getWriter().

$writer = SimpleExcelWriter::create($pathToCsv)->getWriter();

If we jump to the box/spout package docs, there is a section on Playing with sheets. The docs show how to get the current sheet, set a name for the current sheet, and create a new sheet.

Naming a worksheet

To name a worksheet we can use getCurrentSheet() to get the current sheet with the writer and then use setName() to set the name.

$writer = SimpleExcelWriter::streamDownload('your-export.xlsx')->getWriter();
$nameSheet = $writer->getCurrentSheet();
$nameSheet->setName('Names');

Creating a new worksheet

To create a new sheet we can use addNewSheetAndMakeItCurrent() and we can then use setName() once more to set the name of this new sheet.

$addressSheet = $writer->addNewSheetAndMakeItCurrent();
$addressSheet->setName('Addresses');

Bringing it all together

Now that we know how to do the individual tasks, we can bring it all together.

  • Create a streamDownload using SimpleExcelWriter
  • Get the writer, get the current sheet and name it ‘Names’
  • Add rows of data to the ‘Names’ sheet
  • Create a new sheet and make it the current sheet, before naming it ‘Addresses’
  • Add the header row to ‘Addresses’
  • Add rows of data to the ‘Addresses’ sheet
  • Finally, return the stream to the browser

use Spatie\SimpleExcel\SimpleExcelWriter;

$stream = SimpleExcelWriter::streamDownload('your-export.xlsx');

$writer = $stream->getWriter();

// Set the name of the current sheet to Names
$nameSheet = $writer->getCurrentSheet();
$nameSheet->setName('Names');

// Add rows to the Names sheet
$stream->addRows([
    ['first_name' => 'Boaty', 'last_name' => 'Mc Boatface'],
    ['first_name' => 'Dave', 'last_name' => 'Mc Dave'],
]);

// Create a new sheet and set the name to Addresses
$addressSheet = $writer->addNewSheetAndMakeItCurrent();
$addressSheet->setName('Addresses');

// Manually add header rows to the Addresses sheet
$stream->addRow(['house_number', 'postcode']);

// Add rows to the Addresses sheet
$stream->addRows([
    ['house_number' => '1', 'postcode' => 'AB1 2BC'],
    ['house_number' => '2', 'postcode' => 'AB1 2BD'],
]);

return $stream->toBrowser();

For more information on creating exports in Laravel, check out Using Laravel Resource Collections with exports.

When creating a single worksheet, the Simple Excel package normally creates the header row for us from the array keys, but it seems that when you create a new sheet you need to define the headers for your data yourself.

Here are a couple of screenshots of the outputted Excel file:

The Names Excel worksheet

The Addresses Excel worksheet

Photo by Wilfred Iven on StockSnap

Laravel News Links

How to update large data in Laravel using Commands, Chunking, and Database Transactions

https://42coders.com/storage/88/big_data_sets.jpg

Laravel

Max Hutschenreiter –

Sometimes you need to update the data in your database. The easiest option is to just run an update directly in your MySQL database, but this doesn't always work, especially when you use events or also want to update relations.

Commands

In this case, I recommend creating a command, even for one-time changes.

php artisan make:command YourCommandName

Progress Bar

The first tip would be to use a progress bar. In long-running commands, it’s helpful to see that there is progress.

To show you how, I'll just copy the example from the Laravel documentation:

$users = App\Models\User::all();

$bar = $this->output->createProgressBar(count($users));

$bar->start();

foreach ($users as $user) {
    $this->performTask($user);

    $bar->advance();
}

$bar->finish();

Chunking

This works fine up to a few hundred entries with simple changes. If you want to change more entries with more complexity, you should chunk the results.
The problem is that if you load everything into one Eloquent collection, your RAM will be a limitation. To avoid this, you can use Laravel's built-in chunk method on your queries to iterate through the table in sequences.

App\Models\User::chunk(200, function ($users){
    foreach($users as $user){
        $user->name .= ' :)';
        $user->save();
    }
});

One important thing to understand about the chunk function is how the queries run. In this example, after the first 200 users have been iterated through, the base query is executed against the table again with the LIMIT and OFFSET advanced.
Imagine you have this case:

App\Models\User::where('active', true)
    ->chunk(200, function ($users){
        foreach($users as $user){
            $user->active = false;
            $user->save();
        }
    });

In this code, the first run would go over 200 users, changing the active value to false. On the second run, it would ask the database again for the users where active is true. The problem is that since we just changed the active status of 200 users, we would get the list without them, yet the OFFSET would still shift the result window from 200 to 400. That means we would skip 200 users we actually wanted to change.
Laravel has a function to overcome this problem; it's just important to understand when to use it. The solution in this situation is chunkById:

App\Models\User::where('active', true)
    ->chunkById(200, function ($users){
        foreach($users as $user){
            $user->active = false;
            $user->save();
        }
    });
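The skipping behaviour is easier to see in runnable form. This is not Laravel code, just a small Python sketch that simulates the two strategies against an in-memory "users table"; all names and numbers are illustrative:

```python
# Simulate offset-based chunking (chunk) vs. id-based chunking (chunkById)
# over rows that we deactivate as we go. Illustrative only.

def chunk_by_offset(rows, size):
    """Re-runs 'WHERE active LIMIT size OFFSET n*size' after each pass."""
    touched = []
    offset = 0
    while True:
        active = [r for r in rows if r["active"]]  # base query re-evaluated
        page = active[offset:offset + size]        # but OFFSET still advances
        if not page:
            break
        for r in page:
            r["active"] = False
            touched.append(r["id"])
        offset += size
    return touched

def chunk_by_id(rows, size):
    """Re-runs 'WHERE active AND id > last_id LIMIT size' instead."""
    touched = []
    last_id = 0
    while True:
        page = [r for r in rows if r["active"] and r["id"] > last_id][:size]
        if not page:
            break
        for r in page:
            r["active"] = False
            touched.append(r["id"])
        last_id = page[-1]["id"]
    return touched

users = [{"id": i, "active": True} for i in range(1, 9)]
print(chunk_by_offset([dict(u) for u in users], 2))  # [1, 2, 5, 6] - half are skipped
print(chunk_by_id([dict(u) for u in users], 2))      # [1, 2, 3, 4, 5, 6, 7, 8]
```

With eight active users and a chunk size of two, the offset version only ever touches four of them, which is exactly the bug described above; keying the window on the last seen id avoids it.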

Database Transactions

Now we are able to execute a lot of changes to our models while avoiding the problem of our Eloquent collections becoming too big.
But in the last example, we would still execute an UPDATE statement for every single user in our DB. To reduce the overhead, I found it a good tactic to use transactions.
This allows us to reuse our chunks and commit to the DB once per chunk.

App\Models\User::where('active', true)
    ->chunkById(200, function ($users){
        try {
            DB::beginTransaction();
            
            foreach($users as $user){
                $user->active = false;
                $user->save();
            }
           
            DB::commit();

        } catch (\Exception $e) {
            //handle your error (log ...)
            DB::rollBack();
        }
    });

In this code example, we combine chunkById with database transactions. This can save a lot of time when updating the DB. You can read more about database transactions in the Laravel documentation.

Transactions can cause trouble if not used correctly. If you forget to commit or roll back, you will create nested transactions. You can read more in the blog post.

Combining it all together

To finalize this code example, we can bring the progress bar back in.

$count = App\Models\User::where('active', true)->count();

$bar = $this->output->createProgressBar($count);
$bar->start();

App\Models\User::where('active', true)
    ->chunkById(200, function ($users) use ($bar) {
        try {
            DB::beginTransaction();
            
            foreach($users as $user){
                $user->active = false;
                $user->save();
                $bar->advance();
            }
           
            DB::commit();

        } catch (\Exception $e) {
            //handle your error (log ...)
            DB::rollBack();
            $bar->finish();
        }
    });

$bar->finish();

So this is my strategy for handling updates on bigger data sets. You can adjust the chunk size to your needs and experiment to find what gives you good results; in my experience, anything from 200 to 1,000 is fine.
Sometimes, especially when the calculation for a single entry is more complicated, I see the whole process getting slower with each chunk processed. It starts at around 2 seconds per bar advance and climbs to 30 or 40 seconds. Since I have experienced this across different commands, I am not sure if it's a general issue. If anyone has any info on it, let me know.

Hope this article helps you.

Laravel News Links

Kel-Tec CP33 Pistol – The CyberPunk Plinker You Now Must Own ~ VIDEO

https://www.ammoland.com/wp-content/uploads/2021/10/Shield-Plus-OR-THUMB2.jpg

AmmoLand News can’t get enough of the Kel-Tec CP33 Pistol and soon neither will you.

U.S.A. -(AmmoLand.com)- I always try to be objective in my reviews, but I was obsessed with the Kel-Tec CP33 the moment I saw it; from its Robocop pseudo-subgun/PDW appearance to its capacious 33-round magazine, it was everything 18-year-old me ever wanted in a .22 handgun. But now that I've had a chance to fire nearly 1,000 rounds through the futuristic little gun, is the honeymoon over, or is the CP33 everything I've ever wanted?

Kel-Tec CP33 Pistol in .22lr

If you've read your fair share of gun reviews in the past, it will likely come as no surprise that the answer isn't a simple yes or no. But if you're sitting at the gun counter right now, money in hand, wondering if you should buy one, I'd say go for it if you're looking for a fun range toy. But if you have a different role in mind for the CP33, read on.

Kel-Tec CP33 Pistol
The Kel-Tec CP33 Pistol – American polymer sitting atop American steel. IMG Jim Grant

Before I get into the details of the review, let’s first take a look under the hood to see how the Kel-Tec works. First off, the CP33 is a standard direct blowback-operated, semi-automatic magazine-fed pistol chambered in .22lr. If you know anything about rimfire auto-loaders, this should come as no surprise. Virtually all semi-automatic rimfire guns are blowback-operated because it’s very simple to produce and generally less ammunition-sensitive than locked-breech firearms. So does this mean the Kel-Tec CP33 is no different than a more traditional-looking rimfire pistol like a Ruger MKIV or Browning Buckmark?

Absolutely not. It may share the same method of operation, but by that same measure, all bolt-action rifles are identical. It's not the mechanics of the actual firearm that separate the Kel-Tec from other handguns, but rather its magazine.

Magical Magazine

It’s not just that the CP33’s magazine holds more rounds than virtually any other traditional rimfire handgun, but how the magazine accomplishes this that makes the new Kel-Tec pistol so interesting.

Most rimfire pistols utilize a single stack magazine to feed cartridges to the chamber. By this, I mean literally, a spring-loaded box that situates a straight row of rounds directly beneath one another, not unlike say an M1911. Higher-capacity centerfire pistols like the Glock utilize a staggered column of rounds inside of a magazine whose internal space is roughly 50% wider than the cartridges themselves, but this isn’t practical for rimfire rounds.

The Kel-Tec CP33 magazine is very unique both in function and appearance. IMG Jim Grant

Why? Because the rims themselves tend to snag on each other, leading to a malfunction referred to as rim-lock. This is why the Soviets utilized a pan magazine on their DP-28 LMG chambered in the rimmed 7.62x54r cartridge, and why the capacity of the British Bren gun is limited to 30-rounds. (Although the British did field a 100-round pan magazine like the Soviets in limited numbers.)

So how did Kel-Tec solve this issue? With a coffin-style, dual-staggered column magazine. It’s basically two staggered column magazines combined into one.

But wait, you just said rimfire rounds don’t play well with staggered column magazines!

Indeed I did. And the solution by the engineers at Kel-Tec was to add open side walls to the magazine to allow shooters to properly align any rounds that tend to work themselves into a rim lock situation.

If that seems like a bandaid solution to a much bigger issue, you’re not wrong. It definitely doesn’t completely prevent the issues of rimfire rounds in a stagger column magazine, but it should allow a shooter to alleviate the problem before it becomes one.

But does it actually work?

Kel-Tec CP33 Dozer
Yes, this looks ridiculous, but isn’t that really what the Kel-Tec CP33 is going for anyway? IMG Jim Grant

When loaded properly, absolutely. But that's a bigger caveat than it sounds. It's very easy for an inexperienced shooter to load the magazine in such a way that it looks properly aligned, only to find out 20 rounds in that some of the lower rounds aren't quite lined up. And because the alignment of one round affects all the rest, performing a standard tap-rack-bang malfunction clearing procedure will result in another failure to chamber. Truth be told, getting all the rounds perfectly lined up is more difficult than it looks, but with practice it becomes pretty simple. The best source for how to do so is the Kel-Tec CP33's user manual, and spare Kel-Tec CP33 22LR 33-round magazines are readily available.

But enough about the magazine, let’s get a rundown of all the CP33’s features.

Kel-Tec CP33 Handgun Ergonomics

Starting at the business end, the Kel-Tec CP33 ships with a 5.5-inch, 1/2x28 threaded stainless steel barrel. I tested this barrel with several muzzle devices, and everything from flash-hiders and linear compensators to my favorite new rimfire suppressor (the Rugged Suppressors Mustang 22 from SilencerShop) fit and ran flawlessly.

Behind the muzzle, the CP33 includes a set of fiber-optic super-low-profile post and notch iron sights that are clearly designed to get out of the way of any mounted optics. This is because the entire top of the CP33 features a monolithic Picatinny rail. I found that if a shooter isn’t running a brace, then a pistol optic like a Holosun HE507C or Trijicon RMR on the lowest possible mount made for the most natural-feeling setup.

Under the front sight, the Kel-Tec CP33 features an M-Lok slotted dust cover that appears to be the perfect length to not fit any of the M-Lok rail segments I had on hand. So I needed to modify one by cutting off one of the alignment notches and only using a single mounting bolt. This is something I wouldn’t normally advise since it does compromise the mounting strength of the rail. But since the CP33 is only chambered in .22lr, I took that risk and it paid off handsomely. The Streamlight TLR-10 Flashlight I mounted on the handgun never budged, and its laser held zero after a few hundred rounds.

CP33 Safety
The CP33 features a thumb safety that is easily actuated without shifting the firing grip. IMG Jim Grant

Alternatively, a shooter could simply buy a super short rail segment or an accessory that directly mounts to the M-Lok slot.

But be advised, a hand-stop or angled grip are completely fine, but a vertical grip can get you in hot water with the ATF if you don’t have a tax stamp for the little polymer pistol.

Behind the dust cover, the CP33 features the iconic Kel-Tec molded grip pattern on its oblong grip. Despite the grip’s appearance, it’s actually fairly comfortable to hold, and it positions the shooter’s hands perfectly to toggle the ambidextrous safety lever behind and above it. But there’s one thing conspicuously absent between the grip and the trigger – a magazine release.

That’s because the engineers at Kel-Tec decided to depart from the gun’s overall very futuristic appearance and incorporate an old-school European-style heel release at the bottom of the grip. (Not unlike the one found on the Walther PPK.) I’m not normally a fan of this setup, but given that the CP33 isn’t a combat pistol, it doesn’t bother me.

Kel-Tec CP33 Pistol Grip
The Kel-Tec CP33 Pistol’s grip features the iconic Kel-Tec molded panels, and the magazine release is on the heel of the grip. IMG Jim Grant

Above the grip, the CP33 features an ambi bolt-release that some shooters have reported issues with. But the example I reviewed – which wasn’t a T&E from the factory, but a gun I bought at a local shop – never had an issue with the release whatsoever.

At the very back of the handgun is the charging latch that takes more than a few notes from both the AR-15 and the HK MP7 PDW. It’s non-reciprocating, which is awesome, but it is made of very thin steel with polymer handles at the rear. And to be honest, its construction doesn’t inspire a tremendous amount of confidence. And if that really bothers you, there’s a cottage industry of aftermarket parts makers who now offer more robust all-aluminum charging latches.

Performance

Now that you know everything about the gun and its features, let’s talk about how the gun actually ran.

After 1,000 rounds of various types of .22lr ammo, including a half dozen different varieties of standard and high-velocity 22LR ammunition, the Kel-Tec CP33 encountered around 30 malfunctions in my testing. Half of these were first-round failures to chamber either during the first 200 rounds fired through the gun, or after a hundred or so rounds fired suppressed. The former is because the gun needs a little break-in period, while the latter is 100% due to excess carbon build-up from running the gun suppressed.

NVG CP33
I even tested the Kel-Tec CP33 with my PVS-14 and PEQ-15, and it was glorious. IMG Jim Grant

On an interesting side note, the gun never malfunctioned on the first round when using the bolt release.

Accuracy was good bordering on great, with the Kel-Tec CP33 easily capable of hitting targets out to 100 yards with a reflex sight attached. Though I suspect the gun would be infinitely more capable with a low-powered magnified optic and a stabilizing brace attached. But as it comes, the CP33 makes short work of tin cans, squirrels, and clay pigeons out to 50 yards.

CP33 Action
Something about the Kel-Tec CP33’s design just makes it practically beg to be suppressed with a quality can like this Rugged Suppressors Mustang 22 from SilencerShop.com. IMG Jim Grant

Kel-Tec CP33 Space Gat Verdict

So, is the futuristic polymer pistol worth a buy? With an MSRP of $475 (and in my experience street prices are much lower), the Kel-Tec CP33 Pistol is a solid deal that, when babied a little bit, runs like a champ. Yes, the magazine can be problematic if not loaded properly, but with some practice the CP33 makes a solid plinking pistol that would also work well as a hiking gun or varmint pistol. Its looks might not appeal to everyone, but for those of us who dream of blasting cyborgs beneath neon signs in a rain-soaked Neo-Tokyo, the CP33 is pretty damn slick.


About Jim Grant

Jim is one of the elite editors for AmmoLand.com, who in addition to his mastery of prose, can wield a camera with expert finesse. He loves anything and everything guns but holds firearms from the Cold War in a special place in his heart.

When he’s not reviewing guns or shooting for fun and competition, Jim can be found hiking and hunting with his wife Kimberly, and their dog Peanut in the South Carolina low country.

Jim Grant

AmmoLand.com

Comic for October 13, 2021

https://assets.amuniversal.com/24447c80ff7f01397aa1005056a9545d


Dilbert Daily Strip

Comic for October 11, 2021

https://assets.amuniversal.com/1f02c2b0ff7f01397aa1005056a9545d


Dilbert Daily Strip

Comic for October 10, 2021

https://assets.amuniversal.com/4b9300d0f2400139769e005056a9545d


Dilbert Daily Strip

Led by founders who met at Microsoft, Chronosphere lands $200M, reaches unicorn status

https://cdn.geekwire.com/wp-content/uploads/2019/11/Chronosphere-Co-Founders-Martin-and-Rob-1260×945.jpeg

Chronosphere co-founders Martin Mao (left, CEO) and Rob Skillington (CTO). (Chronosphere Photo)

Chronosphere has reached unicorn status in less than three years.

The company this week announced a $200 million Series C round, propelling its valuation past $1 billion. It comes nine months after the startup raised a $43 million Series B round.

Founded in 2019 by former Uber and Microsoft engineers, Chronosphere offers “data observability” software that helps companies using cloud-native architecture monitor their data. Customers include DoorDash, Genius Sports, and Cudo. Its annual recurring revenue has grown 9x in 2021.

Chronosphere CEO Martin Mao and CTO Rob Skillington first met in the Seattle area at Microsoft, where they worked on migrating Office to the cloud-based Office 365 format.

They both later spent time at Uber on engineering teams. Uber couldn’t find any products to meet its growing data demands, so Mao and Skillington helped the company build one. The result was M3, Uber’s open-source production metrics system, which is capable of storing and querying billions of data points per second.

With Chronosphere, Mao and Skillington are building an end-to-end solution on top of M3 that helps companies both gather and analyze their data in the cloud with the help of visualization and analytics tools. The product works across multiple cloud platforms, including AWS and Azure.

Chronosphere recently decided to be remote-first. Its largest hub is in New York City, and there are a handful of employees in Seattle, including Mao. The company has 80 total employees and expects to add another 35 people this year.

General Atlantic led the Series C round. Other backers include Greylock Partners; Lux Capital; Addition; Founders Fund; Spark Capital; and Glynn Capital. Total funding to date is $255 million.

“Sitting at the intersection of the major trends transforming infrastructure software – the rise of open-source and the shift to containers – Chronosphere has quickly become a transformative player in observability,” Anton Levy, managing director at General Atlantic, said in a statement.

GeekWire

Ben Cook: PyTorch DataLoader Quick Start

PyTorch comes with powerful data loading capabilities out of the box. But with great power comes great responsibility and that makes data loading in PyTorch a fairly advanced topic.

One of the best ways to learn advanced topics is to start with the happy path. Then add complexity when you find out you need it. Let’s run through a quick start example.

What is a PyTorch DataLoader?

The PyTorch DataLoader class gives you an iterable over a Dataset. It’s useful because it can parallelize data loading and automatically shuffle and batch individual samples, all out of the box. This sets you up for a very simple training loop.

PyTorch Dataset

But to create a DataLoader, you have to start with a Dataset, the class responsible for actually reading samples into memory. When you’re implementing a DataLoader, the Dataset is where almost all of the interesting logic will go.

There are two styles of Dataset class: map-style and iterable-style. Map-style Datasets are more common and more straightforward, so we'll focus on them, but you can read more about iterable-style datasets in the docs.

To create a map-style Dataset class, you need to implement two methods: __getitem__() and __len__(). The __len__() method returns the total number of samples in the dataset and the __getitem__() method takes an index and returns the sample at that index.

PyTorch Dataset objects are very flexible — they can return any kind of tensor(s) you want. But supervised training datasets should usually return an input tensor and a label. For illustration purposes, let’s create a dataset where the input tensor is a 3×3 matrix with the index along the diagonal. The label will be the index.

It should look like this:

dataset[3]

# Expected result
# {'x': array([[3., 0., 0.],
#         [0., 3., 0.],
#         [0., 0., 3.]]),
#  'y': 3}

Remember, all we have to implement are __getitem__() and __len__():

from typing import Dict, Union

import numpy as np
import torch

class ToyDataset(torch.utils.data.Dataset):
    def __init__(self, size: int):
        self.size = size

    def __len__(self) -> int:
        return self.size

    def __getitem__(self, index: int) -> Dict[str, Union[int, np.ndarray]]:
        return dict(
            x=np.eye(3) * index,
            y=index,
        )

Very simple. We can instantiate the class and start accessing individual samples:

dataset = ToyDataset(10)
dataset[3]

# Expected result
# {'x': array([[3., 0., 0.],
#         [0., 3., 0.],
#         [0., 0., 3.]]),
#  'y': 3}

If you happen to be working with image data, __getitem__() may be a good place to put your TorchVision transforms.
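As a minimal sketch of that pattern: a plain callable stands in for a real TorchVision transform, and the torch.utils.data.Dataset base class is skipped so the snippet runs on its own (the scaling factor is made up):

```python
import numpy as np

class TransformedToyDataset:
    """Map-style dataset that applies an optional transform per sample."""
    def __init__(self, size, transform=None):
        self.size = size
        self.transform = transform

    def __len__(self):
        return self.size

    def __getitem__(self, index):
        x = np.eye(3) * index
        if self.transform is not None:
            # A TorchVision transform (or transforms.Compose) would slot in here.
            x = self.transform(x)
        return {"x": x, "y": index}

# Any callable works; TorchVision transforms follow the same convention.
dataset = TransformedToyDataset(10, transform=lambda arr: arr / 10.0)
print(dataset[3]["x"][0, 0])  # 0.3
```

Doing the transform inside __getitem__() means it runs per sample, in the DataLoader's worker processes, rather than on the whole dataset up front.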

At this point, a sample is a dict with "x" as a matrix with shape (3, 3) and "y" as a Python integer. But what we want are batches of data. "x" should be a PyTorch tensor with shape (batch_size, 3, 3) and "y" should be a tensor with shape batch_size. This is where DataLoader comes back in.

PyTorch DataLoader

To iterate through batches of samples, pass your Dataset object to a DataLoader:

torch.manual_seed(1234)

loader = torch.utils.data.DataLoader(
    dataset,
    batch_size=3,
    shuffle=True,
    num_workers=2,
)
for batch in loader:
    print(batch["x"].shape, batch["y"])

# Expected result
# torch.Size([3, 3, 3]) tensor([2, 1, 3])
# torch.Size([3, 3, 3]) tensor([6, 7, 9])
# torch.Size([3, 3, 3]) tensor([5, 4, 8])
# torch.Size([1, 3, 3]) tensor([0])

Notice a few things that are happening here:

  • The NumPy arrays and Python integers are both converted to PyTorch tensors.
  • Although we’re fetching individual samples in ToyDataset, the DataLoader is automatically batching them for us, with the batch size we request. This works even though the individual samples are in dict structures. This also works if you return tuples.
  • The samples are randomly shuffled. We maintain reproducibility by setting torch.manual_seed(1234).
  • The samples are read in parallel across worker processes. In fact, this code will fail if you run it in a Jupyter notebook. To get it to work, you need to put it underneath an if __name__ == "__main__": check in a Python script.

There’s one other thing that I’m not doing in this sample but you should be aware of. If you need to use your tensors on a GPU (and you probably do for non-trivial PyTorch problems), then you should set pin_memory=True in the DataLoader. This will speed things up by letting the DataLoader allocate space in page-locked memory. You can read more about it here.

Summary

To review: the interesting part of custom PyTorch data loaders is the Dataset class you implement. From there, you get lots of nice features to simplify your data loop. If you need something more advanced, like custom batching logic, check out the API docs. Happy training!
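For a taste of what custom batching looks like: DataLoader's collate_fn parameter accepts any function that turns a list of samples into one batch. Here is a minimal NumPy sketch of dict-collating logic, stacking into arrays instead of torch tensors so it runs standalone:

```python
import numpy as np

def collate_dicts(samples):
    """Stack a list of {'x': array, 'y': int} samples into one batch dict."""
    return {
        "x": np.stack([s["x"] for s in samples]),  # shape (batch, 3, 3)
        "y": np.array([s["y"] for s in samples]),  # shape (batch,)
    }

samples = [{"x": np.eye(3) * i, "y": i} for i in (2, 1, 3)]
batch = collate_dicts(samples)
print(batch["x"].shape, batch["y"])  # (3, 3, 3) [2 1 3]
```

In a real training script you would pass such a function as DataLoader(dataset, collate_fn=collate_dicts) and return torch tensors instead; the default collate does essentially this for dicts, tuples, and tensors.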

The post PyTorch DataLoader Quick Start appeared first on Sparrow Computing.

Planet Python

Visualizing Ammo Cost Trends Across Nine Popular Calibers

https://www.thefirearmblog.com/blog/wp-content/uploads/2021/10/ammo-cost-trends-180×180.png

It’s no secret that the ammunition market has been volatile (to say the least) since the Covid pandemic took hold, but some calibers seem to be easing off while others rise. Redditor Chainwaxologist owns a web-based hunting and fishing retail store, FoundryOutdoors.com, so to keep his ammo costs and supply competitive, he set out to […]

Read More …

The post Visualizing Ammo Cost Trends Across Nine Popular Calibers appeared first on The Firearm Blog.

The Firearm Blog