Go inside Sigma’s factory to see how lenses are made

https://2.img-dpreview.com/files/p/E~C213x0S3413x2560T1200x900~articles/5410838737/20230228-DSC01945_-_shipping_and_receiving.jpeg

Introduction

If you’ve ever bought a new lens, you know the joy of removing a beautiful, pristine optic from its box and attaching it to your camera for the first time. But have you ever wondered what it takes to design and build that lens? During a recent trip to Japan, we had the opportunity to go behind the scenes at Sigma’s lens factory in the city of Aizu to answer that question, and we’re going to walk you through how it’s done, step by step.

Most photographers are familiar with Sigma, but maybe not with its unique history. Sigma is a family-owned business founded in 1961 by Michihiro Yamaki. Mr. Yamaki was an engineer at a small optical company that made binoculars, cameras and video lenses. When the company went bankrupt, some of its suppliers, needing new clients, approached Mr. Yamaki about starting a new business, and Sigma was born.

Today, Sigma has over 1,700 employees, nine subsidiaries in eight countries, and annual sales of 42 billion yen (about $322 million).

Sigma’s business today

As Sigma’s current CEO, Kazuto Yamaki, explained in our recent interview, Sigma has a business philosophy of ‘small office, big factory.’ It has a minimal administrative, sales and marketing staff and prioritizes investment in engineering and manufacturing. This explains why about 75% of the employees at the company’s headquarters in Kawasaki, Japan, are engineers.

Sigma opened its current factory in Aizu, about 300km north of Tokyo in Fukushima Prefecture, in 1974, and it’s home to about 1,500 of Sigma’s employees – the majority of its workforce. The proximity means that Sigma teams can quickly meet in person when needed to resolve engineering and manufacturing challenges. This is Sigma’s only factory, and all of its products are made here.

Above: Sigma’s factory in Aizu, Japan.

Sigma’s Aizu factory

Sigma’s factory covers almost 72,000 square meters of floor space (approximately 775,000 square feet) and produces 80,000 lenses and 2,000 cameras annually. It’s a vertically integrated factory, meaning that almost every aspect of manufacturing – including production of the individual parts that make up each lens, right down to the screws – happens here.

From the company’s early days, Michihiro Yamaki believed that to make a good product, working with local people and businesses was essential. That approach continues to this day; all of Sigma’s suppliers are located in the northern part of Japan. Essentially, Sigma aims to do everything by itself and with local partners, an approach that paid off during the global pandemic. Unlike companies with complex global supply chains, Sigma was able to keep its factory in operation during that time.

With that background, let’s dive into how lenses are made.

Above: Sigma’s Aizu factory in 1974 and today, with Mt. Bandai behind. (Image courtesy of Sigma)

Lens testing room

Since optics are the core of any lens, we’ll start with glass. Before diving into the manufacturing process, let’s talk about how Sigma establishes performance metrics for every lens it manufactures.

Each time a new lens is designed, a high-quality master copy is created and used for benchmark testing, establishing a baseline performance spec for that design. This baseline becomes the reference point for all Modulation Transfer Function (MTF) machines used throughout the lens manufacturing process, ensuring that each lens meets the design specification for resolution. Sigma designs and builds its own MTF machines in-house.

Testing of the master lens takes place on a Trioptics measurement machine, one of only a few of its type in Japan. It can measure any of Sigma’s lenses at distances from the minimum focusing distance to infinity and uses collimators up to 1000mm.

Glass: lens blanks

Most lens elements start as lens blanks, glass disks with a slight curve. Before any grinding occurs, lens blanks have an opaque, white appearance. Sigma uses lens blanks from Hoya to manufacture its lenses.

Glass grinding

The first step in the lens manufacturing process is to roughly grind out the curve of the lens. Each lens blank is attached to a plunger that guides it into a machine where the glass is ground to the correct curve for the lens. Since this part of the process is intended only to create the right shape for the lens, it still appears somewhat opaque when finished.

A second, more refined precision grinding step is then performed, which gives the lens its clear, smooth finish.

There are about 330 machines used in the glass manufacturing process at Sigma’s Aizu factory, and technicians check them with gauges every few minutes to ensure that each curve is correct.

Lens polishing

After the lenses are ground, the third and final step is polishing. Lenses are set into a machine on polyurethane pads mounted in a mold that matches the final shape, or curve, of the lens. These pads polish the lens using a special polishing paste, typically cerium or zirconium oxide. The process usually takes two to ten minutes, depending on the size and type of the lens.

Glass molding

Not all glass elements are ground. Aspherical optics are manufactured through glass molding using one of two processes, depending on their size. Smaller elements begin life as what looks like a bulbous glob of clear optical glass, while larger elements begin as pre-formed glass units.

In both cases, these pieces of glass are put into molds and pushed through a machine that presses them into their final shape using high pressure and heat. Sigma currently manufactures aspherical elements up to 84mm in diameter. Sigma also makes its own molds for manufacturing aspherical lenses, part of its philosophy of building its own tools in-house to maintain quality control.

Aspherical lens manufacturing is one of the better-guarded parts of the lens manufacturing process and something that makes lenses unique, so we were asked to refrain from taking photos in this area. Instead, I’ve included a picture above of one of my favorite Sigma lenses, the 14mm F1.8 Art, which features a front aspherical element that’s 80mm in diameter.

Lens centering

After glass elements are formed and coated, they receive a final grinding around the edges to make sure the diameter and thickness of the edges of the elements are within specifications and will mount correctly inside the lens. This also ensures that the glass elements will be optically centered inside the lens housing.

Lens coatings

Once lenses are polished or molded, it’s time to apply Sigma’s Super Multi-Layer coatings, which suppress flare and ghosting. Before applying coatings, each lens is visually inspected for dust. The lens is then loaded onto a ‘planet,’ a dome-shaped device with inserts for specific lens elements.

The planets are then loaded into machines in which special chemicals have been evaporated into a chamber. The planet rotates inside the device, evenly spreading the vaporized chemicals onto the lens elements, after which the coatings are cured with UV light.

Lens caulking, joining and bonding

Once the individual optical elements are manufactured, those that get grouped in a lens go through a process called caulking and joining. Each element is placed into a plastic frame, and high heat and pressure are used to fix them into place, creating a single unit made of multiple lens elements.

A separate process called lens bonding (photo above) is used when two to three elements must be bonded directly with no space in between. The two lenses are joined with a special adhesive, and a machine ensures that both lenses are optically centered. The bonded lenses are then exposed to UV light to cure the adhesive.

Metal processing

Now that we’ve covered glass, let’s move on to the rest of the manufacturing process, starting with metals. A lens has many metal parts, so metal processing covers a large part of the factory floor, producing components out of materials like steel, aluminum and brass.

The automated blue machines visible in the photo above carry out the process of cutting out, shaping and drilling holes in metal for a perfect fit. The process is quick, often taking only a few minutes per part. The factory runs about 160 of the machines you see in the photo.

Metal processing

Although numerous parts are produced in this factory area, lens barrels and bayonet mounts are the most recognizable. This photo shows freshly milled inner rotation barrels for lenses before any surface treatment has been applied.

Recycling

The machining process that turns metal blocks into lens components produces metal waste, like the shavings in this photo. Sigma captures this material and works with a local recycling facility to ensure the materials are repurposed rather than landing in a landfill.

Metal pressing

Some metal parts are stamped out by machines rather than being machined individually. In this case, a strip of metal is fed through a pressing machine from what looks like a giant roll of metal tape.

Metal pressing

And here’s the final product you saw being stamped out above: metal plates that help control the movement of the aperture blades in a lens.

Magnesium processing

For safety reasons, magnesium parts are machined in a separate building that isn’t connected to the main factory. Magnesium can be flammable, mainly when it’s in a fine shaving or powder form – precisely the forms you tend to produce when milling parts out of metal. As a result, this facility has extra thick concrete walls that offer protection in the event of an accident.

Magnesium is used for many components because it’s strong and durable yet lighter than aluminum. This allows parts to be smaller and lighter than if they were made from aluminum, but there’s a tradeoff: magnesium is more costly. As a result, it’s not used for everything, but only for parts where durability and light weight matter.

Magnesium components start as die-cast parts, one of the few items Sigma doesn’t produce in its factory. There are only a small number of die-cast suppliers worldwide, and Sigma sources these parts exclusively from suppliers in Japan. This photo shows a die-cast lens collar on the left and a fully machined version on the right.

Magnesium machining

These automated machines cut out, shape and drill holes in the die-cast part with high precision. After milling, each piece goes through a cleaning process to remove oils left over from production, then receives a corrosion-resistant coating to protect the metal. Once milling is complete, each part is measured by a machine to make sure it’s within the specified tolerances for the part. In this photo, you can see a tripod collar mounted to a machine.

At any given time, there are about 20 machines milling parts from die-cast magnesium. For safety, each machine is equipped with specialized fire extinguishing equipment explicitly designed for magnesium fires.

New tool and mold creation

Sigma’s philosophy is to control as much of the manufacturing process as possible in its factory, as that allows it to better control the quality of everything it produces. This extends to the molds and tools used to manufacture its products.

This area isn’t automated. Prototypes and parts are handmade, as seen in the photo above. Hand-made parts become blanks and are used to create the injection molds used to manufacture many components. This is also where Sigma builds the specialized tools required to build a new product, typically with a focus on making the assembly process more efficient.

Injection molding

Molds for plastic or pressed parts are made using a process called electrical discharge machining (EDM). This thermal process removes material by discharging sparks across the gap between an electrode and the part being manufactured.

Once on the assembly line, viscous plastic is injected into the mold through its center. A separate series of tubes delivers coolant to the mold, causing the plastic parts to harden. Around 40 injection molding machines are making parts at any given time, and molds are rotated in and out of production depending on which products are being produced.

Surface mounting

Surface mounting refers to mounting electronics onto circuit boards for lenses. As with many things, Sigma manufactures circuit boards in-house. Baseboards are fed into one end of a machine where soldering paste is applied and heated to spread out the paste evenly. Components are then fed into the machine from rolls that look like tape and stamped onto the board.

Surface mounting

A 3D optical inspection of each board confirms that all the parts are in the correct positions, followed by an X-ray check. There are about 20 machines in the factory building circuit boards.

The board in this photo is a two-sided circuit board. One side has been printed and is now ready to feed back through the machine for surface mounting on the opposite side.

Painting, printing and surface treatment

Once metal parts like lens barrels have been milled, and any necessary surface coatings applied, it’s time for a paint job. Each part is mounted on a metal jig which spins in circles as the paint is sprayed on, ensuring an even coat of paint. These painted parts are then dried in ovens.

Some materials, like those made of aluminum, may receive a black anodized coating instead.

Printing

Most of the reference markings on a lens, like scale windows and apertures, as well as labels on accessories like lens hoods, are printed or painted on. A technician applies the ‘Sigma’ logo to part of a lens barrel in this photo.

Electroplating

Electroplating is used to apply a chrome surface to metal parts, which makes them more durable. The most recognizable parts to go through the electroplating process are the brass lens mounts for each lens, which are chrome-plated here. Some smaller metal parts are plated here as well.

Final assembly and quality assurance

Once all lens elements are made and all parts have been manufactured, surface treated and painted, they meet at the final assembly line. In this clean room environment, each line is set up based on what models are in production on a given day. A single assembly line extends from the first set of parts to the final build.

Each lens’ alignment is performed using Sigma’s in-house designed and built MTF machines to adjust and confirm that they meet MTF specs. Although total assembly time varies by product, it can take as little as 30 minutes to assemble one lens, but it can certainly extend to longer periods for complex products.

After assembly, lenses are sent to the quality assurance division, which checks them using an MTF measuring machine. Additionally, they are inspected for dirt, surface scratches and other anomalies, and to confirm that zoom mechanisms, apertures and electronic contacts all work correctly. Some products may go through a resolution test at this stage as well.

Packing and shipping

Products arrive in the packing and shipping area without serial numbers. Until a product receives a serial number, it’s like a person without an identity. Once a serial number is assigned, the lens learns where it will be shipped.

Finished products and accessories are matched together and boxed in retail packaging, then loaded into large cardboard shipping boxes based on their final destination.

Finished product storage

The final step before a product leaves the factory is to be placed into the finished product storage area. With over 60 lens models in production across multiple lens mounts, there are a lot of lenses in this room. Products don’t sit here long – Sigma’s factory is producing at capacity, and there’s a constant need to clear this space to make room for new products coming off the assembly line.

Trucks arrive in the evening to ship boxes off to Narita airport in Tokyo, where they will be sent to distributors or subsidiaries worldwide.

I’m pretty sure the Ark of the Covenant is hiding in here somewhere.

Customer support and service

In addition to manufacturing, the Aizu factory serves as a center for Sigma’s customer support services. Most items received for repair here are from Japan (most countries will have their own service centers). However, products from other regions may be sent here if they require specialized repair.

Once a product is checked in, it’s handed over to one of Sigma’s ace repair technicians, who will restore it to Sigma’s original specifications, verifying the work in the projection room.

Projection room

The projection room, located next to the customer support and service area, is used to test products before and after repair. On the opposite side of the room, there’s a reverse projector for testing Sigma’s cinema lenses.

Sigma’s standard practice is to test lenses on both the resolution chart and MTF machine to ensure that they meet Sigma’s product specs before returning them to customers.

Additionally, depending on the nature of the repair – for example, to check for flare – technicians will even go outside to take real-world before-and-after photos of a lens.

Sigma museum

Of course, no visit to Sigma would be complete without a trip to the Sigma museum, where it’s possible to see cameras and lenses past and present. There’s a lot to see, including modern lenses, classic lenses, SA-mount lenses and even cameras, like Sigma’s SD10 DSLR or compact Merrill models.

Finding some of the lenses you used early in your photography career is a fun, nostalgic trip down memory lane.

Articles: Digital Photography Review (dpreview.com)

Best disk space analyzer apps for monitoring your Mac’s storage in macOS

https://photos5.appleinsider.com/gallery/53110-106413-hard-drive-illustration-xl.jpg

Modern Mac storage uses chips, but we still think of spinning disks when it comes to drives. [Unsplash/Patrick Lindenberg]




If you’re feeling the pinch of limited storage capacity on your Mac, these disk space analyzer apps could help you see how it has been consumed, and potentially free some space up too.

There are numerous macOS apps that allow you to peek into the contents of your Mac’s storage devices. Disk space analyzer apps let you inspect the storage devices connected to your Mac, and take a look at what they contain.

Some of these utilities are simple viewers, which display drive contents as pie charts, graphics, or maps. Others allow you to clean and move files off your devices when they’re no longer needed.

There are several disk scanner utilities for macOS that can give you quick insight into your drives – far too many to cover here. The most popular disk viewers for macOS include Disk Xray, DaisyDisk, GrandPerspective, OmniDiskSweeper, Disk Drill, Disk Diag, Disk Space Analyzer, and Disk Analyzer Pro.

Some also provide cleanup/removal abilities.

Only two of the above apps don’t yet have native Apple Silicon binary support: Disk Diag and Disk Analyzer Pro. However, note that in many cases Intel apps running in Apple’s Rosetta 2 emulation layer on M1 and M2 Macs have better performance than if they run natively on Intel Macs.

As usual, you can check for native Apple Silicon versions of any app by selecting it in the Finder and pressing Command-I (Get Info) on the keyboard.
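Where Get Info is ambiguous, the check can also be approximated from Terminal. The following is a sketch rather than an Apple-documented procedure: it assumes the conventional bundle layout (AppName.app/Contents/MacOS/AppName), and the Safari path is only an example.

```shell
#!/bin/sh
# Sketch: print the CPU architectures of a macOS app's main executable.
# Assumes the conventional bundle layout; the default path is only an example.
app_archs() {
  app="$1"
  name=$(basename "$app" .app)
  bin="$app/Contents/MacOS/$name"
  if command -v lipo >/dev/null 2>&1; then
    lipo -archs "$bin"   # e.g. "x86_64 arm64" for a universal binary
  else
    file "$bin"          # fallback: 'file' also names Mach-O architectures
  fi
}
app_archs "${1:-/Applications/Safari.app}"
```

A universal binary will typically report both x86_64 and arm64, while an Intel-only app reports just x86_64.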

Disk Xray

Disk Xray by Naarak Studio is a simple disk space analyzer and cleaner which can also find duplicate files. The streamlined interface consists of a Scanner window with buttons for scanning, duplicates, and cleanup.

To scan, you first click the small folder button at the top of the window to select a folder or an entire disk volume to scan, then click the Play button. Disk Xray is incredibly fast, scanning large volumes in just a second or two.

Once the scan completes, volume or folder contents are displayed at the bottom of the window, broken down by total, general file types, and subfolders.

Displayed data shows the size of each item, and how much of the total volume space it occupies by percentage. For folders, the number of subitems is also displayed.

Clicking one of the small buttons on the left allows you to delete, open, inspect, and get info on each item. Clicking Delete provides a warning, and if you confirm it, the item or items are deleted from the volume.

The only downside to Disk Xray is that you must rescan for each of the three options: scanning, duplicates, and cleanup. But this is a minor annoyance and the app’s speed more than makes up for the inconvenience.

Disk Xray costs $15, with a 14-day free trial available to try it out.

DaisyDisk

DaisyDisk by Software Ambience Corp is one of the oldest and best disk space analyzers for macOS.

On startup, a list of attached volumes is displayed in a single window. Clicking “Scan” starts scanning a volume, and when the scan is done, a detailed graph showing disk space usage is displayed.

On the right is a list of folders on the volume, and across the top, the current folder’s path on disk. Clicking an item on the right dives into that folder, updating the graph with fluid animation.

You can select any item on the right and drag it to the Collector at the bottom, removing it from the list.

Once you’ve collected all items you wish to remove, clicking the Delete button starts a countdown – giving you time to cancel if you wish. If you don’t cancel, the collected items are deleted from the volume.

This tool is inexpensive and a joy to use – a must-have for your desktop.

DaisyDisk costs $10, but is available with a free trial.

GrandPerspective

GrandPerspective from Eriban Software is a unique and simple volume treemap generator.

The generator shows every file on a volume in a single window containing blocks representing each file or folder. File sizes are indicated by the size of each block in the diagram – with larger blocks indicating larger items.

Using the toolbar along the top, or by right-clicking, you can zoom in and out, delete, open, Quick Look, and reveal items’ locations in the Finder. You can also copy an item’s full path on the disk.

There’s also a Get Info window that allows you to show .pkg contents in the map. The same window lets you change the map’s colors, though some of the palettes are a bit garish.

OmniDiskSweeper

OmniDiskSweeper from The Omni Group is almost as old as the Mac itself. It’s a disk space analyzer that displays a volume’s items in descending size order for easy removal of large files and folders.

On launch, OmniDiskSweeper displays a simple list of attached volumes, and disk space info for each. Selecting a volume and clicking “Sweep Selected Drive” displays items on that volume in a NeXT-style file browser window.

You can select and view subfolders, including the contents of macOS app and .pkg bundles. You can delete any part of any folder or bundle on the disk by selecting items and clicking the Trash button.

OmniDiskSweeper may seem a bit simplistic, but keep in mind it’s free, and it was created back when the Mac and its OS and filesystem were much smaller and simpler.

The Omni Group has probably kept it around for historical reasons. There are also older versions available for all versions of macOS back to 10.4 Tiger.

OmniDiskSweeper is free to download, though it’s not the only software the developer produces.

They also make a mean Gantt chart project management app called OmniPlan ($199, $399 Pro, $20/mo subscription, 14-day free trial).

Disk Drill

Disk Drill by CleverFiles for macOS, iOS, and Android is a disk space analyzer that allows you to scan devices and volumes, and view and remove files and folders. You can also search for deleted files and folders, attempt recovery of lost partitions, and use a host of other features.

Because Apple hasn’t fully documented APFS, Disk Drill can’t run all of its features on APFS volumes, but it supports macOS Extended (HFS+), as well as Windows FAT32 and NTFS volume formats.

With Disk Drill you can scan both devices and volumes, including RAID devices. There are also S.M.A.R.T monitoring tools, data protection, bit-level backups, trash recovery, a cleaner, duplicate finder, data shredder, free space eraser, and macOS boot drive creator.

The UI is simple enough – a window displays each connected device and all its partitions. You can run most operations at both the device and volume level, and there are quick and deep scan modes that trade off scan speed for completeness.

For a limited time, if you buy the Mac version of Disk Drill, you get the Windows version free.

Disk Diag

Disk Diag from Rocky Sand Studios is a disk space analyzer and cleaner with features for finding large files; scanning and removing unused system, user, and developer files; removing duplicates; and uninstalling unused applications.

There’s a simple OneClick mode and more advanced modes that allow you to view and remove individual files, folders, and apps.

There’s also a feature to scan for unused .dmg disk image files and an overall summary dashboard view. The dashboard view also displays current memory and CPU usage.
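As an illustration of what a scan like the .dmg search involves, here’s a minimal shell sketch – the location and the 100MB threshold are my own assumptions, not Disk Diag’s actual logic:

```shell
#!/bin/sh
# Sketch: list disk images over a size threshold, the kind of scan a cleaner
# app performs. The location and cutoff below are illustrative assumptions.
find_big_dmgs() {
  # $1 = directory to scan, $2 = find(1) size test, e.g. +100M
  find "$1" -type f -name '*.dmg' -size "$2" -print 2>/dev/null
}
if [ -d "$HOME/Downloads" ]; then
  find_big_dmgs "$HOME/Downloads" +100M
fi
```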

Disk Diag also adds a macOS menubar option for quick access, which you can disable.

Disk Space Analyzer and Funter

Disk Space Analyzer from Nektony is a full-featured and aptly named disk space analyzer that also uses sunburst graphs similar to DaisyDisk to display disk storage and contents.

Features include scanning, display, large and unused file search and removal, and copying/moving features.

Nektony also offers a simple macOS menubar product called Funter (free), which allows you to view and clean up both your drives and your Mac’s memory.

Disk Space Analyzer costs $5 per month or $10 per year, and is also offered with a free trial.

Disk Analyzer Pro

Disk Analyzer Pro from Systweak Software is a full-featured disk space analyzer and scanner with a dashboard interface. A simple pie chart with a legend shows disk usage and occupancy by file type/size.

It allows you to search a volume for files and folders by size and type, and to move, delete, and compress files with the single click of a toolbar button.

You can also instantly view all files of a given type in a new window simply by double-clicking a category in the pie chart legend – a very cool feature.

Additional features include scanning/viewing by subfolders, and the ability to view both the top 100 files by size and date.

Disk Analyzer Pro costs $10 from the Mac App Store.

There’s also a Windows version available.

Built-in

An easy way to view disk usage in macOS is to select “About This Mac” from the Apple menu. This opens an info window for your Mac.

If you then click the “More Info” button, you’ll be taken to the System Settings->General->About pane, which has a “Storage” section at the bottom.

Clicking the “Storage Settings” button takes you to an overview pane that shows disk usage for both the internal drive and each category of files stored on your Mac.

If you click the “All Volumes” button, a list of all attached disk volumes, with their capacities and usage graphs, is displayed.
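The same high-level numbers are also available from Terminal using the standard Unix tools that ship with macOS. A quick sketch (nothing Apple-specific is assumed here):

```shell
#!/bin/sh
# Sketch: report a volume's capacity and its largest top-level items using
# standard Unix tools (df, du) that ship with macOS.
report_usage() {
  dir="$1"
  df -hP "$dir"                                     # capacity and free space
  du -sk "$dir"/* 2>/dev/null | sort -rn | head -5  # five largest items, in KB
}
report_usage "$HOME"
```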

Using any of these apps will help you monitor your storage devices, better understand what’s on them, and make it easier to increase free space by removing unwanted and unused files and apps from your drives.

However, depending on your preferences, you may want to try out a third-party disk space analyzer that can provide more granular data for you to use.

AppleInsider News

The Dungeons & Dragons Movie’s Final Trailer Is Very, Very Weird

https://i.kinja-img.com/gawker-media/image/upload/c_fill,f_auto,fl_progressive,g_center,h_675,pg_1,q_80,w_1200/ce18459b349fd7d58666159ba9b2d0c4.jpg

It’s a mere eight days before Dungeons & Dragons: Honor Among Thieves hits theaters, a movie that by all accounts is quite fun if not particularly consequential. Seriously, I haven’t heard anybody bad-mouth the film since its first trailer was released back in July of 2022. So why does this final trailer seem so convinced that everyone thinks the movie is terrible?

The trailer is so bizarre that the choice to use /Film’s quote about how it contains “The most Chris Pine a Chris Pine performance has been in a long time” is not the weirdest thing about it:

Dungeons & Dragons: Honor Among Thieves | Final Trailer (2023 Movie)

The trailer begins with “Forget everything you think you know… everyone is raving about Dungeons & Dragons!” Charitably, it reads like the announcer is certain everyone thinks the movie is going to be a huge pile of crap, but don’t listen to the haters! Except… there aren’t any? Seriously, the film’s gotten good critical reactions and looks—and has always looked—like a lot of fun! There’s a giant list of publications that have given the movie positive reviews right in the trailer! It’s weirdly defensive, trying to fight a problem that doesn’t seem to exist.

With that in mind, it sounds more like the announcer wants you to have some sort of amnesia before you go to watch the film when it premieres on March 31. “Forget everything you know! …also, unrelatedly, people seem to like Dungeons & Dragons: Honor Among Thieves. It’s Chris Pine-y as hell, guys. You like Chris Pine, right? Well, forget that you like Chris Pine, too! I demand it!”



Gizmodo

Comparisons of Proxies for MySQL

https://www.percona.com/blog/wp-content/uploads/2023/03/lucas.speyer_an_underwater_high_tech_computer_server_a_dolpin_i_9337e5c5-e3c5-41dd-b0b1-e6504186488b-150x150.png

With a special focus on Percona Operator for MySQL

Overview

HAProxy, ProxySQL, MySQL Router (AKA MySQL Proxy): in the last few years, I have been asked many times which proxy to use and in what scenario. When designing an architecture, many components need to be considered before deciding on the best solution.

When deciding what to pick, there are many things to consider, like where the proxy needs to sit, whether it “just” needs to redirect connections, whether more features need to be built in, like caching and filtering, or whether it needs to integrate with some MySQL-embedded automation.

Given that, there has never been a single straight answer; instead, an analysis needs to be done. Only after gaining a better understanding of the environment, the needs, and the evolution the platform must achieve is it possible to decide which will be the better choice.

However, recently we have seen an increase in the usage of MySQL on Kubernetes, especially with the adoption of Percona Operator for MySQL. In this case, we have a quite well-defined scenario that can resemble the image below:

MySQL on Kubernetes

In this scenario, the proxies must sit inside Pods, balancing the incoming traffic from the Service LoadBalancer and connecting it with the active data nodes.

Their role is simply to ensure that any incoming connection is redirected to nodes that can serve it, which includes separating Read/Write from Read Only traffic – a separation that can be achieved at the service level, either with automatic recognition or with two separate entry points.
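The two-entry-point variant can be pictured as a proxy exposing separate read/write and read-only listeners. Here is a minimal HAProxy-style sketch – the ports, backend names, and node addresses are illustrative assumptions, not taken from the Operator itself:

```text
frontend mysql_rw
    mode tcp
    bind *:3306
    default_backend primary

frontend mysql_ro
    mode tcp
    bind *:3307
    default_backend replicas

backend primary
    mode tcp
    server node1 10.0.0.1:3306 check

backend replicas
    mode tcp
    balance leastconn
    server node2 10.0.0.2:3306 check
    server node3 10.0.0.3:3306 check
```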

In this scenario, it is also crucial to use resources efficiently and to scale with frugality. In this context, features like filtering, firewalling, or caching are redundant and may consume resources that could be allocated to scaling. These features also work better outside the K8s/Operator cluster: the closer they are located to the application, the better they will serve it.

About that, we must always remember the concept that each K8s/Operator cluster needs to be seen as a single service, not as a real cluster. In short, each cluster is, in reality, a single database with high availability and other functionalities built in.

Anyhow, we are here to talk about proxies. With that clear mandate in mind, we need to identify which product allows our K8s/Operator solution to:

  • Scale the number of incoming connections to the maximum
  • Serve requests with the highest efficiency
  • Consume as few resources as possible

The environment

To identify the above points, I have simulated a possible K8s/Operator environment, creating:

  • One powerful application node, where I run sysbench read-only tests, scaling from 2 to 4,096 threads (type c5.4xlarge)
  • Three mid-size data nodes with several gigabytes of data, running MySQL with Group Replication (type m5.xlarge)
  • One proxy node running on a resource-limited box (type t2.micro)

The tests

The test cases are very simple. The first defines the baseline, identifying the moment we hit the first level of saturation due to the number of connections. In this case, we increase the number of connections while keeping the number of operations low.

The second test will define how well the increasing load is served inside the previously identified range. 

For documentation, the sysbench commands are:

Test1

sysbench ./src/lua/windmills/oltp_read.lua  --db-driver=mysql --tables=200 --table_size=1000000 
 --rand-type=zipfian --rand-zipfian-exp=0 --skip_trx=true  --report-interval=1 --mysql-ignore-errors=all 
--mysql_storage_engine=innodb --auto_inc=off --histogram  --stats_format=csv --db-ps-mode=disable --point-selects=50 
--reconnect=10 --range-selects=true --rate=100 --threads=<#Threads from 2 to 4096> --time=1200 run

Test2

sysbench ./src/lua/windmills/oltp_read.lua  --mysql-host=<host> --mysql-port=<port> --mysql-user=<user> 
--mysql-password=<pw> --mysql-db=<schema> --db-driver=mysql --tables=200 --table_size=1000000  --rand-type=zipfian 
--rand-zipfian-exp=0 --skip_trx=true  --report-interval=1 --mysql-ignore-errors=all --mysql_storage_engine=innodb 
--auto_inc=off --histogram --table_name=<tablename>  --stats_format=csv --db-ps-mode=disable --point-selects=50 
--reconnect=10 --range-selects=true --threads=<#Threads from 2 to 4096> --time=1200 run

Results

Test 1

As indicated above, I was looking to identify when the first proxy would reach a point where the load was no longer manageable. The load here is all about creating and serving connections, while the number of operations is capped at 100.

As you can see, and as I expected, the three proxies behaved more or less the same, serving the same number of operations (they were capped, so why not), until they didn’t.

MySQL Router, after 2048 connections, could not serve anything more.

NOTE: MySQL Router actually stopped working at 1024 threads, but with version 8.0.32 I was able to enable the connection_sharing feature, which allows it to go a bit further.
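For reference, connection sharing is enabled per routing section in mysqlrouter.conf; a minimal sketch follows (the section name, port, and cluster name are hypothetical, and the two connection_sharing options require MySQL Router 8.0.32+):

```ini
[routing:read_write]
bind_address = 0.0.0.0
bind_port = 6446
destinations = metadata-cache://mycluster/?role=PRIMARY
routing_strategy = first-available
protocol = classic
# Pool server-side connections so more client connections can be served
connection_sharing = 1
connection_sharing_delay = 1.0
```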

Let us also take a look at the latency:

latency threads

Here the situation starts to get a little more complicated. MySQL Router has the highest latency no matter what. HAProxy and ProxySQL, however, show interesting behavior: HAProxy performs better with a low number of connections, while ProxySQL performs better when a high number of connections is in place.

This is due to multiplexing and the very efficient way ProxySQL deals with high load.
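For context, multiplexing can be observed from ProxySQL’s admin interface (typically port 6032); a sketch of the kind of checks involved:

```sql
-- Multiplexing is on by default; confirm it
SELECT * FROM global_variables WHERE variable_name = 'mysql-multiplexing';

-- Compare backend connections in use vs. client connections:
-- with multiplexing, ConnUsed stays well below the client connection count
SELECT hostgroup, srv_host, ConnUsed, ConnFree, ConnOK
FROM stats_mysql_connection_pool;
```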

Everything has a cost:

HAProxy is definitely using fewer user CPU resources than ProxySQL or MySQL Router …

HAProxy

…we can also see that HAProxy barely reaches, on average, a CPU load of 1.5, while ProxySQL sits at 2.50 and MySQL Router around 2.

To be honest, I was expecting something like this, given ProxySQL’s need to handle the connections on top of the basic routing. MySQL Router, instead, was a surprise: why does it have a higher load?

Brief summary

This test highlights that HAProxy and ProxySQL can reach a level of connections higher than the slowest runner in the game (MySQL Router). It is also clear that a high number of connections is better served by ProxySQL, but at the cost of more resources.

Test 2

When the going gets tough, the tough get going

Let’s remove the --rate limitation and see what happens.

mysql events

The scenario changes drastically under load. We can see how HAProxy keeps serving the connections and allows more operations to execute for the whole test. ProxySQL comes immediately after and behaves quite well up to 128 threads, then it just collapses.

MySQL Router never takes off; it always stays below 1k reads/second, while HAProxy served 8.2k and ProxySQL 6.6k.

mysql latency

Looking at the latency, we can see that HAProxy’s increased gradually, as expected, while ProxySQL’s and MySQL Router’s shot up from 256 threads on.

Note that both ProxySQL and MySQL Router could not complete the tests with 4096 threads.

ProxySQL and MySQL Router

Why? HAProxy always stays below 50% CPU no matter how many threads/connections are added, scaling the load very efficiently. MySQL Router reaches the saturation point almost immediately, affected by both the number of threads/connections and the number of operations. That was unexpected, given that MySQL Router has no Level 7 capability.

Finally, ProxySQL, which worked fine up to a certain limit, reached its saturation point and could not serve the load. I say load because ProxySQL is a Level 7 proxy and is aware of the content of the load; given that, on top of multiplexing, additional resource consumption was expected.

proxysql usage

Here we simply have clear confirmation of what was said above: MySQL Router reaches 100% CPU utilization with just 16 threads, while ProxySQL gets there much later, at 256 threads.

Brief summary

HAProxy comes out as the champion of this test; there is no doubt that it can scale with the increasing number of connections without being significantly affected by the load generated by the requests. Its lower resource consumption also suggests room for even more scaling.

ProxySQL was penalized by the limited resources, but that was the game: we had to get the most out of the little available. This test indicates that ProxySQL is not the optimal choice inside the Operator; it is the wrong choice when low resource usage and scalability are a must.

MySQL Router was never in the game. Unless it undergoes serious refactoring, MySQL Router offers only very limited scalability; as such, the only way to adopt it is to run many instances at the application-node level. Using it close to the data nodes in a centralized position is a mistake.

Conclusions

I started by showing an image of how the MySQL service is organized, and I want to close by showing the variation that, for me, should be considered the default approach:

MySQL service is organized

This highlights that we must always choose the right tool for the job. 

The proxy in architectures involving MySQL/Percona Server for MySQL/Percona XtraDB Cluster is a crucial element for the scalability of the cluster, whether using K8s or not. It is important to choose the one that serves us best, and sometimes that can be ProxySQL over HAProxy.

However, when talking about K8s and Operators, we must recognize the need to optimize resource usage for the specific service. In that context there is no discussion: HAProxy is the best solution and the one we should go with.

My final observation is about MySQL Router (aka MySQL Proxy). 

Unless the product undergoes a significant refactoring, at the moment it is not even close to what the other two can do. The tests done so far suggest it needs a complete reshaping, starting with identifying why it is affected so much more by the load coming from the queries than by the load coming from the connections.

Great MySQL to everyone. 

References

Percona Database Performance Blog

How Has the Hunting Rifle Evolved Over the Last 300 Years?

https://www.alloutdoor.com/wp-content/uploads/2023/03/How-Has-the-Hunting-Rifle-Evolved-Over-the-Last-300-Years-Img-1.jpg

Modern humans have been around for thousands of years, so guns are a relatively new tool. The first firearm goes back to around the 10th century in China, where fire lances used bamboo and gunpowder to launch spears. Now, there are numerous types of guns for various recreational uses, with hunting among the top activities. Rifles have been the gun of choice for hunters for nearly 300 years. How did the modern hunting rifle make it here?

1. Pennsylvania Rifle

Nowadays, the standard for hunting rifles centers around models like the current hunting rifle from Christensen Arms. But to understand rifles in 2023, you’ll have to go back to the early 1700s.

North America was growing with European settlers from England, France, Spain and elsewhere. However, it was German designs that inspired the first American rifle: the Pennsylvania rifle. This firearm was an upgrade over the musket because it had much better range. The Pennsylvania rifle drew inspiration from the jäger rifles used in German hunting; it started at around 54 inches long but could extend to over 6 feet.

2. Medad Hills’ Long Rifle

The Pennsylvania rifle — also known as the Kentucky rifle — was successful in the American colonies and led to similar models in the 18th century. For example, gunsmith Medad Hills crafted fowling pieces for hunting. Hills produced guns in Connecticut and helped hunters by creating long-barreled guns for increased accuracy. He later served in the Revolutionary War and made muskets for Connecticut in 1776.

3. Plains Rifles

After the Revolutionary War, rifle manufacturing began to take off in the United States, starting with the plains rifles. The new Americans began to expand westward and used plains rifles on the flat lands. Also known as the Hawken rifle, the plains rifle was shorter than its Pennsylvania predecessor but had a larger caliber, typically starting at .50. They were popular among hunters and trappers who needed to take down large animals from a distance.

4. Winchester 1876

A few decades later, the country broke out into a civil war. This era used military rifles from manufacturers like Springfield. However, it wasn’t until after the war that you’d see the hunting rifle that would inspire hunting rifles for decades.

Winchester was critical for late 19th-century rifles, starting with its 1876 model. This rifle was among the most high-powered yet for hunters. The Winchester 1876 was among the earliest repeaters and it had powerful capabilities with sizable ammunition — the intense bullets were necessary to take down large game like buffalo.

5. Winchester 1895

The success of the 1876 model led Winchester to create the 1895. This rifle was a repeater that featured smokeless rounds. Unlike its predecessors, the 1895 model was innovative because it included a box magazine below the action. It may be less powerful than models today, but it was incredibly potent for the time.

6. Winchester Model 70

Fast forward a bit to 1936. The country was in the Great Depression, but Winchester still produced excellent hunting rifles. Hunters called the Model 70 from Winchester the rifleman’s rifle; it took inspiration from Mauser, the German manufacturer. Winchester made the rifle with a controlled feed until 1964 before switching to a push feed, and it still makes variations of the Model 70 today.

7. Marlin 336 (1948)

After World War II, Marlin introduced the 336 model as a successor to its 1893 rifle. It’s a lever-action rifle your grandfather may have owned for deer hunting. Its specs may vary, but you’ll typically see it in a .30 or .35 caliber, with a barrel as short as 20 inches or as long as 24 inches. Marlin no longer makes the 336, but Ruger, which purchased Marlin, plans to bring it back in 2023.

8. Remington 700 (1962)

1962 saw what could be the best hunting rifle ever made — the Remington Model 700. This rifle is the most popular bolt-action firearm, with over five million sold since its inception. In the last 60 years, Remington has made numerous variations to keep up with modern demand. This model is famous for its pair of dual-opposed lugs and a recessed bolt face.

The Remington 700 became the hunting rifle of choice for many across America, leading to its adoption by the U.S. military and law enforcement. Remington also makes 700s for the police, the 700P, and builds the M24 and M40 sniper rifles for the military based on the 700.

The Evolution of Hunting Rifles

Rifles have come a long way since the beginning. Imagine picking up a Pennsylvania rifle and comparing it to your Mauser 18 Savanna. The hunting rifle helped settlers and early Americans hunt and sustain themselves, and its evolution has led to the great rifles you know today, like the Remington 700.


The post How Has the Hunting Rifle Evolved Over the Last 300 Years? appeared first on AllOutdoor.com.

AllOutdoor.com

We Didn’t Start the Fire: Heavy Metal Edition

https://theawesomer.com/photos/2023/03/we_didnt_start_the_fire_leo_moracchioli_t.jpg

We Didn’t Start the Fire: Heavy Metal Edition

Link

Wheel of Fortune, Sally Ride, heavy metal suicide. Leo Moracchioli didn’t start the fire, but he did an impressive job covering Billy Joel’s wordy 1989 hit, adding fuel to the inferno with his hard-edged guitar and gravelly vocals. If you’re waiting for Joel to update the song for the 21st century, don’t hold your breath.

The Awesomer

Laravel Open Weather Package


README

Packagist
GitHub stars
GitHub forks
GitHub issues
GitHub license

Laravel OpenWeather API (openweather-laravel-api) is a Laravel package for connecting to the Open Weather Map APIs ( https://openweathermap.org/api ) and easily accessing their free services (current weather, weather forecast, weather history).

Supported APIs

Installation

Install the package through Composer.
On the command line:

composer require rakibdevs/openweather-laravel-api

Configuration

If you are using Laravel > 7, there is no need to add the provider manually, thanks to package auto-discovery.

Add the following to your providers array in config/app.php:

'providers' => [
    // ...
    RakibDevs\Weather\WeatherServiceProvider::class,
],
'aliases' => [
    //...
    'Weather' => RakibDevs\Weather\Weather::class,	
];

Add your API key and desired language to .env:

OPENWAETHER_API_KEY=
OPENWAETHER_API_LANG=en

Publish the required package configuration file using the artisan command:

	$ php artisan vendor:publish

Edit the config/openweather.php file and modify the api_key value with your Open Weather Map api key.

	return [
	    'api_key'                => env('OPENWAETHER_API_KEY', ''),
	    'onecall_api_version'    => '2.5',
	    'historical_api_version' => '2.5',
	    'forecast_api_version'   => '2.5',
	    'polution_api_version'   => '2.5',
	    'geo_api_version'        => '1.0',
	    'lang'                   => env('OPENWAETHER_API_LANG', 'en'),
	    'date_format'            => 'm/d/Y',
	    'time_format'            => 'h:i A',
	    'day_format'             => 'l',
	    'temp_format'            => 'c'    // c for celsius, f for fahrenheit, k for kelvin
	];

Since the One Call API has been upgraded to version 3.0, you can now configure the API version in the config file. Please set an available API version in the config.

Usage

Here are some examples showing just how simple this package is to use.

use RakibDevs\Weather\Weather;

$wt = new Weather();

$info = $wt->getCurrentByCity('dhaka');    // Get current weather by city name

Access current weather data for any location on Earth, including over 200,000 cities! OpenWeather collects and processes weather data from different sources such as global and local weather models, satellites, radars, and a vast network of weather stations.

// By city name
$info = $wt->getCurrentByCity('dhaka'); 

// By city ID - download list of city id here http://bulk.openweathermap.org/sample/
$info = $wt->getCurrentByCity(1185241); 

// By Zip Code - string with country code 
$info = $wt->getCurrentByZip('94040,us');  // If no country code is specified, us is the default

// By coordinates : latitude and longitude
$info = $wt->getCurrentByCord(23.7104, 90.4074);

Output:

{
  "coord": {
    "lon": 90.4074,
    "lat": 23.7104
  },
  "weather": [
    {
      "id": 721,
      "main": "Haze",
      "description": "haze",
      "icon": "50d"
    }
  ],
  "base": "stations",
  "main": {
    "temp": 26,
    "feels_like": 25.42,
    "temp_min": 26,
    "temp_max": 26,
    "pressure": 1009,
    "humidity": 57
  },
  "visibility": 3500,
  "wind": {
    "speed": 4.12,
    "deg": 280
  },
  "clouds": {
    "all": 85
  },
  "dt": "01/09/2021 04:16 PM",
  "sys": {
    "type": 1,
    "id": 9145,
    "country": "BD",
    "sunrise": "01/09/2021 06:42 AM",
    "sunset": "01/09/2021 05:28 PM"
  },
  "timezone": 21600,
  "id": 1185241,
  "name": "Dhaka",
  "cod": 200
}

Make just one API call and get all your essential weather data for a specific location with OpenWeather One Call API.

// By coordinates : latitude and longitude
$info = $wt->getOneCallByCord(23.7104, 90.4074);

A 4-day forecast is available for any location or city. It includes weather forecast data with a 3-hour step.

// By city name
$info = $wt->get3HourlyByCity('dhaka'); 

// By city ID - download list of city id here http://bulk.openweathermap.org/sample/
$info = $wt->get3HourlyByCity(1185241); 

// By Zip Code - string with country code 
$info = $wt->get3HourlyByZip('94040,us');  // If no country code is specified, us is the default

// By coordinates : latitude and longitude
$info = $wt->get3HourlyByCord(23.7104, 90.4074);

Get access to historical weather data for the previous 5 days.

// By coordinates : latitude, longitude and date
$info = $wt->getHistoryByCord(23.7104, 90.4074, '2020-01-09');

The Air Pollution API provides current, forecast, and historical air pollution data for any coordinates on the globe.

Besides basic Air Quality Index, the API returns data about polluting gases, such as Carbon monoxide (CO), Nitrogen monoxide (NO), Nitrogen dioxide (NO2), Ozone (O3), Sulphur dioxide (SO2), Ammonia (NH3), and particulates (PM2.5 and PM10).

Air pollution forecast is available for 5 days with hourly granularity. Historical data is accessible from 27th November 2020.

// By coordinates : latitude, longitude and date
$info = $wt->getAirPollutionByCord(23.7104, 90.4074);

The Geocoding API is a simple tool to ease searching for locations when working with geographic names and coordinates.

  • Direct geocoding converts the specified name of a location or area into exact geographical coordinates.
  • Reverse geocoding converts geographical coordinates into the names of nearby locations.

// By city name
$info = $wt->getGeoByCity('dhaka');

// By coordinates : latitude and longitude (reverse geocoding)
$info = $wt->getGeoByCord(23.7104, 90.4074);

The free OpenWeather plan is limited to:

  • 60 calls/minute
  • 1,000,000 calls/month
  • 1,000 calls/day when using One Call requests

License

Laravel Open Weather API is licensed under The MIT License (MIT).

Laravel News Links

Valet 4.0 is released

https://laravelnews.s3.amazonaws.com/images/laravel-valet-version-four.png

Valet 4 is officially released! Let’s look into what v4 offers and how you can upgrade your local install today.

The backdrop

Valet was originally introduced in May 2016 with this incredible video. Valet v2 was released soon after, bringing about the move from Caddy to Nginx. But after that, development on Valet slowed; as Taylor has often pointed out, “at that point, Valet was feature complete.”

However, when I picked up maintenance of Valet a few years back, there were two things I noticed: first, that many people needed different versions of PHP for their different sites; and second, that miscellaneous features and bug fixes added over the years had made the codebase a bit difficult to reason about at times.

Valet v3 was released in March 2022, with the primary focus on adding support for multiple versions of PHP running in parallel on the same machine.

And now, we’re looking at Valet v4.

What’s new in Valet 4?

The most important change to Valet 4 is something you can’t even see from the outside: the internals of the project have been re-architected and heavily tested. Just to be clear, they’ve been re-architected back toward the style of simplicity of Taylor and Adam’s original code. But they’re now covered with all forms of unit and integration tests, and the changes made since Valet 2 are now much better integrated.

What does that mean?

Valet 4 is the most stable, easy to debug, and easy to fix version of Valet yet.

New features in Valet 4

There are a few user-facing new features:

  • valet status command: If you run valet status, you’ll get a table showing you the “health” of a few important aspects of your Valet application. This is helpful both because you can use it when you’re debugging and because, like any good CLI tool, it returns success or failure codes that other CLI tools can consume.
  • Upgrades to ngrok: If you use ngrok to share your sites, older versions of Valet bundled ngrok as an install. Now, Valet will prompt you to install ngrok through Homebrew, allowing you to have one universal version installed, and allowing you to keep it up to date as you please.
  • Expose as a share option: If you use Expose to share your sites, it’s now integrated into Valet! Run valet share-tool expose and, if you don’t have Expose installed, it’ll prompt you to install it. Once you’ve set up your Expose token, you’re ready to share using the same valet share command you’re familiar with.

Upgrade notes

If you’re upgrading from Valet 3, here’s my preferred way to upgrade:

  1. Edit your ~/.composer/composer.json file and update your Valet requirement to "^4.0"
  2. Update: composer global update laravel/valet
  3. Run valet install

Make sure you run valet install, as it’ll check your system’s compatibility and upgrade some configuration files for you.

Custom drivers

If you have any custom drivers, you’ll want to update them to match the new syntax (basically, drivers are now namespaced and have type hints and return types).

.valetphprc

If you use .valetphprc to define your sites’ PHP versions, you’ll want to rename those files to .valetrc and change their contents; .valetphprc files just contain a PHP Brew formula (e.g. php@8.1), but the new .valetrc files are broader config files, so you’ll need to prefix the formula with php=.

So if your project had this .valetphprc file:

php@8.1

You’ll want to rename it to .valetrc and update its contents to this:

php=php@8.1

Backwards compatibility: PHP 7.1-7.4

Valet 4 requires PHP 8.0+ to be installed on your system via Homebrew. As I mentioned already, you can use Valet’s isolation feature to set individual sites to use older versions of PHP, back to 7.1.

However, if you have a reason you need to use PHP 7.1-7.4 as your primary linked PHP (meaning if you just type php -v you see something between 7.1 and 8.0), you can do that! Just make sure that you have a modern version of PHP installed on your machine, and Valet will use that version to run its internal commands.

However, a quick warning: If you use Valet 4 and your primary linked version of PHP is lower than PHP 8, all of your local Valet CLI commands will run a bit more slowly, as they have to find your modern PHP install and proxy their calls through it.

The future

That’s it! The primary goal of Valet 4 is stability, but it also opens up some great new options for the future. First, the .valetrc file is much more powerful than .valetphprc was, and we can make it a lot more configurable. And second, I dropped a concept called Extensions that was basically entirely unused, with the hope of building a plugin system sometime in the near future.

If you followed my journey of rebuilding Valet for v4 on Twitter, you might have seen that I attempted to make it work on Linux. Sadly, that wasn’t successful, but I still have dreams of one day attempting it again. No promises… but it’s still a dream!

I hope you all love Valet 4. Enjoy!

Laravel News