Caveman Spark AR-15 Crush Washers That Change Color When Hot

https://www.thefirearmblog.com/blog/wp-content/uploads/2022/08/Spark-Crush-Washers-Orange-180×180.png

Earlier this year, Caveman LLC introduced a series of crush washers for AR-15s that change to bright colors when the barrel starts to heat up past 131 degrees F (55 degrees C). Caveman states that the intended idea behind their Spark AR-15 Crush Washers is to warn shooters when the barrel is too hot to […]

Read More …

The post Caveman Spark AR-15 Crush Washers That Change Color When Hot appeared first on The Firearm Blog.

The Firearm Blog

Seven Ways To Reduce MySQL Costs in the Cloud

https://www.percona.com/blog/wp-content/uploads/2022/08/image1.png

With the economy slowing down and inflation raging in many parts of the world, your organization will love you if you find ways to reduce the costs of running its MySQL databases. This is especially true if you run MySQL in the cloud, as the cloud often allows you to see the immediate effect of those savings, which is what this article will focus on.

With so many companies announcing layoffs or hiring freezes, optimizing your costs may free enough budget to keep a few team members on or hire folks your team needs so much. 

1. Optimize your schema and queries

While optimizing schema and queries will only do so much to reduce your MySQL costs in the cloud, it is a great place to start. A suboptimal schema and queries can require a much larger infrastructure footprint than necessary, and this is also something the “fully managed” Database as a Service (DBaaS) solutions from major cloud vendors do not really help you with. It is not uncommon for a system with a suboptimal schema and queries to require 10x or more resources to run compared to an optimized one.

At Percona, we’ve built Percona Monitoring and Management (PMM), which helps you find out which queries need attention and how to optimize them. If you need more help, Percona Professional Services can often get your schema and queries in shape in a matter of days, providing long-term savings at a very reasonable upfront cost.
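To make this concrete, here is a minimal sketch (the table and column names are made up): a query flagged as slow turns out to scan the whole table, and a composite index fixes it.

-- Hypothetical slow query; EXPLAIN reveals a full table scan (type: ALL, no key used).
EXPLAIN SELECT * FROM orders WHERE customer_id = 42 AND status = 'pending';

-- A composite index lets MySQL read only the matching rows instead,
-- shrinking the footprint the workload needs.
ALTER TABLE orders ADD INDEX idx_customer_status (customer_id, status);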

2. Tune your MySQL configuration

Optimal MySQL configuration depends on the workload, which is why I recommend tuning your queries and schema first. While the gains from MySQL configuration tuning are often smaller than those from fixing your queries, they are still significant. We have an old but still very relevant article on the basic MySQL settings you’ll want to tune, which you can check out. You can also consider tools like Releem or Ottertune to help you arrive at a better MySQL configuration for your workload.
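As an illustration, configuration tuning usually revolves around a handful of my.cnf settings like the fragment below. The values are placeholders to size against your own workload and instance class, not recommendations.

[mysqld]
# Placeholder values -- size these to your workload and instance class.
innodb_buffer_pool_size = 12G    # often ~70-80% of RAM on a dedicated database host
innodb_log_file_size    = 1G     # larger redo logs help write-heavy workloads
innodb_flush_method     = O_DIRECT
max_connections         = 500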

More advanced tuning might include exploring alternative storage engines, such as MyRocks (included in Percona Distribution for MySQL). MyRocks can offer fantastic compression and minimize required IO, thereby drastically reducing storage costs. For a more in-depth look at MyRocks performance in the cloud, check out the blog Scaling IO-Bound Workloads for MySQL in the Cloud. 

3. Implement caching

Caching is cheating — and it works great! The most advanced caching for MySQL today comes from rolling out ProxySQL. It provides additional performance benefits, such as connection pooling and read-write splitting, but I think caching is the most broadly useful feature for MySQL cost reduction. ProxySQL is fully supported by Percona with a Percona Platform subscription and is included in Percona Distribution for MySQL.

Enabling query cache for your heavy queries can be truly magical — often enough it is the heaviest queries that do not need to provide the most up-to-date information and can have their results cached for a significant time.

You can read more about how to configure ProxySQL caching on the Percona Blog.
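ProxySQL caching is configured through query rules on its admin interface. As a minimal sketch (the digest pattern, table name, and TTL are made up for illustration):

-- Run on the ProxySQL admin interface (port 6032 by default).
-- Cache results matching a heavy reporting query for 60 seconds.
INSERT INTO mysql_query_rules (rule_id, active, match_digest, cache_ttl, apply)
VALUES (10, 1, '^SELECT .* FROM report_totals', 60000, 1); -- cache_ttl is in milliseconds

LOAD MYSQL QUERY RULES TO RUNTIME; -- apply without a restart
SAVE MYSQL QUERY RULES TO DISK;    -- persist across restarts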

4. Rightsize your resources

Once you have optimized your schema and queries and tuned the MySQL configuration, you can check what resources your MySQL instances are using and where you can put them on a diet without hurting performance. CPU, memory, disk, and network are the four primary resources that impact MySQL performance, and in the cloud they can often be managed semi-independently. For example, if your workload needs a lot of CPU power but little memory for caching, you can consider CPU-intensive instances. PMM has some great tools for understanding which resources your workload demands most.


You can also find some additional resource-specific tips in my other article on MySQL performance.

In our experience, thanks to the simplicity of “scaling with credit cards,” many databases in the cloud become grossly over-provisioned over time, so there can be a lot of opportunity for savings in instance size reduction!
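If your MySQL happens to run on Amazon RDS, for example, downsizing can be a single CLI call. The instance identifier and class below are placeholders:

# Scale an over-provisioned RDS instance down to a smaller class.
aws rds modify-db-instance \
    --db-instance-identifier mydb \
    --db-instance-class db.r6g.large \
    --apply-immediately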

5. Ditch DBaaS for Kubernetes

The price differential between DBaaS offerings and the comparable raw resources continues to grow. With the latest Graviton instances, you pay double for Amazon RDS compared to the cost of the underlying instance. Amazon Aurora, while offering some wonderful features, is even more expensive. If you are deploying just a couple of small nodes, this additional cost beats having to hire people to deploy and manage a “do it yourself” solution. If you’re spending tens of thousands of dollars a month on your DBaaS solution, the situation may be different.

A few years ago, the only options were to build your own database service from building blocks like EC2 and EBS, or to use a DBaaS solution such as Amazon RDS for MySQL. Now another option has emerged: Kubernetes.

You can get Amazon EKS (Managed Kubernetes Service), Google Kubernetes Engine (GKE), or Azure Kubernetes Service (AKS) for a relatively small premium over the underlying infrastructure costs. Then you can use Percona’s MySQL Operator to deploy and manage your database at a fraction of the complexity of traditional deployments. If you’re already using Kubernetes for your apps, or you follow an infrastructure-as-code (IaC) deployment approach, it may be even handier than DBaaS solutions, and it is 100% open source.
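As a sketch of how small the moving parts are, deploying the operator and a cluster on an existing Kubernetes cluster looks roughly like this. The manifest paths follow the Percona XtraDB Cluster Operator quickstart at the time of writing, so check the operator documentation for the current version:

# Install the operator, then create a cluster from the default custom resource.
kubectl apply -f https://raw.githubusercontent.com/percona/percona-xtradb-cluster-operator/main/deploy/bundle.yaml
kubectl apply -f https://raw.githubusercontent.com/percona/percona-xtradb-cluster-operator/main/deploy/cr.yaml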

Need help? Percona has your back with Percona Platform.

6. Consider lower-cost alternatives

A few years ago, only the major cloud providers (in the U.S.: AWS, GCP, Azure) had DBaaS solutions for MySQL. The situation has changed, with MySQL DBaaS now also available from second-tier and typically lower-cost providers. You can get MySQL DBaaS from Linode, DigitalOcean, and Vultr at a significantly lower cost (though with fewer features). You can also get MySQL from independent providers like Aiven.

If you’re considering deploying databases on a different cloud vendor than the one your application uses, pick a nearby location for deployment and check the network latency between your database and your application: poor network latency or reliability issues can negate all the savings you’ve achieved.
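A quick sanity check before committing is to measure the round trip from the application host to the candidate database. The hostname and credentials below are placeholders:

# Network latency alone:
ping -c 10 mysql.example-provider.com

# Full round trip including the MySQL handshake and a trivial query:
time mysql -h mysql.example-provider.com -u app -p -e "SELECT 1"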

7. Let experts manage your MySQL database

If you’re spending more than $20,000 a month on your cloud database footprint, or if your application is growing or changing rapidly, you’ll often have better database performance, security, and reliability at lower cost by going truly “fully managed” instead of “cloud fully managed.”  The latter relies on “shared responsibility” in many key areas of MySQL best practices.  

“Cloud fully managed” ensures the database infrastructure is operating properly (and scales with your credit card), but it does not fully ensure you have an optimal MySQL configuration and optimized queries, are following security best practices, or have picked the most performant and cost-effective instance for your MySQL deployment.

Percona Managed Services is one solution to consider, though there are many other experts on the market who can take better care of your MySQL needs than the DBaaS offerings of the major clouds.

Summary

If the costs of your MySQL infrastructure in the cloud are starting to bite, do not despair. There are likely plenty of savings opportunities available, and I hope some of the above tips will apply to your environment.  

It’s also worth noting that my above advice is not theory, but is based on the actual work we’ve done at Percona. We’ve helped many leading companies realize significant cost savings when running MySQL in the cloud, including Patreon, which saved more than 50% on their cloud database infrastructure costs with Percona. 


Learn how Patreon saved more than 50% on database infrastructure costs

Percona Database Performance Blog

Why You Should Use Administrative Interfaces to Manage Linux Servers

https://static1.makeuseofimages.com/wordpress/wp-content/uploads/2022/08/linux-management-interfaces-systems.jpg

The biggest problem for Linux system and server administrators is troubleshooting the errors they encounter. Fixing these issues, managing security problems, and tracking down the root cause behind them from the command line can sometimes pose serious challenges.

Linux itself is a command-line universe. It is not easy to learn all the commands and their parameters, let alone use them to troubleshoot errors.

That’s why there are Linux management interfaces to keep everything in sight. Most system and server administrators prefer these administrative interfaces for managing their Linux systems instead. Here’s why you should consider using an admin interface to manage a Linux server.

Why Use an Admin Interface for Linux Management?

For Linux system administrators, it is important to understand how these interfaces work in addition to knowing how to use them properly. In short, you can think of a management interface as a tool that sits between your network management station and the object you want to manage, in this case a Linux machine.

To picture it better, imagine you have a Linux server. To manage this server and access its various objects, you need a management protocol. Management interfaces make it possible to monitor the relationship between those management protocols and the objects being managed.

Doing all this tracking from the command line is quite difficult. You would need to spend a lot of time at the terminal and master the Linux networking commands, and even then, the chance of making mistakes grows. Managing a system manually through commands alone is both risky and laborious.

Using a Web Interface for Linux Administration

Web interfaces are accessible and easy to use. If you’re managing a system using a web interface, you can often find databases, customer information, user agreements, uploaded files, IP addresses, and even error logs, all in one place. Since everything will be in front of your eyes, you can perform your management operations with just a few mouse clicks.

What Is Webmin?

Managing web-based systems with Webmin is very practical. If you have used environments such as cPanel or Plesk before, Webmin will feel familiar right away. Moreover, Webmin is open source and packed with features.

Webmin allows you to manage the accounts of all registered users in the system from a single location, and no coding skills are required. You also don't need shell commands to configure your network or change network files, as Webmin can assist you with network configuration as well.

Another management issue Linux users know all too well is disk partitioning. Webmin comes with partitioning and automatic backup features. It also takes care of security protocols, so you don't have to worry about SSL renewal. In addition, there is a command shell feature that lets you issue Linux and Unix commands from within Webmin.

Today, cloud technologies continue to grow at a very rapid pace. If you are considering using a cloud computing service or want to build your system on a cloud, Webmin also has a cloud installation feature.

Another very useful feature of Webmin is that it has different modules. Since it is open source, you can write your own modules and can even benefit from ready-made modules on the internet. For example, using the Virtualmin GPL module, you can control your hosting service. Moreover, it is possible to manage virtual hosts and DNS from here.

If you have more than one virtual server, Virtualmin GPL creates a Webmin user for each virtual server. Each server manages only its own virtual server with Webmin. Thus, it is possible to have independent mailboxes, websites, applications, database servers, and software in each of these virtual servers.
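If you want to try Webmin, installing it on Debian or Ubuntu is typically a matter of adding the Webmin repository. The details below follow Webmin's documented setup at the time of writing, so check webmin.com for the current instructions. Once installed, the interface is served over HTTPS on port 10000.

# Add the Webmin repository and its signing key, then install (Debian/Ubuntu).
echo "deb https://download.webmin.com/download/repository sarge contrib" | sudo tee /etc/apt/sources.list.d/webmin.list
wget -qO - https://download.webmin.com/jcameron-key.asc | sudo apt-key add -
sudo apt update && sudo apt install webmin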

Package Configuration in Linux System Management

Another topic Linux system administrators should be familiar with is package configuration and management. When installing a package on your system, you usually just follow what happens on the command line: the download runs, the installed files are listed, and you get some information about the installation. Under the hood, however, the process is not that simple.

When you want to install a package, it needs to be configured system-wide. On Debian and Ubuntu systems, for example, the configuration tool that does this is debconf. It configures the package you want to install, and you can revisit those settings later with the dpkg-reconfigure command.

To better understand why debconf belongs among the management interfaces, it helps to walk through an example. You can query the packages in your debconf database with a simple command: the debconf-show command below, with the --listowners parameter, returns the owners of the entries in the database:

sudo debconf-show --listowners

Now try to reconfigure an item of your choice using dpkg-reconfigure:

sudo dpkg-reconfigure wireshark-common

As you can see, a configuration interface for wireshark-common opens, and the configuration operations become easier through the debconf interface. Note that there is no standalone debconf command on the command line; that's because debconf is already integrated into dpkg.

If you are going to write your own Linux packages and use them in system administration, it is useful to be familiar with debconf, because it provides an interface for talking to the users who install your package and collecting input from them. For this, you use the frontend and backend APIs that debconf provides, as in the sketch below.
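Here is a minimal sketch of a package's "config" maintainer script using that frontend API. The package name and question are hypothetical, and a matching entry in the package's templates file is assumed:

#!/bin/sh
# Hypothetical "config" maintainer script for a package named mypkg.
set -e
. /usr/share/debconf/confmodule   # load the debconf frontend API

db_input medium mypkg/enable-feature || true   # queue the question (priority: medium)
db_go                                          # show all queued questions to the user

db_get mypkg/enable-feature                    # the answer is returned in $RET
if [ "$RET" = "true" ]; then
    echo "mypkg: feature will be enabled" >&2
fi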

Importance of Admin Interfaces in Linux System Management

There are a lot of commands you can use when managing Linux systems and servers. Each of these commands has dozens of different parameters. Of course, it is very valuable for you to become familiar with and learn about them. However, you can’t ignore the convenience and accessibility provided by management interfaces.

Even changing a basic configuration setting requires editing files, and those edits can damage your system. In a large-scale project, such configuration problems can cause huge losses in terms of both expense and security. Management interfaces, however, save you from this whole pile of commands and parameters.

The main purpose here is to reduce the workload and save time. Webmin and debconf are just examples; you may also want to learn tools such as Cockpit and Nagios. These are powerful, frequently used Linux system and server administration tools that will serve you well.

MUO – Feed

Objects + Duct Tape vs. Hydraulic Press

https://theawesomer.com/photos/2022/08/duct_tape_hydraulic_press_t.jpg


Duct tape is an incredibly strong and versatile mending tool. But can it help objects stand up to the force of a mighty 150-ton hydraulic press? HPC wrapped some everyday items in thick layers of the sticky silver stuff to see if it improves their ability to survive the press.

The Awesomer

‘Eat s*** and die’: Man on the street offers his patriotic dissent against Biden

https://www.louderwithcrowder.com/media-library/image.png?id=31107219&width=980

The opening weekend of college football is two weeks away. It was opening day of 2021 when we first got a glimpse of how little the American people thought of Joe Biden. Before it showed up in polls, Americans gathered to chant "f*ck Joe Biden." Ol’ Puddinghead hasn’t done much to change that perception, other than spend money and make things more expensive. That is evident in this man on the street interview.

Remember, as the media taught us during the Trump years and the Bush years before that, dissent is the highest form of patriotism. However, for unknown reasons, patriotic dissent was not valid during the years 2009-2016.

"I’d tell him to eat sh*t and die and stick the American flag up his f*cking ass."

Inserting the flag up someone’s rectum goes against U.S. Flag Code. However, when you look at inflation and all the other things this president has made a mess of, the anger and disapproval are understandable.

"Because he’s a f*cking traitor, son of a b*tch, old f*cking feeble-minded f*cking gaylord."

By all accounts, Biden is heterosexual. And I feel we all need to be more careful about recklessly accusing people of being traitors. The old and feeble-minded part is accurate.

"Donald Trump is the number one president out of all times from George Washington on down."

Donald Trump agrees.

"And he freed Kodak! Free Kodak!"

This is correct. President Trump commuted rapper Kodak Black’s prison sentence before leaving the White House. Joe Biden can’t even free WNBA star Brittney Griner.

According to RCP polling averages, only 40% of Americans approve of Joe Biden. 55% disapprove.

The Louder with Crowder Dot Com Website is on Instagram now! Follow us at @lwcnewswire and tell a friend!

Louder With Crowder

This Throwing Knife Launcher Is an Amazing and Terrifying Feat of Engineering

http://img.youtube.com/vi/-BKEZbYOMpI/0.jpg

Steven Seagal makes knife throwing look effortless in movies like Under Siege, but it’s actually an incredibly hard skill to master if you don’t have Hollywood movie magic helping you out. That’s what Quint, of the YouTube channel Quint BUILDs, was lacking. But with some good old-fashioned engineering, they managed to build a handheld throwing knife launcher that reliably hits its target from varying distances.

Jeff Goldblum’s “Your scientists were so preoccupied with whether or not they could, they didn’t stop to think if they should,” is what comes to my mind here. That apparently wasn’t the case with Quint, though, because after months of designing, redesigning, programming, and reprogramming, the YouTuber’s built one of the most impressively terrifying pieces of amateur engineering we’ve ever seen.

Knife Throwing Machine!

Once you learn to hit a target with an arrow, hitting it again from varying angles and distances is relatively easy with some minor aim adjustments. That’s not the case when throwing knives. Added spin is needed to give the knife enough kinetic energy to embed itself into a target, with a specific number of rotations needed to ensure its sharp tip hits the target first, not the flat side or blunt handle. If you’ve ever been axe throwing, you’ll know what we’re talking about.

As with many other talented makers and hardware hackers on YouTube, Quint decided that brute force engineering was a better alternative to diligent practice, so they created a handheld launcher that uses high-performance servo motors, solenoids, custom 3D-printed components, and some heavy batteries to hurl throwing knives with the right amount of spin so they hit the mark and stick every single time. To allow the launcher to work from varying distances, the launcher even employs a LiDAR sensor that’s used to measure how far away the target is and adjust its green targeting laser appropriately.

It appears to work almost flawlessly, launching knives from a magazine as quickly as the machine can reset itself and Quint can pull the trigger. Steven Seagal might finally have some stiff competition for Under Siege 3.

Gizmodo

What are the caveats of running Laravel on AWS Lambda


Let’s set the scene. We’re looking to scale a PHP application. Googling around leads us to find that AWS Lambda is among the most scalable services out there. It doesn’t support PHP natively, but we have https://bref.sh. Not only that, we also have Serverless Visually Explained, which walks us through what we need to know to get PHP up and running on AWS Lambda. But we have an 8-year-old project that was not designed from the ground up to be serverless. It’s not legacy. Not really. It works well, has some decent test coverage, a handful of engineers working on it, and it’s been a success so far. It just was not designed for horizontal scaling. What now?

Bref has two primary goals: feature parity with other AWS Lambda runtimes (the Event-Driven layer) and being a replacement for web hosting (the Web Apps layer). In a way, with the Web App layer, Bref lets us lift-and-shift from our current hosting provider to AWS Lambda and have PHP-FPM working the way we all know and love. So if we just take a large codebase and redeploy it on AWS Lambda, will everything just work?

Here are some caveats to pay close attention to on this journey.

30-second API Gateway timeout

When deploying to AWS Lambda, it’s very common to use API Gateway as the routing solution. It’s like an nginx in front of our PHP-FPM, but completely scalable and managed by AWS. It does come with a hard timeout limit of 30 seconds. If your application never takes longer than that to process any HTTP request, this is not a big deal, but if it does, even if only rarely, API Gateway will kill the request.

If this is a deal-breaker for your application, a workaround could be to use an Application Load Balancer instead.

PHP Extensions

Not every PHP extension is easily available. Tobias Nyholm maintains a great community project for PHP extensions at https://github.com/brefphp/extra-php-extensions. A lot of extensions are available by default (as documented at https://bref.sh/docs/environment/php.html#extensions), but if you need an extension that is not available by default and not provided by Tobias, you’ll either have to build an AWS Lambda layer yourself or find a way to live without that extension.

Incoming Payload limit

When someone sends an HTTP request to your application, they’re usually sending some raw data with it, typically in the body of the request. If the body of the request goes above the AWS limit, your code will never even be executed: AWS kills the request upfront. The limits are currently as follows:

  • 1MB for Application Load Balancer
  • 10MB for API Gateway

A very big portion of use cases fit within both of these offerings. The most common use case that may pose a problem is file uploads. If the file is bigger than the allowed size, AWS will not accept the request. A common workaround is to refactor the application so that the backend returns an S3 Signed Url for the frontend to upload the file directly to S3, then notify the backend once it’s done. Without going into complex use cases, S3 can take 100MB of upload in a plain and simple request, and it also supports multipart upload with a much bigger limit.
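As a sketch of the backend side using the AWS SDK for PHP (bucket, key, and expiry are placeholders), the endpoint hands back a pre-signed PUT URL that the frontend uploads to directly:

<?php

use Aws\S3\S3Client;

$s3 = new S3Client(['region' => 'us-east-1', 'version' => 'latest']);

// Placeholder bucket and key; the frontend will PUT the file straight to S3.
$command = $s3->getCommand('PutObject', [
    'Bucket' => 'my-uploads-bucket',
    'Key'    => 'uploads/' . uniqid() . '.csv',
]);

// The URL is valid for 10 minutes; Lambda never sees the file body.
$request = $s3->createPresignedRequest($command, '+10 minutes');

return response()->json(['uploadUrl' => (string) $request->getUri()]);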

HTTP Response Limit

AWS Lambda has a limit of 1MB for the HTTP response. When I started getting 502 errors due to large response sizes, the workaround that worked best for me was to gzip the HTTP response. That brought the response size down from 1MB to roughly 50~100kb. I never had to work on a use case where the gzipped response would be bigger than 1MB, so if you have such a case, let me know on Twitter because I’m very interested in it!
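A sketch of what that can look like as a Laravel middleware; the size threshold is arbitrary, so adjust to taste:

<?php

namespace App\Http\Middleware;

use Closure;
use Illuminate\Http\Request;

class CompressResponse
{
    public function handle(Request $request, Closure $next)
    {
        $response = $next($request);
        $content  = $response->getContent();

        // Only gzip if the client accepts it and the body is large.
        $acceptsGzip = str_contains($request->header('Accept-Encoding', ''), 'gzip');

        if ($acceptsGzip && is_string($content) && strlen($content) > 100_000) {
            $response->setContent(gzencode($content, 9));
            $response->headers->set('Content-Encoding', 'gzip');
            $response->headers->remove('Content-Length'); // length changed
        }

        return $response;
    }
}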

AWS Lambda execution limit

AWS Lambda is capped at 15 minutes of execution. If you use API Gateway, requests are capped at 30 seconds anyway, but with an Application Load Balancer it’s possible to have HTTP requests running up to 15 minutes. Although that’s not very common or likely to be useful, one thing that does come up is background jobs, which can sometimes go above 15 minutes. One use case I worked on was importing a CSV file. The HTTP layer would return a Signed Url, the frontend would upload to S3 and notify the backend of a new file ready for processing. A message would be produced to SQS, and a new AWS Lambda would pick up the message as the worker. Downloading the CSV file from S3 and processing each row individually could take more than 15 minutes. The way I handled that was to create a recursive job. What I did was:

  • Download the file from S3
  • Process 1k rows on the file (for loop)
  • Each row processed increments a record on the database
  • Produce a new SQS message IDENTICAL to the one currently being worked on
  • Finish the job successfully

With this approach, the SQS message that gets picked up will finish within 15 minutes and produce a new SQS message with the exact same work unit. The job can always load the pointer from the database to know where it left off (the starting point of the for loop). If there aren’t 1k rows left to run through, we’ve reached the end of the file and can stop producing SQS messages. If there are still rows to be processed, the new IDENTICAL SQS message will start a new Lambda, which means another 15-minute limit.
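Here is a minimal sketch of that recursive job; the class, table, and the downloadRows() helper are made up for illustration:

<?php

namespace App\Jobs;

use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Bus\Dispatchable;
use Illuminate\Support\Facades\DB;

class ProcessCsvChunk implements ShouldQueue
{
    use Dispatchable, Queueable;

    public function __construct(public string $s3Key) {}

    public function handle(): void
    {
        // The pointer records where the previous run left off.
        $offset = (int) DB::table('csv_imports')->where('s3_key', $this->s3Key)->value('pointer');

        // downloadRows() is a hypothetical helper: fetch the file from S3
        // and return at most 1,000 rows starting at $offset.
        $rows = $this->downloadRows($this->s3Key, $offset, 1000);

        foreach ($rows as $row) {
            // ... process the row ...
            DB::table('csv_imports')->where('s3_key', $this->s3Key)->increment('pointer');
        }

        // A full batch means there may be more rows left: dispatch an
        // identical job so a fresh Lambda gets its own 15 minutes.
        if (count($rows) === 1000) {
            self::dispatch($this->s3Key);
        }
    }
}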

5 Layers per function

If your project uses too many exotic PHP extensions, you may hit the limit of 5 layers per function. Bref itself consumes 2 of the 5 layers. Each extension is a separate layer. If you need, e.g., GMP for verifying JWT tokens, Imagick for image processing, and Redis, you’ve reached your limit, and the next extension you need might be problematic.

50 MB of zipped Codebase / 250 MB of unzipped codebase

Your project source code cannot exceed 50MB zipped or 250MB unzipped. The 250MB unzipped limit also includes the Lambda layers extracted into your Lambda environment. It sounds like a lot of code, but some PHP packages put binaries in your vendor folder, which can take a lot of space. There is a workaround documented by Bref that involves zipping your vendor folder, uploading it to S3, and deploying your code without it. The vendor folder then gets downloaded on-the-fly when your Lambda starts. It adds a little overhead to your cold start but overcomes the AWS limitation on codebase size.

read-only file-system

The entire file system on AWS Lambda is read-only, with the exception of /tmp. Any process that writes files to disk will need tweaking to work with the temporary folder. Furthermore, two different requests are not guaranteed to hit the exact same Lambda, so relying on the local disk is not a safe bet. Any data that may need to be accessed in the future must go to S3 or another durable storage.

500MB on /tmp

Another thing to keep in mind is that /tmp has only 500MB of disk space. If you fill it up, the Lambda environment will start throwing disk-full errors.

SQS messages cannot have more than 256kb of payload

Maybe on your server you use Redis as the queue storage for your background jobs. If your serialized job class exceeds 256kb, it will not fit in AWS SQS. A common workaround is to pass the job a reference to something big instead of the big thing itself. For instance, if you have a job class that takes a collection of 1 million database rows, that can be mitigated by passing the instructions to query those rows instead, as sketched below. An SQL query is extremely small and produces a roughly equivalent result.
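A minimal sketch of the idea (the job class names are made up):

<?php

// Too big: a million serialized rows travel inside the SQS message.
ProcessRows::dispatch($hugeCollection);

// Small: only the query bounds travel; the worker re-runs the query itself.
ProcessRowRange::dispatch(1, 1_000_000);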

Every-minute CRON

When you have a server, setting up a CRON that runs every minute to do some clean-up or tidy-up tasks sounds trivial. But since AWS Lambda charges per code execution, a CRON running every minute can add up on your bill and/or have unwanted side effects. AWS offers a scheduler for AWS Lambda, so code can still run on a regular basis, but it’s important to keep in mind that every time Lambda starts, it adds to your bill. Things like cleaning up old/stale files might not even be necessary, since every request may hit a clean AWS Lambda environment.

Conclusion

This may not be an exhaustive list of gotchas, but I tried to include everything I have faced while working with AWS Lambda for the past 3 years. A lot of these limitations have acceptable workarounds, and overall, hosting a Laravel application on a single fat AWS Lambda has brought more benefits than problems, so I would still advise in favour of it if you’re looking to scale and ditch server maintenance.

As always, hit me up on Twitter with any
questions.

Cheers.

Laravel News Links