The Dungeons & Dragons Movie’s Final Trailer Is Very, Very Weird

https://i.kinja-img.com/gawker-media/image/upload/c_fill,f_auto,fl_progressive,g_center,h_675,pg_1,q_80,w_1200/ce18459b349fd7d58666159ba9b2d0c4.jpg

It’s a mere eight days before Dungeons & Dragons: Honor Among Thieves hits theaters, a movie that by all accounts is quite fun if not particularly consequential. Seriously, I haven’t heard anybody bad-mouth the film since its first trailer was released back in July of 2022. So why does this final trailer seem so convinced that everyone thinks the movie is terrible?

The trailer is so bizarre that the choice to use /Film’s quote about how it contains “The most Chris Pine a Chris Pine performance has been in a long time” is not the weirdest thing about it:

Dungeons & Dragons: Honor Among Thieves | Final Trailer (2023 Movie)

The trailer begins with “Forget everything you think you know… everyone is raving about Dungeons & Dragons!” Charitably, it reads like the announcer is certain everyone thinks the movie is going to be a huge pile of crap, but don’t listen to the haters! Except… there aren’t any? Seriously, the film’s gotten good critical reactions and looks—and has always looked—like a lot of fun! There’s a giant list of publications that have given the movie positive reviews right in the trailer! It’s weirdly defensive, trying to fight a problem that doesn’t seem to exist.

With that in mind, it sounds more like the announcer wants you to have some sort of amnesia before you go to watch the film when it premieres on March 31. “Forget everything you know! …also, unrelatedly, people seem to like Dungeons & Dragons: Honor Among Thieves. It’s Chris Pine-y as hell, guys. You like Chris Pine, right? Well, forget that you like Chris Pine, too! I demand it!”



Gizmodo

Comparisons of Proxies for MySQL

https://www.percona.com/blog/wp-content/uploads/2023/03/lucas.speyer_an_underwater_high_tech_computer_server_a_dolpin_i_9337e5c5-e3c5-41dd-b0b1-e6504186488b-150×150.png

mysql proxy

With a special focus on Percona Operator for MySQL

Overview

HAProxy, ProxySQL, MySQL Router (AKA MySQL Proxy): in the last few years, I have had to answer many times which proxy to use and in which scenario. When designing an architecture, many components need to be considered before deciding on the best solution.

When deciding what to pick, there are many things to consider: where the proxy needs to sit, whether it “just” needs to redirect connections or also provide features such as caching and filtering, and whether it needs to integrate with some MySQL embedded automation.

Given that, there has never been a single straight answer. Instead, an analysis needs to be done: only after a better understanding of the environment, the needs, and the evolution the platform must achieve is it possible to decide which will be the better choice.

However, recently we have seen an increase in the usage of MySQL on Kubernetes, especially with the adoption of Percona Operator for MySQL. In this case, we have a quite well-defined scenario that can resemble the image below:

MySQL on Kubernetes

In this scenario, the proxies must sit inside Pods, balancing the incoming traffic from the Service LoadBalancer and connecting it to the active data nodes.

Their role is merely to make sure that any incoming connection is redirected to nodes that can serve it. This includes separating Read/Write from Read Only traffic, a separation that can be achieved at the service level either with automatic recognition or with two separate entry points.

In this scenario, it is also crucial to use resources efficiently and to scale frugally. In this context, features like filtering, firewalling, or caching are redundant and may consume resources that would be better spent on scaling. Those features also work better outside the K8s/Operator cluster: the closer they sit to the application, the better they serve it.

On that note, we must always remember that each K8s/Operator cluster needs to be seen as a single service, not as a real cluster. In short, each cluster is, in reality, a single database with high availability and other functionality built in.

Anyhow, we are here to talk about proxies. With that clear mandate in mind, we need to identify which product allows our K8s/Operator solution to:

  • Scale the number of incoming connections as far as possible
  • Serve requests with the highest efficiency
  • Consume as few resources as possible

The environment

To identify the above points, I have simulated a possible K8s/Operator environment, creating:

  • One powerful application node, where I run sysbench read-only tests, scaling from two to 4096 threads (Type c5.4xlarge)
  • Three mid-size data nodes, each with several gigabytes of data, running MySQL with Group Replication (Type m5.xlarge)
  • One proxy node running on a resource-limited box (Type t2.micro)

The tests

We will have very simple test cases. The first one is meant to define the baseline, identifying the moment when we hit the first level of saturation due to the number of connections. In this case, we will increase the number of connections while keeping the number of operations low.

The second test will define how well the increasing load is served inside the previously identified range. 

For documentation, the sysbench commands are:

Test1

sysbench ./src/lua/windmills/oltp_read.lua  --db-driver=mysql --tables=200 --table_size=1000000 
 --rand-type=zipfian --rand-zipfian-exp=0 --skip_trx=true  --report-interval=1 --mysql-ignore-errors=all 
--mysql_storage_engine=innodb --auto_inc=off --histogram  --stats_format=csv --db-ps-mode=disable --point-selects=50 
--reconnect=10 --range-selects=true --rate=100 --threads=<#Threads from 2 to 4096> --time=1200 run

Test2

sysbench ./src/lua/windmills/oltp_read.lua  --mysql-host=<host> --mysql-port=<port> --mysql-user=<user> 
--mysql-password=<pw> --mysql-db=<schema> --db-driver=mysql --tables=200 --table_size=1000000  --rand-type=zipfian 
--rand-zipfian-exp=0 --skip_trx=true  --report-interval=1 --mysql-ignore-errors=all --mysql_storage_engine=innodb 
--auto_inc=off --histogram --table_name=<tablename>  --stats_format=csv --db-ps-mode=disable --point-selects=50 
--reconnect=10 --range-selects=true --threads=<#Threads from 2 to 4096> --time=1200 run

Results

Test 1

As indicated here, I was looking to identify when the first proxy would reach a point where it could no longer cope. The load is all in creating and serving the connections, while the number of operations is capped at 100.

As you can see, and as I was expecting, the three proxies behaved more or less the same, serving the same number of operations (they were capped, so why not), until they weren’t.

MySQL Router, after 2048 connections, could not serve anything more.

NOTE: MySQL Router actually stopped working at 1024 threads, but with version 8.0.32 I enabled the connection_sharing feature, which allows it to go a bit further.

Let us also take a look at the latency:

latency threads

Here the situation starts to get a little more complicated. MySQL Router has the highest latency no matter what. However, HAProxy and ProxySQL show interesting behavior: HAProxy performs better with a low number of connections, while ProxySQL performs better when a high number of connections is in place.

This is due to multiplexing and the very efficient way ProxySQL deals with high load.

Everything has a cost:

HAProxy is definitely using fewer user CPU resources than ProxySQL or MySQL Router …

HAProxy

… we can also notice that HAProxy barely reaches, on average, a CPU load of 1.5, while ProxySQL sits at 2.50 and MySQL Router at around 2.

To be honest, I was expecting something like this, given that ProxySQL needs to handle the connections on top of the basic routing. What was instead a surprise was MySQL Router: why does it have a higher load?

Brief summary

This test highlights that HAProxy and ProxySQL can reach a number of connections higher than the slowest runner in the game (MySQL Router). It is also clear that traffic under a high number of connections is served better by ProxySQL, but that requires more resources.

Test 2

When the going gets tough, the tough get going

Let’s remove the --rate limitation and see what will happen.

mysql events

The scenario changes drastically under load. We can see how HAProxy keeps serving connections and allows more operations to be executed for the whole test. ProxySQL is right behind it and behaves quite well up to 128 threads, then it just collapses.

MySQL Router never takes off; it always stays below 1k reads/second, while HAProxy served 8.2k and ProxySQL 6.6k.

mysql latency

Looking at the latency, we can see that HAProxy’s latency increased gradually, as expected, while ProxySQL’s and MySQL Router’s just shot up from 256 threads on.

Note that both ProxySQL and MySQL Router could not complete the tests with 4096 threads.

ProxySQL and MySQL Router

Why? HAProxy always stays below 50% CPU, no matter the increasing number of threads/connections, scaling the load very efficiently. MySQL Router reached the saturation point almost immediately, affected by both the number of threads/connections and the number of operations. That was unexpected, given that MySQL Router has no level 7 capability.

Finally, ProxySQL, which was working fine up to a certain limit, reached its saturation point and could not serve the load. I say load because ProxySQL is a level 7 proxy and is aware of the content of the load, so, on top of multiplexing, additional resource consumption was expected.

proxysql usage

Here we just have a clear confirmation of what was said above, with MySQL Router reaching 100% CPU utilization at just 16 threads, and ProxySQL much later at 256 threads.

Brief summary

HAProxy comes out as the champion in this test; there is no doubt that it could scale with the increasing number of connections without being significantly affected by the load generated by the requests. Its lower resource consumption also indicates room for even more scaling.

ProxySQL was penalized by the limited resources, but that was the game: we had to get the most out of the little available. This test indicates that ProxySQL is not the optimal choice inside the Operator; it is the wrong choice when low resource consumption and scalability are a must.

MySQL Router was never in the game. Short of a serious refactoring, MySQL Router is designed for very limited scalability; as such, the only way to adopt it is to run many of them at the application-node level. Using it close to the data nodes in a centralized position is a mistake.

Conclusions

I started by showing an image of how the MySQL service is organized, and I want to close by showing the variation that, for me, should be considered the default approach:

MySQL service is organized

This highlights that we must always choose the right tool for the job. 

The proxy, in architectures involving MySQL/Percona Server for MySQL/Percona XtraDB Cluster, is a crucial element for the scalability of the cluster, whether on K8s or not. Choosing the one that serves us best is important, and that can sometimes be ProxySQL over HAProxy.

However, when talking about K8s and Operators, we must recognize the need to optimize resource usage for the specific service. In that context there is no debate: HAProxy is the best solution and the one we should go with.

My final observation is about MySQL Router (aka MySQL Proxy). 

Unless the product is significantly refactored, at the moment it is not even close to what the other two can do. From the tests done so far, it needs a complete reshaping, starting by identifying why it is affected so much more by the load coming from the queries than by the load coming from the connections.

Great MySQL to everyone. 

Percona Database Performance Blog

How Has the Hunting Rifle Evolved Over the Last 300 Years?

https://www.alloutdoor.com/wp-content/uploads/2023/03/How-Has-the-Hunting-Rifle-Evolved-Over-the-Last-300-Years-Img-1.jpg

Modern humans have been around for thousands of years, so guns are a relatively new tool. The first firearm goes back to around the 10th century in China, where fire lances used bamboo and gunpowder to launch spears. Now, there are numerous types of guns for various recreational uses, with hunting among the top activities. Rifles have been the gun of choice for hunters for nearly 300 years. How did the modern hunting rifle make it here?

1. Pennsylvania Rifle

Nowadays, the standard for hunting rifles centers around models like the current hunting rifle from Christensen Arms. But to understand rifles in 2023, you’ll have to go back to the early 1700s.

North America was growing with European settlers from England, France, Spain and more. It was the Germans, though, who inspired the first American rifle — the Pennsylvania rifle. This firearm was an upgrade over the musket because it had a much better range. The Pennsylvania rifle drew inspiration from the jäger rifles used in German hunting, and it started at around 54 inches long but could stretch to over 6 feet.

2. Medad Hills’ Long Rifle

The Pennsylvania rifle — also known as the Kentucky rifle — was successful in the American colonies and led to similar models in the 18th century. For example, gunsmith Medad Hills crafted fowling pieces for hunting. Hills produced guns in Connecticut and helped hunters by creating long-barreled guns for increased accuracy. He later served in the Revolutionary War and made muskets for Connecticut in 1776.

3. Plains Rifles

After the Revolutionary War, rifle manufacturing began to take off in the United States, starting with the plains rifles. The new Americans began to expand westward and used plains rifles on the flat lands. Also known as the Hawken rifle, the plains rifle was shorter than its Pennsylvania predecessor but had a larger caliber, typically starting at .50. They were popular among hunters and trappers who needed to take down large animals from a distance.

4. Winchester 1876

A few decades later, the country broke out into a civil war. This era used military rifles from manufacturers like Springfield. However, it wasn’t until after the war that you’d see the hunting rifle that would inspire hunting rifles for decades.

Winchester was critical to late 19th-century rifles, starting with its 1876 model. This rifle was among the most high-powered yet for hunters. The Winchester 1876 was among the earliest repeaters, and its sizable ammunition gave it the power needed to take down large game like buffalo.

5. Winchester 1895

The success of the 1876 model led Winchester to create the 1895. This rifle was a repeater that featured smokeless rounds. Unlike its predecessors, the 1895 model was innovative because it included a box magazine below the action. It may be less powerful than models today, but it was incredibly potent for the time.

6. Winchester Model 70

Fast forward a bit to 1936. The country was in the Great Depression, but Winchester still produced excellent hunting rifles. Hunters called Winchester’s Model 70, which took inspiration from the German manufacturer Mauser, the rifleman’s rifle. Winchester made the rifle with a controlled feed until 1964 before switching to a push feed, and it still makes variations of the Model 70 today.

7. Marlin 336 (1948)

After World War II, Marlin introduced the 336 model as a successor to its 1893 rifle. It’s a lever-action rifle your grandfather may have owned for deer hunting. Its specs may vary, but you’ll typically see it in .30 or .35 caliber. The barrel can be as short as 20 inches or extend to 24 inches long. Marlin no longer makes the 336, but Ruger, which purchased Marlin, plans to bring it back in 2023.

8. Remington 700 (1962)

1962 saw what could be the best hunting rifle ever made — the Remington Model 700. This rifle is the most popular bolt-action firearm, with over five million sold since its inception. In the last 60 years, Remington has made numerous variations to keep up with modern demand. This model is famous for its pair of dual-opposed lugs and a recessed bolt face.

The Remington 700 became the hunting rifle of choice for many across America, leading to its adoption by the U.S. military and law enforcement. Remington also makes 700s for the police — the 700P. The manufacturer also builds the M24 and M40 sniper rifles for the military based on the 700.

The Evolution of Hunting Rifles

Rifles have come a long way since the beginning. Imagine picking up a Pennsylvania rifle and comparing it to your Mauser 18 Savanna. The hunting rifle helped settlers and early Americans hunt and sustain themselves and the evolution has led to the great rifles you know today, like the Remington 700.


AllOutdoor.com

We Didn’t Start the Fire: Heavy Metal Edition

https://theawesomer.com/photos/2023/03/we_didnt_start_the_fire_leo_moracchioli_t.jpg


Wheel of Fortune, Sally Ride, heavy metal suicide. Leo Moracchioli didn’t start the fire, but he did an impressive job covering Billy Joel’s wordy 1989 hit, adding fuel to the inferno with his hard-edged guitar and gravelly vocals. If you’re waiting for Joel to update the song for the 21st century, don’t hold your breath.

The Awesomer

Laravel Open Weather Package


README

Packagist
GitHub stars
GitHub forks
GitHub issues
GitHub license

Laravel OpenWeather API (openweather-laravel-api) is a Laravel package to connect to the Open Weather Map APIs (https://openweathermap.org/api) and easily access the free API services (current weather, weather forecast, weather history).

Supported APIs

Installation

Install the package through Composer.
On the command line:

composer require rakibdevs/openweather-laravel-api

Configuration

If you are using Laravel 7 or above, there is no need to add the provider manually. Otherwise, add the following to the providers and aliases arrays in config/app.php:

'providers' => [
    // ...
    RakibDevs\Weather\WeatherServiceProvider::class,
],
'aliases' => [
    //...
    'Weather' => RakibDevs\Weather\Weather::class,	
];

Add your API key and desired language to .env:

OPENWAETHER_API_KEY=
OPENWAETHER_API_LANG=en

Publish the required package configuration file using the artisan command:

	$ php artisan vendor:publish

Edit the config/openweather.php file and modify the api_key value with your Open Weather Map api key.

return [
    'api_key'                 => env('OPENWAETHER_API_KEY', ''),
    'onecall_api_version'     => '2.5',
    'historical_api_version'  => '2.5',
    'forecast_api_version'    => '2.5',
    'polution_api_version'    => '2.5',
    'geo_api_version'         => '1.0',
    'lang'                    => env('OPENWAETHER_API_LANG', 'en'),
    'date_format'             => 'm/d/Y',
    'time_format'             => 'h:i A',
    'day_format'              => 'l',
    'temp_format'             => 'c'      // c for celsius, f for fahrenheit, k for kelvin
];

You can now configure the API version from the config, since the One Call API has been upgraded to version 3.0. Please set the API version available to your account in the config.
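
For example, if your OpenWeather key has access to One Call 3.0, the change would presumably be limited to that single key in config/openweather.php (a sketch based on the config shown above; the other keys stay untouched):

'onecall_api_version' => '3.0',   // was '2.5'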

Usage

Here you can see some examples of just how simple this package is to use.

use RakibDevs\Weather\Weather;

$wt = new Weather();

$info = $wt->getCurrentByCity('dhaka');    // Get current weather by city name

Access current weather data for any location on Earth, including over 200,000 cities! OpenWeather collects and processes weather data from different sources such as global and local weather models, satellites, radars and a vast network of weather stations.

// By city name
$info = $wt->getCurrentByCity('dhaka'); 

// By city ID - download list of city id here http://bulk.openweathermap.org/sample/
$info = $wt->getCurrentByCity(1185241); 

// By Zip Code - string with country code 
$info = $wt->getCurrentByZip('94040,us');  // If no country code is specified, 'us' is the default

// By coordinates : latitude and longitude
$info = $wt->getCurrentByCord(23.7104, 90.4074);

Output:

{
  "coord": {
    "lon": 90.4074,
    "lat": 23.7104
  },
  "weather": [
    {
      "id": 721,
      "main": "Haze",
      "description": "haze",
      "icon": "50d"
    }
  ],
  "base": "stations",
  "main": {
    "temp": 26,
    "feels_like": 25.42,
    "temp_min": 26,
    "temp_max": 26,
    "pressure": 1009,
    "humidity": 57
  },
  "visibility": 3500,
  "wind": {
    "speed": 4.12,
    "deg": 280
  },
  "clouds": {
    "all": 85
  },
  "dt": "01/09/2021 04:16 PM",
  "sys": {
    "type": 1,
    "id": 9145,
    "country": "BD",
    "sunrise": "01/09/2021 06:42 AM",
    "sunset": "01/09/2021 05:28 PM"
  },
  "timezone": 21600,
  "id": 1185241,
  "name": "Dhaka",
  "cod": 200
}
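
To pull values out of the response, here is a minimal sketch under one assumption: the package hands back the decoded payload as an object mirroring the dump above (switch to array access if your version returns arrays).

use RakibDevs\Weather\Weather;

$wt = new Weather();

$info = $wt->getCurrentByCity('dhaka');

// Field names follow the output shown above.
$city        = $info->name;                     // "Dhaka"
$temperature = $info->main->temp;               // 26 (unit follows temp_format in config)
$conditions  = $info->weather[0]->description;  // "haze"
$windSpeed   = $info->wind->speed;              // 4.12

echo "{$city}: {$temperature}°, {$conditions}, wind {$windSpeed} m/s";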

Make just one API call and get all your essential weather data for a specific location with OpenWeather One Call API.

// By coordinates : latitude and longitude
$info = $wt->getOneCallByCord(23.7104, 90.4074);

4 day forecast is available at any location or city. It includes weather forecast data with 3-hour step.

// By city name
$info = $wt->get3HourlyByCity('dhaka'); 

// By city ID - download list of city id here http://bulk.openweathermap.org/sample/
$info = $wt->get3HourlyByCity(1185241); 

// By Zip Code - string with country code 
$info = $wt->get3HourlyByZip('94040,us');  // If no country code is specified, 'us' is the default

// By coordinates : latitude and longitude
$info = $wt->get3HourlyByCord(23.7104, 90.4074);
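
As a rough sketch only, assuming the forecast payload follows OpenWeather’s standard 3-hour-step shape (entries under a list array) and is exposed as an object by the package, you could walk the steps like this:

$forecast = $wt->get3HourlyByCity('dhaka');

// Each entry in "list" is one 3-hour step with its own timestamp and readings.
foreach ($forecast->list as $step) {
    echo $step->dt . ' -> ' . $step->main->temp . '° (' . $step->weather[0]->description . ')' . PHP_EOL;
}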

Get access to historical weather data for the previous 5 days.

// By coordinates : latitude, longitude and date
$info = $wt->getHistoryByCord(23.7104, 90.4074, '2020-01-09');

Air Pollution API provides current, forecast and historical air pollution data for any coordinates on the globe

Besides basic Air Quality Index, the API returns data about polluting gases, such as Carbon monoxide (CO), Nitrogen monoxide (NO), Nitrogen dioxide (NO2), Ozone (O3), Sulphur dioxide (SO2), Ammonia (NH3), and particulates (PM2.5 and PM10).

Air pollution forecast is available for 5 days with hourly granularity. Historical data is accessible from 27th November 2020.

// By coordinates : latitude, longitude and date
$info = $wt->getAirPollutionByCord(23.7104, 90.4074);
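
The standard Air Pollution payload nests readings under a list array, with an overall index under main.aqi (1 = Good through 5 = Very Poor) and per-gas concentrations under components. A minimal sketch, assuming the package exposes that structure as an object:

$pollution = $wt->getAirPollutionByCord(23.7104, 90.4074);

$current = $pollution->list[0];

echo 'AQI: '   . $current->main->aqi . PHP_EOL;                     // overall air quality index
echo 'PM2.5: ' . $current->components->pm2_5 . ' µg/m³' . PHP_EOL;  // fine particulates
echo 'NO2: '   . $current->components->no2 . ' µg/m³' . PHP_EOL;    // nitrogen dioxide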

Geocoding API is a simple tool that we have developed to ease the search for locations while working with geographic names and coordinates.

  • Direct geocoding converts the specified name of a location or area into exact geographical coordinates.
  • Reverse geocoding converts geographical coordinates into the names of nearby locations.

// By city name
$info = $wt->getGeoByCity('dhaka');

// By coordinates : latitude and longitude (reverse geocoding)
$info = $wt->getGeoByCord(23.7104, 90.4074);
With a free OpenWeather API key, usage is limited to the following (see the caching sketch after this list):

  • 60 calls/minute
  • 1,000,000 calls/month
  • 1,000 calls/day when using One Call requests
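
One low-effort way to stay inside those limits is to cache responses for a short period instead of calling the API on every request. This is only a sketch: it wraps the package’s getCurrentByCity() call (shown earlier) in Laravel’s Cache::remember() helper, and the cache key and 10-minute TTL are arbitrary choices, not anything prescribed by the package.

use Illuminate\Support\Facades\Cache;
use RakibDevs\Weather\Weather;

// Cache the current-weather response per city for 10 minutes to cut down on API calls.
function currentWeatherCached(string $city)
{
    return Cache::remember("openweather.current.{$city}", now()->addMinutes(10), function () use ($city) {
        return (new Weather())->getCurrentByCity($city);
    });
}

$info = currentWeatherCached('dhaka');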

License

Laravel Open Weather API is licensed under The MIT License (MIT).

Laravel News Links

Valet 4.0 is released

https://laravelnews.s3.amazonaws.com/images/laravel-valet-version-four.png

Valet 4 is officially released! Let’s look into what v4 offers and how you can upgrade your local install today.

The backdrop

Valet was originally introduced in May 2016 with this incredible video. Valet v2 was released soon after, bringing about the move from Caddy to Nginx. But after that, development on Valet slowed; as Taylor has often pointed out, “at that point, Valet was feature complete.”

However, when I picked up maintenance of Valet a few years back, there were two things I noticed: first, that many people needed different versions of PHP for their different sites; and second, that the miscellaneous features and bug fixes added over the years had made the codebase a bit difficult to reason about at times.

Valet v3 was released in March 2022, with the primary focus on adding support for multiple versions of PHP running in parallel on the same machine.

And now, we’re looking at Valet v4.

What’s new in Valet 4?

The most important change in Valet 4 is something you can’t even see from the outside: the internals of the project have been re-architected and tested heavily. Just to be clear, they’ve been re-architected back toward the style of simplicity Taylor and Adam’s original code had, but they’re now covered with all forms of unit and integration tests, and the changes made since Valet 2 are now much better integrated.

What does that mean?

Valet 4 is the most stable, easiest-to-debug, and easiest-to-fix version of Valet yet.

New features in Valet 4

There are a few user-facing new features:

  • valet status command: If you run valet status, you’ll get a table showing the “health” of a few important aspects of your Valet installation. This is helpful when you’re debugging and, like any good CLI tool, it also returns success or failure codes that other CLI tools can consume.
  • Upgrades to ngrok: If you use ngrok to share your sites, older versions of Valet bundled ngrok as an install. Now, Valet will prompt you to install ngrok through Homebrew, allowing you to have one universal version installed, and allowing you to keep it up to date as you please.
  • Expose as a share option: If you use Expose to share your sites, it’s now integrated into Valet! Run valet share-tool expose and, if you don’t have Expose installed, it’ll prompt you to install it. Once you’ve set up your Expose token, you’re ready to share using the same valet share command you’re familiar with.

Upgrade notes

If you’re upgrading from Valet 3, here’s my preferred way to upgrade:

  1. Edit your ~/.composer/composer.json file and update your Valet requirement to "^4.0"
  2. Update: composer global update laravel/valet
  3. Run valet install

Make sure you run valet install, as it’ll check your system’s compatibility and upgrade some configuration files for you.

Custom drivers

If you have any custom drivers, you’ll want to update them to match the new syntax (basically, drivers are now namespaced and have type hints and return types).
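
For illustration only, a hypothetical custom driver updated to that style might look roughly like the sketch below. The base-class namespace and method signatures here are assumptions, so check them against the SampleValetDriver that ships with Valet 4 before copying anything.

<?php

namespace Valet\Drivers\Custom;           // assumed namespace; verify against SampleValetDriver

use Valet\Drivers\ValetDriver;            // assumed base-class location in Valet 4

class MyAppValetDriver extends ValetDriver
{
    // Decide whether this driver should serve the given site.
    public function serves(string $sitePath, string $siteName, string $uri): bool
    {
        return file_exists($sitePath.'/myapp.config.php');
    }

    // Return the path to a static file, or false if the request isn't for one.
    public function isStaticFile(string $sitePath, string $siteName, string $uri)
    {
        $path = $sitePath.'/public'.$uri;

        return file_exists($path) && ! is_dir($path) ? $path : false;
    }

    // Resolve the front controller that should handle the request.
    public function frontControllerPath(string $sitePath, string $siteName, string $uri): ?string
    {
        return $sitePath.'/public/index.php';
    }
}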

.valetphprc

If you use .valetphprc to define your sites’ PHP versions, you’ll want to rename those files to .valetrc and change their contents; .valetphprc files contain just a Homebrew PHP formula (e.g. php@8.1), but the new .valetrc files are broader config files, so you’ll need to prefix the formula with php=.

So if your project had this .valetphprc file:

php@8.1

You’ll want to rename it to .valetrc and update its contents to this:

php=php@8.1

Backwards compatibility: PHP 7.1-7.4

Valet 4 requires PHP 8.0+ to be installed on your system via Homebrew. As I mentioned already, you can use Valet’s isolation feature to set individual sites to use older versions of PHP, back to 7.1.

However, if you have a reason you need to use PHP 7.1-7.4 as your primary linked PHP (meaning if you just type php -v you see something between 7.1 and 8.0), you can do that! Just make sure that you have a modern version of PHP installed on your machine, and Valet will use that version to run its internal commands.

However, a quick warning: If you use Valet 4 and your primary linked version of PHP is lower than PHP 8, all of your local Valet CLI commands will run a bit more slowly, as they have to find your modern PHP install and proxy their calls through it.

The future

That’s it! The primary goal of Valet 4 is stability, but it also opens up some great new options for the future. First, the .valetrc file is much more powerful than .valetphprc was, and we can make it a lot more configurable. And second, I dropped a concept called Extensions that was basically entirely unused, with the hope of building a plugin system sometime in the near future.

If you followed my journey of rebuilding Valet for v4 on Twitter, you might have seen that I attempted to make it work on Linux. Sadly, that wasn’t successful, but I still have dreams of one day attempting it again. No promises… but it’s still a dream!

I hope you all love Valet 4. Enjoy!

Laravel News

A menu for the manly man

https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgMbtjUuceQaJ_JiqGW2aJ7bzET2TPRNs-7O888yvYKFxk8rgzPuhjVvS1eZPFQ7wU55UJIqYmKSoW2iC-CiucemUrlb_T_TTOMLPkEG-fOsKu9tO3hOlHeLWM8xvcFSVIgPSAfQSBchQTrmm6o7MCFkzT-90PlwsI9WVx_xL62ZqR9v5FAjclOFtsO/w400-h254/Elephant%20stew.png

 

Found on Gab (clickit to biggit):

I had to laugh at the instructions.

I’ve been present when an elephant that was destroying crops was shot near an African village.  The villagers swarmed the carcass, armed with machetes, axes and other edged instruments, and proceeded to have a gigantic meat-eating binge that lasted three days.  They dismembered that carcass from inside and out, literally:  some people crawled inside the belly cavity and cut their way out, while others stood on the ribs and cut their way in.  When an errant machete blade from one side or the other cut into someone on the other side of the skin, there were screams of outrage and anger;  but mostly they were too busy eating (yes, even raw meat!) to care.

At the end of three days, all that was left were the remains of the entrails, and the huge bones of the elephant skeleton.  Sixty-odd villagers had eaten until they bulged (literally):  their stomachs were so distended I was surprised they could still move.  Of course, in African heat, with no refrigeration available, the meat had already spoiled by the third day, but they ate and ate and ate so as to waste as little as possible of the precious nutrition deposited on their doorsteps by the Game Department.

Perhaps they should have tried this recipe . . .

Peter

Bayou Renaissance Man

Coiling Molten Steel Rod

https://theawesomer.com/photos/2022/03/molten_steel_coiling_t.jpg


Redditor arcedup works in a steel mill and wanted to test out the video capabilities of their phone. While they were at it, they captured this wonderfully satisfying clip of molten hot steel being turned into a coiled rod. Is it wrong that we want to make the world’s largest Slinky with it?

The Awesomer