Defense Distributed Once Again Proves Gun Control Obsolete With A 0% Pistol

https://www.ammoland.com/wp-content/uploads/2023/06/0percentpistol-500×281.jpg

Defense Distributed Once Again Proves Gun Control Obsolete With A 0% Pistol

AUSTIN, Texas — In 2013, Cody Wilson printed the Liberator, the first 3D-printed firearm. His goal was simple: to make all gun control obsolete.

Wilson hoped that the gun world would embrace 3D printing and other methods of getting around gun control, and his dream came to fruition. Talented gun designers used computer-aided design (CAD) software to design and print firearms at home on 3D printers costing as little as $100. At the same time, companies like Polymer80 sprang up to sell kits that let home users finish a piece of plastic into an unserialized firearm frame.

The revolution prompted the Biden administration to order the Bureau of Alcohol, Tobacco, Firearms and Explosives (ATF) to write new rules to prevent the dissemination of these kits, which Biden and other anti-gun zealots demonized as “ghost guns.” The ATF rolled out a rule making it a crime to sell a frame blank with a jig, but the market adapted, once again forcing the ATF’s hand. Two days after Christmas in 2022, the ATF gave anti-gun groups a belated gift by unilaterally declaring frame blanks to be firearms. Giffords, Brady, and Everytown celebrated the closing of the so-called “ghost gun loophole” and the banning of “tools of criminals.”

Their victory would be short-lived as the injunctions from Federal courts in Texas started rolling in. First, it was 80% Arms, then Wilson’s Defense Distributed, and finally, Polymer80, meaning the original kits were back on the market. Although most of the industry was now back to selling the original product, liberal states started banning the sale of unfinished firearm frames and receivers.

Defense Distributed would take these states head-on by releasing a 0% AR-15 lower receiver for the company’s Ghost Gunner, a desktop CNC machine. The 0% lower was a hit with the gun-building community. All the user had to do was mill out the middle section of the lower and attach it to a top piece and a lower portion. Even if a state were to ban 80% AR-15 lowers, it would be impossible to ban a block of aluminum, although states like California have tried to ban the Ghost Gunner itself.

Wilson and Ghost Gunner are now tackling handguns by releasing a 0% handgun fire control unit (FCU).

All the user has to do is mill out the FCU using the Ghost Gunner 3 (aluminum) or the Ghost Gunner 3S (stainless steel) and install Gen 3 Glock parts. The user can then print the chassis on a 3D printer using the files supplied by Defense Distributed or buy a pre-printed chassis from the Ghost Gunner website and add a complete slide and barrel.

Mr. Wilson, who has faced some controversy over a relationship with a 16-year-old girl who lied about her age and claimed to be 18, recently had the case against him dropped, which frees him up to keep attacking ATF regulations.

“This is a homecoming ten years in the making,” Wilson told AmmoLand News. “The 0% pistol allows anyone to make a Glock type pistol in their own home with just a Ghost Gunner and a 3D-printer.”

This move by Defense Distributed, along with advancements in 3D printing, spells the downfall of gun control. No matter what bans the government institutes, the market will adapt. With the rise of cheap 3D printers and machines like the Ghost Gunner, it has never been easier to circumvent gun control laws.

Let me make it clear: 3D printing and CNC machining firearms are not illegal on the federal level. I also do not believe there is the will in Congress to attack that avenue, because doing so would highlight how technology is empowering the people. The machines are available everywhere, from Amazon to Micro Center. The files live in cyberspace, where anyone can download them. Even if the files were banned (a huge First Amendment legal challenge), they would still be traded anonymously on the Dark Web and through VPN services.

The signal cannot be stopped. The internet has ushered in the fall of gun control, and there is nothing the Biden administration or states like California can do about it.


About John Crump

John is an NRA instructor and a constitutional activist. He has written about firearms and the Constitution and has interviewed people from all walks of life. John lives in Northern Virginia with his wife and sons and can be followed on Twitter at @crumpyss, or at www.crumpy.com.

John Crump

AmmoLand Shooting Sports News

Working with third-party services in Laravel

https://laravelnews.s3.amazonaws.com/images/working-with-third-party-services-in-laravel.png

So a little over two years ago, I wrote a tutorial on how you should work with third-party services in Laravel. To this day, it is the most visited page on my website. However, things have changed over the last two years, and I decided to approach this topic again.

So I have been working with third-party services for so long that I cannot remember a time when I wasn’t. As a junior developer, I integrated APIs into platforms like Joomla, Magento, and WordPress. Now I mainly integrate them into my Laravel applications to extend business logic by leaning on other services.

This tutorial will describe how I typically approach integrating with an API today. If you have read my previous tutorial, keep reading as a few things have changed – for what I consider good reasons.

Let’s start with an API to integrate with. My original tutorial integrated with PingPing, an excellent uptime monitoring solution from the Laravel community. However, I want to try a different API this time.

For this tutorial, we will use the Planetscale API. Planetscale is an incredible database service I use in my day job to get read-and-write operations closer to my users.

What will our integration do? Imagine we have an application that allows us to manage our infrastructure. Our servers run through Laravel Forge, and our database is over on Planetscale. There is no clean way to manage this workflow, so we created our own. For this, we need an integration or two.

Initially, I kept my integrations under app/Services; however, as my applications have grown larger and more complicated, I have needed the Services namespace for internal services, which led to a polluted namespace. I have since moved my integrations to app/Http/Integrations. This makes sense and is a trick I picked up from Saloon by Sam Carré.

Now I could use Saloon for my API integration, but I wanted to explain how I do it without a package. If you need an API integration in 2023, I highly recommend using Saloon. It is beyond amazing!

So, let’s start by creating a directory for our integration. You can use the following bash command:

mkdir app/Http/Integrations/Planetscale

Once we have the Planetscale directory, we need a way to connect to the API. Another naming convention I picked up from the Saloon library is to treat these base classes as connectors, as their purpose is to allow you to connect to a specific API or third party.

Create a new class called PlanetscaleConnector in the app/Http/Integrations/Planetscale directory, and we can flesh out what this class needs, which will be a lot of fun.

So we must register this class with our container so we can resolve it or build a facade around it. We could register it the “long” way in a service provider – but my latest approach is to have these connectors register themselves – kind of …

declare(strict_types=1);

namespace App\Http\Integrations\Planetscale;

use Illuminate\Contracts\Foundation\Application;
use Illuminate\Http\Client\PendingRequest;
use Illuminate\Support\Facades\Http;

final readonly class PlanetscaleConnector
{
    public function __construct(
        private PendingRequest $request,
    ) {}

    public static function register(Application $app): void
    {
        $app->bind(
            abstract: PlanetscaleConnector::class,
            concrete: fn () => new PlanetscaleConnector(
                request: Http::baseUrl(
                    url: '',
                )->timeout(
                    seconds: 15,
                )->withHeaders(
                    headers: [],
                )->asJson()->acceptJson(),
            ),
        );
    }
}

So the idea here is that all the information about how this class is registered into the container lives within the class itself. All the service provider needs to do is call the static register method on the class! This has saved me so much time when integrating with many APIs because I don’t have to hunt through providers for the correct binding among many others. I just go to the class in question, and everything is right in front of me.
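
For context, a minimal sketch of the provider side might look like this, assuming the binding is wired up in the default AppServiceProvider (that choice of provider is my assumption, not part of the tutorial):

declare(strict_types=1);

namespace App\Providers;

use App\Http\Integrations\Planetscale\PlanetscaleConnector;
use Illuminate\Support\ServiceProvider;

final class AppServiceProvider extends ServiceProvider
{
    public function register(): void
    {
        // Each connector knows how to bind itself into the container.
        PlanetscaleConnector::register($this->app);
    }
}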

You will notice that, currently, nothing is being passed to the base URL or the headers on the request. Let’s fix that next. You can get these values from your Planetscale account.

Create the following records in your .env file.

PLANETSCALE_SERVICE_ID="your-service-id-goes-here"

PLANETSCALE_SERVICE_TOKEN="your-token-goes-here"

PLANETSCALE_URL="https://api.planetscale.com/v1"

Next, these need to be pulled into the application’s configuration. These all belong in config/services.php as this is where third-party services are typically configured.

return [

    // the rest of your services config

    'planetscale' => [
        'id' => env('PLANETSCALE_SERVICE_ID'),
        'token' => env('PLANETSCALE_SERVICE_TOKEN'),
        'url' => env('PLANETSCALE_URL'),
    ],

];

Now we can use these in our PlanetscaleConnector under the register method.

declare(strict_types=1);

namespace App\Http\Integrations\Planetscale;

use Illuminate\Contracts\Foundation\Application;
use Illuminate\Http\Client\PendingRequest;
use Illuminate\Support\Facades\Http;

final readonly class PlanetscaleConnector
{
    public function __construct(
        private PendingRequest $request,
    ) {}

    public static function register(Application $app): void
    {
        $app->bind(
            abstract: PlanetscaleConnector::class,
            concrete: fn () => new PlanetscaleConnector(
                request: Http::baseUrl(
                    url: config('services.planetscale.url'),
                )->timeout(
                    seconds: 15,
                )->withHeaders(
                    headers: [
                        'Authorization' => config('services.planetscale.id') . ':' . config('services.planetscale.token'),
                    ],
                )->asJson()->acceptJson(),
            ),
        );
    }
}

You need to send tokens to Planetscale in the format service-id:service-token, so we cannot use the default withToken method, as it doesn’t allow us to customize the header the way we need.

Now that we have a basic class created, we can start to think about the extent of our integration. We also have to consider this when creating our service token, so that it has the correct permissions. In our application, we want to be able to do the following:
List databases.
List database regions.
List database backups.
Create database backup.
Delete database backup.

So, we can look at grouping these into two categories:
Databases.
Backups.

Let’s add two new methods to our connector to create what we need:

declare(strict_types=1);

namespace App\Http\Integrations\Planetscale;

use App\Http\Integrations\Planetscale\Resources\BackupResource;
use App\Http\Integrations\Planetscale\Resources\DatabaseResource;
use Illuminate\Contracts\Foundation\Application;
use Illuminate\Http\Client\PendingRequest;
use Illuminate\Support\Facades\Http;

final readonly class PlanetscaleConnector
{
    public function __construct(
        private PendingRequest $request,
    ) {}

    public function databases(): DatabaseResource
    {
        return new DatabaseResource(
            connector: $this,
        );
    }

    public function backups(): BackupResource
    {
        return new BackupResource(
            connector: $this,
        );
    }

    public static function register(Application $app): void
    {
        $app->bind(
            abstract: PlanetscaleConnector::class,
            concrete: fn () => new PlanetscaleConnector(
                request: Http::baseUrl(
                    url: config('services.planetscale.url'),
                )->timeout(
                    seconds: 15,
                )->withHeaders(
                    headers: [
                        'Authorization' => config('services.planetscale.id') . ':' . config('services.planetscale.token'),
                    ],
                )->asJson()->acceptJson(),
            ),
        );
    }
}

As you can see, we created two new methods, databases and backups. These will return new resource classes, passing through the connector. The logic can now be implemented in the resource classes, but we must add another method to our connector later.

<?php

declare(strict_types=1);

namespace App\Http\Integrations\Planetscale\Resources;

use App\Http\Integrations\Planetscale\PlanetscaleConnector;

final readonly class DatabaseResource
{
    public function __construct(
        private PlanetscaleConnector $connector,
    ) {}

    public function list()
    {
        //
    }

    public function regions()
    {
        //
    }
}

This is our DatabaseResource; we have now stubbed out the methods we want to implement. You can do the same thing for the BackupResource. It will look somewhat similar.
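
For completeness, a minimal sketch of that BackupResource stub, with methods guessed from the backup operations listed earlier, could look like this:

<?php

declare(strict_types=1);

namespace App\Http\Integrations\Planetscale\Resources;

use App\Http\Integrations\Planetscale\PlanetscaleConnector;

final readonly class BackupResource
{
    public function __construct(
        private PlanetscaleConnector $connector,
    ) {}

    public function list()
    {
        //
    }

    public function create()
    {
        //
    }

    public function delete()
    {
        //
    }
}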

The listing of databases can return paginated results; however, I will not deal with pagination here – for that I would lean on Saloon, as its implementation of paginated results is fantastic. Before we fill out the DatabaseResource, we need to add one more method to the PlanetscaleConnector so it can send requests nicely. For this, I am using my package juststeveking/http-helpers, which has an enum for all the typical HTTP methods I use.

// Add to PlanetscaleConnector (requires: use Illuminate\Http\Client\Response;
// and use JustSteveKing\HttpHelpers\Enums\Method;)
public function send(Method $method, string $uri, array $options = []): Response
{
    return $this->request->send(
        method: $method->value,
        url: $uri,
        options: $options,
    )->throw();
}

Now we can go back to our DatabaseResource and start filling in the logic for the list method.

declare(strict_types=1);

namespace App\Http\Integrations\Planetscale\Resources;

use App\Http\Integrations\Planetscale\PlanetscaleConnector;
use Illuminate\Support\Collection;
use JustSteveKing\HttpHelpers\Enums\Method;
use Throwable;

final readonly class DatabaseResource
{
    public function __construct(
        private PlanetscaleConnector $connector,
    ) {}

    public function list(string $organization): Collection
    {
        try {
            $response = $this->connector->send(
                method: Method::GET,
                uri: "/organizations/{$organization}/databases",
            );
        } catch (Throwable $exception) {
            throw $exception;
        }

        return $response->collect('data');
    }

    public function regions()
    {
        //
    }
}

Our list method accepts an organization parameter, which we use to build the URL for listing that organization’s databases through the connector. Wrapping the call in a try-catch block lets us catch potential exceptions from the connector’s send method. Finally, we return a collection from the method so we can work with the data in our application.
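
As a quick usage sketch (the organization slug and the 'name' key are illustrative assumptions), resolving the connector from the container and listing databases could look like this:

use App\Http\Integrations\Planetscale\PlanetscaleConnector;

// Resolve the bound connector and list databases for an organization.
$databases = app(PlanetscaleConnector::class)
    ->databases()
    ->list(organization: 'my-organization');

// Log each database name (assuming each item exposes a 'name' key).
$databases->each(fn (array $database) => info($database['name']));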

We can go into more detail with this request by mapping the data from arrays onto something more contextually useful using DTOs. I have written about that approach before, so I won’t repeat it here.

Let’s quickly look at the BackupResource to look at more than just a get request.

declare(strict_types=1);

namespace App\Http\Integrations\Planetscale\Resources;

use App\Http\Integrations\Planetscale\Entities\CreateBackup;
use App\Http\Integrations\Planetscale\PlanetscaleConnector;
use JustSteveKing\HttpHelpers\Enums\Method;
use Throwable;

final readonly class BackupResource
{
    public function __construct(
        private PlanetscaleConnector $connector,
    ) {}

    public function create(CreateBackup $entity): array
    {
        try {
            $response = $this->connector->send(
                method: Method::POST,
                uri: "/organizations/{$entity->organization}/databases/{$entity->database}/branches/{$entity->branch}",
                options: $entity->toRequestBody(),
            );
        } catch (Throwable $exception) {
            throw $exception;
        }

        return $response->json('data');
    }
}

Our create method accepts an entity class, which I use to pass data through the application where needed. This is useful when the URL needs a set of parameters and we need to send a request body through.
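
The CreateBackup entity itself is not shown in this tutorial, but a minimal sketch could look like the following; the name field and the request-body shape are illustrative assumptions rather than Planetscale’s documented payload:

declare(strict_types=1);

namespace App\Http\Integrations\Planetscale\Entities;

final readonly class CreateBackup
{
    public function __construct(
        public string $organization,
        public string $database,
        public string $branch,
        public string $name, // illustrative field; adjust to the payload you actually send
    ) {}

    // The array handed to the connector as the request options/body.
    public function toRequestBody(): array
    {
        return [
            'name' => $this->name,
        ];
    }
}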

I haven’t covered testing here, but I did write a tutorial on how to test JSON:API endpoints using PestPHP here, which will have similar concepts for testing an integration like this.

Using this approach, I can create reliable and extendable integrations with third parties. The code is split into logical parts, so each class only handles a manageable amount of logic. Typically, I would have more than one integration, so some of this logic can be shared by extracting it into traits to inherit behaviour between integrations, as sketched below.
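
As an example of that kind of extraction, here is a minimal, hypothetical sketch that pulls the send method from earlier into a shared trait (the trait name and namespace are my assumptions):

declare(strict_types=1);

namespace App\Http\Integrations\Concerns;

use Illuminate\Http\Client\Response;
use JustSteveKing\HttpHelpers\Enums\Method;

trait SendsRequests
{
    // Consuming connectors are expected to define a PendingRequest $request property.
    public function send(Method $method, string $uri, array $options = []): Response
    {
        return $this->request->send(
            method: $method->value,
            url: $uri,
            options: $options,
        )->throw();
    }
}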

Laravel News

Why Disney Fails: Their Blindness To Real Geek Culture

http://img.youtube.com/vi/zktqtTabJgk/0.jpg

Here’s a nice, short rant from Paul Chato (whom I’d not heard of before) on why Disney’s social justice re-imaginings of classic franchises fail: it’s not just their woeful ignorance of their own franchises, it’s their woeful ignorance of the vaster connected universe of fandom/geekdom/nerdom.

  • “The thing that really ties those of us who grew up reading comic books together is not the primary properties like Superman, Batman, Spider-man, or even Lord of the Rings, but the peripheral stuff or peripheral interests. When we talk to each other, we’ll also reference video games, anime, manga, computers, astronomy, network protocols, synthesizers, cars.”
  • Put a bunch of us nerds together, even complete strangers, into a room, well, Heaven help you, and soon we’ll be talking about Cowboy Bebop or Akira Kurosawa, or NES, Atari, ColecoVision, Ultraman, Kirby, Adams, McFarland, Studio Ghibli, second breakfasts, Plan 9 From Outer Space, Fireball XL5, Scooby-Doo, The Day The Earth Stood Still, Matrix (only the first one), Terminator, Blade Runner, Aliens, Herge, Miller, Robert E. Howard, Harryhausen, Lasseter, good scotch. Has anyone heard Kathleen Kennedy talk about any of those things? Of course not, and I can hear you laughing.

  • That’s a pretty good name check list, though I’d add Robert A. Heinlein and H. P. Lovecraft (among others).

    But it’s an interesting point: Social justice showrunners are woefully ignorant of vast swathes of knowledge held by the fandoms they hold in such withering contempt.

    Lawrence Person’s BattleSwarm Blog

    Generate Laravel migrations from an existing database

    https://leopoletto.com/assets/images/generate-laravel-migrations.png

    One of the common challenges when migrating a legacy PHP application to Laravel is creating database migrations based on the existing database.

    Depending on the size of the database, it can become an exhausting task.
    I had to do it a few times, but recently I stumbled upon a database with over a hundred tables.

    As programmers, we don’t have the patience to do such a task, and we shouldn’t. The first thought is how to automate it. With that in mind, I searched for an existing solution, found some packages, and picked one by kitloong: the Laravel Migrations Generator package.

    Practical example using an existing database structure

    Creating the tables

    CREATE TABLE permissions
    (
        id bigint unsigned auto_increment primary key,
        name varchar(255) not null,
        guard_name varchar(255) not null,
        created_at timestamp    null,
        updated_at timestamp    null,
        constraint permissions_name_guard_name_unique
        unique (name, guard_name)
    )
    collate = utf8_unicode_ci;
    
    CREATE TABLE roles
    (
        id bigint unsigned auto_increment primary key,
        team_id bigint unsigned null,
        name varchar(255) not null,
        guard_name varchar(255) not null,
        created_at timestamp null,
        updated_at timestamp null,
        constraint roles_team_id_name_guard_name_unique
        unique (team_id, name, guard_name)
    )
    collate = utf8_unicode_ci;
    
    CREATE TABLE role_has_permissions
    (
        permission_id bigint unsigned not null,
        role_id bigint unsigned not null,
        primary key (permission_id, role_id),
        constraint role_has_permissions_permission_id_foreign
        foreign key (permission_id) references permissions (id)
        on delete cascade,
        constraint role_has_permissions_role_id_foreign
        foreign key (role_id) references roles (id)
        on delete cascade
    )
    collate = utf8_unicode_ci;
    
    CREATE INDEX roles_team_foreign_key_index on roles (team_id);
    

    Installing the package

    composer require --dev kitloong/laravel-migrations-generator
    

    Running the package command that does the magic

    You can specify or ignore the tables you want using --tables= or --ignore= respectively.

    Below is the command I ran for the tables we created above.
    To run for all the tables, don’t add any additional filters.

    php artisan migrate:generate --tables="roles,permissions,role_has_permissions"
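
    The inverse filter works the same way; a hypothetical example (these table names are made up) that generates migrations for everything except a couple of tables:

    php artisan migrate:generate --ignore="failed_jobs,password_resets"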
    

    Command output

    Using connection: mysql
    
    Generating migrations for: permissions,role_has_permissions,roles
    
    Do you want to log these migrations in the migrations table? (yes/no) [yes]:
    > yes
    
    Setting up Tables and Index migrations.
    Created: /var/www/html/database/migrations/2023_06_08_132125_create_permissions_table.php
    Created: /var/www/html/database/migrations/2023_06_08_132125_create_role_has_permissions_table.php
    Created: /var/www/html/database/migrations/2023_06_08_132125_create_roles_table.php
    
    Setting up Views migrations.
    
    Setting up Stored Procedures migrations.
    
    Setting up Foreign Key migrations.
    Created: /var/www/html/database/migrations/2023_06_08_132128_add_foreign_keys_to_role_has_permissions_table.php
    
    Finished!
    

    Checking the migration files

    Permissions table: 2023_06_08_132125_create_permissions_table.php

    ...
    
    Schema::create('permissions', function (Blueprint $table) {
        $table->bigIncrements('id');

        $table->string('name');
        $table->string('guard_name');
        $table->timestamps();

        $table->unique(['name', 'guard_name']);
    });
    
    ...
    

    Roles table: 2023_06_08_132125_create_roles_table.php

    ...
    
    Schema::create('roles', function (Blueprint $table) {
        $table->bigIncrements('id');
    
        $table->unsignedBigInteger('team_id')
            ->nullable()
            ->index('roles_team_foreign_key_index');
    
        $table->string('name');
        $table->string('guard_name');
        $table->timestamps();
    
        $table->unique(['team_id', 'name', 'guard_name']);
    });
    
    ...
    

    Pivot table: 2023_06_08_132125_create_role_has_permissions_table.php

    ...
    
    Schema::create('role_has_permissions', function (Blueprint $table) {
        $table->unsignedBigInteger('permission_id');
    
        $table->unsignedBigInteger('role_id')
            ->index('role_has_permissions_role_id_foreign');
    
        $table->primary(['permission_id', 'role_id']);
    });
    
    ...
    

    Add foreign key to the pivot table: 2023_06_08_132128_add_foreign_keys_to_role_has_permissions_table.php

    ...
    
    Schema::table('role_has_permissions', function (Blueprint $table) {
        $table->foreign(['permission_id'])
            ->references(['id'])
            ->on('permissions')
            ->onUpdate('NO ACTION')
            ->onDelete('CASCADE');
    
        $table->foreign(['role_id'])
            ->references(['id'])
            ->on('roles')
            ->onUpdate('NO ACTION')
            ->onDelete('CASCADE');
    });
    
    ...
    

    This is just one of the challenges when migrating a legacy PHP application to Laravel.

    The following post will be about password hashing algorithm incompatibility.

    Join the discussion on Twitter.

    Laravel News Links

    Real Python: Python Basics: Reading and Writing Files

    Files are everywhere in the modern world. They’re the medium in which data is digitally stored and transferred. Chances are, you’ve opened dozens, if not hundreds, of files just today! Now it’s time to read and write files with Python.

    In this video course, you’ll learn how to:

    • Understand the difference between text and binary files
    • Learn about character encodings and line endings
    • Work with file objects in Python
    • Read and write character data in various file modes
    • Use open(), Path.open(), and the with statement
    • Take advantage of the csv module to manipulate CSV data

    This video course is part of the Python Basics series, which accompanies Python Basics: A Practical Introduction to Python 3. You can also check out the other Python Basics courses.

    Note that you’ll be using IDLE to interact with Python throughout this course. If you’re just getting started, then you might want to check out Python Basics: Setting Up Python before diving into this course.



    Planet Python

    Ready-to-Use High Availability Architectures for MySQL and PostgreSQL

    https://www.percona.com/blog/wp-content/uploads/2023/06/high-availability-architectures-for-MySQL-and-PostgreSQL-200×113.jpg

    High availability architectures for MySQL and PostgreSQL

    When it comes to access to their applications, users demand instant, reliable, and secure interactions — and that means databases must be highly available.

    With database high availability (HA), services are largely uninterrupted, and end users are largely satisfied. Without high availability, there’s more-than-negligible downtime, and end users can become non-users (as in, former customers). A business can also incur reputational damage and face penalties for not meeting Service Level Agreements (SLAs).

    Open source databases provide great foundations for high availability — without the pitfalls of vendor lock-in that can come with proprietary software. However, open source software doesn’t typically include built-in HA solutions. Sure, you can get there with the right extensions and tools, but it can be a long, burdensome, and potentially expensive process. So why not use a proven architecture instead of starting from scratch on your own?

    This blog provides links to such architectures — for MySQL and PostgreSQL software. They’re proven and ready-to-go. You can use these Percona architectures to build highly available PostgreSQL or MySQL environments or have our experts do the heavy lifting for you. Either way, the architectures provide outlines for building databases that keep operations running optimally, even during peak usage or amid technical challenges caused by anything from brief outages to disasters.

    First, let’s quickly examine what’s at stake and discuss standards for protecting those assets.

    Importance of high availability architecture

    As indicated, an HA architecture provides the blueprint for building a database that assures the continuity of critical business operations, even amid crashes, incursions, outages, and other threats. Conversely, choosing a piecemeal approach — one in which you attempt to build a database through the trial and error of various tools and extensions  — can leave your system vulnerable.

    That vulnerability can be costly: A 2022 ITIC survey found that the cost of downtime is greater than $300,000 per hour for 91% of small, mid-size, and large enterprises. Among just the mid-size and large respondents, 44% said a single hour of downtime could potentially cost them more than $1 million.

    The ultimate goal of HA architecture

    So what’s the ultimate goal of using an HA architecture? The obvious answer is this: to achieve high availability. That can mean different things for different businesses, but within IT, 99.999% (“five nines”) is the gold standard of database availability.

    It really depends on how much downtime you can bear. With streaming services, for example, excessive downtime could result in significant financial and reputational losses for the business. Elsewhere, millions can be at stake for financial institutions, and lives can be at stake in the healthcare industry. Other organizations can tolerate a few minutes of downtime without negatively affecting or irking their end users. (Now, there’s a golden rule to go with the golden standard: Don’t irk the end user!)

    The following table shows the amount of downtime for each level of availability, from “two nines” to “five nines.” You’ll see that even five nines doesn’t deliver 100% uptime, but it’s close.
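
    As a rough reconstruction of those figures (standard arithmetic: downtime per year = (1 − availability) × one year), the levels translate to approximately:

    • 99% ("two nines"): about 3.65 days of downtime per year
    • 99.9% ("three nines"): about 8.77 hours per year
    • 99.99% ("four nines"): about 52.6 minutes per year
    • 99.999% ("five nines"): about 5.26 minutes per year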

    The immediate (working) goal and requirements of HA architecture

    The more immediate (and “working”) goal of an HA architecture is to bring together a combination of extensions, tools, hardware, software, etc., and package them into a design (blueprint) for a database infrastructure that is fit to perform optimally amid demanding conditions. That design depicts an infrastructure of high-availability nodes/clusters that work together (or separately, if necessary) so that if one goes down, another takes over.

    Proven architectures — including those we share in this blog — have met several high availability requirements. When those requirements are met, databases will include:

    • Redundancy: Critical components of a database are duplicated so that if one component fails, the functionality continues by using a redundant component. For example, in a server cluster, multiple servers are used to host the same application so that if one server fails, the application can continue to run on the other servers.
    • Load balancing: Traffic is distributed across multiple servers to prevent any one component from becoming overloaded. Load balancers can detect when a component is not responding and put traffic redirection in motion.
    • No single-point-of-failure (SPOF): This is both an exclusion and an inclusion for the architecture. There cannot be any single-point-of-failure in the database environment, including physical or virtual hardware the database system relies on that would cause it to fail. So there must be multiple components whose function, in part, is ensuring there’s no SPOF.
    • Failure detection: Monitoring mechanisms detect failures or issues that could lead to failures. Alert mechanisms report those failures or issues so that they are addressed immediately.
    • Failover: This involves automatically switching to a redundant component when the primary component fails. If a primary server fails, a backup server can take over and continue to serve requests.
    • Cluster and connection management: This includes software for automated provisioning, configuration, and scaling of database nodes. Clustering solutions typically bundle with a connection manager. However, in asynchronous clusters, deploying a connection manager is mandatory for high availability.
    • Automated backup, continuous archiving, and recovery: This is of extreme importance if any replication delay happens and the replica node isn’t able to work at the primary’s pace. The backed-up and archived files can also be used for point-in-time recovery if any disaster occurs.
    • Scalability: HA architecture should support scalability that enables automated management of increased workloads and data volume. This can be achieved through techniques like sharding, where the data is partitioned and distributed across multiple nodes, or by adding more nodes to the cluster as needed.

    Get going (or get some help) using proven Percona architectures

    Designing and implementing a highly available database environment requires considerable time and expertise. So instead of you having to select, configure, and test those configurations to build a highly available database environment, why not use ours? You can use Percona architectures on your own, call on us as needed, or have us do it all for you.

    High availability MySQL architectures

    Check out Percona’s architecture and deployment recommendations, along with a technical overview, for a MySQL solution that provides a high level of availability and assumes heavy read/write application workloads.

    Percona Distribution for MySQL: High Availability with Group Replication

     

    If you need even more high availability in your MySQL database, check out Percona XtraDB Cluster (PXC).

    High availability PostgreSQL architectures

    Access a PostgreSQL architecture description and deployment recommendations, along with a technical overview, of a solution that provides high availability for mixed-workload applications.

     

    Percona Distribution for PostgreSQL: High Availability with Streaming Replication

     

    View a disaster recovery architecture for PostgreSQL, with deployment recommendations based on Percona best practices.

    PostgreSQL: Disaster Recovery

     

    Here are additional links to Percona architectures for high availability PostgreSQL databases:

    Highly Available PostgreSQL From Percona

    Achieving High Availability on PostgreSQL With Open Source Tools

    High Availability in PostgreSQL with Patroni

    High Availability MongoDB From Percona

    Percona offers support for MongoDB clusters in any environment. Our experienced support team is available 24x7x365 to ensure continual high performance from your MongoDB systems.

     

    Percona Operator for MongoDB Design Overview

    Percona Database Performance Blog

    TechBeamers Python: Get Started with DataClasses in Python

    Python dataclasses are a powerful feature that simplifies the process of creating classes for storing and manipulating data. Dataclasses were introduced in Python 3.7 as part of the standard library module called dataclasses. We’ll explore the concept step by step with easy-to-understand explanations and coding examples. Dataclasses in Python are closely […]

    The post Get Started with DataClasses in Python appeared first on TechBeamers.

    Planet Python

    Cause and Cure Discovered for a Common Type of High Blood Pressure

    Researchers at a London-based public research university had already discovered that for 5-10% of people with hypertension, the cause is a gene mutation in their adrenal glands. (The mutation results in excessive production of a hormone called aldosterone.) But that was only the beginning, according to a new announcement from the university shared by SciTechDaily:
    Clinicians at Queen Mary University of London and Barts Hospital have identified a gene variant that causes a common type of hypertension (high blood pressure) and a way to cure it, new research published in the journal Nature Genetics shows. The cause is a tiny benign nodule, present in one in twenty people with hypertension. The nodule produces a hormone, aldosterone, that controls how much salt is in the body. The new discovery is a gene variant in some of these nodules which leads to a vast, but intermittent, over-production of the hormone.

    The gene variant discovered today causes several problems which make it hard for doctors to diagnose some patients with hypertension. Firstly, the variant affects a protein called CADM1 and stops cells in the body from 'talking' to each other and saying that it is time to stop making aldosterone. The fluctuating release of aldosterone throughout the day is also an issue for doctors, which at its peak causes salt overload and hypertension. This fluctuation explains why patients with the gene variant can elude diagnosis unless they happen to have blood tests at different times of day.

    The researchers also discovered that this form of hypertension could be cured by unilateral adrenalectomy — removing one of the two adrenal glands. Following removal, hypertension that had previously been severe despite treatment with multiple drugs disappeared, with no treatment required through many subsequent years of observation. Fewer than 1% of people with hypertension caused by aldosterone are identified because aldosterone is not routinely measured as a possible cause. The researchers recommend that aldosterone be measured through a 24-hour urine test rather than one-off blood measurements, which will identify more people who are living with hypertension but going undiagnosed.


    Read more of this story at Slashdot.

    Slashdot

    Firing a Bowling Ball Cannon

    https://theawesomer.com/photos/2023/06/bowling_ball_cannon_t.jpg

    Firing a Bowling Ball Cannon

    Link

    Cannons are generally designed to fire iron cannonballs. Ballistic High-Speed shows us there’s no good reason they can’t fire bowling balls too. In this satisfying slow-motion video, you’ll see what happens when a bowling ball meets various objects at speeds over 300 feet per second. You definitely would not want to be on the business end of this thing.

    The Awesomer