TechBeamers Python: Get Started with DataClasses in Python

Python dataclasses are a powerful feature that simplifies the process of creating classes for storing and manipulating data. Dataclasses were introduced in Python 3.7 as part of the standard library module called dataclasses. We’ll explore the concept step by step with easy-to-understand explanations and coding examples. Dataclasses in Python are closely […]
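
Since the full tutorial is truncated in this excerpt, here is a minimal sketch of what a dataclass looks like; the InventoryItem class and its fields are illustrative and not taken from the TechBeamers post:

from dataclasses import dataclass, field
from typing import List

@dataclass
class InventoryItem:
    # __init__, __repr__, and __eq__ are generated automatically
    name: str
    unit_price: float
    quantity: int = 0
    tags: List[str] = field(default_factory=list)  # mutable defaults need a factory

    def total_cost(self) -> float:
        return self.unit_price * self.quantity

item = InventoryItem("widget", 2.50, quantity=4)
print(item)               # InventoryItem(name='widget', unit_price=2.5, quantity=4, tags=[])
print(item.total_cost())  # 10.0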

The post Get Started with DataClasses in Python appeared first on TechBeamers.

Planet Python

Ready-to-Use High Availability Architectures for MySQL and PostgreSQL


When it comes to access to their applications, users demand instant, reliable, and secure interactions — and that means databases must be highly available.

With database high availability (HA), services are largely uninterrupted, and end users are largely satisfied. Without high availability, there’s more-than-negligible downtime, and end users can become non-users (as in, former customers). A business can also incur reputational damage and face penalties for not meeting Service Level Agreements (SLAs).

Open source databases provide great foundations for high availability — without the pitfalls of vendor lock-in that can come with proprietary software. However, open source software doesn’t typically include built-in HA solutions. Sure, you can get there with the right extensions and tools, but it can be a long, burdensome, and potentially expensive process. So why not use a proven architecture instead of starting from scratch on your own?

This blog provides links to such architectures — for MySQL and PostgreSQL software. They’re proven and ready-to-go. You can use these Percona architectures to build highly available PostgreSQL or MySQL environments or have our experts do the heavy lifting for you. Either way, the architectures provide outlines for building databases that keep operations running optimally, even during peak usage or amid technical challenges caused by anything from brief outages to disasters.

First, let’s quickly examine what’s at stake and discuss standards for protecting those assets.

Importance of high availability architecture

As indicated, an HA architecture provides the blueprint for building a database that assures the continuity of critical business operations, even amid crashes, incursions, outages, and other threats. Conversely, choosing a piecemeal approach — one in which you attempt to build a database through the trial and error of various tools and extensions  — can leave your system vulnerable.

That vulnerability can be costly: A 2022 ITIC survey found that the cost of downtime is greater than $300,000 per hour for 91% of small, mid-size, and large enterprises. Among just the mid-size and large respondents, 44% said a single hour of downtime could potentially cost them more than $1 million.

The ultimate goal of HA architecture

So what’s the ultimate goal of using an HA architecture? The obvious answer is this: to achieve high availability. That can mean different things for different businesses, but within IT, 99.999% (“five nines”) availability is the gold standard for databases.

It really depends on how much downtime you can bear. With streaming services, for example, excessive downtime could result in significant financial and reputational losses for the business. Elsewhere, millions can be at stake for financial institutions, and lives can be at stake in the healthcare industry. Other organizations can tolerate a few minutes of downtime without negatively affecting or irking their end users. (Now, there’s a golden rule to go with the gold standard: Don’t irk the end user!)

The following table shows the amount of downtime for each level of availability, from “two nines” to “five nines.” You’ll see that even five nines doesn’t deliver 100% uptime, but it’s close.
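
Since the table itself did not survive as text, here is a reconstruction computed directly from the availability percentages (downtime per year = (1 - availability) × 365 days):

Availability             Downtime per year
99% (“two nines”)        ~3.65 days
99.9% (“three nines”)    ~8.76 hours
99.99% (“four nines”)    ~52.6 minutes
99.999% (“five nines”)   ~5.26 minutes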

The immediate (working) goal and requirements of HA architecture

The more immediate (and “working”) goal of an HA architecture is to bring together a combination of extensions, tools, hardware, software, etc., and package them in a design (blueprint) of a database infrastructure that’s fit to perform optimally amid demanding conditions. That design will depict an infrastructure of high availability nodes/clusters that work together (or separately, if necessary) so that if one goes down, another takes over.

Proven architectures — including those we share in this blog — have met several high availability requirements. When those requirements are met, databases will include:

  • Redundancy: Critical components of a database are duplicated so that if one component fails, the functionality continues by using a redundant component. For example, in a server cluster, multiple servers are used to host the same application so that if one server fails, the application can continue to run on the other servers.
  • Load balancing: Traffic is distributed across multiple servers to prevent any one component from becoming overloaded. Load balancers can detect when a component is not responding and put traffic redirection in motion.
  • No single point of failure (SPOF): This requirement is both an exclusion and an inclusion. There cannot be any single point of failure in the database environment, including any physical or virtual hardware the database system relies on whose failure would bring the database down. So there must be redundant components whose function, in part, is to ensure there is no SPOF.
  • Failure detection: Monitoring mechanisms detect failures or issues that could lead to failures. Alert mechanisms report those failures or issues so that they are addressed immediately. (A minimal monitoring sketch follows this list.)
  • Failover: This involves automatically switching to a redundant component when the primary component fails. If a primary server fails, a backup server can take over and continue to serve requests.
  • Cluster and connection management: This includes software for automated provisioning, configuration, and scaling of database nodes. Clustering solutions typically bundle with a connection manager. However, in asynchronous clusters, deploying a connection manager is mandatory for high availability.
  • Automated backup, continuous archiving, and recovery: This is of extreme importance if any replication delay happens and the replica node isn’t able to work at the primary’s pace. The backed-up and archived files can also be used for point-in-time recovery if any disaster occurs.
  • Scalability: HA architecture should support scalability that enables automated management of increased workloads and data volume. This can be achieved through techniques like sharding, where the data is partitioned and distributed across multiple nodes, or by adding more nodes to the cluster as needed.
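
As a small illustration of the failure detection requirement above, here is a minimal monitoring sketch against PostgreSQL’s built-in pg_stat_replication view (PostgreSQL 10 and later). It is an assumption-laden example rather than part of the Percona architectures; in practice a tool such as Percona Monitoring and Management or a dedicated alerting system would do this job:

-- Run on the primary: one row per connected replica.
SELECT application_name,
       client_addr,
       state,                                   -- expect 'streaming' in steady state
       pg_wal_lsn_diff(sent_lsn, replay_lsn) AS replay_lag_bytes
FROM pg_stat_replication;

If replay_lag_bytes keeps growing, or a replica disappears from the view entirely, the failover machinery (or a human) needs to step in.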

Get going (or get some help) using proven Percona architectures

Designing and implementing a highly available database environment requires considerable time and expertise. So instead of you having to select, configure, and test those configurations to build a highly available database environment, why not use ours? You can use Percona architectures on your own, call on us as needed, or have us do it all for you.

High availability MySQL architectures

Check out Percona architecture and deployment recommendations, along with a technical overview, for a MySQL solution that provides a high level of availability and assumes demanding read/write workloads.

Percona Distribution for MySQL: High Availability with Group Replication


If you need even more high availability in your MySQL database, check out Percona XtraDB Cluster (PXC).

High availability PostgreSQL architectures

Access a PostgreSQL architecture description and deployment recommendations, along with a technical overview, of a solution that provides high availability for mixed-workload applications.


Percona Distribution for PostgreSQL: High Availability with Streaming Replication


View a disaster recovery architecture for PostgreSQL, with deployment recommendations based on Percona best practices.

PostgreSQL: Disaster Recovery


Here are additional links to Percona architectures for high availability PostgreSQL databases:

Highly Available PostgreSQL From Percona

Achieving High Availability on PostgreSQL With Open Source Tools

High Availability in PostgreSQL with Patroni

High Availability MongoDB From Percona

Percona offers support for MongoDB clusters in any environment. Our experienced support team is available 24x7x365 to ensure continual high performance from your MongoDB systems.


Percona Operator for MongoDB Design Overview

Percona Database Performance Blog

Cause and Cure Discovered for a Common Type of High Blood Pressure

Researchers at a London-based public research university had already discovered that for 5-10% of people with hypertension, the cause is a gene mutation in their adrenal glands. (The mutation results in excessive production of a hormone called aldosterone.) But that was only the beginning, according to a new announcement from the university shared by SciTechDaily:
Clinicians at Queen Mary University of London and Barts Hospital have identified a gene variant that causes a common type of hypertension (high blood pressure) and a way to cure it, new research published in the journal Nature Genetics shows. The cause is a tiny benign nodule, present in one in twenty people with hypertension. The nodule produces a hormone, aldosterone, that controls how much salt is in the body. The new discovery is a gene variant in some of these nodules which leads to a vast, but intermittent, over-production of the hormone.

The gene variant discovered today causes several problems which make it hard for doctors to diagnose some patients with hypertension. Firstly, the variant affects a protein called CADM1 and stops cells in the body from ‘talking’ to each other and saying that it is time to stop making aldosterone. The fluctuating release of aldosterone throughout the day is also an issue for doctors, which at its peak causes salt overload and hypertension. This fluctuation explains why patients with the gene variant can elude diagnosis unless they happen to have blood tests at different times of day.

The researchers also discovered that this form of hypertension could be cured by unilateral adrenalectomy (removing one of the two adrenal glands). Following removal, previously severe hypertension despite treatment with multiple drugs disappeared, with no treatment required through many subsequent years of observation. Fewer than 1% of people with hypertension caused by aldosterone are identified because aldosterone is not routinely measured as a possible cause. The researchers are recommending that aldosterone is measured through a 24-hour urine test rather than one-off blood measurements, which will discover more people living with hypertension but going undiagnosed.


Read more of this story at Slashdot.

Slashdot

Firing a Bowling Ball Cannon


Cannons are generally designed to fire iron cannonballs. Ballistic High-Speed shows us there’s no good reason they can’t fire bowling balls too. In this satisfying slow-motion video, you’ll see what happens when a bowling ball meets various objects at speeds over 300 feet per second. You definitely would not want to be on the business end of this thing.

The Awesomer

Unleashing the Power of PostgreSQL Event-Based Triggers


PostgreSQL provides a powerful mechanism for implementing event-driven actions using triggers. Triggers on Data Definition Language (DDL) events are a powerful feature of PostgreSQL that allows you to perform additional actions in response to changes to the database schema. DDL events include operations such as CREATE, ALTER, and DROP statements on tables, indexes, and other database objects. In this blog post, we will explore using triggers on DDL events in PostgreSQL to implement custom logic and automate database management tasks.

Creating event-based triggers

To create an event-based trigger in PostgreSQL, first create a trigger function that defines the logic to be executed when the trigger fires. The trigger function can be written in PL/pgSQL, PL/Python, or any other language supported by PostgreSQL.

An event trigger function is created in the same way as any user-defined function, except that it returns the event_trigger pseudo-type instead of a normal data type:

CREATE OR REPLACE FUNCTION trigger_function_name()
RETURNS event_trigger AS
$$
DECLARE
    -- Declare variables if needed
BEGIN
    -- Function body
    -- Perform desired actions when the trigger fires
    NULL;  -- placeholder statement so the empty template compiles
END;
$$
LANGUAGE plpgsql;

Once the trigger function is created, an event trigger can be created and associated with a specific event. Unlike normal triggers, which are created on specific tables and fire for DML operations such as INSERT, UPDATE, and DELETE, event-based triggers are created for DDL events and are not tied to a particular table.

CREATE EVENT TRIGGER trigger_name
    ON event_trigger_event
    [ WHEN filter_condition ]
    EXECUTE FUNCTION trigger_function_name();

In this syntax, event_trigger_event can be any of the following events, which are described in more detail in the PostgreSQL documentation:

  • ddl_command_start
  • ddl_command_end
  • sql_drop
  • table_rewrite

This syntax will become clearer with the examples in the next sections.

Using triggers on DDL events

Triggers on DDL events can be used for a wide range of purposes and database management tasks. Here are some examples of how to use DDL triggers:

  • Log schema changes: One can use DDL triggers to log all schema changes, providing an audit trail of who made the changes and when.
  • Automate database management tasks: One can use DDL triggers to automate routine database management tasks, such as creating indexes or updating views.
  • Enforce naming conventions: One could use a DDL trigger to enforce naming conventions for tables and columns, ensuring that all objects are named consistently.

Let’s create a few triggers that help us understand all the above usages of event triggers.

Before creating the trigger, let’s create the table which will log all the DDL statements:

CREATE TABLE ddl_log (
    id integer PRIMARY KEY,
    username TEXT,
    object_tag TEXT,
    ddl_command TEXT,
    timestamp TIMESTAMP
);
CREATE SEQUENCE ddl_log_seq;

Let’s create the event trigger function, which will insert the data into the above table:

CREATE OR REPLACE FUNCTION log_ddl_changes()
RETURNS event_trigger AS $$
BEGIN
    INSERT INTO ddl_log
    (
        id,
        username,
        object_tag,
        ddl_command,
        timestamp
    )
    VALUES
    (
        nextval('ddl_log_seq'),  -- next id from the sequence created above
        current_user,            -- role that ran the DDL
        tg_tag,                  -- command tag, e.g. CREATE TABLE
        current_query(),         -- text of the DDL statement
        current_timestamp
    );
END;
$$ LANGUAGE plpgsql;

Let’s finally create the trigger, which will call the trigger function created above:

CREATE EVENT TRIGGER log_ddl_trigger
    ON ddl_command_end
    EXECUTE FUNCTION log_ddl_changes();

Let’s create a test table and check if we get the entry in the ddl_log table or not:

demo=# create table test (t1 numeric primary key);
CREATE TABLE
demo=# select * from ddl_log;
id | username | object_tag | ddl_command | timestamp
----+----------+--------------+---------------------------------------------+----------------------------
1 | postgres | CREATE TABLE | create table test (t1 numeric primary key); | 2023-06-02 15:24:54.067929
(1 row)
demo=# drop table test;
DROP TABLE
demo=#
demo=# select * from ddl_log;
id | username | object_tag | ddl_command | timestamp
----+----------+--------------+---------------------------------------------+----------------------------
1 | postgres | CREATE TABLE | create table test (t1 numeric primary key); | 2023-06-02 15:24:54.067929
2 | postgres | DROP TABLE | drop table test; | 2023-06-02 15:25:14.590444
(2 rows)

In this way, schema changes can be logged using the above event trigger code.
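
Once entries accumulate, ddl_log can be queried like any other table. For example, an illustrative query to review just the destructive changes recorded by the trigger:

SELECT id, username, ddl_command
FROM ddl_log
WHERE object_tag = 'DROP TABLE'
ORDER BY id;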

Even though it is not a hard and fast rule, in my experience foreign key columns are usually the ones used in join conditions, so it pays to have indexes on them; the trigger below automates index creation on foreign key columns. Many applications also have naming conventions for table names, so let’s see an example that throws an error if a table name does not start with ‘tbl_’. Similar code can be developed for any object type. The following code achieves both use cases.

Here is a common trigger function for naming conventions and index creation; this code has been custom developed through my experience working with PostgreSQL and researching online.

CREATE OR REPLACE FUNCTION chk_tblnm_crt_indx()
RETURNS event_trigger AS
$$
DECLARE
    obj record;
    col record;
    table_name text;
    column_name text;
BEGIN
    FOR obj IN SELECT * FROM pg_event_trigger_ddl_commands() WHERE command_tag = 'CREATE TABLE'
    LOOP
        -- Check if the table name starts with tbl_
        table_name := obj.objid::regclass;
        IF table_name NOT LIKE 'tbl_%' THEN
            RAISE EXCEPTION 'Table name must start with tbl_';
        END IF;
        -- If there is any foreign key, create an index on its columns
        FOR col IN
            SELECT a.attname AS column_name
            FROM pg_constraint AS c
            CROSS JOIN LATERAL unnest(c.conkey) WITH ORDINALITY AS k(attnum, n)
            JOIN pg_attribute AS a
                ON k.attnum = a.attnum AND c.conrelid = a.attrelid
            WHERE c.contype = 'f' AND c.conrelid = obj.objid::regclass
        LOOP
            EXECUTE format('CREATE INDEX idx_%s_%s ON %s (%s)', table_name, col.column_name, table_name, col.column_name);
            RAISE NOTICE 'INDEX idx_%_% ON % (%) has been created on foreign key column', table_name, col.column_name, table_name, col.column_name;
        END LOOP;
    END LOOP;
END;
$$ LANGUAGE plpgsql;

Let’s finally create the trigger which will call this event trigger function:

CREATE EVENT TRIGGER chk_tblnm_crt_indx_trigger
    ON ddl_command_end
    EXECUTE FUNCTION chk_tblnm_crt_indx();

Let’s create a table that does not start with ‘tbl_’ and check how it gives an error:

demo=# create table dept (dept_id numeric primary key, dept_name varchar);
ERROR: Table name must start with tbl_
CONTEXT: PL/pgSQL function chk_tblnm_crt_indx() line 21 at RAISE
demo=#
demo=# create table tbl_dept (dept_id numeric primary key, dept_name varchar);
CREATE TABLE

Now, let’s create another table that references the tbl_dept table to check if an index is created automatically for a foreign key column or not.

demo=# create table tbl_emp(emp_id numeric primary key, emp_name varchar, dept_id numeric references tbl_dept(dept_id));
NOTICE: INDEX idx_tbl_emp_dept_id ON tbl_emp (dept_id) has been created on foreign key column
CREATE TABLE
demo=#
demo=# \di idx_tbl_emp_dept_id
List of relations
Schema | Name | Type | Owner | Table
--------+---------------------+-------+----------+---------
public | idx_tbl_emp_dept_id | index | postgres | tbl_emp
(1 row)

As per the output of \di, we can see that the index has been created automatically on the foreign key column.

Conclusion

Event-based triggers are a powerful feature of PostgreSQL that allows the implementation of complex business logic and helps automate database operations. By creating triggers that are associated with specific events, one can execute custom logic automatically when an event occurs, enabling additional actions and enforcing business rules. With event-based triggers, one can build more robust and automated database systems and improve the efficiency of data management.

On the other hand, these triggers are best suited to non-production environments, as they can become an overhead in production if the logic is too complex. In my personal opinion, if a table is populated by the application, triggers should not be created on it; such constraints should be implemented on the application side to reduce database load. At the same time, event triggers can be a boon in development (or other non-production) environments for enforcing best practices and recommendations: who made which changes, whether proper naming conventions are used, and similar industry standards. In my experience, I have used them extensively for audit purposes in development environments to track changes made by a huge team of hundreds of people.
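
If that overhead becomes a concern, the event triggers from this post can be switched off or removed without touching anything else. A minimal sketch using standard PostgreSQL commands, applied to the triggers created above:

-- Temporarily stop the audit trigger from firing (its definition is kept)
ALTER EVENT TRIGGER log_ddl_trigger DISABLE;

-- Turn it back on later
ALTER EVENT TRIGGER log_ddl_trigger ENABLE;

-- Remove the naming-convention/index trigger entirely
DROP EVENT TRIGGER chk_tblnm_crt_indx_trigger;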

Percona Distribution for PostgreSQL provides the best and most critical enterprise components from the open-source community, in a single distribution, designed and tested to work together.


Download Percona Distribution for PostgreSQL Today!

Percona Database Performance Blog

ListenData: Transformers Agent: AI Tool That Automates Everything


We have a new AI tool in the market called Transformers Agent which is so powerful that it can automate just about any task you can think of. It can generate and edit images, video, audio, answer questions about documents, convert speech to text and do a lot of other things.

Hugging Face, a well-known name in the open-source AI world, released Transformers Agent, which provides a natural language API on top of transformers. The API is designed to be easy to use. With a single line of code, it provides a variety of tools for performing natural language tasks, such as question answering, image generation, video generation, text to speech, text classification, and summarization.
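
As a rough illustration of that natural language API, here is a minimal sketch based on the Hugging Face release materials from that time; it assumes transformers 4.29+ and access to the free inference endpoint, and the prompts are made up for the example:

from transformers import HfAgent

# Point the agent at a remote LLM that plans which tools to call
# (StarCoder hosted on the Hugging Face inference API).
agent = HfAgent("https://api-inference.huggingface.co/models/bigcode/starcoder")

# The agent picks the right tool (here, an image generator) from the prompt.
picture = agent.run("Draw me a picture of rivers and lakes.")

# Earlier results can be passed back in as keyword arguments.
caption = agent.run("Can you caption the `picture`?", picture=picture)
print(caption)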

Transformers Agent released by Hugging Face

READ MORE »

This post appeared first on ListenData

Planet Python

Learning How Explosions Work


There’s data out there that helps scientists simulate what happens after an explosion gets going, but they still don’t fully understand how to simulate the genesis of a blast. Tom Scott visited a team at the UK’s University of Sheffield working on solving this problem, which could improve the safety of handling explosives and bomb disposal.

The Awesomer

Lest we forget

(Embedded video: https://www.youtube.com/watch?v=0wg5x5WaZPo)

D-day, 1944.

We remember those who gave their lives for freedom on that day.

Peter

Bayou Renaissance Man

Pizza Puzzles


Each slice of Stellar Factory’s pizza puzzles is a smaller puzzle indicated by patterns on the back of its pieces, making them great fun for cooperative puzzle parties. Each 550-piece, 8-slice puzzle features a wavy edge and is loaded with toppings ranging from delicious to downright disturbing. Choose from pepperoni, veggie supreme, or meat lover’s varieties.

The Awesomer