Sadequl Hussain: 7 Best Practice Tips for PostgreSQL Bulk Data Loading

https://postgr.es/p/4U4

Sometimes, PostgreSQL databases need to import large quantities of data in a single step or a minimal number of steps. This process can sometimes be unacceptably slow. In this article, we will cover some best practice tips for bulk importing data into PostgreSQL databases.
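To give a flavor of the kind of tip involved: the single biggest win usually comes from using COPY rather than row-by-row INSERTs. A minimal sketch (the table and file names here are made up for illustration):

CREATE TABLE sales_staging (id int, amount numeric, sold_at timestamptz);
COPY sales_staging FROM '/tmp/sales.csv' WITH (FORMAT csv, HEADER true);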

Postgresql

via Planet PostgreSQL https://ift.tt/2g0pqKY

September 15, 2020 at 11:21AM

The Mandalorian’s Season 2 Trailer Is Here, and It Brought Baby Yoda

https://ift.tt/3c1qN9L


This is the way to more episodes of The Mandalorian. And more Baby Yoda adorableness, of course.

Out of nowhere, Lucasfilm dropped our very first look at The Mandalorian’s sophomore season in action, picking up where season one left off: Din Djarin (Pedro Pascal), our titular bounty-hunting hero, and his newly inducted clanmate “The Child” jetting off on a quest not just to find the little green force-user’s people, but to keep themselves safe from the sinister grip of Imperial Remnant officer Moff Gideon (Giancarlo Esposito).

While the trailer doesn’t give too much away—just like season one’s cryptic footage sneakily hiding tiny Baby Yoda’s massive presence in the show—there have been plenty of rumors hinting that we should expect tons of familiar faces and major explorations of the Star Wars canon as we know it this season. Beyond teases of the return of Clone Wars favorites like Ahsoka Tano and Mandalorian Death Watch agent Bo-Katan Kryze, there’s also the helmeted elephant in the room: Temuera Morrison’s alleged return as legendary bounty hunter Boba Fett.

How will all that factor in? Will Moff Gideon get his grubby hands on the baby? Will Din, indeed, find the way? We won’t have much longer to find out: The Mandalorian will return to Disney+ on October 30th.

For more, make sure you’re following us on our Instagram @io9dotcom.

geeky,Tech

via Gizmodo https://gizmodo.com

September 15, 2020 at 10:21AM

Team finds vitamin D deficiency and COVID-19 infection link

https://ift.tt/3klxGFM

[Image: A vitamin D gelcap on a yellow/orange background.]

There’s an association between vitamin D deficiency and the likelihood of becoming infected with COVID-19, according to a new retrospective study of people tested for COVID-19.

“Vitamin D is important to the function of the immune system and vitamin D supplements have previously been shown to lower the risk of viral respiratory tract infections,” says David Meltzer, professor of medicine and chief of hospital medicine at University of Chicago Medicine and lead author of the study in JAMA Network Open. “Our statistical analysis suggests this may be true for the COVID-19 infection.”

The research team looked at 489 patients whose vitamin D level had been measured within a year before being tested for COVID-19. Patients who had untreated vitamin D deficiency (defined as less than 20 nanograms per milliliter of blood) were almost twice as likely to test positive for COVID-19 compared to patients who had sufficient levels of the vitamin.

Researchers stress that the study only found the two conditions frequently occurring together; it does not prove causation. Meltzer and colleagues plan further clinical trials.

Experts believe half of Americans have a vitamin D deficiency, with much higher rates seen in African Americans, Hispanics, and people living in areas like Chicago where it is difficult to get enough sun exposure in winter.

Research has also shown, however, that some kinds of vitamin D tests don’t detect the form of vitamin D present in a majority of African Americans—which means those tests might falsely diagnose vitamin D deficiencies. The current study accepted either kind of test.

COVID-19 is also more prevalent among African Americans, older adults, nursing home residents, and health care workers—populations who all have increased risk of vitamin D deficiency.

“Understanding whether treating vitamin D deficiency changes COVID-19 risk could be of great importance locally, nationally, and globally,” Meltzer says. “Vitamin D is inexpensive, generally very safe to take, and can be widely scaled.”

Meltzer and his team emphasize the importance of experimental studies to determine whether vitamin D supplementation can reduce the risk, and potentially severity, of COVID-19. They also highlight the need for studies of what strategies for vitamin D supplementation may be most appropriate in specific populations.

The University of Chicago/Rush University Institute for Translational Medicine Clinical and Translational Science Award and the African American Cardiovascular Pharmacogenetic Consortium funded the work.

Source: Gretchen Rubin for University of Chicago

The post Team finds vitamin D deficiency and COVID-19 infection link appeared first on Futurity.

via Futurity.org https://ift.tt/2p1obR5

September 15, 2020 at 01:35PM

Buying A Gun In A Private Sale? Is There a Way to Check If It’s Stolen?

https://ift.tt/3mmOMoM


By David Katz

You’re looking for a gun for everyday carry, a shotgun for hunting season, or perhaps you just want a nice used gun to add to your collection. You also want to find a really good deal and the gun market is tight right now. A private sale might be just the way to go.

Federal law doesn’t prohibit private sales between individuals who reside in the same state, and the vast majority of states do not require that a private sale be facilitated by a federally licensed gun dealer (“FFL”). But think about it: what would happen to you if you bought a gun that turned out to be lost or stolen? Even worse, what would happen if you purchased a firearm that had been used in a crime?

Unfortunately, these things can happen. Further, there is no practical way for you to ensure a gun you purchase from a stranger is not lost or stolen.

FBI Lost and Stolen Gun Database

When a firearm is lost or stolen, the owner should immediately report it to the police. In fact, if a gun is lost or stolen from an FFL, the law requires the FFL to report the missing firearm to the ATF. These reported firearms are entered into a database maintained by the FBI’s National Crime Information Center.

Unfortunately for purchasers in private sales, only law enforcement agencies are allowed to request a search of the lost and stolen gun database.

Private Databases

While there have been attempts at creating private searchable internet databases where individuals self-report their lost or stolen guns, these usually contain only a fraction of the number of actual stolen guns, and the information is not verifiable.

Some states are exploring or attempting to build a state database of lost or stolen firearms that is searchable by the public, online. For example, the Florida Crime Information Center maintains a website where an individual can search for many stolen or lost items, including cars, boats, personal property, and of course, firearms.

However, even this website warns:

“FDLE cannot represent that this information is current, active, or complete. You should verify that a stolen property report is active with your local law enforcement agency or with the reporting agency.”

Police Checks of Firearms

Having the local police check the federal database continues to be the most accurate way of ascertaining whether or not a used firearm is lost or stolen, but many police departments do not offer this service. And be forewarned: if the gun does come back as lost or stolen, the person who brought it to the police will not get it back. The true owner always has the right to have his or her stolen gun returned.

If you choose to purchase a firearm in a private sale, you should protect yourself. A bill of sale is the best way to accomplish this. If it turns out the firearm was stolen or previously used in a crime, you will need to demonstrate to the police when you came into possession of the firearm and from whom you made the purchase. You don’t want to be answering uncomfortable police questions without documentation to back you up.

On the flip side, if you are the one who happens to be the victim of gun theft, be sure to report it after speaking with an attorney, because while it may take several years, you never know when a police department may call you to return your gun.

 

David Katz is an independent program attorney for US LawShield. 

guns

via The Truth About Guns https://ift.tt/1TozHfp

September 14, 2020 at 03:15PM

Essential Climbing Knots You Should Know and How to Tie Them

https://ift.tt/3hpGUPr

Tying knots is an essential skill for climbing. Whether you’re tying in as a climber, building an anchor, or rappelling, using the right knot will make your climbing experience safer and easier.

Here, we’ll go over how to tie six common knots, hitches, and bends for climbing. Keep in mind, there are plenty of other useful knots.

And while this article can provide a helpful reminder, it’s by no means a substitute for learning from an experienced guide in person. It can, however, be a launching point for practicing some integral and common climbing knots at home.

This article includes:

  • Figure-eight follow-through
  • Overhand on a bight
  • Double fisherman’s bend
  • Clove hitch
  • Girth hitch
  • Prusik hitch

Knot-Tying Terms

Before we get into it, these are a few rope terms you’ll want to know for the rest of the article:

  • Knot — a knot is tied into a single rope or piece of webbing.
  • Bend — a bend joins two ropes together.
  • Hitch — a hitch connects the rope to another object like a carabiner, your harness, or another rope.
  • Bight — a section of rope between the two ends. This is usually folded over to make a loop.
  • Working end — the side of the rope that you’re using for the knot.
  • Standing end — the side of the rope that you’re not using for the knot.

Figure-Eight Follow-Through

This knot, also known as the trace-eight or rewoven figure-eight, is one of the first knots every rock climber will learn. It ties you into your harness as a climber.

To make this knot, hold the end of your rope in one hand and measure out from your fist to your opposite shoulder. Make a bight at that point so you have a loop with your working end on top. Wrap your working end around the base of your loop once, then poke the end through your loop from front to back.

Pull this tight and you should have your first figure-eight knot.

For the follow-through, if you’re tying into your harness, thread your working end through both tie-in points on your harness and pull the figure-eight close to you. Then, thread your working end back through the original figure-eight, tracing the original knot.

Once it’s all traced through, you should have five sets of parallel lines in your knot neatly next to each other. Pull all strands tight and make sure you have at least six inches of tail on your working end.

Overhand Knot on a Bight

This knot is great for anchor building, creating a central loop, or as a stopper.

Take a bight in the rope and pinch it into a loop — this loop now essentially becomes your working end.

Loop the bight over your standing strands then bring it under the rope and through the loop you just created. Dress your knot by making sure all strands run parallel and pull each strand tight.

Double Fisherman’s Bend

Use this knot when you need to join two ropes together or make a cord into a loop. The double fisherman’s is basically two double overhand knots tied next to each other.

To do this knot, line both rope ends next to each other. Hold one rope in your fist with your thumb on top. Wrap the working end of the other rope around your thumb and the first rope twice so it forms an X.

Take your thumb out and thread your working end through your X from the bottom up and pull tight. You should have one rope wrapped twice around the other strand with an X on one side and two parallel lines on the other.

Repeat this process with the working end of the other rope so you have one X and two parallel lines from each rope. Pull the two standing ends tight to bring both knots together.

Clove Hitch

This hitch is great for building anchors with your rope or securing your rope to a carabiner. The clove hitch is strong enough that it won’t move around when it’s weighted, but you can adjust each side to move the hitch around when unweighted.

To make this hitch, make two loops twisting in the same direction. Put your second loop behind the first, then clip your carabiner through both loops. Pull both strands tight and the rope should cinch down on the carabiner.

Girth Hitch

The girth hitch is ideal for attaching your personal anchor (or any sling) directly to your harness. The hitch is not adjustable like the clove hitch, but you can form it around any object as long as you have a loop.

Wrap your loop around the object, then feed the other end through your first loop so the rope or sling creates two strands around the object. Pull your working end tight.

Prusik Hitch

This is the most common friction hitch and is ideal for a rappel backup or ascending the rope. The friction hitch will grip the rope on either end when pulled tight, but can also easily move over a rope when loose.

To make your prusik hitch, you’re essentially making multiple girth hitches.

Put your loop behind the rope then thread the other end of your sling or cord through that loop. Loosely wrap the cord around the rope at least three times, threading through your original loop each time.

Pull the hitch tight around the rope then test it by making sure it successfully grips the rope.

The post Essential Climbing Knots You Should Know and How to Tie Them appeared first on GearJunkie.

Outdoors

via GearJunkie https://gearjunkie.com

September 14, 2020 at 10:15AM

A Step by Step Guide to Take your MySQL Instance to the Cloud

https://ift.tt/33nGXX6

You have a MySQL instance? Great. You want to take it to a cloud? Nothing new. You want to do it fast, minimizing downtime / service outage? “I wish,” I hear you say. Pull up a chair. Let’s have a chinwag.

Given the objective above, i.e. “I have a database server on premise and I want the data in the cloud to ‘serve’ my application”, we can go into details:

  • Export the data, hopefully to a cloud storage place ‘close’ to the destination (in my case, @OCI of course).
  • Create my MySQL cloud instance.
  • Import the data into the cloud instance.
  • Redirect the application to the cloud instance.

All this takes time. With a little preparation, we can reduce the outage time down to ‘just’ the sum of the export + import time. This means that once the export starts, we will have to put the application in “maintenance” mode, i.e. not allow more writes until we have our cloud environment available.

Depending on each cloud solution, the ‘export’ part could mean “export the data locally and then upload the data to cloud storage” which might add to the duration. Then, once the data is there, the import might allow us to read from the cloud storage, or require adjustments before the import can be fully completed.

Do you want to know more? https://mysqlserverteam.com/mysql-shell-8-0-21-speeding-up-the-dump-process/

Let’s get prepared then:

Main objective: keep application outage time down to minimum.

Preparation:

  • Have an OCI account, with the OCI CLI configuration in place.
  • Install MySQL Shell 8.0.21 on the on-premise environment.
  • Create an Object Storage bucket for the data upload.
  • Create our MySQL Database System.
  • Create our “Endpoint” Compute instance, and install MySQL Shell 8.0.21 & MySQL Router 8.0.21 there.
  • Test connectivity from PC to Object Storage, from PC to Endpoint, and, in effect, from PC to MDS.

So, now for our OCI environment setup. What do I need?

Really, we just need a few files configured with the right info. Nothing has to be installed. But if we do have the OCI CLI installed on our PC or similar, we’ll already have the configuration, so it’s even easier. (If you don’t have it installed, it’s still worth doing: once you’ve learned a few commands, it saves trips to the web console for things like grabbing the public IP of a freshly started Compute instance, or starting and stopping cloud environments.)

What we need is the config file from ~/.oci, which contains the following info:
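A typical config looks something like this (the OCIDs, fingerprint, and region below are placeholders for your own values):

[DEFAULT]
user=ocid1.user.oc1..<your_user_ocid>
fingerprint=1a:2b:3c:4d:5e:6f:7a:8b:9c:0d:1e:2f:3a:4b:5c:6d
key_file=~/.oci/oci_api_key.pem
tenancy=ocid1.tenancy.oc1..<your_tenancy_ocid>
region=us-ashburn-1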

You’ll need the API Key stuff as mentioned in the documentation “Required Keys and OCIDs”.

Remember, this is a one-off, and it really helps your OCI interaction in the future. Just do it.

The “config” file and the PEM key will allow us to send the data straight to the OCI Object Storage bucket.

MySQL Shell 8.0.21 install on-premise.
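On a yum-based system, for example, this boils down to something like (assuming the MySQL community repository is already configured):

$ sudo yum install -y mysql-shell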

Make a bucket.

I did this via the OCI console.

This creates a Standard Private bucket.

Click on the bucket name that now appears in the list, to see the details.

You will need to note down the Name and Namespace.
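For the record, the same can be done from the CLI, along these lines (the compartment OCID is a placeholder):

$ oci os ns get
$ oci os bucket create --compartment-id ocid1.compartment.oc1..<your_compartment_ocid> --name test-bucket

The first command returns the Namespace; the bucket Name is whatever you pass to --name.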

Create our MySQL Database System.

This is where the data will be uploaded to. This is also quite simple.

And hey presto. We have it.

Click on the name of the MDS system, and you’ll find an IP Address that matches your VCN config. For security reasons, this isn’t a public IP address.

On the left hand side, on the menu you’ll see “Endpoints”. Here we have the info that we will need for the next step.

For example, IP Address is 10.0.0.4.

Create our Endpoint Compute instance.

In order to access our MDS from outside the VCN, we’ll be using a simple Compute instance as a jump server.

Here we’ll install MySQL Router to be our proxy for external access.

And we’ll also install MySQL Shell to upload the data from our Object Storage bucket.

For example, https://gist.github.com/alastori/005ebce5d05897419026e58b9ab0701b.

First, go to the Security List of your VCN, and add an ingress rule for the port you want MySQL Router to listen on, allowing access from your application server’s IP address or from the on-premise public IP address assigned.

Router & Shell install ‘n’ configure
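As a rough sketch on a yum-based Compute instance (assuming the MySQL repo is available, and using the MDS endpoint 10.0.0.4 from earlier):

$ sudo yum install -y mysql-router mysql-shell

Then a minimal static routing section in /etc/mysqlrouter/mysqlrouter.conf tells Router to listen on 3306 and forward to MDS:

[routing:mds]
bind_address=0.0.0.0
bind_port=3306
destinations=10.0.0.4:3306
routing_strategy=first-available

$ sudo systemctl enable --now mysqlrouter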

Test connectivity.

Test MySQL Router as our proxy, via MySQL Shell:

$ mysqlsh root@kh01:3306 --sql -e 'show databases'

Now, we can test connectivity from our pc / application server / on-premise environment. Knowing the public IP address, let’s try:

$ mysqlsh root@<public-ip>:3306 --sql -e 'show databases'

If you get any issues here, check your ingress rules at your VCN level.

Also, double-check the OS firewall rules on the freshly created Compute instance.
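On Oracle Linux with firewalld, for instance, that check and fix might look like:

$ sudo firewall-cmd --list-ports
$ sudo firewall-cmd --permanent --add-port=3306/tcp
$ sudo firewall-cmd --reload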

Preparation is done.

We can connect to our MDS instance from the Compute instance where MySQL Router is installed, kh01, and also from our own (on-premise) environment.

Let’s get the data streaming.

MySQL Shell Dump Utility

In effect, this is where we’ll be ‘streaming’ data.

This means that from our on-premise host we’ll export the data into the osBucket in OCI, and at the same time, read from that bucket from our Compute host kh01 that will import the data into MDS.

First of all, I want to check the commands with “dryRun: true”.

util.dumpSchemas dryRun

From our own environment / on-premise installation, we now want to dump / export the data:

$ mysqlsh root@OnPremiseHost:3306

You’ll want to see what options are available and how to use the util.dumpSchemas utility:

mysqlsh> \help util.dumpSchemas

NAME
      dumpSchemas - Dumps the specified schemas to the files in the output
                    directory.

SYNTAX
      util.dumpSchemas(schemas, outputUrl[, options])

WHERE
      schemas: List of schemas to be dumped.
      outputUrl: Target directory to store the dump files.
      options: Dictionary with the dump options.

Here’s the command we’ll be using, but we want to activate the ‘dryRun’ mode, to make sure it’s all ok. So:

util.dumpSchemas(
    ["test"], "test",
    {dryRun: true, showProgress: true, threads: 8, ocimds: true,
     osBucketName: "test-bucket", osNamespace: "idazzjlcjqzj",
     ociConfigFile: "/home/os_user/.oci/config",
     compatibility: ["strip_definers"]}
)

["test"]               I just want to dump the test schema. I could put a list of                                schemas here.      Careful if you think you can export internal                                      schemas, ‘cos you can’t.

test”                             is the “outputURL target directort”. Watch the prefix of all the                        files being created in the bucket..

options:

dryRun:             Quite obvious. Change it to false to run.

showProgress:                 I want to see the progress of the loading.

threads:              Default is 4 but choose what you like here, according to the                                        resources available.

ocimds:              VERY IMPORTANT! This is to make sure that the                                      environment is “MDS Ready” so when the data gets to the                             cloud, nothing breaks.

osBucketName:   The name of the bucket we created.

osNamespace:                 The namespace of the bucket.

ociConfigFile:    This is what we looked at, right at the beginning. This what makes it easy. 

compatibility:                There are a list of options here that help reduce all customizations and/or simplify our data export ready for MDS.

Here I am looking at exporting / dumping just schemas. I could have dumped the whole instance via util.dumpInstance. Have a try!

I tested a local dumpSchemas export without OCIMDS readiness, and it’s worth sharing what I learned: this is how I found out that I needed a primary key to enable chunking, and hence a faster dump:

util.dumpSchemas(["test"], "/var/lib/mysql-files/test/test", {dryRun: true, showProgress: true})

Acquiring global read lock

All transactions have been started

Locking instance for backup

Global read lock has been released

Writing global DDL files

Preparing data dump for table `test`.`reviews`

Writing DDL for schema `test`

Writing DDL for table `test`.`reviews`

Data dump for table `test`.`reviews` will be chunked using column `review_id`

(I created the primary key on the review_id column and got rid of the following warning at the end:)

WARNING: Could not select a column to be used as an index for table `test`.`reviews`. Chunking has been disabled for this table, data will be dumped to a single file.
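For reference, the fix was a one-liner along these lines (assuming review_id holds unique, non-null values):

ALTER TABLE `test`.`reviews` ADD PRIMARY KEY (`review_id`);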

Anyway, I used dumpSchemas (instead of dumpInstance) with OCIMDS and then loaded with the following:

util.loadDump dryRun

Now, we’re on the compute we created, with Shell 8.0.21 installed and ready to upload / import the data:

$ mysqlsh root@kh01:3306

util.loadDump("test", {dryRun: true, showProgress: true, threads: 8, osBucketName: "test-bucket", osNamespace: "idazzjlcjqzj", ociConfigFile: "/home/osuser/.oci/config"})

As you might imagine, I’ve copied my PEM key and OCI CLI config file to the Compute instance, via scp, into the “$HOME/.oci” directory.

Loading DDL and Data from OCI ObjectStorage bucket=test-bucket, prefix='test' using 8 threads.

Util.loadDump: Failed opening object '@.json' in READ mode: Not Found (404) (RuntimeError)

This is due to the bucket being empty. You’ll see why it complains about the “@.json” in a second.

You want to do some “streaming”?

With our two session windows open, one from the on-premise instance and the other from the OCI Compute host, both connected with mysqlsh:

On-premise:

dry run:

util.dumpSchemas(["test"], "test", {dryRun: true, showProgress: true, threads: 8, ocimds: true, "osBucketName": "test-bucket", "osNamespace": "idazzjlcjqzj", ociConfigFile: "/home/os_user/.oci/config", "compatibility": ["strip_definers"]})

real:

util.dumpSchemas(["test"], "test", {dryRun: false, showProgress: true, threads: 8, ocimds: true, "osBucketName": "test-bucket", "osNamespace": "idazzjlcjqzj", ociConfigFile: "/home/os_user/.oci/config", "compatibility": ["strip_definers"]})

OCI Compute host:

dry run:

util.loadDump("test", {dryRun: true, showProgress: true, threads: 8, osBucketName: "test-bucket", osNamespace: "idazzjlcjqzj", waitDumpTimeout: 180})

real:

util.loadDump("test", {dryRun: false, showProgress: true, threads: 8, osBucketName: "test-bucket", osNamespace: "idazzjlcjqzj", waitDumpTimeout: 180})

They do say a picture is worth a thousand words; here are some images of each window, executed at the same time:

On-premise:

At the OCI compute host you can see the waitDumpTimeout take effect with:

NOTE: Dump is still ongoing, data will be loaded as it becomes available.

In the osBucket, we can now see content (which is what the loadDump is reading):

And once it’s all dumped ‘n’ uploaded we have the following output:

If you like logs, check .mysqlsh/mysqlsh.log, which records all the output, under the home directory of the user executing MySQL Shell (on-premise & OCI compute).
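To follow it live while the dump and load run:

$ tail -f ~/.mysqlsh/mysqlsh.log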

Now the data is all in our MySQL Database System, all we need to do is point the web server or the application server to the OCI Compute instance’s IP and port so that MySQL Router can route the connection to happiness!

Conclusion

technology

via Planet MySQL https://ift.tt/2iO8Ob8

September 13, 2020 at 11:32PM

Mining Firm CEO Resigns After Razing an Australian Indigenous Site

https://ift.tt/3hvd3p7


[Photo: The Rio Tinto building in Brisbane. Credit: William West/AFP (Getty Images)]

Three executives from the mining company that detonated a 46,000-year-old Indigenous Australian heritage site to expand an iron ore mine—and later insisted that it did nothing wrong—are leaving the company.

Rio Tinto destroyed the Juukan 1 and Juukan 2 rock shelters in the Pilbara region of Western Australia in May 2020, blasting out of existence a site of major cultural importance to the Puutu Kunti Kurrama and Pinikura People (PKKP). Technically, the firm did this in complete compliance with the law, as it had secured consent from a minister years earlier under Section 18 of Australia’s Aboriginal Heritage Act. In 2014, Rio Tinto did fund a final archaeological expedition to extract items of importance from the rock shelters, turning up findings whose “significance exceeded all expectations,” as the Sydney Morning Herald reported, including grinding and pounding stones, a 28,000-year-old bone tool, and parts of a 4,000-year-old belt made of human hair.

The archaeologists in the expedition recommended that the Juukan 1 and Juukan 2 sites be subject to further exploration. Instead, Rio Tinto commenced with the detonation, claiming at the last minute the charges couldn’t be safely removed. The company then issued a statement claiming it had worked “constructively together with the PKKP people on a range of heritage matters” and to “protect places of cultural significance to the group.” It seemingly apologized in June, but iron ore business head Chris Salisbury later clarified that the company didn’t actually regret blowing up the site, just the “distress the event caused.”

Now out at Rio Tinto, according to CNN, are CEO Jean-Sébastien Jacques, Salisbury, and corporate relations group executive Simone Niven. Jacques will remain until his successor is chosen or until the end of March. Salisbury is stepping down immediately, and both he and Niven will leave the company entirely at the end of the year. Though the executives will collectively be penalized by around $5 million in bonuses, they will still collect an exit payment including long-term bonuses.

Rio Tinto chairman Simon Thompson told CNN in a statement, “what happened at Juukan was wrong. We are determined to ensure that the destruction of a heritage site of such exceptional archaeological and cultural significance never occurs again at a Rio Tinto operation.”

CEO Jamie Lowe of the National Native Title Council, which represents Indigenous groups in Australia, tweeted that while the NTTC “welcomes” the executives’ ousting, “this is not the end.” 

“We cannot and will not allow this type of devastation to occur ever again,” the PKKP Aboriginal Corporation told the New York Times in a statement.

Hesta, a superannuation fund which holds a stake in Rio Tinto, previously demanded a public inquiry and called the executives’ removal inadequate.

“Mining companies that fail to negotiate fairly and in good faith with traditional owners expose the company to reputational and legal risk,” the fund said, according to the Guardian. “These risks increase the longer these agreements are in place. Without an independent review, we cannot adequately assess these risks and understand how they may impact value. We have lost confidence that the company can do this on their own.”

Allan Fels, an economist and lawyer consulted by Hesta, told the Guardian, “there are potential unconscionable conduct issues, both at the legal and ethical level. They need to be investigated independently.”

According to a review conducted by the paper, mining companies have obtained ministerial permission to destroy more than 100 ancient indigenous sites in Western Australia alone. This is far from Rio Tinto’s first brush with human rights violations. The company has also been accused of “grossly unethical conduct” by the Norwegian pension fund. Indigenous Australian lawyer and land rights activist Noel Pearson told the Times the resignations were a major step forward and that, “in the past, Indigenous people would have nobody to rely on in the case of vandalism like this.” But University of Queensland sociologist Kristen Lyons told the paper that nothing had changed about structural laws that advantage corporations over Indigenous peoples, nor did the executives’ departures “address the profound inequity in who has rights over decision making.”

geeky,Tech

via Gizmodo https://gizmodo.com

September 11, 2020 at 04:21PM