Buying A Gun In A Private Sale? Is There a Way to Check If It’s Stolen?

https://ift.tt/3mmOMoM


By David Katz

You’re looking for a gun for everyday carry, a shotgun for hunting season, or perhaps you just want a nice used gun to add to your collection. You also want to find a really good deal and the gun market is tight right now. A private sale might be just the way to go.

Federal law doesn’t prohibit private sales between individuals who reside in the same state, and the vast majority of states do not require that a private sale be facilitated by a federally licensed gun dealer (“FFL”). However, have you thought about what would happen to you if you bought a gun that turned out to be lost or stolen? Even worse, what would happen if you purchased a firearm that had been used in a crime?

Unfortunately, these things can happen. Further, there is no practical way for you to ensure a gun you purchase from a stranger is not lost or stolen.

FBI Lost and Stolen Gun Database

When a firearm is lost or stolen, the owner should immediately report it to the police. In fact, if a gun is lost or stolen from an FFL, the law requires the FFL to report the missing firearm to the ATF. These reported firearms are entered into a database maintained by the FBI’s National Crime Information Center.

Unfortunately for purchasers in private sales, only law enforcement agencies are allowed to request a search of the lost and stolen gun database.

Private Databases

While there have been attempts at creating private searchable internet databases where individuals self-report their lost or stolen guns, these usually contain only a fraction of the number of actual stolen guns, and the information is not verifiable.

Some states are exploring or attempting to build a state database of lost or stolen firearms that is searchable by the public, online. For example, the Florida Crime Information Center maintains a website where an individual can search for many stolen or lost items, including cars, boats, personal property, and of course, firearms.

However, even this website warns:

“FDLE cannot represent that this information is current, active, or complete. You should verify that a stolen property report is active with your local law enforcement agency or with the reporting agency.”

Police Checks of Firearms

Having the local police check the federal database continues to be the most accurate way of ascertaining whether or not a used firearm is lost or stolen, but many police departments do not offer this service. And be forewarned: if the gun does come back as lost or stolen, the person who brought it to the police will not get it back. The true owner always has the right to have his or her stolen gun returned.

If you choose to purchase a firearm in a private sale, you should protect yourself. A bill of sale is the best way to accomplish this. If it turns out the firearm was stolen or previously used in a crime, you will need to demonstrate to the police when you came into possession of the firearm and from whom you made the purchase. You don’t want to be answering uncomfortable police questions without documentation to back you up.

On the flip side, if you are the one who happens to be the victim of gun theft, be sure to report it after speaking with an attorney. It may take several years, but you never know when a police department may call you to return your gun.

 

David Katz is an independent program attorney for US LawShield. 

guns

via The Truth About Guns https://ift.tt/1TozHfp

September 14, 2020 at 03:15PM

Essential Climbing Knots You Should Know and How to Tie Them

https://ift.tt/3hpGUPr

Tying knots is an essential skill for climbing. Whether you’re tying in as a climber, building an anchor, or rappelling, using the right knot will make your climbing experience safer and easier.

Here, we’ll go over how to tie six common knots, hitches, and bends for climbing. Keep in mind, there are plenty of other useful knots.

And while this article can provide a helpful reminder, it’s by no means a substitute for learning from an experienced guide in person. However, this can be a launching point for you to practice some integral and common climbing knots at home.

This article includes:

  • Figure-eight follow-through
  • Overhand on a bight
  • Double fisherman’s bend
  • Clove hitch
  • Girth hitch
  • Prusik hitch

Knot-Tying Terms

Before we get into it, these are a few rope terms you’ll want to know for the rest of the article:

  • Knot — a knot is tied into a single rope or piece of webbing.
  • Bend — a bend joins two ropes together.
  • Hitch — a hitch connects the rope to another object like a carabiner, your harness, or another rope.
  • Bight — a section of rope between the two ends. This is usually folded over to make a loop.
  • Working end — the side of the rope that you’re using for the knot.
  • Standing end — the side of the rope that you’re not using for the knot.

Figure-Eight Follow-Through

This knot, also known as the trace-eight or rewoven figure-eight, is one of the first knots every rock climber will learn. It ties you into your harness as a climber.

To make this knot, hold the end of your rope in one hand and measure out from your fist to your opposite shoulder. Make a bight at that point so you have a loop with your working end on top. Wrap your working end around the base of your loop once, then poke the end through your loop from front to back.

Pull this tight and you should have your first figure-eight knot.

For the follow-through, if you’re tying into your harness, thread your working end through both tie-in points on your harness and pull the figure-eight close to you. Then, thread your working end back through the original figure-eight, tracing the original knot.

Once it’s all traced through, you should have five sets of parallel lines in your knot neatly next to each other. Pull all strands tight and make sure you have at least six inches of tail on your working end.

Overhand Knot on a Bight

This knot is great for anchor building, creating a central loop, or as a stopper.

Take a bight on the rope and pinch it into a loop — this loop now essentially becomes your working end.

Loop the bight over your standing strands then bring it under the rope and through the loop you just created. Dress your knot by making sure all strands run parallel and pull each strand tight.

Double Fisherman’s Bend

Use this knot when you need to join two ropes together or make a cord into a loop. The double fisherman’s is basically two double knots next to each other.

To do this knot, line both rope ends next to each other. Hold one rope in your fist with your thumb on top. Wrap the working end of the other rope around your thumb and the first rope twice so it forms an X.

Take your thumb out and thread your working end through your X from the bottom up and pull tight. You should have one rope wrapped twice around the other strand with an X on one side and two parallel lines on the other.

Repeat this process with the working end of the other rope so you have one X and two parallel lines from each rope. Pull the two standing ends tight to bring both knots together.

Clove Hitch

This hitch is great for building anchors with your rope or securing your rope to a carabiner. The clove hitch is strong enough that it won’t move around when it’s weighted, but you can adjust each side to move the hitch around when unweighted.

To make this hitch, make two loops twisting in the same direction. Put your second loop behind the first, then clip your carabiner through both loops. Pull both strands tight and the rope should cinch down on the carabiner.

Girth Hitch

The girth hitch is ideal for attaching your personal anchor (or any sling) directly to your harness. The hitch is not adjustable like the clove hitch, but you can form it around any object as long as you have a loop.

Wrap your loop around the object, then feed the other end through your first loop so the rope or sling creates two strands around the object. Pull your working end tight.

Prusik Hitch

This is the most common friction hitch and is ideal for a rappel backup or ascending the rope. The friction hitch will grip the rope on either end when pulled tight, but can also easily move over a rope when loose.

To make your prusik hitch, you’re essentially making multiple girth hitches.

Put your loop behind the rope then thread the other end of your sling or cord through that loop. Loosely wrap the cord around the rope at least three times, threading through your original loop each time.

Pull the hitch tight around the rope then test it by making sure it successfully grips the rope.

The post Essential Climbing Knots You Should Know and How to Tie Them appeared first on GearJunkie.

Outdoors

via GearJunkie https://gearjunkie.com

September 14, 2020 at 10:15AM

A Step by Step Guide to Take your MySQL Instance to the Cloud

https://ift.tt/33nGXX6

You have a MySQL instance? Great. You want to take it to a cloud? Nothing new. You want to do it fast, minimizing downtime / service outage? “I wish” I hear you say. Pull up a chair. Let’s have a chinwag.

Given the objective above, i.e. “I have a database server on premise and I want the data in the cloud to ‘serve’ my application”, we can go into details:

  • Export the data, hopefully to a cloud storage place ‘close’ to the destination (in my case, @OCI of course).
  • Create my MySQL cloud instance.
  • Import the data into the cloud instance.
  • Redirect the application to the cloud instance.

All this takes time. With a little preparation we can reduce the outage time down to be ‘just’ the sum of the export + import time. This means that once the export starts, we will have to set the application in “maintenance” mode, i.e. not allow more writes until we have our cloud environment available. 

Depending on each cloud solution, the ‘export’ part could mean “export the data locally and then upload the data to cloud storage” which might add to the duration. Then, once the data is there, the import might allow us to read from the cloud storage, or require adjustments before the import can be fully completed.

Do you want to know more? https://mysqlserverteam.com/mysql-shell-8-0-21-speeding-up-the-dump-process/

 Let’s get prepared then:

Main objective: keep application outage time down to minimum.

Preparation:

  • You have an OCI account, and the OCI CLI configuration is in place.
  • MySQL Shell 8.0.21 is installed on the on-premise environment.
  • We create an Object Storage bucket for the data upload.
  • Create our MySQL Database System.
  • We create our “Endpoint” Compute instance, and install MySQL Shell 8.0.21 & MySQL Router 8.0.21 here.
  • Test connectivity from PC to Object storage, from PC to Endpoint, and, in effect, from PC to MDS.

So, now for our OCI environment setup. What do I need?

Really, we just need a couple of files configured with the right info; nothing has to be installed. If we already have the OCI CLI installed on our PC, then we already have that configuration, so it’s even easier. (Even if you don’t, installing it is worthwhile: once you’ve learned a few commands it lets you skip the web console for things like grabbing the Public IP of a freshly started Compute instance, or starting / stopping these cloud environments.)

What we need is the config file from .oci, which contains the following info:
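It looks something like the following (a sketch with placeholder values; your OCIDs, region and key path will differ):

[DEFAULT]
user=ocid1.user.oc1..aaaaaaaa...
fingerprint=xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx
tenancy=ocid1.tenancy.oc1..aaaaaaaa...
region=eu-frankfurt-1
key_file=/home/os_user/.oci/oci_api_key.pem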

You’ll need the API Key stuff as mentioned in the documentation “Required Keys and OCIDs”.

Remember, this is a one-off, and it really helps your OCI interaction in the future. Just do it.

The “config” file and the PEM key will allow us to send the data straight to the OCI Object Storage bucket.

MySQL Shell 8.0.21 install on-premise.
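The gist of this step on a yum-based host is the following (a sketch; the exact repo RPM release may differ from the one shown):

# add the MySQL community repo, then install the shell
sudo yum install -y https://dev.mysql.com/get/mysql80-community-release-el7-3.noarch.rpm
sudo yum install -y mysql-shell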

Make a bucket.

I did this via the OCI console.

This creates a Standard Private bucket.

Click on the bucket name that now appears in the list, to see the details.

You will need to note down the Name and Namespace.
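If you prefer the command line over the console, the OCI CLI can do the same; something along these lines (the compartment OCID is a placeholder):

# the Object Storage namespace of your tenancy (the "Namespace" value above)
oci os ns get

# create a private, Standard-tier bucket
oci os bucket create --name test-bucket --compartment-id ocid1.compartment.oc1..aaaaaaaa...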

Create our MySQL Database System.

This is where the data will be uploaded to. This is also quite simple.

And hey presto. We have it.
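I clicked through the console for this, but for reference the equivalent OCI CLI call is roughly the following (shape, OCIDs, availability domain and password are placeholders, and flag names may vary with your CLI version):

oci mysql db-system create \
  --compartment-id ocid1.compartment.oc1..aaaaaaaa... \
  --subnet-id ocid1.subnet.oc1..aaaaaaaa... \
  --availability-domain "AD-1" \
  --shape-name MySQL.VM.Standard.E3.1.8GB \
  --admin-username admin \
  --admin-password 'MyS3cretPassw0rd!' \
  --data-storage-size-in-gbs 50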

Click on the name of the MDS system, and you’ll find that there’s an IP Address according to your VCN config. This isn’t a public IP address for security reasons.

On the left hand side, on the menu you’ll see “Endpoints”. Here we have the info that we will need for the next step.

For example, IP Address is 10.0.0.4.

Create our Endpoint Compute instance.

In order to access our MDS from outside the VCN, we’ll be using a simple Compute instance as a jump server.

Here we’ll install MySQL Router to be our proxy for external access.

And we’ll also install MySQL Shell to upload the data from our Object Storage bucket.

For example, https://gist.github.com/alastori/005ebce5d05897419026e58b9ab0701b.

First, go to the Security List of your VCN, and add an ingress rule for the port you want to use in Router, allowing access from the IP address of your application server or from the assigned on-premise public IP address.

Router & Shell install ‘n’ configure

Test connectivity.

Test MySQL Router as our proxy, via MySQL Shell:

$ mysqlsh root@kh01:3306 --sql -e 'show databases'

Now, we can test connectivity from our pc / application server / on-premise environment. Knowing the public IP address, let’s try:

$ mysqlsh root@<public-ip>:3306 --sql -e 'show databases'

If you get any issues here, check your ingress rules at your VCN level.

Also, double check your o.s. firewall rules on the freshly created compute instance too.

Preparation is done.

We can connect to our MDS instance from the Compute instance where MySQL Router is installed, kh01, and also from our own (on-premise) environment.

Let’s get the data streaming.

MySQL Shell Dump Utility

In effect, it’s here when we’ll be ‘streaming’ data.

This means that from our on-premise host we’ll export the data into the osBucket in OCI, and at the same time, read from that bucket from our Compute host kh01 that will import the data into MDS.

First of all, I want to check the commands with “dryRun: true”.

util.dumpSchemas dryRun

From our own environment / on-premise installation, we now want to dump / export the data:

$ mysqlsh root@OnPremiseHost:3306

You’ll want to see what options are available and how to use the util.dumpSchemas utility:

mysqlsh> \help util.dumpSchemas

NAME
      dumpSchemas - Dumps the specified schemas to the files in the output
                    directory.

SYNTAX
      util.dumpSchemas(schemas, outputUrl[, options])

WHERE
      schemas: List of schemas to be dumped.
      outputUrl: Target directory to store the dump files.
      options: Dictionary with the dump options.

Here’s the command we’ll be using, but we want to activate the ‘dryRun’ mode, to make sure it’s all ok. So:

util.dumpSchemas(
    ["test"], "test",
    {dryRun: true, showProgress: true, threads: 8, ocimds: true,
     "osBucketName": "test-bucket", "osNamespace": "idazzjlcjqzj",
     ociConfigFile: "/home/os_user/.oci/config",
     "compatibility": ["strip_definers"]}
)

["test"]               I just want to dump the test schema. I could put a list of                                schemas here.      Careful if you think you can export internal                                      schemas, ‘cos you can’t.

test”                             is the “outputURL target directort”. Watch the prefix of all the                        files being created in the bucket..

options:

dryRun:             Quite obvious. Change it to false to run.

showProgress:                 I want to see the progress of the loading.

threads:              Default is 4 but choose what you like here, according to the                                        resources available.

ocimds:              VERY IMPORTANT! This is to make sure that the                                      environment is “MDS Ready” so when the data gets to the                             cloud, nothing breaks.

osBucketName:   The name of the bucket we created.

osNamespace:                 The namespace of the bucket.

ociConfigFile:    This is what we looked at, right at the beginning. This what makes it easy. 

compatibility:                There are a list of options here that help reduce all customizations and/or simplify our data export ready for MDS.

Here I am looking at exporting / dumping just schemas. I could have dumped the whole instance via util.dumpInstance. Have a try!

I tested a local dumpSchemas export without the OCIMDS readiness check, and it's worth sharing: this is how I found out that I needed a primary key to enable chunking, and hence get a faster dump:

util.dumpSchemas(["test"], "/var/lib/mysql-files/test/test", {dryRun: true, showProgress: true})

Acquiring global read lock

All transactions have been started

Locking instance for backup

Global read lock has been released

Writing global DDL files

Preparing data dump for table `test`.`reviews`

Writing DDL for schema `test`

Writing DDL for table `test`.`reviews`

Data dump for table `test`.`reviews` will be chunked using column `review_id`

(I created a primary key on the review_id column, which got rid of the following warning that previously appeared at the end:)

WARNING: Could not select a column to be used as an index for table `test`.`reviews`. Chunking has been disabled for this table, data will be dumped to a single file.

Anyway, I used dumpSchemas (instead of dumpInstance) with OCIMDS and then loaded with the following:

util.loadDump dryRun

Now, we’re on the compute we created, with Shell 8.0.21 installed and ready to upload / import the data:

$ mysqlsh root@kh01:3306

util.loadDump("test", {dryRun: true, showProgress: true, threads: 8, osBucketName: "test-bucket", osNamespace: "idazzjlcjqzj", ociConfigFile: "/home/osuser/.oci/config"})

As imagined, I’ve copied my PEM key and OCI CLI config file to the compute via scp, into a $HOME/.oci directory.

Loading DDL and Data from OCI ObjectStorage bucket=test-bucket, prefix=’test’ using 8 threads.

Util.loadDump: Failed opening object ‘@.json’ in READ mode: Not Found (404) (RuntimeError)

This is due to the bucket being empty. You’ll see why it complains of the “@.json” in a second.

You want to do some “streaming”?

With our 2 session windows opened, 1 from the on-premise instance and the other from the OCI compute host, connected with mysqlsh:

On-premise:

dry run:

util.dumpSchemas(["test"], "test", {dryRun: true, showProgress: true, threads: 8, ocimds: true, "osBucketName": "test-bucket", "osNamespace": "idazzjlcjqzj", ociConfigFile: "/home/os_user/.oci/config", "compatibility": ["strip_definers"]})

real:

util.dumpSchemas(["test"], "test", {dryRun: false, showProgress: true, threads: 8, ocimds: true, "osBucketName": "test-bucket", "osNamespace": "idazzjlcjqzj", ociConfigFile: "/home/os_user/.oci/config", "compatibility": ["strip_definers"]})

OCI Compute host:

dry run:

util.loadDump("test", {dryRun: true, showProgress: true, threads: 8, osBucketName: "test-bucket", osNamespace: "idazzjlcjqzj", waitDumpTimeout: 180})

real:

util.loadDump("test", {dryRun: false, showProgress: true, threads: 8, osBucketName: "test-bucket", osNamespace: "idazzjlcjqzj", waitDumpTimeout: 180})

They do say a picture is worth a thousand words; here are some images of each window, executed at the same time:

On-premise:

At the OCI compute host you can see the waitDumpTimeout take effect with:

NOTE: Dump is still ongoing, data will be loaded as it becomes available.

In the osBucket, we can now see content (which is what the loadDump is reading):

And once it’s all dumped ‘n’ uploaded we have the following output:

If you like logs, then check .mysqlsh/mysqlsh.log, which records all the output, under the directory where you have executed MySQL Shell (on-premise & OCI compute).

Now that the data is all in our MySQL Database System, all we need to do is point the web server or the application server to the OCI Compute system’s IP and port so that MySQL Router can route the connection to happiness!

Conclusion

With the OCI config file in place, MySQL Shell’s dump and load utilities let us stream the data from the on-premise server through an Object Storage bucket and into the MySQL Database System while the dump is still running, keeping the application outage down to little more than the export + import window itself.

technology

via Planet MySQL https://ift.tt/2iO8Ob8

September 13, 2020 at 11:32PM

Mining Firm CEO Resigns After Razing an Australian Indigenous Site

https://ift.tt/3hvd3p7


The Rio Tinto building in Brisbane.
Photo: William West/AFP (Getty Images)

Three executives from the mining company that detonated a 46,000-year-old  Indigenous Australian heritage site to expand an iron ore mine—and later insisted that it did nothing wrong—are leaving the company.

Rio Tinto destroyed the Juukan 1 and Juukan 2 rock shelters in the Pilbara region of Western Australia in May 2020, blasting out of existence a site of major cultural importance to the Puutu Kunti Kurrama and Pinikura People (PKKP). Technically, the firm did this in complete compliance with the law, as it secured consent from a minister years earlier under Section 18 of Australia’s Aboriginal Heritage Act. In 2014 Rio Tinto did fund a final archaeological expedition to extract items of importance from the rock shelters, turning up findings whose “significance exceeded all expectations,” the Sydney Morning Herald reported, such as grinding and pounding stones, a 28,000-year-old bone tool, and parts of a 4,000-year-old belt made of human hair.

The archaeologists in the expedition recommended that the Juukan 1 and Juukan 2 sites be subject to further exploration. Instead, Rio Tinto commenced with the detonation, claiming at the last minute the charges couldn’t be safely removed. The company then issued a statement claiming it had worked “constructively together with the PKKP people on a range of heritage matters” and to “protect places of cultural significance to the group.” It seemingly apologized in June, but iron ore business head Chris Salisbury later clarified that the company didn’t actually regret blowing up the site, just the “distress the event caused.”

Now out at Rio Tinto, according to CNN, are CEO Jean-Sébastien Jacques, Salisbury, and corporate relations group executive Simone Niven. Jacques will remain until his successor is chosen or until the end of March. Salisbury is stepping down immediately, and both he and Niven will leave the company entirely at the end of the year. Though the executives collectively will be penalized by around $5 million in bonuses, they will still collect an exit payment including long-term bonuses.

Rio Tinto chairman Simon Thompson told CNN in a statement, “what happened at Juukan was wrong. We are determined to ensure that the destruction of a heritage site of such exceptional archaeological and cultural significance never occurs again at a Rio Tinto operation.”

CEO Jamie Lowe of the National Native Title Council, which represents Indigenous groups in Australia, tweeted that while the NTTC “welcomes” the executives’ ousting, “this is not the end.” 

“We cannot and will not allow this type of devastation to occur ever again,” the PKKP Aboriginal Corporation told the New York Times in a statement.

Hesta, a superannuation fund which holds a stake in Rio Tinto, previously demanded a public inquiry and called the executives’ removal inadequate.

“Mining companies that fail to negotiate fairly and in good faith with traditional owners expose the company to reputational and legal risk,” the fund said, according to the Guardian. “These risks increase the longer these agreements are in place. Without an independent review, we cannot adequately assess these risks and understand how they may impact value. We have lost confidence that the company can do this on their own.”

Allan Fels, an economist and lawyer consulted by Hesta, told the Guardian, “there are potential unconscionable conduct issues, both at the legal and ethical level. They need to be investigated independently.”

According to a review conducted by the paper, mining companies have obtained ministerial permission to destroy more than 100 ancient Indigenous sites in Western Australia alone. Nor is this Rio Tinto’s first brush with human rights controversies: the company has also been accused of “grossly unethical conduct” by the Norwegian pension fund. Indigenous Australian lawyer and land rights activist Noel Pearson told the Times the resignations were a major step forward and that, “in the past, Indigenous people would have nobody to rely on in the case of vandalism like this.” But University of Queensland sociologist Kristen Lyons told the paper that nothing had changed about the structural laws that advantage corporations over Indigenous peoples, nor did the executives’ departures “address the profound inequity in who has rights over decision making.”

geeky,Tech

via Gizmodo https://gizmodo.com

September 11, 2020 at 04:21PM

Laravel Roles Abilities Tutorial

https://ift.tt/2FbGNu2


Authorization is one of Laravel's security features. It provides a simple way to authorize user actions, and in this tutorial we'll use it to implement roles and abilities logic.

Content:

  • Installation
  • Models
  • Controllers
  • Views
  • Seeders
  • Authorization
  • Conclusion

Installation

  • Clone the repository

  • Install composer dependencies

    composer install
    
  • Create .env file

    cp .env.example .env
    
  • Generate application key

    php artisan key:generate
    
  • Set database connection environment variable

  • Run migrations and seeds

    php artisan migrate --seed
    
  • Following are the super user default credentials

    email: super@example.com, password: secret

  • Following are the demo user default credentials

    email: user@example.com, password: secret

Models

The Role model groups the abilities that will be granted to related users.

<?php

namespace App;

use Illuminate\Database\Eloquent\Model;

class Role extends Model
{
    /**
     * The attributes that are mass assignable.
     *
     * @var array
     */
    protected $fillable = [
        'name',
    ];

    /**
     * The users that belong to the role.
     */
    public function users()
    {
        return $this->belongsToMany('App\User');
    }

    /**
     * The abilities that belong to the role.
     */
    public function abilities()
    {
        return $this->belongsToMany('App\Ability');
    }
}

The Ability model represents the actions that need to be authorized.

<?php

namespace App;

use Illuminate\Database\Eloquent\Model;

class Ability extends Model
{
    /**
     * The attributes that are mass assignable.
     *
     * @var array
     */
    protected $fillable = [
        'name',
    ];

    /**
     * The roles that belong to the ability.
     */
    public function roles()
    {
        return $this->belongsToMany('App\Role');
    }
}

Controllers

To authorize controller actions we use the authorize helper method, which accepts the name of the ability needed to perform the action.

UserController and RoleController handle the management of users and roles, including relating users to roles and roles to abilities. The logic is simply made of CRUD actions and Eloquent relationship manipulation.
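A minimal sketch of this pattern (the action and view names here are illustrative, not taken from the repository) could look like:

<?php

namespace App\Http\Controllers;

use App\User;

class UserController extends Controller
{
    /**
     * Display a listing of the users.
     */
    public function index()
    {
        // Aborts with a 403 unless the gate grants this ability to the current user.
        $this->authorize('view-any-user');

        return view('users.index', ['users' => User::all()]);
    }
}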

Views

To display only the portions of the page that users are authorized to use, we'll rely on the @can and @canany Blade directives.
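For instance, a hypothetical users index view could gate its action links like this (the route names are illustrative):

@can('create-user')
    <a href="{{ route('users.create') }}">Create user</a>
@endcan

@canany(['update-user', 'delete-user'])
    {{-- edit / delete controls rendered only for authorized users --}}
@endcanany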

Seeders

AbilitySeeder contains an indexed array of strings where each element is an ability; when executed, it will sync the abilities in the database.

<?php

use Illuminate\Database\Seeder;
use App\Ability;
use Illuminate\Support\Facades\DB;

class AbilitySeeder extends Seeder
{
    public $abilities = [
        'view-any-user', 'view-user', 'create-user', 'update-user', 'delete-user',
        'view-any-role', 'view-role', 'create-role', 'update-role', 'delete-role',
    ];
    
    /**
     * Run the database seeds.
     *
     * @return void
     */
    public function run()
    {
        $removedAbilities = Ability::whereNotIn('name', $this->abilities)->pluck('id');
        DB::table('ability_role')->whereIn('ability_id', $removedAbilities)->delete();
        Ability::whereIn('id', $removedAbilities)->delete();
        $presentAbilities = Ability::whereIn('name', $this->abilities)->get();
        $absentAbilities = $presentAbilities->isEmpty() ? $this->abilities : array_diff($this->abilities, $presentAbilities->pluck('name')->toArray());
        if ($absentAbilities) {
            $absentAbilities = array_map(function ($ability) {
                return ['name' => $ability];
            }, $absentAbilities);
            Ability::insert($absentAbilities);
        }
    }
}

Whenever the abilities are modified, run the following command to sync the database.

php artisan db:seed --class AbilitySeeder

SuperUserSeeder will create a super user using credentials provided in config/auth.php, which can be set via the AUTH_SUPER_USER_EMAIL and AUTH_SUPER_USER_PASSWORD environment variables. The super user bypasses the authorization logic and is therefore granted all abilities.

<?php

use Illuminate\Database\Seeder;
use App\User;
use Illuminate\Support\Facades\Hash;

class SuperUserSeeder extends Seeder
{
    /**
     * Run the database seeds.
     *
     * @return void
     */
    public function run()
    {
        User::where('super', true)->delete();
        User::create([
            'email' => config('auth.super_user.email'),
            'name' => 'super',
            'super' => true,
            'password' => Hash::make(config('auth.super_user.password')),
        ]);
    }
}
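The config/auth.php entry these credentials come from isn't shown above, but given the calls to config('auth.super_user.email') and config('auth.super_user.password'), it presumably looks something like this (a sketch, with the defaults from the installation section):

'super_user' => [
    'email' => env('AUTH_SUPER_USER_EMAIL', 'super@example.com'),
    'password' => env('AUTH_SUPER_USER_PASSWORD', 'secret'),
],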

Whenever the super user needs to be changed, update the corresponding environment variables and run the following command, which will delete the current super user and create a new one.

php artisan db:seed --class SuperUserSeeder

Authorization

The authorization takes place in AuthServiceProvider, where we use the Gate::before method to intercept gate checks and verify whether the user is super or is granted the ability through any of their roles.

use Illuminate\Support\Facades\Gate;
use Illuminate\Database\Eloquent\Builder;

/**
 * Register any authentication / authorization services.
 *
 * @return void
 */
public function boot()
{
    $this->registerPolicies();

    //

    Gate::before(function ($user, $ability) {
        if ($user->super) {
            return true;
        } else {
            return $user
                ->roles()
                ->whereHas('abilities', function (Builder $query) use ($ability) {
                    $query->where('name', $ability);
                })
                ->exists();
        }
    });
}

Conclusion

Laravel has a lot to offer, and having a general idea of what it provides helps in finding the best solution. In this tutorial we've used Authorization and Seeders as the base of the roles and abilities system.

programming

via Laravel News Links https://ift.tt/2dvygAJ

September 10, 2020 at 01:48PM

Synology announces the DS1621xs+, a high-end network attached storage device

https://ift.tt/2DNHTeJ


Synology's new DS1621xs+ network-attached storage device expands your network storage to up to 96 terabytes, with ultra-fast read and write speeds.

The DS1621xs+ was designed to meet the growing need of at-home workers. The small size fits in well in nearly any workspace, and the quad-core Xeon processor, user-upgradeable ECC memory, and onboard 10-gigabit Ethernet paired with two Gigabit Ethernet ports provide high-performance data storage, container solutions like Docker, and file management.

It boasts 3.1Gbps read and 1.8 Gbps write speeds, making it a perfect solution for power users or larger data sets from multiple users. A pair of M.2 slots provide for fast caching.

Beyond the PCI-E x8 slot internal to the device, three USB 3.1 type A ports allow for external expansion.

Inside the DS1621xs+ are six internal 3.5-inch bays, which allow for up to 96 terabytes of storage. Should users need more space, it can expand to 256 terabytes of storage by adding on two additional DX517 expansion units for a total of 16 bays.

The DS1621xs+ is available from B&H Photo for $1599.99 with no drives, and is expected to ship within two weeks.

AppleInsider has previously reviewed a Synology NAS — the DS-1618+ — and gave it a 4 out of 5, praising its impressive power to price ratio.

macintosh

via AppleInsider https://ift.tt/3dGGYcl

September 10, 2020 at 02:48PM

How to Clean a Smith and Wesson M&P

https://ift.tt/3k33Ndb

In the attached video, you will see two variations of this firearm. The first is the M&P Pro Series with the extended barrel as well as fiber-optic sights.

The second is a modified duty-length weapon with a replacement slide and barrel, both by Faxon, as well as an Apex Forward Reset trigger group.

Neither of these modifications changes the method of cleaning. The only thing, as you will see in the video, is that the thread protector on the threaded barrel may stick and require a tool to remove.

Seeing as I had a second gun, I simply showed the process on the non-threaded barreled gun. This process also works across all the different calibers of the Smith and Wesson M&P.

The first rule of gun cleaning is: be safe. As such, each gun must have the magazine removed and the chamber checked before any further work is done.

That being said, here are the steps involved in cleaning a Smith and Wesson M&P:

Step 1: Takedown

The next step, after ensuring the gun is unloaded, is to lock the slide back and use a tool to manipulate the takedown wire inside the magwell.

With this moved into place, the takedown lever must be moved into the vertical position, then the slide can be removed.

 

Disassembly of Smith and Wesson M&P handgun
The Smith and Wesson M&P has a special tab you must depress to remove the slide from the frame.

With the slide off, the captive recoil spring is removed next. Then the barrel will easily slip out as well.

This is the extent to which the gun should be broken down for routine cleaning. There is no need to break down the frame components or the trigger assembly unless a failure has happened. The firearm can be cleaned and lubricated like this.

Step 2: Cleaning

The barrel is the largest area in need of cleaning. The chamber area and the lands and grooves often are the most caked with carbon. With this in mind, I run a wet patch or mop over those areas with my cleaner or carbon cutter.

In the video, I use Kroil, as I find it to be a quality cleaner, especially for routine work.

I prefer drip or soak applicators that are designed for cleaning, as opposed to items like G-96 that act as a cleaner, lubricant and protector rolled into one.

For deep cleaning, single-purpose cleaners are better. For solid lubrication, dedicated lubricants are also better.

CLP products like G-96 are great for light cleaning, as well as for things like a carry gun that is not shot a lot, but needs frequent removal of dust and reapplication of lubricant and sweat protection.

They are also great when an aerosol is needed to reach the recesses of a firearm or to blast away accumulated fuzz and dust.

This may be overkill, but it works for me. Another point that many will see as overkill is my unwillingness to run anything other than a patch or mop in the opposite direction of bullet travel.

In the video, I pointed this out and even did so with the wet patch. I do not always honor this with a wet patch, as the point is to ensure the surface is wet and there is no damage potential from grinding of grit or the brush across the rifling.

With brushes (even nylon), I only push them through in the direction the bullet travels. This greatly decreases potential wear from the brush or the grit embedded in it.

It also keeps all dirt and debris moving away from the action.

 

Cleaning Smith and Wesson M&P Barrel
Go with the rifling when cleaning the barrel of your Smith and Wesson M&P to prevent damage.

When I use a wet patch and it does not come out terribly dirty, I will use that patch for cleaning the exterior of the barrel, the recoil spring and other areas until it accumulates too much carbon or dirt.

I am frugal, and patches and cleaner are not free. Additional wet patches can be used, as needed.  One for the barrel and one for the frame is common on a lightly-shot gun.

Using several per major component is not uncommon for a well-fouled firearm.

A specific area many people miss on the Smith and Wesson M&P is the spring inside the magwell. A quick pass with a cleaner-soaked patch will loosen any grime.

Just be sure to lubricate it after the cleaning is done. This is best done with another wet patch soaked in lube.

As mentioned earlier, I start by soaking the inside of the barrel first. After I have cleaned the rest of the firearm, I return with the brush to work on the inside of the barrel.

This allows the cleaner time to act on deposits and simplifies the cleaning process. The fewer strokes taken with a brush, the less likely you are to damage the rifling.

Also, why work hard when you can work smart? It is also useful to use a roller-bearing rod with the brush. This allows the brush to follow, instead of fight, the rifling.

By following the rifling, you get a better cleaning action as well as reduce wear.

The last step of cleaning is to remove the cleaner, which will pick up any debris missed by the previous passes. I always remove the cleaner prior to applying the lubricant, as the cleaner will dilute the utility of the lubricant if left in place.

Step 3: Lubrication

Lubricants vary in their purpose. Some are very light and evaporate quickly. Some are designed to be thicker and last longer. The first type is great for frequent reapplications, like CLP products.

Many of the second type stay around longer, but tend to attract dust and grit if used on high-use items like carry guns.

I have found a product that has low evaporative qualities, great adherence (it stays even when wiped off) and low attraction to dust and grime when applied thinly.

This product is AWT Extreme Force Lube. It can be applied thicker in guns that like to be run wet (AR’s) without too much run or creep, and as a full-synthetic, it is very good in high-temperature environments.

I like to use a needle applicator so I can limit the film depth and to get into the recessed areas like the trigger springs and the firing pin assembly.

 

Applying oil to Smith and Wesson M&P handgun
It is important to apply lubrication to the slide rails of the Smith and Wesson M&P to keep it functioning properly and prevent wear.

On the Smith and Wesson M&P, lube needs to be applied to the entire exterior of the barrel. It should also be applied to the rails and groves of the slide, the recoil spring and the above-mentioned areas.

The lube should not be left “wet”. The gun requires a light film; so, after applying the lube, a wipe with a clean patch is great to spread it out and leave a thin layer.

I apply drops to the slide rail areas and cycle the action to distribute the lube there.

Step 4: Reassembly and Function Check

Reassembly of the gun is done in the reverse order of takedown. The barrel is mounted into the slide. The captive recoil spring is fitted to the barrel and slide.

The slide is slipped back onto the frame rails and put back to the slide lock location. The takedown lever is moved into the horizontal position.

Then, after dropping the slide lock, cycle the action several times to ensure proper function. Dry fire the gun, install a magazine and make sure a round will chamber.

If both the dry fire and the chambering work, the gun is back to being functional and clean.

How do you clean your firearms? Let us know in the comments section below!

guns

via The Shooter’s Log https://ift.tt/2VbyLGM

September 10, 2020 at 08:32AM

Varbox – FREE Laravel admin panel for businesses & freelancers

https://varbox.io


  • Is there a free version of the Varbox platform?

    Yes!
    To get started you just need to download a free release.
    The free version is under the MIT license so it can be used for commercial purposes.

  • I’m stuck! Where can I ask questions?

    If you have Varbox-specific problems or questions, you can use our GitHub Issues repository, or search for help on Stack Overflow.

  • What do I get by purchasing a license?

    A unique license code.
    Using that license code will give your project legal permission to use the paid version of Varbox for commercial purposes.
    Your invoice will contain the unique license code and instructions on where to place it, after payment.

  • Can I get discounts for buying licenses?

    Yes!
    Depending on your requirements, you should get in touch with us for volume discounts or a lifetime commercial license for unlimited projects.

  • How long does one Varbox license last?

    The Varbox commercial license lasts forever!
    Also, you can update to any release of Varbox for that project without any additional costs.

  • Does it cover local / testing / staging environments?

    Yes!
    For your staging server, please use the same license code you’ll be using on your production domain.
    On localhost you don’t need a license code at all.

  • What is considered to be a “project”?

    A project is a single installed instance of Varbox.
    You may not use a single Varbox license for multi-tenant websites where there are multiple copies of Varbox running.
    Each of those is considered a separate project and requires its own license.

programming

via Laravel News Links https://ift.tt/2dvygAJ

September 9, 2020 at 02:06PM