Laravel 8 step by step CRUD

Laravel 8 step by step CRUD

https://ift.tt/2RBK73T


If you are new to Laravel 8 and looking for a step by step Laravel 8 CRUD example app, then this post will help you learn how to make a complete CRUD application using Laravel 8. Before starting, we have to check that our system meets the requirements for Laravel 8. The Laravel 8 minimum requirements are listed below so that you can confirm whether your system is ready to install a Laravel 8 project for making a Laravel 8 CRUD application.

Laravel 8 requirements

  • PHP >= 7.3
  • BCMath PHP Extension
  • Ctype PHP Extension
  • Fileinfo PHP Extension
  • JSON PHP Extension
  • Mbstring PHP Extension
  • OpenSSL PHP Extension
  • PDO PHP Extension
  • Tokenizer PHP Extension
  • XML PHP Extension

 

Steps for making Laravel 8 CRUD

  • Step 01: Install Laravel 8
  • Step 02: Database Configuration
  • Step 03: Make model & migration
  • Step 04: Make controller
  • Step 05: Define routes
  • Step 06: Make views

 

Step 01: Install Laravel 8

First, install a fresh Laravel 8 project. To do that, open your command prompt and run the Composer command below, which will create a new Laravel 8 project for you. Before running it, make sure you have a stable internet connection; the command will take some time depending on your connection speed.

composer create-project laravel/laravel laravel8-project 8.0

N.B: Replace laravel8-project with your own project name. A folder with this name will be created in your projects directory.

 

Step 02: Database Configuration

Now create a database in MySQL via phpMyAdmin or another MySQL client of your choice. Then open the .env file in the Laravel 8 project and update the database details.

DB_CONNECTION=mysql
DB_HOST=127.0.0.1
DB_PORT=3306
DB_DATABASE=laravel8
DB_USERNAME=root
DB_PASSWORD=

Here laravel8 is our database name. If your database name is different, update it and save. Our project creation is finished and the database is ready to use.

 

Step 03: Make model & migration

We will make a contact list Laravel 8 CRUD example application, so we need a contacts table in our database. We will not create the table manually; instead we use a Laravel migration, which will make the table for us when we run it. Run this command in your terminal.

php artisan make:model Contact -m

This command will create a Contact.php model class file in the app/Models directory of our Laravel 8 project, and a migration file will be created in the database/migrations directory.
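Because the controller below creates and updates contacts through mass assignment, the Contact model needs a $fillable list. Here is a minimal sketch of app/Models/Contact.php (the HasFactory trait is the Laravel 8 default; the columns match our migration):

<?php

namespace App\Models;

use Illuminate\Database\Eloquent\Factories\HasFactory;
use Illuminate\Database\Eloquent\Model;

class Contact extends Model
{
    use HasFactory;

    // Columns that may be mass assigned by create() and update()
    protected $fillable = ['name', 'email', 'phone'];
}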

 

Now open the migration file from the database/migrations directory of your Laravel 8 project and replace its code with the code below.

<?php

use Illuminate\Support\Facades\Schema;
use Illuminate\Database\Schema\Blueprint;
use Illuminate\Database\Migrations\Migration;

class CreateContactsTable extends Migration
{
    /**
     * Run the migrations.
     *
     * @return void
     */
    public function up()
    {
        Schema::create('contacts', function (Blueprint $table) {
            $table->bigIncrements('id');
            $table->string('name');
            $table->string('email');
            $table->string('phone');
            $table->timestamps();
        });
    }

    /**
     * Reverse the migrations.
     *
     * @return void
     */
    public function down()
    {
        Schema::dropIfExists('contacts');
    }
} 

Our migration file is ready. Now run the migration with the command below; it will create the contacts table in our database.

php artisan migrate

 

 

Step 04: Make controller

All the business logic of our Laravel 8 CRUD system will be coded in the controller. To make the controller, run this command.

php artisan make:controller ContactController

This command will create a file named ContactController.php in app/Http/Controllers. Write the code below in ContactController.php.

<?php

namespace App\Http\Controllers;

use App\Models\Contact;
use Illuminate\Http\Request;

class ContactController extends Controller
{

    public function index()
    {
        $data = Contact::orderBy('id','desc')->paginate(10)->setPath('contacts');
        return view('admin.contacts.index',compact(['data']));
    }

    public function create()
    {
        return view('admin.contacts.create');
    }

    public function store(Request $request)
    {
        $request->validate([
         'name' => 'required',
         'email' => 'required|email',
         'phone' => 'required'
        ]);

        Contact::create($request->all());
        return redirect()->back()->with('success','Created Successfully');
    }

    public function show($id)
    {
       $data =  Contact::find($id);
       return view('admin.contacts.show',compact(['data']));
    }

    public function edit($id)
    {
       $data = Contact::find($id);
       return view('admin.contacts.edit',compact(['data']));
    }

    public function update(Request $request, $id)
    {
        $request->validate([
         'name' => 'required',
         'email' => 'required|email',
         'phone' => 'required'
        ]);

        Contact::where('id', $id)->update($request->only(['name', 'email', 'phone']));
        return redirect()->back()->with('success','Updated Successfully');
        
    }

    public function destroy($id)
    {
        Contact::where('id',$id)->delete();
        return redirect()->back()->with('success','Deleted Successfully');
    }

}

 

Step 05: Define routes

Open the web.php file from the routes folder and add the route below.

use App\Http\Controllers\ContactController;

Route::resource('contacts', ContactController::class);

Here we are using a Laravel resource route, which registers all the routes needed for the Laravel 8 CRUD example app. Note that Laravel 8 no longer applies a default controller namespace, so we reference the controller class directly instead of the old 'ContactController' string syntax.
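For reference, that single resource route registers the standard seven routes (this is default Laravel behaviour, summarised here the way php artisan route:list would show them):

GET        /contacts                   ContactController@index     contacts.index
GET        /contacts/create            ContactController@create    contacts.create
POST       /contacts                   ContactController@store     contacts.store
GET        /contacts/{contact}         ContactController@show      contacts.show
GET        /contacts/{contact}/edit    ContactController@edit      contacts.edit
PUT/PATCH  /contacts/{contact}         ContactController@update    contacts.update
DELETE     /contacts/{contact}         ContactController@destroy   contacts.destroy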

 

Step 06: Make views

Here is the final part. We need some forms and HTML markup to show our records and to insert and update data. Let's make those views. Since the controller returns views like admin.contacts.index, create an admin/contacts folder inside resources/views so that all views related to the contact CRUD stay in the same folder, organized.

We need the Laravel H package for making HTML forms easily. Install it with Composer.

composer require haruncpi/laravel-h
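The views below extend a layout view that this post does not show. A minimal sketch of resources/views/layout.blade.php, assuming Bootstrap from a CDN and flashing the 'success' session message set by the controller:

<!DOCTYPE html>
<html>
<head>
    <title>Laravel 8 CRUD</title>
    <link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/bootstrap@4.6.2/dist/css/bootstrap.min.css">
</head>
<body>
    <div class="container">
        <div class="row">
            @if(session('success'))
                <div class="alert alert-success col-md-12">{{ session('success') }}</div>
            @endif

            @yield('content')
        </div>
    </div>
</body>
</html>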

Create an index.blade.php file in resources/views/admin/contacts to show all our records from the database.

@extends('layout') 
@section('content')
<div class="col-md-12">

    <div class="table-responsive">
        <table class="table table-bordered table-condensed table-striped">
            <thead>
                <tr>
                    <th>ID</th>
                    <th>NAME</th>
                    <th>EMAIL</th>
                    <th>PHONE</th>
                    <th>ACTION</th>
                </tr>
            </thead>

            <tbody>
                @foreach($data as $row)
                <tr>
                    <td>{{ $row->id }}</td>
                    <td>{{ $row->name }}</td>
                    <td>{{ $row->email }}</td>
                    <td>{{ $row->phone }}</td>

                    <td>
                        <a href="{{ route('contacts.edit', $row->id) }}" class="btn btn-primary">Edit</a>

                        <form action="{{ route('contacts.destroy', $row->id) }}" method="post">
                            @csrf @method('DELETE')
                            <button class="btn btn-danger" type="submit">Delete</button>
                        </form>

                    </td>
                </tr>
                @endforeach
            </tbody>

        </table>
    </div>
    <div>
        {!! $data->render() !!}
    </div>
</div>

@endsection

Create a create.blade.php file for inserting data.

@extends('layout')

@section('content')
{!! F::open(['action' =>'ContactController@store', 'method' => 'POST'])!!}
    
    <div class="col-md-6">
        
			 <div class="form-group required">
				{!! F::label("NAME") !!}
				{!! F::text("name", null ,["class"=>"form-control","required"=>"required"]) !!}
			</div>

			 <div class="form-group required">
				{!! F::label("EMAIL") !!}
				{!! F::text("email", null ,["class"=>"form-control","required"=>"required"]) !!}
			</div>

			 <div class="form-group required">
				{!! F::label("PHONE") !!}
				{!! F::text("phone", null ,["class"=>"form-control","required"=>"required"]) !!}
			</div>

   
        <div class="well well-sm clearfix">
            <button class="btn btn-success pull-right" title="Save" type="submit">Create</button>
        </div>
    </div>
 
{!! Form::close() !!}
@endsection

Create an edit.blade.php file to edit data.

@extends('layout')

@section('content')
    {!! F::open(['action' =>['ContactController@update',$data->id], 'method' => 'PUT'])!!}
    
        <div class="col-md-6">

            
			 <div class="form-group required">
				{!! F::label("NAME") !!}
				{!! F::text("name", $data->name ,["class"=>"form-control","required"=>"required"]) !!}
			</div>

			 <div class="form-group required">
				{!! F::label("EMAIL") !!}
				{!! F::text("email", $data->email ,["class"=>"form-control","required"=>"required"]) !!}
			</div>

			 <div class="form-group required">
				{!! F::label("PHONE") !!}
				{!! F::text("phone", $data->phone ,["class"=>"form-control","required"=>"required"]) !!}
			</div>



            <div class="well well-sm clearfix">
                <button class="btn btn-success pull-right" title="Save" type="submit">Update</button>
            </div>
        </div>
        
    {!! Form::close() !!}
@endsection

 

Now our Laravel 8 CRUD app is ready to use. To test the Laravel 8 CRUD operations, first run the server with the php artisan serve command, then open your browser and browse to http://localhost:8000/contacts

Hope this step by step tutorial on the Laravel 8 CRUD app helps you make your own CRUD system using Laravel 8. If you find this tutorial helpful, please share it with others.

programming

via Laravel News Links https://ift.tt/2dvygAJ

September 15, 2020 at 08:15PM

How to fix ‘Target class does not exist’ in Laravel 8

How to fix ‘Target class does not exist’ in Laravel 8

https://ift.tt/2E5om9N


How do I fix this?

The problem here is that Laravel has no idea where to look for your controller, so all we have to do is let it know! There are 3 ways you can accomplish this:

  • Add the namespace back manually so you can use it as you did in Laravel 7.x and before
  • Use the full namespace in your route files when using the string-syntax
  • Use the action syntax (recommended)

Adding the namespace manually

This is fairly simple. Go into your app/Providers/RouteServiceProvider.php file and find the boot() method where the web and api route groups are registered.

All you need to do is add the following three lines to this file and Laravel will go back to using the default namespace as in Laravel 7.x:
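The original shows this as a screenshot; here is a sketch of app/Providers/RouteServiceProvider.php with the three additions marked (the surrounding code follows Laravel 8's default provider, which may differ slightly between 8.x releases):

<?php

namespace App\Providers;

use Illuminate\Foundation\Support\Providers\RouteServiceProvider as ServiceProvider;
use Illuminate\Support\Facades\Route;

class RouteServiceProvider extends ServiceProvider
{
    // 1. Declare the default controller namespace again
    protected $namespace = 'App\Http\Controllers';

    public function boot()
    {
        $this->routes(function () {
            Route::prefix('api')
                ->middleware('api')
                ->namespace($this->namespace) // 2. apply it to api routes
                ->group(base_path('routes/api.php'));

            Route::middleware('web')
                ->namespace($this->namespace) // 3. apply it to web routes
                ->group(base_path('routes/web.php'));
        });
    }
}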

What did we just do? We declared the $namespace property with the default namespace for our controllers and told Laravel to use it for our web and api routes.

If you try to run your app again, everything should be working.

Using the full namespace

This one involves changing all your route declarations, but the idea is simple: prepend your controller names with their namespace. See the following example for our PostsController inside the app/Http/Controllers folder.
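The example was an image in the original post; a sketch of the string syntax with the full namespace (the /posts URI and the all method are assumptions based on the rest of this article):

// routes/web.php
Route::get('/posts', 'App\Http\Controllers\PostsController@all');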

If you try again, everything should be running smoothly.

Using the Action Syntax

This is the alternative I personally recommend as I find it more typo-proof and in my experience provides better IDE support as we are explicitly telling the code which class to use. Instead of using our usual string syntax, we can use the action syntax where we specify the class and method to use in an array:
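Again an image in the original; a sketch of the action syntax, assuming the same /posts route and all method:

// routes/web.php
use App\Http\Controllers\PostsController;

Route::get('/posts', [PostsController::class, 'all']);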

Notice here we are not passing the PostsController within quotes but rather PostsController::class, which internally will return 'App\Http\Controllers\PostsController'. The second value in the array is the method to call within that controller, meaning: "In PostsController.php, call the 'all' method."

Again, if you try to run your app again, everything should be up and running.

Closing Remarks

By now, your app should be up and running again. If not, please feel free to ask for help. Everyone in the community is eager to give a hand.

Whether you added the namespace manually, specified the full namespace in your routes, or went with the action syntax, what you just did is tell Laravel in which namespace your controllers actually are, so now it actually knows where to look.

If you liked what you read or want to learn more cool stuff related to Laravel, you can follow me on Twitter, where I post about coding, entrepreneurship, and living a better life.

programming

via Laravel News Links https://ift.tt/2dvygAJ

September 15, 2020 at 08:15PM

Sadequl Hussain: 7 Best Practice Tips for PostgreSQL Bulk Data Loading

Sadequl Hussain: 7 Best Practice Tips for PostgreSQL Bulk Data Loading

https://postgr.es/p/4U4

Sometimes, PostgreSQL databases need to import large quantities of data in a single or a minimal number of steps. This process can be sometimes unacceptably slow. In this article, we will cover some best practice tips for bulk importing data into PostgreSQL databases.

Postgresql

via Planet PostgreSQL https://ift.tt/2g0pqKY

September 15, 2020 at 11:21AM

The Mandalorian’s Season 2 Trailer Is Here, and It Brought Baby Yoda

The Mandalorian’s Season 2 Trailer Is Here, and It Brought Baby Yoda

https://ift.tt/3c1qN9L



This is the way to more episodes of The Mandalorian. And more Baby Yoda adorableness, of course.

Out of nowhere, Lucasfilm dropped our very first look at The Mandalorian’s sophomore season in action, picking up where season one left off: Din Djarin (Pedro Pascal), our titular bounty hunting hero, and his newly-inducted clanmate “The Child” jetting off on a quest to not just find the little green force-user’s people, but to keep themselves safe from the sinister grip of Imperial Remnant officer Moff Gideon (Giancarlo Esposito).

While the trailer doesn’t give too much away—just like season one’s cryptic footage sneakily hiding tiny Baby Yoda’s massive presence in the show—there have been plenty of rumors hinting to expect tons of familiar faces and major explorations of the Star Wars canon as we know it this season. From teases for the return of Clone Wars favorites like Ahsoka Tano and Mandalorian Death Watch agent Bo-Katan Kryze, there’s also the helmeted elephant in the room: Temuera Morrison’s alleged return as legendary Bounty Hunter Boba Fett.

How will all that factor in? Will Moff Gideon get his grubby hands on the baby? Will Din, indeed, find the way? We won’t have much longer to find out: The Mandalorian will return to Disney+ on October 30th.



For more, make sure you’re following us on our Instagram @io9dotcom.

geeky,Tech

via Gizmodo https://gizmodo.com

September 15, 2020 at 10:21AM

Team finds vitamin D deficiency and COVID-19 infection link

Team finds vitamin D deficiency and COVID-19 infection link

https://ift.tt/3klxGFM


There’s an association between vitamin D deficiency and the likelihood of becoming infected with COVID-19, according to a new retrospective study of people tested for COVID-19.

“Vitamin D is important to the function of the immune system and vitamin D supplements have previously been shown to lower the risk of viral respiratory tract infections,” says David Meltzer, professor of medicine and chief of hospital medicine at University of Chicago Medicine and lead author of the study in JAMA Network Open. “Our statistical analysis suggests this may be true for the COVID-19 infection.”

The research team looked at 489 patients whose vitamin D level had been measured within a year before being tested for COVID-19. Patients who had untreated vitamin D deficiency (defined as less than 20 nanograms per milliliter of blood) were almost twice as likely to test positive for COVID-19 compared to patients who had sufficient levels of the vitamin.

Researchers stress it’s important to note that the study only found the two conditions frequently seen together; it does not prove causation. Meltzer and colleagues plan further clinical trials.

Experts believe half of Americans have a vitamin D deficiency, with much higher rates seen in African Americans, Hispanics, and people living in areas like Chicago where it is difficult to get enough sun exposure in winter.

Research has also shown, however, that some kinds of vitamin D tests don’t detect the form of vitamin D present in a majority of African Americans—which means those tests might falsely diagnose vitamin D deficiencies. The current study accepted either kind of test as criteria.

COVID-19 is also more prevalent among African Americans, older adults, nursing home residents, and health care workers—populations who all have increased risk of vitamin D deficiency.

“Understanding whether treating vitamin D deficiency changes COVID-19 risk could be of great importance locally, nationally, and globally,” Meltzer says. “Vitamin D is inexpensive, generally very safe to take, and can be widely scaled.”

Meltzer and his team emphasize the importance of experimental studies to determine whether vitamin D supplementation can reduce the risk, and potentially severity, of COVID-19. They also highlight the need for studies of what strategies for vitamin D supplementation may be most appropriate in specific populations.

The University of Chicago/Rush University Institute for Translational Medicine Clinical and Translational Science Award and the African American Cardiovascular Pharmacogenetic Consortium funded the work.

Source: Gretchen Rubin for University of Chicago

The post Team finds vitamin D deficiency and COVID-19 infection link appeared first on Futurity.

via Futurity.org https://ift.tt/2p1obR5

September 15, 2020 at 01:35PM

Buying A Gun In A Private Sale? Is There a Way to Check If It’s Stolen?

Buying A Gun In A Private Sale? Is There a Way to Check If It’s Stolen?

https://ift.tt/3mmOMoM


By David Katz

You’re looking for a gun for everyday carry, a shotgun for hunting season, or perhaps you just want a nice used gun to add to your collection. You also want to find a really good deal and the gun market is tight right now. A private sale might be just the way to go.

Federal law doesn’t prohibit private sales between individuals who reside in the same state, and the vast majority of states do not require that a private sale be facilitated by a federally licensed gun dealer (“FFL”). However, the more you think about it, what would happen to you if you bought a gun that turned out to be lost or stolen? Even worse, what would happen if you purchased a firearm that had been used in a crime?

Unfortunately, these things can happen. Further, there is no practical way for you to ensure a gun you purchase from a stranger is not lost or stolen.

FBI Lost and Stolen Gun Database

When a firearm is lost or stolen, the owner should immediately report it to the police. In fact, if a gun is lost or stolen from an FFL, the law requires the FFL to report the missing firearm to the ATF. These reported firearms are entered into a database maintained by the FBI’s National Crime Information Center.

Unfortunately for purchasers in private sales, only law enforcement agencies are allowed to request a search of the lost and stolen gun database.

Private Databases

While there have been attempts at creating private searchable internet databases where individuals self-report their lost or stolen guns, these usually contain only a fraction of the number of actual stolen guns, and the information is not verifiable.

Some states are exploring or attempting to build a state database of lost or stolen firearms that is searchable by the public, online. For example, the Florida Crime Information Center maintains a website where an individual can search for many stolen or lost items, including cars, boats, personal property, and of course, firearms.

However, even this website warns:

“FDLE cannot represent that this information is current, active, or complete. You should verify that a stolen property report is active with your local law enforcement agency or with the reporting agency.”

Police Checks of Firearms

Having the local police check the federal database continues to be the most accurate way of ascertaining whether or not a used firearm is lost or stolen, but many police departments do not offer this service. And be forewarned: if the gun does come back as lost or stolen, the person who brought it to the police will not get it back. The true owner always has the right to have his or her stolen gun returned.

If you choose to purchase a firearm in a private sale, you should protect yourself. A bill of sale is the best way to accomplish this. If it turns out the firearm was stolen or previously used in a crime, you will need to demonstrate to the police when you came into possession of the firearm and from whom you made the purchase. You don’t want to be answering uncomfortable police questions without documentation to back you up.

On the flip side, if you are the one who happens to be the victim of gun theft, be sure to report it after speaking with an attorney. Because while it may take several years, you never know when a police department may be calling you to return your gun.

 

David Katz is an independent program attorney for US LawShield. 

guns

via The Truth About Guns https://ift.tt/1TozHfp

September 14, 2020 at 03:15PM

Essential Climbing Knots You Should Know and How to Tie Them

Essential Climbing Knots You Should Know and How to Tie Them

https://ift.tt/3hpGUPr

Tying knots is an essential skill for climbing. Whether you’re tying in as a climber, building an anchor, or rappelling, using the right knot will make your climbing experience safer and easier.

Here, we’ll go over how to tie six common knots, hitches, and bends for climbing. Keep in mind, there are plenty of other useful knots.

And while this article can provide a helpful reminder, it’s by no means a substitute for learning from an experienced guide in person. However, this can be a launching point for you to practice some integral and common climbing knots at home.

This article includes:

  • Figure-eight follow-through
  • Overhand on a bight
  • Double fisherman’s bend
  • Clove hitch
  • Girth hitch
  • Prusik hitch

Knot-Tying Terms

Before we get into it, these are a few rope terms you’ll want to know for the rest of the article:

  • Knot — a knot is tied into a single rope or piece of webbing.
  • Bend — a bend joins two ropes together.
  • Hitch — a hitch connects the rope to another object like a carabiner, your harness, or another rope.
  • Bight — a section of rope between the two ends. This is usually folded over to make a loop.
  • Working end — the side of the rope that you’re using for the knot.
  • Standing end — the side of the rope that you’re not using for the knot.

Figure-Eight Follow-Through

This knot, also known as the trace-eight or rewoven figure-eight, is one of the first knots every rock climber will learn. It ties you into your harness as a climber.

To make this knot, hold the end of your rope in one hand and measure out from your fist to your opposite shoulder. Make a bight at that point so you have a loop with your working end on top. Wrap your working end around the base of your loop once, then poke the end through your loop from front to back.

Pull this tight and you should have your first figure-eight knot.

For the follow-through, if you’re tying into your harness, thread your working end through both tie-in points on your harness and pull the figure-eight close to you. Then, thread your working end back through the original figure-eight, tracing the original knot.

Once it’s all traced through, you should have five sets of parallel lines in your knot neatly next to each other. Pull all strands tight and make sure you have at least six inches of tail on your working end.

Overhand Knot on a Bight

This knot is great for anchor building, creating a central loop, or as a stopper.

Take a bight on the rope and pinch it into a loop — this loop now essentially becomes your working end.

Loop the bight over your standing strands then bring it under the rope and through the loop you just created. Dress your knot by making sure all strands run parallel and pull each strand tight.

Double Fisherman’s Bend

Use this knot when you need to join two ropes together or make a cord into a loop. The double fisherman’s is basically two double knots next to each other.

To do this knot, line both rope ends next to each other. Hold one rope in your fist with your thumb on top. Wrap the working end of the other rope around your thumb and the first rope twice so it forms an X.

Take your thumb out and thread your working end through your X from the bottom up and pull tight. You should have one rope wrapped twice around the other strand with an X on one side and two parallel lines on the other.

Repeat this process with the working end of the other rope so you have one X and two parallel lines from each rope. Pull the two standing ends tight to bring both knots together.

Clove Hitch

This hitch is great for building anchors with your rope or securing your rope to a carabiner. The clove hitch is strong enough that it won’t move around when it’s weighted, but you can adjust each side to move the hitch around when unweighted.

To make this hitch, make two loops twisting in the same direction. Put your second loop behind the first, then clip your carabiner through both loops. Pull both strands tight and the rope should cinch down on the carabiner.

Girth Hitch

The girth hitch is ideal for attaching your personal anchor (or any sling) directly to your harness. The hitch is not adjustable like the clove hitch, but you can form it around any object as long as you have a loop.

Wrap your loop around the object, then feed the other end through your first loop so the rope or sling creates two strands around the object. Pull your working end tight.

Prusik Hitch

This is the most common friction hitch and is ideal for a rappel backup or ascending the rope. The friction hitch will grip the rope on either end when pulled tight, but can also easily move over a rope when loose.

To make your prusik hitch, you’re essentially making multiple girth hitches.

Put your loop behind the rope then thread the other end of your sling or cord through that loop. Loosely wrap the cord around the rope at least three times, threading through your original loop each time.

Pull the hitch tight around the rope then test it by making sure it successfully grips the rope.

The post Essential Climbing Knots You Should Know and How to Tie Them appeared first on GearJunkie.

Outdoors

via GearJunkie https://gearjunkie.com

September 14, 2020 at 10:15AM

A Step by Step Guide to Take your MySQL Instance to the Cloud

A Step by Step Guide to Take your MySQL Instance to the Cloud

https://ift.tt/33nGXX6

You have a MySQL instance? Great. You want to take it to a cloud? Nothing new. You want to do it fast, minimizing downtime / service outage? “I wish” I hear you say. Pull up a chair. Let’s have a chinwag.

Given the objective above, i.e. “I have a database server on premise and I want the data in the cloud to ‘serve’ my application”, we can go into details:

  • Export the data – hopefully make that export find a cloud storage place ‘close’ to the destination (in my case, @OCI of course).
  • Create my MySQL cloud instance.
  • Import the data into the cloud instance.
  • Redirect the application to the cloud instance.

All this takes time. With a little preparation we can reduce the outage time down to be ‘just’ the sum of the export + import time. This means that once the export starts, we will have to set the application in “maintenance” mode, i.e. not allow more writes until we have our cloud environment available. 

Depending on each cloud solution, the ‘export’ part could mean “export the data locally and then upload the data to cloud storage” which might add to the duration. Then, once the data is there, the import might allow us to read from the cloud storage, or require adjustments before the import can be fully completed.

Do you want to know more? https://mysqlserverteam.com/mysql-shell-8-0-21-speeding-up-the-dump-process/

 Let’s get prepared then:

Main objective: keep application outage time down to minimum.

Preparation:

  • You have an OCI account, and the OCI CLI configuration is in place.
  • MySQL Shell 8.0.21 is installed on the on-premise environment.
  • We create an Object Storage bucket for the data upload.
  • Create our MySQL Database System.
  • We create our “Endpoint” Compute instance, and install MySQL Shell 8.0.21 & MySQL Router 8.0.21 here.
  • Test connectivity from PC to Object storage, from PC to Endpoint, and, in effect, from PC to MDS.

So, now for our OCI environment setup. What do I need?

Really, we just need some files configured with the right info; nothing else has to be installed. If we already have the OCI CLI installed on our PC, then we’ll have the configuration in place, so it’s even easier. (If you don’t have it installed, it is worth doing: it helps avoid the web console once we have learned a few commands, so we can easily get things like the public IP of a recently started Compute instance, or start / stop these cloud environments.)

What we need is the config file from .oci, which contains the following info:
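(The original shows this as a screenshot.) A typical ~/.oci/config looks like the sketch below; every value is a placeholder to replace with your own user and tenancy OCIDs, API key fingerprint, key path and region:

[DEFAULT]
user=ocid1.user.oc1..<your-user-ocid>
fingerprint=<your-api-key-fingerprint>
key_file=/home/os_user/.oci/oci_api_key.pem
tenancy=ocid1.tenancy.oc1..<your-tenancy-ocid>
region=<your-region, e.g. eu-frankfurt-1>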

You’ll need the API Key stuff as mentioned in the documentation “Required Keys and OCIDs”.

Remember, this is a one-off, and it really helps your OCI interaction in the future. Just do it.

The “config” file and the PEM key will allow us to send the data straight to the OCI Object Storage bucket.

MySQL Shell 8.0.21 install on-premise.
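The install itself was shown as screenshots; on an RPM-based host it is roughly the following (the exact release RPM and versions may differ for your distribution):

# Add the MySQL community repository, then install MySQL Shell
sudo yum install -y https://dev.mysql.com/get/mysql80-community-release-el7-3.noarch.rpm
sudo yum install -y mysql-shell
mysqlsh --version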

Make a bucket.

I did this via the OCI console.

This creates a Standard Private bucket.

Click on the bucket name that now appears in the list, to see the details.

You will need to note down the Name and Namespace.

Create our MySQL Database System.

This is where the data will be uploaded to. This is also quite simple.

And hey presto. We have it.

Click on the name of the MDS system, and you’ll find that there’s an IP Address according to your VCN config. This isn’t a public IP address for security reasons.

On the left hand side, on the menu you’ll see “Endpoints”. Here we have the info that we will need for the next step.

For example, IP Address is 10.0.0.4.

Create our Endpoint Compute instance.

In order to access our MDS from outside the VCN, we’ll be using a simple Compute instance as a jump server.

Here we’ll install MySQL Router to be our proxy for external access.

And we’ll also install MySQL Shell to upload the data from our Object Storage bucket.

For example, https://gist.github.com/alastori/005ebce5d05897419026e58b9ab0701b.

First, go to the Security List of your OCI compartment, and add an ingress rule for the port you want to use in Router and allow access from the IP address you have for your application server or from the on-premise public IP address assigned.

Router & Shell install ‘n’ configure
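These steps were also screenshots in the original. A sketch of what the setup could look like on the compute instance kh01, assuming the MySQL community repo is enabled as above, that 10.0.0.4 is the MDS endpoint noted earlier, and that package names match your repo:

sudo yum install -y mysql-router mysql-shell

# Append a static route to the Router config that forwards local port 3306
# to the MDS endpoint inside the VCN
sudo tee -a /etc/mysqlrouter/mysqlrouter.conf <<'EOF'
[routing:mds]
bind_address = 0.0.0.0
bind_port = 3306
destinations = 10.0.0.4:3306
routing_strategy = first-available
EOF

sudo systemctl enable --now mysqlrouter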

Test connectivity.

Test MySQL Router as our proxy, via MySQL Shell:

$ mysqlsh root@kh01:3306 --sql -e 'show databases'

Now, we can test connectivity from our pc / application server / on-premise environment. Knowing the public IP address, let’s try:

$ mysqlsh root@<public-ip>:3306 --sql -e 'show databases'

If you get any issues here, check your ingress rules at your VCN level.

Also, double check your o.s. firewall rules on the freshly created compute instance too.

Preparation is done.

We can connect to our MDS instance from the Compute instance where MySQL Router is installed, kh01, and also from our own (on-premise) environment.

Let’s get the data streaming.

MySQL Shell Dump Utility

In effect, it is here that we’ll be ‘streaming’ data.

This means that from our on-premise host we’ll export the data into the osBucket in OCI, and at the same time, read from that bucket from our Compute host kh01 that will import the data into MDS.

First of all, I want to check the commands with “dryRun: true”.

util.dumpSchemas dryRun

From our own environment / on-premise installation, we now want to dump / export the data:

$ mysqlsh root@OnPremiseHost:3306

You’ll want to see what options are available and how to use the util.dumpSchemas utility:

mysqlsh> \help util.dumpSchemas

NAME
      dumpSchemas - Dumps the specified schemas to the files in the output
                    directory.

SYNTAX
      util.dumpSchemas(schemas, outputUrl[, options])

WHERE
      schemas: List of schemas to be dumped.
      outputUrl: Target directory to store the dump files.
      options: Dictionary with the dump options.

Here’s the command we’ll be using, but we want to activate the ‘dryRun’ mode, to make sure it’s all ok. So:

util.dumpSchemas(
    ["test"], "test",
    {dryRun: true, showProgress: true, threads: 8, ocimds: true,
     "osBucketName": "test-bucket", "osNamespace": "idazzjlcjqzj",
     ociConfigFile: "/home/os_user/.oci/config",
     "compatibility": ["strip_definers"]}
)

["test"]               I just want to dump the test schema. I could put a list of                                schemas here.      Careful if you think you can export internal                                      schemas, ‘cos you can’t.

test”                             is the “outputURL target directort”. Watch the prefix of all the                        files being created in the bucket..

options:

dryRun:             Quite obvious. Change it to false to run.

showProgress:                 I want to see the progress of the loading.

threads:              Default is 4 but choose what you like here, according to the                                        resources available.

ocimds:              VERY IMPORTANT! This is to make sure that the                                      environment is “MDS Ready” so when the data gets to the                             cloud, nothing breaks.

osBucketName:   The name of the bucket we created.

osNamespace:                 The namespace of the bucket.

ociConfigFile:    This is what we looked at, right at the beginning. This what makes it easy. 

compatibility:                There are a list of options here that help reduce all customizations and/or simplify our data export ready for MDS.

Here I am looking at exporting / dumping just schemas. I could have dumped the whole instance via util.dumpInstance. Have a try!

I tested a local dumpSchemas export without OCIMDS readiness, and I think it is worth sharing: this is how I found out that I needed a primary key to be able to use chunking, and hence get a faster dump:

util.dumpSchemas(["test"], "/var/lib/mysql-files/test/test", {dryRun: true, showProgress: true})

Acquiring global read lock

All transactions have been started

Locking instance for backup

Global read lock has been released

Writing global DDL files

Preparing data dump for table `test`.`reviews`

Writing DDL for schema `test`

Writing DDL for table `test`.`reviews`

Data dump for table `test`.`reviews` will be chunked using column `review_id`

(I created the primary key on the review_id column and got rid of the following warning at the end:)

WARNING: Could not select a column to be used as an index for table `test`.`reviews`. Chunking has been disabled for this table, data will be dumped to a single file.
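For reference, the primary key mentioned above is a one-line DDL change (the column name comes from the dump output; the exact statement is my reconstruction):

ALTER TABLE test.reviews ADD PRIMARY KEY (review_id);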

Anyway, I used dumpSchemas (instead of dumpInstance) with OCIMDS and then loaded with the following:

util.loadDump dryRun

Now, we’re on the compute we created, with Shell 8.0.21 installed and ready to upload / import the data:

$ mysqlsh root@kh01:3306

util.loadDump("test", {dryRun: true, showProgress: true, threads: 8, osBucketName: "test-bucket", osNamespace: "idazzjlcjqzj", ociConfigFile: "/home/osuser/.oci/config"})

As imagined, I’ve copied my PEM key and OCI CLI config file to the compute instance, via scp, to a "$HOME/.oci" directory.

Loading DDL and Data from OCI ObjectStorage bucket=test-bucket, prefix='test' using 8 threads.

Util.loadDump: Failed opening object '@.json' in READ mode: Not Found (404) (RuntimeError)

This is due to the bucket being empty. You’ll see why it complains of the “@.json” in a second.

You want to do some “streaming”?

With our 2 session windows opened, 1 from the on-premise instance and the other from the OCI compute host, connected with mysqlsh:

On-premise:

dry run:

util.dumpSchemas(["test"], "test", {dryRun: true, showProgress: true, threads: 8, ocimds: true, "osBucketName": "test-bucket", "osNamespace": "idazzjlcjqzj", ociConfigFile: "/home/os_user/.oci/config", "compatibility": ["strip_definers"]})

real:

util.dumpSchemas(["test"], "test", {dryRun: false, showProgress: true, threads: 8, ocimds: true, "osBucketName": "test-bucket", "osNamespace": "idazzjlcjqzj", ociConfigFile: "/home/os_user/.oci/config", "compatibility": ["strip_definers"]})

OCI Compute host:

dry run:

util.loadDump("test", {dryRun: true, showProgress: true, threads: 8, osBucketName: "test-bucket", osNamespace: "idazzjlcjqzj", waitDumpTimeout: 180})

real:

util.loadDump("test", {dryRun: false, showProgress: true, threads: 8, osBucketName: "test-bucket", osNamespace: "idazzjlcjqzj", waitDumpTimeout: 180})

They do say a picture is worth a thousand words, so here are some images of each window, executed at the same time:

On-premise:

At the OCI compute host you can see the waitDumpTimeout take effect with:

NOTE: Dump is still ongoing, data will be loaded as it becomes available.

In the osBucket, we can now see content (which is what the loadDump is reading):

And once it’s all dumped ‘n’ uploaded we have the following output:

If you like logs, then check the .mysqlsh/mysqlsh.log that records all the output under the directory where you have executed MySQL Shell (on-premise & OCI compute)

Now the data is all in our MySQL Database System, all we need to do is point the web server or the application server to the OCI compute system's IP and port so that MySQL Router can route the connection to happiness!!!!

Conclusion

technology

via Planet MySQL https://ift.tt/2iO8Ob8

September 13, 2020 at 11:32PM