MySQL 101: Tuning MySQL After Upgrading Memory


https://ift.tt/30jNwcH

Tuning MySQL After Upgrading Memory

In this post, we will discuss what to do after you add more memory to your instance. Adding memory to a server where MySQL is running is a common practice when scaling resources.

First, Some Context

Scaling resources means adding more capacity to your environment, and it comes in two main flavors: vertical scaling and horizontal scaling.

Vertical scaling means increasing the hardware capacity of a given instance (a more powerful server), while horizontal scaling means adding more servers, the standard approach for load balancing and sharding.

As traffic grows, working datasets get bigger, and performance starts to suffer because data that doesn’t fit into memory has to be retrieved from disk. This is a costly operation even with modern NVMe drives, so at some point we will need one of the scaling solutions mentioned above.

In this case, we will discuss adding more RAM, which is usually the fastest and easiest way to scale hardware vertically, and extra memory is probably the single biggest win for MySQL.

How to Calculate Memory Utilization

First of all, we need to be clear about which variables allocate memory during MySQL operation; we will cover only the common ones, as there are quite a few. We also need to know that some variables allocate memory globally, while others allocate per thread.

For the sake of simplicity, we will cover this topic assuming the default storage engine: InnoDB.

We have globally allocated variables:

key_buffer_size: the MyISAM key cache. It should be set to 8-16M; anything above that is simply wasted, because we shouldn’t use MyISAM tables unless there is a particular reason to. A typical scenario is MyISAM being used only by the system tables, which are small (this applies to versions up to 5.7; in MySQL 8.0 the system tables were migrated to the InnoDB engine). So the impact of this variable is negligible.

query_cache_size: the query cache is off by default and was removed entirely in 8.0, so we won’t consider it.

innodb_buffer_pool_size: the cache where InnoDB keeps pages in order to operate on them. The bigger, the better. 🙂

Of course, there are others, but their impact is minimal when running with defaults.

Also, there are other variables that are allocated on each thread (or open connection):
read_buffer_size, read_rnd_buffer_size, sort_buffer_size, join_buffer_size and tmp_table_size, plus a few others. All of them work well at their defaults, as the allocations are small and efficient. The main potential issue appears when many connections hold these buffers at the same time, adding extra memory pressure. The ideal situation is to control how many connections are opened (and actually used) and keep that number as low as the application can tolerate.
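As a quick sanity check (not part of the original post), the current values of these per-thread buffers, and the connection limit they get multiplied by, can be read straight from the system variables:

mysql > SELECT @@read_buffer_size, @@read_rnd_buffer_size, @@sort_buffer_size,
    -> @@join_buffer_size, @@tmp_table_size, @@max_connections;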

But let’s not lose focus: we have more memory, and we need to know how to tune MySQL to make the best use of it.

The most memory-impacting setting we need to focus on is innodb_buffer_pool_size, as this is where almost all the magic happens and it is usually the most significant memory consumer. There is an old rule of thumb that says this setting should be around 75% of available memory, and some cloud vendors set it to total_memory*0.75.

I said “old” because that rule was good when running instances with 8G or 16G of RAM was common, so allocating roughly 6G out of 8G or 13G out of 16G used to be logical.

But what about an instance with 100G or even 200G? This kind of hardware is not uncommon nowadays, so should we use only 80G out of 100G, or 160G out of 200G? In other words, should we hold back 20G to 40G of memory and leave it to the filesystem cache? While those filesystem operations are not useless, I don’t see the OS needing more than 4G-8G for this purpose on a dedicated database server. Also, it is recommended to use the O_DIRECT flushing method for InnoDB, which bypasses the filesystem cache.
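If you want to confirm what your own instance is doing (a quick check, not from the original post), the current value can be queried; note that innodb_flush_method is not dynamic, so switching to O_DIRECT means setting it in my.cnf and restarting:

mysql > SELECT @@innodb_flush_method;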

Example

Now that we understand the primary variables that allocate memory, let’s check a real use case I’m currently working on. Assume this system:

$ free -m
      total      used     free    shared    buff/cache    available
Mem: 385625    307295    40921         4         37408        74865

So roughly 380G of RAM, a nice amount of memory. Now let’s check the maximum potential allocation, considering the maximum number of connections used.

*A little disclaimer here: while the query below is not entirely accurate and can diverge from real results, it gives a sense of what could potentially be allocated. For more precise numbers we can take advantage of the performance_schema database, but that may require enabling some instruments that are disabled by default:
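For instance, the memory instrumentation can be switched on at runtime (a sketch, not from the original post; to make it survive restarts you would use performance-schema-instrument='memory/%=ON' in my.cnf instead), and the sys schema then summarizes the totals:

mysql > UPDATE performance_schema.setup_instruments
    -> SET ENABLED = 'YES'
    -> WHERE NAME LIKE 'memory/%';
mysql > SELECT * FROM sys.memory_global_total;

Counters collected this way only cover allocations made after the instruments were enabled. Back to the simpler estimate: first, the connection high-water mark: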

mysql > show global status like 'max_used_connections';
+----------------------+-------+
| Variable_name        | Value |
+----------------------+-------+
| Max_used_connections |    67 |
+----------------------+-------+
1 row in set (0.00 sec)

So with a maximum of 67 connections used, we can get:

mysql > SELECT ( @@key_buffer_size
-> + @@innodb_buffer_pool_size
-> + 67 * (@@read_buffer_size
-> + @@read_rnd_buffer_size
-> + @@sort_buffer_size
-> + @@join_buffer_size
-> + @@tmp_table_size )) / (1024*1024*1024) AS MAX_MEMORY_GB;
+---------------+
| MAX_MEMORY_GB |
+---------------+
| 316.4434      |
+---------------+
1 row in set (0.00 sec)

So far, so good: we are within memory limits. Now let’s see how big innodb_buffer_pool_size is and whether it is well sized:

mysql > SELECT (@@innodb_buffer_pool_size) / (1024*1024*1024) AS BUFFER_POOL_SIZE;
+------------------+
| BUFFER_POOL_SIZE |
+------------------+
| 310.0000         |
+------------------+
1 row in set (0.01 sec)

So the buffer pool is 310G, roughly 82% of total memory, and the total potential usage so far is around 84%, which leaves roughly 60G of memory unused. Well, used by the filesystem cache, which, in the end, is not used by InnoDB.

OK, now let’s get to the point: how to configure memory so that it is used effectively by MySQL. From pt-mysql-summary we know that the buffer pool is fully filled:

Buffer Pool Size | 310.0G
Buffer Pool Fill | 100%
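If you don’t have pt-mysql-summary at hand, roughly the same number can be derived from the buffer pool page counters (an approximation, not the tool’s exact formula; assumes 5.7+ where global status lives in performance_schema):

mysql > SELECT ROUND(100 * (1 - f.VARIABLE_VALUE / t.VARIABLE_VALUE), 2) AS buffer_pool_fill_pct
    -> FROM performance_schema.global_status f, performance_schema.global_status t
    -> WHERE f.VARIABLE_NAME = 'Innodb_buffer_pool_pages_free'
    -> AND t.VARIABLE_NAME = 'Innodb_buffer_pool_pages_total';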

Does this mean we need more memory? Maybe. Let’s check how many disk reads we get on an instance whose working dataset we know does not fit in memory (the very reason we increased the memory size), using this command:

mysqladmin -r -i 1 -c 60 extended-status | egrep "Innodb_buffer_pool_read_requests|Innodb_buffer_pool_reads"
| Innodb_buffer_pool_read_requests | 99857480858|
| Innodb_buffer_pool_reads         | 598600690  |
| Innodb_buffer_pool_read_requests | 274985     |
| Innodb_buffer_pool_reads         | 1602       |
| Innodb_buffer_pool_read_requests | 267139     |
| Innodb_buffer_pool_reads         | 1562       |
| Innodb_buffer_pool_read_requests | 270779     |
| Innodb_buffer_pool_reads         | 1731       |
| Innodb_buffer_pool_read_requests | 287594     |
| Innodb_buffer_pool_reads         | 1567       |
| Innodb_buffer_pool_read_requests | 282786     |
| Innodb_buffer_pool_reads         | 1754       |

Innodb_buffer_pool_read_requests: page reads satisfied from memory (good)
Innodb_buffer_pool_reads: page reads from disk (bad)
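Another way to read these two counters is as a buffer pool hit ratio; the closer to 100%, the better. Unlike the per-second deltas printed by mysqladmin -r above, this is cumulative since server startup (a convenience query, not from the original post):

mysql > SELECT ROUND(100 - 100 * rd.VARIABLE_VALUE / rq.VARIABLE_VALUE, 4) AS hit_ratio_pct
    -> FROM performance_schema.global_status rd, performance_schema.global_status rq
    -> WHERE rd.VARIABLE_NAME = 'Innodb_buffer_pool_reads'
    -> AND rq.VARIABLE_NAME = 'Innodb_buffer_pool_read_requests';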

As you may notice, we still get some reads from the disk, and we want to avoid them, so let’s increase the buffer pool size to 340G (90% of total memory) and check again:

mysqladmin -r -i 1 -c 60 extended-status | egrep "Innodb_buffer_pool_read_requests|Innodb_buffer_pool_reads"
| Innodb_buffer_pool_read_requests | 99937722883 |
| Innodb_buffer_pool_reads         | 599056712   |
| Innodb_buffer_pool_read_requests | 293642      |
| Innodb_buffer_pool_reads         | 1           |
| Innodb_buffer_pool_read_requests | 296248      |
| Innodb_buffer_pool_reads         | 0           |
| Innodb_buffer_pool_read_requests | 294409      |
| Innodb_buffer_pool_reads         | 0           |
| Innodb_buffer_pool_read_requests | 296394      |
| Innodb_buffer_pool_reads         | 6           |
| Innodb_buffer_pool_read_requests | 303379      |
| Innodb_buffer_pool_reads         | 0           |

Now we are barely going to disk, and IO pressure was released; this makes us happy –  right?
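As a side note, since MySQL 5.7 the buffer pool can be resized online, so a change like the one above does not require a restart; the value is rounded up to a multiple of innodb_buffer_pool_chunk_size * innodb_buffer_pool_instances, and it should also be persisted in my.cnf so it survives the next restart. A minimal sketch:

mysql > SET GLOBAL innodb_buffer_pool_size = 340 * 1024 * 1024 * 1024;
mysql > SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_resize_status';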

Summary

If you increase the memory size of a server, you mostly need to focus on innodb_buffer_pool_size, as this is the most critical variable to tune. Allocating 90% to 95% of total available memory on big systems is not bad at all, as the OS requires only a few GB to run correctly, and leaving a few more GB of headroom should be enough to run without problems.

Also, check the maximum number of connections required (and actually used), as over-provisioning connections is a common mistake that causes memory issues. If you need to run with 1,000 open connections, then allocating 90% of memory to the buffer pool may not be possible, and some additional actions may be required (e.g., adding a proxy layer or a connection pool).
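To see why, plug 1,000 connections into the same worst-case estimate used earlier; the per-thread term scales linearly, so it reserves roughly fifteen times what it does at 67 connections:

mysql > SELECT ( @@key_buffer_size
    -> + @@innodb_buffer_pool_size
    -> + 1000 * (@@read_buffer_size
    -> + @@read_rnd_buffer_size
    -> + @@sort_buffer_size
    -> + @@join_buffer_size
    -> + @@tmp_table_size )) / (1024*1024*1024) AS MAX_MEMORY_GB;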

Since MySQL 8.0, we have a new variable called innodb_dedicated_server, which auto-calculates the memory allocation. While this variable is really useful as a first approach, it may under-allocate memory on systems with more than 4G of RAM, since it sets the buffer pool size to (detected server memory * 0.75); on a 200G server that leaves only 150G for the buffer pool.
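If you do enable it (in my.cnf, since it cannot be changed at runtime), it is worth checking afterward what it actually decided:

mysql > SELECT @@innodb_dedicated_server,
    -> @@innodb_buffer_pool_size / (1024*1024*1024) AS BUFFER_POOL_GB;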

Conclusion

Vertical scaling is the easiest and fastest way to improve performance, and it is also cheaper – but not magical. Tuning variables properly requires analysis and an understanding of how memory is being used. This post focused on the essential variables to consider when tuning memory allocation, specifically innodb_buffer_pool_size and max_connections. Don’t over-tune when it’s not necessary, and be mindful of how these two settings affect your systems.

technology

via Planet MySQL https://ift.tt/2iO8Ob8

September 30, 2020 at 11:34AM

2019 FBI Crime Statistics Show Hammers, Clubs Again Outrank Rifle Murders


https://ift.tt/36jlIZt

The FBI just released its 2019 crime statistics via its annual report, “Crime in the United States.” And once again, without fail, it highlights the overwhelming hypocrisy of gun control. Rifles of any sort—let alone “scary” black rifles—were responsible for fewer murders than several other categories, including “blunt objects” like hammers and clubs.


2019 FBI Crime Statistics

In all, 10,258 murders took place using a firearm in 2019. Predictably, handguns led the way, accounting for 6,368 murders. But murderers used rifles only 364 times. Meanwhile, shotguns accounted for just 200 wrongful deaths.

Those numbers stand out when compared to other categories, ones not regulated or facing the mob of gun control zealots. Knives (cutting instruments), for example, accounted for 1,476 murders in 2019. Blunt objects racked up another 397. Personal weapons, classified as hands, fists, feet, etc., accounted for 600 murders.

So while the left wants to take our guns, the actual FBI data proves the dishonesty of the narrative. Fists, feet, knives and clubs all pose a more imminent source of danger. Yet black guns, specifically rifles with magazines of more than 10 rounds, face the mob.

The 2019 FBI crime statistics showed crime decreased compared to 2018.

Interestingly, all the while, crime continues to fall. In some categories, numbers approach two decades of constant decline, as gun ownership swells across the country.

Recently released FBI crime statistics for 2019 show violent crime decreasing for the third consecutive year. Violent crime dropped 0.5 percent compared to 2018.

Property crime also dropped 4.1 percent over 2018, marking the seventeenth consecutive year the collective estimates declined. The 2019 statistics further show an estimated violent crime rate of 366.7 offenses per 100,000 inhabitants. Meanwhile, property crime occurred at a rate of 2,109.9 offenses per 100,000 inhabitants.

The FBI data shows 2019 was a great year, with property crime, burglary, and violent crime, among other categories, falling compared to 2018. Sadly, 2020 will most assuredly prove different.

Crime and civil unrest continue to rise. And we expect next year’s FBI data to tell a much different tale — except on guns; we fully expect those stats to remain solid.


guns

via Personal Defense World https://ift.tt/2Arq2GB

September 29, 2020 at 03:06PM

Starships Size Comparison 2.0


https://ift.tt/3kY5KYV


MetaBallStudios revisits their earlier starship size comparison with significantly more ships from the worlds of science fiction and fantasy. Just about every imaginable class of ship is represented, from the teensy Hocotate ship flown by the Pikmin to the Planet Express ship from Futurama to the gargantuan Ringworld.

fun

via The Awesomer https://theawesomer.com

September 29, 2020 at 04:15PM

MySQL: Import CSV, not using LOAD DATA


https://ift.tt/339yFDb

All over the Internet people are having trouble getting LOAD DATA and LOAD DATA LOCAL to work. Frankly, do not use them, and especially not the LOCAL variant. They are insecure, and even if you get them to work, they are limited and unlikely to do what you want. Write a small data load program as shown below.

Not using LOAD DATA LOCAL

The fine manual says:

The LOCAL version of LOAD DATA has two potential security issues:

  • Because LOAD DATA LOCAL is an SQL statement, parsing occurs on the server side, and transfer of the file from the client host to the server host is initiated by the MySQL server, which tells the client the file named in the statement. In theory, a patched server could tell the client program to transfer a file of the server’s choosing rather than the file named in the statement. Such a server could access any file on the client host to which the client user has read access. (A patched server could in fact reply with a file-transfer request to any statement, not just LOAD DATA LOCAL, so a more fundamental issue is that clients should not connect to untrusted servers.)

  • In a Web environment where the clients are connecting from a Web server, a user could use LOAD DATA LOCAL to read any files that the Web server process has read access to (assuming that a user could run any statement against the SQL server). In this environment, the client with respect to the MySQL server actually is the Web server, not a remote program being run by users who connect to the Web server.

The second issue in reality means that if the web server has a suitable SQL injection vulnerability, the attacker may use that to read any file the web server has access to, bouncing this through the database server.

In short, never use (or even enable) LOAD DATA LOCAL.

  • local_infile is disabled in the server config, and you should keep it that way.
  • client libraries are by default compiled with ENABLED_LOCAL_INFILE set to off. It can still be enabled using a call to the mysql_options() C-API, but never do that.
  • 8.0.21+ places additional restrictions on this, to prevent you from being stupid (that is, actually enabling this anywhere).
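To confirm that a given server is actually in that safe state, a quick check (not from the original post); note that a SET GLOBAL only lasts until the next restart, so keep local_infile disabled in the server config as well:

mysql> SHOW GLOBAL VARIABLES LIKE 'local_infile';
mysql> SET GLOBAL local_infile = OFF;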

Not using LOAD DATA

The LOAD DATA variant of the command assumes that you place a file on the database server, into a directory in the file system of the server, and load it from there. In the age of “MySQL as a service” this is inconvenient to impossible, so forget about this option, too.

If you were able to place files onto the system where your mysqld lives,

  • your user needs to have FILE as a privilege, a global privilege (GRANT FILE ON *.* TO ...)
  • the server variable secure_file_priv needs to be set to a directory name, and that directory needs to be world-readable. LOAD DATA and SELECT INTO OUTFILE work only with filenames below this directory. Setting this variable requires a server restart; it is not a dynamic variable (on purpose).

Note that the variable can be NULL (this is secure in the sense that LOAD DATA is disabled) or empty (this is insecure in that there are no restrictions).

There is nothing preventing you from setting the variable to /var/lib/mysql or other dumb locations which would expose vital system files to load and save operations. Do not do this.

Also, a location such as /tmp or any other world-writeable directory would be dumb: Use a dedicated directory that is writeable by the import user only, and make sure that it is world-readable in order to make the command work.

Better: Do not use this command at all (and set secure_file_priv to NULL).
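Checking where a server currently stands is a one-liner (a quick check, not from the original post); since secure_file_priv is read-only at runtime, the NULL recommendation has to go into the server configuration and takes effect after a restart:

mysql> SELECT @@secure_file_priv;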

Using data dump and load programs instead

We spoke about dumping a schema into CSV files in Export the entire database to CSV already.

To complete the discussion we need to provide a way to do the inverse and load data from a CSV file into a table.

The full code is in load.py.

The main idea is to open a .csv file with csv.reader, and then iterate over the rows. For each row, we execute an INSERT statement, and every few rows we also COMMIT.

In terms of dependencies, we rely on MySQLdb and csv:

import MySQLdb
import csv

We need to know the name of a table, and the column names of that table (in the order in which they appear).

We should also make sure we can change the delimiter and quoting character used by the CSV, and make the commit interval variable.

Finally, we need to be able to connect to the database.

# table to load into
table = "data"

# column names to load into
columns = [
    "id",
    "d",
    "e",
]

# formatting options
delimiter = ","
quotechar = '"'

# commit every commit_interval lines
commit_interval = 1000

# database connection parameters
db_config = dict(
    host="localhost",
    user="kris",
    passwd="geheim",
    db="kris",
)

From this, we can build a database connection and an INSERT statement, using the table name and column names:

db = MySQLdb.connect(**db_config)

# build a proper insert command
cmd = f"insert into {table} ( "
cmd += ", ".join(columns)
cmd += ") values ("
cmd += "%s," * len(columns)
cmd = cmd[:-1] + ")"
print(f"cmd = {cmd}")

The actual code is then rather simple: Open the CSV file, named after the table, and create a csv.reader(). Using this, we iterate over the rows.

For each row, we execute the insert statement.

Every commit_interval rows we commit, and for good measure we also commit after finishing, to make sure any remaining rows also get written out.

with open(f"{table}.csv", "r") as csvfile:
    reader = csv.reader(csvfile, delimiter=delimiter, quotechar=quotechar)

    c = db.cursor()
    counter = 0

    # insert the rows as we read them
    for row in reader:
        c.execute(cmd, row)

        # every commit_interval rows we issue a commit
        counter += 1
        if (counter % commit_interval) == 0:
            db.commit()

    # final commit to the remainder
    db.commit()

And that’s it. That’s all the code.

  • No FILE privilege,
  • No special permissions besides insert_priv into the target table.
  • No config in the database.
  • No server restart to set up the permissions.

And using Python’s multiprocessing, you could make it load multiple tables in parallel or chunk a very large table and load that in parallel – assuming you have database hardware that could profit from any of this.

In any case – this is simpler, more secure and less privileged than any of the broken LOAD DATA variants.

Don’t use them, write a loader program.

Let’s run it. First we generate some data, using the previous example from the partitions tutorial:

(venv) kris@server:~/Python/mysql$ mysql-partitions/partitions.py setup-tables
(venv) kris@server:~/Python/mysql$ mysql-partitions/partitions.py start-processing
create p2 reason: not enough partitions
cmd = alter table data add partition ( partition p2 values less than ( 20000))
create p3 reason: not enough partitions
cmd = alter table data add partition ( partition p3 values less than ( 30000))
create p4 reason: not enough partitions
cmd = alter table data add partition ( partition p4 values less than ( 40000))
create p5 reason: not enough partitions
cmd = alter table data add partition ( partition p5 values less than ( 50000))
create p6 reason: not enough empty partitions
cmd = alter table data add partition ( partition p6 values less than ( 60000))
counter = 1000
counter = 2000
counter = 3000
counter = 4000
^CError in atexit._run_exitfuncs: ...

We then dump the data, truncate the table, and reload the data. We count the rows to be sure we get all of them back.

(venv) kris@server:~/Python/mysql$ mysql-csv/dump.py
table = data
(venv) kris@server:~/Python/mysql$ mysql -u kris -pgeheim kris -e 'select count(*) from data'
mysql: [Warning] Using a password on the command line interface can be insecure.
+----------+
| count(*) |
+----------+
|     4511 |
+----------+
(venv) kris@server:~/Python/mysql$ mysql -u kris -pgeheim kris -e 'truncate table data'
mysql: [Warning] Using a password on the command line interface can be insecure.
(venv) kris@server:~/Python/mysql$ mysql-csv/load.py
cmd = insert into data ( id, d, e) values (%s,%s,%s)
(venv) kris@server:~/Python/mysql$ mysql -u kris -pgeheim kris -e 'select count(*) from data'
mysql: [Warning] Using a password on the command line interface can be insecure.
+----------+
| count(*) |
+----------+
|     4511 |
+----------+

technology

via Planet MySQL https://ift.tt/2iO8Ob8

September 28, 2020 at 02:09PM

The next Laravel Worldwide Meetup is tomorrow


https://ift.tt/3j9AHZG

The next Laravel Worldwide Meetup is coming tomorrow, at 18:00 UTC, and you’ll be able to watch it live on the YouTube Channel.




programming

via Laravel News https://ift.tt/14pzU0d

September 28, 2020 at 10:25AM

Making a Flywheel Trebuchet


https://ift.tt/3mT83yp


Engineer Tom Stanton is fascinated by the way in which flywheels can store up energy as they’re spun up to speed. In this clip, he combines a flywheel mechanism with a sturdy aluminum trebuchet, creating a durable machine that can toss a tennis ball as fast as 180 mph.

fun

via The Awesomer https://theawesomer.com

September 28, 2020 at 11:00AM

Laravel 8 Jetstream Livewire CRUD with Tailwind CSS Tutorial


https://ift.tt/3n53H7x


Now, let’s look at a Laravel 8 Livewire CRUD example. This tutorial will give you a simple example of Laravel 8 Livewire CRUD with Jetstream and Tailwind CSS, using a modal for create and update.

In this example, I will show you step by step how to build a Laravel 8 Livewire CRUD app with a Tailwind CSS modal.

Laravel 8 Jetstream is designed with Tailwind CSS and provides authentication scaffolding using either Livewire or Inertia. I will show you how to create a module with Livewire on top of the default Jetstream auth in Laravel 8.

Below, I have written out each step, so you can easily add a simple post module to your existing Laravel 8 Jetstream auth setup with Tailwind CSS. Just follow the steps and you will get a layout like the one below:

Preview:

List View:

Create View:

Update View:

Step 1: Install Laravel 8

Here, we need to install a Laravel 8 application using the Composer command:

composer create-project --prefer-dist laravel/laravel blog

Step 2: Create Auth with Jetstream Livewire

Now, in this step, we need to use Composer to install Jetstream, so let’s run the command below:

composer require laravel/jetstream

Now, we need to create the authentication scaffolding using the command below. It gives you basic login, registration, and email verification. If you want team management as well, you have to pass an additional parameter (--teams):

php artisan jetstream:install livewire

Now, let’s install the Node.js packages:

npm install

And build the assets:

npm run dev

Now, we need to run the migration command to create the database tables:

php artisan migrate

Step 3: Create Migration and Model

Here, we need to create a database migration for the posts table, and we will also create a model for it.

php artisan make:migration create_posts_table

Migration:

<?php

use Illuminate\Database\Migrations\Migration;
use Illuminate\Database\Schema\Blueprint;
use Illuminate\Support\Facades\Schema;

class CreatePostsTable extends Migration
{
    /**
     * Run the migrations.
     *
     * @return void
     */
    public function up()
    {
        Schema::create('posts', function (Blueprint $table) {
            $table->id();
            $table->string('title');
            $table->text('body');
            $table->timestamps();
        });
    }

    /**
     * Reverse the migrations.
     *
     * @return void
     */
    public function down()
    {
        Schema::dropIfExists('posts');
    }
}

php artisan migrate

Now we will create the Post model using the following command:

php artisan make:model Post

app/Models/Post.php

<?php

namespace App\Models;

use Illuminate\Database\Eloquent\Factories\HasFactory;
use Illuminate\Database\Eloquent\Model;

class Post extends Model
{
    use HasFactory;

    /**
     * The attributes that are mass assignable.
     *
     * @var array
     */
    protected $fillable = [
        'title', 'body'
    ];
}

Step 4: Create Post Component

Now we will create the Livewire component using its artisan command. Run the command below to create the post CRUD component.

php artisan make:livewire posts

This creates files at both of these paths:

app/Http/Livewire/Posts.php

resources/views/livewire/posts.blade.php

Step 5: Update Component File

Here, we will write the render(), create(), openModal(), closeModal(), resetInputFields(), store(), edit() and delete() methods for our CRUD app.

So, let’s update the following file.

app/Http/Livewire/Posts.php

<?php

namespace App\Http\Livewire;

use Livewire\Component;
use App\Models\Post;

class Posts extends Component
{
    public $posts, $title, $body, $post_id;
    public $isOpen = 0;

    /**
     * Render the post list view.
     */
    public function render()
    {
        $this->posts = Post::all();

        return view('livewire.posts');
    }

    /**
     * Open the modal with empty fields to create a new post.
     */
    public function create()
    {
        $this->resetInputFields();
        $this->openModal();
    }

    /**
     * Show the modal.
     */
    public function openModal()
    {
        $this->isOpen = true;
    }

    /**
     * Hide the modal.
     */
    public function closeModal()
    {
        $this->isOpen = false;
    }

    /**
     * Reset the form fields.
     */
    private function resetInputFields()
    {
        $this->title = '';
        $this->body = '';
        $this->post_id = '';
    }

    /**
     * Validate the input and create or update the post.
     */
    public function store()
    {
        $this->validate([
            'title' => 'required',
            'body' => 'required',
        ]);

        Post::updateOrCreate(['id' => $this->post_id], [
            'title' => $this->title,
            'body' => $this->body
        ]);

        session()->flash('message',
            $this->post_id ? 'Post Updated Successfully.' : 'Post Created Successfully.');

        $this->closeModal();
        $this->resetInputFields();
    }

    /**
     * Load an existing post into the form and open the modal.
     */
    public function edit($id)
    {
        $post = Post::findOrFail($id);
        $this->post_id = $id;
        $this->title = $post->title;
        $this->body = $post->body;

        $this->openModal();
    }

    /**
     * Delete a post.
     */
    public function delete($id)
    {
        Post::find($id)->delete();
        session()->flash('message', 'Post Deleted Successfully.');
    }
}

Step 6: Update Blade Files

Here, we will update the following files for our listing page and the create/edit modal.

So, let’s update the files as below:

resources/views/livewire/posts.blade.php

<x-slot name="header">
    <h2 class="font-semibold text-xl text-gray-800 leading-tight">
        Manage Posts (Laravel 8 Livewire CRUD with Jetstream & Tailwind CSS - ItSolutionStuff.com)
    </h2>
</x-slot>

<div class="py-12">
    <div class="max-w-7xl mx-auto sm:px-6 lg:px-8">
        <div class="bg-white overflow-hidden shadow-xl sm:rounded-lg px-4 py-4">
            @if (session()->has('message'))
                <div class="bg-teal-100 border-t-4 border-teal-500 rounded-b text-teal-900 px-4 py-3 shadow-md my-3" role="alert">
                    <div class="flex">
                        <div>
                            <p class="text-sm">{{ session('message') }}</p>
                        </div>
                    </div>
                </div>
            @endif

            <button wire:click="create()" class="bg-blue-500 hover:bg-blue-700 text-white font-bold py-2 px-4 rounded my-3">Create New Post</button>

            @if($isOpen)
                @include('livewire.create')
            @endif

            <table class="table-fixed w-full">
                <thead>
                    <tr class="bg-gray-100">
                        <th class="px-4 py-2 w-20">No.</th>
                        <th class="px-4 py-2">Title</th>
                        <th class="px-4 py-2">Body</th>
                        <th class="px-4 py-2">Action</th>
                    </tr>
                </thead>
                <tbody>
                    @foreach($posts as $post)
                        <tr>
                            <td class="border px-4 py-2">{{ $post->id }}</td>
                            <td class="border px-4 py-2">{{ $post->title }}</td>
                            <td class="border px-4 py-2">{{ $post->body }}</td>
                            <td class="border px-4 py-2">
                                <button wire:click="edit({{ $post->id }})" class="bg-blue-500 hover:bg-blue-700 text-white font-bold py-2 px-4 rounded">Edit</button>
                                <button wire:click="delete({{ $post->id }})" class="bg-red-500 hover:bg-red-700 text-white font-bold py-2 px-4 rounded">Delete</button>
                            </td>
                        </tr>
                    @endforeach
                </tbody>
            </table>
        </div>
    </div>
</div>

resources/views/livewire/create.blade.php

<div class="fixed z-10 inset-0 overflow-y-auto ease-out duration-400">

<div class="flex items-end justify-center min-h-screen pt-4 px-4 pb-20 text-center sm:block sm:p-0">

<div class="fixed inset-0 transition-opacity">

<div class="absolute inset-0 bg-gray-500 opacity-75"></div>

</div>

<!-- This element is to trick the browser into centering the modal contents. -->

<span class="hidden sm:inline-block sm:align-middle sm:h-screen"></span>​

<div class="inline-block align-bottom bg-white rounded-lg text-left overflow-hidden shadow-xl transform transition-all sm:my-8 sm:align-middle sm:max-w-lg sm:w-full" role="dialog" aria-modal="true" aria-labelledby="modal-headline">

<form>

<div class="bg-white px-4 pt-5 pb-4 sm:p-6 sm:pb-4">

<div class="">

<div class="mb-4">

<label for="exampleFormControlInput1" class="block text-gray-700 text-sm font-bold mb-2">Title:</label>

<input type="text" class="shadow appearance-none border rounded w-full py-2 px-3 text-gray-700 leading-tight focus:outline-none focus:shadow-outline" id="exampleFormControlInput1" placeholder="Enter Title" wire:model="title">

@error('title') <span class="text-red-500"></span>@enderror

</div>

<div class="mb-4">

<label for="exampleFormControlInput2" class="block text-gray-700 text-sm font-bold mb-2">Body:</label>

<textarea class="shadow appearance-none border rounded w-full py-2 px-3 text-gray-700 leading-tight focus:outline-none focus:shadow-outline" id="exampleFormControlInput2" wire:model="body" placeholder="Enter Body"></textarea>

@error('body') <span class="text-red-500"></span>@enderror

</div>

</div>

</div>

<div class="bg-gray-50 px-4 py-3 sm:px-6 sm:flex sm:flex-row-reverse">

<span class="flex w-full rounded-md shadow-sm sm:ml-3 sm:w-auto">

<button wire:click.prevent="store()" type="button" class="inline-flex justify-center w-full rounded-md border border-transparent px-4 py-2 bg-green-600 text-base leading-6 font-medium text-white shadow-sm hover:bg-green-500 focus:outline-none focus:border-green-700 focus:shadow-outline-green transition ease-in-out duration-150 sm:text-sm sm:leading-5">

Save

</button>

</span>

<span class="mt-3 flex w-full rounded-md shadow-sm sm:mt-0 sm:w-auto">

<button wire:click="closeModal()" type="button" class="inline-flex justify-center w-full rounded-md border border-gray-300 px-4 py-2 bg-white text-base leading-6 font-medium text-gray-700 shadow-sm hover:text-gray-500 focus:outline-none focus:border-blue-300 focus:shadow-outline-blue transition ease-in-out duration-150 sm:text-sm sm:leading-5">

Cancel

</button>

</span>

</form>

</div>

</div>

</div>

</div>

Step 7: Add Route

In this step, we will register the route for the posts page. A single GET route pointing at the Livewire Posts component is enough.

routes/web.php

<?php

use Illuminate\Support\Facades\Route;
use App\Http\Livewire\Posts;

/*
|--------------------------------------------------------------------------
| Web Routes
|--------------------------------------------------------------------------
|
| Here is where you can register web routes for your application. These
| routes are loaded by the RouteServiceProvider within a group which
| contains the "web" middleware group. Now create something great!
|
*/

Route::get('post', Posts::class);

Now you can run your application with the command below:

php artisan serve

Now run the app, log in as a user, and open the URL below:

http://localhost:8000/post

Now it works…

I hope it can help you…

programming

via Laravel News Links https://ift.tt/2dvygAJ

September 27, 2020 at 07:12PM

An alternative to the Death Penalty I would not mind.


https://ift.tt/2EDCur6

Reader It’s Just Boris made this comment:

On the other hand, I also don’t believe a government should routinely be in the business of deliberately and purposefully putting its own citizens to death.

And what if there was a way that we could deal with human predators deserving the death penalty that did not involve the government killing or the families affected? I believe I have the answer: Alaska.

I believe that the green areas are probably millions upon millions of acres of pure and pristine nature, untouched by civilization. My idea is to grab all those on death row and, during the spring in Alaska, drop them in the middle of the northern green area with the clothes on their backs, a box of waterproof matches and a knife. The idea is to let the human predator make his living among nature’s predators and Mother Nature itself. The convict will have a nice huge tattoo across his forehead as a warning label for others that he is a bad guy, and the admonition that if, by the miracle of God, they manage to reach a hamlet, village, town or city, the Federal government will pay a bounty of $100,000 for his dead ass.

We don’t have to kill them, but sure as hell we don’t have to give them a roof, 3 hot ones plus dental and medical.

Just an idea.

 

guns

via https://gunfreezone.net

September 25, 2020 at 10:47AM

Selling a startup can come with an emotional cost


https://ift.tt/3hX8iES

Every founder dreams of building a substantial company. For those who make it through the myriad challenges, it typically results in an exit. If it’s through an acquisition, that can mean cashing in your equity, paying back investors and rewarding long-time employees, but it also usually results in a loss of power and a substantially reduced role.

Some founders hang around for a while before leaving after an agreed-upon time period, while others depart right away because there is simply no role left for them. However it plays out, being acquired can be an emotional shock: the company you spent years building is no longer under your control.

We spoke to a couple of startup founders who went through this experience to learn what the acquisition process was like, and how it feels to give up something after pouring your heart and soul into building it.

Knowing when it’s time to sell

There has to be some impetus to think about selling: Perhaps you’ve reached a point where growth stalls, or where you need to raise a substantial amount of cash to take you to the next level.

For Tracy Young, co-founder and former CEO at PlanGrid, the forcing event was reaching a point where she needed to raise funds to continue.

After growing a company that helped digitize building plans into a $100 million business, Young ended up selling it to Autodesk for $875 million in 2018. It was a substantial exit, but Young said it was more of a practical matter because the path to further growth was going to be an arduous one.

“When we got the offer from Autodesk, literally we would have had to execute flawlessly and the world had to stay good for the next three years for us to have the same outcome,” she said at a panel on exiting at TechCrunch Disrupt last week.

“As CEO, [my] job is to choose the best path forward for all stakeholders of the company — for our investors, for our team members, for our customers — and that was the path we chose.”

For Rami Essaid, who founded bot mitigation platform Distil Networks in 2011, slowing growth encouraged him to consider an exit. The company had reached around $25 million run rate, but a lack of momentum meant that shifting to a broader product portfolio would have been too heavy a lift.

technology

via TechCrunch https://techcrunch.com

September 23, 2020 at 01:28PM

Todoist’s new Boards feature is a better-looking Trello


https://ift.tt/3iRVGA9

Todoist is consistently one of our favorite to-do apps. The company is pretty good about releasing useful updates, and today it’s adding a Boards feature that should make it even easier to organize and visualize your tasks and projects.

The Boards format makes Todoist look a whole lot like Trello, another one of our favorite project management apps. The feature lets you view tasks as cards, and you can drag the cards horizontally into customizable categories. If you’re not totally sold on the format, you can switch projects back and forth between cards and traditional lists as you work.


Todoist’s Boards are rolling out now. If you don’t see the option immediately, it should appear in a day or two. Todoist hopes the new format will help you better visualize projects and collaborate with team members. The feature could be especially useful for teams working remotely.

geeky,Tech,Database

via Engadget http://www.engadget.com

September 23, 2020 at 12:57PM