Playing Minecraft with friends and family requires either putting up with split screen mode, or using multiple devices. For the best results, these should connect to a Minecraft server.
But paying for a Minecraft server is expensive. Why not build your own? It is now possible to run Minecraft Bedrock Server on a Raspberry Pi.
Over the years, Minecraft has evolved beyond the original Java game. Since 2016, Minecraft Bedrock Edition has been the main version, released on PC, consoles, and mobile.
While this brings new features, improved graphics, and better stability to the game, Minecraft Bedrock Edition is not compatible with the old desktop and mobile Java version. As such, if you had installed Minecraft server on a Raspberry Pi, you would only be able to connect from the corresponding Java version (whether on a PC or another Pi).
As there is now a (Java-based) Minecraft Bedrock-compatible server for Raspberry Pi, you can use it to host games played on any device running Bedrock. This gives you the advantage of being fully in control of the server, from setting invites and assigning access rights to installing mods and backing up the world.
For this project you have a choice of the Raspberry Pi 3 or Raspberry Pi 4. Naturally the Pi 4 with its 2GB, 4GB, and 8GB variants is the best option. However, you should be able to run Minecraft Bedrock Edition server on a Raspberry Pi 3.
To test this project, I used a Raspberry Pi 3 B+. This device has a 1.4GHz 64-bit quad-core processor and 1GB of RAM. Initial setup was over Wi-Fi, using SSH, but a better response and lower latency can be enjoyed with an Ethernet connection to your router.
Anything lower than a Raspberry Pi 3 should be avoided.
To host the server software, you will need an operating system. For optimum performance, opt for a lightweight OS; Raspberry Pi OS Lite is probably the best option here.
See our guide to installing an operating system on the Raspberry Pi before proceeding. It is recommended that you configure the installation to automatically connect to your Wi-Fi network (if you’re using one), and have SSH enabled on the Raspberry Pi. If you’re not using SSH, you’ll need a keyboard and display set up and connected.
You will also need to install Git and Java. Follow the steps below to install these and configure your Minecraft Bedrock server.
Before you can install the server software, you will need to configure the Raspberry Pi. These steps assume you have already installed Raspberry Pi OS.
Start by ensuring the operating system is up-to-date:
sudo apt update && sudo apt upgrade
Next, open the Raspberry Pi configuration tool, raspi-config:
sudo raspi-config
Use the arrow keys to select Performance Options > GPU Memory and set the GPU memory to 16. This ensures the majority of system resources are dedicated to the server. Hit Tab to select OK.
If you haven't already enabled SSH at this point, do so by selecting Interface Options > SSH, press Tab to select Yes, and press Enter to confirm.
Next, hit Tab to select Finish, then Enter to reboot the Raspberry Pi.
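If you are setting up the Pi headlessly and would rather script these steps, raspi-config also has a non-interactive mode. A minimal sketch, assuming the stock Raspberry Pi OS raspi-config (which ships the nonint helpers used below):

# Set the GPU memory split to 16MB
sudo raspi-config nonint do_memory_split 16
# Enable SSH (0 means "enable" in raspi-config's nonint convention)
sudo raspi-config nonint do_ssh 0
sudo reboot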
With the Raspberry Pi restarted, install Git:
sudo apt install git
This software allows you to clone a GitHub repository to your computer, and is required for installing Minecraft Bedrock server.
You can now install Java.
sudo apt install default-jdk
This installs the default (current) version of Java. You can check which version by entering
java -version
(Note that to install a specific Java release, use a specific version name, such as sudo apt install openjdk-8-jdk.)
At the time of writing, the default-jdk version was 11.0.16.
You're now ready to install the server. Begin by entering:
git clone https://github.com/CloudburstMC/Nukkit.git
Wait while this completes, then switch to the Nukkit directory
cd Nukkit
Here, update the submodule:
git submodule update --init
That will take a while to complete. When done, change the permissions on mvnw:
chmod +x mvnw
Finally:
./mvnw clean package
This final command is the longest part of the process. It’s a good opportunity to boot Minecraft Bedrock Edition on your PC, mobile, or console in readiness.
When ready, change directory:
cd target
Here, launch the server software:
java -jar nukkit-1.0-SNAPSHOT.jar
You’ll initially be instructed to enter your preferred server language.
Once that is done, Nukkit starts, the server properties are imported, and the game environment is launched. This begins with the default gamemode set to Survival, but you can switch that later.
Once everything appears to be running, enter
status
This will display various facts such as memory use, uptime, available memory, load, and number of players.
You can also use the help command (or hit ?) to check what instructions can be used to administer the server. These can either be input directly into the Pi with a keyboard, via SSH, or from the Minecraft game’s chat console (remember to precede each command with “/” in the console).
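Note that a server launched this way stops when you close the terminal or SSH session. To keep it running across logouts and reboots, you could wrap it in a systemd service. A minimal sketch, assuming the repository was cloned to /home/pi/Nukkit and the unit is saved as /etc/systemd/system/nukkit.service:

# Paths, user, and unit name are assumptions; adjust them to your setup
[Unit]
Description=Nukkit Minecraft Bedrock server
After=network.target

[Service]
User=pi
WorkingDirectory=/home/pi/Nukkit/target
ExecStart=/usr/bin/java -jar nukkit-1.0-SNAPSHOT.jar
Restart=on-failure

[Install]
WantedBy=multi-user.target

Enable it with sudo systemctl enable --now nukkit; the server will then restart automatically on boot.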
With everything set up, you're ready to connect to your server. In Minecraft Bedrock Edition, go to Play > Servers, add a new server, and enter a name along with your Pi's IP address and the default port, 19132.
A moment later, you should be in the Minecraft server world. On the server side, the connection will be logged.
While a few steps are required to enable the Bedrock Edition server on Raspberry Pi, the end results are good. Our test device, you will recall, was a Raspberry Pi 3B+, more than adequate for 2-5 players. A Raspberry Pi 4 will probably perform better for a greater number of players.
Using a Raspberry Pi is just one of many ways you can create a Minecraft server for free.
One of the best features Laravel has is the Eloquent ORM. Eloquent is easy to use and can make our queries simpler and cleaner than the query builder, but queries can also get long, and sometimes we need to reuse them.
Is it possible to create a reusable query in Laravel Eloquent ORM?
We can use query scopes to achieve this. Here I will explain Laravel local scopes.
For example, if we want to get only the posts that are currently marked as non-draft, or published, we can use this query:
$publishedPosts = Post::where('is_draft', false)->get();
But suppose we need this query in several places, not just one, and we don't want to write it out every time because it is quite long. Then we can create a query scope for it.
Create a method inside your model containing the query above, and name the method with the scope prefix, like so:
<?php

namespace App\Models;

use Illuminate\Database\Eloquent\Model;

class Post extends Model
{
    public function scopePublished($query)
    {
        return $query->where('is_draft', false);
    }
}
Now we have created the scope. However, we should not include the scope prefix when calling the method.
use App\Models\Post;
$publishedPosts = Post::published()->get();
Now it is more readable, shorter, and reusable. You can even chain scope calls; for example, if you have a popular scope, you can chain it like so:
use App\Models\Post;
$popularPublishedPosts = Post::published()->popular()->get();
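For that chained call to work, the Post model also needs a popular scope. A minimal sketch of one, assuming a hypothetical view_count column on the posts table:

public function scopePopular($query)
{
    // "Popular" is an assumed definition here: more than 1,000 views
    return $query->where('view_count', '>', 1000);
}

Add this to the Post model alongside scopePublished, and the chained query above will work.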
We can also create a scope that accepts parameters, like so:
class User extends Model
{
    public function scopeActive($query, $value)
    {
        return $query->where('is_active', $value);
    }
}
Now we can use the scope dynamically
// Get active users
$activeUsers = User::active(true)->get();
// Get inactive users
$inactiveUsers = User::active(false)->get();
And that's it: we have tried out our own local scope, and now we know how to create and use one.
A tutorial on how to use the rap2hpoutre/fast-excel package to export data from a collection or model in Excel xlsx, ods or csv format and import the data from a spreadsheet into a Laravel project.
The French version of this tutorial: Laravel : importer et exporter une collection en Excel avec Fast Excel
Fast Excel, or fast-excel, is a Laravel package for reading and writing spreadsheet files (CSV, XLSX, and ODS). It offers the features covered in this guide: exporting a collection or model to a spreadsheet, downloading the export, exporting several sheets in one workbook, and importing spreadsheets back into collections.
A collection (Illuminate\Support\Collection) in a Laravel project is a wrapper that provides methods to efficiently manipulate arrays of data.
This guide will show you how to install and use Fast Excel (rap2hpoutre/fast-excel) to perform the above operations.
At the time of writing this article, I am using version 9.42.2 of Laravel.
To install the rap2hpoutre/fast-excel package in a Laravel project, run the following composer command:
composer require rap2hpoutre/fast-excel
This command will download fast-excel and its dependencies into the /vendor directory of your Laravel project.
The rap2hpoutre/fast-excel package uses the box/spout library to read and write spreadsheet files.
Once fast-excel is downloaded in your project, you can initialize it directly in a controller and access the methods of the Rap2hpoutre\FastExcel\FastExcel class:
<?php
namespace App\Http\Controllers;
use Illuminate\Http\Request;
use Rap2hpoutre\FastExcel\FastExcel; // The FastExcel class
class FastExcelController extends Controller
{
    public function index()
    {
        $data = collect(); // A collection or a model
        $fastexcel = new FastExcel($data); // The FastExcel instance

        dd($fastexcel);
    }
}
FastExcel also provides the global helper fastexcel() which allows direct access to its methods anywhere in your project:
$data = collect(); // A collection or a model
$fastexcel = fastexcel($data); // The FastExcel instance
If importing the Rap2hpoutre\FastExcel\FastExcel class or using the global helper fastexcel() does not suit you, you can also register the FastExcel facade in the $aliases array of your /config/app.php file:
'aliases' => Facade::defaultAliases()->merge([
    'FastExcel' => Rap2hpoutre\FastExcel\Facades\FastExcel::class,
])->toArray(),
Next, initialize Fast Excel by passing data to the method data($data) where $data represents a collection or model:
// A collection or a model
$data = User::first();
// The FastExcel instance
$fastexcel = \FastExcel::data($data);
The export($file) method of FastExcel, where $file is the name of the file with the extension “.xlsx”, “.ods”, or “.csv”, exports the data of a collection or a model to the /public directory:
<?php
namespace App\Http\Controllers;
use Illuminate\Http\Request;
use App\Models\User;
class FastExcelController extends Controller
{
    public function index()
    {
        // A collection of "App\Models\User"
        $users = User::all();

        // Export to file "/public/users.xlsx"
        $path = fastexcel($users)->export("users.xlsx");

        // Export to file "/public/users.csv"
        // $path = fastexcel($users)->export("users.csv");

        // Export to file "/public/users.ods"
        // $path = fastexcel($users)->export("users.ods");
    }
}
In this example, $path contains the absolute path of the users.xlsx file created, for example C:\laragon\www\laravel-fastexcel\public\users.xlsx.
If you want to select the columns to be exported, rearrange the data or apply processing to them, you can use a callback after the file name in the export() method:
// Callback "function($user) { ... }" in export()
$path = (fastexcel($users))->export("users.xlsx", function ($user) {
return [
"Full Name" => ucfirst($user['name']),
"E-mail address" => $user['email']
];
});
Instead of using the export($file) method to save the .xlsx, .csv or .ods files in the /public directory, you can use the download($file) method to start the download:
// Collection of "App\Models\User";
$users = User::select('id', 'name', 'email')->get();
// Download file "users.xlsx"
return fastexcel($users)->download('users.xlsx');
Fast Excel allows you to export multiple collections or models to different spreadsheets in an Excel workbook using the SheetCollection class.
Let's take a look at an example that exports data from the User, Post and Product models to the file users-posts-products.xlsx:
<?php
namespace App\Http\Controllers;
use Illuminate\Http\Request;
// Importing SheetCollection
use Rap2hpoutre\FastExcel\SheetCollection;
use App\Models\Post;
use App\Models\Product;
use App\Models\User;
class FastExcelController extends Controller
{
    public function index()
    {
        // Collection "App\Models\User"
        $users = User::select('id', 'name', 'email')->get();

        // Collection "App\Models\Post"
        $posts = Post::orderBy("created_at")->get();

        // Collection "App\Models\Product"
        $products = Product::select('id', "name", "description")->get();

        // Collection of spreadsheets (SheetCollection)
        $sheets = new SheetCollection([
            "Users" => $users,
            "Posts" => $posts,
            "Products" => $products,
        ]);

        // Exporting spreadsheets to "/public/users-posts-products.xlsx"
        $path = fastexcel($sheets)->export("users-posts-products.xlsx");
    }
}
If you open the file users-posts-products.xlsx in Microsoft Excel, you will find the spreadsheets “Users”, “Posts” and “Products”.
If you have a collection that exports a large amount of data, 1M+ rows for example, you can use a generator function to avoid the memory_limit problem:
use App\Models\Client;
// A generator function that streams clients one by one
function clients () {
    foreach (Client::cursor() as $client) {
        yield $client;
    }
}

// Export to "clients.xlsx"; passing the generator directly keeps memory usage low
fastexcel(clients())->export("clients.xlsx");
You can use FastExcel’s import($file) method, where $file represents the path to an .xlsx or .csv file, to import the entries (rows) in $file into a collection (Illuminate\Support\Collection):
// Importing the file "/public/users.xlsx"
$data = fastexcel()->import("users.xlsx");
// $data contains a collection
dd($data);
FastExcel allows you to browse the lines of a file and insert them into the database using a callback after the file name in the import() method:
use App\Models\Client;
// Callback "function ($line) { ... }" in "import"
$data = fastexcel()->import("clients.xlsx", function ($line) {
return Client::create([
'name' => $line['name'],
'email' => $line['email'],
'phone' => $line['phone'],
'address' => $line['address'],
]);
});
$line['name'] specifies the column named “name” in the clients.xlsx file.
FastExcel’s importSheets($file) method imports spreadsheet entries (rows) from $file into a collection:
// Import of the file "/public/users-posts-products.xlsx"
$data = fastexcel()->importSheets("users-posts-products.xlsx");
// $data contains a collection of 3 arrays
dd($data);
To import a specific spreadsheet, you can specify its number or position in the workbook using the sheet($number) method:
// Import of the 2nd spreadsheet from the file "/public/users-posts-products.xlsx"
$data = fastexcel()->sheet(2)->import("users-posts-products.xlsx");
We have just seen how to use the Fast Excel package to export data from a collection or model to an Excel file in .xlsx, .csv or .ods format, and import data from a spreadsheet as a collection.
The Fast Excel documentation also shows how to apply styles (text color, font, background, …) to columns and rows of a spreadsheet.
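As a taste, here is a minimal sketch based on that documentation, assuming a fast-excel release built on box/spout (newer releases use OpenSpout's Style class instead, so adjust the imports to your installed version):

use Box\Spout\Writer\Common\Creator\Style\StyleBuilder;
use App\Models\User;

// Bold header row, light grey data rows
$headerStyle = (new StyleBuilder())->setFontBold()->build();
$rowsStyle = (new StyleBuilder())->setBackgroundColor('EDEDED')->build();

return fastexcel(User::all())
    ->headerStyle($headerStyle)
    ->rowsStyle($rowsStyle)
    ->download('users.xlsx');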
Let's summarize the fast-excel methods seen:

- fastexcel($data): the global FastExcel helper, initialized with the $data of a collection or a model
- import($file): import lines from an .xlsx, .csv, or .ods $file into a collection
- export($file): export data from a collection or a model to an .xlsx, .csv, or .ods $file
- importSheets($file): import all spreadsheets from $file
- sheet($number): import a specific spreadsheet $number
- download($file): start downloading the file $file
Be well! 😉
Laravel Grapes is a library for the Laravel framework that offers a drag-and-drop CMS page builder for the frontend. It supports all Laravel functionality and lets users change the entire frontend and its content in just a few clicks.
Laravel Grapes comes with a Pro version that will be available on CodeCanyon soon.
| Feature | Regular Version | Pro Version |
|---|---|---|
| Laravel CSRF | yes | yes |
| Laravel Auth User Condition | yes | yes |
| Laravel Auth Dynamic Guard | yes | yes |
| Multilingual | yes | yes |
| Dynamic Laravel Shortcode widgets | 1 | unlimited |
| Dynamic Routes /{id} | No | yes |
composer require msa/laravel-grapes
php artisan vendor:publish --provider="MSA\LaravelGrapes\LaravelGrapesServiceProvider" --tag="*"
<?php

return [
    // routes configurations
    'builder_prefix' => 'hello', // prefix for builder
    'middleware' => null, // middleware for builder
    'frontend_prefix' => '', // prefix for frontend

    /* Define additional translation languages. */
    'languages' => [
        'ar',
        'es',
    ],
];
The builder by default comes with the route route('website.builder'), which resolves to your-domain.com/hello/front-end-builder.
You can change the builder prefix to hi, and the builder will then load under the route prefix hi instead of hello.
Assign any middleware you want to the builder, for example auth:admin.
The frontend prefix comes empty by default, which means any generated frontend page loads directly at the slug you created. If you need a prefix for your generated frontend pages, change it to the prefix you want, as in the sketch below.
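A minimal sketch of config/lg.php with all three route settings changed (the values are illustrative assumptions, not requirements):

<?php

// config/lg.php
return [
    'builder_prefix' => 'hi',        // builder now served at /hi/front-end-builder
    'middleware' => 'auth:admin',    // protect the builder behind an admin guard
    'frontend_prefix' => 'pages',    // generated pages load at /pages/{slug}

    /* Define additional translation languages. */
    'languages' => [
        'ar',
        'es',
    ],
];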
Now Laravel Grapes is working.
Navigate to the builder route: your-domain.com/builder_prefix/front-end-builder.
The control panel consists of three panels: the options panel, the views panel, and the blocks panel. You can also customize the builder style sheet, as described later.
The options panel consists of 11 buttons:
The view components button shows grid lines for all components dropped on the canvas, which helps you select each component individually.
The preview button shows the page without the panels.
The full screen mode button hides all the browser chrome and shows only the builder.
The view code button shows you the HTML and CSS code of the page.
When you press the create new page button in the top bar, a popup modal opens with a new page form. Fill in the page name and slug; if you want the page to become the home page, use the slug /.
After submitting the form, you will receive a toast notification that the page has been created successfully. Select the new page through the select page input on the top bar to start modifying it.
Don't forget to remove the default route in routes/web.php, because it will conflict with the home page. You don't need web.php for frontend routes, since Laravel Grapes comes with its own route file:
<?php

use Illuminate\Support\Facades\Route;

/*
|--------------------------------------------------------------------------
| Web Routes
|--------------------------------------------------------------------------
|
| Here is where you can register web routes for your application. These
| routes are loaded by the RouteServiceProvider within a group which
| contains the "web" middleware group. Now create something great!
|
*/

// Route::get('/', function () {
//     return view('welcome');
// });
The edit code button opens a popup code editor modal that holds the page code, including HTML and CSS.
You can edit the HTML and CSS from this code editor popup; for editing styles, you will find the page style inside the <style></style> tag.
The component manager button opens a popup holding all custom components that have been saved for reuse on other pages, letting you rename or delete each component.
The page manager button opens a popup holding all pages and lets you edit each page's name and slug.
The clear canvas button removes all components from the canvas.
Laravel Grapes lets you save any custom component for reuse on other pages; just select the component and click the save component button.
The save changes button updates the page content; if you then visit the page slug, you will find that the page content has changed.
The options panel also includes two select inputs:
The select page input lets you select the page you want to modify.
The select device input lets you adjust the page HTML and styles for different screen sizes.
The views panel consists of 4 buttons: the block manager, the layer manager, the component settings, and the style manager.
The block manager comes with Bootstrap components, organized into categories: Layout, Components, Typography, Templates, and Saved (your saved custom components).
Another utility tool you might find useful when working with web elements is the layer manager. It is a tree overview of the structure's nodes and makes them easier to manage.
Each component comes with its own settings that you can modify. For example, if you select a link element on the canvas and go to the component settings, you will find the link's traits there.
The style manager is composed of sectors, which group different types of CSS properties. You can add, for instance, a Dimension sector for width and height, and a Typography sector for font-size, color, and more. It is up to you to decide how to organize the sectors, for example:
- Classes
- General
- Flex Options
- Dimension Options
- Typography Options
- Decorations Options
- Extra
Go to public/css/laravel-grapes.css to customize the Laravel Grapes builder style sheet as you wish.
Each text component has a translation input trait for the languages you defined in config/lg.php; for example, the ar and es locales configured earlier.
MIT © Mohamed Allam
I recently built custom ecommerce sites for a couple of clients, and I decided to make a tutorial from my experience using Laravel to build those sites.
In this tutorial, we will build a fully functional Laravel ecommerce site for a mobile phone dealership called Appleplug.Store. Appleplug sells mobile phones and accessories, and they wanted to upgrade their WordPress-powered ecommerce site to a custom solution. View their live website at appleplug.store
In this tutorial, we will take on this project and build the ecommerce site.
This is going to be an ongoing series, starting with this introduction. In this part of the series, we will do a basic project setup and install the required tools.
By the end of this series, you will have:
Built a working ecommerce website deployed in production
Learned to do test-driven development in Laravel
Understood Laravel beyond basic CRUD
Learned to use Laravel's background jobs
Learned to handle authorization
And some other cool stuff
In order to follow along with this tutorial, you will need PHP, Composer, MySQL, and Node.js installed.
A few things to note: we will be using
Bootstrap CSS for our styling; you're, however, welcome to use any CSS framework of your choice
The Hotwired stack (Turbo, Stimulus)
I'm assuming you already have Composer installed, so let's start by creating a Laravel project:
composer create-project laravel/laravel --prefer-dist ecommerce
or using the laravel binary
laravel new ecommerce
This will create a new project and install all the dependencies in the ecommerce directory.
Next, let us set up our database
sudo mysql
In the MySQL console, create a database and a user, and grant rights on the database to the newly created user:
create database ecommerce;
create user laravel@localhost identified by 'secure password';
grant all on ecommerce.* to laravel@localhost;
After granting rights, open the project folder in your favorite text editor. I'm using JetBrains PhpStorm; if you're interested in using PhpStorm too, check out Jeffrey Way's video on Laracasts about how to set it up.
In your text editor, open the .env file and edit the DB_* entries to match the user and database we just created, as sketched below.
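A minimal sketch of the relevant .env entries, assuming the database and user created above (substitute your own password):

DB_CONNECTION=mysql
DB_HOST=127.0.0.1
DB_PORT=3306
DB_DATABASE=ecommerce
DB_USERNAME=laravel
DB_PASSWORD="secure password"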
Next, open the terminal in the working directory and run our first migrations with
php artisan migrate
Next, let’s install other tools we will be using throughout the development of this application.
First, install the package that allows us to use Turbo within our application:
composer require tonysm/turbo-laravel
After installing, execute the turbo:install Artisan command, which will add a couple of JS dependencies to your package.json file.
Next, let's install another package to let us use Stimulus in Laravel:
composer require tonysm/stimulus-laravel
After installing, execute the stimulus:install Artisan command to add Stimulus as a dependency and set up the basic scaffolding.
Last, let's install some Blade helper functions to use with Stimulus:
composer require flixtechs-labs/turbo-laravel-helpers
Now that our basic setup is done, let’s install the dependencies by running yarn or npm install and then start the dev server with
php artisan serve
In the next blog post, we will begin to actually build our ecommerce site. We will set up user authentication and authorization.
Subscribe to the newsletter and get notified when I post the next tutorial
Recently I was working with a customer where our focus was a performance audit of their multiple MySQL database nodes. We started looking into the stats of the performance schema, and while working, the customer raised two interesting questions: how can they make complete use of the performance schema, and how can they find what they require? I realized that it is important to understand the insights the performance schema provides and how to make effective use of it. This blog should make it easier to understand for everyone.
The performance schema is an engine in MySQL; whether it is enabled can easily be checked using SHOW ENGINES. It is built entirely upon various sets of instruments (also called event names), each serving a different purpose.
Instruments are the main part of the performance schema. They are useful when investigating a problem and its root causes. Some examples are listed below (but they are not limited to these):
1. Which IO operation is causing MySQL to slow down?
2. Which file a process/thread is mostly waiting for?
3. At which execution stage is a query taking time, or how much time will an ALTER command take?
4. Which process is consuming most of the memory or how to identify the cause of memory leakage?
Instruments are a combination of different sets of components like wait, io, sql, binlog, file, etc. If we combine these components, they become a meaningful tool that helps us troubleshoot different issues. For example, wait/io/file/sql/binlog is one of the instruments, providing information regarding waits and I/O details on binary log files. Instruments are read from the left, and components are appended with the delimiter “/”. The more components we add to an instrument, the more specific it becomes; i.e. the longer the instrument name, the more specific the event it covers.
You can locate all instruments available in your MySQL version under table setup_instruments. It is worth noting that every version of MySQL has a different number of instruments.
select count(1) from performance_schema.setup_instruments;
+----------+
| count(1) |
+----------+
|     1269 |
+----------+
For easy understanding, instruments can be divided into seven different parts as shown below. The MySQL version I am using here is 8.0.30. In earlier versions, we used to have only four, so expect to see different types of instruments in case you are using different/lower versions.
select distinct(substring_index(name,'/',1)) from performance_schema.setup_instruments;
+-------------------------------+
| (substring_index(name,'/',1)) |
+-------------------------------+
| wait                          |
| idle                          |
| stage                         |
| statement                     |
| transaction                   |
| memory                        |
| error                         |
+-------------------------------+
7 rows in set (0.01 sec)
The total number of instruments for these seven components is listed below. You can identify these instruments starting with these names only.
select distinct(substring_index(name,'/',1)) as instrument_name,count(1) from performance_schema.setup_instruments group by instrument_name;
+-----------------+----------+
| instrument_name | count(1) |
+-----------------+----------+
| wait            |      399 |
| idle            |        1 |
| stage           |      133 |
| statement       |      221 |
| transaction     |        1 |
| memory          |      513 |
| error           |        1 |
+-----------------+----------+
I remember a customer asking me: since there are thousands of instruments available, how can one find the instrument one requires? As I mentioned before, instruments are read from left to right, so we can first locate the instrument we need and then look up its respective performance data.
For example, suppose I need to observe the performance of the redo logs (log files or WAL files) of my MySQL instance, and check whether threads/connections have to wait for the redo log files to be flushed before further writing, and if so, for how long.
select * from setup_instruments where name like '%innodb_log_file%';
+-----------------------------------------+---------+-------+------------+------------+---------------+
| NAME                                    | ENABLED | TIMED | PROPERTIES | VOLATILITY | DOCUMENTATION |
+-----------------------------------------+---------+-------+------------+------------+---------------+
| wait/synch/mutex/innodb/log_files_mutex | NO      | NO    |            |          0 | NULL          |
| wait/io/file/innodb/innodb_log_file     | YES     | YES   |            |          0 | NULL          |
+-----------------------------------------+---------+-------+------------+------------+---------------+
Here you see that I have two instruments for redo log files. One is for the mutex stats on the redo log files and the second is for the IO wait stats on the redo log files.
Example two: you need to find the operations or instruments whose required time can be estimated, i.e. how much time a bulk update will take. Below are all the instruments that help you track such progress.
select * from setup_instruments where PROPERTIES='progress';
+------------------------------------------------------+---------+-------+------------+------------+---------------+
| NAME                                                 | ENABLED | TIMED | PROPERTIES | VOLATILITY | DOCUMENTATION |
+------------------------------------------------------+---------+-------+------------+------------+---------------+
| stage/sql/copy to tmp table                          | YES     | YES   | progress   |          0 | NULL          |
| stage/sql/Applying batch of row changes (write)      | YES     | YES   | progress   |          0 | NULL          |
| stage/sql/Applying batch of row changes (update)     | YES     | YES   | progress   |          0 | NULL          |
| stage/sql/Applying batch of row changes (delete)     | YES     | YES   | progress   |          0 | NULL          |
| stage/innodb/alter table (end)                       | YES     | YES   | progress   |          0 | NULL          |
| stage/innodb/alter table (flush)                     | YES     | YES   | progress   |          0 | NULL          |
| stage/innodb/alter table (insert)                    | YES     | YES   | progress   |          0 | NULL          |
| stage/innodb/alter table (log apply index)           | YES     | YES   | progress   |          0 | NULL          |
| stage/innodb/alter table (log apply table)           | YES     | YES   | progress   |          0 | NULL          |
| stage/innodb/alter table (merge sort)                | YES     | YES   | progress   |          0 | NULL          |
| stage/innodb/alter table (read PK and internal sort) | YES     | YES   | progress   |          0 | NULL          |
| stage/innodb/alter tablespace (encryption)           | YES     | YES   | progress   |          0 | NULL          |
| stage/innodb/buffer pool load                        | YES     | YES   | progress   |          0 | NULL          |
| stage/innodb/clone (file copy)                       | YES     | YES   | progress   |          0 | NULL          |
| stage/innodb/clone (redo copy)                       | YES     | YES   | progress   |          0 | NULL          |
| stage/innodb/clone (page copy)                       | YES     | YES   | progress   |          0 | NULL          |
+------------------------------------------------------+---------+-------+------------+------------+---------------+
The above instruments are the ones for which progress can be tracked.
To take advantage of these instruments, they first need to be enabled so that the performance schema logs the related data. In addition to logging the information of running threads, it is also possible to maintain a history of such threads (statements/stages or any particular operation). Let's see how many instruments are enabled by default in the version I am using; I have not enabled any other instrument explicitly.
select count(*) from setup_instruments where ENABLED='YES';
+----------+
| count(*) |
+----------+
|      810 |
+----------+
1 row in set (0.00 sec)
The query below lists the first 30 enabled instruments for which logging will take place in the tables.
select * from performance_schema.setup_instruments where enabled='YES' limit 30;
+---------------------------------------+---------+-------+------------+------------+---------------+
| NAME                                  | ENABLED | TIMED | PROPERTIES | VOLATILITY | DOCUMENTATION |
+---------------------------------------+---------+-------+------------+------------+---------------+
| wait/io/file/sql/binlog               | YES     | YES   |            |          0 | NULL          |
| wait/io/file/sql/binlog_cache         | YES     | YES   |            |          0 | NULL          |
| wait/io/file/sql/binlog_index         | YES     | YES   |            |          0 | NULL          |
| wait/io/file/sql/binlog_index_cache   | YES     | YES   |            |          0 | NULL          |
| wait/io/file/sql/relaylog             | YES     | YES   |            |          0 | NULL          |
| wait/io/file/sql/relaylog_cache       | YES     | YES   |            |          0 | NULL          |
| wait/io/file/sql/relaylog_index       | YES     | YES   |            |          0 | NULL          |
| wait/io/file/sql/relaylog_index_cache | YES     | YES   |            |          0 | NULL          |
| wait/io/file/sql/io_cache             | YES     | YES   |            |          0 | NULL          |
| wait/io/file/sql/casetest             | YES     | YES   |            |          0 | NULL          |
| wait/io/file/sql/dbopt                | YES     | YES   |            |          0 | NULL          |
| wait/io/file/sql/ERRMSG               | YES     | YES   |            |          0 | NULL          |
| wait/io/file/sql/select_to_file       | YES     | YES   |            |          0 | NULL          |
| wait/io/file/sql/file_parser          | YES     | YES   |            |          0 | NULL          |
| wait/io/file/sql/FRM                  | YES     | YES   |            |          0 | NULL          |
| wait/io/file/sql/load                 | YES     | YES   |            |          0 | NULL          |
| wait/io/file/sql/LOAD_FILE            | YES     | YES   |            |          0 | NULL          |
| wait/io/file/sql/log_event_data       | YES     | YES   |            |          0 | NULL          |
| wait/io/file/sql/log_event_info       | YES     | YES   |            |          0 | NULL          |
| wait/io/file/sql/misc                 | YES     | YES   |            |          0 | NULL          |
| wait/io/file/sql/pid                  | YES     | YES   |            |          0 | NULL          |
| wait/io/file/sql/query_log            | YES     | YES   |            |          0 | NULL          |
| wait/io/file/sql/slow_log             | YES     | YES   |            |          0 | NULL          |
| wait/io/file/sql/tclog                | YES     | YES   |            |          0 | NULL          |
| wait/io/file/sql/trigger_name         | YES     | YES   |            |          0 | NULL          |
| wait/io/file/sql/trigger              | YES     | YES   |            |          0 | NULL          |
| wait/io/file/sql/init                 | YES     | YES   |            |          0 | NULL          |
| wait/io/file/sql/SDI                  | YES     | YES   |            |          0 | NULL          |
| wait/io/file/sql/hash_join            | YES     | YES   |            |          0 | NULL          |
| wait/io/file/mysys/proc_meminfo       | YES     | YES   |            |          0 | NULL          |
+---------------------------------------+---------+-------+------------+------------+---------------+
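Individual instruments can be switched on the same way: by updating their rows in setup_instruments. A minimal sketch that enables the redo log mutex instrument seen earlier (the change lasts only until restart unless you also persist it in my.cnf via performance-schema-instrument):

update performance_schema.setup_instruments set ENABLED='YES', TIMED='YES' where NAME='wait/synch/mutex/innodb/log_files_mutex';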
As I mentioned previously, it is also possible to maintain a history of the events. For example, if you are running a load test and want to analyze the performance of queries after it completes, you need to activate the below consumers, if they are not activated yet.
select * from performance_schema.setup_consumers;
+----------------------------------+---------+
| NAME                             | ENABLED |
+----------------------------------+---------+
| events_stages_current            | YES     |
| events_stages_history            | YES     |
| events_stages_history_long       | YES     |
| events_statements_cpu            | YES     |
| events_statements_current        | YES     |
| events_statements_history        | YES     |
| events_statements_history_long   | YES     |
| events_transactions_current      | YES     |
| events_transactions_history      | YES     |
| events_transactions_history_long | YES     |
| events_waits_current             | YES     |
| events_waits_history             | YES     |
| events_waits_history_long        | YES     |
| global_instrumentation           | YES     |
| thread_instrumentation           | YES     |
| statements_digest                | YES     |
+----------------------------------+---------+
Note: the first 15 records in the above rows are self-explanatory, but the last one, statements_digest, enables digest text for SQL statements. By digest I mean grouping similar queries and showing their performance; this is done using a hashing algorithm.
Let's say you want to analyze the stages in which a query spends most of its time; you would enable the respective logging using the query below.
MySQL> update performance_schema.setup_consumers set ENABLED='YES' where NAME='events_stages_current';
Query OK, 1 row affected (0.00 sec)
Rows matched: 1  Changed: 1  Warnings: 0
Now that we know what instruments are, how to enable them, and how much data we want to store, it's time to understand how to make use of these instruments. To make it easier to understand, I have taken the output of a few instruments from my test cases; it would not be possible to cover all of them, as there are more than a thousand.
Please note that to generate a fake load, I used sysbench (if you are not familiar with it, read about it here) to create read and write traffic using the below details:
lua              : oltp_read_write.lua
number of tables : 1
table_size       : 100000
threads          : 4/10
rate             : 10
As an example, think about a case when you want to find out where memory is getting utilized. To find out, let's execute the below query against the table related to memory.
select * from memory_summary_global_by_event_name order by SUM_NUMBER_OF_BYTES_ALLOC desc limit 3\G
*************************** 1. row ***************************
EVENT_NAME: memory/innodb/buf_buf_pool
COUNT_ALLOC: 24
COUNT_FREE: 0
SUM_NUMBER_OF_BYTES_ALLOC: 3292102656
SUM_NUMBER_OF_BYTES_FREE: 0
LOW_COUNT_USED: 0
CURRENT_COUNT_USED: 24
HIGH_COUNT_USED: 24
LOW_NUMBER_OF_BYTES_USED: 0
CURRENT_NUMBER_OF_BYTES_USED: 3292102656
HIGH_NUMBER_OF_BYTES_USED: 3292102656
*************************** 2. row ***************************
EVENT_NAME: memory/sql/THD::main_mem_root
COUNT_ALLOC: 138566
COUNT_FREE: 138543
SUM_NUMBER_OF_BYTES_ALLOC: 2444314336
SUM_NUMBER_OF_BYTES_FREE: 2443662928
LOW_COUNT_USED: 0
CURRENT_COUNT_USED: 23
HIGH_COUNT_USED: 98
LOW_NUMBER_OF_BYTES_USED: 0
CURRENT_NUMBER_OF_BYTES_USED: 651408
HIGH_NUMBER_OF_BYTES_USED: 4075056
*************************** 3. row ***************************
EVENT_NAME: memory/sql/Filesort_buffer::sort_keys
COUNT_ALLOC: 58869
COUNT_FREE: 58868
SUM_NUMBER_OF_BYTES_ALLOC: 2412676319
SUM_NUMBER_OF_BYTES_FREE: 2412673879
LOW_COUNT_USED: 0
CURRENT_COUNT_USED: 1
HIGH_COUNT_USED: 13
LOW_NUMBER_OF_BYTES_USED: 0
CURRENT_NUMBER_OF_BYTES_USED: 2440
HIGH_NUMBER_OF_BYTES_USED: 491936

Above are the top three records, showing where the memory is mostly being utilized.
The instrument memory/innodb/buf_buf_pool is related to the buffer pool, which is utilizing 3GB; we can fetch this information from SUM_NUMBER_OF_BYTES_ALLOC. Another column that is important to consider is CURRENT_COUNT_USED, which tells us how many blocks of data are currently allocated; once work is done, the value of this column is updated. Looking at the stats of this record, consumption of 3GB is not a problem, since MySQL uses the buffer pool quite heavily (for example, while writing data, loading data, modifying data, etc.). A problem arises when you have memory leakage issues or the buffer pool is not being used; in such cases, this instrument is quite useful for analysis.
The second instrument, memory/sql/THD::main_mem_root, which is utilizing 2GB, is related to sql (that's how we read it, from the very left). THD::main_mem_root is one of the thread classes. Let us break this instrument down:
THD represents a thread.
main_mem_root is a member of type MEM_ROOT, a structure used to allocate memory to threads while parsing a query, building execution plans, executing nested queries/sub-queries, and making other allocations during query execution. In our case, we want to check which thread/host is consuming this memory so that we can further optimize the query. Before digging deeper, let's understand the third instrument, which is an important one to look for.
memory/sql/Filesort_buffer::sort_keys: as mentioned earlier, instrument names are read starting from the left, so this is memory allocated by sql. The next component, Filesort_buffer::sort_keys, is responsible for sorting data in a buffer (examples include index creation or an ordinary ORDER BY clause).
It's time to dig down and analyze which connection is using this memory. To find out, I used the table memory_summary_by_host_by_event_name and filtered the records coming from my application server.
select * from memory_summary_by_host_by_event_name where HOST='10.11.120.141' order by SUM_NUMBER_OF_BYTES_ALLOC desc limit 2\G
*************************** 1. row ***************************
HOST: 10.11.120.141
EVENT_NAME: memory/sql/THD::main_mem_root
COUNT_ALLOC: 73817
COUNT_FREE: 73810
SUM_NUMBER_OF_BYTES_ALLOC: 1300244144
SUM_NUMBER_OF_BYTES_FREE: 1300114784
LOW_COUNT_USED: 0
CURRENT_COUNT_USED: 7
HIGH_COUNT_USED: 39
LOW_NUMBER_OF_BYTES_USED: 0
CURRENT_NUMBER_OF_BYTES_USED: 129360
HIGH_NUMBER_OF_BYTES_USED: 667744
*************************** 2. row ***************************
HOST: 10.11.120.141
EVENT_NAME: memory/sql/Filesort_buffer::sort_keys
COUNT_ALLOC: 31318
COUNT_FREE: 31318
SUM_NUMBER_OF_BYTES_ALLOC: 1283771072
SUM_NUMBER_OF_BYTES_FREE: 1283771072
LOW_COUNT_USED: 0
CURRENT_COUNT_USED: 0
HIGH_COUNT_USED: 8
LOW_NUMBER_OF_BYTES_USED: 0
CURRENT_NUMBER_OF_BYTES_USED: 0
HIGH_NUMBER_OF_BYTES_USED: 327936
The event memory/sql/THD::main_mem_root consumed more than 1GB of memory (sum) for the host 10.11.120.141, which is my application host, at the time this query was executed. Now that we know this host is consuming memory, we can dig down further to find the responsible queries, such as nested queries or subqueries, and then try to optimize them.
Similarly, the memory allocated via Filesort_buffer::sort_keys is also more than 1GB (total) at the time of execution. Such instruments signal us to review queries that sort, i.e. use an ORDER BY clause.
Let's try to find the culprit thread in a case where most of the memory is being utilized by filesort. The first query helps us find the host and event name (instrument):
select * from memory_summary_by_host_by_event_name order by SUM_NUMBER_OF_BYTES_ALLOC desc limit 1\G
*************************** 1. row ***************************
HOST: 10.11.54.152
EVENT_NAME: memory/sql/Filesort_buffer::sort_keys
COUNT_ALLOC: 5617297
COUNT_FREE: 5617297
SUM_NUMBER_OF_BYTES_ALLOC: 193386762784
SUM_NUMBER_OF_BYTES_FREE: 193386762784
LOW_COUNT_USED: 0
CURRENT_COUNT_USED: 0
HIGH_COUNT_USED: 20
LOW_NUMBER_OF_BYTES_USED: 0
CURRENT_NUMBER_OF_BYTES_USED: 0
HIGH_NUMBER_OF_BYTES_USED: 819840
Aha, this is my application host. Let's find out which user is executing the statements and the respective thread id.
select * from memory_summary_by_account_by_event_name where HOST='10.11.54.152' order by SUM_NUMBER_OF_BYTES_ALLOC desc limit 1\G
*************************** 1. row ***************************
USER: sbuser
HOST: 10.11.54.152
EVENT_NAME: memory/sql/Filesort_buffer::sort_keys
COUNT_ALLOC: 5612993
COUNT_FREE: 5612993
SUM_NUMBER_OF_BYTES_ALLOC: 193239513120
SUM_NUMBER_OF_BYTES_FREE: 193239513120
LOW_COUNT_USED: 0
CURRENT_COUNT_USED: 0
HIGH_COUNT_USED: 20
LOW_NUMBER_OF_BYTES_USED: 0
CURRENT_NUMBER_OF_BYTES_USED: 0
HIGH_NUMBER_OF_BYTES_USED: 819840

select * from memory_summary_by_thread_by_event_name where EVENT_NAME='memory/sql/Filesort_buffer::sort_keys' order by SUM_NUMBER_OF_BYTES_ALLOC desc limit 1\G
*************************** 1. row ***************************
THREAD_ID: 84
EVENT_NAME: memory/sql/Filesort_buffer::sort_keys
COUNT_ALLOC: 565645
COUNT_FREE: 565645
SUM_NUMBER_OF_BYTES_ALLOC: 19475083680
SUM_NUMBER_OF_BYTES_FREE: 19475083680
LOW_COUNT_USED: 0
CURRENT_COUNT_USED: 0
HIGH_COUNT_USED: 2
LOW_NUMBER_OF_BYTES_USED: 0
CURRENT_NUMBER_OF_BYTES_USED: 0
HIGH_NUMBER_OF_BYTES_USED: 81984
Now we have the complete details of the user and its thread id. Let's see what sort of queries are being executed by this thread.
select * from events_statements_history where THREAD_ID=84 order by SORT_SCAN desc\G
*************************** 1. row ***************************
THREAD_ID: 84
EVENT_ID: 48091828
END_EVENT_ID: 48091833
EVENT_NAME: statement/sql/select
SOURCE: init_net_server_extension.cc:95
TIMER_START: 145083499054314000
TIMER_END: 145083499243093000
TIMER_WAIT: 188779000
LOCK_TIME: 1000000
SQL_TEXT: SELECT c FROM sbtest2 WHERE id BETWEEN 5744223 AND 5744322 ORDER BY c
DIGEST: 4f764af1c0d6e44e4666e887d454a241a09ac8c4df9d5c2479f08b00e4b9b80d
DIGEST_TEXT: SELECT `c` FROM `sbtest2` WHERE `id` BETWEEN ? AND ? ORDER BY `c`
CURRENT_SCHEMA: sysbench
OBJECT_TYPE: NULL
OBJECT_SCHEMA: NULL
OBJECT_NAME: NULL
OBJECT_INSTANCE_BEGIN: NULL
MYSQL_ERRNO: 0
RETURNED_SQLSTATE: NULL
MESSAGE_TEXT: NULL
ERRORS: 0
WARNINGS: 0
ROWS_AFFECTED: 0
ROWS_SENT: 14
ROWS_EXAMINED: 28
CREATED_TMP_DISK_TABLES: 0
CREATED_TMP_TABLES: 0
SELECT_FULL_JOIN: 0
SELECT_FULL_RANGE_JOIN: 0
SELECT_RANGE: 1
SELECT_RANGE_CHECK: 0
SELECT_SCAN: 0
SORT_MERGE_PASSES: 0
SORT_RANGE: 0
SORT_ROWS: 14
SORT_SCAN: 1
NO_INDEX_USED: 0
NO_GOOD_INDEX_USED: 0
NESTING_EVENT_ID: NULL
NESTING_EVENT_TYPE: NULL
NESTING_EVENT_LEVEL: 0
STATEMENT_ID: 49021382
CPU_TIME: 185100000
EXECUTION_ENGINE: PRIMARY
I have pasted only one record here, selected by SORT_SCAN (which indicates a sort requiring a full scan), but you can find similar queries in your own case and then try to optimize them, for example by creating an index or some other suitable solution.
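If you would rather start from the statements than from memory, the digest summary table can surface the heaviest sorters directly. A sketch using standard performance_schema columns (adjust the LIMIT to taste):

select DIGEST_TEXT, COUNT_STAR, SUM_SORT_SCAN, SUM_SORT_ROWS
from performance_schema.events_statements_summary_by_digest
order by SUM_SORT_SCAN desc limit 5;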
Example Two
Let's try to find out the situation of table locking, i.e. which lock (read lock, write lock, etc.) has been placed on a user table and for what duration (displayed in picoseconds).
Lock a table with a write lock:
mysql> lock tables sbtest2 write;
Query OK, 0 rows affected (0.00 sec)
mysql> show processlist;
+----+--------+---------------------+--------------------+-------------+--------+-----------------------------------------------------------------+------------------+-----------+-----------+---------------+
| Id | User   | Host                | db                 | Command     | Time   | State                                                           | Info             | Time_ms   | Rows_sent | Rows_examined |
+----+--------+---------------------+--------------------+-------------+--------+-----------------------------------------------------------------+------------------+-----------+-----------+---------------+
| 8  | repl   | 10.11.139.171:53860 | NULL               | Binlog Dump | 421999 | Source has sent all binlog to replica; waiting for more updates | NULL             | 421998368 | 0         | 0             |
| 9  | repl   | 10.11.223.98:51212  | NULL               | Binlog Dump | 421998 | Source has sent all binlog to replica; waiting for more updates | NULL             | 421998262 | 0         | 0             |
| 25 | sbuser | 10.11.54.152:38060  | sysbench           | Sleep       | 65223  |                                                                 | NULL             | 65222573  | 0         | 1             |
| 26 | sbuser | 10.11.54.152:38080  | sysbench           | Sleep       | 65222  |                                                                 | NULL             | 65222177  | 0         | 1             |
| 27 | sbuser | 10.11.54.152:38090  | sysbench           | Sleep       | 65223  |                                                                 | NULL             | 65222438  | 0         | 0             |
| 28 | sbuser | 10.11.54.152:38096  | sysbench           | Sleep       | 65223  |                                                                 | NULL             | 65222489  | 0         | 1             |
| 29 | sbuser | 10.11.54.152:38068  | sysbench           | Sleep       | 65223  |                                                                 | NULL             | 65222527  | 0         | 1             |
| 45 | root   | localhost           | performance_schema | Sleep       | 7722   |                                                                 | NULL             | 7722009   | 40        | 348           |
| 46 | root   | localhost           | performance_schema | Sleep       | 6266   |                                                                 | NULL             | 6265800   | 16        | 1269          |
| 47 | root   | localhost           | performance_schema | Sleep       | 4904   |                                                                 | NULL             | 4903622   | 0         | 23            |
| 48 | root   | localhost           | performance_schema | Sleep       | 1777   |                                                                 | NULL             | 1776860   | 0         | 0             |
| 54 | root   | localhost           | sysbench           | Sleep       | 689    |                                                                 | NULL             | 688740    | 0         | 1             |
| 58 | root   | localhost           | NULL               | Sleep       | 44     |                                                                 | NULL             | 44263     | 1         | 1             |
| 59 | root   | localhost           | sysbench           | Query       | 0      | init                                                            | show processlist | 0         | 0         | 0             |
+----+--------+---------------------+--------------------+-------------+--------+-----------------------------------------------------------------+------------------+-----------+-----------+---------------+
Now, think of a situation wherein you are not aware of this session, try to read the table, and end up waiting on metadata locks. In this situation, we take the help of the lock-related instrument wait/lock/table/sql/handler to find out which session is locking the table (table_handles is the table backing the table-lock instrument):
mysql> select * from table_handles where object_name='sbtest2' and OWNER_THREAD_ID is not null;
+-------------+---------------+-------------+-----------------------+-----------------+----------------+---------------+----------------+
| OBJECT_TYPE | OBJECT_SCHEMA | OBJECT_NAME | OBJECT_INSTANCE_BEGIN | OWNER_THREAD_ID | OWNER_EVENT_ID | INTERNAL_LOCK | EXTERNAL_LOCK  |
+-------------+---------------+-------------+-----------------------+-----------------+----------------+---------------+----------------+
| TABLE       | sysbench      | sbtest2     | 140087472317648       | 141             | 77             | NULL          | WRITE EXTERNAL |
+-------------+---------------+-------------+-----------------------+-----------------+----------------+---------------+----------------+
mysql> select * from metadata_locks;
+---------------+--------------------+------------------+-------------+-----------------------+----------------------+---------------+-------------+-------------------+-----------------+----------------+
| OBJECT_TYPE   | OBJECT_SCHEMA      | OBJECT_NAME      | COLUMN_NAME | OBJECT_INSTANCE_BEGIN | LOCK_TYPE            | LOCK_DURATION | LOCK_STATUS | SOURCE            | OWNER_THREAD_ID | OWNER_EVENT_ID |
+---------------+--------------------+------------------+-------------+-----------------------+----------------------+---------------+-------------+-------------------+-----------------+----------------+
| GLOBAL        | NULL               | NULL             | NULL        | 140087472151024       | INTENTION_EXCLUSIVE  | STATEMENT     | GRANTED     | sql_base.cc:5534  | 141             | 77             |
| SCHEMA        | sysbench           | NULL             | NULL        | 140087472076832       | INTENTION_EXCLUSIVE  | TRANSACTION   | GRANTED     | sql_base.cc:5521  | 141             | 77             |
| TABLE         | sysbench           | sbtest2          | NULL        | 140087471957616       | SHARED_NO_READ_WRITE | TRANSACTION   | GRANTED     | sql_parse.cc:6295 | 141             | 77             |
| BACKUP TABLES | NULL               | NULL             | NULL        | 140087472077120       | INTENTION_EXCLUSIVE  | STATEMENT     | GRANTED     | lock.cc:1259      | 141             | 77             |
| TABLESPACE    | NULL               | sysbench/sbtest2 | NULL        | 140087471954800       | INTENTION_EXCLUSIVE  | TRANSACTION   | GRANTED     | lock.cc:812       | 141             | 77             |
| TABLE         | sysbench           | sbtest2          | NULL        | 140087673437920       | SHARED_READ          | TRANSACTION   | PENDING     | sql_parse.cc:6295 | 142             | 77             |
| TABLE         | performance_schema | metadata_locks   | NULL        | 140088117153152       | SHARED_READ          | TRANSACTION   | GRANTED     | sql_parse.cc:6295 | 143             | 970            |
| TABLE         | sysbench           | sbtest1          | NULL        | 140087543861792       | SHARED_WRITE         | TRANSACTION   | GRANTED     | sql_parse.cc:6295 | 132             | 156            |
+---------------+--------------------+------------------+-------------+-----------------------+----------------------+---------------+-------------+-------------------+-----------------+----------------+
From here we know that thread id 141 is holding the lock “SHARED_NO_READ_WRITE” on sbtest2, so we can take corrective action, i.e. either commit the session or kill it once we confirm it is no longer required. To kill it, we need to find the respective processlist_id from the threads table, as sketched below.
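A sketch of that lookup, mapping the performance schema thread id to its processlist id (columns as found in performance_schema.threads):

select THREAD_ID, PROCESSLIST_ID, PROCESSLIST_USER, PROCESSLIST_HOST
from performance_schema.threads where THREAD_ID = 141;

The PROCESSLIST_ID it returns (63 in the session killed below) is the value to pass to KILL.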
mysql> kill 63;
Query OK, 0 rows affected (0.00 sec)
mysql> select * from table_handles where object_name='sbtest2' and OWNER_THREAD_ID is not null;
Empty set (0.00 sec)
Example Three
In some situations, we need to find out where our MySQL server is spending most of its time waiting, so that we can take further steps:
mysql> select * from events_waits_history order by TIMER_WAIT desc limit 2\G
*************************** 1. row ***************************
THREAD_ID: 88
EVENT_ID: 124481038
END_EVENT_ID: 124481038
EVENT_NAME: wait/io/file/sql/binlog
SOURCE: mf_iocache.cc:1694
TIMER_START: 356793339225677600
TIMER_END: 420519408945931200
TIMER_WAIT: 63726069720253600
SPINS: NULL
OBJECT_SCHEMA: NULL
OBJECT_NAME: /var/lib/mysql/mysqld-bin.000009
INDEX_NAME: NULL
OBJECT_TYPE: FILE
OBJECT_INSTANCE_BEGIN: 140092364472192
NESTING_EVENT_ID: 124481033
NESTING_EVENT_TYPE: STATEMENT
OPERATION: write
NUMBER_OF_BYTES: 683
FLAGS: NULL
*************************** 2. row ***************************
THREAD_ID: 142
EVENT_ID: 77
END_EVENT_ID: 77
EVENT_NAME: wait/lock/metadata/sql/mdl
SOURCE: mdl.cc:3443
TIMER_START: 424714091048155200
TIMER_END: 426449252955162400
TIMER_WAIT: 1735161907007200
SPINS: NULL
OBJECT_SCHEMA: sysbench
OBJECT_NAME: sbtest2
INDEX_NAME: NULL
OBJECT_TYPE: TABLE
OBJECT_INSTANCE_BEGIN: 140087673437920
NESTING_EVENT_ID: 76
NESTING_EVENT_TYPE: STATEMENT
OPERATION: metadata lock
NUMBER_OF_BYTES: NULL
FLAGS: NULL
2 rows in set (0.00 sec)
In the above example, the binlog file has waited the longest (timer_wait, in picoseconds) to perform I/O operations on mysqld-bin.000009. This may be for several reasons; for example, the storage is full. The next record shows the details of example two, explained previously.
To make monitoring these instruments more convenient, Percona Monitoring and Management (PMM) plays an important role: almost all of the instruments can be configured there, and instead of querying tables you can read the graphs on its dashboards. To get familiar, check out the PMM demo.
Obviously, knowing about the performance schema helps a lot, but enabling everything incurs additional cost and impacts performance. Hence, in many cases the Percona Toolkit is helpful without impacting DB performance; for example, pt-index-usage, pt-online-schema-change, and pt-query-digest.
The performance schema is a great help while troubleshooting the behavior of your MySQL server; you just need to find the instrument you require. Should you still be struggling with performance, please don't hesitate to reach out to us, and we will be more than happy to help you.
Sometimes when you try to update your system or install new software, you may find that it takes way too long. In such situations, speed testing your internet can help determine if the issue lies on your end or is a server-side issue.
Let’s learn how you can easily speed-test your internet from the Linux terminal.
Speed testing, as the name hints, is the process of testing the speed of your internet connection. Your computer sends a few packets to a remote server, and the number of packets sent per second and each transfer's latency are then benchmarked.
Speed testing your internet tells you whether your ISP provides the internet speed promised in your subscription. It can also sometimes be useful in troubleshooting networking problems in applications, as speed testing tells you if a certain app is having connection issues or your internet connection is running slow.
speedtest.net by Ookla is a popular internet speed testing website. You probably have used it every time you needed to test your internet.
Did you know that it has a CLI counterpart that does everything the website can do, from the comfort of your Linux terminal? Well, now you do. Testing internet speed from the Linux terminal is a quick and easy process that can be done using a few simple commands.
As a preliminary step, update your system using the package manager on your distro.
On Ubuntu/Debian derivatives, run:
sudo apt update && sudo apt upgrade
On Arch-based systems, run:
sudo pacman -Syu
On Fedora, CentOS, and RHEL, issue the following command:
sudo dnf update
Now that your system has been updated, proceed with the installation of the speedtest-cli package using the package manager on your distribution.
On Ubuntu/Debian derivatives, type in:
sudo apt install speedtest-cli
On Arch-based systems, run:
sudo pacman -S speedtest-cli
To install Speedtest CLI on Fedora, CentOS, and RHEL, issue the following command:
sudo dnf install speedtest-cli
Now Speedtest CLI has been installed on your system. To test your internet speed, simply type in speedtest-cli and hit Enter.
The tool should automatically find the optimal server for speed testing and return desired results, including your internet speed in megabits per second (Mbps). Along with basic internet speed tracking, Speedtest CLI offers a few extra options worth checking out.
If you want to test your internet speed against a specific server, use the --server flag followed by the server ID (you can list nearby server IDs with speedtest-cli --list). Here's an example, with a placeholder ID:
speedtest-cli --server <server-id>
By default, results are reported in bits; to display them in bytes instead, use the --bytes flag. For example:
speedtest-cli --bytes
To save the speed test results to a file, redirect the command's output; the tool has no dedicated output flag, but the --simple flag keeps the report terse. Here's how the command could look:
speedtest-cli --simple > results.txt
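For scripting, the tool can also emit machine-readable output. A minimal sketch, assuming jq is installed for parsing the JSON report:

# Run a test, save the JSON report, then pull out the headline numbers
speedtest-cli --json > result.json
jq '{ping: .ping, download: .download, upload: .upload}' result.json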
To get a comprehensive guide to all the features of the speedtest-cli tool, read its manual page with man speedtest-cli. Or, check out a web-based alternative to the man command.
Speed testing your internet helps diagnose network issues and track network performance in general. With Speedtest CLI, you can easily test your internet directly from the terminal without opening a browser. This is quite useful when working with headless servers and other command-line-only systems.
If you need to install a graphical user interface on your server, you can do that too.
A new study with mice finds that proteins made by stem cells that regenerate the cornea may be new targets for treating and preventing injuries.
People with a condition known as dry eye disease are more likely than those with healthy eyes to suffer injuries to their corneas.
Dry eye disease occurs when the eye can’t provide adequate lubrication with natural tears. People with the common disorder use various types of drops to replace missing natural tears and keep the eyes lubricated, but when eyes are dry, the cornea is more susceptible to injury.
“We have drugs, but they only work well in about 10% to 15% of patients,” says senior investigator Rajendra S. Apte, professor in the department of ophthalmology and visual sciences at Washington University in St. Louis.
“In this study involving genes that are key to eye health, we identified potential targets for treatment that appear different in dry eyes than in healthy eyes.
“Tens of millions of people around the world—with an estimated 15 million in the United States alone—endure eye pain and blurred vision as a result of complications and injury associated with dry eye disease, and by targeting these proteins, we may be able to more successfully treat or even prevent those injuries.”
For the study in the Proceedings of the National Academy of Sciences, the researchers analyzed genes expressed by the cornea in several mouse models—not only of dry eye disease, but also of diabetes and other conditions. They found that in mice with dry eye disease, the cornea activated expression of the gene SPARC. They also found that higher levels of SPARC protein were associated with better healing.
“We conducted single-cell RNA sequencing to identify genes important to maintaining the health of the cornea, and we believe that a few of them, particularly SPARC, may provide potential therapeutic targets for treating dry eye disease and corneal injury,” says first author Joseph B. Lin, an MD/PhD student in Apte’s lab.
“These stem cells are important and resilient and a key reason corneal transplantation works so well,” Apte explains. “If the proteins we’ve identified don’t pan out as therapies to activate these cells in people with dry eye syndrome, we may even be able to transplant engineered limbal stem cells to prevent corneal injury in patients with dry eyes.”
The National Eye Institute, the National Institute of Diabetes and Digestive and Kidney Diseases, and the National Institute of General Medical Sciences of the National Institutes of Health supported the work. Additional funding came from the Jeffrey T. Fort Innovation Fund, a Centene Corp. contract for the Washington University-Centene ARCH Personalized Medicine Initiative, and Research to Prevent Blindness.
Source: Washington University in St. Louis