The Ultimate Guide to Database Corruption: Part 2 – B-Tree Index Corruption


This blog continues my previous post, The Ultimate Guide to Database Corruption: Part 1 – An Overview, which covered the basics of corruption. If you have not already read it, I encourage you to do so before diving into this one.

Introduction

This blog focuses on B-tree indexes and the various corruption-related issues that occur with them. Before we can discuss index corruption, we need to understand how PostgreSQL implements the B-tree index. Once we have a proper understanding of the index structure and how B-tree indexes are accessed, it becomes easy to see how corruption affects data retrieval and what methods exist to resolve it.

Some case studies will also be covered in this blog.

B-tree index structure

In PostgreSQL, a B-tree index structure is implemented using Lehman and Yao’s high-concurrency B-tree algorithm. Logically, it comprises four types of pages organized in a hierarchical structure.

  • The meta page
  • The root page
  • Internal pages
  • Leaf pages

Meta page:

This is the first page of an index, also called page zero. It contains metadata about the index, such as the index type, and the location of the root page can be obtained from it. If this page is corrupted or inaccessible, the index cannot be accessed at all.
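If the pageinspect contrib extension is installed, you can look at the meta page yourself. This is a minimal sketch against the test_tbl_pkey index used later in this post; the exact column list returned by bt_metap() varies between PostgreSQL versions.

-- Sketch: inspect the meta page (block 0) of a B-tree index with pageinspect
CREATE EXTENSION IF NOT EXISTS pageinspect;

-- "root" is the block number of the root page; "level" is the height of the tree
SELECT magic, version, root, level FROM bt_metap('test_tbl_pkey');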

Root pages and internal pages:

The root page is the topmost page of the tree and contains links to the pages below it, called internal pages and leaf pages. In terms of storage, internal pages are no different from the root page; they also store pointers to other internal or leaf pages. The only difference is that every index has exactly one root page, while it may have many internal pages.

These pages do not contain any information to access table records.

Leaf pages:

These pages lie at the lowest level of the tree and have no children. They contain the actual information needed to reach table data: each leaf tuple stores an indexed value together with the CTID of the table row where that value lives.

A typical structure of a B-tree index is as below.

B-tree index

As the diagram above shows, the root page and internal pages hold tuples that link down to internal pages or leaf pages. Each linked page contains values greater than or equal to the key of the tuple that points to it and lower than the next higher key on the parent page. The first tuple on every root and internal page has a blank key; it points to the page containing all values lower than its immediate right-hand neighbor.

For example, in the image above, 

  • The root page has three tuples (blank, 40, 80), and the “internal page 1” is associated with the tuple whose value is blank, and the next higher value in the root page is 40. So here, the “internal page 1” may contain values less than 40. 
  • While the “internal page 2” contains values greater than or equal to 40 and less than 80. 

Case studies

Index corruption produces different results depending on where the damage lies. Corruption may exist in any page (root, internal, or leaf), and careful study shows that it does not always reveal its existence; on occasion it misguides users as well. If the page header or page format is corrupt, the query throws an error as soon as it is executed. But when the actual data inside a page is corrupted while the format stays intact, PostgreSQL cannot detect it as corrupted and simply returns some results.

In this blog, I have deliberately chosen an index without any internal pages, so that readers may understand that internal pages may not necessarily be present in every B-tree index. The index used here for example is a primary key index.

This section deals with cases where queries return incorrect results due to corruption. Here, we delve into data corruption with leaf pages and internal pages.

Case 1 –  Data gets corrupted and becomes unsearchable

This is a classic case of corruption: Postgres tries to find a record but cannot discover it, because the value no longer belongs under its current parent node. In this case, two adjacent bits in one of the leaf pages get exchanged with each other. Details are below.

Here, we randomly pick a number from the table and prepare the case around it. The record (id = 612) described in the snapshot below will be our target for describing the corruption.

We have an index on the id column of corruption_test, and as described below, the underlined bits (10th and 11th) get exchanged in the index. Due to this, the value stored in the index becomes 1124 instead of 612; the value in the associated table, however, remains the same.

Table name: – corruption_test

Index name: – test_tbl_pkey

Corruption in page:- Leaf page 2

Associated CTID in table:- (101,6)

Actual value in an index tuple:- 00000010 01100100 (612)

Value in an index tuple after corruption:- 00000100 01100100 (1124)

The below image delineates what actually transpired in the test_tbl_pkey index.

test_tbl_pkey index.

Now, we will see how queries respond when we request to retrieve the data for values associated with this case.

Querying a table using the pristine number:

As described above, the actual (non-corrupted) value is 612. Let’s see what result we receive by querying with the “id = 612” predicate.

Now, what if we search the table directly using the previously extracted CTID?
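The original snapshots are not reproduced here, but the two lookups are roughly the ones below, using the table, value, and CTID listed above; the exact output depends on your data.

-- Lookup by value goes through the corrupted index and finds nothing
SELECT * FROM corruption_test WHERE id = 612;
-- (0 rows)

-- Lookup by the CTID recorded earlier bypasses the index and returns the row
SELECT * FROM corruption_test WHERE ctid = '(101,6)';
-- returns the row with id = 612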

This is enough to take you by surprise: the record actually exists in the table, yet the query fails to retrieve it when we look it up by its value.

This happens because 612 was replaced by 1124 in the corrupted index. The search performs an index scan to obtain the record, the index is unable to locate the desired value, and hence no data is returned. The image below describes this situation.

Querying a table using a corrupted number:

We have established that 612 no longer exists in the index, so it cannot be retrieved through it. But we know that 1124 does exist in the index, so what happens when we query for 1124?

Let us dive a little deeper to understand why no record is returned even when querying the corrupted value.

According to a valid index structure, 1124 ought to be in Leaf page 3, but it cannot be found there because it exists in neither that page nor the table. Leaf page 2 does contain the (corrupted) entry for 1124, but PostgreSQL will never search Leaf page 2 for that value, because the tree structure says 1124 cannot logically be on that page; hence the value is never found.

The image below narrates the situation.

Case 2 –  Incorrect data pointers (CTID) stored against the value

In this case, PostgreSQL finds a record through the index, but it shows the wrong one. This happens because a false CTID is stored in the index, so the query returns a completely different table row.

Here, a single bit of the CTID gets flipped, which sets the corruption in motion. We randomly pick a record (id = 245) from the table as the subject of the test case. The snapshot below shows the record details.

Table name: – corruption_test

Index name: – test_tbl_pkey

Corruption in page:- Leaf page 1

Associated CTID in table:- (40,5)

Value at the CTID:- 245

Actual CTID in an index tuple :- (40,5) – (00101000,00000101)

CTID in an index tuple after corruption:- (56,5) – (00111000,00000101)

As described above, due to the change of the 5th bit, the index stores a different CTID. The case is described in the image below.

Now, we will observe how different queries behave in this kind of situation.

Querying a table using an in-doubt number:

As described above, the value in doubt here is 245. Let’s see what result we receive by querying using the “id = 245” predicate.

This is shocking and perplexing, as the data returned is completely different from what we expected. Its implications for day-to-day business need no explanation.

In the index, we can observe that the CTID stored against 245 points to a different table row, so the query returns a different record. The image below describes what actually happened internally.

Querying a table by selecting id only:

If we select only the id column in a query, it shows the correct result.

The correct result comes back because the query performs an “index-only scan”: the index on the id column already contains every column the query needs, so it is not required to visit the table data at all.
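You can confirm which access path a query takes with EXPLAIN. A sketch, assuming the same corruption_test table; the plans shown in the comments are what you would typically expect, and the exact output will differ on your system.

-- Selecting only the indexed column can be answered from the index alone
EXPLAIN SELECT id FROM corruption_test WHERE id = 245;
-- Index Only Scan using test_tbl_pkey on corruption_test ...

-- Selecting other columns forces a heap visit through the (corrupted) CTID
EXPLAIN SELECT * FROM corruption_test WHERE id = 245;
-- Index Scan using test_tbl_pkey on corruption_test ...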

Case 3 –  Incorrect index format

Every database object has a pre-defined format. PostgreSQL (or any other database) is programmed to read and write in a specific format that is just a sequence of characters (or bytes). If one or more characters get changed, the object becomes unreadable. In such a corruption case, the query instantly returns an error, and the error texts are not the same every time.

Let’s look into various cases of errors.

Corrupted meta page:

As described above, the meta page is the core page of any index; it stores the index metadata. So, if any format-related corruption exists in the meta page, the index becomes unrecognizable, and queries that involve it fail with an error instead of returning results.

The below snapshot shows the result when querying a particular record.

The below snapshot shows the result when the whole table is queried.

Here is the catch: it is understandable not to receive data when a query performs an index scan. But it is perturbing when a query fails because of index corruption even though it is not expected to use the index at all. This happens because the planner still reads the index while preparing the query plan.

Corrupted non-meta page:

If format-related corruption exists in a non-meta page (root, internal, or leaf), it returns a different error. However, it does not affect queries that perform sequential scans.

Causes

After reading the sections above, it should be clear that corruption is nothing but inappropriate bytes stored in database files, which leads to anomalies in the data. The question is how it occurs. Although many events can leave corruption behind, it is difficult to create a reproducible test case because there is no certainty about what actually caused it; the only option left is to speculate. The majority of issues are caused by hardware failure, and the following are the most probable reasons.

  • Faulty RAID disks or RAID controllers
  • Defective disk or RAM
  • Disks without power failure protection
  • Overloaded CPU
  • PostgreSQL/OS bug

Additionally, there are some problems caused by users. Some of them are as below.

  • Killing a connection with signal 9
  • Abruptly stopping PostgreSQL
  • Copying file for backup without pg_start_backup
  • Executing pg_resetwal

Corruption is not a frequent affair; however, it becomes routine in case of faulty hardware or a bug.

Detection

As already mentioned in the case studies section, it is difficult (albeit with exceptions) to detect corruption when the data inside an index is corrupted, because the database has no way to know that the stored values are wrong; the format, however, can be verified.

When a B-tree index’s format is corrupted, we can use functions from the amcheck and pageinspect extensions to verify it. Here is an example of verifying an index page using the bt_page_items function of the pageinspect extension.

We can iterate through all the pages of an index and list out which pages are corrupted. However, this is of limited use, since we cannot change the data inside an index directly; the only remedy is to rebuild it.
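The snapshot is not shown here, but the checks look roughly like this; bt_index_check() ships with amcheck and bt_page_items() with pageinspect, and the available columns vary slightly by PostgreSQL version.

CREATE EXTENSION IF NOT EXISTS amcheck;
CREATE EXTENSION IF NOT EXISTS pageinspect;

-- Structural verification of the whole index; raises an error if a page is broken
SELECT bt_index_check('test_tbl_pkey');

-- Inspect the tuples of one index page (block 1 here); garbled data hints at corruption
SELECT itemoffset, ctid, data FROM bt_page_items('test_tbl_pkey', 1);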

Repairing

Removing corruption from a B-tree index does not require a thorough analysis; it is just a matter of re-creating the index. An index is only a subset of table data maintained to enable quicker retrieval, so as long as the table data itself is intact, we can simply rebuild the index using any of the following options.

  1. Drop and recreate an index
  2. Use reindex command
  3. Rebuild using pg_repack utility

Of these, pg_repack is usually the most viable option because it builds the new index alongside the old one, so running operations are not affected. For more information, kindly visit the pg_repack blog.
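For reference, the three options look roughly like the sketch below. REINDEX ... CONCURRENTLY requires PostgreSQL 12 or later, and the pg_repack command assumes the utility is installed for your cluster; adjust names to your own table, index, and database.

-- Option 1: drop and recreate. For an ordinary index:
--   DROP INDEX some_index;  CREATE INDEX some_index ON some_table (some_column);
-- For a primary-key index such as test_tbl_pkey, drop and re-add the constraint instead:
ALTER TABLE corruption_test DROP CONSTRAINT test_tbl_pkey;
ALTER TABLE corruption_test ADD CONSTRAINT test_tbl_pkey PRIMARY KEY (id);

-- Option 2: rebuild in place; CONCURRENTLY (PostgreSQL 12+) avoids blocking writes
REINDEX INDEX CONCURRENTLY test_tbl_pkey;

-- Option 3: rebuild only the indexes of the table with pg_repack, from the shell:
--   pg_repack --only-indexes --table corruption_test -d your_database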

Anticipation

After reviewing these scenarios, we can safely say that corruption is scary: it can spook us with unwanted and unexpected results at any time. Data is a valuable asset, and its business impact needs no elaboration. So how about nipping the problem in the bud?

Yes, it is possible. We can have corruption recorded by the database using the data checksums feature. While performing initdb, we can use the -k (or --data-checksums) option to enable checksums for the cluster; the read-only data_checksums parameter then reports that they are enabled. Every detected checksum failure is recorded in the pg_stat_database view, so once we know where corruption exists, we can take the necessary action before it propagates to users.
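A minimal sketch of what this looks like in practice; the checksum counters in pg_stat_database and the pg_checksums tool are available from PostgreSQL 12 onwards.

-- Checksums are enabled at cluster creation time (shell):
--   initdb --data-checksums -D /path/to/data
-- An existing cluster can be converted while the server is stopped:
--   pg_checksums --enable -D /path/to/data

-- Confirm that checksums are on (read-only parameter)
SHOW data_checksums;

-- Watch for detected corruption per database
SELECT datname, checksum_failures, checksum_last_failure
FROM pg_stat_database
WHERE checksum_failures > 0;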

Summary

The B-tree index structure is designed for fast data access. However, when even a few bits change, an index can become unreadable or start returning false data, which has a direct impact on day-to-day business. Unequivocally, this is a horrible situation. Fortunately, certain kinds of corruption can be detected, and it is recommended to take preemptive measures to tackle such situations.

Please feel free to post your queries or suggestions in the comments section.

Percona Database Performance Blog

John Ludhi/nbshare.io: Pandas Datareader To Download Stocks Data From Google And Yahoo Finance

Pandas Datareader To Download Stocks Data From Google And Yahoo Finance

The pandas-datareader package can be used to access data from multiple sources such as Yahoo and Google Finance.

To install the package, use pip

In [ ]:
!pip install pandas_datareader

Use DataReader to access data from yahoo finance

In [ ]:
import numpy as np
import pandas as pd

from pandas_datareader import data as wb
aapl = wb.DataReader('AAPL', data_source='yahoo', start='1995-1-1')

At the time of writing this notebook, there is a bug in DataReader because of which you might run into the following error:

TypeError: string indices must be integers

How To Fix DataReader TypeError: string indices must be integers

Use the following snippet to work around the above error.

In [9]:
import numpy as np
import pandas as pd
import yfinance as yf
yf.pdr_override()
from pandas_datareader import data as wb
In [11]:
aapl = wb.DataReader('AAPL', data_source='yahoo', start='1995-1-1')
[*********************100%***********************]  1 of 1 completed
In [15]:
aapl
Out[15]:
Date         Open        High        Low         Close       Adj Close   Volume
1995-01-03   0.347098    0.347098    0.338170    0.342634    0.288771    103868800
1995-01-04   0.344866    0.353795    0.344866    0.351563    0.296296    158681600
1995-01-05   0.350446    0.351563    0.345982    0.347098    0.292533    73640000
1995-01-06   0.371652    0.385045    0.367188    0.375000    0.316049    1076622400
1995-01-09   0.371652    0.373884    0.366071    0.367885    0.310052    274086400
...          ...         ...         ...         ...         ...         ...
2023-01-09   130.470001  133.410004  129.889999  130.149994  130.149994  70790800
2023-01-10   130.259995  131.259995  128.119995  130.729996  130.729996  63896200
2023-01-11   131.250000  133.509995  130.460007  133.490005  133.490005  69458900
2023-01-12   133.880005  134.259995  131.440002  133.410004  133.410004  71379600
2023-01-13   132.029999  134.919998  131.660004  134.759995  134.759995  57758000

7059 rows × 6 columns

Use DataReader to access data from google finance

In [13]:
googl = wb.DataReader('GOOGL', data_source='googl', start='1995-1-1')
[*********************100%***********************]  1 of 1 completed
In [14]:
googl
Out[14]:
Date         Open        High        Low         Close       Adj Close   Volume
2004-08-19   2.502503    2.604104    2.401401    2.511011    2.511011    893181924
2004-08-20   2.527778    2.729730    2.515015    2.710460    2.710460    456686856
2004-08-23   2.771522    2.839840    2.728979    2.737738    2.737738    365122512
2004-08-24   2.783784    2.792793    2.591842    2.624374    2.624374    304946748
2004-08-25   2.626627    2.702703    2.599600    2.652653    2.652653    183772044
...          ...         ...         ...         ...         ...         ...
2023-01-09   88.360001   90.050003   87.860001   88.019997   88.019997   29003900
2023-01-10   85.980003   88.669998   85.830002   88.419998   88.419998   30467800
2023-01-11   89.180000   91.599998   89.010002   91.519997   91.519997   26862000
2023-01-12   91.480003   91.870003   89.750000   91.129997   91.129997   30258100
2023-01-13   90.849998   92.190002   90.129997   92.120003   92.120003   26309900

4634 rows × 6 columns

Note – The pandas_datareader library does not have a built-in function for retrieving options data. However, there are other libraries and APIs you can use to obtain options data, such as yfinance or alpha_vantage for Yahoo Finance and Alpha Vantage, iex for IEX Cloud API.
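For example, yfinance can pull an options chain directly. A small sketch; the ticker and the column layout of the returned DataFrames are only illustrative.

In [ ]:
import yfinance as yf

aapl = yf.Ticker("AAPL")
expirations = aapl.options              # tuple of available expiry dates
chain = aapl.option_chain(expirations[0])
chain.calls.head()                      # calls for the nearest expiry; chain.puts holds the puts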

Check out the following tutorial if you want to use yfinance:
How To Use yfinance Package

Planet Python

How to Install a Minecraft Bedrock Server on Raspberry Pi


Playing Minecraft with friends and family requires either putting up with split screen mode, or using multiple devices. For the best results, these should connect to a Minecraft server.

But paying for a Minecraft server is expensive. Why not build your own? It is now possible to run Minecraft Bedrock Server on a Raspberry Pi.

Why Use Minecraft Bedrock Server?

Over the years, Minecraft has evolved beyond the original Java game. As of 2016, Minecraft Bedrock Edition has been the main version, released on PC, consoles, and mobile.

While this brings new features, improved graphics, and better stability to the game, Minecraft Bedrock Edition is not compatible with the old desktop and mobile Java version. As such, if you had installed Minecraft server on a Raspberry Pi, you would only be able to connect from the corresponding Java version (whether on a PC or another Pi).

As there is now a (Java-based) Minecraft Bedrock-compatible server for Raspberry Pi, you can use it to host games played on any device running Bedrock. This gives you the advantage of being fully in control of the server, from setting invites and assigning access rights to installing mods and backing up the world.

Which Raspberry Pi Will Run a Minecraft Bedrock Server?

For this project you have a choice of the Raspberry Pi 3 or Raspberry Pi 4. Naturally the Pi 4 with its 2GB, 4GB, and 8GB variants is the best option. However, you should be able to run Minecraft Bedrock Edition server on a Raspberry Pi 3.

To test this project, I used a Raspberry Pi 3 B+. This device has a 1.4GHz 64-bit quad-core processor and 1GB of RAM. Initial setup was over Wi-Fi, using SSH, but a better response and lower latency can be enjoyed with an Ethernet connection to your router.

Anything lower than a Raspberry Pi 3 should be avoided.

What You Need for a Minecraft Bedrock Server

To host the server software, you will need an operating system. For optimum performance, opt for a lightweight OS; Raspberry Pi OS Lite is probably the best option here.

See our guide to installing an operating system on the Raspberry Pi before proceeding. It is recommended that you configure the installation to automatically connect to your Wi-Fi network (if you’re using one), and have SSH enabled on the Raspberry Pi. If you’re not using SSH, you’ll need a keyboard and display set up and connected.

You will also need to install:

  • Git
  • Java SDK
  • Latest Bedrock Edition-compatible Java build of Minecraft

Follow the steps below to install these and configure your Minecraft Bedrock server.

Configure Raspberry Pi OS for Minecraft Bedrock Edition Server

Before you can install the server software, you will need to configure the Raspberry Pi. These steps assume you have already installed Raspberry Pi OS.

Start by ensuring the operating system is up-to-date:

 sudo apt update && sudo apt upgrade 

Next, open the Raspberry Pi configuration tool, raspi-config:

 sudo raspi-config 

Use the arrow keys to select System Options > GPU Memory and set the GPU memory to 16. This ensures the majority of system resources are dedicated to the server. Hit Tab to select OK.

If you haven’t already enabled SSH at this point, do so by selecting Interfacing Options > SSH, press Tab to select Yes, and press Enter to confirm.

Next, hit Tab to select Finish, then Enter to reboot the Raspberry Pi.

Set up Minecraft Bedrock Server on Your Raspberry Pi

With the Raspberry Pi restarted, install Git

 sudo apt install git 

This software allows you to clone a GitHub repository to your computer, and is required for installing Minecraft Bedrock server.

You can now install Java.

 sudo apt install default-jdk 

This installs the default (current) version of Java. You can check which version by entering

 java -version 

(Note that to install a specific Java release, use a specific version name, such as sudo apt install openjdk-8-jdk.)

At the time of writing, the default-jdk version was 11.0.16.

Install Minecraft Bedrock Server on Raspberry Pi

You’re now ready to install the server. Begin by entering

 git clone https: 

Wait while this completes, then switch to the Nukkit directory

 cd Nukkit 

Here, update the submodule:

 git submodule update --init 

That will take a while to complete. When done, change permissions on mvnw

 chmod +x mvnw 

Finally:

 ./mvnw clean package 

This final command is the longest part of the process. It’s a good opportunity to boot Minecraft Bedrock Edition on your PC, mobile, or console in readiness.

Run Minecraft Bedrock Server on Raspberry Pi

When ready, change directory:

 cd target 

Here, launch the server software:

 java -jar nukkit-1.0-SNAPSHOT.jar 

You’ll initially be instructed to enter your preferred server language.

Once that is done, Nukkit starts, the server properties are imported, and the game environment is launched. This begins with the default gamemode set to Survival, but you can switch that later.

Once everything appears to be running, enter

 status 

This will display various facts such as memory use, uptime, available memory, load, and number of players.

You can also use the help command (or hit ?) to check what instructions can be used to administer the server. These can either be input directly into the Pi with a keyboard, via SSH, or from the Minecraft game’s chat console (remember to precede each command with “/” in the console).

Connect to Minecraft Bedrock Server from Another Device

With everything set up, you’re ready to connect to your server. To do this

  1. Launch Minecraft Bedrock Edition on any other device
  2. Select Servers > Add Server
  3. Input the server’s Name and IP address (you should know this from using SSH)
  4. Tap Play to enter the server right away, or Save
  5. On subsequent connections, the server will be listed under Additional Servers; simply select it, then Join Server

A moment later, you should be in the Minecraft server world. On the server side, the new connection will be recorded in the console log.

Create a Minecraft Bedrock Edition Server With Your Raspberry Pi

While a few steps are required to enable the Bedrock Edition server on Raspberry Pi, the end results are good. Our test device, you will recall, was a Raspberry Pi 3B+, more than adequate for 2-5 players. A Raspberry Pi 4 will probably perform better for a greater number of players.

Using a Raspberry Pi is just one of many ways you can create a Minecraft server for free.

MUO – Feed

Laravel : import and export a collection in Excel with Fast Excel


A tutorial on how to use the rap2hpoutre/fast-excel package to export data from a collection or model in Excel xlsx, ods or csv format and import the data from a spreadsheet into a Laravel project.

The French version of this tutorial : Laravel : importer et exporter une collection en Excel avec Fast Excel

Introduction

Fast Excel, or fast-excel, is a Laravel package that allows you to read and write spreadsheet files (CSV, XLSX, and ODS). It offers the following features:

  • Export data from collections or models to .xlsx, .csv or .ods files
  • Import data from an Excel .xlsx or .csv file as a collection
  • Import multiple spreadsheets
  • Export large data collections in record time

A collection (Illuminate\Support\Collection) in a Laravel project is a wrapper that provides methods to efficiently manipulate arrays of data.

This guide will show you how to install and use Fast Excel (rap2hpoutre/fast-excel) to perform the above operations.

At the time of writing this article, I am using version 9.42.2 of Laravel.

Install Fast Excel

To install the rap2hpoutre/fast-excel package in a Laravel project, run the following composer command:

composer require rap2hpoutre/fast-excel

This command will download fast-excel and its dependencies into the /vendor directory of your Laravel project.

The rap2hpoutre/fast-excel package uses the box/spout library to read and write spreadsheet files.

Using Fast Excel

Once fast-excel is downloaded in your project, you can initialize it directly in a controller and access the methods of the Rap2hpoutre\FastExcel\FastExcel class:

<?php

namespace App\Http\Controllers;

use Illuminate\Http\Request;
use Rap2hpoutre\FastExcel\FastExcel; // The FastExcel class

class FastExcelController extends Controller
{
    public function index () {

        $data = collect(); // A collection or a model
        $fastexcel = new FastExcel($data); // The FastExcel instance
        dd($fastexcel);

    }
}

The global helper

FastExcel also provides the global helper fastexcel() which allows direct access to its methods anywhere in your project:

$data = collect(); // A collection or a model
$fastexcel = fastexcel($data); // The FastExcel instance

The facade

If importing the Rap2hpoutre\FastExcel\FastExcel class or using the global helper fastexcel() does not suit you, you can also register the FastExcel facade in the $aliases array of your /config/app.php file:

'aliases' => Facade::defaultAliases()->merge([

    "FastExcel" => Rap2hpoutre\FastExcel\Facades\FastExcel::class

])->toArray()

Next, initialize Fast Excel by passing data to the method data($data) where $data represents a collection or model:

// A collection or a model
$data = User::first();

// The FastExcel instance
$fastexcel = \FastExcel::data($data);

Export to xlsx, csv or ods with Fast Excel

The export($file) method of FastExcel, where $file is the file name followed by the “.xlsx”, “.ods” or “.csv” extension, allows you to export the data of a collection or a model to the /public directory:

<?php

namespace App\Http\Controllers;
use Illuminate\Http\Request;
use App\Models\User;

class FastExcelController extends Controller
{
    public function index () {

        // A collection of "App\Models\User"
        $users = User::all();

        // Export to file "/public/users.xlsx"
        $path = (fastexcel($users))->export("users.xlsx");

        // Export to file "/public/users.csv"
        // $path = (fastexcel($users))->export("users.csv");

        // Export to file "/public/users.ods"
        // $path = (fastexcel($users))->export("users.ods");

    }
}

In this example, $path contains the absolute path of the users.xlsx file created. Example : C:\laragon\www\laravel-fastexcel\public\users.xlsx

Rearrange columns

If you want to select the columns to be exported, rearrange the data or apply processing to them, you can use a callback after the file name in the export() method:

// Callback "function($user) { ... }" in export()  
$path = (fastexcel($users))->export("users.xlsx", function ($user) {  
    return [
        "Full Name" => ucfirst($user['name']),
        "E-mail address" => $user['email']
    ];  
});

Download the .xlsx, .csv or .ods file

Instead of using the export($file) method to save the .xlsx, .csv or .ods files in the /public directory, you can use the download($file) method to start the download:

// Collection of "App\Models\User";
$users = User::select('id', 'name', 'email')->get();

// Download file "users.xlsx"
return fastexcel($users)->download('users.xlsx');

Export multiple collections to an Excel file

Fast Excel allows you to export multiple collections or models to different spreadsheets in an Excel workbook using the SheetCollection class.

Let’s take a look at an example that exports data from the User, Post and Product models to the file users-posts-products.xlsx :

<?php

namespace App\Http\Controllers;
use Illuminate\Http\Request;

// Importing SheetCollection
use Rap2hpoutre\FastExcel\SheetCollection; 

use App\Models\Post;
use App\Models\Product;
use App\Models\User;

class FastExcelController extends Controller
{
    public function index () {

        // Collection "App\Models\User";
        $users = User::select('id', 'name', 'email')->get();

        // Collection "App\Models\Post"
        $posts = Post::orderBy("created_at")->get();

        // Collection "App\Models\Product"
        $products = Product::select('id', "name", "description")->get();

        // Collection of spreadsheets (SheetCollection)
        $sheets = new SheetCollection([
            "Users" => $users,
            "Posts" => $posts,
            "Products" => $products
        ]);

        // Exporting spreadsheets to "/public/users-posts-products.xlsx"
        $path = (fastexcel($sheets))->export("users-posts-products.xlsx");
    }

}

If you open the file users-posts-products.xlsx in Microsoft Excel, you will find the spreadsheets “Users”, “Posts” and “Products”.

Export large collections

If you have a collection that exports a large amount of data, 1M+ rows for example, you can use a generator function to avoid the memory_limit problem:

use App\Models\Client;

// A generator function
function clients () {
    foreach (Client::cursor() as $client) {
        yield $client;
    }
};

// Export to "clients.xlsx" by passing the generator directly, so rows are streamed instead of loaded into memory
fastexcel(clients())->export("clients.xlsx");

Import an xlsx, csv or ods file with Fast Excel

You can use FastExcel’s import($file) method, where $file represents the path to an .xlsx or .csv file, to import the entries (rows) in $file into a collection (Illuminate\Support\Collection):

// Importing the file "/public/users.xlsx"
$data = fastexcel()->import("users.xlsx");

// $data contains a collection
dd($data);

Import into the database

FastExcel allows you to browse the lines of a file and insert them into the database using a callback after the file name in the import() method:

use App\Models\Client;

// Callback "function ($line) { ... }" in "import"
$data = fastexcel()->import("clients.xlsx", function ($line) {

    return Client::create([
        'name' => $line['name'],
        'email' => $line['email'],
        'phone' => $line['phone'],
        'address' => $line['address'],
    ]);

});

$line['name'] specifies the column named “name” in the clients.xlsx file.

Import multiple spreadsheets from an xlsx file

FastExcel’s importSheets($file) method imports spreadsheet entries (rows) from $file into a collection:

// Import of the file "/public/users-posts-products.xlsx"
$data = fastexcel()->importSheets("users-posts-products.xlsx");

// $data contains a collection of 3 arrays
dd($data);

To import a specific spreadsheet, you can specify its number or position in the workbook using the sheet($number) method:

// Import of the 2nd spreadsheet from the file "/public/users-posts-products.xlsx"
$data = fastexcel()->sheet(2)->import("users-posts-products.xlsx");

Conclusion

We have just seen how to use the Fast Excel package to export data from a collection or model to an Excel file in .xlsx, .csv or .ods format, and import data from a spreadsheet as a collection.

The Fast Excel documentation also shows how to apply styles (text color, font, background, …) to columns and rows of a spreadsheet.

Let’s summarize the fast-excel methods seen :

  • fastexcel($data) : the global FastExcel helper, initialized with the $data of a collection or a model
  • import($file) : import rows from an .xlsx, .csv or .ods $file into a collection
  • export($file) : export data from a collection or a model to an .xlsx, .csv or .ods $file
  • importSheets($file) : import all spreadsheets from $file
  • sheet($number) : select a specific spreadsheet $number to import
  • download($file) : start downloading the file $file

Be well! 😉

Laravel News Links

Create Reusable Query With Laravel Query Scope


One of the best features Laravel has is the Eloquent ORM. Eloquent is easy to use and can make our queries simpler and cleaner than the Query Builder, but sometimes a query gets quite long, and we may also need to reuse it.

Is it possible to create a reusable query in Laravel Eloquent ORM?

Laravel Query Scope

We can use a query scope to handle the situation above. Here I will explain Laravel local scopes.

For example, if we want to get only the posts that are currently marked as non-draft (published), we can use this query:

$publishedPosts = Post::where('is_draft', false)->get();

But I need to use this query in several places, not just one, and I don’t want to write it out every time because it is quite long. So let’s create a query scope for it.

Simple Scope

Create a method inside your model, move the query above into it, and name the method with the scope prefix, like so:

<?php
 
namespace App\Models;
 
use Illuminate\Database\Eloquent\Model;
 
class Post extends Model
{
    public function scopePublished($query)
    {
        return $query->where('is_draft', false);
    }
}

Now we have created the scope. However, we should not include the scope prefix when calling the method.

use App\Models\Post;
 
$publishedPosts = Post::published()->get();

Now it is more readable, shorter, and reusable. You can even chain scope calls; for example, if you also have a popular scope, you can chain it like so (a sketch of such a scope follows the example below):

use App\Models\Post;

$popularPublishedPosts = Post::published()->popular()->get();
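The popular scope is not defined in this article. As a sketch, it could look like the following, assuming a hypothetical view_count column on the posts table; adapt the condition to whatever "popular" means in your application.

class Post extends Model {

    // Hypothetical scope: treat posts with at least 1000 views as "popular"
    public function scopePopular($query)
    {
        return $query->where('view_count', '>=', 1000);
    }

}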


Dynamic Scope

We can also create a scope that accepts parameters like so

class User extends Model {

    public function scopeActive($query, $value)
    {
        return $query->where('is_active', $value);
    }

}

Now we can use the scope dynamically

// Get active users
$activeUsers = User::active(true)->get();

// Get inactive users
$inactiveUsers = User::active(false)->get();
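As a small variation on the scope above (not covered in the original article), you can give the parameter a default value so the most common case needs no argument at all:

class User extends Model {

    // Defaults to active users, so User::active()->get() also works
    public function scopeActive($query, $value = true)
    {
        return $query->where('is_active', $value);
    }

}

// Get active users without passing an argument
$activeUsers = User::active()->get();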

Conclusions

And that’s it; we have built our own local scopes and now know how to create and use them.

Further Reading

Query Scopes – Laravel Docs

Read Also

Create Global Query With Laravel Global Scope

Laravel News Links

Laravel Grapes


Table of Contents
  1. About Laravel Grapes
  2. Difference Between Regular Version And Pro Version
  3. Installation Steps
  4. Usage
  5. Translations
  6. Author
  7. License

About Laravel Grapes

Laravel Grapes is a library for the Laravel framework that offers a CMS drag-and-drop page builder for the frontend. It supports Laravel functionality and helps users change the entire frontend and its content in just a few clicks.

Laravel Grapes comes with a Pro version that will be available on CodeCanyon soon.


Difference Between Regular Version And Pro Version

Feature                            | Regular Version | Pro Version
Laravel CSRF                       | yes             | yes
Laravel Auth User Condition       | yes             | yes
Laravel Auth Dynamic Guard        | yes             | yes
Multilingual                       | yes             | yes
Dynamic Laravel Shortcode widgets  | 1               | unlimited
Dynamic Routes /{id}               | No              | yes

Installation Steps

composer require msa/laravel-grapes
php artisan vendor:publish --provider="MSA\LaravelGrapes\LaravelGrapesServiceProvider" --tag="*"

Go to config/lg.php

    <?php

    return [
        // routes configurations
        'builder_prefix' => 'hello', // prefix for builder

        'middleware' => null, // middleware for builder

        'frontend_prefix' => '', // prefix for frontend

        /* Define additional translation languages. */
        'languages' => [
            'ar',
            'es',
        ],
    ];
1) builder_prefix

By default, the builder comes with the route route('website.builder'), which resolves to your-domain.com/hello/front-end-builder.
You can change the builder prefix to hi, and the builder will then load under the route prefix hi instead of hello.

2) middleware

Assign any middleware you want to the builder for example auth:admin.

3) frontend_prefix

The frontend prefix is empty by default, which means any generated frontend page loads directly at the slug you created. If you need a prefix for your generated frontend pages, change this value to the prefix you want.

Now laravel grapes is working.

Navigate to builder route your-domain.com/builder_prefix/front-end-builder.


Usage

The Control Panel consists of the following sections:

1) Options Panel

2) Page Panel

3) View Panel

4) Customize Builder Style Sheet

1. Options Panel

The Options Panel consists of 11 buttons:

View Components

The View Components button shows grid lines for all components dropped on the canvas. This helps you select each component individually; for example, take a look at the screenshot below.

Preview

The Preview button shows the page without the panels, as in the screenshot below.

Full Screen

The Full Screen button hides all browser chrome and shows only the builder.

View Code

The View Code button shows you the HTML and CSS code of the page, as in the screenshot below.

Create New Page

When you press the Create New Page button on the top bar, a popup modal opens with a new page form. Fill in the page name and slug; if you want the page to become the home page, use / as the slug.

After submitting the form, you will receive a toast notification that the page has been created successfully. Select the new page through the Select Page input on the top bar to start modifying it.

Don’t forget to remove the default route in routes/web.php because it will conflict with the home page. You don’t need web.php for frontend routes, because Laravel Grapes comes with its own route file:

<?php

use Illuminate\Support\Facades\Route;

/*
|--------------------------------------------------------------------------
| Web Routes
|--------------------------------------------------------------------------
|
| Here is where you can register web routes for your application. These
| routes are loaded by the RouteServiceProvider within a group which
| contains the "web" middleware group. Now create something great!
|
*/

// Route::get('/', function () {
//     return view('welcome');
// });

Edit Code

The Edit Code button opens a popup code editor modal that holds the page code, including HTML and CSS.

You can edit the HTML and CSS code from the code editor popup; for editing styles, you will find the page styles inside the <style></style> tag.

Note: HTML and CSS are merged into one page in the code editor, but after submitting the code you can use View Code and you will find the styles and HTML separated; each generated page has its own Blade file and CSS file.

Component Manager

The Component Manager button opens a popup holding all custom components that have been saved for reuse on other pages, letting you edit a component’s name or delete it.

Page Manager

The Page Manager button opens a popup holding all pages and lets you edit a page’s name and slug.

Clear Canvas

The clear canvas button will remove all components from the canvas.

Save Component

Laravel Grapes lets you save any custom component for reuse on other pages; just select the component and click the Save Component button.

Save Changes

The Save Changes button updates the page content; if you then visit the page slug, you will find that the content has changed.

2. Page Panel

The Page Panel consists of two select inputs:

Select Page

The Select Page input lets you select the page you need to modify.

Select Device

The Select Device input lets you modify the page HTML and styles for different screens with the following sizes:

  • Desktop
  • Extra Large
  • Large
  • Tablet
  • Medium
  • Mobile Landscape
  • Small
  • Extra Small
  • Mobile Portrait

3. View Panel

The View Panel consists of four buttons:

Block Manager

The Block Manager comes with Bootstrap components:

  • Layout which holds

  • Components which holds

  • Typography which holds

  • Templates which holds

  • Saved which holds

Layer Manager

Another utility tool you might find useful when working with web elements is the Layer Manager. It is a tree overview of the structure’s nodes and makes them easier to manage.

Components Settings

Each component comes with its own settings that you can modify. For example, if you select a link element on the canvas and go to the component settings, you will find the following:

  • The attribute of href link
  • The attribute of target
  • The attribute of toggle
  • Show Laravel Auth User Email
  • Laravel Auth User option, for show element
  • Laravel Auth Guard option, for show element. (default is web)
  • The attribute of id
  • The attribute of title

Style Manager

The Style Manager is composed of sectors, which group different types of CSS properties. You can add, for instance, a Dimension sector for width and height, and a Typography sector for font-size, color, and more. It is up to you to decide how to organize the sectors.

  • Classes

  • General

  • Flex Options

  • Dimension Options

  • Typography Options

  • Decorations Options

  • Extra

4. Customize Builder Style Sheet

Go to public/css/laravel-grapes.css and customize the Laravel Grapes builder style sheet as you wish.

Translations

Each text component has a translation input trait for the languages you defined in config/lg.php. In the example below you will find the ar locale and the es locale.

RTL is supported for the Arabic language (ar).

Author

Mohamed Allam

MIT © Mohamed Allam

Laravel News Links

Laravel Ecommerce Tutorial: Part 1, Introduction


I recently built custom ecommerce sites for a couple of clients, and I decided to make a tutorial from my experience using Laravel to build those sites.

In this tutorial, we will build a fully functional Laravel ecommerce site for a mobile phone dealership called Appleplug.Store. Appleplug sells mobile phones and accessories, and they wanted to upgrade their current ecommerce site powered by WordPress to a custom solution. View their live website at appleplug.store

In this tutorial, we will take on this project and build the ecommerce site.

This is going to be an ongoing series, starting with this introduction. In this part of the series, we will do a basic project setup and install the required tools.

What you’ll learn

By the end of this series, you will have

  • Built a working Ecommerce website deployed in production

  • Learned to do Test Driven Development in Laravel

  • Understood Laravel beyond basic CRUD

  • Learned to use Laravel’s Background jobs

  • Learned to handle authorization

  • And some other cool stuff

Minimum requirements

In order to follow along with this tutorial, you will need:

A few things to note, we will be using

  • Bootstrap CSS for our styling; you are, however, welcome to use any CSS framework of your choice

  • Stisla Admin template

  • The Hotwired Stack (Turbo, Stimulus)

Basic setup

I’m assuming you already have Composer installed. Let’s start by creating a new Laravel project:

composer create-project laravel/laravel --prefer-dist ecommerce

or using the laravel binary

laravel new ecommerce

This will create a new project and install all the dependencies in the ecommerce directory.

Next, let us set up our database

sudo mysql

In the MySQL console, create a user and a database and grant rights on the database to our newly created user.

create database ecommerce;
create user laravel@localhost identified by 'secure password';
grant all on ecommerce.* to laravel@localhost;

After granting rights, open the project folder in your favorite text editor. I’m using JetBrains PhpStorm; if you’re interested in using PhpStorm too, check out Jeffrey Way’s video on Laracasts about how to set it up.

In your text editor, open the .env file and edit the DB_* entries to match the user and database we just created.
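For example, the relevant .env entries would look roughly like this, using the database, user, and password created in the MySQL console above:

DB_CONNECTION=mysql
DB_HOST=127.0.0.1
DB_PORT=3306
DB_DATABASE=ecommerce
DB_USERNAME=laravel
DB_PASSWORD="secure password"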

Next, open the terminal in the working directory and run our first migrations with

php artisan migrate

Other tools

Next, let’s install other tools we will be using throughout the development of this application.

First, this package allows us to use Turbo within our application.

composer require tonysm/turbo-laravel

After installing, execute the turbo:install Artisan command, which will add a couple JS dependencies to your package.json file.

Next, let’s install another package that lets us use Stimulus in Laravel:

composer require tonysm/stimulus-laravel

After installing, execute the stimulus:install Artisan command to add Stimulus as a dependency and set up the basic scaffolding.

Last, let’s install some blade helper functions to use with stimulus

composer require flixtechs-labs/turbo-laravel-helpers

Now that our basic setup is done, let’s install the dependencies by running yarn or npm install and then start the dev server with

php artisan serve

In the next blog post, we will begin to actually build our ecommerce site. We will set up user authentication and authorization.

Subscribe to the newsletter and get notified when I post the next tutorial

Laravel News Links

Dry eye changes how injured cornea heals itself


A new study with mice finds that proteins made by stem cells that regenerate the cornea may be new targets for treating and preventing injuries.

People with a condition known as dry eye disease are more likely than those with healthy eyes to suffer injuries to their corneas.

Dry eye disease occurs when the eye can’t provide adequate lubrication with natural tears. People with the common disorder use various types of drops to replace missing natural tears and keep the eyes lubricated, but when eyes are dry, the cornea is more susceptible to injury.

“We have drugs, but they only work well in about 10% to 15% of patients,” says senior investigator Rajendra S. Apte, professor in the department of ophthalmology and visual sciences at Washington University in St. Louis.

“In this study involving genes that are key to eye health, we identified potential targets for treatment that appear different in dry eyes than in healthy eyes.

“Tens of millions of people around the world—with an estimated 15 million in the United States alone—endure eye pain and blurred vision as a result of complications and injury associated with dry eye disease, and by targeting these proteins, we may be able to more successfully treat or even prevent those injuries.”

For the study in the Proceedings of the National Academy of Sciences, the researchers analyzed genes expressed by the cornea in several mouse models—not only of dry eye disease, but also of diabetes and other conditions. They found that in mice with dry eye disease, the cornea activated expression of the gene SPARC. They also found that higher levels of SPARC protein were associated with better healing.

“We conducted single-cell RNA sequencing to identify genes important to maintaining the health of the cornea, and we believe that a few of them, particularly SPARC, may provide potential therapeutic targets for treating dry eye disease and corneal injury,” says first author Joseph B. Lin, an MD/PhD student in Apte’s lab.

“These stem cells are important and resilient and a key reason corneal transplantation works so well,” Apte explains. “If the proteins we’ve identified don’t pan out as therapies to activate these cells in people with dry eye syndrome, we may even be able to transplant engineered limbal stem cells to prevent corneal injury in patients with dry eyes.”

The National Eye Institute, the National Institute of Diabetes and Digestive and Kidney Diseases, and the National Institute of General Medical Sciences of the National Institutes of Health supported the work. Additional funding came from the Jeffrey T. Fort Innovation Fund, a Centene Corp. contract for the Washington University-Centene ARCH Personalized Medicine Initiative, and Research to Prevent Blindness.

Source: Washington University in St. Louis

The post Dry eye changes how injured cornea heals itself appeared first on Futurity.

Futurity
