How We Brought a Dead MySQL InnoDB Cluster Back to Life

A war story: complete outage, GTID chaos, duplicate UUIDs, and the steps that finally worked

There’s a particular kind of dread that comes with staring at a database cluster where every node shows OFFLINE.

No reads. No writes. Just silence where your production data used to be.

That’s exactly where we found ourselves with a MySQL InnoDB Cluster — three nodes, all down, all stubbornly offline.

Planet for the MySQL Community

A better way to crawl websites with PHP

https://freek.dev/og-image/ce502835e5baaaa1251b6fb59c110536.jpeg

Our spatie/crawler package is one of the first ones I created. It allows you to crawl a website with PHP. It is used extensively in Oh Dear and our laravel-sitemap package.

Throughout the years, the API had accumulated some rough edges. With v9, we cleaned all of that up and added a bunch of features we’ve wanted for a long time.

Let me walk you through all of it!

Using the crawler

The simplest way to crawl a site is to pass a URL to Crawler::create() and attach a callback via onCrawled():

use Spatie\Crawler\Crawler;
use Spatie\Crawler\CrawlResponse;

Crawler::create('https://example.com')
    ->onCrawled(function (string $url, CrawlResponse $response) {
        echo "{$url}: {$response->status()}\n";
    })
    ->start();

The callable gets a CrawlResponse object. It has these methods:

$response->status();        // HTTP status code
$response->body();          // response body (cached after the first read)
$response->header('some-header');  // value of a specific header
$response->dom();           // DOM representation of the HTML
$response->isSuccessful();  // whether the status is a success status
$response->isRedirect();    // whether the response is a redirect
$response->foundOnUrl();    // the URL on which this link was found
$response->linkText();      // the text of the link pointing to this URL
$response->depth();         // crawl depth of this URL

The body is cached, so calling body() multiple times won’t re-read the stream. And if you still need the raw PSR-7 response for some reason, toPsrResponse() has you covered.

You can control how many URLs are fetched at the same time with concurrency(), and set a hard cap with limit():

Crawler::create('https://example.com')
    ->concurrency(5)
    ->limit(200) 
    ->onCrawled(function (string $url, CrawlResponse $response) {
        // process each crawled page here
    })
    ->start();

There are a couple of other on* callbacks you can use:

Crawler::create('https://example.com')
    ->onCrawled(function (string $url, CrawlResponse $response, CrawlProgress $progress) {
        echo "[{$progress->urlsProcessed}/{$progress->urlsFound}] {$url}\n";
    })
    ->onFailed(function (string $url, RequestException $e, CrawlProgress $progress) {
        echo "Failed: {$url}\n";
    })
    ->onFinished(function (FinishReason $reason, CrawlProgress $progress) {
        echo "Done: {$reason->name}\n";
    })
    ->start();

Every on* callback now receives a CrawlProgress object that tells you exactly where you are in the crawl:

$progress->urlsProcessed;  // number of URLs crawled so far
$progress->urlsFailed;     // number of URLs that failed
$progress->urlsFound;      // total number of URLs discovered
$progress->urlsPending;    // URLs discovered but not yet crawled

The start() method now returns a FinishReason enum, so you know exactly why the crawler stopped:

$reason = Crawler::create('https://example.com')
    ->limit(100)
    ->start();


Each CrawlResponse also carries a TransferStatistics object with detailed timing data for the request:

Crawler::create('https://example.com')
    ->onCrawled(function (string $url, CrawlResponse $response) {
        $stats = $response->transferStats();

        echo "{$url}\n";
        echo " Transfer time: {$stats->transferTimeInMs()}ms\n";
        echo " DNS lookup: {$stats->dnsLookupTimeInMs()}ms\n";
        echo " TLS handshake: {$stats->tlsHandshakeTimeInMs()}ms\n";
        echo " Time to first byte: {$stats->timeToFirstByteInMs()}ms\n";
        echo " Download speed: {$stats->downloadSpeedInBytesPerSecond()} B/s\n";
    })
    ->start();

All timing methods return values in milliseconds. They return null when the stat is unavailable; for example, tlsHandshakeTimeInMs() will be null for plain HTTP requests.

Throttling the crawl

I wanted the crawler to be a well-behaved piece of software. Running it at full speed with large concurrency could overload some servers. That’s why throttling is a polished feature of the package.

We ship two throttling strategies. The first one is FixedDelayThrottle, which adds a fixed delay between all requests:

$crawler->throttle(new FixedDelayThrottle(200)); // delay in milliseconds

AdaptiveThrottle is a strategy that adjusts the delay based on how fast the server responds. If the server responds quickly, the delay stays near the minimum. If the server responds slowly, we’ll automatically slow down crawling.

$crawler->throttle(new AdaptiveThrottle(
    minDelayMs: 50,
    maxDelayMs: 5000,
));

Testing with fake()

Like Laravel’s HTTP client, the crawler now has a fake() method that lets you define which response should be returned for a URL without making the actual request.

Crawler::create('https://example.com')
    ->fake([
        'https://example.com' => '<html><a href="/about">About</a></html>',
        'https://example.com/about' => '<html>About page</html>',
    ])
    ->onCrawled(function (string $url, CrawlResponse $response) {
        // assert on the faked responses here
    })
    ->start();

Faking responses like this keeps your tests fast.

Driver-based JavaScript rendering

Like in our Laravel PDF, Laravel Screenshot, and Laravel OG Image packages, Browsershot is no longer a hard dependency. JavaScript rendering is now driver-based, so you can use Browsershot, a new Cloudflare renderer, or write your own:

$crawler->executeJavaScript(new CloudflareRenderer($endpoint));

In closing

I’m usually very humble, but I think that in this case I can say that our crawler package is the best crawler available in the entire PHP ecosystem.

You can find the package on GitHub. The full documentation is available on our documentation site.

This is one of the many packages we’ve created at Spatie. If you want to support our open source work, consider picking up one of our paid products.

Laravel News Links

LEGO Builder’s Work Tray

https://theawesomer.com/photos/2026/03/lego_model_making_wood_tray_t.jpg

LEGO Builder’s Work Tray

This large wooden tray provides the ideal work surface for building LEGO models and other construction sets. Measuring 41.3″ wide by 21.6″ deep, it has 11 trays for organizing parts and a spacious work area. A smooth-spinning lazy susan lets you access your creations from all sides. At 12.1 lb., it’s easy to move around and is thin enough to store behind the couch.

The Awesomer

Real Python: Automate Python Data Analysis With YData Profiling

https://files.realpython.com/media/report-overview.c5b7b1fa2ba4.png

The YData Profiling package generates an exploratory data analysis (EDA) report with a few lines of code. The report provides dataset and column-level analysis, including plots and summary statistics to help you quickly understand your dataset. These reports can be exported to HTML or JSON so you can share them with other stakeholders.

By the end of this tutorial, you’ll understand that:

  • YData Profiling generates interactive reports containing EDA results, including summary statistics, visualizations, correlation matrices, and data quality warnings from DataFrames.
  • ProfileReport creates a profile you can save with .to_file() for HTML or JSON export, or display inline with .to_notebook_iframe().
  • Setting tsmode=True and specifying a date column with sortby enables time series analysis, including stationarity tests and seasonality detection.
  • The .compare() method generates side-by-side reports highlighting distribution shifts and statistical differences between datasets.

To get the most out of this tutorial, you’ll benefit from having knowledge of pandas.

Note: The examples in this tutorial were tested using Python 3.13. Additionally, you may need to install setuptools<81 for backward compatibility.

You can install this package using pip:

Shell

$ python -m pip install ydata-profiling

Once installed, you’re ready to transform any pandas DataFrame into an interactive report. To follow along, download the example dataset you’ll work with by clicking the link below:

Get Your Code: Click here to download the free sample code and start automating Python data analysis with YData Profiling.

The following example generates a profiling report from the 2024 flight delay dataset and saves it to disk:

Python
flight_report.py

import pandas as pd
from ydata_profiling import ProfileReport

df = pd.read_csv("flight_data_2024_sample.csv")

profile = ProfileReport(df)
profile.to_file("flight_report.html")

This code generates an HTML file containing interactive visualizations, statistical summaries, and data quality warnings:

[Image: Dataset overview displaying statistics and variable types. Statistics include 35 variables, 10,000 observations, and 3.2% missing cells. Variable types: 5 categorical, 23 numeric, 1 DateTime, 6 text.]

You can open the file in any browser to explore your data’s characteristics without writing additional analysis code.

There are a number of tools available for high-level dataset exploration, but not all are built for the same purpose. The following table highlights a few common options and when each one is a good fit:

Use case | Pick | Best for
--- | --- | ---
You want to quickly generate an exploratory report | ydata-profiling | Generating exploratory data analysis reports with visualizations
You want an overview of a large dataset | skimpy or df.describe() | Providing fast, lightweight summaries in the console
You want to enforce data quality | pandera | Validating schemas and catching errors in data pipelines

Overall, YData Profiling is best used as an exploratory report creation tool. If you’re looking to generate an overview of a large dataset, using skimpy or a built-in DataFrame method may be more efficient. Other tools, like pandera, are more appropriate for data validation.
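The lightweight console route from the table needs nothing beyond pandas. A minimal sketch with a tiny synthetic DataFrame (column names invented for illustration):

```python
import pandas as pd

# Tiny synthetic dataset; one cell is deliberately missing.
df = pd.DataFrame({
    "carrier": ["AA", "DL", "AA", "UA"],
    "delay_minutes": [12.0, None, 3.5, 40.0],
})

# Quick summary statistics for the numeric columns.
print(df.describe())

# Share of missing cells, akin to the "missing cells" stat in a full report.
missing_pct = df.isna().to_numpy().mean() * 100
print(f"Missing cells: {missing_pct:.1f}%")  # prints "Missing cells: 12.5%"
```

This is the kind of quick console check the table recommends for large datasets, where rendering a full interactive report would be overkill.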

If YData Profiling looks like the right choice for your use case, then keep reading to learn about its most important features.

Building a Report With YData Profiling

A YData Profiling report is composed of several sections that summarize different aspects of your dataset. Before customizing a report, it helps to understand the main components it includes and what each one is designed to show.

Read the full article at https://realpython.com/ydata-profiling-eda/ »


[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]

Planet Python

Real Python: Quiz: The pandas DataFrame: Make Working With Data Delightful

https://realpython.com/static/real-python-placeholder-3.5082db8a1a4d.jpg

In this quiz, you’ll test your understanding of the pandas DataFrame.

By working through this quiz, you’ll review how to create pandas DataFrames, access and modify columns, insert and sort data, extract values as NumPy arrays, and how pandas handles missing data.



Planet Python

4 productivity-boosting tmux features you should be using

https://static0.howtogeekimages.com/wordpress/wp-content/uploads/2025/10/tux-the-linux-mascot-sitting-with-a-laptop-in-front-of-a-large-terminal-window-1.png

Has your terminal app ever crashed mid-op? Ever wish you didn’t have to juggle multiple terminal tabs or deal with failed processes caused by terminal connection drops? If any of that sounds relatable, multiplexing, which isn’t as complicated as it sounds, can save you from the tab chaos and turn your Linux terminal into a productivity dashboard.

How-To Geek

Keto diet could improve response to exercise in people with high blood sugar

https://www.futurity.org/wp/wp-content/uploads/2026/02/keto-diet-exercise-high-blood-sugar-diabetes-1600.jpg

A person cuts avocado on a cutting board.

A new study finds that feeding mice with hyperglycemia a high-fat, low-carbohydrate diet lowered their blood sugar and improved their bodies’ response to exercise.

To be healthy, conventional wisdom tells us to exercise and limit fatty foods. Exercise helps us lose weight and build muscle. It makes our hearts stronger and boosts how we take in and use oxygen for energy—one of the strongest predictors of health and longevity.

But people with high blood sugar often don’t achieve those benefits from exercise, especially the ability to use oxygen efficiently. They’re at higher risk for heart and kidney disease, but high blood sugar can prevent their muscles from taking up oxygen more effectively in response to exercise.

For them, the new study suggests the answer could be eating not less fat, but more.

The study, by exercise medicine scientist Sarah Lessard and published in Nature Communications, found that a high-fat, ketogenic diet reduced high blood sugar, or hyperglycemia, in mice and made their bodies more responsive to exercise.

“After one week on the ketogenic diet, their blood sugar was completely normal, as though they didn’t have diabetes at all,” says Lessard, associate professor at Virginia Tech’s Fralin Biomedical Research Institute at VTC’s Center for Exercise Medicine Research.

“Over time, the diet caused remodeling of the mice’s muscles, making them more oxidative and making them react better to aerobic exercise.”

The ketogenic diet is named for its ability to induce ketosis, a metabolic state that shifts the body to burning fat for fuel instead of sugar. The diet is controversial because it calls for eating high-fat, very low-carbohydrate foods, which is counter to the low-fat diet historically urged by health advocates.

However, the keto diet has been linked to benefits for people with some diseases, including epilepsy and Parkinson’s disease. In the 1920s, before the discovery of insulin, it was a way to manage diabetes because of its ability to lower blood sugar.

In earlier research, Lessard found that people with high blood sugar had lower exercise capacity. She wondered if the diet might improve the response to exercise, leading to higher exercise capacity.

Mice were fed a high-fat, low-carbohydrate diet and exercised on running wheels. The mice developed more slow-twitch muscle fibers, which give better endurance.

“Their bodies were more efficiently using oxygen, which is a sign of higher aerobic capacity,” Lessard says.

Lessard says exercise positively affects virtually every tissue in our body, even fat tissue, but she and others are seeing that the greatest health improvements won’t come with diet or exercise alone.

“What we’re really finding from this study and from our other studies is that diet and exercise aren’t simply working in isolation,” says Lessard, who also holds an appointment in the Department of Human Foods, Nutrition, and Exercise in Virginia Tech’s College of Agriculture and Life Sciences.

“There are a lot of combined effects, and so we can get the most benefits from exercise if we eat a healthy diet at the same time.”

Next, Lessard would like to continue her research in human subjects to see if they gain the same benefits from the keto diet seen in mice.

She also notes that the keto diet is challenging to follow. A less restrictive regimen, such as the Mediterranean diet, might be easier for people to follow and still be effective. That diet can also keep blood sugar low, while including carbohydrates from unprocessed fruits, vegetables, and whole grains rather than restricting carbohydrates altogether.

“Our previous studies have shown that any strategy you and your doctor have arrived at to reduce your blood sugar could work,” she says.

Source: Virginia Tech

The post Keto diet could improve response to exercise in people with high blood sugar appeared first on Futurity.

Futurity

How to build an automatic internet speed tracker for your home network

https://static0.howtogeekimages.com/wordpress/wp-content/uploads/2026/02/laptop-displaying-an-automated-internet-speed-tracker-with-a-speedometer-graphic-and-terminal-window-results.png

Have you been experiencing frequent internet connection issues, but whenever you do an internet speed test, the results show you’re getting the speeds your internet service provider promised? If you can relate to that, consider building an automatic internet speed tracker and logger.

How-To Geek

Laravel Launches an Open Directory of AI Agent Skills for Laravel and PHP

https://picperf.io/https://laravelnews.s3.amazonaws.com/featured-images/laravel-skils.png

The Laravel ecosystem continues to lean into the agent-driven future with the launch of Laravel Skills, an open directory of reusable AI agent skills designed specifically for Laravel and PHP developers.

Available at https://skills.laravel.cloud/, the new site makes it easy to discover, share, and install skills that help AI tools better understand your codebase, workflows, and best practices.

What Is Laravel Skills?

Laravel Skills is described as an open directory of reusable AI agent skills for Laravel and PHP, where developers can browse and install skills with a single command.

These skills are designed to work with popular AI coding environments including Claude Code, Cursor, Windsurf, Copilot, and others, helping agents perform tasks like:

  • Following Laravel conventions
  • Applying PHP best practices
  • Working with Eloquent and queues
  • Running TDD workflows
  • Structuring applications correctly
  • Reviewing code quality

Instead of repeatedly explaining your stack or preferences, you can load a skill that teaches your agent how to behave.


Install Skills with a Single Command

One of the highlights is how lightweight the workflow is. Skills can be installed using a simple command:

npx skills add <owner/repo>

A Growing Library of Community Skills

The directory already includes skills covering areas like:

  • Laravel architecture guidelines
  • Eloquent optimization
  • Modern PHP patterns
  • Testing workflows
  • API design practices
  • Frontend integrations

Because it’s community powered, developers can submit their own skills and contribute patterns that reflect real-world experience.

Laravel News