Ward: A Security Scanner for Laravel

https://picperf.io/https://laravelnews.s3.amazonaws.com/featured-images/Ward-2-LN.png

Ward, created by El Jakani Yassine is a command-line security scanner written in Go designed around Laravel’s structure. Rather than running generic pattern matching across your codebase, it first parses your project’s structure—routes, models, controllers, middleware, Blade templates, config files, environment variables, and dependencies—then runs targeted checks against that context.

Installation

Ward is distributed as a Go binary, so you’ll need Go installed; then you can run:

go install github.com/eljakani/ward@latest

 

# Make sure $GOPATH/bin is in your PATH

export PATH="$PATH:$(go env GOPATH)/bin"

After installing, run ward init to create ~/.ward/ with a default config file, 42+ built-in rules organized by category, and directories for reports and scan history.

Scanning a Project

Point Ward at a local directory or a remote Git repository:

# Local project

ward scan /path/to/laravel-project

 

# Remote repository (shallow cloned)

ward scan https://github.com/user/laravel-project.git

When run in a terminal, Ward displays a TUI. A scan view shows pipeline progress and live severity counts as scanners run. Once complete, a results view presents a sortable findings table with severity badges, category grouping, and a detail panel showing descriptions, code snippets, and remediation guidance.

Screenshot of the Ward TUI

What It Checks

Ward ships with four independent scan engines:

  • env-scanner runs 8 checks against your .env file, including debug mode enabled in production, missing or weak APP_KEY, and secrets leaked in .env.example.
  • config-scanner runs 13 checks across your config/*.php files, covering hardcoded credentials, insecure session flags, CORS wildcard origins, and missing security options.
  • dependency-scanner queries the OSV.dev advisory database in real time against your composer.lock to find vulnerable Packagist packages. Because it queries live data rather than a bundled list, it reflects current advisories rather than whatever was current at the tool’s last release.
  • rules-scanner applies 42 rules across 7 categories: secrets (hardcoded passwords, API keys, AWS credentials), injection (SQL, command, eval), XSS (unescaped Blade output, JavaScript injection), debug artifacts (dd(), dump(), phpinfo()), weak cryptography (md5, sha1, insecure RNG), configuration issues (CORS, CSRF, mass assignment), and authentication gaps (missing middleware, absent rate limiting).
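
OSV.dev exposes a public JSON API, so you can reproduce the kind of lookup the dependency scanner performs yourself. The endpoint (POST https://api.osv.dev/v1/query) is OSV's documented query API; the package name and version below are just illustrative examples, not pulled from any real composer.lock.

```shell
# Build the JSON payload a dependency scanner would send to OSV.dev for one
# package pinned in composer.lock (name/version here are illustrative).
payload='{"version":"9.1.8","package":{"name":"laravel/framework","ecosystem":"Packagist"}}'
echo "$payload"

# To run the live query (requires network access):
#   curl -s -X POST -d "$payload" https://api.osv.dev/v1/query
```

Querying per-package at scan time is what keeps the advisory data current, at the cost of needing network access during the scan.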

Output Formats

Configure output formats in ~/.ward/config.yaml:

output:
  formats: [json, sarif, html, markdown]
  dir: ./reports

  • JSON — machine-readable results
  • SARIF — compatible with GitHub Code Scanning and IDE integrations
  • HTML — standalone dark-themed visual report
  • Markdown — suitable for PR comments

CI/CD Integration

Ward returns non-zero exit codes when findings meet or exceed a specified severity, making it straightforward to gate deployments:

ward scan . --output json --fail-on high
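
The exit-code contract is what makes gating work: the pipeline step only needs to branch on the scanner's return status. A minimal sketch of the pattern, with a stand-in function in place of the real ward invocation:

```shell
# Stand-in for `ward scan . --fail-on high`; like Ward, it returns a
# non-zero status when findings meet the severity threshold.
run_scan() { return 1; }

if run_scan; then
  echo "deploy allowed"
else
  echo "deploy blocked"   # CI treats the non-zero status as a failed gate
fi
```

Because most CI systems fail a step on any non-zero exit code, the plain `ward scan . --fail-on high` command works as a gate with no wrapper script at all.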

A GitHub Actions example from the project’s documentation:

name: Ward Security Scan

on: [push, pull_request]

jobs:
  security-scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-go@v5
        with:
          go-version: '1.24'
      - name: Install Ward
        run: go install github.com/eljakani/ward@latest
      - name: Run Ward
        run: ward init && ward scan . --output json
      - name: Upload SARIF
        if: always()
        uses: github/codeql-action/upload-sarif@v3
        with:
          sarif_file: ward-report.sarif

Baseline Management

For teams that want to acknowledge existing findings without suppressing future ones, Ward supports a baseline workflow:

# Capture current state

ward scan . --output json --update-baseline .ward-baseline.json

 

# On subsequent runs, suppress known findings and fail only on new ones

ward scan . --output json --baseline .ward-baseline.json --fail-on high

Committing .ward-baseline.json to your repository lets the team track which findings have been acknowledged and catch regressions in CI.
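
Conceptually, a baseline is just a set difference over finding identifiers: suppress everything already acknowledged, surface only what's new. A rough sketch of the idea (illustrative only; the file names and finding IDs are made up, and Ward implements this internally via --baseline):

```shell
# Finding IDs from a previously acknowledged scan, and from today's scan.
printf '%s\n' WARD-001 WARD-007 > /tmp/baseline.txt
printf '%s\n' WARD-001 WARD-007 WARD-012 > /tmp/current.txt
sort -o /tmp/baseline.txt /tmp/baseline.txt
sort -o /tmp/current.txt /tmp/current.txt

# comm -13 prints lines unique to the second file: findings not yet acknowledged.
comm -13 /tmp/baseline.txt /tmp/current.txt
# → WARD-012
```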

Custom Rules

Drop .yaml files into ~/.ward/rules/ to define additional checks. Rules support regex or substring patterns, file-existence checks, and negative patterns that fire when something is absent—for example, flagging routes that lack @csrf. You can target PHP files, Blade templates, config files, environment files, routes, migrations, or JavaScript files.

rules:
  - id: TEAM-001
    title: "Hardcoded internal service URL"
    severity: medium
    patterns:
      - type: regex
        target: php-files
        pattern: 'https?://internal-service\.\w+'

Individual built-in rules can also be disabled or have their severity overridden in config.yaml without touching the rule files themselves.

Scan History

Ward saves each scan result to ~/.ward/store/, and on subsequent runs it surfaces a diff against the previous scan—for example, "2 new, 3 resolved (12→11)"—so you can track how your security posture changes over time.

Visit Eljakani/ward on GitHub to browse the source and get started.

Laravel News

Workflow 3.0

https://rodolfoberrios.com/org/chevere/packages/workflow/workflow-social.png

After three years of development and extensive production use, Workflow 3.0 brings significant improvements to building multi-step procedures in PHP. This release focuses on simplifying asynchronous execution, improving developer experience, and adding essential resilience features.

# Dependency injection

Version 3.0 introduces container support for injecting dependencies into jobs at runtime.

This enables workflows to remain stateless while accessing services like databases or HTTP clients through the container.

# Callable support

Version 3.0 accepts any PHP callable as a job, providing flexibility in how you define workflow steps.

This eliminates boilerplate for simple operations while maintaining support for Action classes when business logic requires full class structure. Callables enable inline data transformation without requiring dedicated action classes for single-use operations.

# Response property access

Version 3.0 extends response() to access public object properties directly, not just array keys.

This works transparently with both arrays and objects, allowing actions to return domain objects without requiring array conversion. The workflow engine inspects the response and accesses properties or array keys accordingly.

# Retry policies

Transient failures in distributed systems are inevitable. Workflow 3.0 implements configurable retry policies.

Retry policies are essential for handling transient failures in distributed systems, where network operations and external services may temporarily fail but succeed on subsequent attempts.

# True async execution

The parallel runner has been replaced with a true async implementation using AMPHP. This provides non-blocking execution without the overhead of process forking, leveraging PHP 8.1+ Fibers for efficient multitasking.

Independent jobs execute concurrently while the engine manages resolution of the dependency graph. This follows an asynchronous, task-based execution model: the scheduler unrolls the graph and executes nodes as soon as their data dependencies (like response()) are satisfied. This shift significantly reduces memory footprint compared to the previous process-based model while maintaining strict execution order.

# Conditional execution

Version 3.0 adds withRunIfNot() for cleaner conditional logic.

This complements withRunIf() and accepts boolean literals, variables, job responses, and callables. Conditional execution enables branching without complex orchestration logic.

# Type safety

Integration with chevere/parameter 2.0 provides runtime validation.

Workflow validates inputs before job execution and verifies response types match expected parameters in dependent jobs. This eliminates a class of runtime errors that would otherwise require extensive testing.

# Practical example

Here’s a complete workflow for processing user uploads.

This workflow validates the file, resizes and optimizes it in parallel, then stores the result. The resize job retries on failure, and both processing jobs only run if validation succeeds.

# Migration notes

The parallel runner removal is the only breaking change. Applications using parallel execution should switch to async jobs with appropriate dependency declarations. The async runner provides better performance and simpler semantics.

# Conclusion

Workflow 3.0 represents three years of production refinement. The addition of container support, callables, retry policies, and true async execution addresses real-world requirements while maintaining the declarative approach that makes workflows maintainable.

The library continues following established patterns from workflow research and distributed systems literature. Each job remains independently testable, workflows stay declarative, and the dependency graph handles execution ordering automatically.

For complete documentation and examples, visit chevere.org/packages/workflow.

Laravel News Links

Hardening MySQL: Practical Security Strategies for DBAs

https://percona.community/blog/2026/03/mysql-security.png

MySQL Security Best Practices: A Practical Guide for Locking Down Your Database

Introduction

MySQL runs just about everywhere. I’ve seen it behind small personal projects, internal tools, SaaS platforms, and large enterprise systems handling serious transaction volume. When your database sits at the center of everything, it becomes part of your security perimeter whether you planned it that way or not. And that makes it a target.

Securing MySQL isn’t about flipping one magical setting and calling it done. It’s about layers. Tight access control. Encrypted connections. Clear visibility into what’s happening on the server. And operational discipline that doesn’t drift over time.

In this guide, I’m going to walk through practical MySQL security best practices that you can apply right away. These are the kinds of checks and hardening steps that reduce real risk in real environments, and help build a database platform that stays resilient under pressure.


1. Principle of Least Privilege

One of the most common security mistakes is over-granting privileges.
Applications and users should have only the permissions they absolutely
need.

Bad Practice

sql

GRANT ALL PRIVILEGES ON *.* TO 'appuser'@'10.%';

Better Approach

sql

GRANT SELECT, INSERT, UPDATE ON appdb.* TO 'appuser'@'10.%';

Recommendations

  • Avoid global privileges unless absolutely required
  • Restrict users by host whenever possible
  • Separate admin accounts from application accounts
  • Use different credentials for read-only vs write operations

Audit Existing Privileges

sql

SELECT user, host, Select_priv, Insert_priv, Update_priv, Delete_priv
FROM mysql.user;

2. Strong Authentication & Password Policies

Weak credentials remain one of the easiest attack vectors.

Enable Password Validation

component_validate_password is MySQL’s modern password policy engine. Think of it as a gatekeeper for credential quality. Every time someone tries to set or change a password, it checks whether that password meets your defined security standards before letting it in.

It replaces the older validate_password plugin with a component-based architecture that is more flexible and better aligned with MySQL 8.x design.

sql

INSTALL COMPONENT 'file://component_validate_password';

What It Does

When enabled, it enforces rules such as:

  • Minimum password length
  • Required mix of character types
  • Dictionary file checks
  • Strength scoring

If a password fails policy, the statement is rejected before the credential is stored.

Why It Matters

Weak passwords remain one of the most common entry points in database breaches. This component reduces risk by enforcing baseline credential hygiene automatically, instead of relying on developer discipline.

Recommended Policies

  • Minimum length: 14+ characters
  • Require mixed case, numbers, and symbols
  • Enable dictionary checks
  • Enable username checks
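
On the client side, generating credentials that clear these policies is easy to script. One common approach (assumes OpenSSL is installed):

```shell
# 16 random bytes, base64-encoded: 24 characters drawn from an alphabet of
# upper case, lower case, digits, and symbols (+ / =), comfortably past a
# 14-character minimum length.
openssl rand -base64 16
```

Pairing server-side validation with generated (rather than human-chosen) passwords removes the weakest link entirely.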

Remove Anonymous Accounts

Find Anonymous Users

Anonymous users have an empty User field.

sql

SELECT user, host FROM mysql.user WHERE user='';

If you see rows returned, those are anonymous accounts.

Drop Anonymous Users

In modern MySQL versions:

sql

DROP USER ''@'localhost';
DROP USER ''@'%';

Adjust the Host value based on what your query returned.

Why This Matters

Anonymous users:

  • Allow login without credentials
  • May have default privileges in some distributions
  • Increase the attack surface unnecessarily

In hardened environments, there should be zero accounts with an empty username. Every identity should be explicit, accountable, and least-privileged.

3. Encryption Everywhere

Encryption protects data both in transit and at rest.

Enable Transparent Data Encryption (TDE)

See my January 13 post for a deep dive into Transparent Data Encryption:
Configuring the Component Keyring in Percona Server and PXC 8.4

Enable TLS for Connections

sql

require_secure_transport=ON

Verify SSL Usage

sql

SHOW STATUS LIKE 'Ssl_cipher';

Encryption Areas to Consider

  • Client-server connections
  • Replication channels
  • Backups and snapshot storage
  • Disk-level encryption

4. Patch Management & Version Hygiene

Running outdated MySQL versions is equivalent to leaving known
vulnerabilities exposed.

Maintenance Strategy

  • Track vendor security advisories
  • Apply minor updates regularly
  • Test patches in staging before production rollout
  • Avoid unsupported MySQL versions

Check Version

sql

SELECT VERSION();

5. Logging, Auditing, and Monitoring

Security without visibility is blind defense. Enable audit logging.

1. audit_log Plugin (Legacy Model)

Installation

sql

INSTALL PLUGIN audit_log SONAME 'audit_log.so';

Verify

sql

SHOW PLUGINS LIKE 'audit%';

2. audit_log_filter Component (Modern Model)

The audit_log_filter component was introduced in MySQL 8 to provide a more flexible and granular alternative to the older plugin model.

Installation

sql

INSTALL COMPONENT 'file://component_audit_log_filter';

Verify

sql

SELECT * FROM mysql.component;

Architecture Difference

Instead of a single global policy, you create:

  • Filters (define what to log)
  • Users assigned to filters

It’s granular and rule-driven.

Auditing Key Events

  • Failed logins
  • Privilege changes
  • Schema modifications
  • Unusual query activity

References:

  1. Audit Log Filter Component
  2. Audit Log Filters Part II

Useful Metrics

sql

SHOW GLOBAL STATUS LIKE 'Aborted_connects';
SHOW GLOBAL STATUS LIKE 'Connections';

6. Secure Configuration Hardening

A secure baseline configuration reduces risk from common attack
patterns.

Recommended Settings

ini

local_infile=OFF
secure_file_priv=/var/lib/mysql-files
sql_mode="STRICT_ALL_TABLES"
secure-log-path=/var/log/mysql

Why These Matter

  • Prevent arbitrary file imports
  • Reduce filesystem abuse
  • Restrict data export/import locations

7. Backup Security

Backups often contain everything an attacker wants.

Backup Best Practices

  • Encrypt backups
  • Restrict filesystem permissions
  • Store offsite copies securely
  • Rotate backup credentials
  • Verify restore procedures regularly

Example Permission Check
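
A minimal sketch of such a check: create a stand-in backup file, lock it down, and confirm only the owner can read it (the paths here are illustrative, not a recommendation for where to keep backups):

```shell
# Create a stand-in backup file and restrict it to the owning user.
mkdir -p /tmp/mysql-backups
touch /tmp/mysql-backups/backup.sql.gz
chmod 600 /tmp/mysql-backups/backup.sql.gz

# Verify: mode should be 600 (read/write for owner, nothing for group/other).
stat -c '%a %n' /tmp/mysql-backups/backup.sql.gz
# → 600 /tmp/mysql-backups/backup.sql.gz
```

The same `stat`-based check can run in a cron job or CI step to catch permission drift on real backup directories.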

8. Replication & Cluster Security

Replication is not just a data distribution feature. It is a persistent, privileged communication channel between servers. If misconfigured, it can become a lateral movement pathway inside your infrastructure. Treat every replication link as a trusted but tightly controlled corridor.

Principle: Replication Is a Privileged Service Account

Replication users require elevated capabilities. They must be isolated, tightly scoped, and monitored like any other service identity.

Secure Replication Users

sql

CREATE USER 'repl'@'10.%'
 IDENTIFIED BY 'strongpassword'
 REQUIRE SSL;

GRANT REPLICATION REPLICA ON *.* TO 'repl'@'10.%';

Hardening considerations:

  • Restrict host patterns as narrowly as possible. Avoid % whenever feasible.
  • Require SSL or X.509 certificate authentication.
  • Enforce strong password policies or use a secrets manager.
  • Disable interactive login capability if applicable.

Encrypt Replication Traffic

Replication traffic may include sensitive row data, DDL statements, and metadata. Always encrypt it.

At minimum:

  • Enable require_secure_transport=ON
  • Configure TLS certificates on source and replica
  • Set replication channel to use SSL:

sql

CHANGE REPLICATION SOURCE TO
 SOURCE_SSL=1,
 SOURCE_SSL_CA='/path/ca.pem',
 SOURCE_SSL_CERT='/path/client-cert.pem',
 SOURCE_SSL_KEY='/path/client-key.pem';

For MySQL Group Replication or InnoDB Cluster:

  • Enable group communication SSL
  • Validate certificate identity
  • Use dedicated replication networks

Binary Log and Relay Log Protection

Replication relies on binary logs. Protect them.

  • Set binlog_encryption=ON
  • Set relay_log_info_repository=TABLE
  • Restrict filesystem access to log directories
  • Monitor log retention policies

Compromised binary logs can reveal historical data changes.

9. Continuous Security Reviews

Security is not a one-time checklist. Regular audits help catch
configuration drift and evolving threats.

Suggested Review Cadence

  • Weekly: failed login review
  • Monthly: privilege audits
  • Quarterly: configuration review
  • Semiannually: full security assessment

Security Checklist Summary

Area            Key Action
Access Control  Least privilege grants
Authentication  Strong password policies
Encryption      TLS + encrypted storage
Updates         Regular patching
Monitoring      Audit logging enabled
Configuration   Harden defaults
Backups         Encrypt and protect
Replication     Secure replication users

Final Thoughts

Strong MySQL security doesn’t come from one feature or one tool. It comes from layers working together. Hardened configuration. Tight, intentional privilege design. Encryption everywhere it makes sense. And monitoring that actually gets reviewed instead of just written to disk.

In my experience, the strongest environments aren’t the ones trying to be unbreakable. They’re the ones built to detect, contain, and respond. Every layer should either reduce blast radius or increase visibility. If an attacker gets through one control, the next one slows them down. And while they’re slowing down, your logging and monitoring should already be telling you something isn’t right.

That’s what a mature security posture looks like in practice.

Planet for the MySQL Community

How We Brought a Dead MySQL InnoDB Cluster Back to Life

A war story: complete outage, GTID chaos, duplicate UUIDs, and the steps that finally worked

There’s a particular kind of dread that comes with staring at a database cluster where every node shows OFFLINE.

No reads. No writes. Just silence where your production data used to be.

That’s exactly where we found ourselves with a MySQL InnoDB Cluster — three nodes, all down, all stubbornly…

Planet for the MySQL Community

A better way to crawl websites with PHP

https://freek.dev/og-image/ce502835e5baaaa1251b6fb59c110536.jpeg

Our spatie/crawler package is one of the first packages I created. It allows you to crawl a website with PHP. It is used extensively in Oh Dear and our laravel-sitemap package.

Throughout the years, the API had accumulated some rough edges. With v9, we cleaned all of that up and added a bunch of features we’ve wanted for a long time.

Let me walk you through all of it!

Using the crawler

The simplest way to crawl a site is to pass a URL to Crawler::create() and attach a callback via onCrawled():

use Spatie\Crawler\Crawler;
use Spatie\Crawler\CrawlResponse;

Crawler::create('https://example.com')
    ->onCrawled(function (string $url, CrawlResponse $response) {
        echo "{$url}: {$response->status()}\n";
    })
    ->start();

The callable gets a CrawlResponse object with these methods:

$response->status();        // HTTP status code
$response->body();          // response body as a string
$response->header('some-header');  // a single response header
$response->dom();           // parsed DOM for querying the HTML
$response->isSuccessful();  // 2xx status
$response->isRedirect();    // 3xx status
$response->foundOnUrl();    // the page on which this URL was discovered
$response->linkText();      // the text of the link pointing to this URL
$response->depth();         // how many links deep the crawl found this URL

The body is cached, so calling body() multiple times won’t re-read the stream. And if you still need the raw PSR-7 response for some reason, toPsrResponse() has you covered.

You can control how many URLs are fetched at the same time with concurrency(), and set a hard cap with limit():

Crawler::create('https://example.com')
    ->concurrency(5)
    ->limit(200) 
    ->onCrawled(function (string $url, CrawlResponse $response) {
        
    })
    ->start();

There are a couple of other callbacks you can hook into:

Crawler::create('https://example.com')
    ->onCrawled(function (string $url, CrawlResponse $response, CrawlProgress $progress) {
        echo "[{$progress->urlsProcessed}/{$progress->urlsFound}] {$url}\n";
    })
    ->onFailed(function (string $url, RequestException $e, CrawlProgress $progress) {
        echo "Failed: {$url}\n";
    })
    ->onFinished(function (FinishReason $reason, CrawlProgress $progress) {
        echo "Done: {$reason->name}\n";
    })
    ->start();

Every one of these callbacks now receives a CrawlProgress object that tells you exactly where you are in the crawl:

$progress->urlsProcessed;  // URLs crawled so far
$progress->urlsFailed;     // URLs that could not be crawled
$progress->urlsFound;      // total URLs discovered
$progress->urlsPending;    // URLs still waiting in the queue

The start() method now returns a FinishReason enum, so you know exactly why the crawler stopped:

$reason = Crawler::create('https://example.com')
    ->limit(100)
    ->start();


Each CrawlResponse also carries a TransferStatistics object with detailed timing data for the request:

Crawler::create('https://example.com')
    ->onCrawled(function (string $url, CrawlResponse $response) {
        $stats = $response->transferStats();

        echo "{$url}\n";
        echo " Transfer time: {$stats->transferTimeInMs()}ms\n";
        echo " DNS lookup: {$stats->dnsLookupTimeInMs()}ms\n";
        echo " TLS handshake: {$stats->tlsHandshakeTimeInMs()}ms\n";
        echo " Time to first byte: {$stats->timeToFirstByteInMs()}ms\n";
        echo " Download speed: {$stats->downloadSpeedInBytesPerSecond()} B/s\n";
    })
    ->start();

All timing methods return values in milliseconds. They return null when the stat is unavailable, for example tlsHandshakeTimeInMs() will be null for plain HTTP requests.

Throttling the crawl

I wanted the crawler to be a well-behaved piece of software. Running it at full speed with large concurrency could overload some servers. That’s why throttling is a polished feature of the package.

We ship two throttling strategies. The first, FixedDelayThrottle, adds a fixed delay between all requests.

$crawler->throttle(new FixedDelayThrottle(200)); // delay in milliseconds

AdaptiveThrottle adjusts the delay based on how fast the server responds. If the server responds quickly, the delay stays near the minimum. If it responds slowly, the crawler automatically backs off.

$crawler->throttle(new AdaptiveThrottle(
    minDelayMs: 50,
    maxDelayMs: 5000,
));

Testing with fake()

Like Laravel’s HTTP client, the crawler now has a fake mode that lets you define which response should be returned for each request without making an actual request.

Crawler::create('https://example.com')
    ->fake([
        'https://example.com' => '<html><a href="/about">About</a></html>',
        'https://example.com/about' => '<html>About page</html>',
    ])
    ->onCrawled(function (string $url, CrawlResponse $response) {
        
    })
    ->start();

Faking responses like this keeps your tests fast.

Driver-based JavaScript rendering

Like in our Laravel PDF, Laravel Screenshot, and Laravel OG Image packages, Browsershot is no longer a hard dependency. JavaScript rendering is now driver-based, so you can use Browsershot, a new Cloudflare renderer, or write your own:

$crawler->executeJavaScript(new CloudflareRenderer($endpoint));

In closing

I’m usually very humble, but I think that in this case I can say that our crawler package is the best available crawler in the entire PHP ecosystem.

You can find the package on GitHub. The full documentation is available on our documentation site.

This is one of the many packages we’ve created at Spatie. If you want to support our open source work, consider picking up one of our paid products.

Laravel News Links

LEGO Builder’s Work Tray

https://theawesomer.com/photos/2026/03/lego_model_making_wood_tray_t.jpg

LEGO Builder’s Work Tray

This large wooden tray provides the ideal work surface for building LEGO models and other construction sets. Measuring 41.3″ wide by 21.6″ deep, it has 11 trays for organizing parts and a spacious work area. A smooth-spinning lazy susan lets you access your creations from all sides. At 12.1 lb., it’s easy to move around and is thin enough to store behind the couch.

The Awesomer

Real Python: Automate Python Data Analysis With YData Profiling

https://files.realpython.com/media/report-overview.c5b7b1fa2ba4.png

The YData Profiling package generates an exploratory data analysis (EDA) report with a few lines of code. The report provides dataset and column-level analysis, including plots and summary statistics to help you quickly understand your dataset. These reports can be exported to HTML or JSON so you can share them with other stakeholders.

By the end of this tutorial, you’ll understand that:

  • YData Profiling generates interactive reports containing EDA results, including summary statistics, visualizations, correlation matrices, and data quality warnings from DataFrames.
  • ProfileReport creates a profile you can save with .to_file() for HTML or JSON export, or display inline with .to_notebook_iframe().
  • Setting tsmode=True and specifying a date column with sortby enables time series analysis, including stationarity tests and seasonality detection.
  • The .compare() method generates side-by-side reports highlighting distribution shifts and statistical differences between datasets.

To get the most out of this tutorial, you’ll benefit from having knowledge of pandas.

Note: The examples in this tutorial were tested using Python 3.13. Additionally, you may need to install setuptools<81 for backward compatibility.

You can install this package using pip:

Shell

$ python -m pip install ydata-profiling

Once installed, you’re ready to transform any pandas DataFrame into an interactive report. To follow along, download the example dataset you’ll work with by clicking the link below:

Get Your Code: Click here to download the free sample code and start automating Python data analysis with YData Profiling.

The following example generates a profiling report from the 2024 flight delay dataset and saves it to disk:

Python
flight_report.py

import pandas as pd
from ydata_profiling import ProfileReport

df = pd.read_csv("flight_data_2024_sample.csv")

profile = ProfileReport(df)
profile.to_file("flight_report.html")

This code generates an HTML file containing interactive visualizations, statistical summaries, and data quality warnings:

Dataset overview displaying statistics and variable types. Statistics include 35 variables, 10,000 observations, and 3.2% missing cells. Variable types: 5 categorical, 23 numeric, 1 DateTime, 6 text.

You can open the file in any browser to explore your data’s characteristics without writing additional analysis code.

There are a number of tools available for high-level dataset exploration, but not all are built for the same purpose. The following table highlights a few common options and when each one is a good fit:

  • ydata-profiling: best when you want to quickly generate an exploratory data analysis report with visualizations
  • skimpy or df.describe(): best for fast, lightweight summaries of a large dataset in the console
  • pandera: best for enforcing data quality by validating schemas and catching errors in data pipelines

Overall, YData Profiling is best used as an exploratory report creation tool. If you’re looking to generate an overview for a large dataset, using SkimPy or a built-in DataFrame library method may be more efficient. Other tools, like Pandera, are more appropriate for data validation.

If YData Profiling looks like the right choice for your use case, then keep reading to learn about its most important features.

Building a Report With YData Profiling

A YData Profiling report is composed of several sections that summarize different aspects of your dataset. Before customizing a report, it helps to understand the main components it includes and what each one is designed to show.

Read the full article at https://realpython.com/ydata-profiling-eda/ »


[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]

Planet Python

Real Python: Quiz: The pandas DataFrame: Make Working With Data Delightful

https://realpython.com/static/real-python-placeholder-3.5082db8a1a4d.jpg

In this quiz, you’ll test your understanding of the
pandas DataFrame.

By working through this quiz, you’ll review how to create pandas DataFrames, access and modify columns, insert and sort data, extract values as NumPy arrays, and how pandas handles missing data.


[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]

Planet Python

4 productivity-boosting tmux features you should be using

https://static0.howtogeekimages.com/wordpress/wp-content/uploads/2025/10/tux-the-linux-mascot-sitting-with-a-laptop-in-front-of-a-large-terminal-window-1.png

Has your terminal app ever crashed mid-op? Ever wish you didn’t have to juggle multiple terminal tabs or deal with failed processes caused by terminal connection drops? If any of that sounds relatable, multiplexing, which isn’t as complicated as it sounds, can save you from the tab chaos and turn your Linux terminal into a productivity dashboard.

How-To Geek

Keto diet could improve response to exercise in people with high blood sugar

https://www.futurity.org/wp/wp-content/uploads/2026/02/keto-diet-exercise-high-blood-sugar-diabetes-1600.jpg

A person cuts avocado on a cutting board.

A new study finds that feeding mice with hyperglycemia a high-fat, low-carbohydrate diet lowered their blood sugar and improved their bodies’ response to exercise.

To be healthy, conventional wisdom tells us to exercise and limit fatty foods. Exercise helps us lose weight and build muscle. It makes our hearts stronger and boosts how we take in and use oxygen for energy—one of the strongest predictors of health and longevity.

But people with high blood sugar often don’t achieve those benefits from exercise, especially the ability to use oxygen efficiently. They’re at higher risk for heart and kidney disease, but high blood sugar can prevent their muscles from taking up oxygen more effectively in response to exercise.

For them, the new study suggests the answer could be eating not less fat, but more.

The study by exercise medicine scientist Sarah Lessard in Nature Communications, found that a high-fat, ketogenic diet reduced high blood sugar, or hyperglycemia, in mice, and their bodies were more responsive to exercise.

“After one week on the ketogenic diet, their blood sugar was completely normal, as though they didn’t have diabetes at all,” says Lessard, associate professor at Virginia Tech’s Fralin Biomedical Research Institute at VTC’s Center for Exercise Medicine Research.

“Over time, the diet caused remodeling of the mice’s muscles, making them more oxidative and making them react better to aerobic exercise.”

The ketogenic diet is named for its ability to induce ketosis, a metabolic state that shifts the body to burning fat for fuel instead of sugar. The diet is controversial because it calls for eating high-fat, very low-carbohydrate foods, which is counter to the low-fat diet historically urged by health advocates.

However, the keto diet has been linked to benefits for people with some diseases, including epilepsy and Parkinson’s disease. In the 1920s, before the discovery of insulin, it was a way to manage diabetes because of its ability to lower blood sugar.

In earlier research, Lessard found that people with high blood sugar had lower exercise capacity. She wondered if the diet might improve the response to exercise, leading to higher exercise capacity.

Mice were fed a high-fat, low-carbohydrate diet and exercised on running wheels. The mice developed more slow-twitch muscle fibers, which give better endurance.

“Their bodies were more efficiently using oxygen, which is a sign of higher aerobic capacity,” Lessard says.

Lessard says exercise positively affects virtually every tissue in our body, even fat tissue, but she and others are seeing that the greatest health improvements won’t come with diet or exercise alone.

“What we’re really finding from this study and from our other studies is that diet and exercise aren’t simply working in isolation,” says Lessard, who also holds an appointment in the Department of Human Foods, Nutrition, and Exercise in Virginia Tech’s College of Agriculture and Life Sciences.

“There are a lot of combined effects, and so we can get the most benefits from exercise if we eat a healthy diet at the same time.”

Next, Lessard would like to continue her research in human subjects to see if they gain the same benefits from the keto diet seen in mice.

She also notes that the keto diet is challenging to follow. A less restrictive regimen, such as the Mediterranean diet, might be easier for people to follow and still be effective. That diet can also keep blood sugar low, while including carbohydrates from unprocessed fruits, vegetables, and whole grains rather than restricting carbohydrates altogether.

“Our previous studies have shown that any strategy you and your doctor have arrived at to reduce your blood sugar could work,” she says.

Source: Virginia Tech

The post Keto diet could improve response to exercise in people with high blood sugar appeared first on Futurity.

Futurity