New Legal Strategy Challenges ATF’s Interpretation of the 1986 Hughes Amendment Machine Gun Ban

https://www.ammoland.com/wp-content/uploads/2026/03/MP5-machine-gun-9mm-full-auto-iStock-474974070-500×334.jpg

Opinion

In a new “breaking news” sit-down on The Four Boxes Diner, constitutional litigator and Second Amendment historian Stephen P. Halbrook joins host Mark W. Smith to walk viewers through a question gun owners have debated for decades: does federal law actually forbid the registration of post-May 19, 1986 machine guns for ordinary Americans—or did ATF “fill in the blanks” with regulation and judicial deference that no longer holds up?

This is a lawyer-to-lawyer conversation about statutory text, agency overreach, and the post-Chevron legal landscape—plus a developing strategy in places like West Virginia and Kentucky that could force a clean test of ATF’s long-standing interpretation.

Below is what Halbrook and Smith argued, why it matters, and what gun owners should understand before the “legalize machine guns” headlines run away with the story.

The core fight: what 18 U.S.C. § 922(o) says vs. what ATF does

The so-called Hughes Amendment lives at 18 U.S.C. § 922(o). The key structure is simple:

(o)(1): “Except as provided in paragraph (2), it shall be unlawful for any person to transfer or possess a machinegun.”
(o)(2)(A) then carves out an exception for “a transfer to or by, or possession by or under the authority of, the United States… or a State… or political subdivision thereof.”
(o)(2)(B) preserves lawful possession of machine guns lawfully possessed before the effective date.

Smith’s argument, echoed by Halbrook’s earlier litigation history, is that the statutory phrase “under the authority of” reads like permission/authorization, not “for the benefit of government” or “government use only.”

That distinction matters because ATF’s implementing regulation took a very different path.

The regulation that changed everything: “for the benefit of government”

ATF’s machine gun regulation, 27 C.F.R. § 479.105, is where the “government use” concept becomes explicit. It states that applications to make/register machine guns after May 19, 1986 will be approved only when made “for the benefit of” a federal/state/local governmental entity, backed by specific information and (in practice) a government request/on-behalf-of showing.

Smith and Halbrook argue this is the pivot point: the statute’s text doesn’t contain “for the benefit of government,” yet the regulation effectively adds it. In their telling, that add-on hardened into “common knowledge” because courts spent decades deferring to agency interpretation.

Which brings us to the big modern change.

The post-Chevron landscape: Loper Bright removes judicial deference

Halbrook points to the Supreme Court’s 2024 decision in Loper Bright Enterprises v. Raimondo, which overruled the Chevron doctrine that frequently pushed courts to defer to agencies on ambiguous statutes.

Their thesis: if ATF’s position became entrenched largely through deference-era judging, that foundation is weaker now. Courts are supposed to decide the best reading of the statute themselves—not default to “ATF says so.”

That doesn’t automatically mean gun owners win. But it does mean older “we defer to ATF” opinions aren’t the trump card they once were, especially if a case tees up the statutory language cleanly.

Halbrook’s front-row history lesson: the Hughes Amendment’s messy birth

Halbrook describes watching the 1986 House debate where Rep. William Hughes introduced the machine gun amendment late in the process, amid chaos, and it was adopted without the kind of clean, deliberate record you’d expect for a ban this sweeping. (That political history doesn’t override the statutory text—but it matters when courts look for clarity.)

He also notes that the ban took effect after a delay, during which manufacturers produced and registered machine guns before the cutoff, a well-known quirk of how the “registry freeze” era began.

The case that shaped the modern status quo: Farmer v. Higgins

Halbrook recounts his early challenge involving a would-be maker application denied after Hughes. The dispute is closely associated with Farmer v. Higgins in the Eleventh Circuit, which rejected the district court’s more permissive reading and sided with ATF’s position.

Smith’s point is blunt: Farmer became a “leapfrog precedent”—one circuit cites another, and soon the ATF interpretation is treated as settled law without fresh analysis.

Halbrook agrees that this is a recurring disease in gun jurisprudence: once a court writes “government wins,” other courts copy-paste.

The Commerce Clause pressure point: Lopez and Alito’s Rybar dissent

A second major thread in the video is constitutional: even if ATF’s reading stands, does § 922(o) have a solid Article I hook?

Halbrook highlights the Supreme Court’s Commerce Clause decision in United States v. Lopez (1995), which struck down the Gun-Free School Zones Act because it criminalized mere possession without a sufficient commerce nexus.

Smith then ties that logic to machine guns. In United States v. Rybar (3d Cir. 1996), then-Judge Samuel Alito dissented, calling § 922(o) the “closest” relative to the law struck in Lopez and arguing Congress hadn’t shown the required substantial effect on interstate commerce.

You don’t have to accept every step of their reasoning to see the strategic value: if a court rejects the “under the authority of” statutory argument, the fallback becomes a renewed constitutional attack—Commerce Clause and, in today’s environment, likely Second Amendment arguments as well.

States’ “permission” strategy: why West Virginia and Kentucky are being watched

The practical plan discussed is not “buy a machine gun tomorrow.” It’s a litigation-minded approach:

  • A state sets up a program where a state entity (often discussed as a division within state police) acquires/holds machine guns.
  • The state then authorizes transfers/possession under state authority, with a process for qualified citizens.
  • Applicants file the relevant federal paperwork, and if ATF denies on the “government use only” theory, that denial becomes the injury for a direct legal challenge.

Halbrook’s point is tactical: clean plaintiffs and clean facts matter. Civil litigation with ordinary, law-abiding citizens is very different from a criminal appeal with ugly fact patterns.

What should gun owners take away?

1) The statutory text really does contain a government/State carveout. The words “under the authority of” are there, and they do work in other legal contexts.
2) ATF’s regulation explicitly adds a “for the benefit of government” framework. That’s the gap the video targets.
3) The legal environment changed after Loper Bright. Agency deference is no longer the automatic shield it once was.
4) There are two lanes of attack—statutory and constitutional. Lopez and Alito’s Rybar dissent show why some lawyers think § 922(o) is vulnerable even apart from ATF’s interpretation.
5) None of this is “done.” Even a strong legal theory has to survive hostile circuits, political pressure, and a federal bureaucracy that has spent nearly 40 years treating the registry freeze as untouchable.

Halbrook and Smith are making a provocative—but legally literate—argument: the post-’86 machine gun ban as enforced today may rest on an ATF gloss that goes beyond Congress’s words, preserved for decades by judicial deference that’s now been repudiated.

If West Virginia/Kentucky (or another state) can tee up a clean denial case, it could force courts to answer the question they’ve dodged for a generation: does “under the authority of a State” mean what normal English says it means or what ATF wrote into a regulation?

And if courts won’t take the statutory off-ramp, the constitutional cliff edge—Commerce Clause and Second Amendment—still looms.

Idaho Introduces Bill to Legalize Machine Guns If Federal Ban Falls

Kentucky HB 749 Follows West Virginia in Expanding Citizens’ Access to Modern Machine Guns


AmmoLand Shooting Sports News

Before the Index, Before the Schema: MySQL Makes Three Promises

https://rendiment.io/assets/img/gallery/what-databases-do-hero.png

Most people, when asked what a database does, say something like: “it stores data.”

That’s like saying a restaurant “stores food.”

Technically true. Completely misses the point.

A restaurant has to cook fast, serve many tables at once, and not poison anyone. Fail any one of those three and it doesn’t matter how good the kitchen looks. A database has the same problem — except the stakes are your production system at 2am.

A few years ago I gave a talk at Percona Live in Denver where I tried to answer this properly. Not from a features list. Not from a vendor slide deck. From first principles: what does a database have to do?

Three things. Everything else — every configuration parameter, every architecture decision, every incident you’ve ever fought — falls into one of them.


Execute Queries

MySQL query execution flow through InnoDB — buffer pool, redo log, and doublewrite buffer working together

A restaurant has one core job: take an order and bring food to the table. Fast, correct, and for as many tables as possible simultaneously.

A database has the same job. Answer questions about data. Record changes. As fast as possible, as many as possible, without corrupting anything in the process.

That last part is the one that gets sacrificed first when you’re optimizing for speed. InnoDB’s entire machinery — the buffer pool, the redo log, the doublewrite buffer — exists to make sure “fast” and “correct” happen at the same time. ACID isn’t a marketing term. It’s the contract the database makes with every query it executes.

The tension is real. Disabling foreign_key_checks before a bulk load makes the operation faster. It also removes a correctness guarantee while it’s disabled. That tradeoff isn’t inherently wrong — but you can only make it deliberately if you understand what you’re trading. If you’re curious about the hidden consequences of foreign keys, I covered one particularly dangerous scenario in the ON DELETE CASCADE blind spot in MySQL’s binary log.
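A sketch of the pattern (table and file names are placeholders); note that rows loaded while the check is off are never re-validated when you turn it back on:

SET foreign_key_checks = 0;  -- faster: skip FK validation during the load
LOAD DATA INFILE '/tmp/orders.csv' INTO TABLE orders;
SET foreign_key_checks = 1;  -- guarantee restored, but loaded rows were never checked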

When a query is slow, the reflex is to reach for indexes. Sometimes that’s right. But a query can also be slow because lock contention is serializing execution, because the working set stopped fitting in the buffer pool, or because something upstream is flooding the connection pool. Same symptom, completely different root causes, completely different solutions. Knowing the responsibility narrows the search. Understanding InnoDB semaphore contention is one way to tell lock contention apart from other causes.


Relationships

Database relationships — users, replicas, and dev/ops teams all depend on the database

No database is an island.

Think of it like a person who has three very different kinds of relationships in their life — and does a bad job with any one of them at their own peril.

With users, the relationship is trust and boundaries. Who gets in, what they can see, what they can touch. MySQL’s account model — hosts, privileges, roles — is the entire machinery for this. When someone asks why the application can’t just run as root, this is why. The database has a responsibility to protect data from people and systems that shouldn’t have it. That responsibility doesn’t disappear because setting it up is inconvenient.

With other databases, the relationship is coordination. A replica trusts that the primary is sending it a faithful copy of reality. A PXC node trusts that the other nodes in the cluster will agree on the same writes. When wsrep_local_recv_queue starts climbing, the cluster is telling you a relationship is under stress — one node can’t keep up with what the others are sending. It’s a relationship problem before it’s a performance problem. Treating it as a performance problem first is how you end up chasing the wrong metric.
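You can watch that stress point directly; the queue depth is exposed as a status variable:

SHOW GLOBAL STATUS LIKE 'wsrep_local_recv_queue%';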

With dev and ops teams, the relationship is communication. Logs, status variables, Performance Schema — this is how the database talks. When you skip configuring the slow query log because it adds overhead, you’re choosing silence. You’ll regret that choice during the next incident, when you’re flying blind trying to reconstruct what happened. Tools like PMM Query Analytics exist precisely to bridge this communication gap.
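Turning that channel on is a couple of lines of configuration (the threshold here is illustrative):

[mysqld]
slow_query_log = ON
slow_query_log_file = /var/log/mysql/slow.log
long_query_time = 1  # seconds; tune to your latency budget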

A database that executes queries correctly but can’t communicate its state, can’t cooperate with peers, and can’t enforce who has access — is a ticking clock.


Survive

Database survival — CPU, memory, and disk as physical constraints the database must negotiate

This is the one nobody talks about at conferences, and it’s the one that kills you.

A database doesn’t run in the cloud. It runs on a machine. A machine with a CPU that can be saturated, memory that can be exhausted, and a disk that fills up and then — not slowly degrades, but stops. Full disk doesn’t slow MySQL down. It stops it cold.

Think of it like a tenant who has to know the rules of the building they live in. The landlord — the OS — controls memory allocation, file descriptors, I/O scheduling. The tenant can push their luck, but only so far before the landlord intervenes. An OOM kill at 3am is the landlord evicting a tenant who was using more than their share.

innodb_buffer_pool_size is the most important negotiation a MySQL server has with its host machine. Too low and you’re leaving performance on the table. Too high on a box running other processes and you’re gambling that the OS won’t reclaim that memory mid-write. That configuration parameter isn’t a performance knob. It’s a survival decision.
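The same negotiation in my.cnf terms; the sizing below is a common rule of thumb for dedicated database hosts, not a law:

[mysqld]
# 32G host dedicated to MySQL: leave headroom for the OS,
# connection buffers, and everything else that isn't the buffer pool.
innodb_buffer_pool_size = 24G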

Disk is more insidious. A table that grows 100MB per day doesn’t look dangerous today. In six months it’s 18GB. The database won’t warn you. It will just stop one day. The monitoring that watches disk growth trends and alerts before the cliff — that’s not operational overhead. That’s the database fulfilling its responsibility to survive the physical world it lives in. Setting up smart alerting with dynamic thresholds is how you catch these slow-moving threats.

Backups live here too. A database that can’t be recovered after a failure didn’t survive. Full stop.


Why This Framework Matters

The three promises as a diagnostic framework — locating a problem before solving it

These three categories won’t tell you how to fix anything. They’re not a checklist. What they give you is a way to locate a problem before you start solving it — and that matters more than most people admit.

Replica falling behind? Three possible zip codes:

  • Execute Queries — the primary is running queries so heavy that the replica can’t replay them fast enough
  • Relationships — the network between primary and replica can’t carry the replication stream
  • Survive — the replica’s disk I/O is the bottleneck

Same symptom. Three completely different tools. If you go straight to tuning queries when the real problem is disk throughput on the replica, you will waste hours.

The framework doesn’t solve the problem. It tells you which drawer to open first.

Every decision you make as a DBA is in service of one of these three things. Execute queries correctly and fast. Manage relationships with users, peers, and teams. Survive the physical constraints of the machine it runs on.

That’s the whole job.


I first presented this framework at Percona Live in Denver. The talk was aimed at DBAs, but I’ve always believed that database fundamentals should be explainable to anyone — and that explaining them clearly forces a deeper understanding than talking only to specialists.

Planet for the MySQL Community

Ward: A Security Scanner for Laravel

https://picperf.io/https://laravelnews.s3.amazonaws.com/featured-images/Ward-2-LN.png

Ward, created by El Jakani Yassine, is a command-line security scanner written in Go and designed around Laravel’s structure. Rather than running generic pattern matching across your codebase, it first parses your project’s structure—routes, models, controllers, middleware, Blade templates, config files, environment variables, and dependencies—then runs targeted checks against that context.

Installation

Ward is distributed as a Go binary, so you’ll need Go installed; then you can run:

go install github.com/eljakani/ward@latest

# Make sure $GOPATH/bin is in your PATH
export PATH="$PATH:$(go env GOPATH)/bin"

After installing, run ward init to create ~/.ward/ with a default config file, 42+ built-in rules organized by category, and directories for reports and scan history.

Scanning a Project

Point Ward at a local directory or a remote Git repository:

# Local project
ward scan /path/to/laravel-project

# Remote repository (shallow cloned)
ward scan https://github.com/user/laravel-project.git

When run in a terminal, Ward displays a TUI. A scan view shows pipeline progress and live severity counts as scanners run. Once complete, a results view presents a sortable findings table with severity badges, category grouping, and a detail panel showing descriptions, code snippets, and remediation guidance.

A screenshot of the Ward TUI

What It Checks

Ward ships with four independent scan engines:

  • env-scanner runs 8 checks against your .env file, including debug mode enabled in production, missing or weak APP_KEY, and secrets leaked in .env.example.
  • config-scanner runs 13 checks across your config/*.php files, covering hardcoded credentials, insecure session flags, CORS wildcard origins, and missing security options.
  • dependency-scanner queries the OSV.dev advisory database in real time against your composer.lock to find vulnerable Packagist packages. Because it queries live data rather than a bundled list, it reflects current advisories rather than whatever was current at the tool’s last release.
  • rules-scanner applies 42 rules across 7 categories: secrets (hardcoded passwords, API keys, AWS credentials), injection (SQL, command, eval), XSS (unescaped Blade output, JavaScript injection), debug artifacts (dd(), dump(), phpinfo()), weak cryptography (md5, sha1, insecure RNG), configuration issues (CORS, CSRF, mass assignment), and authentication gaps (missing middleware, absent rate limiting).

Output Formats

Configure output formats in ~/.ward/config.yaml:

output:
  formats: [json, sarif, html, markdown]
  dir: ./reports

  • JSON — machine-readable results
  • SARIF — compatible with GitHub Code Scanning and IDE integrations
  • HTML — standalone dark-themed visual report
  • Markdown — suitable for PR comments

CI/CD Integration

Ward returns non-zero exit codes when findings meet or exceed a specified severity, making it straightforward to gate deployments:

ward scan . --output json --fail-on high

A GitHub Actions example from the project’s documentation:

name: Ward Security Scan

on: [push, pull_request]

jobs:
  security-scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-go@v5
        with:
          go-version: '1.24'
      - name: Install Ward
        run: go install github.com/eljakani/ward@latest
      - name: Run Ward
        run: ward init && ward scan . --output json
      - name: Upload SARIF
        if: always()
        uses: github/codeql-action/upload-sarif@v3
        with:
          sarif_file: ward-report.sarif

Baseline Management

For teams that want to acknowledge existing findings without suppressing future ones, Ward supports a baseline workflow:

# Capture current state
ward scan . --output json --update-baseline .ward-baseline.json

# On subsequent runs, suppress known findings and fail only on new ones
ward scan . --output json --baseline .ward-baseline.json --fail-on high

Committing .ward-baseline.json to your repository lets the team track which findings have been acknowledged and catch regressions in CI.

Custom Rules

Drop .yaml files into ~/.ward/rules/ to define additional checks. Rules support regex or substring patterns, file-existence checks, and negative patterns that fire when something is absent—for example, flagging routes that lack @csrf. You can target PHP files, Blade templates, config files, environment files, routes, migrations, or JavaScript files.

rules:
  - id: TEAM-001
    title: "Hardcoded internal service URL"
    severity: medium
    patterns:
      - type: regex
        target: php-files
        pattern: 'https?://internal-service\.\w+'

Individual built-in rules can also be disabled or have their severity overridden in config.yaml without touching the rule files themselves.

Scan History

Ward saves each scan result to ~/.ward/store/, and on subsequent runs it surfaces a diff against the previous scan—for example, "2 new, 3 resolved (12→11)"—so you can track how your security posture changes over time.

Visit Eljakani/ward on GitHub to browse the source and get started.

Laravel News

Workflow 3.0

https://rodolfoberrios.com/org/chevere/packages/workflow/workflow-social.png

After three years of development and extensive production use, Workflow 3.0 brings significant improvements to building multi-step procedures in PHP. This release focuses on simplifying asynchronous execution, improving developer experience, and adding essential resilience features.

# Dependency injection

Version 3.0 introduces container support for injecting dependencies into jobs at runtime:
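A rough sketch of the shape this can take; the withContainer() method name and the UserFetch action are assumptions, not confirmed 3.0 API:

use function Chevere\Workflow\run;
use function Chevere\Workflow\sync;
use function Chevere\Workflow\variable;
use function Chevere\Workflow\workflow;

// $container is any PSR-11 container holding your services.
$workflow = workflow(
    fetch: sync(new UserFetch(), id: variable('id')),
);

$run = run($workflow->withContainer($container), id: 123);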

This enables workflows to remain stateless while accessing services like databases or HTTP clients through the container.

# Callable support

Version 3.0 accepts any PHP callable as a job, providing flexibility in how you define workflow steps:
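For example, an inline closure as a job (assuming the workflow()/sync() helpers and named-argument wiring carry over from earlier releases):

use function Chevere\Workflow\sync;
use function Chevere\Workflow\variable;
use function Chevere\Workflow\workflow;

// A single-use transformation with no Action class boilerplate.
$workflow = workflow(
    normalize: sync(
        fn (string $email): string => strtolower(trim($email)),
        email: variable('email'),
    ),
);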

This eliminates boilerplate for simple operations while maintaining support for Action classes when business logic requires full class structure. Callables enable inline data transformation without requiring dedicated action classes for single-use operations.

# Response property access

Version 3.0 extends response() to access public object properties directly, not just array keys:
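A sketch, assuming a user job whose action (UserFind, illustrative) returns an object with a public name property:

use function Chevere\Workflow\response;
use function Chevere\Workflow\sync;
use function Chevere\Workflow\variable;
use function Chevere\Workflow\workflow;

$workflow = workflow(
    user: sync(new UserFind(), id: variable('id')),
    greet: sync(
        // response('user', 'name') reads the public property `name`
        // the same way it would read an array key.
        fn (string $name): string => "Hello, {$name}!",
        name: response('user', 'name'),
    ),
);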

This works transparently with both arrays and objects, allowing actions to return domain objects without requiring array conversion. The workflow engine inspects the response and accesses properties or array keys accordingly.

# Retry policies

Transient failures in distributed systems are inevitable. Workflow 3.0 implements configurable retry policies:
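A hypothetical shape; the withRetry() method name and its parameters are assumptions based on the release notes, not documented API:

use function Chevere\Workflow\async;
use function Chevere\Workflow\variable;
use function Chevere\Workflow\workflow;

$workflow = workflow(
    // Re-run the upload on transient failure, backing off between attempts.
    upload: async(new FileUpload(), file: variable('file'))
        ->withRetry(maxAttempts: 3, backoffMs: 250),
);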

Retry policies are essential for handling transient failures in distributed systems, where network operations and external services may temporarily fail but succeed on subsequent attempts.

# True async execution

The parallel runner has been replaced with a true async implementation using AMPHP. This provides non-blocking execution without the overhead of process forking, leveraging PHP 8.1+ Fibers for efficient multitasking.

Independent jobs execute concurrently while the engine manages the resolution of the dependency graph. This follows an asynchronous, task-based execution model: the scheduler unrolls the graph and executes nodes as soon as their data dependencies (like response()) are satisfied. This shift significantly reduces memory footprint compared to the previous process-based model while maintaining strict execution order.

# Conditional execution

Version 3.0 adds withRunIfNot() for cleaner conditional logic:
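For example, skipping a job when a prior check comes back true (the job and action names are illustrative):

use function Chevere\Workflow\response;
use function Chevere\Workflow\sync;
use function Chevere\Workflow\workflow;

$workflow = workflow(
    maintenance: sync(new MaintenanceCheck()),
    // deploy runs only when the maintenance check returns false.
    deploy: sync(new Deploy())
        ->withRunIfNot(response('maintenance')),
);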

This complements withRunIf() and accepts boolean literals, variables, job responses, and callables. Conditional execution enables branching without complex orchestration logic.

# Type safety

Integration with chevere/parameter 2.0 provides runtime validation:
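A sketch of a typed action following chevere/action conventions; the class is illustrative and the return() signature is assumed from earlier releases:

use Chevere\Action\Action;
use Chevere\Parameter\Interfaces\ParameterInterface;
use function Chevere\Parameter\string;

final class Slugify extends Action
{
    // The workflow validates this return shape at runtime before
    // any dependent job consumes it.
    public static function return(): ParameterInterface
    {
        return string(); // optionally constrained, e.g. with a regex
    }

    protected function main(string $title): string
    {
        return trim(strtolower(preg_replace('/[^A-Za-z0-9]+/', '-', $title)), '-');
    }
}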

Workflow validates inputs before job execution and verifies response types match expected parameters in dependent jobs. This eliminates a class of runtime errors that would otherwise require extensive testing.

# Practical example

Here’s a complete workflow for processing user uploads:
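A sketch matching that description; the action classes and the withRetry()/withRunIf() shapes are illustrative:

use function Chevere\Workflow\async;
use function Chevere\Workflow\response;
use function Chevere\Workflow\run;
use function Chevere\Workflow\sync;
use function Chevere\Workflow\variable;
use function Chevere\Workflow\workflow;

$workflow = workflow(
    validate: sync(new UploadValidate(), file: variable('file')),
    // resize and optimize run concurrently, and only if validation passed.
    resize: async(new ImageResize(), file: variable('file'))
        ->withRunIf(response('validate'))
        ->withRetry(maxAttempts: 3),
    optimize: async(new ImageOptimize(), file: variable('file'))
        ->withRunIf(response('validate')),
    store: sync(
        new FileStore(),
        resized: response('resize'),
        optimized: response('optimize'),
    ),
);

$run = run($workflow, file: '/uploads/avatar.png');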

This workflow validates the file, resizes and optimizes it in parallel, then stores the result. The resize job retries on failure, and both processing jobs only run if validation succeeds.

# Migration notes

The parallel runner removal is the only breaking change. Applications using parallel execution should switch to async jobs with appropriate dependency declarations. The async runner provides better performance and simpler semantics.

# Conclusion

Workflow 3.0 represents three years of production refinement. The addition of container support, callables, retry policies, and true async execution address real-world requirements while maintaining the declarative approach that makes workflows maintainable.

The library continues following established patterns from workflow research and distributed systems literature. Each job remains independently testable, workflows stay declarative, and the dependency graph handles execution ordering automatically.

For complete documentation and examples, visit chevere.org/packages/workflow.

Laravel News Links

Hardening MySQL: Practical Security Strategies for DBAs

https://percona.community/blog/2026/03/mysql-security.png

MySQL Security Best Practices: A Practical Guide for Locking Down Your Database

Introduction

MySQL runs just about everywhere. I’ve seen it behind small personal projects, internal tools, SaaS platforms, and large enterprise systems handling serious transaction volume. When your database sits at the center of everything, it becomes part of your security perimeter whether you planned it that way or not. And that makes it a target.

Securing MySQL isn’t about flipping one magical setting and calling it done. It’s about layers. Tight access control. Encrypted connections. Clear visibility into what’s happening on the server. And operational discipline that doesn’t drift over time.

In this guide, I’m going to walk through practical MySQL security best practices that you can apply right away. These are the kinds of checks and hardening steps that reduce real risk in real environments, and help build a database platform that stays resilient under pressure.


1. Principle of Least Privilege

One of the most common security mistakes is over-granting privileges. Applications and users should have only the permissions they absolutely need.

Bad Practice

sql

GRANT ALL PRIVILEGES ON *.* TO 'appuser'@'10.%';

Better Approach

sql

GRANT SELECT, INSERT, UPDATE ON appdb.* TO 'appuser'@'10.%';

Recommendations

  • Avoid global privileges unless absolutely required
  • Restrict users by host whenever possible
  • Separate admin accounts from application accounts
  • Use different credentials for read-only vs write operations

Audit Existing Privileges

sql

SELECT user, host, Select_priv, Insert_priv, Update_priv, Delete_priv
FROM mysql.user;

2. Strong Authentication & Password Policies

Weak credentials remain one of the easiest attack vectors.

Enable Password Validation

component_validate_password is MySQL’s modern password policy engine. Think of it as a gatekeeper for credential quality. Every time someone tries to set or change a password, it checks whether that password meets your defined security standards before letting it in.

It replaces the older validate_password plugin with a component-based architecture that is more flexible and better aligned with MySQL 8.x design.

sql

INSTALL COMPONENT 'file://component_validate_password';

What It Does

When enabled, it enforces rules such as:

  • Minimum password length
  • Required mix of character types
  • Dictionary file checks
  • Strength scoring

If a password fails policy, the statement is rejected before the credential is stored.

Why It Matters

Weak passwords remain one of the most common entry points in database breaches. This component reduces risk by enforcing baseline credential hygiene automatically, instead of relying on developer discipline.

Recommended Policies

  • Minimum length: 14+ characters
  • Require mixed case, numbers, and symbols
  • Enable dictionary checks
  • Enable username checks
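One way to apply these policies via the component’s system variables (the values and dictionary path are illustrative):

sql

SET PERSIST validate_password.length = 14;
SET PERSIST validate_password.mixed_case_count = 1;
SET PERSIST validate_password.number_count = 1;
SET PERSIST validate_password.special_char_count = 1;
SET PERSIST validate_password.policy = 'STRONG';
SET PERSIST validate_password.dictionary_file = '/path/to/dictionary.txt';
SET PERSIST validate_password.check_user_name = ON;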

Remove Anonymous Accounts

Find Anonymous Users

Anonymous users have an empty User field.

sql

SELECT user, host FROM mysql.user WHERE user='';

If you see rows returned, those are anonymous accounts.

Drop Anonymous Users

In modern MySQL versions:

sql

DROP USER ''@'localhost';
DROP USER ''@'%';

Adjust the Host value based on what your query returned.

Why This Matters

Anonymous users:

  • Allow login without credentials
  • May have default privileges in some distributions
  • Increase the attack surface unnecessarily

In hardened environments, there should be zero accounts with an empty username. Every identity should be explicit, accountable, and least-privileged.

3. Encryption Everywhere

Encryption protects data both in transit and at rest.

Enable Transparent Data Encryption (TDE)

See my January 13 post for a deep dive into Transparent Data Encryption:
Configuring the Component Keyring in Percona Server and PXC 8.4

Enable TLS for Connections

ini

require_secure_transport=ON

Verify SSL Usage

sql

SHOW STATUS LIKE 'Ssl_cipher';

Encryption Areas to Consider

  • Client-server connections
  • Replication channels
  • Backups and snapshot storage
  • Disk-level encryption

4. Patch Management & Version Hygiene

Running outdated MySQL versions is equivalent to leaving known vulnerabilities exposed.

Maintenance Strategy

  • Track vendor security advisories
  • Apply minor updates regularly
  • Test patches in staging before production rollout
  • Avoid unsupported MySQL versions

Check Version

5. Logging, Auditing, and Monitoring

Security without visibility is blind defense. Enable audit logging.

1. audit_log Plugin (Legacy Model)

Installation

sql

INSTALL PLUGIN audit_log SONAME 'audit_log.so';

Verify

sql

SHOW PLUGINS LIKE 'audit%';

2. audit_log_filter Component (Modern Model)

Introduced in MySQL 8 to provide a more flexible and granular alternative to the older plugin model.

Installation

sql

INSTALL COMPONENT 'file://component_audit_log_filter';

Verify

sql

SELECT * FROM mysql.component;

Architecture Difference

Instead of a single global policy, you create:

  • Filters (define what to log)
  • Users assigned to filters

It’s granular and rule-driven.

Auditing Key Events

  • Failed logins
  • Privilege changes
  • Schema modifications
  • Unusual query activity

References:

  1. Audit Log Filter Component
  2. Audit Log Filters Part II

Useful Metrics

sql

SHOW GLOBAL STATUS LIKE 'Aborted_connects';
SHOW GLOBAL STATUS LIKE 'Connections';

6. Secure Configuration Hardening

A secure baseline configuration reduces risk from common attack patterns.

Recommended Settings

ini

local_infile=OFF
secure_file_priv=/var/lib/mysql-files
sql_mode="STRICT_ALL_TABLES"
secure-log-path=/var/log/mysql

Why These Matter

  • Prevent arbitrary file imports
  • Reduce filesystem abuse
  • Restrict data export/import locations

7. Backup Security

Backups often contain everything an attacker wants.

Backup Best Practices

  • Encrypt backups
  • Restrict filesystem permissions
  • Store offsite copies securely
  • Rotate backup credentials
  • Verify restore procedures regularly

Example Permission Check
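
A minimal sketch (the backup path is illustrative):

# Backups should be owned by the backup user and unreadable to everyone else
ls -l /var/backups/mysql/
chmod 600 /var/backups/mysql/*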

8. Replication & Cluster Security

Replication is not just a data distribution feature. It is a persistent, privileged communication channel between servers. If misconfigured, it can become a lateral movement pathway inside your infrastructure. Treat every replication link as a trusted but tightly controlled corridor.

Principle: Replication Is a Privileged Service Account

Replication users require elevated capabilities. They must be isolated, tightly scoped, and monitored like any other service identity.

Secure Replication Users

sql

CREATE USER 'repl'@'10.%'
 IDENTIFIED BY 'strongpassword'
 REQUIRE SSL;

GRANT REPLICATION REPLICA ON *.* TO 'repl'@'10.%';

Hardening considerations:

  • Restrict host patterns as narrowly as possible. Avoid % whenever feasible.
  • Require SSL or X.509 certificate authentication.
  • Enforce strong password policies or use a secrets manager.
  • Disable interactive login capability if applicable.

Encrypt Replication Traffic

Replication traffic may include sensitive row data, DDL statements, and metadata. Always encrypt it.

At minimum:

  • Enable require_secure_transport=ON
  • Configure TLS certificates on source and replica
  • Set replication channel to use SSL:

sql

CHANGE REPLICATION SOURCE TO
 SOURCE_SSL=1,
 SOURCE_SSL_CA='/path/ca.pem',
 SOURCE_SSL_CERT='/path/client-cert.pem',
 SOURCE_SSL_KEY='/path/client-key.pem';

For MySQL Group Replication or InnoDB Cluster:

  • Enable group communication SSL
  • Validate certificate identity
  • Use dedicated replication networks

Binary Log and Relay Log Protection

Replication relies on binary logs. Protect them.

  • Set binlog_encryption=ON
  • Set relay_log_info_repository=TABLE
  • Restrict filesystem access to log directories
  • Monitor log retention policies

Compromised binary logs can reveal historical data changes.

9. Continuous Security Reviews

Security is not a one-time checklist. Regular audits help catch configuration drift and evolving threats.

Suggested Review Cadence

  • Weekly: failed login review
  • Monthly: privilege audits
  • Quarterly: configuration review
  • Semiannually: full security assessment

Security Checklist Summary

| Area | Key Action |
| --- | --- |
| Access Control | Least privilege grants |
| Authentication | Strong password policies |
| Encryption | TLS + encrypted storage |
| Updates | Regular patching |
| Monitoring | Audit logging enabled |
| Configuration | Harden defaults |
| Backups | Encrypt and protect |
| Replication | Secure replication users |

Final Thoughts

Strong MySQL security doesn’t come from one feature or one tool. It comes from layers working together. Hardened configuration. Tight, intentional privilege design. Encryption everywhere it makes sense. And monitoring that actually gets reviewed instead of just written to disk.

In my experience, the strongest environments aren’t the ones trying to be unbreakable. They’re the ones built to detect, contain, and respond. Every layer should either reduce blast radius or increase visibility. If an attacker gets through one control, the next one slows them down. And while they’re slowing down, your logging and monitoring should already be telling you something isn’t right.

That’s what a mature security posture looks like in practice.

Planet for the MySQL Community

How We Brought a Dead MySQL InnoDB Cluster Back to Life

A war story: complete outage, GTID chaos, duplicate UUIDs, and the steps that finally worked

There’s a particular kind of dread that comes with staring at a database cluster where every node shows OFFLINE.

No reads. No writes. Just silence where your production data used to be.

That’s exactly where we found ourselves with a MySQL InnoDB Cluster — three nodes, all down, all stubbornly…

Planet for the MySQL Community

A better way to crawl websites with PHP

https://freek.dev/og-image/ce502835e5baaaa1251b6fb59c110536.jpeg

Our spatie/crawler package is one of the first packages I created. It allows you to crawl a website with PHP. It is used extensively in Oh Dear and in our laravel-sitemap package.

Throughout the years, the API had accumulated some rough edges. With v9, we cleaned all of that up and added a bunch of features we’ve wanted for a long time.

Let me walk you through all of it!

Using the crawler

The simplest way to crawl a site is to pass a URL to Crawler::create() and attach a callback via onCrawled():

use Spatie\Crawler\Crawler;
use Spatie\Crawler\CrawlResponse;

Crawler::create('https://example.com')
    ->onCrawled(function (string $url, CrawlResponse $response) {
        echo "{$url}: {$response->status()}\n";
    })
    ->start();

The callable gets a CrawlResponse object. It has these methods:

$response->status();        // HTTP status code
$response->body();          // response body as a string
$response->header('some-header');  // value of a single header
$response->dom();           // the parsed DOM
$response->isSuccessful();  // status in the 2xx range
$response->isRedirect();    // status in the 3xx range
$response->foundOnUrl();    // the URL where this link was discovered
$response->linkText();      // the anchor text of the discovered link
$response->depth();         // crawl depth relative to the starting URL

The body is cached, so calling body() multiple times won’t re-read the stream. And if you still need the raw PSR-7 response for some reason, toPsrResponse() has you covered.

You can control how many URLs are fetched at the same time with concurrency(), and set a hard cap with limit():

Crawler::create('https://example.com')
    ->concurrency(5)
    ->limit(200) 
    ->onCrawled(function (string $url, CrawlResponse $response) {
        
    })
    ->start();

There are a couple of other on… callbacks you can use:

Crawler::create('https://example.com')
    ->onCrawled(function (string $url, CrawlResponse $response, CrawlProgress $progress) {
        echo "[{$progress->urlsProcessed}/{$progress->urlsFound}] {$url}\n";
    })
    ->onFailed(function (string $url, RequestException $e, CrawlProgress $progress) {
        echo "Failed: {$url}\n";
    })
    ->onFinished(function (FinishReason $reason, CrawlProgress $progress) {
        echo "Done: {$reason->name}\n";
    })
    ->start();

Every on callback now receives a CrawlProgress object that tells you exactly where you are in the crawl:

$progress->urlsProcessed;  // URLs crawled so far
$progress->urlsFailed;     // URLs that failed
$progress->urlsFound;      // total URLs discovered
$progress->urlsPending;    // URLs still waiting in the queue

The start() method now returns a FinishReason enum, so you know exactly why the crawler stopped:

$reason = Crawler::create('https://example.com')
    ->limit(100)
    ->start();


Each CrawlResponse also carries a TransferStatistics object with detailed timing data for the request:

Crawler::create('https://example.com')
    ->onCrawled(function (string $url, CrawlResponse $response) {
        $stats = $response->transferStats();

        echo "{$url}\n";
        echo " Transfer time: {$stats->transferTimeInMs()}ms\n";
        echo " DNS lookup: {$stats->dnsLookupTimeInMs()}ms\n";
        echo " TLS handshake: {$stats->tlsHandshakeTimeInMs()}ms\n";
        echo " Time to first byte: {$stats->timeToFirstByteInMs()}ms\n";
        echo " Download speed: {$stats->downloadSpeedInBytesPerSecond()} B/s\n";
    })
    ->start();

All timing methods return values in milliseconds. They return null when the stat is unavailable, for example tlsHandshakeTimeInMs() will be null for plain HTTP requests.

Throttling the crawl

I wanted the crawler to be a well-behaved piece of software. Running it at full speed with large concurrency could overload some servers. That’s why throttling is a polished feature of the package.

We ship two throttling strategies. The first is FixedDelayThrottle, which applies a fixed delay between all requests.

$crawler->throttle(new FixedDelayThrottle(200)); // 200 ms between requests

AdaptiveThrottle is a strategy that adjusts the delay based on how fast the server responds. If the server responds quickly, the delay stays near the minimum. If the server slows down, crawling automatically slows down with it.

$crawler->throttle(new AdaptiveThrottle(
    minDelayMs: 50,
    maxDelayMs: 5000,
));

Testing with fake()

Like Laravel’s HTTP client, the crawler now has a fake mode where you define which response should be returned for each URL, without actually making the request.

Crawler::create('https://example.com')
    ->fake([
        'https://example.com' => '<html><a href="/about">About</a></html>',
        'https://example.com/about' => '<html>About page</html>',
    ])
    ->onCrawled(function (string $url, CrawlResponse $response) {
        
    })
    ->start();

Using fakes like this keeps your tests fast.

Driver-based JavaScript rendering

Like in our Laravel PDF, Laravel Screenshot, and Laravel OG Image packages, Browsershot is no longer a hard dependency. JavaScript rendering is now driver-based, so you can use Browsershot, a new Cloudflare renderer, or write your own:

$crawler->executeJavaScript(new CloudflareRenderer($endpoint));

In closing

I’m usually very humble, but I think that in this case I can say that our crawler package is the best crawler available in the entire PHP ecosystem.

You can find the package on GitHub. The full documentation is available on our documentation site.

This is one of the many packages we’ve created at Spatie. If you want to support our open source work, consider picking up one of our paid products.

Laravel News Links

LEGO Builder’s Work Tray

https://theawesomer.com/photos/2026/03/lego_model_making_wood_tray_t.jpg

LEGO Builder’s Work Tray

This large wooden tray provides the ideal work surface for building LEGO models and other construction sets. Measuring 41.3″ wide by 21.6″ deep, it has 11 trays for organizing parts and a spacious work area. A smooth-spinning lazy susan lets you access your creations from all sides. At 12.1 lb., it’s easy to move around and is thin enough to store behind the couch.

The Awesomer

Real Python: Automate Python Data Analysis With YData Profiling

https://files.realpython.com/media/report-overview.c5b7b1fa2ba4.png

The YData Profiling package generates an exploratory data analysis (EDA) report with a few lines of code. The report provides dataset and column-level analysis, including plots and summary statistics to help you quickly understand your dataset. These reports can be exported to HTML or JSON so you can share them with other stakeholders.

By the end of this tutorial, you’ll understand that:

  • YData Profiling generates interactive reports containing EDA results, including summary statistics, visualizations, correlation matrices, and data quality warnings from DataFrames.
  • ProfileReport creates a profile you can save with .to_file() for HTML or JSON export, or display inline with .to_notebook_iframe().
  • Setting tsmode=True and specifying a date column with sortby enables time series analysis, including stationarity tests and seasonality detection.
  • The .compare() method generates side-by-side reports highlighting distribution shifts and statistical differences between datasets.
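
A compact sketch of those last two bullets; the DataFrames, the FlightDate column, and the file names are assumptions:

Python

from ydata_profiling import ProfileReport

# Time series profiling: enable tsmode and sort by the date column.
ts_profile = ProfileReport(df, tsmode=True, sortby="FlightDate")
ts_profile.to_file("ts_report.html")

# Compare two datasets, e.g. two months of flights.
report_a = ProfileReport(df_january, title="January")
report_b = ProfileReport(df_february, title="February")
report_a.compare(report_b).to_file("comparison.html")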

To get the most out of this tutorial, you’ll benefit from having knowledge of pandas.

Note: The examples in this tutorial were tested using Python 3.13. Additionally, you may need to install setuptools<81 for backward compatibility.

You can install this package using pip:

Shell

$ python -m pip install ydata-profiling

Once installed, you’re ready to transform any pandas DataFrame into an interactive report. To follow along, download the example dataset you’ll work with by clicking the link below:

Get Your Code: Click here to download the free sample code and start automating Python data analysis with YData Profiling.

The following example generates a profiling report from the 2024 flight delay dataset and saves it to disk:

Python
flight_report.py

import pandas as pd
from ydata_profiling import ProfileReport

df = pd.read_csv("flight_data_2024_sample.csv")

profile = ProfileReport(df)
profile.to_file("flight_report.html")

This code generates an HTML file containing interactive visualizations, statistical summaries, and data quality warnings:

Dataset overview displaying statistics and variable types. Statistics include 35 variables, 10,000 observations, and 3.2% missing cells. Variable types: 5 categorical, 23 numeric, 1 DateTime, 6 text.

You can open the file in any browser to explore your data’s characteristics without writing additional analysis code.

There are a number of tools available for high-level dataset exploration, but not all are built for the same purpose. The following table highlights a few common options and when each one is a good fit:

| Use case | Pick | Best for |
| --- | --- | --- |
| You want to quickly generate an exploratory report | ydata-profiling | Generating exploratory data analysis reports with visualizations |
| You want an overview of a large dataset | skimpy or df.describe() | Providing fast, lightweight summaries in the console |
| You want to enforce data quality | pandera | Validating schemas and catching errors in data pipelines |

Overall, YData Profiling is best used as an exploratory report creation tool. If you’re looking to generate an overview of a large dataset, using skimpy or a built-in DataFrame method may be more efficient. Other tools, like pandera, are more appropriate for data validation.

If YData Profiling looks like the right choice for your use case, then keep reading to learn about its most important features.

Building a Report With YData Profiling

A YData Profiling report is composed of several sections that summarize different aspects of your dataset. Before customizing a report, it helps to understand the main components it includes and what each one is designed to show.

Read the full article at https://realpython.com/ydata-profiling-eda/ »



Planet Python

Real Python: Quiz: The pandas DataFrame: Make Working With Data Delightful

https://realpython.com/static/real-python-placeholder-3.5082db8a1a4d.jpg

In this quiz, you’ll test your understanding of the
pandas DataFrame.

By working through this quiz, you’ll review how to create pandas DataFrames, access and modify columns, insert and sort data, extract values as NumPy arrays, and how pandas handles missing data.



Planet Python