https://fls-9f826fcc-b2ad-40d8-813f-9cf7dac049fa.laravel.cloud/posts/og-images/01KQFSMY0V9C227XEAWCQ2PDQZ.png
Build a production-ready SaaS the Artisan way with Laravel: Starter Kits, Socialite, Cashier, Pennant, and Laravel Cloud.
Laravel Blog
Stop Burning API Credits! Run AI Locally on Windows with Ollama & Laravel AI SDK
Laravel News Links
Laravel Schema Sentinel: Detect and Fix Database Schema Drift
https://picperf.io/https://laravelnews.s3.amazonaws.com/featured-images/minimal-db-featured.png
Laravel Schema Sentinel by Ahtesham at Broadway Web Service detects schema drift, helping you identify when your actual database no longer matches your migrations. It builds a shadow database from your migrations, diffs it against the live schema, and can generate a corrective migration when the two diverge.
- Deep drift detection — audits tables, columns, data types, nullability, defaults, indexes, and foreign keys
- Cross-environment comparison — verify local migrations match staging or production
- Auto-generated migrations — create corrective migrations with interactive review
- Visual dashboard — built-in Livewire component for database health monitoring
- Pre-migration guard — automatically blocks php artisan migrate if drift is detected
- And more…
Detecting Drift
The core command runs your migrations into a temporary shadow database, then compares that shadow against your live connection:
php artisan schema:drift
It checks tables, columns, data types, nullability, defaults, indexes, and foreign keys. Running with --strict also flags columns or tables present in the live database that have no corresponding migration.
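Conceptually, the shadow-database comparison boils down to diffing two schema snapshots: one built from your migrations, one read from the live connection. A minimal language-agnostic sketch in Python (illustrative only, not the package's code; the table/column maps are made up):

```python
# Toy drift detector: compare expected (migration-derived) column definitions
# against the actual (live) schema, per table. In strict mode, also flag
# live columns that have no corresponding migration.

def diff_schemas(expected, actual, strict=False):
    drift = {"missing": [], "changed": [], "unexpected": []}
    for table, cols in expected.items():
        live = actual.get(table, {})
        for col, definition in cols.items():
            if col not in live:
                drift["missing"].append(f"{table}.{col}")
            elif live[col] != definition:
                drift["changed"].append(f"{table}.{col}")
    if strict:
        for table, cols in actual.items():
            for col in cols:
                if col not in expected.get(table, {}):
                    drift["unexpected"].append(f"{table}.{col}")
    return drift

expected = {"users": {"id": "bigint NOT NULL", "email": "varchar(255) NOT NULL"}}
actual = {"users": {"id": "bigint NOT NULL", "email": "varchar(191) NOT NULL",
                    "legacy_flag": "tinyint NULL"}}

print(diff_schemas(expected, actual, strict=True))
# → {'missing': [], 'changed': ['users.email'], 'unexpected': ['users.legacy_flag']}
```

The real package works against the database engine's information schema rather than hand-built maps, but the strict-mode distinction above mirrors what --strict adds: unexpected live objects, not just missing or changed ones.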
Generating a Fix
When drift is found, you can have Sentinel generate a migration to close the gap:
php artisan schema:drift --fix --interactive
Interactive mode walks you through each detected difference before writing anything. A --sql flag skips file creation and prints the migration code to the terminal for review.
Cross-Environment Comparison
You can point the diff at a different environment’s database connection instead of your local one:
php artisan schema:drift --compare-env=staging
This uses the named connection from your config/database.php, so you can verify your local migrations match what’s deployed to staging or production before running them.
Programmatic API
A Sentinel facade exposes the same diff as a DTO, which you can use in controllers, Livewire components, or admin dashboards:
use Sentinel\SchemaSentinel\Facades\Sentinel;

$diff = Sentinel::check(strict: true);

return response()->json([
    'in_sync' => !$diff->hasDifferences(),
    'drift' => $diff->toArray(),
]);
The package also ships a Blade-embeddable Livewire component for a visual health dashboard, though it only renders in local environments:
<livewire:sentinel-database-health />
The package supports Laravel 11.x through 13.x. You can find Laravel Schema Sentinel on GitHub.
Laravel News
Supreme Court tosses Louisiana’s race-based congressional map
https://media.notthebee.com/articles/69f21a8a7f6d969f21a8a7f6da.jpg
Big news from the frontlines of the redistricting war.
Not the Bee
WATCH: This lady tap-dancing to metal is my new favorite thing
https://media.notthebee.com/articles/69ef8b84b8a2a69ef8b84b8a2b.jpg
I found it, the best thing on the internet right now.
Not the Bee
Notepad++ Finally Lands On macOS as a Native App
BrianFagioli writes: Notepad++ has finally made its way to macOS, and this time it is not through a compatibility layer. A new community-driven port brings the long-standing Windows text editor over as a fully native Mac application, built with Cocoa and compiled for both Apple Silicon and Intel systems. Instead of relying on Wine or similar tools, the project replaces the Windows-specific interface with a macOS-native one while keeping the core editing engine intact, allowing longtime users to retain the same workflow, shortcuts, and overall feel.
The port is independent from the original Notepad++ project but tracks upstream changes closely, with development happening in the open. It is code-signed and notarized, and notably avoids telemetry or ads. Plugin support is being rebuilt for macOS and is still evolving, but the groundwork is in place. While macOS already has several established editors, this effort is aimed squarely at users who want the familiar Notepad++ experience without relearning a new tool. You can download the app here.
Read more of this story at Slashdot.
Slashdot
Raspberry Pi Powered C-3PO Head
https://cdn-blog.adafruit.com/uploads/2026/04/Screenshot-2026-04-24-at-10.00.55-AM.png

Star Wars Day is almost here! What better way to celebrate than a chat with C-3PO? Samuel Potozkin put a lot of work into this build. Beyond the hardware, Potozkin pulled off a ton of prop-building techniques to get a believable bot. The head alone required meticulous finishing of 3D-printed parts to create the metallic finish.
Via Reddit:
I built a C-3PO head and integrated a Raspberry Pi system inside so you can actually talk to it and it responds in real time.
Here’s how it works:
- Audio comes in through a MEMS mic
- The Pi processes the input and generates a response
- Output is played through an internal speaker
I also used an exciter instead of a traditional speaker so the sound comes through the shell instead of a visible driver. This was my first time using a Raspberry Pi for anything, and it took some tweaking. But I’m happy with how it turned out.
Very thorough walkthrough video via YouTube:
Project Repository:
https://github.com/spotozkin/threepio
3D printing – Adafruit Industries – Makers, hackers, artists, designers and engineers!
Parents who want to give their young children smartphones should watch this viral ad before doing so
https://media.notthebee.com/articles/69e260c276c5969e260c276c5a.jpg
This is the kind of ad that makes you stop dead in your tracks — and with good reason.
Not the Bee
Percona Operator for MySQL 1.1.0: PITR, Incremental Backups, and Compression
https://www.percona.com/wp-content/uploads/2026/04/featured-2.png
The latest release of the Percona Operator for MySQL, 1.1.0, is here. It brings point-in-time recovery, incremental backups, zstd backup compression, configurable asynchronous replication retries, and a set of stability fixes. This post walks through the highlights and how they help your MySQL deployments on Kubernetes.
Percona Operator for MySQL 1.1.0

Running stateful databases on Kubernetes means your backup and recovery story has to be airtight. A full nightly backup is fine, until the DBA drops a table at 2 PM and you’re looking at 14 hours of lost work. Or until your storage bill grows faster than your actual data because every backup is a full copy.
Percona Operator for MySQL 1.1.0 addresses exactly these pain points. This release lands point-in-time recovery, incremental backups, and backup compression: three features that together give you finer recovery control, faster backup jobs, and meaningfully smaller storage footprints. It also brings configurable asynchronous replication retries and a set of stability fixes that harden everyday operations.
This is a community-driven release. Nearly every headline feature in 1.1.0 traces back to user feedback: issues raised on forums.percona.com, JIRA tickets filed by operators in production, and recurring questions from teams running MySQL on Kubernetes at scale. The operator is fully open source, runs on any CNCF-conformant Kubernetes distribution (GKE, EKS, OpenShift, or bare metal), and costs nothing to run. Let’s walk through what’s new.
In this post, you’ll learn about:
- Point-in-Time Recovery (Tech Preview)
- Incremental Backups (Tech Preview)
- Backup Compression with zstd
- Asynchronous replication retry configuration
- Other improvements
Point-in-Time Recovery (Tech Preview)
A backup restores your cluster to the moment the backup was taken, but incidents rarely respect your backup schedule. With point-in-time recovery now available in Tech Preview, you can restore your MySQL cluster to any specific timestamp or GTID position, not just to a backup snapshot.
The operator continuously collects binary logs and stores them alongside your full and incremental backups. When a restore is needed, it starts from the nearest full backup, applies incremental backups, and then replays binary logs forward to the exact point in time you specify. PITR works identically across asynchronous and group replication topologies, so you don’t need to restructure your setup to take advantage of it.
A timestamp-based restore targets the exact moment before an incident:
apiVersion: ps.percona.com/v1
kind: PerconaServerMySQLRestore
metadata:
  name: restore-pitr-example
spec:
  clusterName: cluster1
  backupName: backup-20260418
  pitr:
    type: date
    date: "2026-04-18 13:45:00"
    # Restore with GTID instead:
    # type: gtid
    # gtid: a3e5ff70-83e2-11ef-8e57-7a62caf7e1e3:1-36
When you need finer precision than timestamp-based recovery (for example, replaying right up to the transaction immediately before a bad UPDATE), use pitr.type: gtid and specify the exact GTID position.
This is especially useful after an accidental DROP TABLE or a bad application deploy mid-day: you recover to the moment just before the event, not to last night’s snapshot.
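The restore ordering described above (nearest full backup, then incrementals, then binlog replay up to the target) can be pictured with a toy planner. This is a conceptual sketch, not the operator's code, and the integer "timestamps" are made up for clarity:

```python
# Toy PITR planner: pick the newest full backup at or before the target,
# layer the incrementals taken between it and the target, then replay the
# binlog events recorded after the last backup up to the target.

def plan_restore(fulls, incrementals, binlog_events, target):
    base = max((b for b in fulls if b <= target), default=None)
    incs = [i for i in incrementals if base is not None and base < i <= target]
    last = max([base] + incs) if base is not None else None
    replay = [e for e in binlog_events if last is not None and last < e <= target]
    return {"base": base, "incrementals": incs, "replay": replay}

# Integers stand in for timestamps.
plan = plan_restore(fulls=[100, 200],
                    incrementals=[150, 220, 260],
                    binlog_events=[225, 230, 245, 270],
                    target=250)
print(plan)
# → {'base': 200, 'incrementals': [220], 'replay': [225, 230, 245]}
```

Note how the incremental at 150 (older than the chosen base) and everything after the target are excluded: that is the property that lets you land just before a bad transaction.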
See the documentation for the full configuration reference.
Note: PITR is marked Tech Preview in 1.1.0 and is not recommended for production workloads yet. Try it in staging and share your feedback on the community forum.
Incremental Backups (Tech Preview)

Full backups work, but they come with a cost: every job copies your entire dataset, consuming time, I/O, and storage whether or not much has changed since the last run. Incremental backups solve this by capturing only the changes since the previous backup.
The Operator integrates incremental backup support, powered by Percona XtraBackup, across all supported backup storage backends (S3-compatible, GCS, Azure Blob Storage). Both scheduled and on-demand backup jobs can run incrementally. When you trigger a restore, the Operator reconstructs the full state by chaining the base backup with the subsequent incremental sets, so you don’t manage that complexity manually.
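The chaining the Operator does at restore time amounts to applying each incremental delta, in order, on top of the base backup's state. A toy illustration (not XtraBackup's actual page-level mechanics):

```python
# Toy chain reconstruction: each incremental holds only what changed since
# the previous backup; restoring replays the deltas over the base in order.

def restore_chain(base, increments):
    state = dict(base)
    for delta in increments:
        state.update(delta)
    return state

base = {"page1": "A", "page2": "B"}
inc1 = {"page2": "B2"}   # only page2 changed since the full backup
inc2 = {"page3": "C"}    # a page added since inc1
print(restore_chain(base, [inc1, inc2]))
# → {'page1': 'A', 'page2': 'B2', 'page3': 'C'}
```

The order matters: applying inc2 before inc1 would be wrong whenever both touch the same data, which is why the Operator tracks the base-to-incremental lineage for you.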
This helps when you need:
- Faster daily backup jobs on large datasets that change slowly
- Lower storage and egress costs per backup cycle
- Tighter recovery windows without sacrificing backup frequency
- Less I/O pressure on the primary during backup jobs
The backup manifest lives in deploy/backup/backup.yaml. Note the commented type and incrementalBaseBackupName fields: they are exactly how you switch a backup to incremental mode and point it at a previous backup as its base.
apiVersion: ps.percona.com/v1
kind: PerconaServerMySQLBackup
metadata:
  finalizers:
    - percona.com/delete-backup
  name: backup1
spec:
  clusterName: ps-cluster1
  storageName: minio
  type: incremental
Set type: full to take a base backup, then for each subsequent incremental set type: incremental.
Note: Incremental backups are also marked Tech Preview in 1.1.0. You can learn more about this feature in a separate blog post: Incremental backups in Percona Kubernetes Operator for MySQL
Backup Compression with zstd

Even without incremental backups, you can now shrink your full backup size significantly. The operator adds support for zstd compression, which compresses backup data with Percona XtraBackup before it streams to object storage.
Smaller transfers mean faster uploads, lower egress costs, and less object storage consumption, especially relevant when your cluster is in a different region from your storage bucket. The operator handles decompression transparently during restore, so your recovery workflow stays the same.
You can enable compression globally by configuring XtraBackup in mysql.configuration on the Custom Resource:
spec:
  mysql:
    configuration: |
      [xtrabackup]
      compress=zstd
Or enable it per on-demand backup via containerOptions:
apiVersion: ps.percona.com/v1
kind: PerconaServerMySQLBackup
metadata:
  name: backup1-compressed
  finalizers:
    - percona.com/delete-backup
spec:
  clusterName: ps-cluster1
  storageName: s3-us-west
  containerOptions:
    args:
      xtrabackup:
        - "--compress"
Full details are in the compressed backups documentation. Percona XtraBackup’s zstd compression reference covers the algorithm-level tradeoffs if you want to tune further. One known limitation in 1.1.0: lz4 compression is not yet supported pending an upstream resolution.
Asynchronous Replication Retry Configuration

In asynchronous replication topologies, transient network issues can stall replication threads on a MySQL Pod. Previously, reconnection behavior was fixed. Now you can tune it via the Custom Resource using two environment variables:
- ASYNC_SOURCE_RETRY_COUNT: the number of reconnection attempts before the replica gives up
- ASYNC_SOURCE_CONNECT_RETRY: the delay in seconds between reconnection attempts
spec:
  mysql:
    env:
      - name: ASYNC_SOURCE_RETRY_COUNT
        value: "10"
      - name: ASYNC_SOURCE_CONNECT_RETRY
        value: "30"
This is useful in environments with higher network latency or less reliable connectivity between zones. You can give the replica more time to recover without manual intervention.
A related improvement (K8SPS-69): the readiness probe now fails if replication threads stop on a MySQL Pod. This prevents Kubernetes from routing traffic to a replica that has quietly fallen behind, a common source of stale reads that were difficult to detect without custom monitoring.
Other Improvements
Operational polish shipped alongside the headline features:
- Readiness probe catches stopped replication (K8SPS-69): the readiness probe now fails when replication threads stop, so Kubernetes stops routing traffic to replicas that have quietly fallen behind.
- Automatic PVC removal on async replication restore (K8SPS-215): old PVCs are cleaned up automatically when restoring in async replication mode, one less manual step after a restore.
- Scheduled backups paused on unhealthy clusters (K8SPS-435): backups no longer kick off against a degraded cluster, preventing partial or corrupted backup sets.
- Structured error handling (K8SPS-595): invalid storage configurations now surface as structured error messages instead of Operator panics.
- Status events reclassified (K8SPS-601): normal status transitions emit as Normal event types instead of warnings, cutting noise in kubectl describe output and alerting pipelines.
- HAProxy file descriptor handling (K8SPS-666): file descriptor management in the HAProxy container is optimized so connection counts are no longer silently capped on busy clusters.
The release also ships improved documentation: OpenShift installation instructions now include the full OLM procedure, an Operator upgrade tutorial for OpenShift has been added, and Helm documentation covers customized parameters and custom release naming.
Conclusion
Percona Operator for MySQL 1.1.0 delivers meaningful improvements to every phase of the database lifecycle on Kubernetes. PITR and incremental backups in Tech Preview give you a path toward granular recovery without full-backup overhead. Compression with zstd reduces your storage and egress costs immediately. Configurable async replication retries and a batch of stability fixes harden the Operator for production workloads at scale. These features are in this release because the community asked for them.
We encourage you to read the full release notes and try the new features. Feedback is welcome on the GitHub repository, the Community Forum, or JIRA.
Try It Out
Planet for the MySQL Community
MySQL MCP Server v1.7.0 is out
https://askdba.net/wp-content/uploads/2026/04/gemini_generated_image_fjioksfjioksfjio.png?w=624
April 19, 2026
It took three release candidates and more CI tweaks than I’d like to admit, but v1.7.0 is finally tagged GA. Here’s what actually changed and why it matters.
The thing I kept getting asked about: add_connection
Almost every multi-database user hits the same wall: you configure your connections at startup, and that’s it. Want to point Claude at a different instance mid-session? Restart the server. Not great.
add_connection fixes that. Enable it with MYSQL_MCP_EXTENDED=1 and MYSQL_MCP_ENABLE_ADD_CONNECTION=1, and Claude can register a new named connection on the fly — DSN validation, duplicate-name rejection, and a hard block on the root MySQL user all happen before the connection is accepted. Once it’s registered, you can select it with use_connection as usual.
It’s intentionally opt-in behind two flags. Allowing an AI client to register arbitrary database connections at runtime warrants an explicit “yes, I want this” from the operator.
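The three validation gates can be sketched as a small pipeline. This is an illustrative Python toy, not the server's Go code, and the DSN regex and error strings are invented for the example:

```python
# Toy version of the add_connection gates: DSN shape, duplicate names, and
# the root user are all checked before the connection is registered.
import re

connections = {"primary": "app:secret@tcp(db1:3306)/appdb"}

# Simplified go-sql-driver-style DSN: user:pass@tcp(host:port)/dbname
DSN_RE = re.compile(r"^(?P<user>[^:@]+):[^@]*@tcp\([^)]+\)/\w+$")

def add_connection(name, dsn):
    m = DSN_RE.match(dsn)
    if not m:
        return "rejected: invalid DSN"
    if name in connections:
        return "rejected: duplicate name"
    if m.group("user") == "root":
        return "rejected: root user not allowed"
    connections[name] = dsn
    return "ok"

print(add_connection("primary", "app:x@tcp(db2:3306)/appdb"))   # → rejected: duplicate name
print(add_connection("staging", "root:x@tcp(db2:3306)/appdb"))  # → rejected: root user not allowed
print(add_connection("staging", "app:x@tcp(db2:3306)/appdb"))   # → ok
```

The ordering is the interesting design choice: rejecting before registering means a bad request can never leave a half-configured connection behind.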
Finding stuff across a big schema: search_schema and schema_diff
Two tools I personally felt the absence of every time I was debugging a large schema.
search_schema does what it sounds like — pattern-match against table and column names across all accessible databases. Before this, you’d either write the query yourself or ask Claude to guess where a column lived. Now you just ask.
schema_diff is the one I’m more excited about. Point it at two databases, and it tells you what’s structurally different. Columns that exist in staging but not prod, type mismatches, missing indexes — all surface immediately. We’ve already caught more than a few “oh, that migration never ran” moments with it.
Pagination, retries, and the unglamorous stuff
run_query now supports an offset parameter for SELECT and UNION queries, returning has_more and next_offset in the response. Big result sets no longer mean hitting row caps and wondering what you missed.
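A client consuming that contract just loops on has_more and next_offset. A minimal sketch with a faked server call (the in-memory ROWS table and page size are illustrative):

```python
# Toy pagination client: keep requesting pages until has_more is false.

ROWS = list(range(10))
PAGE = 4

def run_query(offset):
    # Stand-in for the real tool call; returns one page plus cursor metadata.
    page = ROWS[offset:offset + PAGE]
    next_offset = offset + len(page)
    return {"rows": page,
            "has_more": next_offset < len(ROWS),
            "next_offset": next_offset}

collected, offset = [], 0
while True:
    resp = run_query(offset)
    collected.extend(resp["rows"])
    if not resp["has_more"]:
        break
    offset = resp["next_offset"]

print(collected)
# → [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
```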
Retries got a proper implementation too. Transient errors — bad pooled connections, deadlocks, lock wait timeouts — now trigger exponential backoff instead of just failing. After a driver.ErrBadConn the pool is re-pinged, which cuts recovery time noticeably after a MySQL restart.
Neither of these is flashy, but they’re the kind of thing that makes the tool feel solid rather than fragile.
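The retry shape described above is standard exponential backoff. A hedged sketch (delays, attempt counts, and the error types are illustrative, not the server's actual values):

```python
# Retry a callable on transient errors with exponentially growing delays;
# re-raise once the attempt budget is exhausted.
import time

TRANSIENT = (ConnectionError, TimeoutError)

def with_retries(fn, attempts=4, base_delay=0.01):
    for attempt in range(attempts):
        try:
            return fn()
        except TRANSIENT:
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))  # 10ms, 20ms, 40ms, ...

calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient")
    return "ok"

print(with_retries(flaky))  # → ok, succeeds on the third attempt
```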
Column masking
Set MYSQL_MCP_MASK_COLUMNS=email,password,ssn and those columns are redacted in every run_query response. Nothing leaves the server. No query rewrites, no application changes. It’s a small feature that a few teams have been asking for since before v1.6.
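Server-side masking of this kind is simple to picture: find the indexes of masked columns in the result's column list, then redact those cells in every row before the response is serialized. An illustrative sketch (the redaction token and data are made up):

```python
# Toy column masking: redact any cell whose column name is in the mask set,
# so the raw values never appear in the response payload.

MASKED = {"email", "password", "ssn"}

def mask_rows(columns, rows):
    idx = {i for i, c in enumerate(columns) if c in MASKED}
    return [["***" if i in idx else v for i, v in enumerate(row)]
            for row in rows]

cols = ["id", "email", "name"]
rows = [[1, "a@example.com", "Ada"], [2, "b@example.com", "Bob"]]
print(mask_rows(cols, rows))
# → [[1, '***', 'Ada'], [2, '***', 'Bob']]
```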
One breaking change worth knowing about: SSH host key verification
This one could bite you on upgrade if you’re using SSH tunnels. Host key verification is now on by default. The tunnel checks ~/.ssh/known_hosts (or MYSQL_SSH_KNOWN_HOSTS, or a pinned MYSQL_SSH_HOST_KEY_FINGERPRINT) before allowing the connection.
If you were running without strict host key checking, your tunnel will fail after upgrading until you either add the host key to known_hosts or explicitly opt out with MYSQL_SSH_STRICT_HOST_KEY_CHECKING=false. The opt-out exists, but it’s a MITM risk — the default is the right behavior.
Upgrading
# Homebrew
brew update && brew upgrade mysql-mcp-server

# Docker
docker pull ghcr.io/askdba/mysql-mcp-server:latest
Full changelog: github.com/askdba/mysql-mcp-server/releases/tag/v1.7.0
Questions and issues are welcome on GitHub.
Planet MySQL
