https://media.notthebee.com/articles/69e260c276c5969e260c276c5a.jpg
This is the kind of ad that makes you stop dead in your tracks — and with good reason.
Not the Bee
https://cdn-blog.adafruit.com/uploads/2026/04/Screenshot-2026-04-24-at-10.00.55-AM.png

Star Wars Day is almost here! What better way to celebrate than a chat with C-3PO? Samuel Potozkin put a lot of work into this build. Beyond the hardware, Potozkin pulled off a ton of prop-building techniques to get a believable bot. The head alone required meticulous finishing of 3D-printed parts to create the metallic finish.
Via Reddit:
I built a C-3PO head and integrated a Raspberry Pi system inside so you can actually talk to it and it responds in real time.
Here’s how it works:
Audio comes in through a MEMS mic
The Pi processes the input and generates a response
Output is played through an internal speaker
I also used an exciter instead of a traditional speaker so the sound comes through the shell instead of a visible driver. This was my first time using a Raspberry Pi for anything and it took some tweaking. But I’m happy with how it turned out.
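The three steps above amount to a simple capture-process-play loop. Here is a minimal Python sketch of that control flow; the function names and stubbed behavior are illustrative stand-ins, not code from Potozkin's repository:

```python
# Toy model of the C-3PO interaction loop: capture audio, generate a
# reply, play it back. Real I/O is stubbed out so the control flow is clear.

def capture_audio() -> str:
    # Stand-in for reading from the MEMS mic and recognizing speech.
    return "hello threepio"

def generate_response(text: str) -> str:
    # Stand-in for the on-Pi processing that produces a reply.
    return f"Oh my! You said: {text}"

def play_through_exciter(reply: str) -> None:
    # Stand-in for driving the exciter mounted on the shell.
    print(reply)

def interaction_loop(turns: int = 1) -> list[str]:
    replies = []
    for _ in range(turns):
        heard = capture_audio()
        reply = generate_response(heard)
        play_through_exciter(reply)
        replies.append(reply)
    return replies
```

The real build replaces each stub with actual audio capture, inference, and playback, but the loop structure is the same.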
Very thorough walkthrough video via YouTube:
Project Repository:
https://github.com/spotozkin/threepio
3D printing – Adafruit Industries – Makers, hackers, artists, designers and engineers!
https://www.percona.com/wp-content/uploads/2026/04/featured-2.png
The latest release of the Percona Operator for MySQL, 1.1.0, is here. It brings point-in-time recovery, incremental backups, zstd backup compression, configurable asynchronous replication retries, and a set of stability fixes. This post walks through the highlights and how they help your MySQL deployments on Kubernetes.

Running stateful databases on Kubernetes means your backup and recovery story has to be airtight. A full nightly backup is fine, until the DBA drops a table at 2 PM and you’re looking at 14 hours of lost work. Or until your storage bill grows faster than your actual data because every backup is a full copy.
Percona Operator for MySQL 1.1.0 addresses exactly these pain points. This release lands point-in-time recovery, incremental backups, and backup compression: three features that together give you finer recovery control, faster backup jobs, and meaningfully smaller storage footprints. It also brings configurable asynchronous replication retries and a set of stability fixes that harden everyday operations.
This is a community-driven release. Nearly every headline feature in 1.1.0 traces back to user feedback: issues raised on forums.percona.com, JIRA tickets filed by operators in production, and recurring questions from teams running MySQL on Kubernetes at scale. The operator is fully open source, runs on any CNCF-conformant Kubernetes distribution (GKE, EKS, OpenShift, or bare metal), and costs nothing to run. Let’s walk through what’s new.
A backup restores your cluster to the moment the backup was taken, but incidents rarely respect your backup schedule. With point-in-time recovery now available in Tech Preview, you can restore your MySQL cluster to any specific timestamp or GTID position, not just to a backup snapshot.
The operator continuously collects binary logs and stores them alongside your full and incremental backups. When a restore is needed, it starts from the nearest full backup, applies incremental backups, and then replays binary logs forward to the exact point in time you specify. PITR works identically across asynchronous and group replication topologies, so you don’t need to restructure your setup to take advantage of it.
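Conceptually, the restore path just described is a sequencing problem: pick the nearest full backup at or before the target, layer the incrementals taken since, then replay binlog events up to the requested point. A toy Python model of that selection logic (this is an illustration of the idea, not the operator's actual code):

```python
# Toy model of point-in-time restore sequencing. Backups and binlog events
# are (timestamp, name) pairs with comparable timestamps (e.g. ints).

def plan_restore(full_backups, incrementals, binlog_events, target):
    # Nearest full backup at or before the target time.
    base = max((b for b in full_backups if b[0] <= target), key=lambda b: b[0])
    # Incrementals taken after the base, up to the target.
    incs = sorted(i for i in incrementals if base[0] < i[0] <= target)
    # Binlog replay from the last restored position to the exact target.
    last = incs[-1][0] if incs else base[0]
    replay = sorted(e for e in binlog_events if last < e[0] <= target)
    return [base[1]] + [i[1] for i in incs] + [e[1] for e in replay]
```

The operator performs the equivalent selection automatically when you specify a timestamp or GTID in the restore resource.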
A timestamp-based restore targets the exact moment before an incident:
apiVersion: ps.percona.com/v1
kind: PerconaServerMySQLRestore
metadata:
  name: restore-pitr-example
spec:
  clusterName: cluster1
  backupName: backup-20260418
  pitr:
    type: date
    date: "2026-04-18 13:45:00"
    # Restore with GTID instead:
    # type: gtid
    # gtid: a3e5ff70-83e2-11ef-8e57-7a62caf7e1e3:1-36
When you need finer precision than timestamp-based recovery (for example, replaying right up to the transaction immediately before a bad UPDATE), use pitr.type: gtid and specify the exact GTID position.
This is especially useful after an accidental DROP TABLE or a bad application deploy mid-day: you recover to the moment just before the event, not to last night’s snapshot.
See the documentation for the full configuration reference.
Note: PITR is marked Tech Preview in 1.1.0 and is not recommended for production workloads yet. Try it in staging and share your feedback on the community forum.

Full backups work, but they come with a cost: every job copies your entire dataset, consuming time, I/O, and storage whether or not much has changed since the last run. Incremental backups solve this by capturing only the changes since the previous backup.
The Operator integrates incremental backup support, powered by Percona XtraBackup, across all supported backup storage backends (S3-compatible, GCS, Azure Blob Storage). Both scheduled and on-demand backup jobs can run incrementally. When you trigger a restore, the Operator reconstructs the full state by chaining the base backup with the subsequent incremental sets, so you don’t manage that complexity manually.
This helps when you need shorter backup windows, lower I/O impact on the cluster, and a smaller storage footprint.
The backup manifest lives in deploy/backup/backup.yaml. Note the commented type and incrementalBaseBackupName fields: they are exactly how you switch a backup to incremental mode and point it at a previous backup as its base.
apiVersion: ps.percona.com/v1
kind: PerconaServerMySQLBackup
metadata:
  finalizers:
    - percona.com/delete-backup
  name: backup1
spec:
  clusterName: ps-cluster1
  storageName: minio
  type: incremental
Set type: full to take a base backup, then for each subsequent incremental set type: incremental.
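Assuming the same manifest layout, a follow-up incremental that names its base might look like the sketch below. The incrementalBaseBackupName field is the one called out above; the resource names here are illustrative:

```yaml
apiVersion: ps.percona.com/v1
kind: PerconaServerMySQLBackup
metadata:
  name: backup2
spec:
  clusterName: ps-cluster1
  storageName: minio
  type: incremental
  # Previous backup that this incremental builds on (illustrative name):
  incrementalBaseBackupName: backup1
```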
Note: Incremental backups are also marked Tech Preview in 1.1.0. You can learn more about this feature in a separate blog post: Incremental backups in Percona Kubernetes Operator for MySQL

Even without incremental backups, you can now shrink your full backup size significantly. The operator adds support for zstd compression, which compresses backup data with Percona XtraBackup before it streams to object storage.
Smaller transfers mean faster uploads, lower egress costs, and less object storage consumption, especially relevant when your cluster is in a different region from your storage bucket. The operator handles decompression transparently during restore, so your recovery workflow stays the same.
You can enable compression globally by configuring XtraBackup in mysql.configuration on the Custom Resource:
spec:
  mysql:
    configuration: |
      [xtrabackup]
      compress=zstd
Or enable it per on-demand backup via containerOptions:
apiVersion: ps.percona.com/v1
kind: PerconaServerMySQLBackup
metadata:
  name: backup1-compressed
  finalizers:
    - percona.com/delete-backup
spec:
  clusterName: ps-cluster1
  storageName: s3-us-west
  containerOptions:
    args:
      xtrabackup:
        - "--compress"
Full details are in the compressed backups documentation. Percona XtraBackup’s zstd compression reference covers the algorithm-level tradeoffs if you want to tune further. One known limitation in 1.1.0: lz4 compression is not yet supported pending an upstream resolution.

In asynchronous replication topologies, transient network issues can stall replication threads on a MySQL Pod. Previously, reconnection behavior was fixed. Now you can tune it via the Custom Resource using two environment variables:
spec:
  mysql:
    env:
      - name: ASYNC_SOURCE_RETRY_COUNT
        value: "10"
      - name: ASYNC_SOURCE_CONNECT_RETRY
        value: "30"
This is useful in environments with higher network latency or less reliable connectivity between zones. You can give the replica more time to recover without manual intervention.
A related improvement (K8SPS-69): the readiness probe now fails if replication threads stop on a MySQL Pod. This prevents Kubernetes from routing traffic to a replica that has quietly fallen behind, a common source of stale reads that were difficult to detect without custom monitoring.
Operational polish shipped alongside the headline features.
The release also ships improved documentation: OpenShift installation instructions now include the full OLM procedure, an Operator upgrade tutorial for OpenShift has been added, and Helm documentation covers customized parameters and custom release naming.
Percona Operator for MySQL 1.1.0 delivers meaningful improvements to every phase of the database lifecycle on Kubernetes. PITR and incremental backups in Tech Preview give you a path toward granular recovery without full-backup overhead. Compression with zstd reduces your storage and egress costs immediately. Configurable async replication retries and a batch of stability fixes harden the Operator for production workloads at scale. These features are in this release because the community asked for them.
We encourage you to read the full release notes and try the new features. Feedback is welcome on the GitHub repository, the Community Forum, or JIRA.
Planet for the MySQL Community
https://askdba.net/wp-content/uploads/2026/04/gemini_generated_image_fjioksfjioksfjio.png?w=624
April 19, 2026
It took three release candidates and more CI tweaks than I’d like to admit, but v1.7.0 is finally tagged GA. Here’s what actually changed and why it matters.
add_connection

Almost every multi-database user hits the same wall: you configure your connections at startup, and that’s it. Want to point Claude at a different instance mid-session? Restart the server. Not great.
add_connection fixes that. Enable it with MYSQL_MCP_EXTENDED=1 and MYSQL_MCP_ENABLE_ADD_CONNECTION=1, and Claude can register a new named connection on the fly: DSN validation, duplicate-name rejection, and a hard block on the root MySQL user all happen before the connection is accepted. Once it’s in, use_connection works with it as usual.
It’s intentionally opt-in behind two flags. Allowing an AI client to register arbitrary database connections at runtime warrants an explicit “yes, I want this” from the operator.
search_schema and schema_diff

Two tools I personally felt the absence of every time I was debugging a large schema.
search_schema does what it sounds like — pattern-match against table and column names across all accessible databases. Before this, you’d either write the query yourself or ask Claude to guess where a column lived. Now you just ask.
schema_diff is the one I’m more excited about. Point it at two databases, and it tells you what’s structurally different. Columns that exist in staging but not prod, type mismatches, missing indexes — all surface immediately. We’ve already caught more than a few “oh, that migration never ran” moments with it.
run_query now supports an offset parameter for SELECT and UNION queries, returning has_more and next_offset in the response. Big result sets no longer mean hitting row caps and wondering what you missed.
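The offset/has_more/next_offset shape lends itself to a simple drain loop. A sketch in Python with a stubbed response (the real tool is invoked over MCP; this only models the pagination contract described above):

```python
# Model of offset-based pagination: keep requesting with next_offset
# until has_more is False. The stub serves 5 rows in pages of 2.

ROWS = [1, 2, 3, 4, 5]
PAGE = 2

def run_query(offset=0):
    # Stand-in for the run_query tool's paginated response.
    chunk = ROWS[offset:offset + PAGE]
    return {
        "rows": chunk,
        "has_more": offset + PAGE < len(ROWS),
        "next_offset": offset + PAGE,
    }

def fetch_all():
    rows, offset = [], 0
    while True:
        resp = run_query(offset)
        rows.extend(resp["rows"])
        if not resp["has_more"]:
            return rows
        offset = resp["next_offset"]
```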
Retries got a proper implementation too. Transient errors — bad pooled connections, deadlocks, lock wait timeouts — now trigger exponential backoff instead of just failing. After a driver.ErrBadConn the pool is re-pinged, which cuts recovery time noticeably after a MySQL restart.
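Exponential backoff on transient errors is a standard pattern; a minimal illustration follows. The error classes, attempt count, and delays here are invented for the example and are not the server's actual values:

```python
import time

# Stand-ins for transient failures (bad pooled connection, deadlock, ...).
TRANSIENT = (ConnectionError, TimeoutError)

def with_retries(op, attempts=4, base_delay=0.01):
    # Retry transient failures with exponential backoff: the delay doubles
    # on each attempt. Non-transient errors propagate immediately.
    for attempt in range(attempts):
        try:
            return op()
        except TRANSIENT:
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))
```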
Neither of these is flashy, but they’re the kind of thing that makes the tool feel solid rather than fragile.
Set MYSQL_MCP_MASK_COLUMNS=email,password,ssn and those columns are redacted in every run_query response. Nothing leaves the server. No query rewrites, no application changes. It’s a small feature that a few teams have been asking for since before v1.6.
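Server-side masking of named columns is easy to picture. A toy version of the idea (the env-var parsing and the "***" redaction token are assumptions for illustration, not the server's implementation):

```python
import os

def masked_columns():
    # Comma-separated list, e.g. MYSQL_MCP_MASK_COLUMNS=email,password,ssn
    raw = os.environ.get("MYSQL_MCP_MASK_COLUMNS", "")
    return {c.strip() for c in raw.split(",") if c.strip()}

def redact(rows, mask):
    # Replace values of masked columns before the response leaves the server.
    return [{k: ("***" if k in mask else v) for k, v in row.items()}
            for row in rows]
```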
This one could bite you on upgrade if you’re using SSH tunnels. Host key verification is now on by default. The tunnel checks ~/.ssh/known_hosts (or MYSQL_SSH_KNOWN_HOSTS, or a pinned MYSQL_SSH_HOST_KEY_FINGERPRINT) before allowing the connection.
If you were running without strict host key checking, your tunnel will fail after upgrading until you either add the host key to known_hosts or explicitly opt out with MYSQL_SSH_STRICT_HOST_KEY_CHECKING=false. The opt-out exists, but it’s a MITM risk — the default is the right behavior.
# Homebrew
brew update && brew upgrade mysql-mcp-server

# Docker
docker pull ghcr.io/askdba/mysql-mcp-server:latest
Full changelog: github.com/askdba/mysql-mcp-server/releases/tag/v1.7.0
Questions and issues are welcome on GitHub.
Planet MySQL
https://picperf.io/https://laravelnews.s3.amazonaws.com/featured-images/Spatie-AI-Skills-LN.png
The Spatie team has open-sourced their internal coding guidelines as reusable AI skills via spatie/guidelines-skills. Skills are reusable instruction sets for AI coding assistants that activate automatically based on context — think of them as project-aware prompts that keep an AI agent aligned with your team’s conventions without repeated manual guidance.
The package ships with four skills covering the areas Spatie cares most about:
const declarations, strict equality operators, named functions, and destructuring patterns.

The skills are distributed through skills.sh, which means they work across multiple AI stacks: Claude Code, Cursor, Codex, and GitHub Copilot.
For Laravel Boost users, installation is via Composer:
composer require spatie/guidelines-skills --dev
php artisan boost:install
Select the Spatie guidelines from the available options and they’ll be set up automatically. To keep them current as the guidelines evolve, run:
composer update spatie/guidelines-skills
php artisan boost:update
If you’re not using Laravel Boost, you can install via the skills.sh CLI instead:
npx skills add spatie/guidelines-skills
Previously, Spatie offered a similar package spatie/boost-spatie-guidelines exclusively for Laravel Boost users. The move to skills.sh opens this up to anyone regardless of their AI tooling setup.
If you want a head start aligning your AI coding assistant with battle-tested Laravel and PHP conventions, the source is available on GitHub.
Laravel News
https://gizmodo.com/app/uploads/2026/04/96ea2509a90e527642c822303e56296a07bcfce4-1920×1080-1-1280×853.jpg
Wall Street seems to think Anthropic’s new AI design tool could be a serious threat to Figma and other software.
On Friday, Anthropic announced Claude Design, a new tool that lets users create polished visuals like slide decks, app prototypes, and marketing one-pagers using simple text prompts. The tool is powered by Claude Opus 4.7 and is rolling out gradually today as a research preview to Claude Pro, Max, Team, and Enterprise subscribers.
It works by letting users describe what they want in plain language prompts. They can also upload codebases and design files, allowing Claude to build a design system that automatically applies a team’s colors, typography, and other design components across projects.
Claude then generates an initial version of the design, which users can refine through conversation, inline comments, direct edits, or custom sliders built by Claude.
Projects can be exported as PDFs, PowerPoints, or into Canva. Once completed, designs can also be packaged for Claude Code to build into working projects.
Anthropic said the tool has already been used to create realistic prototypes, pitch decks, and marketing materials. The company is pitching it as a way for experienced designers to explore ideas more quickly, while also giving founders and product managers without a design background a way to bring their ideas to life.
“Claude Design gives designers room to explore widely and everyone else a way to produce visual work,” the company said in a press release.
Anthropic is also emphasizing that the tool could be used to complement other products rather than completely replace them.
“We’re excited to build on our collaboration with Claude, making it seamless for people to bring ideas and drafts from Claude Design into Canva, where they instantly become fully editable and collaborative designs ready to refine, share, and publish,” Canva’s CEO said in Anthropic’s press release.
It should be noted that LLMs have been incredibly unreliable when it comes to generating visual elements. Yes, image generators can be impressive at first glance, but when a user starts trying to edit individual elements, things can quickly fall apart. We will have to wait and see how well Claude Design pulls off its stated purpose.
Still, Wall Street appears to see it as competition for the design industry.
Figma’s stock fell about 7% on Friday following the announcement. The company is widely considered the dominant player in UI and UX design for websites and apps, with an estimated 80% to 90% market share.
The timing is notable. Just two months ago, Figma launched a feature called Code to Canvas, which lets users convert code generated by tools like Claude Code into editable designs inside Figma.
Adding to the tension, Anthropic Chief Product Officer Mike Krieger stepped down from Figma’s board just days ago amid speculation the company was gearing up to launch a design tool.
Figma did not immediately respond to a request for comment from Gizmodo.
Gizmodo
https://media.notthebee.com/articles/69e1268e7e12769e1268e7e128.jpg
Do you think ABC is upset?
Not the Bee
https://cdn-blog.adafruit.com/uploads/2026/04/32_IMG_3150_lowres.jpg
Shared by Layer me up NL on MakerWorld:
Bring a touch of magic into your room with these Potions!
Inspired by the iconic Minecraft potion bottle, these 3D-printable models are perfect for adding a playful fantasy vibe to your space. Several versions are available: you can choose a transparent model to add an LED light inside, a solid version that also works beautifully as a mini vase, or a mini Potion, which makes a fun and unique gift for kids (we recommend gluing the small cap for safety).
Download the files and learn more

Every Thursday is #3dthursday here at Adafruit! The DIY 3D printing community has passion and dedication for making solid objects from digital models. Recently, we have noticed electronics projects integrated with 3D printed enclosures, brackets, and sculptures, so each Thursday we celebrate and highlight these bold pioneers!
Have you considered building a 3D project around an Arduino or other microcontroller? How about printing a bracket to mount your Raspberry Pi to the back of your HD monitor? And don’t forget the countless LED projects that are possible when you are modeling your projects in 3D!
3D printing – Adafruit Industries – Makers, hackers, artists, designers and engineers!
https://theawesomer.com/photos/2026/04/rough_enough_crossbody_bag_t.jpg
This compact crossbody bag from Rough Enough is built for carrying phones, keys, wallets, and other small everyday accessories. It’s fabricated from water-repellent 1000D CORDURA fabric. The 4″ x 1″ x 8″ pouch fits even the biggest smartphones, and keeps EDC essentials organized with a quick-access front pocket and YKK zippers.
The Awesomer
https://static.pingcap.com/files/2026/04/10114939/Blog-Feature-Banner.png
Editor’s note: This post originally appeared on The New Stack and is republished with permission. The original version is available here.
Key Takeaways
- S3’s durability and global availability remove the need to co-locate data with compute.
- Decoupled storage enables ephemeral clusters, event-driven workflows, and automatic tiering.
- TiDB X uses S3 as its shared backend for independent scaling and faster recovery.
- Object storage-first architecture matches the elastic, on-demand needs of AI agents.
For decades, database designers have built distributed databases around the assumption that storage must live close to compute.
The farther data travels over the network, the reasoning goes, the greater the potential for delay. Local RAID (redundant array of independent disks) arrays, network-attached storage (NAS), and cluster file systems keep data close, making it quick and easy to access.
But in a distributed system, keeping the entire data store close to compute makes scaling slow, cumbersome, and expensive. Each time you replicate a node or cluster, you must replicate its associated data as well.
It isn’t ideal, but until recently, there wasn’t any reasonable alternative. Databases had to scale. Teams had to meet service-level agreements (SLAs). Wide-area networks weren’t reliable enough to support high-performance databases at scale. Database designers accordingly spent a great deal of energy solving problems related to coordination, consistency, and replication logic.
But imagine things were different. What if they didn’t have to worry about the network, where their data lived, or how to get it from Point A to Point B? How would they design a database then?
That’s the intriguing question raised by the advent of cloud object storage services like AWS S3, Google Cloud Storage, and Microsoft Blob Storage.
The structure of cloud object storage services couldn’t be simpler. They’re essentially giant heaps of data, accessed via an API, through key/value pairings.
Their unlimited storage capacity and their “everywhere” availability make them revolutionary. They can hold billions of records — images, logs, training data, whatever you need — and crucially, they can make every one of those records available to compute anywhere in the world, at any level of workload.
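At the API level, an object store really is just keys mapped to bytes. A toy in-memory model of that interface makes the point; real services layer versioning, authentication, replication, and durability guarantees on top of this shape:

```python
# Minimal key/value object store model: put/get/list by key prefix,
# mirroring the shape of S3-style APIs.

class ToyObjectStore:
    def __init__(self):
        self._objects = {}

    def put_object(self, key: str, data: bytes) -> None:
        # Keys are flat strings; "directories" are just prefixes.
        self._objects[key] = data

    def get_object(self, key: str) -> bytes:
        return self._objects[key]

    def list_objects(self, prefix: str = "") -> list[str]:
        return sorted(k for k in self._objects if k.startswith(prefix))
```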
S3 is extremely reliable. AWS designed S3 for 11 nines of durability (that’s 99.999999999%) and 99.99% availability, and it replicates data automatically across Amazon’s regional facilities. This means data on S3 is extremely safe and highly available without the need to manage physical disks or replication.
In addition, S3 scales seamlessly. There are no fixed volumes. No need for capacity planning. You can store practically unlimited data, and performance scales with parallel access rather than a single-server bottleneck. These guarantees free architects from worrying about low-level storage failures, capacity, and edge cases involving consistency.
In short, cloud object storage provides a highly durable, always-on, strongly-consistent single source of truth. It’s not as fast as local storage, but it doesn’t have to be. What services like S3 lack in sheer speed, they more than make up for in reliability and ease of maintenance. Instead of worrying about shards, segmentation, and software-defined networks, a database can simply retrieve data with confidence that it will be delivered in a reasonable amount of time.
What this means is that for the next generation of distributed databases, cloud object storage will, for all intents and purposes, be the network.
Building on cloud object storage enables several architectural patterns that were previously impractical.
Now let’s see how these capabilities translate into a real-world design. PingCAP uses cloud object storage as the foundation for TiDB X, the latest version of our popular open source distributed SQL database, TiDB.
Figure 1. TiDB X’s architecture with built-in object storage.
As shown in the diagram above, TiDB X fully separates compute and storage, using S3 for the shared backend. Compute nodes scale independently up and down. Fast local caches and Raft ensure consistency and low-latency access for hot data. Instead of keeping the entire data store close by, TiDB X keeps only the most active data near compute. TiDB X monitors query patterns, latency targets, and data characteristics, then reshapes itself in response to demand.
Its object storage-based architecture streamlines recovery and backup processes. By using S3 for primary data persistence, TiDB X reduces the overhead of traditional backup maintenance, enabling significantly faster completion times. This design also mitigates the impact of node failures: since local state functions primarily serve as a cache for durable, replicated storage, a failed instance can be replaced by retrieving its required state directly from object storage to resume operations.
From an operational perspective, cloud object storage makes TiDB X both highly adaptable and extremely cost-efficient. Its autoscaler responds not just to preset infrastructure thresholds, but to contextual signals like query patterns, latency targets, and data types. This enables it to reshape its resources in real time to address different tasks.
In sum, by building atop AWS’s high-performance object data store, TiDB X demonstrates how a cloud database can achieve elasticity, performance, and simplicity without sacrificing consistency or scale.
Keeping large relational data stores close to compute resources has always been a compromise. It was an expensive solution to a problem created by the limitations of traditional networking.
With architectures like TiDB, we see that the sheer power and scale of services like S3 have made the old workarounds unnecessary. They’ve rendered traditional architectures increasingly obsolete. More than that, they’ve enabled practices, such as ephemeral compute, suited to a world where users are more likely to be AI agents than humans.
As AI reshapes business organizations and best practices, the database itself is changing form. In large part, it’s services like S3 that are making that shift possible. By making data placeless, ubiquitous, and effortlessly accessible, cloud object storage is overturning the assumptions that once guided database design. The result will be databases that are more flexible and resilient — ones that are simpler to manage and scale almost effortlessly.
TiDB X is built from the ground up on cloud object storage. Explore the architecture behind it or try TiDB Cloud for free to see it in action.
Planet for the MySQL Community