The New Season Of ‘Californians Move To Texas’ Is Finally Here

https://media.babylonbee.com/articles/69d53ecbe60ab69d53ecbe60ac.jpg

Steve and Timpani moved from California to Texas in the hit series Californians Move To Texas. There were a few cultural differences they weren’t prepared for in going from California wokeness to Texas freedom. Now their story continues…

In the all-new season, Steve and Timpani’s continued adjustment to all things Texas hits a speed bump when Timpani’s sister, Brittuni, arrives to talk some "California sense" into her gun-loving sister. Can Steve and Timpani’s love survive the wedge slowly being driven between them? And who knows what other surprises may be in store.

Catch the trailer here and get hype:

Episode 1: The Rodeo will premiere on YouTube on April 7 at 7PM PT:

Babylon Bee

Tournament of Databases: The Winner!

https://villagesql.com/blog/content/images/size/w1200/2026/04/Gemini_Generated_Image_803gp6803gp6803g.png

And we have a winner! It was a busy weekend of matchups, and we have our champion.

Round 2 Results

#1 Oracle vs. #2 MongoDB – Winner = Oracle

Oracle has too much enterprise credibility to overcome, and it outlasts the document database fans to win its matchup.

#1 MySQL vs. #3 DuckDB – Winner = MySQL

While in-process analytics is gaining in importance, the versatility and transactional nature of MySQL make this a comfortable win.

#1 PostgreSQL vs. #2 Snowflake – Winner = PostgreSQL

This was a matchup of heavyweights: the OLTP leader vs. the OLAP leader, a classic clash of styles. Ultimately, the open source community of committers and extensions carried PostgreSQL to victory.

#1 SQL Server vs. #2 Databricks – Winner = SQL Server

Another battle of styles, where the enterprise chops of SQL Server go up against the momentum of Databricks in data management. Ultimately, Microsoft's ability to recruit from the transfer portal was enough to squeak by Databricks in this last-second decision.

Round 3 Results

#1 Oracle vs. #1 MySQL – Winner = MySQL

It’s the age-old story of the protégé vs. the parent figure. Oracle owns both databases, but only one is open source. That open source status allows the community to pull together and push it to victory. This was really a matchup of proprietary vs. open source, and today, at least, open source carried the day.

#1 PostgreSQL vs. #1 SQL Server – Winner = PostgreSQL

In what has become a theme of the tournament, it’s an open source juggernaut vs. the incumbent proprietary database. While SQL Server had all the support of the Windows community, the broader open source community was able to hold on and win. The unsung heroes were the extension authors who make PostgreSQL the innovation platform it is.

Championship Game Results

#1 PostgreSQL vs. #1 MySQL

I think we can all agree that tournaments and databases are better when two open source powerhouses compete. This is the renewal of a 30+ year rivalry, and it surely didn’t disappoint. The community and extensions of PostgreSQL showed up when it counted and had MySQL on the ropes in the second half. Ultimately, the multi-threaded nature of MySQL and its default replication, which have been the bedrock of MySQL usage, were able to hold off Postgres and seal the victory and the championship.

Champion = MySQL 

Summary:

What a thrilling end to the tournament. In the end, it came down to a two-horse race between the open source OLTP leaders; it was just a question of which would outlast the other. The real winner was open source and the communities that support these projects, so keep supporting your favorite open source project.

Congrats to MySQL! The winner of the 2026 Tournament of Databases.  

To get more database news and updates, subscribe to the Village Crier or check out VillageSQL on GitHub.

Planet for the MySQL Community

The New ‘Masters of the Universe’ Trailer Brings Eternia to Life

https://gizmodo.com/app/uploads/2026/02/Masters-of-the-Universe-transform-1280×853.jpg

The first trailer for Masters of the Universe set the tone, and now this second one digs deeper, gets bigger, and really lets us know what we can expect later this summer. There’s more Eternia, more fan-favorite side characters, and more Prince Adam, who finds himself in our world to protect the secrets of his home.

Directed by Travis Knight, Masters of the Universe comes to theaters June 5. It’s the long-awaited, highly anticipated return to live action for the popular toy line/animated series that found new life on Netflix. Here, though, Nicholas Galitzine stars as He-Man, alongside Camila Mendes as Teela, Idris Elba as Man-At-Arms, Alison Brie as Evil-Lyn, Morena Baccarin as the Sorceress, James Purefoy as King Randor, and, who could forget, Jared Leto as Skeletor.

Check out the new trailer for Masters of the Universe below.

We sincerely hope this film can find that tonal balance that Knight found with his Bumblebee movie, but we aren’t so sure. In this day and age, are general audiences ready to embrace such an out-there, fantastical world? Especially one that’s so based on decades-old nostalgia?

We’ll find out soon and have much more on Masters of the Universe in the coming weeks. For now, let us know what you thought of the trailer below.

Want more io9 news? Check out when to expect the latest Marvel, Star Wars, and Star Trek releases, what’s next for the DC Universe on film and TV, and everything you need to know about the future of Doctor Who.

Gizmodo

Real Python: How to Use Ollama to Run Large Language Models Locally

https://files.realpython.com/media/How-to-Run-Large-Language-Models-Locally-with-Ollama_Watermarked.c14373d94c34.jpg

Running Ollama in your terminal allows you to start chatting with a local large language model (LLM) quickly. You won’t need API keys, cloud services, or ongoing costs. Ollama is a free, open-source tool that lets you download and run models directly on your machine. By following this guide, you’ll install Ollama, chat with local models from your terminal, and use them to power agentic coding tools:

Example of Using Ollama to Run an LLM Locally

Large language models traditionally require expensive API subscriptions and a constant internet connection. Ollama eliminates both requirements by running models directly on your hardware. Because everything runs locally, your prompts stay on your machine, and no per-token fees apply.

Get Your Cheat Sheet: Click here to download your free Ollama cheat sheet and keep the essential steps and commands for running LLMs locally at your fingertips.

Take the Quiz: Test your knowledge with our interactive “How to Use Ollama to Run Large Language Models Locally” quiz. You’ll receive a score upon completion to help you track your learning progress:



Prerequisites

To follow this guide, you’ll need the following software and hardware:

  • macOS 14 Sonoma or newer, Windows 10 or newer, or a recent Linux distribution
  • At least 8 GB of RAM, or 16 GB or more for larger models
  • 5–16 GB of free disk space to store models
  • Basic skills with the command line or terminal, including opening a terminal and running commands

No Python installation is required for this guide, and no prior experience with LLMs or AI is needed. If you want to integrate Ollama with Python after finishing here, check out How to Integrate Local LLMs With Ollama and Python.

Step 1: Install Ollama and Pull Your First Model

To quickly install Ollama on your operating system, run the following command based on your platform:

Windows PowerShell

PS> irm https://ollama.com/install.ps1 | iex

Linux Shell

$ curl -fsSL https://ollama.com/install.sh | sh

Once this command finishes, Ollama will be installed on your system.

Note: In some Linux distributions, you may need to install curl to download the installer and the zstd library for extraction. On Debian/Ubuntu, you can install them with the following command:

Shell

$ sudo apt update && sudo apt install curl zstd

Alternatively, you can download a dedicated installer for Windows and macOS. Visit Ollama’s download page to get the installer for those operating systems.

Note: Ollama has a GUI application for macOS and Windows users. This quick guide focuses solely on the command-line (CLI) tool. See Ollama’s app announcement if you want to explore that option.

After installation, you can verify that the CLI is available with the following command:

Shell

$ ollama -v
ollama version is 0.17.7

The Ollama service should be running in the background. Normally, you don’t need to start it manually. It runs on port 11434 by default. If you get a warning after running the command above, then you may need to run the background server manually:

Shell

$ ollama serve

Read the full article at https://realpython.com/ollama/ »



Planet Python

Lerd – A Herd-like local PHP development environment for Linux


A Herd-like local PHP development environment for Linux — Podman-native, rootless, zero system dependencies.

Lerd bundles Nginx, PHP-FPM, and optional services (MySQL, Redis, PostgreSQL, Meilisearch, RustFS) as rootless Podman containers, giving you automatic .test domain routing, per-project PHP/Node version isolation, and one-command TLS — all without touching your system’s PHP or web server. Laravel-first, with built-in support for Symfony, WordPress, and any PHP framework via YAML definitions.


Lerd vs Laravel Sail

Laravel Sail is the official per-project Docker Compose solution. Lerd is a shared infrastructure approach, closer to what Laravel Herd does on macOS. Both are valid — they solve slightly different problems.

| | Lerd | Laravel Sail |
|---|---|---|
| Nginx | One shared container for all sites | Per-project |
| PHP-FPM | One container per PHP version, shared | Per-project container |
| Services (MySQL, Redis…) | One shared instance | Per-project (or manually shared) |
| .test domains | Automatic, zero config | Manual /etc/hosts or dnsmasq |
| HTTPS | lerd secure → trusted cert instantly | Manual or roll your own mkcert |
| RAM with 5 projects running | ~200 MB | ~1–2 GB (5× stacks) |
| Requires changes to project files | No | Yes — needs docker-compose.yml committed |
| Works on legacy / client repos | Yes — just lerd link | Only if you can add Sail |
| Defined in code (infra-as-code) | No | Yes |
| Team parity (all OS) | Linux only | macOS, Windows, Linux |

Choose Sail when: your team uses it, you need per-project service versions, or you want infrastructure defined in the repo.

Choose Lerd when: you work across many projects at once and don’t want a separate stack per repo, you can’t modify project files, you want instant .test routing, or you’re on Linux and want the Herd experience.


Lerd vs ddev

ddev is a popular open-source local development tool that spins up per-project Docker containers with a shared Traefik router. It supports many frameworks (Laravel, WordPress, Drupal, etc.) and runs on macOS, Windows, and Linux. Lerd is narrower in scope — Laravel-focused, Podman-native, shared infrastructure — closer to the Herd model.

| | Lerd | ddev |
|---|---|---|
| Container runtime | Rootless Podman | Docker (or Orbstack / Colima) |
| Architecture | Shared Nginx + PHP-FPM across all projects | Per-project containers + shared Traefik router |
| Services (MySQL, Redis…) | One shared instance | Per-project (isolated by default) |
| Domains | .test — automatic, zero config | .ddev.site or custom — automatic via Traefik |
| HTTPS | lerd secure → trusted cert instantly | Built-in via mkcert |
| RAM with 5 projects running | ~200 MB | ~500 MB–1 GB (5× app containers + router) |
| Requires changes to project files | No | Yes — needs .ddev/config.yaml committed |
| Works on legacy / client repos | Yes — just lerd link | Only if you can add ddev config |
| Framework support | Laravel built-in; any PHP framework via YAML definitions | Laravel, WordPress, Drupal, and many more |
| Defined in code (infra-as-code) | No | Yes |
| Team parity (all OS) | Linux only | macOS, Windows, Linux |

Choose ddev when: your team is cross-platform, you work with multiple frameworks (not just Laravel), you want per-project service isolation, or your workflow already depends on Docker.

Choose Lerd when: you’re on Linux, want a zero-config shared stack you can drop any project into without touching its files, prefer rootless Podman, or want the lightweight Herd-like experience.



Laravel News Links

Other than Apple-1, other world-changing inventions launched in 1976

https://photos5.appleinsider.com/gallery/48649-95006-000-lead-Woz-xl.jpg

Apple’s 50th anniversary is also the anniversary of the Apple-1. But the Apple-1 wasn’t the only world-changing product to come out in 1976; many other inventions shared the stage.

Apple founder Steve Wozniak, holding up an Apple-1 green logic board, in a crowd of people
The Apple-1 came out in 1976, but it wasn’t the only history maker

In 1976, Steve Wozniak, Steve Jobs, and Ronald Wayne shipped Apple’s first product — the Apple-1. Fifty years later, absent all three founders for various reasons, the company stands as one of the world’s largest technology companies by revenue. Not only is Apple vastly profitable, it has made incredible globe-spanning strides in computing, smartphones, wearables, and more.

While the Apple-1 is undeniably one of the most important devices in the home computing revolution, it was hardly the only heavy-hitter that came out that year. As it turns out, incredible strides were being made across many industries, ranging from spaceflight to medtech, consumer electronics to cryptography, with many of the inventions laying groundwork for products and systems we see today.

Continue Reading on AppleInsider | Discuss on our Forums

AppleInsider News

MySQL Archiving: 3 Ways to Clear the Bloat

https://continuent-cdn-prod.s3.us-east-1.amazonaws.com/public/blog/social/1999.jpg

The Silent Bloat: Managing Massive Logging Tables

Certain tables in MySQL, such as logging tables, can grow extremely large and occupy the bulk of a database. In many MySQL environments, 90% of the storage is consumed by data that is 0% useful for daily operations. Not only can these large tables be difficult to query, they can also quickly affect RPO and RTO because most backup and restore time is devoted to non-critical data.

A well-designed logging system would, for instance, take advantage of MySQL table partitions. A partition can be quickly dropped with almost no overhead on the database. However, most systems start small, and the exponential growth of these tables is not accounted for.
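For reference, pruning a month of data from a partitioned table is a quick metadata operation rather than a row-by-row delete (table and partition names here are hypothetical):

```sql
-- Dropping a partition removes all of its rows almost instantly,
-- with none of the undo-log or replication cost of a large DELETE.
ALTER TABLE app_logs DROP PARTITION p_2024_01;
```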

A massive insert/delete is NOT safe:

  • Undo Log Bloat
    Huge transactions require a lot of temporary space to allow for a potential rollback.
  • The Rollback Trap
    If you cancel a massive delete halfway through, MySQL must undo every single row change, which is often slower than the delete itself.
  • Replication Gridlock
    Replicas usually process transactions serially; one massive 30-minute delete on the primary will stop all data flow to your replicas for that same 30 minutes.
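The safe alternative to one giant transaction is chunking the work into many small ones. Here is a minimal sketch of that pattern in Python, using SQLite purely as a stand-in for MySQL; the table and column names (events, created_at) are hypothetical:

```python
# Chunked archiving: move old rows in small batches, committing each one,
# so undo-log growth and replication delay stay bounded.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, created_at TEXT)")
conn.execute("CREATE TABLE events_archive (id INTEGER PRIMARY KEY, created_at TEXT)")

# 3,000 "old" rows and 500 "recent" rows
conn.executemany("INSERT INTO events VALUES (?, ?)",
                 [(i, "2020-01-01") for i in range(3000)])
conn.executemany("INSERT INTO events VALUES (?, ?)",
                 [(i, "2026-01-01") for i in range(3000, 3500)])
conn.commit()

BATCH = 500
while True:
    ids = [r[0] for r in conn.execute(
        "SELECT id FROM events WHERE created_at < '2025-01-01' LIMIT ?", (BATCH,))]
    if not ids:
        break
    marks = ",".join("?" * len(ids))
    conn.execute(f"INSERT INTO events_archive SELECT * FROM events WHERE id IN ({marks})", ids)
    conn.execute(f"DELETE FROM events WHERE id IN ({marks})", ids)
    conn.commit()  # one small transaction per batch

print(conn.execute("SELECT COUNT(*) FROM events").fetchone()[0])          # 500
print(conn.execute("SELECT COUNT(*) FROM events_archive").fetchone()[0])  # 3000
```

Each commit releases locks and gives replicas a transaction they can apply immediately, which is exactly the behavior the tools below automate.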

With this in mind, here are a few options to assist our customers with archiving.

Option 1: pt-archiver (Percona Toolkit)

pt-archiver is a Perl script (part of the Percona Toolkit) that "nibbles" at the data. It selects a chunk of rows, inserts them into the archive, deletes them from the source, and commits. It monitors replication lag automatically (if using native MySQL replication) and pauses if the database gets too busy.

The Process:

  1. Install Percona Toolkit: sudo apt-get install percona-toolkit (or similar).
  2. Create the Archive Table:

    CREATE TABLE transactions_archive LIKE transactions;
    -- Optional: Switch engine to Archive or MyISAM if you need compression and don't need updates
    -- ALTER TABLE transactions_archive ENGINE=ARCHIVE;
  3. Run the Archiver:

    pt-archiver \
      --source h=localhost,D=mydb,t=transactions \
      --dest h=localhost,D=mydb,t=transactions_archive \
      --where "created_at < DATE_SUB(NOW(), INTERVAL 90 DAY)" \
      --limit 1000 \
      --txn-size 1000 \
      --bulk-delete \
      --progress 5000

Note

NOT Lag Aware: The --check-slave-lag flag, which would normally be used to ensure lag does not get too high, does not support Tungsten Replicator, so you should monitor the THL apply time instead.

  • Non-Blocking: It works in small transactions (1000 rows at a time).
  • Zero Data Loss: It explicitly inserts then deletes based on the Primary Key.

Option 2: DIY

The Logic (Pseudo-code): You want to iterate through the table using the Primary Key to avoid table scans.

  1. Identify the Cutoff: Find the Primary Key (ID) corresponding to 90 days ago.
  2. The Loop:

    • Start Transaction.
    • Select a batch of rows (e.g., 1000) that are older than 90 days FOR UPDATE (to lock them safely).
    • Insert them into the archive table.
    • Delete them from the transactions table using their specific IDs.
    • Commit Transaction.
    • Crucial: Sleep for 1-2 seconds to let the server breathe and replication catch up.

Perl code:

$dbh->{AutoCommit} = 0;
while (1) {
    # 1. Select IDs to move (limits locking to one batch)
    my $ids = $dbh->selectcol_arrayref(
        "SELECT id FROM transactions WHERE created_at < ? LIMIT 1000 FOR UPDATE",
        undef,
        $cutoff_date
    );
    unless (@$ids) {
        $dbh->rollback;    # Release the open transaction before exiting
        last;
    }
    my $id_list = join(',', @$ids);

    # 2. Copy to archive
    $dbh->do("INSERT INTO transactions_archive SELECT * FROM transactions WHERE id IN ($id_list)");
    # 3. Delete from source
    $dbh->do("DELETE FROM transactions WHERE id IN ($id_list)");
    $dbh->commit;
    # 4. Safety pause: let the server breathe and replication catch up
    sleep(1);
}

Option 3: Partitioning

Note

The Partition key (e.g., created_at) MUST be part of the Primary Key.

Step 1: Create the New "Shadow" Table

Create your new table with the exact same schema, but add partitioning immediately.

CREATE TABLE transactions_new (
    id INT NOT NULL,
    created_at DATETIME NOT NULL,
    amount DECIMAL(10,2),
    -- ... other columns ...
    PRIMARY KEY (id, created_at) -- Partition key must be in PK
)
PARTITION BY RANGE (TO_DAYS(created_at)) (
    PARTITION p_old VALUES LESS THAN (TO_DAYS('2024-01-01')),
    PARTITION p_2024_01 VALUES LESS THAN (TO_DAYS('2024-02-01')),
    PARTITION p_2024_02 VALUES LESS THAN (TO_DAYS('2024-03-01')),
    -- Always have a catch-all for future dates
    PARTITION p_future VALUES LESS THAN MAXVALUE
);

Step 2: The Migration (The "Gradual" Part)

You have two choices here depending on your uptime requirements.

Option A: The "Maintenance Window" (Safest & Easiest)
Ideal if you can afford 15-30 minutes of downtime

  1. Stop the application (or pause writes).
  2. Rename transactions to transactions_legacy.
  3. Rename transactions_new to transactions.
  4. Copy the "Hot" Data: Run a SQL script to copy only the last 90 days of data from legacy to the new table.

    INSERT INTO transactions SELECT * FROM transactions_legacy
    WHERE created_at >= DATE_SUB(NOW(), INTERVAL 90 DAY);
  5. Start the application.

Result: Your app creates new rows in the partitioned table. The old table (transactions_legacy) is now effectively your "Archive". You can drop it later or back it up to cold storage.

Option B: Zero Downtime (Double Writes)
Ideal if you cannot stop the business

  1. Modify your application code to write new transactions to BOTH transactions (old) and transactions_new, or use database triggers to handle the cascaded write. BE SURE THESE TRIGGERS ARE CAA AWARE IF USING TUNGSTEN CAA TOPOLOGY.
  2. Wait for the code to deploy.
  3. Run a backfill script (like the Perl one above) to copy the "Hot" data (last 90 days) from Old → New.
    You must handle duplicate key errors since the app is already writing new data.
  4. Once the new table has the last 90 days of data, deploy code to read/write ONLY to transactions_new.
  5. Drop the old table.
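The duplicate-key handling in step 3 can be done in SQL itself. A minimal sketch in Python, using SQLite's INSERT OR IGNORE as a stand-in for MySQL's INSERT IGNORE (table names hypothetical):

```python
# Duplicate-key-safe backfill: rows already written to the new table by the
# application's double-writes are silently skipped instead of raising errors.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE transactions_old (id INTEGER PRIMARY KEY, amount REAL)")
conn.execute("CREATE TABLE transactions_new (id INTEGER PRIMARY KEY, amount REAL)")

# Old table holds rows 1..5; double-writes already put rows 4..6 in the new table
conn.executemany("INSERT INTO transactions_old VALUES (?, ?)",
                 [(i, i * 1.0) for i in range(1, 6)])
conn.executemany("INSERT INTO transactions_new VALUES (?, ?)",
                 [(i, i * 1.0) for i in range(4, 7)])

# Backfill: rows 4 and 5 collide with the double-writes and are skipped
conn.execute("INSERT OR IGNORE INTO transactions_new SELECT * FROM transactions_old")
conn.commit()

print(conn.execute("SELECT COUNT(*) FROM transactions_new").fetchone()[0])  # 6
```

In MySQL, use INSERT IGNORE (or INSERT ... ON DUPLICATE KEY UPDATE if the new rows should win) for the same effect.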

Conclusion: Which Approach Should You Take?

Managing huge logging tables is about more than saving disk space. It’s also about making sure your backups and restores actually work when you need them. There isn’t a single "perfect" way to do this, so choose the one that fits your setup:

  • Use pt-archiver if you want a solid, pre-built tool and you’re running a standard MySQL setup. It’s great for "set it and forget it" archiving.
  • Use a DIY script if you have a complex setup or need total control over how much the process "sleeps" to keep replication lag at zero.
  • Use Partitioning if you’re ready to change your table structure to make future cleanups as simple as dropping a partition.

Planet for the MySQL Community

CMMG MK4 DL-44 Blaster .22LR – $899.99

https://www.ammoland.com/wp-content/uploads/2026/03/CMMG-Blaster-Deal-500×367.jpg

Limited Time Deal

CMMG MK4 DL-44 Blaster .22LR – $899.99

For Star Wars fans and rimfire shooters alike, the CMMG MK4 DL-44 Blaster is the kind of gun that instantly grabs your attention. Inspired by the iconic look of Han Solo’s legendary blaster, this .22 LR pistol brings sci-fi style into the real world with a hand-carved grip, custom muzzle device, and battle-worn finish that give it serious cinematic appeal.

It is not just a novelty piece, either. Under that unmistakable space-gun profile is a functional semi-auto .22 built on CMMG’s proven platform, making it a fun range gun for collectors, plinkers, and anyone who has ever wanted to own something that looks like it came straight out of a galaxy far, far away.

Top Features

  • Limited-run DL-44 Blaster styling with hand-carved grip
  • Battle-worn Cerakote finish makes each pistol look unique
  • Lightweight 3.3 lb setup for easy handling
  • 4.5″ barrel with 1/2×28 threads for added versatility
  • Semi-auto .22 LR fun with a 10-round capacity

Why Shooters Love It

The CMMG MK4 DL-44 Blaster delivers more than just looks. It gives shooters a lightweight, semi-auto .22 LR pistol built on CMMG’s platform, blending collectible appeal with practical range fun. The threaded barrel, billet upper, forged lower, and limited-production styling make this one stand out for plinking, display, or just owning something different from the usual rimfire lineup.

Unbeatable Price

  • Regular Price: $1,199.99
  • Deal Price: $899.99
    • You Save: $300.00 (25% off)


Before you buy read AmmoLand News’s complete Daily Deal Disclaimer here.

AmmoLand Shooting Sports News

Ultimate Guide to Connecting 3D Printed Parts | Pins, Fins, Slots, & Snaps

http://img.youtube.com/vi/vsHpiHhB3RU/0.jpg

Slant 3D shared this video on Youtube!

Joining 3D printed parts shouldn’t be guesswork. In this video, we break down the most reliable ways to connect multi-part prints — without relying on perfect tolerances, support-heavy features, or fragile pegs. You’ll see why common connectors fail, and better options you can use instead, from diamond pegs and slab-and-slot joints to spring-loaded T-slots, snap fits, and advanced locking tabs. Whether you’re making terrain, models, or large assemblies, these design rules help your parts fit cleanly, hold tightly, and print reliably on any machine.

See more!

3D printing – Adafruit Industries – Makers, hackers, artists, designers and engineers!