Watch ILM Recreate the Death Star Trench Run Out of Virtual Gingerbread

https://gizmodo.com/app/uploads/2025/12/star-wars-minis-ilm-death-star-trench-run-1280×853.jpg

Just in time for the holidays, Star Wars is celebrating in style with a cutesy recreation of the iconic Death Star trench run from A New Hope rendered as if it was painstakingly made out of gingerbread. But a simple festive sweet treat, this ain’t: it’s the first in a volley of shorts for a new animated miniseries, Star Wars Minis.

Lucasfilm has released the first of the shorts, a brief side-by-side comparison of the trench run sequences from the original Star Wars with the gingerbread recreation. It’s very cute, from the gingerbread cameos of Luke, Han, and Vader, to the gumdrop proton torpedoes fired to destroy the battle station (which blows up with a suitably adorable cookie aftershock ring).

But the short is really just a herald for a new series of similarly ideated shorts called Star Wars Minis, which will be less festively inclined. An accompanying behind-the-scenes video from ILM frames the new shorts as a series of ways to explore beloved moments from across Star Wars film and TV in new styles and materials, utilizing new technologies developed by ILM.

Have no fear about “new technologies” just yet, in the wake of Disney’s attempts to embrace generative AI before the tech bubble bursts: Star Wars Minis looks to be modelling things actually crafted by ILM first, from printed, chibi-fied models of C-3PO and R2-D2 to hand-knitted crochet dolls of Yoda, Grogu, and more. The latter style definitely seems to be the focus of this teaser, with knitted riffs on multiple scenes from Phantom Menace, A New Hope, Empire Strikes Back, and Return of the Jedi, as well as The Mandalorian rendered in digital fuzzy felt.

It’s a fun way to create short little Star Wars riffs, especially with clever technological solutions that deliver them on a similarly small scale.

We’ll see more from Star Wars Minis in 2026.


Gizmodo

The First ‘Avengers: Doomsday’ Teaser Is Finally Here

https://gizmodo.com/app/uploads/2025/12/avengers-doomsday-trailer-steve-rogers-1280×853.jpg

It’s been nearly a decade since the last MCU movie to wear the “Avengers” banner (and not hide it), and now the Earth’s Mightiest Heroes are returning for Avengers: Doomsday.

Helmed by returning directors Joe and Anthony Russo, the fifth Avengers film sees two different Avengers teams—one with the recently christened New Avengers, the other brought together by Anthony Mackie’s Captain America—caught up in a war for the multiverse that also loops in older versions of the Fox X-Men, our latest iteration of the Fantastic Four, and even the Wakandans and Namor. Oh, and Doctor Doom, played by former Iron Man Robert Downey, Jr.

https://x.com/MarvelStudios/status/2003465624325095737

But instead of all that, this tease centers on Chris Evans’ Steve Rogers. The worst-kept secret of the film is the return of the original Captain America, who now has a child, presumably with Peggy Carter (Hayley Atwell, also returning), after the two sealed their time-displaced romance with a kiss at the very end of Endgame. What that means for the film’s plot is a big mystery, but given the subtitle, things likely won’t stay rosy for the Rogers family.

“Steve Rogers will return” when Avengers: Doomsday hits theaters on December 18, 2026, followed by Avengers: Secret Wars on December 17, 2027. And before Doomsday comes out, Marvel sure hopes you liked Endgame enough to see it on the big screen again in September 2026.


Gizmodo

How To Measure The Impact Of Features

http://files.smashing.media/articles/how-measure-impact-features-tars/how-measure-impact-features-tars.jpg

So we design and ship a shiny new feature. How do we know if it’s working? How do we measure and track its impact? There is no shortage of UX metrics, but what if we wanted to establish a simple, repeatable, meaningful UX metric — specifically for our features? Well, let’s see how to do just that.

I first heard about the TARS framework from Adrian H. Raudaschl’s wonderful article on “How To Measure Impact of Features”. In it, Adrian highlights how his team tracks and decides which features to focus on — and then maps them against each other in a 2×2 matrix.

It turned out to be a very useful framework to visualize the impact of UX work through the lens of business metrics.

Let’s see how it works.

1. T = Target Audience (%)

We start by quantifying the target audience: what percentage of a product’s users have the specific problem that the feature aims to solve? To estimate it, we can study existing or similar features that address the same kind of problem, and how many users engage with them.

Target audience isn’t the same as feature usage though. As Adrian noted, if we know that an existing Export Button feature is used by 5% of all users, it doesn’t mean that the target audience is 5%. More users might have the problem that the export feature is trying to solve, but they can’t find it.

Question we ask: “What percentage of all our product’s users have that specific problem that a new feature aims to solve?”

2. A = Adoption (%)

Next, we measure how well we are “acquiring” our target audience. For that, we track how many users actually engage successfully with that feature over a specific period of time.

We don’t focus on CTRs or session duration here, but rather on whether users meaningfully engage with the feature. We look for anything that signals they found it valuable, such as sharing the export URL, the number of exported files, or the use of filters and settings.

High feature adoption (>60%) suggests that the problem was impactful. Low adoption (<20%) might imply that the problem has simple workarounds that people have relied upon. Changing habits takes time, too, and so low adoption in the beginning is expected.

Sometimes, low feature adoption has nothing to do with the feature itself, but rather where it sits in the UI. Users might never discover it if it’s hidden or if it has a confusing label. It must be obvious enough for people to stumble upon it.

Low adoption doesn’t always equal failure. If a problem only affects 10% of users, hitting 50–75% adoption within that specific niche means the feature is a success.

Question we ask: “What percentage of active target users actually use the feature to solve that problem?”
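To make that distinction concrete, here is a small Python sketch with purely hypothetical numbers, showing how adoption measured against the target niche differs from adoption measured against the entire user base:

# Hypothetical product: 200,000 monthly active users, and the problem
# the feature solves affects only 10% of them (the target audience, T).
monthly_active_users = 200_000
target_audience_share = 0.10
adopted_users = 11_500  # users who meaningfully engaged with the feature

target_users = monthly_active_users * target_audience_share       # 20,000
adoption_within_niche = adopted_users / target_users              # 57.5%
adoption_across_all_users = adopted_users / monthly_active_users  # ~5.8%

print(f"Adoption within target niche: {adoption_within_niche:.1%}")
print(f"Adoption across all users:    {adoption_across_all_users:.1%}")

Measured against everyone, the feature looks niche (under 6% usage); measured against the users who actually have the problem, it clears the 50–75% bar comfortably.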

3. R = Retention (%)

Next, we study whether a feature is actually used repeatedly. We measure the frequency of use, or more specifically, how many of the users who engaged with the feature keep using it over time. Typically, repeat usage is a strong signal of meaningful impact.

If a feature has a retention rate above 50% (on average), we can be quite confident that it has high strategic importance. A 25–35% retention rate signals medium strategic importance, and a 10–20% retention rate signals low strategic importance.

Question we ask: “Of all the users who meaningfully adopted a feature, how many came back to use it again?”

4. S = Satisfaction Score (CES)

Finally, we measure how satisfied users are with the feature we’ve shipped. We don’t ask everyone — we ask only “retained” users. This helps us spot hidden issues that might not be reflected in the retention score.

Once users have actually used a feature multiple times, we ask them how easy it was to solve their problem with that feature — on a scale from “much more difficult” to “much easier than expected”. That effort rating is the satisfaction score we track.
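The framework doesn’t prescribe an exact aggregation formula here, but one simple option is to treat the share of retained users who answer on the “easier” side as the Satisfaction percentage. A purely hypothetical sketch:

from collections import Counter

# Hypothetical effort-scale answers collected from retained users only.
responses = [
    "much easier than expected", "easier than expected", "as expected",
    "easier than expected", "more difficult than expected",
    "much easier than expected", "easier than expected",
]

positive = {"easier than expected", "much easier than expected"}
counts = Counter(responses)
satisfaction = sum(counts[r] for r in positive) / len(responses)
print(f"Satisfied (S): {satisfaction:.0%}")  # 71%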

Using TARS For Feature Strategy

Once we start measuring with TARS, we can calculate an S÷T score — the number of Satisfied Users divided by the number of Target Users. It gives us a sense of how well a feature is performing for its intended target audience. Once we do that for every feature, we can map them all across four quadrants of a 2×2 matrix.

Overperforming features are worth paying attention to: they have low retention but high satisfaction. They might simply be features that users don’t need very often, but that are extremely effective when they are used.

Liability features have high retention but low satisfaction, so they likely need work to improve them. We can also identify core features and project features, and then have a conversation with designers, PMs, and engineers about what to work on next.
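Putting the pieces together, here is a minimal Python sketch of how features could be scored and sorted into these quadrants. The cut-off values and the cascading reading of S÷T (adoption × retention × satisfaction, each measured as a share of the previous stage) are assumptions for illustration, not part of the framework itself:

from dataclasses import dataclass

@dataclass
class FeatureTARS:
    name: str
    adoption: float      # A: share of target users who meaningfully use the feature
    retention: float     # R: share of adopters who keep coming back
    satisfaction: float  # S: share of retained users who report it made things easier

    @property
    def s_over_t(self) -> float:
        # Satisfied users as a share of the target audience.
        return self.adoption * self.retention * self.satisfaction

    def quadrant(self, r_cut: float = 0.5, s_cut: float = 0.5) -> str:
        hi_r, hi_s = self.retention >= r_cut, self.satisfaction >= s_cut
        if hi_r and hi_s:
            return "core"
        if hi_r:
            return "liability"        # high retention, low satisfaction
        if hi_s:
            return "overperforming"   # low retention, high satisfaction
        return "project"

export = FeatureTARS("Export", adoption=0.575, retention=0.40, satisfaction=0.71)
print(export.quadrant(), f"S/T: {export.s_over_t:.1%}")  # overperforming S/T: 16.3%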

Conversion Rate Is Not a UX Metric

TARS doesn’t cover conversion rate, and for a good reason. As Fabian Lenz noted, conversion is often considered to be the ultimate indicator of success — yet in practice it’s very difficult to draw a clear connection between smaller design initiatives and big conversion goals.

The truth is that almost everybody on the team is working towards better conversion. An uptick might be connected to many different initiatives — from sales and marketing to web performance improvements, seasonal effects, and UX work.

UX can, of course, improve conversion, but conversion itself isn’t really a UX metric. Often, people simply can’t choose the product they’re using, and a desired business outcome often comes out of necessity and struggle rather than trust and appreciation.

High Conversion Despite Bad UX

As Fabian writes, high conversion rate can happen despite poor UX, because:

  • Strong brand power pulls people in,
  • Aggressive urgency tactics prove effective,
  • Prices are extremely attractive,
  • Marketing performs brilliantly,
  • Historical customer loyalty keeps people coming back,
  • Users simply have no alternative.

Low Conversion Despite Great UX

At the same time, a low conversion rate can occur despite great UX, because:

  • Offers aren’t relevant to the audience,
  • Users don’t trust the brand,
  • The business model is weak or the risk of failure seems high,
  • Marketing doesn’t reach the right audience,
  • External factors get in the way (price, timing, competition).

An improved conversion rate is a welcome outcome of UX initiatives, but good UX work typically shows up elsewhere: it improves task completion, reduces time on task, minimizes errors, and avoids decision paralysis. And there are plenty of actionable design metrics we can use to track UX and drive sustainable success.

Wrapping Up

Product metrics alone don’t always provide an accurate view of how well a product performs. Sales might look healthy while users are extremely inefficient and frustrated, yet churn stays low because users can’t choose the tool they’re using.

We need UX metrics to understand and improve user experience. What I love most about TARS is that it’s a neat way to connect customers’ usage and customers’ experience with relevant product metrics. Personally, I would extend TARS with UX-focused metrics and KPIs as well — depending on the needs of the project.

Huge thanks to Adrian H. Raudaschl for putting it together. And if you are interested in metrics, I highly recommend following him for practical, useful guides on exactly that!

Meet “How To Measure UX And Design Impact”

You can find more details on UX Strategy in 🪴 Measure UX & Design Impact (8h), a practical guide for designers and UX leads on how to measure and show the business impact of UX work. Use the code 🎟 IMPACT to save 20% today. Jump to the details.



Video + UX Training
$495.00 (reg. $799.00)
Get Video + UX Training
25 video lessons (8h) + Live UX Training.
100-day money-back guarantee.

Video only
$250.00 (reg. $395.00)
Get the video course
25 video lessons (8h). Updated yearly.
Also available as a UX Bundle with 3 video courses.


Smashing Magazine

Cut MySQL RDS Audit Log Costs by 95% with AWS S3

Detailed MySQL RDS audit logs are non-negotiable for security and compliance standards like PCI-DSS and HIPAA. However, a bloated cloud bill for storing these logs shouldn’t be your default reality.

This blog shows you how to strategically leverage AWS services to maintain full compliance while implementing massive cost savings using the Mydbops RDS LogShift tool. We’ll walk through a real client case where we reduced their annual audit log costs from over $30,000 to under $2,000. The client stayed on Amazon RDS for MySQL as the managed database platform, with no compromise in security or observability.

The $30,000 Story: How We Cut Our Client’s Audit Log Costs by 95%

One of our clients needed to retain MySQL audit logs for five years to meet compliance standards. They had enabled log streaming to Amazon CloudWatch Logs, which seemed like the straightforward solution. However, after seeing their AWS bill climb month after month, they reached out to us for a cost optimization review.

The problem was stark: they were generating 1 TB of audit data monthly, and nobody had looked closely at the retention settings after the initial setup.

Like many AWS users, they had left the CloudWatch Log Group’s default retention policy set to "Never Expire." This meant they were paying premium CloudWatch storage rates indefinitely.

Their Painful Cost Breakdown

CloudWatch Audit Log Cost Breakdown

1 TB MySQL RDS audit logs / month

Cost Component (Monthly for 1 TB)               Calculation             Annual Cost
CloudWatch Ingestion Fee                        1,024 GB × $0.50/GB     $6,144.00
CloudWatch Storage Fee                          1,024 GB × $0.03/GB     $368.64
Total Annual Cost (Recurring, key baseline)                             $6,512.64
Projected Cost (5 Years, Compounding Storage)                           $32,563.20

Based on 1 TB/month of MySQL RDS audit logs streamed to Amazon CloudWatch Logs with default retention.
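For reference, the recurring figures above can be reproduced with a few lines of arithmetic (the per-GB rates are the ones used in the table; actual CloudWatch pricing varies by region):

# Reproduce the CloudWatch baseline for 1 TB of audit logs per month.
GB_PER_MONTH = 1024
INGESTION_PER_GB = 0.50       # ingestion rate used in the table, $/GB
STORAGE_PER_GB_MONTH = 0.03   # storage rate used in the table, $/GB-month

annual_ingestion = GB_PER_MONTH * INGESTION_PER_GB * 12       # $6,144.00
annual_storage = GB_PER_MONTH * STORAGE_PER_GB_MONTH * 12     # $368.64
annual_total = annual_ingestion + annual_storage              # $6,512.64
five_year_projection = annual_total * 5                       # $32,563.20

print(f"Annual: ${annual_total:,.2f} | 5-year: ${five_year_projection:,.2f}")
# With "Never Expire" retention, the storage portion only grows as old data
# accumulates, so the real long-term bill trends even higher than this.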

If you already stream MySQL RDS logs into CloudWatch, this pattern may look familiar. For a deeper dive into how RDS features impact ongoing cloud cost, you can refer to the Mydbops article on Point-In-Time Recovery in MySQL RDS, which also discusses retention trade-offs and storage impact.

We recommended a different approach: keep only the minimum data required for immediate operational scans in CloudWatch and move everything else to cold storage. Here’s how we cut their RDS audit log costs by 95%.

Step 1: Optimize CloudWatch Retention to the Minimum

The first immediate relief came from capping the high-cost storage by managing the CloudWatch retention policy intelligently. The principle is simple: only keep the data you need for active, real-time operational scanning in CloudWatch Logs Insights. Everything else should be pruned.

We navigated to the Log Group in the AWS Console and changed the retention policy to 30 days. This ensured logs were automatically deleted after they passed their high-utility operational phase.
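The same change can be scripted instead of clicked through the console. A minimal boto3 sketch (the log group name below is a placeholder; RDS typically creates audit log groups named /aws/rds/instance/<db-instance-id>/audit when log exports are enabled):

import boto3

logs = boto3.client("logs")

# Cap retention on the RDS audit log group at 30 days so CloudWatch
# deletes log events automatically once the operational window has passed.
logs.put_retention_policy(
    logGroupName="/aws/rds/instance/my-db-instance/audit",  # placeholder
    retentionInDays=30,
)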

The Cost Impact of 30-Day Retention

This single change delivered two immediate benefits:

  • Eliminated the perpetual storage cost for any data older than 30 days
  • Minimized the volume of data scanned by Log Insights queries, reducing query costs

Step 2: The S3 Advantage for Long-Term Archival

With the operational window contained to 30 days, the next challenge was capturing and storing the long-term compliance data (5 years) cost-effectively.

The optimal solution is Amazon S3 with lifecycle policies. S3 allows data to move seamlessly through storage tiers, eventually landing in S3 Glacier Deep Archive where storage costs drop to approximately $0.00099 per GB—a 97% reduction compared to CloudWatch storage.
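As a sketch of what that looks like in practice (bucket name and prefix are placeholders), a single S3 lifecycle rule can transition archived logs to Glacier Deep Archive after 30 days and expire them once the five-year retention window has passed:

import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="my-audit-log-archive",  # placeholder bucket
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "audit-logs-to-deep-archive",
                "Filter": {"Prefix": "rds-audit-logs/"},
                "Status": "Enabled",
                # Move objects to Glacier Deep Archive after the hot window.
                "Transitions": [{"Days": 30, "StorageClass": "DEEP_ARCHIVE"}],
                # Delete once the 5-year compliance window (~1,825 days) is over.
                "Expiration": {"Days": 1825},
            }
        ]
    },
)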

The math is compelling, but the real challenge was implementation: how do we get logs from RDS to S3 without continuing to pay those crushing CloudWatch ingestion fees?

In practice, this means the client could store the same 60 TB of cumulative audit logs over five years at a tiny fraction of what CloudWatch would have charged. If you want to see how Mydbops thinks about backups, long-term durability, and recovery windows on RDS, the blog on migrating MySQL data to RDS/Aurora using XtraBackup and the post on MySQL RDS Point-In-Time Recovery show how S3 is used across backup and restore workflows.

Step 3: Cutting Costs with Mydbops RDS LogShift

The final game-changing step ensured that future log volumes bypass the costly CloudWatch ingestion pipeline altogether and flow directly to S3 for archival. This is where the Mydbops RDS LogShift tool delivered the essential optimization.

By deploying RDS LogShift, we achieved immediate and sustained cost reduction that will compound over the entire 5-year retention period.

How RDS LogShift Achieved a 95% Saving

The core of our optimization lies in how Mydbops RDS LogShift strategically manages log flow, directly addressing the biggest cost drivers:

Bypassing Ingestion Fees (The Critical Save): This is the game-changer. RDS LogShift can either directly retrieve rotated audit logs from the RDS instance itself or pull existing logs within their short retention period in CloudWatch Logs. By doing this, the tool ensures your long-term archival data circumvents the exorbitant $0.50/GB CloudWatch ingestion fee entirely. This process becomes a simple data transfer, turning a major cost center into a minor operational expense.
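LogShift itself is Mydbops tooling, but the “pull directly from the instance” path relies on public RDS APIs. A minimal boto3 sketch of that retrieval pattern (the instance identifier is a placeholder, and this is an illustration of the approach, not the tool’s code):

import boto3

rds = boto3.client("rds")
DB_INSTANCE = "my-mysql-instance"  # placeholder

# List rotated audit log files still present on the RDS instance.
log_files = rds.describe_db_log_files(
    DBInstanceIdentifier=DB_INSTANCE,
    FilenameContains="audit",
)["DescribeDBLogFiles"]

for f in log_files:
    marker, chunks = "0", []
    while True:
        portion = rds.download_db_log_file_portion(
            DBInstanceIdentifier=DB_INSTANCE,
            LogFileName=f["LogFileName"],
            Marker=marker,
        )
        chunks.append(portion.get("LogFileData") or "")
        if not portion["AdditionalDataPending"]:
            break
        marker = portion["Marker"]
    content = "".join(chunks)  # full file, ready to compress and ship to S3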

Compression and Partitioning: The tool efficiently compresses logs (reducing storage volume) and pushes them to S3 with date-based partitioning. This makes it easy to download and query specific logs when needed for compliance audits or security investigations.
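The exact layout LogShift produces isn’t documented here, but the general pattern of compressing each log and keying it by date looks roughly like this sketch (bucket, prefix, and key scheme are assumptions):

import gzip
from datetime import datetime, timezone

import boto3

s3 = boto3.client("s3")

def archive_log(log_name: str, content: str, bucket: str = "my-audit-log-archive") -> str:
    """Gzip one audit log and store it under a date-partitioned S3 key."""
    now = datetime.now(timezone.utc)
    # e.g. rds-audit-logs/year=2025/month=12/day=18/audit.log.3.gz
    key = (f"rds-audit-logs/year={now:%Y}/month={now:%m}/day={now:%d}/"
           f"{log_name.rsplit('/', 1)[-1]}.gz")
    s3.put_object(Bucket=bucket, Key=key, Body=gzip.compress(content.encode("utf-8")))
    return key

Date-based keys make it cheap to retrieve only the window an auditor asks for, instead of thawing the entire archive.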

The Long-Term Results: Over $30,000 Saved

The cumulative savings achieved for our client over the 5-year retention period are substantial:

Cost overview

CloudWatch vs. optimized storage

Same audit log volume, two retention windows.

Period                     Cumulative log volume   CloudWatch cumulative cost   Optimized S3 cumulative cost   Total savings
1 Year                     12 TB                   $6,512                       $350                           $6,162
5 Years (near 95% saved)   60 TB                   $32,563                      $1,700                         $30,863

By implementing the Mydbops RDS LogShift solution, our client gained full compliance while cutting their log costs by 94.7%. They maintained the same security posture and audit capabilities—just at a fraction of the cost.

Turn Your Audit Log Liability into a Cost-Saving Success Story

If you’re storing MySQL RDS audit logs in CloudWatch without a retention strategy, you’re likely overpaying by thousands of dollars annually. The solution doesn’t require compromising on compliance or security—it just requires smarter architecture.

Ready to see your AWS bill drop while maintaining full compliance? Contact Mydbops today to implement the RDS LogShift solution and start saving immediately.

Planet for the MySQL Community

The Meta Quest 3S is back down to its Cyber Monday all-time low of $250

The Meta Quest 3S is back on sale at its all-time low price of $250. That’s $50 off, or a discount of 17 percent, and matches a deal we saw on Cyber Monday. You can get the deal at Amazon and Best Buy, and the latter offers a $50 gift card with purchase.

The 3S is the more affordable model in the company’s current VR headset lineup. It features the same Snapdragon XR2 processor as the more expensive Meta Quest 3, but with lower resolution per eye and a slightly narrower field of view.

In our hands-on review, we gave the Meta Quest 3S a score of 90, noting how impressive the tech was compared to its price. The headset was comfortable to wear during longer gaming periods, and the performance was quick and responsive thanks largely to the upgraded processor and increased RAM from the Quest 2.

We were big fans of the new controllers, which the 3S shares with the more expensive Quest 3. This new generation of controller sports a more refined design, shedding the motion tracking ring and leaving behind a sleek form factor that fits in your hand like a glove.

We did miss the headphone jack, though most users are probably fine with the built-in speakers. You can wirelessly connect headphones for higher quality sound if you feel the need. The Quest 3S also recycles the old Fresnel lenses from the Quest 2, which can lead to some artifacts.

If you were considering a VR headset for yourself or a loved one this holiday season, the Meta Quest 3S offers an excellent value alongside impressive performance.

Follow @EngadgetDeals on X for the latest tech deals and buying advice.

This article originally appeared on Engadget at https://www.engadget.com/deals/the-meta-quest-3s-is-back-down-to-its-cyber-monday-all-time-low-of-250-144027382.html?src=rss

Engadget

Introducing Lightweight MySQL MCP Server: Secure AI Database Access

https://i0.wp.com/askdba.net/wp-content/uploads/2025/12/gemini_generated_image_ilnfp3ilnfp3ilnf.png?fit=1200%2C655&ssl=1&w=640


A lightweight, secure, and extensible MCP (Model Context Protocol) server for MySQL designed to bridge the gap between relational databases and large language models (LLMs).

I’m releasing a new open-source project: mysql-mcp-server, a lightweight server that connects MySQL to AI tools via the Model Context Protocol (MCP). It’s designed to make MySQL safely accessible to language models: structured, read-only, and fully auditable.

This project started out of a practical need: as LLMs become part of everyday development workflows, there’s growing interest in using them to explore database schemas, write queries, or inspect real data. But exposing production databases directly to AI tools is a risk, especially without guardrails.

mysql-mcp-server offers a simple, secure solution. It provides a minimal but powerful MCP server that speaks directly to MySQL, while enforcing safety, observability, and structure.

What it does

mysql-mcp-server allows tools that speak MCP, such as Claude Desktop, to interact with MySQL in a controlled, read-only environment. It currently supports:

  • Listing databases, tables, and columns
  • Describing table schemas
  • Running parameterized SELECT queries with row limits
  • Introspecting indexes, views, triggers (optional tools)
  • Handling multiple connections through DSNs
  • Optional vector search support if using MyVector
  • Running as either a local MCP-compatible binary or a remote REST API server

By default, it rejects any unsafe operations such as INSERT, UPDATE, or DROP. The goal is to make the server safe enough to be used locally or in shared environments without unintended side effects.
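The server’s actual enforcement isn’t reproduced here, but the idea of a read-only gate with an enforced row limit can be sketched in a few lines of Python (purely illustrative, not the project’s code):

import re

# Statement types the gate lets through; everything else is rejected.
READ_ONLY_PREFIXES = ("select", "show", "describe", "desc", "explain")

def ensure_read_only(sql: str, default_limit: int = 200) -> str:
    """Reject non-read statements and enforce a row limit on SELECTs."""
    stripped = sql.strip().rstrip(";").strip()
    if ";" in stripped:
        raise ValueError("multiple statements are not allowed")
    if not stripped.lower().startswith(READ_ONLY_PREFIXES):
        raise ValueError("only read-only statements are allowed")
    if stripped.lower().startswith("select") and not re.search(r"\blimit\s+\d+", stripped, re.I):
        stripped += f" LIMIT {default_limit}"  # cap result size by default
    return stripped

# ensure_read_only("SELECT * FROM orders")  -> "SELECT * FROM orders LIMIT 200"
# ensure_read_only("DROP TABLE orders")     -> raises ValueError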

Why this matters

As more developers, analysts, and teams adopt LLMs for querying and documentation, there’s a gap between conversational interfaces and real database systems. Model Context Protocol helps bridge that gap by defining a set of safe, predictable tools that LLMs can use.

mysql-mcp-server brings that model to MySQL in a way that respects production safety while enabling exploration, inspection, and prototyping. It’s helpful in local development, devops workflows, support diagnostics, and even hybrid RAG scenarios when paired with a vector index.

Getting started

You can run it with Docker:

docker run -e MYSQL_DSN='user:pass@tcp(mysql-host:3306)/' \
  -p 7788:7788 ghcr.io/askdba/mysql-mcp-server:latest

Or install via Homebrew:

brew install askdba/tap/mysql-mcp-server
mysql-mcp-server

Once running, you can connect any MCP-compatible client (like Claude Desktop) to the server and begin issuing structured queries.

Use cases

  • Developers inspecting unfamiliar databases during onboarding
  • Data teams writing and validating SQL queries with AI assistance
  • Local RAG applications using MySQL and vector search with MyVector
  • Support and SRE teams needing read-only access for troubleshooting

Roadmap and contributions

This is an early release and still evolving. Planned additions include:

  • More granular introspection tools (e.g., constraints, stored procedures)
  • Connection pooling and config profiles
  • Structured logging and tracing
  • More examples for integrating with LLM environments

If you’re working on anything related to MySQL, open-source AI tooling, or database accessibility, I’d be glad to collaborate.

Learn more

If you have feedback, ideas, or want to contribute, the project is open and active. Pull requests, bug reports, and discussions are all welcome.

Planet for the MySQL Community