How To Measure The Impact Of Features

http://files.smashing.media/articles/how-measure-impact-features-tars/how-measure-impact-features-tars.jpg

So we design and ship a shiny new feature. How do we know if it’s working? How do we measure and track its impact? There is no shortage of UX metrics, but what if we wanted to establish a simple, repeatable, meaningful UX metric — specifically for our features? Well, let’s see how to do just that.

I first heard about the TARS framework from Adrian H. Raudaschl’s wonderful article on “How To Measure Impact of Features”. In it, Adrian highlights how his team tracks and decides which features to focus on, and then maps them against each other in a 2×2 matrix.

It turned out to be a very useful framework to visualize the impact of UX work through the lens of business metrics.

Let’s see how it works.

1. Target Audience (%)

We start by quantifying the target audience: what percentage of a product’s users have the specific problem that the feature aims to solve? We can study existing features that address similar problems and see how many users engage with them.

Target audience isn’t the same as feature usage though. As Adrian noted, if we know that an existing Export Button feature is used by 5% of all users, it doesn’t mean that the target audience is 5%. More users might have the problem that the export feature is trying to solve, but they can’t find it.

Question we ask: “What percentage of all our product’s users have that specific problem that a new feature aims to solve?”

2. Adoption (%)

Next, we measure how well we are “acquiring” our target audience. For that, we track how many users actually engage successfully with that feature over a specific period of time.

We don’t focus on CTRs or session duration here, but rather on whether users meaningfully engage with the feature: for example, signals that they found it valuable, such as sharing the export URL, the number of exported files, or the use of filters and settings.

High feature adoption (>60%) suggests that the problem was impactful. Low adoption (<20%) might imply that the problem has simple workarounds that people have relied upon. Changing habits takes time, too, and so low adoption in the beginning is expected.

Sometimes, low feature adoption has nothing to do with the feature itself, but rather where it sits in the UI. Users might never discover it if it’s hidden or if it has a confusing label. It must be obvious enough for people to stumble upon it.

Low adoption doesn’t always equal failure. If a problem only affects 10% of users, hitting 50–75% adoption within that specific niche means the feature is a success.

Question we ask: “What percentage of active target users actually use the feature to solve that problem?”

3. Retention (%)

Next, we study whether a feature is actually used repeatedly. We measure the frequency of use, or specifically, how many users who engaged with the feature actually keep using it over time. Typically, it’s a strong signal for meaningful impact.

If a feature has an average retention rate above 50%, we can be quite confident that it has high strategic importance. A 25–35% retention rate signals medium strategic significance, and 10–20% signals low strategic importance.

Question we ask: “Of all the users who meaningfully adopted a feature, how many came back to use it again?”

4. Satisfaction Score (CES)

Finally, we measure how satisfied users are with the feature we’ve shipped. We don’t ask everyone, only “retained” users. This helps us spot hidden issues that might not show up in the retention score.

Once users have used a feature multiple times, we ask them how easy it was to solve their problem with it, on a scale from “much more difficult” to “much easier than expected”. We know which end of that scale we want to land on.

Using TARS For Feature Strategy

Once we start measuring with TARS, we can calculate an S÷T score: the number of satisfied users divided by the number of target users. It gives us a sense of how well a feature is performing for its intended target audience. Once we do that for every feature, we can map all features across four quadrants in a 2×2 matrix.

Overperforming features are worth paying attention to: they have low retention but high satisfaction. These might simply be features that users don’t need often, but when they do use them, they’re extremely effective.

Liability features have high retention but low satisfaction, so they likely need improvement. We can also identify core features and project features, and then have a conversation with designers, PMs, and engineers about what to work on next.
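
As a rough illustration, here is a minimal Python sketch of how these numbers could be computed from simple event counts and then mapped onto the quadrants above. The thresholds, the example counts, and the mapping of “core” (high retention, high satisfaction) and “project” (low retention, low satisfaction) features are my assumptions, not part of the original framework.

# Minimal TARS sketch; all counts and thresholds are illustrative assumptions.

def pct(part, whole):
    return 100.0 * part / whole if whole else 0.0

def tars(all_users, users_with_problem, adopters, retained, satisfied):
    t = pct(users_with_problem, all_users)         # Target audience (%)
    a = pct(adopters, users_with_problem)          # Adoption (%) among the target audience
    r = pct(retained, adopters)                    # Retention (%) among adopters
    s = pct(satisfied, retained)                   # Satisfaction (%) among retained users
    s_over_t = pct(satisfied, users_with_problem)  # S ÷ T: satisfied users vs. target users
    return t, a, r, s, s_over_t

def quadrant(retention, satisfaction, threshold=50.0):
    # Assumed mapping; the article only names the first two quadrants explicitly.
    if retention >= threshold and satisfaction >= threshold:
        return "core"
    if retention >= threshold:
        return "liability"       # high retention, low satisfaction
    if satisfaction >= threshold:
        return "overperforming"  # low retention, high satisfaction
    return "project"

# Example with made-up numbers for an "Export" feature:
t, a, r, s, s_over_t = tars(10_000, 2_500, 1_500, 800, 600)
print(f"T={t:.0f}% A={a:.0f}% R={r:.0f}% S={s:.0f}% S/T={s_over_t:.0f}%", quadrant(r, s))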

Conversion Rate Is Not a UX Metric

TARS doesn’t cover conversion rate, and for a good reason. As Fabian Lenz noted, conversion is often considered the ultimate indicator of success, yet in practice it’s very difficult to draw a clear connection between smaller design initiatives and big conversion goals.

The truth is that almost everybody on the team is working towards better conversion. An uptick might be connected to many different initiatives, from sales and marketing to web performance improvements to seasonal effects to UX work.

UX can, of course, improve conversion, but it’s not really a UX metric. Often, people simply can’t choose the product they are using. And often a desired business outcome comes out of necessity and struggle, rather than trust and appreciation.

High Conversion Despite Bad UX

As Fabian writes, high conversion rate can happen despite poor UX, because:

  • Strong brand power pulls people in,
  • Aggressive urgency tactics prove effective,
  • Prices are extremely attractive,
  • Marketing performs brilliantly,
  • Customers are historically loyal,
  • Users simply have no alternative.

Low Conversion Despite Great UX

At the same time, a low conversion rate can occur despite great UX, because:

  • Offers aren’t relevant to the audience,
  • Users don’t trust the brand,
  • The business model is poor or the risk of failure is high,
  • Marketing doesn’t reach the right audience,
  • External factors (price, timing, competition) get in the way.

Improved conversion is a welcome outcome of UX initiatives. But good UX work typically improves task completion, reduces time on task, minimizes errors, and avoids decision paralysis. And there are plenty of actionable design metrics we can use to track UX and drive sustainable success.

Wrapping Up

Product metrics alone don’t always provide an accurate view of how well a product performs. Sales might perform well, but users might be extremely inefficient and frustrated. Yet the churn is low because users can’t choose the tool they are using.

We need UX metrics to understand and improve user experience. What I love most about TARS is that it’s a neat way to connect customers’ usage and customers’ experience with relevant product metrics. Personally, I would extend TARS with UX-focused metrics and KPIs as well — depending on the needs of the project.

Huge thanks to Adrian H. Raudaschl for putting it together. And if you are interested in metrics, I highly recommend following him for practical and useful guides on exactly these topics!

Meet “How To Measure UX And Design Impact”

You can find more details on UX Strategy in 🪴 Measure UX & Design Impact (8h), a practical guide for designers and UX leads to measure and show your UX impact on business. Use the code 🎟 IMPACT to save 20% off today. Jump to the details.



Video + UX Training: $495.00 (reg. $799.00). 25 video lessons (8h) + Live UX Training. 100-day money-back guarantee.

Video only: $250.00 (reg. $395.00). 25 video lessons (8h), updated yearly. Also available as a UX Bundle with 3 video courses.


Smashing Magazine

The Meta Quest 3S is back down to its Cyber Monday all-time low of $250

The Meta Quest 3S is back on sale at its all-time low price of $250. That’s $50 off, or a discount of 17 percent, and matches a deal we saw on Cyber Monday. You can get the deal at Amazon and Best Buy, and the latter offers a $50 gift card with purchase.

The 3S is the more affordable model in the company’s current VR headset lineup. It features the same Snapdragon XR2 processor as the more expensive Meta Quest 3, but with lower resolution per eye and a slightly narrower field of view.

In our hands-on review, we gave the Meta Quest 3S a score of 90, noting how impressive the tech was compared to its price. The headset was comfortable to wear during longer gaming periods, and the performance was quick and responsive thanks largely to the upgraded processor and increased RAM from the Quest 2.

We were big fans of the new controllers, which the 3S shares with the more expensive Quest 3. This new generation of controller sports a more refined design, shedding the motion tracking ring and leaving behind a sleek form factor that fits in your hand like a glove.

We did miss the headphone jack, though most users are probably fine with the built-in speakers. You can wirelessly connect headphones for higher quality sound if you feel the need. The Quest 3S also recycles the old Fresnel lenses from the Quest 2, which can lead to some artifacts.

If you were considering a VR headset for yourself or a loved one this holiday season, the Meta Quest 3S offers an excellent value alongside impressive performance.

Follow @EngadgetDeals on X for the latest tech deals and buying advice.

This article originally appeared on Engadget at https://www.engadget.com/deals/the-meta-quest-3s-is-back-down-to-its-cyber-monday-all-time-low-of-250-144027382.html?src=rss

Engadget

Cut MySQL RDS Audit Log Costs by 95% with AWS S3

Detailed MySQL RDS audit logs are non-negotiable for security and compliance standards like PCI-DSS and HIPAA. However, a bloated cloud bill for storing these logs shouldn’t be your default reality.

This blog shows you how to strategically leverage AWS services to maintain full compliance while implementing massive cost savings using the Mydbops RDS LogShift tool. We’ll walk through a real client case where we reduced their projected five-year audit log costs from over $30,000 to under $2,000. The client stayed on Amazon RDS for MySQL as the managed database platform, with no compromise in security or observability.

The $30,000 Story: How We Cut Our Client’s Audit Log Costs by 95%

One of our clients needed to retain MySQL audit logs for five years to meet compliance standards. They had enabled log streaming to Amazon CloudWatch Logs, which seemed like the straightforward solution. However, after seeing their AWS bill climb month after month, they reached out to us for a cost optimization review.

The problem was stark: they were generating 1 TB of audit data monthly, and nobody had looked closely at the retention settings after the initial setup.

Like many AWS users, they had left the CloudWatch Log Group’s default retention policy set to "Never Expire." This meant they were paying premium CloudWatch storage rates indefinitely.

Their Painful Cost Breakdown

CloudWatch audit log cost breakdown (1 TB of MySQL RDS audit logs per month):

Cost component                                  Calculation (monthly)    Annual cost
CloudWatch ingestion fee                        1,024 GB × $0.50/GB      $6,144.00
CloudWatch storage fee                          1,024 GB × $0.03/GB      $368.64
Total annual cost (recurring baseline)                                   $6,512.64
Projected cost (5 years, compounding storage)                            $32,563.20

Based on 1 TB/month of MySQL RDS audit logs streamed to Amazon CloudWatch Logs with default retention.
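
For reference, a few lines of Python reproduce the figures above as the table presents them (the five-year projection is the recurring annual total multiplied by five):

gb_per_month = 1024              # 1 TB of audit logs per month
ingestion = gb_per_month * 0.50  # CloudWatch ingestion, $/month
storage = gb_per_month * 0.03    # CloudWatch storage, $/month
annual = (ingestion + storage) * 12
print(f"Annual: ${annual:,.2f}, 5-year: ${annual * 5:,.2f}")
# Annual: $6,512.64, 5-year: $32,563.20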

If you already stream MySQL RDS logs into CloudWatch, this pattern may look familiar. For a deeper dive into how RDS features impact ongoing cloud cost, you can refer to the Mydbops article on Point-In-Time Recovery in MySQL RDS, which also discusses retention trade-offs and storage impact.

We recommended a different approach: keep only the minimum data required for immediate operational scans in CloudWatch and move everything else to cold storage. Here’s how we cut their RDS audit log costs by 95%.

Step 1: Optimize CloudWatch Retention to the Minimum

The first immediate relief came from capping the high-cost storage by managing the CloudWatch retention policy intelligently. The principle is simple: only keep the data you need for active, real-time operational scanning in CloudWatch Logs Insights. Everything else should be pruned.

We navigated to the Log Group in the AWS Console and changed the retention policy to 30 days. This ensured logs were automatically deleted after they passed their high-utility operational phase.
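
If you’d rather script the change than click through the console, a minimal boto3 sketch could look like this (the log group name below is a placeholder for your own RDS audit log group):

import boto3

logs = boto3.client("logs")

# Cap retention at 30 days for the RDS audit log group (name is a placeholder).
logs.put_retention_policy(
    logGroupName="/aws/rds/instance/my-db-instance/audit",
    retentionInDays=30,
)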

The Cost Impact of 30-Day Retention

This single change delivered two immediate benefits:

  • Eliminated the perpetual storage cost for any data older than 30 days
  • Minimized the volume of data scanned by Log Insights queries, reducing query costs

Step 2: The S3 Advantage for Long-Term Archival

With the operational window contained to 30 days, the next challenge was capturing and storing the long-term compliance data (5 years) cost-effectively.

The optimal solution is Amazon S3 with lifecycle policies. S3 allows data to move seamlessly through storage tiers, eventually landing in S3 Glacier Deep Archive where storage costs drop to approximately $0.00099 per GB—a 97% reduction compared to CloudWatch storage.
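
As a sketch, a lifecycle rule that moves archived logs to S3 Glacier Deep Archive after 30 days and expires them after five years could be configured with boto3 roughly like this (bucket name and prefix are placeholders):

import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="my-audit-log-archive",  # placeholder bucket
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "audit-logs-to-deep-archive",
                "Filter": {"Prefix": "rds-audit-logs/"},
                "Status": "Enabled",
                # Move objects to Deep Archive once they leave the operational window.
                "Transitions": [{"Days": 30, "StorageClass": "DEEP_ARCHIVE"}],
                # Delete after the 5-year compliance retention period.
                "Expiration": {"Days": 5 * 365},
            }
        ]
    },
)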

The math is compelling, but the real challenge was implementation: how do we get logs from RDS to S3 without continuing to pay those crushing CloudWatch ingestion fees?

In practice, this means the client could store the same 60 TB of cumulative audit logs over five years at a tiny fraction of what CloudWatch would have charged. If you want to see how Mydbops thinks about backups, long-term durability, and recovery windows on RDS, the blog on migrating MySQL data to RDS/Aurora using XtraBackup and the post on MySQL RDS Point-In-Time Recovery show how S3 is used across backup and restore workflows.

Step 3: Cutting Costs with Mydbops RDS LogShift

The final game-changing step ensured that future log volumes bypass the costly CloudWatch ingestion pipeline altogether and flow directly to S3 for archival. This is where the Mydbops RDS LogShift tool delivered the essential optimization.

By deploying RDS LogShift, we achieved immediate and sustained cost reduction that will compound over the entire 5-year retention period.

How RDS LogShift Achieved a 95% Saving

The core of our optimization lies in how Mydbops RDS LogShift strategically manages log flow, directly addressing the biggest cost drivers:

Bypassing Ingestion Fees (The Critical Save): This is the game-changer. RDS LogShift can either directly retrieve rotated audit logs from the RDS instance itself or pull existing logs within their short retention period in CloudWatch Logs. By doing this, the tool ensures your long-term archival data circumvents the exorbitant $0.50/GB CloudWatch ingestion fee entirely. This process becomes a simple data transfer, turning a major cost center into a minor operational expense.

Compression and Partitioning: The tool efficiently compresses logs (reducing storage volume) and pushes them to S3 with date-based partitioning. This makes it easy to download and query specific logs when needed for compliance audits or security investigations.
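
RDS LogShift itself is a Mydbops tool, so the following is only a rough boto3 illustration of the general pattern described here: pulling rotated audit log files straight from the RDS instance, compressing them, and writing them to S3 with date-based prefixes. The instance identifier, bucket, and prefix are placeholders.

import gzip
import boto3
from datetime import datetime, timezone

rds = boto3.client("rds")
s3 = boto3.client("s3")

INSTANCE = "my-db-instance"      # placeholder RDS instance identifier
BUCKET = "my-audit-log-archive"  # placeholder S3 bucket

for log in rds.describe_db_log_files(DBInstanceIdentifier=INSTANCE)["DescribeDBLogFiles"]:
    name = log["LogFileName"]
    if "audit" not in name:
        continue  # only ship audit logs

    # Download the full log file, portion by portion.
    chunks, marker = [], "0"
    while True:
        part = rds.download_db_log_file_portion(
            DBInstanceIdentifier=INSTANCE, LogFileName=name, Marker=marker
        )
        chunks.append(part.get("LogFileData") or "")
        if not part.get("AdditionalDataPending"):
            break
        marker = part["Marker"]

    # Compress and upload under a date-based prefix, e.g. rds-audit-logs/2025/12/09/...
    day = datetime.now(timezone.utc).strftime("%Y/%m/%d")
    key = f"rds-audit-logs/{day}/{name.replace('/', '_')}.gz"
    s3.put_object(Bucket=BUCKET, Key=key, Body=gzip.compress("".join(chunks).encode()))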

The Long-Term Results: Over $30,000 Saved

The cumulative savings achieved for our client over the 5-year retention period are substantial:

Cost overview

CloudWatch vs. optimized storage

Same audit log volume, two retention windows.

Period                 Cumulative log volume   CloudWatch cumulative cost   Optimized S3 cumulative cost   Total savings
1 year                 12 TB                   $6,512                       $350                           $6,162
5 years (≈95% saved)   60 TB                   $32,563                      $1,700                         $30,863

By implementing the Mydbops RDS LogShift solution, our client gained full compliance while cutting their log costs by 94.7%. They maintained the same security posture and audit capabilities—just at a fraction of the cost.

Turn Your Audit Log Liability into a Cost-Saving Success Story

If you’re storing MySQL RDS audit logs in CloudWatch without a retention strategy, you’re likely overpaying by thousands of dollars annually. The solution doesn’t require compromising on compliance or security—it just requires smarter architecture.

Ready to see your AWS bill drop while maintaining full compliance? Contact Mydbops today to implement the RDS LogShift solution and start saving immediately.

Planet for the MySQL Community

I replaced all my backup tools with this free open-source one

https://static0.makeuseofimages.com/wordpress/wp-content/uploads/wm/2025/12/using-borgbackup-for-file-restoration.jpg

I think I’ve had one of the messiest backup strategies for years. I’m constantly testing out new tools, and it takes a toll. I’ve used backup tools like Restic and mainstream options like Google Drive, Microsoft OneDrive, and Apple iCloud, to name a few. These options are robust but usually not totally under your control and often require paid plans for ample storage.

I finally came across BorgBackup (Borg for short), and it’s one tool that can replace all the backup options I’ve tried. It’s open source, free, and keeps my data under my control. More importantly, it’s robust enough for daily use and replaces every part of my previous setup with a single unified system.

Borg’s global deduplication

Borg eliminates repeated data across snapshots and machines

Of all the backup tools I’ve used, Borg has the most distinctive and effective approach to handling repeated data. It doesn’t back up entire files or scan for differences at a block level. Borg instead breaks data into variable-sized chunks based on its content. This ensures that even if you make a tiny change inside a massive file, only a few new chunks are stored, and the rest are reused. This approach becomes a long-term space-saving machine, going far beyond incremental backup.

The effect is felt most when you back up several machines to a single repository. It’s agnostic about which system produced the data. For example, if you have two computers that share identical system files, the chunks are referenced by multiple snapshots or machines but stored only once. This is called deduplication, and it recognizes just the data itself and not necessarily how it’s arranged or named.

Borg’s deduplication helps keep costs in check even when the number of devices or snapshots increases. Your storage only grows when something genuinely new is introduced. This is especially valuable to me because I maintain multiple computers.

OS: Linux, macOS, FreeBSD
Price model: Free

BorgBackup (Borg) is a command-line, deduplicating archiver with compression and encryption. It offers space-efficient storage of backups.

Fortress-grade security by default

Encryption designed for untrusted servers

Showing repository info including encryption
Afam Onyimadu / MUO

Encryption is a checkbox feature on some backup tools I’ve tried, but it’s a core part of Borg’s design. The data you store is encrypted and cryptographically protected against tampering, and your repository can be initialized with authenticated encryption. This way, the client will detect tampering or modification of repository data, and you get real protection even if someone modifies the raw chunks behind your back. The cloud provider has no insight into your files and, at best, will only see unreadable blobs.

I also appreciate that Borg implements a zero-knowledge architecture, and the encryption happens on my machine. I can then use off-site storage, a rented VPS, or a third-party provider as a mere location to deposit encrypted chunks. They don’t participate in encryption, hold the keys, or decrypt my data, even if compelled to do so.

I also love Borg’s approach to key management. It offers passphrase-protected keys and standalone keyfiles, which are great for different threat models. However, you must make proper key backup part of your workflow because losing the key also means losing the ability to restore data. This security model means you don’t need to fully trust a machine to host your backups. Borg’s encryption keeps it safe.
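As a small sketch of what that looks like in practice, here is a Python wrapper around the borg CLI (Borg 1.x syntax assumed; the repository path, passphrase handling, and key backup location are placeholders) that initializes an encrypted repository and exports the key for safekeeping:

import os
import subprocess

REPO = "ssh://user@backup-host/./borg-repo"  # placeholder repository location

def borg(*args):
    # Pass the passphrase via the environment, as Borg expects.
    env = dict(os.environ, BORG_PASSPHRASE="use-a-real-passphrase-manager")
    subprocess.run(["borg", *args], check=True, env=env)

# Create a repository with authenticated encryption; keys stay on the client.
borg("init", "--encryption=repokey", REPO)

# Export the key so losing the repo config doesn't mean losing your data.
borg("key", "export", REPO, "/secure/location/borg-key-backup")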

Instant restores

Instant restores with Borgbackup
Afam Onyimadu / MUO

The file restore process is one reason I dread backups. On some tools, I have to extract entire archives, then wait for gigabytes of data to be processed. After that, I still have to sift through folders to find the one file I actually need. Borg lets you mount your repository via FUSE, largely eliminating this friction. It exposes backups as a directory, so every snapshot is accessible like a normal local folder.

Borg only downloads and decrypts data when you actually open a specific file. It feels like lazy loading, and it lets you inspect archives almost instantly. Instant restores make Borg an ideal tool for backing up your entire digital life.
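
A minimal sketch of that restore workflow, again wrapping the borg CLI from Python (the repository, archive name, and mount point are placeholders; FUSE support is assumed to be installed):

import subprocess

REPO = "ssh://user@backup-host/./borg-repo"  # placeholder repository

def borg(*args):
    subprocess.run(["borg", *args], check=True)

borg("list", REPO)                                         # see available snapshots
borg("mount", f"{REPO}::laptop-2025-12-01", "/mnt/borg")   # browse one snapshot like a folder
# ...copy out the single file you need, then unmount:
borg("umount", "/mnt/borg")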

Backups that never bog down your system

Making Borg fast on everything from desktops to low-power NAS devices

Running backups with compression
Afam Onyimadu / MUO

After the initial full backup, Borg became remarkably lightweight. Deduplication does most of the work, so incremental runs are fast, and Borg is ideal for scheduled, high-frequency backups. I can run it for hours, and it barely touches my disk or network because it’s only moving tiny bits that have changed, not entire files.

You can choose between LZ4, ZSTD, or zlib (gzip-style) compression, and this adds another layer of efficiency. For large and frequently changing directories, I use LZ4 because it favors speed. ZSTD typically shrinks storage further without hurting performance on modern CPUs. Even though gzip-style compression is slower, it’s ideal for archival snapshots that won’t be touched again. Many other tools, unlike Borg, won’t let you tune compression per job.
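
For example, different jobs can be given different compression settings; the sketch below wraps the borg CLI from Python, and the repository, archive names, and paths are placeholders:

import subprocess

REPO = "ssh://user@backup-host/./borg-repo"  # placeholder repository

def borg(*args):
    subprocess.run(["borg", *args], check=True)

# Fast compression for large, frequently changing directories.
borg("create", "--compression", "lz4", f"{REPO}::work-{{now}}", "/home/me/projects")

# Better ratio at little CPU cost on modern hardware.
borg("create", "--compression", "zstd,3", f"{REPO}::photos-{{now}}", "/home/me/photos")

# Slower, denser compression for archival snapshots.
borg("create", "--compression", "zlib,6", f"{REPO}::archive-{{now}}", "/home/me/archive")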

BorgBackup also excels at network-aware scheduling. You can apply upload throttling to prevent Wi-Fi bandwidth from being overwhelmed during backups. These optimizations are more evident on Raspberry Pi NAS units, small VMs, older laptops, or other low-power hardware.

Even though Borg does a great job of creating backups, its biggest strength is in how it maintains them. Automated pruning, compaction, and verification workflows offer constant oversight that keeps the repository healthy.
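
A sketch of a typical maintenance pass (Borg 1.2+ is assumed for compact; the repository and retention counts are placeholders):

import subprocess

REPO = "ssh://user@backup-host/./borg-repo"  # placeholder repository

def borg(*args):
    subprocess.run(["borg", *args], check=True)

# Thin out old snapshots according to a retention policy.
borg("prune", "--keep-daily", "7", "--keep-weekly", "4", "--keep-monthly", "6", REPO)

# Reclaim space freed by pruning (Borg 1.2+).
borg("compact", REPO)

# Verify repository consistency.
borg("check", REPO)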

Borg keeps my entire archive consistent. However, it is a command-line tool, and if you’re non-technical or prefer a graphical user interface (GUI), backup tools like Duplicati may be better fits.

MakeUseOf

Introducing Lightweight MySQL MCP Server: Secure AI Database Access

https://askdba.net/wp-content/uploads/2025/12/gemini_generated_image_ilnfp3ilnfp3ilnf.png?w=624


A lightweight, secure, and extensible MCP (Model Context Protocol) server for MySQL designed to bridge the gap between relational databases and large language models (LLMs).

I’m releasing a new open-source project: mysql-mcp-server, a lightweight server that connects MySQL to AI tools via the Model Context Protocol (MCP). It’s designed to make MySQL safely accessible to language models: structured, read-only, and fully auditable.

This project started out of a practical need: as LLMs become part of everyday development workflows, there’s growing interest in using them to explore database schemas, write queries, or inspect real data. But exposing production databases directly to AI tools is a risk, especially without guardrails.

mysql-mcp-server offers a simple, secure solution. It provides a minimal but powerful MCP server that speaks directly to MySQL, while enforcing safety, observability, and structure.

What it does

mysql-mcp-server allows tools that speak MCP, such as Claude Desktop, to interact with MySQL in a controlled, read-only environment. It currently supports:

  • Listing databases, tables, and columns
  • Describing table schemas
  • Running parameterized SELECT queries with row limits
  • Introspecting indexes, views, triggers (optional tools)
  • Handling multiple connections through DSNs
  • Optional vector search support if using MyVector
  • Running as either a local MCP-compatible binary or a remote REST API server

By default, it rejects any unsafe operations such as INSERT, UPDATE, or DROP. The goal is to make the server safe enough to be used locally or in shared environments without unintended side effects.
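
The project’s own enforcement isn’t shown here, but the general guardrail pattern is easy to picture. The snippet below is only an illustrative, deliberately naive version of a read-only statement filter, not the project’s actual implementation; a real deployment would also rely on a read-only MySQL account.

import re

# Illustrative allow-list check, not the project's actual implementation.
ALLOWED = re.compile(r"^\s*(SELECT|SHOW|DESCRIBE|DESC|EXPLAIN)\b", re.IGNORECASE)
FORBIDDEN = re.compile(r"\b(INSERT|UPDATE|DELETE|DROP|ALTER|TRUNCATE|CREATE|GRANT)\b", re.IGNORECASE)

def is_read_only(sql: str) -> bool:
    # Reject multi-statement input outright.
    if ";" in sql.strip().rstrip(";"):
        return False
    return bool(ALLOWED.match(sql)) and not FORBIDDEN.search(sql)

print(is_read_only("SELECT * FROM users LIMIT 10"))  # True
print(is_read_only("DROP TABLE users"))              # False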

Why this matters

As more developers, analysts, and teams adopt LLMs for querying and documentation, there’s a gap between conversational interfaces and real database systems. Model Context Protocol helps bridge that gap by defining a set of safe, predictable tools that LLMs can use.

mysql-mcp-server brings that model to MySQL in a way that respects production safety while enabling exploration, inspection, and prototyping. It’s helpful in local development, devops workflows, support diagnostics, and even hybrid RAG scenarios when paired with a vector index.

Getting started

You can run it with Docker:

docker run -e MYSQL_DSN='user:pass@tcp(mysql-host:3306)/' \
  -p 7788:7788 ghcr.io/askdba/mysql-mcp-server:latest

Or install via Homebrew:

brew install askdba/tap/mysql-mcp-server
mysql-mcp-server

Once running, you can connect any MCP-compatible client (like Claude Desktop) to the server and begin issuing structured queries.

Use cases

  • Developers inspecting unfamiliar databases during onboarding
  • Data teams writing and validating SQL queries with AI assistance
  • Local RAG applications using MySQL and vector search with MyVector
  • Support and SRE teams needing read-only access for troubleshooting

Roadmap and contributions

This is an early release and still evolving. Planned additions include:

  • More granular introspection tools (e.g., constraints, stored procedures)
  • Connection pooling and config profiles
  • Structured logging and tracing
  • More examples for integrating with LLM environments

If you’re working on anything related to MySQL, open-source AI tooling, or database accessibility, I’d be glad to collaborate.

Learn more

If you have feedback, ideas, or want to contribute, the project is open and active. Pull requests, bug reports, and discussions are all welcome.

Planet MySQL


Lowest price ever: M4 MacBook Pro drops to $1,249 ($350 off)

https://photos5.appleinsider.com/gallery/66064-138426-macbook-pro-14-inch-1249-deal-xl.jpg

Better-than-Black Friday pricing has hit Apple’s M4 MacBook Pro, with the 14-inch laptop marked down to $1,249.

This blowout M4 MacBook Pro deal is likely to sell out – Image credit: Apple

The $350 discount beats Black Friday pricing by $50, with the laptop in stock in Silver with delivery by Christmas.

Buy for $1,249 ($350 off)

AppleInsider News

This website literally walks you through the coolest parts of the internet

https://static0.makeuseofimages.com/wordpress/wp-content/uploads/wm/2025/12/discovering-the-web-with-viralwalk.jpg

The World Wide Web is a massive universe that will take multiple lifetimes to completely explore. In fact, by habit, most people confine themselves to just a few selected parts of the web, usually Google, Facebook, YouTube, ChatGPT, and Instagram. That’s why I go out of my way to discover new and exciting websites.

I once discovered a website that allows me to listen to radio stations from around the world for free. But this time, I might have found one that’s even better: Viralwalk, a website that helps me discover some of the coolest sites on the web. Fair warning: don’t visit it if you don’t want to lose a few hours.

Viralwalk is the anti-algorithm you didn’t know you needed

A website that lets you wander instead of search

One of the most authentic and refreshing experiences you can get on the modern web comes from landing on a website that has absolutely no idea who you are. One with no history, tracking, or algorithm waiting to nudge you towards what it believes are your favorite online destinations. That is exactly what Viralwalk does. It does not give you a search bar or a limited set of categories; you simply get the Start Walking button. Clicking it opens up the internet in a way you haven’t experienced for ages.

The first time I used Viralwalk, the experience I got was closer to wandering through an unfamiliar city than actually browsing the internet. I wasn’t looking for anything in particular, but stumbled upon actual gems.

One such gem was the Random Things To Do website. This site gives you ideas for things to do when you’re bored. Here, I spent minutes playing random games, then found drawing and painting prompts and projects to build in Minecraft. I would never have known that such a fun but simple site existed.

Exploring the web through moods instead of menus

The Walk, Chill, and Flow modes create different kinds of discoveries

Viralwalk gives you a unique way to explore the web. One of my favorites is the Mood category, which gives you a curated set of search moods. A few of the Mood options stand out and easily resonate with me. I love the Late Night Coding Vibes mood. One of the pins on this mood shows trending GitHub repositories. It became an invaluable resource that I use to find new open-source projects to test and write about. I only discovered it by chance, thanks to Viralwalk.

I also love the Digital Reading Nook mood. It has a helpful catalog of reading and writing tools, some of which I already know and use, and others that were new to me. It also has a few newsletters that I’ve now signed up for.

There are also a few other categories that I love. Flow gives a short overview of a bunch of random websites. You can keep scrolling through Flow until you find a website that catches your attention. Then click Open Site to visit it. You need to be logged in to use Flow.

Chill is also an interesting option on Viralwalk. It’s the option that allows you to relax with ambient visuals. When I need to take a break, I navigate to Chill and leave it in full-screen mode. The visuals constantly change, and it has a calming effect, perfect for a break after a long day’s work.

Collecting the gems you uncover along the way

Liked sites and albums make Viralwalk feel like a digital scrapbook

Viralwalk shows you so many interesting corners of the internet, and your first instinct is to save them. This is where the Like button comes in handy. There’s no browsing history that takes you back to familiar paths, but liking a destination saves it in your profile, and you can always come back to browse the list of liked sites.

It also has an Albums feature, which turns your discoveries into something more personal. I have an album of clever mini-projects and another for beautifully designed websites. Whenever I use Viralwalk and stumble on a website I like, I can tag it by including it in albums that are organized by theme. Anyone can browse my albums if I make them public when I create them.

A minimalist platform that quietly invites you to explore

Viralwalk’s design makes wandering the internet feel peaceful

After logging in for the first time, I saw a simple message: "Good morning." The interface had soft colors and rounded cards, and this calm layout set the tone instantly. It felt like I was opening a small creative studio rather than visiting a website.

It has a Quick Note panel on the welcome page, and I didn’t expect to appreciate it as much as I did. Mid-exploration, I keep referring back to Quick Note to jot down ideas, especially when my exploration sparks ideas that I’d love to revisit. Of course, this isn’t as elaborate as dedicated note-taking apps like Joplin, but it is a helpful little feature.

Viralwalk, however, limits you to 20 discoveries per day, and you’ll need to budget $8 per month if you prefer the pro service, which unlocks unlimited site discoveries in Flow/Walk. But the free plan is more than enough for me, since I don’t plan to spend my entire day on Viralwalk.

Wandering, surprise, and digital serendipity

Viralwalk uniquely brings back the feeling of stumbling into something unexpected online. It perfectly recaptures the time when browsing meant exploring, not scrolling. It’s one of the best websites I’ve stumbled upon this year, and just as much fun as that website that allows you to look through other people’s windows.

MakeUseOf