Best NAS for Plex in 2024

https://static1.makeuseofimages.com/wordpress/wp-content/uploads/2024/05/53468292136_92438869da_o.jpg

One of the best ways to store your growing movie collection is with a NAS, especially when you pair it with Plex to act as a front end for your favorite films. It’s a much more dedicated solution than, say, repurposing an old computer, which tends to limit the number of HDDs and SSDs you can install. It’s also a relatively straightforward option, and you’ll have no problem finding the best NAS for Plex, whatever your budget.

front and back view of the qnap ts-264 NAS
QNAP

What makes the QNAP TS-264 such a gem is its completeness as a dedicated Plex NAS. Running the show are an Intel Celeron N5105/N5095 and 8GB of RAM. That CPU is particularly special because it handles 4K transcoding beautifully.

The QNAP TS-264 offers a respectable amount of storage with its two bays. That doesn’t sound like much, but consider this: each bay can handle a 22TB 3.5-inch HDD in RAID 1. If you’re desperate for more storage, you can pick a model from the same lineup with up to six bays.

Once you get comfortable with the QNAP TS-264, you can tinker and expand. You can add another 8GB of RAM for a total of 16GB, boost access performance with its dual M.2 slots, and use the PCIe expansion slot to accept QM2 and network cards.

QNAP TS-264

Best Overall

The QNAP TS-264 may be small, but it truly is a force to be reckoned with. As a NAS for Plex, it’s everything you could ask for: excellent 4K transcoding, 8GB of RAM, SSD slots for cache acceleration, and a PCIe expansion slot to boost performance further.

Pros

  • Dual M.2 NVMe SSD slots for cache acceleration
  • 2 expansion bays, but also available with 3, 4, and 6 bays
  • Dual 2.5 Gigabit Ethernet ports
  • Small and compact
Cons

  • Interface isn’t the most intuitive

synology ds223j nas on desk, next to keyboard and monitor
Synology

The Synology DS223j gets our seal of approval for being one of the easiest ways to set up a Plex server. It’s as close as you can get to plug-and-play and, as a bonus, works with Windows, Android, Mac, and iOS.

Another great feature of the Synology DS223j is its drive bays. You get two, each of which accepts 2.5-inch SSDs or 3.5-inch HDDs. A single bay can take an HDD as big as 20TB, so you can have up to 40TB of storage. That’s plenty for a healthy library of 4K movies or an even larger library at 1080p.

As a side note, you can save a little more by getting the Synology DS120j. However, this comes at the cost of one fewer bay and reduced performance.

Synology DS223j

Best Budget

Quiet and compact, the Synology DS223j is the perfect entry-level NAS for running a Plex server for yourself and a few people. It’s easy to set up, will hardly make a peep, and is compatible with major operating systems.

Pros

  • Good for at least three people at a time
  • Quiet operation
  • Smaller than an ITX PC case

front view of qnap tvs-h874x nas and available i/o
QNAP

The QNAP TVS-h874X is likely overkill for most people. However, if you have enough media to fill its eight bays or want to stream your Plex server to a large group of people, its 12th Gen Intel Core i9 processor is more than enough to deliver multiple 4K streams simultaneously.

In addition, the QNAP TVS-h874X has dual M.2 Gen4x4 NVMe SSD slots to speed things up. To further ensure a stable connection, you get dual 10 Gigabit Ethernet ports, with the option to add a 25 Gigabit Ethernet port via its PCIe Gen4 expansion slot. You can just as easily use the expansion slot for a GPU or, better yet, a QM2 card to boost performance.

QNAP TVS-h874x

Best Premium NAS

The QNAP TVS-h874X rocks a powerful 12th Gen Intel Core i9 processor, 64GB of DDR4 RAM, and eight bays to populate with either 2.5-inch SSDs or 3.5-inch HDDs. It’s built for streaming 4K video to multiple users and lets you maintain a massive movie library.

Pros

  • Can upgrade RAM to 128GB
  • Built-in HDMI port ready to go
  • PCIe Gen4 expansion slot opens the door for more performance
  • 8 bays to populate, each can handle HDDs over 20TB

asustor lockerstor 10 as6510t on desk, next to keyboard, mouse, tablet, and monitor
Asustor

If you have a movie collection that rivals streaming services, the Asustor Lockerstor 10 AS6510T NAS is what you need. With 10 storage bays to populate—using your choice of SSD or HDD—each of which can handle 18TB, you’ll never want for storage space.

Even better, the Asustor Lockerstor 10 AS6510T works wonderfully with RAID, so if you have space to spare, you can have backups in case of a storage failure. What’s most impressive, other than its high capacity, is the Ethernet ports at the back. You have four Ethernet ports—two 2.5 Gigabit ports and two 10 Gigabit ports. Couple that with the dual M.2 slots for fast caching, and data will rarely, if ever, bottleneck.

Asustor Lockerstor 10 AS6510T

Best High Capacity

The Asustor Lockerstor 10 AS6510T has 10 bays, each handling up to 18TB, for a total of 180TB of storage. It also has two M.2 NVMe SSD slots for fast caching and four Ethernet ports to ensure a stable connection.

Pros

  • Handles transcoding well over multiple users
  • Can upgrade RAM to 64GB
  • Surprisingly compact
  • Quiet operation and keeps itself cool
Cons

  • Installing more RAM and M.2 is a bother

synology ds723+ 2-bay diskstation NAS on desk next to apple imac
Synology

Windows and Linux aren’t the only operating systems that can run Plex; it’s compatible with macOS, too. In fact, if you’ve got a spare MacBook or iMac, they make pretty good Plex servers on their own, no NAS required. A much more dedicated solution, however, is the Synology DS723+.

The Synology DS723+ is relatively easy to set up. The manual is rather wordy, but you’ll have no problem following along. The best aspect of the DS723+ is its customization. It has just two storage bays, but if your movie collection outgrows your HDDs and SSDs, you can pick up the DX517 Expansion Unit for five extra drive bays. Even the 1 Gigabit Ethernet port can be swapped for something faster, like a 10GbE port, and there are dual M.2 NVMe slots for storage pooling and fast caching.

Synology DS723+

Best for Mac

Simple, fast, and expandable, the Synology DS723+ is a complete package that grows alongside your movie collection. And it’ll stick around for a while, considering the RAM, storage bays, and M.2 slots can all be upgraded when needed.

Pros

  • Flexible customization
  • Can upgrade RAM (16GB maximum)
  • Works with Mac and iOS
  • Fairly priced
Cons

  • Setup is easy but tedious (set aside at least a day)

FAQ

Q: What does NAS stand for?

It’s an acronym for “Network Attached Storage.”

Q: What do I use a NAS for?

Since a NAS is simply a storage device that can be accessed over a network, you can use it for just about anything related to storing files. You could back up important documents, images, music, movies, and so on. Essentially, whatever you store on a PC can be stored on a NAS instead.

Q: What is Plex?

Plex is nothing more than software that organizes your media and lets you access it quickly. While Plex has its own movies to stream, it’s more about maintaining your collection.

The fun part is that Plex also acts as a front end for your NAS. By setting up Plex to access your NAS, it can stream your content to one or more devices. It’s like having your own streaming service.

Q: What is RAID?

It’s an acronym for “Redundant Array of Inexpensive Disks.” To put it simply: with RAID, you store the same data in multiple locations. If an HDD or SSD were to fail, RAID ensures the data on that drive isn’t lost for good.
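As a rough illustration of the mirroring idea (RAID 1, the mode mentioned for the two-bay models above), here is a toy sketch in Python. Real RAID operates on raw disk blocks in a controller or driver, not dictionaries; this only shows why a mirrored pair survives a single-drive failure.

```python
# Toy model of RAID 1 (mirroring): every write goes to both "drives",
# so the array survives the complete failure of either one.
drives = [dict(), dict()]  # each dict maps block number -> data

def write_block(block_no, data):
    for drive in drives:            # mirror the write to every drive
        drive[block_no] = data

def read_block(block_no):
    for drive in drives:            # any surviving mirror can serve the read
        if block_no in drive:
            return drive[block_no]
    raise IOError("all mirrors failed")

write_block(0, b"movie data")
drives[0].clear()                   # simulate drive 0 dying outright
print(read_block(0))                # the mirror still has the data
```

Note the trade-off this implies: mirroring halves your usable capacity, since every byte is stored twice.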

Q: How many bays do I need for a NAS for Plex?

There’s no hard-and-fast rule for the number of bays your NAS needs for Plex. It really depends on the size of your library and the quality of your media.

Let’s put it this way: the average 4K Blu-ray rip can exceed 20GB, so a 1TB drive holds around 50 movies at most. FHD (1080p) movies, on the other hand, run around 1 to 2GB each, so the same drive holds roughly 500 to 1,000 movies.
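That arithmetic is easy to sketch. The numbers below just reuse the rough averages above (20GB per 4K rip, about 2GB per 1080p movie), not measured file sizes:

```python
# Back-of-the-envelope library sizing using the article's rough averages.
TB = 1000  # GB per TB (drive makers use decimal units)

def movies_per_drive(capacity_tb, avg_movie_gb):
    """How many movies of a given average size fit on a drive."""
    return int(capacity_tb * TB // avg_movie_gb)

print(movies_per_drive(1, 20))   # ~50 4K movies per 1TB
print(movies_per_drive(1, 2))    # ~500 1080p movies per 1TB
print(movies_per_drive(40, 20))  # two 20TB bays: ~2000 4K movies (RAID 1 halves this)
```

Scale the capacity figure to whatever NAS you pick, and remember that mirrored RAID cuts usable space in half.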

MakeUseOf

Apple’s durability testing is way more than a YouTuber can manage

https://photos5.appleinsider.com/gallery/59844-122490-000-lede-Drop-Test-xl.jpg

Apple will drop-test thousands of iPhones like this (Source: MKBHD)


Apple has revealed how iPhone and iPad drop tests should really be done — and are being done, thousands of times, in its durability testing labs.

YouTubers will buy one Apple device and smash it to pieces as pathetic clickbait. They always justify it, though, by saying these devices must be tested — and now Apple has politely suggested that they hold its beer.

Marques Brownlee, MKBHD — who doesn’t smash up the devices that his YouTube channel covers — has been shown around Apple’s testing labs for the iPhone. Every test any YouTuber ever makes on a device has been done by Apple first.

In a thread on Twitter, MKBHD shows how in Apple’s labs "there’s an entire room of machines for water and ingress testing." They range from a simulation of light rain to "high pressure spray from a literal firehose."

Then there’s the drop test, so beloved of YouTubers. Except in Apple’s case, industrial robots perform hundreds of drops, and each drop is monitored in slow motion.

Add to this a shake test that can mimic an iPhone being in the pocket of someone on a motorbike, and overall Apple tests to a degree that is inconceivable for any individual YouTuber. And if anyone did manage to match Apple’s own testing, they’d also have to buy over 10,000 devices.

That’s how many units of a new iPhone model go through preposterous levels of durability testing. At retail, that’s over four million dollars of iPhone, even if you chose the lowest-cost iPhone SE.

Apple’s head of hardware engineering, John Ternus, says that the company does pay attention to durability issues once a device is on sale, but that this all helps improve the in-house testing.

"We’ve found when we’ll pull units back from the field and we’ll find things and figure out how do we build a test that represents maybe this new use case that somebody’s doing in the field," said Ternus, "and then that becomes a part of our test suite."

Ternus also argued that durability is the best option for the customer and the planet, even if to achieve that, Apple has to make it harder to repair devices.

"It’s objectively better for the customer to have that reliability," he said, "and it’s ultimately better for the planet because the failure rate since we got to that point have just dropped, it’s plummeted."

"So you can actually do the math and figure out there’s a threshold at which if I can make it this durable," continued Ternus, "then it’s better to have it a little bit harder to repair because it’s going to net out ahead."

AppleInsider News

Huge Google Search Document Leak Reveals Inner Workings of Ranking Algorithm

Danny Goodwin reports via Search Engine Land: A trove of leaked Google documents has given us an unprecedented look inside Google Search and revealed some of the most important elements Google uses to rank content. Thousands of documents, which appear to come from Google’s internal Content API Warehouse, were released March 13 on GitHub by an automated bot called yoshi-code-bot. These documents were shared with Rand Fishkin, SparkToro co-founder, earlier this month.
What’s inside. Here’s what we know about the internal documents, thanks to Fishkin and [Michael King, iPullRank CEO]:

  • Current: The documentation indicates this information is accurate as of March.
  • Ranking features: 2,596 modules are represented in the API documentation, with 14,014 attributes.
  • Weighting: The documents did not specify how any of the ranking features are weighted, only that they exist.
  • Twiddlers: These are re-ranking functions that "can adjust the information retrieval score of a document or change the ranking of a document," according to King.
  • Demotions: Content can be demoted for a variety of reasons, such as: a link doesn’t match the target site; SERP signals indicate user dissatisfaction; product reviews; location; exact-match domains; and/or porn.
  • Change history: Google apparently keeps a copy of every version of every page it has ever indexed, meaning Google can "remember" every change ever made to a page. However, Google only uses the last 20 changes of a URL when analyzing links.

Other interesting findings. According to Google’s internal documents:

  • Freshness matters: Google looks at dates in the byline (bylineDate), URL (syntacticDate), and on-page content (semanticDate).
  • To determine whether a document is or isn’t a core topic of the website, Google vectorizes pages and sites, then compares the page embeddings (siteRadius) to the site embeddings (siteFocusScore).
  • Google stores domain registration information (RegistrationInfo).
  • Page titles still matter: Google has a feature called titlematchScore that is believed to measure how well a page title matches a query.
  • Google measures the average weighted font size of terms in documents (avgTermWeight) and anchor text.
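One of the findings above, comparing page embeddings to site embeddings, is easy to picture with a toy example. The leaked documentation names the attributes (siteRadius, siteFocusScore) but not the math behind them, so the cosine-similarity sketch below is purely illustrative and is not Google’s actual implementation:

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors: 1.0 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def site_centroid(page_vectors):
    """Average the page vectors into one site-level vector."""
    n = len(page_vectors)
    dims = len(page_vectors[0])
    return [sum(v[i] for v in page_vectors) / n for i in range(dims)]

# Toy 2-D "embeddings": two on-topic pages and one off-topic page.
pages = [[0.9, 0.1], [0.8, 0.2], [0.1, 0.9]]
centroid = site_centroid(pages)
for p in pages:
    print(round(cosine(p, centroid), 3))  # the off-topic page scores lowest
```

Under this reading, a page far from its site’s centroid would look less like a "core topic" of the site, which matches the SEO advice in the next paragraph about topical focus.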
What does it all mean? According to King: "[Y]ou need to drive more successful clicks using a broader set of queries and earn more link diversity if you want to continue to rank. Conceptually, it makes sense because a very strong piece of content will do that. A focus on driving more qualified traffic to a better user experience will send signals to Google that your page deserves to rank." […] Fishkin added: "If there was one universal piece of advice I had for marketers seeking to broadly improve their organic search rankings and traffic, it would be: ‘Build a notable, popular, well-recognized brand in your space, outside of Google search.’"


Read more of this story at Slashdot.

Slashdot

Audit MySQL Databases in Laravel With the DB Auditor Package

https://picperf.io/https://laravelnews.s3.amazonaws.com/featured-images/futuristic-database-audit-02.jpg

Audit MySQL Databases in Laravel With the DB Auditor Package

The DB Auditor package for Laravel helps you audit your MySQL database standards and provides options to add missing constraints via CLI:

php artisan db:audit
DB Auditor table report

This package can help you identify areas of your database that need work during development to optimize your production database. It offers the following features:

  • Audit and review an existing MySQL database
  • Scan MySQL databases to provide insights into MySQL standards and constraints
  • Apply scan results automatically via the command line
  • Show a list of tables that fail audit and don’t follow recommended standards

You can access all of the tools via Laravel’s Artisan console. One command I found interesting is the db:track command, which gives you information about migrations, such as when they were created, the fields created, and which Git user created them.

The project’s readme also includes instructions for enabling a web UI feature to see recommendations from a web browser. On GitHub, you can learn more about this package, get full installation instructions, and view the source code at vcian/laravel-db-auditor.


The post Audit MySQL Databases in Laravel With the DB Auditor Package appeared first on Laravel News.

Join the Laravel Newsletter to get all the latest Laravel articles like this directly in your inbox.

Laravel News

How to use Cloudflare R2 with Laravel file storage

https://assets.njoguamos.me.ke/cdn-cgi/image/f=webp,w=1200,h=630,fit=cover/ogs/how-to-use-cloudflare-r2-with-laravel-file-storage.jpg

Laravel offers a powerful filesystem abstraction using Flysystem, a file storage library for PHP by Frank de Jonge. Flysystem provides one interface to interact with many filesystems, including Amazon Web Services (AWS) S3-compatible object storage. There are multiple S3-compatible object storage services on the market today, including Amazon S3, DigitalOcean Spaces, Linode Object Storage, and Cloudflare R2.

When deciding which S3 object storage service to use, several factors come into play: project requirements, available features, location, and cost. For most users, including myself, the decision often comes down to cost. I choose Cloudflare R2 as my preferred option due to its affordability and zero egress fees.

Let me show you how you can start using Cloudflare R2 object storage in your Laravel project.

Prerequisites

Before we proceed, you should have a basic understanding of Laravel Storage, or at least know what it is and why we need it. To subscribe to Cloudflare R2, you will need a credit card or PayPal. R2 comes with a free tier, and you only pay as you go once you exceed its limits. You can estimate your cost using the R2 cost calculator.

Create a Cloudflare account

To create a bucket, you must subscribe to R2 from the Cloudflare dashboard, which means you need a Cloudflare account. If you already have an account, skip this section.

a screenshot of cloudflare sign up page

After signing up, click the Explore all products link, which will redirect you to the Cloudflare dashboard. You must verify your email before you can start using Cloudflare services.

a screenshot of cloudflare email verification prompt

Create a new R2 bucket

Next, create a new bucket: a named space on the server that holds your objects. Open the Cloudflare dashboard and select R2 in the left sidebar. You will be presented with an R2 subscription form; fill in the billing and credit card details, or use PayPal. If you already have a bucket, skip this section.

a screenshot of cloudflare r2 subscription form

If the subscription is successful, you will be presented with an R2 dashboard. Click Create a bucket.

a screenshot of cloudflare r2 home page

Provide a bucket name. Use your preferred naming convention. Update or retain the other default options, then click Create Bucket.

a screenshot of cloudflare r2 create bucket form

You should be presented with an empty R2 bucket ready for use upon successful creation.

a screenshot of cloudflare r2 empty bucket

Generate R2 credentials

To manage the bucket contents using Laravel, you must generate credentials for the S3 client. Select R2 from the left sidebar and then choose Manage R2 API tokens.

a screenshot of cloudflare link to manage api tokens

On the next page, select Create API token.

a screenshot of cloudflare link to create api token

Provide your preferred token name and select Object Read & Write permission. Apply the permission to specific buckets only (the bucket we created).

a screenshot of cloudflare create api token form

After your token has been created, review your Access Key ID and Secret Access Key values. Save them as R2_ACCESS_KEY_ID and R2_SECRET_ACCESS_KEY respectively, and save the jurisdiction-specific endpoint for S3 clients as R2_ENDPOINT. You will need these values in the next section.

a screenshot of cloudflare r2 s3 access keys

You must record both values before proceeding, as the Secret Access Keys cannot be retrieved later.

Once you save the keys, click Finish.

Setup Laravel to use R2

In your Laravel app, open the .env file and add the following credentials.

R2_ACCESS_KEY_ID=078d********cb09bf********df1
R2_SECRET_ACCESS_KEY=ff36f125b5******0072630595703a5
R2_BUCKET=laravel-demo-prod
R2_ENDPOINT=https://95f*************12ef5.r2.cloudflarestorage.com

Then open ./config/filesystems.php and add a new r2 disk as follows:

<?php

return [

    'default' => env('FILESYSTEM_DISK', 'local'),

    'disks' => [

        'local' => [ /*...*/ ],

        'public' => [ /*...*/ ],

        's3' => [ /*...*/ ],

        'r2' => [
            'driver' => 's3',
            'key' => env('R2_ACCESS_KEY_ID'),
            'secret' => env('R2_SECRET_ACCESS_KEY'),
            'region' => 'auto',
            'bucket' => env('R2_BUCKET'),
            'url' => env('R2_URL'),
            'endpoint' => env('R2_ENDPOINT'),
            'use_path_style_endpoint' => env('R2_USE_PATH_STYLE_ENDPOINT', false),
            'throw' => false,
        ],

    ],

    'links' => [ /*...*/ ],

];

Finally, install the Flysystem S3 package via the Composer package manager.

composer require league/flysystem-aws-s3-v3 "^3.0" --with-all-dependencies

Testing the integration

To test if R2 integration was successful, try uploading a sample file. For example, you can try to upload the public/favicon.ico, which comes with all Laravel applications.

php artisan tinker

The command should open Psy Shell, where you can run the following code. If the command is successful, the Psy Shell should print true.

> $file = public_path('favicon.ico')
= "/Users/njoguamos/Code/r2/public/favicon.ico"

> Storage::disk('r2')->put('favicon.ico', file_get_contents($file))
= true


In the code above, we set the $file variable to the absolute path of the favicon.ico file in the public directory, then use the Storage facade to write its contents to the r2 disk. Note that put() expects file contents (or a stream) rather than a path, which is why we read the file with file_get_contents() first.

To verify that the file was uploaded, run the following in the same Psy Shell.

> Storage::disk('r2')->allFiles()
= [
 "favicon.ico",
 ]
 

Open the R2 bucket dashboard on Cloudflare to verify the upload.

a screenshot of cloudflare r2 bucket with items

If you have made it this far, congratulations! You have successfully integrated Cloudflare R2 with your Laravel application.

Conclusion

Integrating Cloudflare R2 with Laravel provides cost-effective and efficient object storage. Follow the guide to set up a Cloudflare R2 bucket and credentials and configure Laravel. R2 offers a free tier, pay-as-you-go options, and zero egress fees, making it an excellent choice for optimizing costs and performance. I hope this article was helpful. Please leave a comment in the discussion section.

Laravel News Links

Grenade Pill Cache

https://theawesomer.com/photos/2024/05/grenade_pill_cache_t.jpg

Grenade Pill Cache


This grenade-shaped aluminum container from edcfans is made for carrying pills, spare cash, matches, or other small items you need to keep dry. Its screw-top lid has a rubber O-ring to seal out moisture and air, and it holds items up to 1.73″ tall x 0.75″ wide. They also make a version that looks like a tiny bomb.

The Awesomer

A Root-Server at the Internet’s Core Lost Touch With Its Peers. We Still Don’t Know Why.

A server maintained by Cogent Communications, one of the 13 root servers crucial to the Internet’s domain name system, fell out of sync with its peers for over four days due to an unexplained glitch. This issue, which could have caused worldwide stability and security problems, was resolved on Wednesday. The root servers store cryptographic keys necessary for authenticating intermediate servers under the DNSSEC mechanism. Inconsistencies in these keys across the 13 servers could lead to an increased risk of attacks such as DNS cache poisoning. Engineers postponed planned updates to the .gov and .int domain name servers’ DNSSEC to use ECDSA cryptographic keys until the situation stabilized. Cogent stated that it became aware of the issue on Tuesday and resolved it within 25 hours. ArsTechnica, which has a great writeup about the incident, adds: Initially, some people speculated that the depeering of Tata Communications, the c-root site outage, and the update errors to the c-root itself were all connected somehow. Given the vagueness of the statement, the relation of those events still isn’t entirely clear.


Read more of this story at Slashdot.

Slashdot