Review: Synology DS-1618+ network attached storage device is the best kind of overkill for most

https://ift.tt/3c1UcPt

More people than ever are working from home, and local-area network storage needs aren’t going down. Don’t cheap out on a low-end network storage device; get the Synology DS-1618+ and set yourself up for the future.

If you have one computer, with one user, doing one task, then the storage space you have or can easily add externally is probably sufficient. But as computers, users, or tasks multiply, so does storage. Add in any kind of large file storage need, like accumulation of videos, and it can get out of hand quickly.

Sure, you can keep adding external drives through RAID enclosures like we have, but that can get unwieldy if you have a lot of data and that’s aggravated by multiple users or computers. Cataloging what’s on which drive can be a pain too.

We’ve said it before — we like home servers, and we like the Mac mini for that task. But we also like a network attached storage device (NAS): a box we can set in the corner and just let it serve files.

But, it’s all too easy to buy a network attached storage device that doesn’t have enough power for the future, and have to re-buy. This increases cost, and potentially induces a migration nightmare.

Buy what you need from the start of the project. Get something like Synology’s DS-1618+ — which we’ve been using for some time now.

Set and forget

The DS-1618+ is plain. It is a black box, specifically designed to sit unobtrusively in a (well ventilated) corner of an office. Because of noise, you don’t really want it near your workstation or in a bedroom that doubles as an office — but more on that in a bit.

The unit gives the user six bays for 3.5-inch hard drives or 2.5-inch SSDs — but we recommend the former for cost and data density reasons. If this isn’t sufficient, up to two DX517 expansion chassis can be connected through the pair of eSATA ports on the back of the unit and easily added to the existing RAID.

The DS-1618+ has three USB 3.1 type A connections for expansion, or to back up the entire RAID, assuming you have a large enough external array to hold the contents of the NAS. If you’re so inclined, you can connect a powered USB-A hub to any of these ports for backup or other expansion. And, if you need to, you can connect a USB-only printer to the Synology to turn it into a network printer.

Ports on the rear of a Synology DS-1618+ network attached storage device

Networking is provided by four Gigabit Ethernet ports, with the unit supporting link aggregation — in essence, with some routers, you can use all four ports to increase incoming and outgoing bandwidth. But this can get expensive, as only some routers support it. Besides, in a home office or small business setup with the unit loaded with hard drives, this is overkill.

Extra expansion possibilities are opened up by a PCI-E x4 expansion slot. You can’t just jam any old PCI-E card in there, but Synology does have a list of compatible cards that add features such as 10-gig Ethernet, fiber-optic networking, and SSD caching for faster random access to things like databases.

Most users won’t need to use this slot for anything. But, it is a good inclusion for the future. The Mac mini has a 10-gig Ethernet option, and the iMac Pro and Mac Pro have it by default. Routers and network switches capable of the speed are coming down in price, and in the next few years, they will become more ubiquitous.

The whole package is powered by an Intel Atom C3538 CPU, with 4GB of DDR4 RAM standard. RAM is expandable to 32GB via two SO-DIMM slots on the underside of the machine. More on this, and why and when you’d want more RAM in a network peripheral, in a bit.

Loading a hard drive into a Synology DS-1618+ network attached storage device

The chassis is metal and well-engineered. Drive trays are tool-less, beyond the key that’s included in the system to pop the tray out.

Two plastic rails hold a drive firmly in place in the mostly-metal tray. The tray then slides in, and with the lever lock on the tray, there is no doubt that you’ve made a good mechanical connection to the SATA connector in the NAS itself.

Drive tray assembled and ready to get installed

But, in operation and under load, we’d like it to be a little quieter. Under full I/O and CPU load, the unit doesn’t vibrate, but between fan noise and drive noise, it hits 61 dBA at three feet from the enclosure.

It is replete with LEDs and incredibly blinky when in use. This is expected, given that it has up to six hard drives, and it is important to be able to see at a glance that everything is functioning okay. But again, you probably don’t want it physically near your workstation.

Setting up the Synology DS-1618+

Setup is about the easiest we’ve ever seen for a network attached storage device. The first step is to load up the device with drives. Synology makes it easy to see in advance how much storage you’re going to get from the unit with a tool where you can virtually load it up and see what you get — and we highly recommend fiddling with this before you buy drives.
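As a quick worked example of the kind of math the calculator does for you (our own back-of-the-envelope estimate, assuming six identical 4TB drives and Synology Hybrid RAID’s default single-drive redundancy, which behaves like RAID 5 with uniform disks):

(6 - 1) × 4TB = 20TB of usable space, before formatting overhead

Mixed drive sizes or two-drive redundancy change that result considerably, which is exactly why the tool is worth fiddling with.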

Drive slots interior to the Synology DS-1618+

Synology has a list of recommended drives for the unit, and as a general rule, we do recommend adhering to it. That said, in the course of our testing, we’ve used an assortment of drive sizes and manufacturers, and found that heat and data transfer consistency change little.

Synology also has an online tool so you can see what you’re getting into for DSM software setup before you really get going. After you’ve taken a look at that, and following the drive installation, plug it into power, and use Safari or other browsers to go to find.synology.com.

Synology DS-1618+ loaded with drives

This loads up the configuration page for the device, and lets you set up an administrative user and format the drives in the unit. Synology and AppleInsider recommend Synology Hybrid RAID for flexibility. The unit also supports RAID 0, RAID 1, RAID 5, RAID 6, RAID 10, RAID 50, and RAID 60 in hardware — but the drive requirements for each are left as an exercise for the reader.

After formatting, the interface has you configure the basics of file sharing. Using a URL that Synology provides that the NAS itself keeps up to date with your internet-facing IP address, you can also access your files and some of your services outside of your home network, all secured by encryption and password.

We have seen some probing from the Internet, looking for a Synology. The basic security is robust, assuming you’re using best practices for user and password selection. And this is enhanced by notifications: DSM will automatically block IP addresses that hammer on the NAS looking for access, and it will email you that it has done so, if you’ve configured it appropriately.

Under the basic configuration of DSM, file sharing is basic SMB — but this can be tailored to a ridiculous extent. Not only can you add additional services like SFTP, BitTorrent, and the like; you can also restrict the times files are available, lock down specific folders with a password, and prevent certain folders from being seen by a user at all.

Synology’s DSM also allows for full-drive AES 256-bit encryption without a large amount of performance loss. But if you do this, don’t then use the machine for anything that needs any notable processing horsepower. You can upgrade the RAM, but you can’t upgrade the processor.

The use of the device goes so much deeper than this, though.

What do I want a network attached storage device for, anyway?

Beyond just serving files, a network attached storage device like the Synology DS-1618+ has an expandable ecosystem, very similar to Apple’s App Store. Software can be added to extend the usability of the device — and you can even install Windows on it.

Synology settings page in Safari — your gateway to installing packages and configuring the unit

For most Apple Mac and iPad users, the most utility beyond SMB file sharing will come from an integrated iTunes sharing package, which is easily configured through the web-based interface. Additionally, the unit can be set up as a network Time Machine target, even for Macs running OS X El Capitan and older, via AFP services.

Other software available for the unit includes a Plex server, built-in DLNA video streaming, and integration with Dropbox and other cloud-based storage services.

Regarding that video streaming, though — if you use the iTunes server, all of your videos and music need to be encoded properly for iTunes. Basically, you’re front-loading all the processor work that needs to be done for a video, and keeping that work off the NAS itself.

Services like Plex will transcode just about any media format on the NAS prior to streaming, but this takes some effort from the hardware. This is commonly where lesser NAS devices fall down.

In our testing, we consistently can stream three 1080p videos simultaneously with no dropped frames. But it will only realistically manage one 4K stream, and the enclosure’s fans are very, very loud during the process.

The DS-1618+ also comes with licenses for two IP-based cameras, to use the unit as the core of a network-based surveillance system. Up to 40 cameras are supported, at additional cost.

As you add services and load, that 4GB of RAM in the unit is consumed very rapidly. It uses virtual memory like every other modern computer does, but as that footprint increases, performance drops. We didn’t run into this when running a Time Machine backup, an iTunes server, and regular file service. When we added a Plex server, we started seeing some performance hits even before we started streaming anything.

So, if you’re going basic, 4GB is probably enough. But, if you plan on running a lot of services, get more RAM. We put 16GB in our unit and didn’t hit any more performance issues induced by low RAM.

DS-1618+ transfer speeds

The Synology DS-1618+ will saturate your home network if you let it. With six 7200 RPM drives installed in the NAS, when copying 20GB of large files, we saw 110.1 megabytes per second read, and 109.1 megabytes per second write speeds. The impact of smaller files varies, but when copying 20GB of MP3 files across the network, we saw that same 110 megabytes per second read, but 81 megabytes per second write.
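For context, those figures sit right at the practical ceiling of Gigabit Ethernet. As a rough check, assuming the typical five to ten percent of TCP/IP and SMB protocol overhead:

1,000 Mb/s ÷ 8 = 125 MB/s raw, and 125 MB/s × 0.9 ≈ 113 MB/s usable

So at roughly 110 MB/s, the network link, and not the drive array, is the bottleneck.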

This changes when using 10-gig Ethernet through a Netgear XS505M switch, and to a Mac mini with that 10-gig option. Using that setup, we got about 400 megabytes per second read on big and small files, and 390 megabytes per second write of large files, and 220 megabytes per second write on the MP3 folder.

Buy what you need for tomorrow, not today

The DS-1618+ is not inexpensive. It is Mac mini-priced, if you’re looking to stay inside the Apple ecosystem for your server needs.

From a price perspective, you’re looking at $799 for either the DS-1618+ or the Mac mini on the low-end, assuming you’re using the 2018 Mac mini. Drive prices vary, depending on what you pick up, but $100 per 4TB isn’t an unrealistic estimation. On top of that, for the Mac mini, you’re looking at $200-ish for a USB 3.2 type C enclosure with the limited macOS software RAID options, and much more if you want hardware RAID support — unless you just want four drives in individual enclosures lying about.

From a performance standpoint, that Mac mini home server is more flexible overall, and more powerful. However, it is also more expensive when you consider those additional expenses, and in some respects, not as easy to set up for network services. And, that PCI-E slot for expansion of the NAS is nice.

In a home with low network storage needs or an office that sees a basic need but isn’t sure where to jump in, the Synology DS-1618+ is overkill. But, as you start adding things like media serving and the like, plus the inevitable creep of what you offload onto a NAS once you get started, the unit is a cost-effective way to get a powerful storage solution not just for now, but for the future as well.

Importantly, though, don’t get complacent with backup. It is far too easy to get a NAS in your office and consider yourself safe. A single-facility failure, say, an office fire, will still wipe out all of your data, if you don’t have some sort of off-site backup.

There are certainly cheaper network attached storage units, but they are easy to outgrow. The 1618+ is an excellent, and expandable, starting point.

  • Power to price ratio is excellent
  • Excellent expandability
  • Software configuration more than just about anybody needs
  • Loud and bright
  • Expansion chassis for more drives are expensive
  • Similar in price to a Mac mini

Where to buy

The Synology DS-1618+ sells for $749 at your choice of retailers, including Amazon, B&H and Adorama.

macintosh

via AppleInsider https://ift.tt/3dGGYcl

May 25, 2020 at 06:34PM

Raw Combat Footage Shows Strafing A-10 Warthog Save Ground Troops

https://ift.tt/2Ty8Rvu

a-10 thunderbolt II warthog cannon

By Staff Sgt. Steve Thurow – A-10 Thunderbolt II, Public Domain, Link

By Travis Smola

When soldiers in our armed forces need air support, they welcome the sound of an A-10 Thunderbolt II, AKA the A-10 Warthog. This awesome plane and her pilots have been in service since the 1970s, despite repeated efforts over the years to kill the program.

The Warthog is a favorite of pilots and troops alike because of its ability to support soldiers on the ground. It’s been called a gun with a plane built around it, because the devastating 30mm autocannon in its nose can fire armor-piercing depleted uranium shells at up to 3,900 rounds a minute.

That’s a lot of firepower to the rescue when ground troops are in a pinch. In the video below, a convoy comes under heavy attack, and the A-10 comes to its rescue. [NSFW: there’s some harsh language in the video.]

You can hear the relief in the voices of these soldiers at the distinctive sound of the A-10’s cannon pounding the enemy position.

The sound of the A-10 is intense. The rounds hit the ground before you hear the BRRRR buzzing sound of that 30mm cannon. There’s a common saying: “If you hear an A-10 shooting, you weren’t the plane’s intended target.” Imagine the psychological effect this plane must have on anyone on the receiving end of its fire.

A-10 Thunderbolt II GAU-8 cannon

By USAF – nationalmuseum.af.mil, Public Domain, Link

Seeing raw combat footage like this reminds us that the movies aren’t accurate when it comes to portraying how things often play out in real life on the battlefield. It just makes us even more thankful for the dangerous job performed by our brave service men and women in uniform.

guns

via The Truth About Guns https://ift.tt/1TozHfp

May 23, 2020 at 04:00PM

Push deploy a Laravel app for free with GitHub Actions

https://ift.tt/2A4dhDC

For many teams, it makes sense to use services like Ploi or Forge. They provision servers for you, configure push deploys, deal with backups, and many other things.

You can also use a Platform-as-a-Service like Heroku. PaaS is close to serverless in the sense that you don’t think about servers, but the primary difference between PaaS and serverless is that serverless scales automatically, whereas PaaS merely hides the fact that there are servers to deal with. And — speaking of serverless — you can, of course, use a service like Laravel Vapor and get push deploys.

However, if configuring servers is something you’re used to, you might be interested only in the push to deploy part of these services. And as such, you might not want to pay for the services above only to use a single feature.

Luckily, configuring push deploys is super easy to do yourself — and free! — if you’re using GitHub. Specifically, we’re going to be using GitHub Actions.

Prerequisites

This article assumes you know how to configure webservers, and as such it will only guide you through the Continuous Deployment part — not the actual webservers. You will need:

  1. A configured webserver running PHP
  2. A GitHub repository

The git setup

  • Some public/ assets are in .gitignore, but are built on GitHub
  • master branch is used for development
  • production is pushed and deployed
  • deploy is created by the Action, by taking production and adding a commit with the built assets

So the code flows like this: master --> production --> deploy.

A few notes

We compile front-end assets inside the GitHub Action — not on your computer, nor on your server. This means that you don’t have to run npm locally to deploy, and that the server’s downtime is as short as possible.

We run tests locally, not in the CI Action. The reason for this is to allow you to deploy hotfixes even if they break some tests. If you want to run tests inside the Action, then simply look up some GitHub Actions phpunit workflow and copy the necessary steps. You can use this example.

1. Server deployment script

Add this bash script to your repository, and name it server_deploy.sh.

This script will be executed on the server to pull and deploy the code from GitHub.

#!/bin/sh

set -e

echo "Deploying application ..."

# Enter maintenance mode
(php artisan down --message 'The app is being (quickly!) updated. Please try again in a minute.') || true

# Update codebase
git fetch origin deploy
git reset --hard origin/deploy

# Install dependencies based on lock file
composer install --no-interaction --prefer-dist --optimize-autoloader

# Migrate database
php artisan migrate --force

# Note: If you're using queue workers, this is the place to restart them.
# ...

# Clear cache
php artisan optimize

# Reload PHP to update opcache
echo "" | sudo -S service php7.4-fpm reload

# Exit maintenance mode
php artisan up

echo "Application deployed!"

The process explained:

  1. We’re putting the application into maintenance mode and showing a sensible message to the users.
  2. We’re fetching the deploy branch and hard resetting the local branch to the fetched version.
  3. We’re updating composer dependencies based on the lock file. Make sure your composer.lock file is in your repository, and not part of your .gitignore. This ensures the production environment uses the exact same versions of packages as your local environment.
  4. We’re running database migrations.
  5. We’re updating Laravel & php-fpm caches. If you’re not using PHP 7.4, change the version in that command.
  6. We’re putting the server back up.

Note that the server is always on the deploy branch. Also note that we’re putting the server down for the shortest duration possible — only for the composer install, migrations and cache updating. The app needs to go down to avoid requests coming in when the codebase and database are not in sync — it would be irresponsible to simply run those commands without putting the server into maintenance mode first.

And a final note, the reason we’re wrapping the php artisan down command in (...) || true is that deployments sometimes go wrong. And the down command exits with 1 if the application is already down, which would make it impossible to deploy fixes after the previous deployment errored halfway through.

2. Local deployment script

This script is used in your local environment when you want to deploy to production. Ideally, if you work in a team, you’ll also have a CI Action running phpunit as a safeguard for pull requests targeting the production branch. For inspiration, see the link to the Action example in the notes above and make it run on pull_request only.

Store the local deployment script as deploy.sh:

#!/bin/sh

set -e

vendor/bin/phpunit

(git push) || true

git checkout production
git merge master
git push origin production

git checkout master

This script is simpler. We’re running tests, pushing changes (if we haven’t pushed yet; it’s assumed we’re on master), switching to production, merging changes from master, and pushing production. Then we switch back to master.

3. The GitHub Action

Store this as .github/workflows/main.yml

name: CD

on:
  push:
    branches: [ production ]

jobs:
  deploy:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v2
        with:
          token: ${{ secrets.PAT }} # a repository secret holding a Personal Access Token (see below)

      - name: Set up Node
        uses: actions/setup-node@v1
        with:
          node-version: '12.x'

      - run: npm install
      - run: npm run production

      - name: Commit built assets
        run: |
          git config --local user.email "action@github.com"
          git config --local user.name "GitHub Action"
          git checkout -B deploy
          git add -f public/
          git commit -m "Build front-end assets"
          git push -f origin deploy

      - name: Deploy to production
        uses: appleboy/ssh-action@master
        with:
          username: YOUR USERNAME GOES HERE
          host: YOUR SERVER'S HOSTNAME GOES HERE
          password: ${{ secrets.SSH_PASSWORD }} # a repository secret holding the SSH password (see below)
          script: 'cd /var/www/html && ./server_deploy.sh'

Explained:

  1. We set up Node.
  2. We build the front-end assets.
  3. We force-checkout to deploy and commit the assets. The deploy branch is temporary and only holds the code deployed on the server. It doesn’t have a linear history — the asset compilation is never part of the production branch history — which is why we always have to force checkout that branch.
  4. We force-push the deploy branch to origin.
  5. We connect to the server via SSH and execute server_deploy.sh in the webserver root.

Note that you need to store two secrets in the repository:

  1. a Personal Access Token for a GitHub account with write access to the repository
  2. the SSH password

If you want to use SSH keys instead of usernames & passwords, see the documentation for the SSH action.

Usage

With all this set up, install your Laravel application into /var/www/html and checkout the deploy branch. If it doesn’t exist yet, you can do git checkout production && git checkout -b deploy to create it.

For all subsequent deploys all you need to do is run this command from the master branch in your local environment:

./deploy.sh 

Or, you can merge into production. But know that it will not run tests unless you configure the action for that, as mentioned above.

Performance and robustness

This approach is robust since it makes it impossible for a request to be processed when the codebase and database are out of sync — thanks to artisan down.

And it’s also very fast, with the least amount of things happening on the server — only the necessary steps — which results in minimal downtime.

See how this action runs:

The Action running on GitHub and successfully deploying an application.

The Deploy to production step took only 13 seconds, and the period when the application was down is actually shorter than that — part of the 13 seconds is GitHub setting up the appleboy/ssh-action action template (before actually touching your server). So usually, the application would be down for less than 10 seconds.

programming

via Laravel News https://ift.tt/14pzU0d

May 22, 2020 at 12:23PM

How To Upload Multiple Files In PHP?

https://ift.tt/3e6apEM

Hello Friends! In this article you will learn how to upload multiple files in PHP.

This article covers

  1. HTML File Upload Page
  2. Database Design To Store The Image Names
  3. Database Connection In PHP
  4. Rearrange Files Data When Form Submitted
  5. Simple Multiple File Upload
  6. File Upload With Validation

NOTE: I have added the complete code in GitHub repository. You can find it here Multiple File Upload


Prerequisites

Basic knowledge of PHP and HTML. I hope you already know how to do a single file upload; if not, refer to these articles:

How To Uploads Files In PHP?

How To Upload Image In PHP?


1) HTML File Upload Page

The following is the basic HTML code for multiple file uploads. Make sure you have enctype="multipart/form-data" on the form tag, that the file input is named as an array (product_images[]), and that the multiple attribute is added to the input element.

In the following example I would like to upload multiple product images. This might not be the most realistic example, but I hope it serves for your understanding.

<form action="store_product.php" method="post" enctype="multipart/form-data">
    <div>
        <label for="product_id">Select Product</label>
        <br>
        <!-- Basically you get the product list from the database. For the sake of demonstration I am hard coding it -->
        <select name="product_id" id="product_id">
            <option value="1">Product 1</option>
            <option value="2">Product 2</option>
        </select>
    </div>
    <br>
    <div>
        <label for="product_images">Product Images</label>
        <br>
        <input type="file" name="product_images[]" id="product_images" multiple>
    </div>
    <br>
    <div>
        <input type="submit" value="Upload Product Images">
    </div>
</form>

2) Database Design To Store Product & Its Image Names

NOTE: I would like to store the images/files on disk, i.e., at a specific path, and not in my database. Saving images or any files as BLOBs in the database increases complexity, consumes a lot of space, and even reduces the efficiency of CRUD operations.

Following is the simple design of the products & product_images tables inside the invoice database.

/** Creating database with name invoice */
CREATE DATABASE `invoice`;

/** Once we create it, we switch to invoice so that we can add the tables */
USE `invoice`;

/** Products table to hold the products data */
CREATE TABLE `products` (
    `id` INT(11) UNSIGNED NOT NULL AUTO_INCREMENT,
    `name` VARCHAR(30),
    `product_image` VARCHAR(255),
    PRIMARY KEY (`id`)
);

/** Product images table, one row per uploaded image */
CREATE TABLE `product_images` (
    `id` INT(11) UNSIGNED NOT NULL AUTO_INCREMENT,
    `product_id` INT(11) UNSIGNED NOT NULL,
    `product_image` VARCHAR(100) NOT NULL,
    PRIMARY KEY (`id`),
    FOREIGN KEY (`product_id`) REFERENCES `products` (`id`)
);

For the sake of demonstration I have the following 2 rows inside the products table:

INSERT INTO `products` (`id`, `name`, `product_image`)
VALUES
    (1, 'Product 1', '98767461589180160d435607145066fc9c3b5069d336f11a9.jpg'),
    (2, 'Product 2', '61545675589180160d435607145066fc9c3b5069d336f11a9.jpg');

3) Database Connection In PHP

The following is the code to connect to the database which we created above. I am using a PDO connection, which is more advanced than the old mysql or mysqli functions.

<?php

$host = 'localhost';
$db_name = 'invoice';
$db_username = 'root';
$db_password = 'root';

$dsn = 'mysql:host='. $host .';dbname='. $db_name;

$options = [
    PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
    PDO::ATTR_DEFAULT_FETCH_MODE => PDO::FETCH_ASSOC,
    PDO::ATTR_EMULATE_PREPARES => false,
];

try {
    /** $pdo is the connection object which I will be using for all my database operations */
    $pdo = new PDO($dsn, $db_username, $db_password, $options);
} catch (PDOException $e) {
    exit($e->getMessage());
}

4) Rearrange Files Data When Form Submitted

Once you fill the form and submit it, you will get output somewhat similar to the following.

store_product.php

Let’s debug the uploaded form data:

echo '<pre>';
print_r($_FILES['product_images']);
exit;

Output

Array
(
    [name] => Array
        (
            [0] => tempro1991266572.png
            [1] => tempro1913737454.png
        )
    [type] => Array
        (
            [0] => image/png
            [1] => image/png
        )
    [tmp_name] => Array
        (
            [0] => /Applications/MAMP/tmp/php/php70g7x1
            [1] => /Applications/MAMP/tmp/php/phppHPzVH
        )
    [error] => Array
        (
            [0] => 0
            [1] => 0
        )
    [size] => Array
        (
            [0] => 874
            [1] => 880
        )
)

Since looping over and parsing that structure is awkward, let’s reformat it. In the code below I am simply looping over the files and rearranging them based on their keys, i.e. name, tmp_name, etc.

function rearrange_files($files)
{
    $file_array = [];

    foreach ($files as $file_key => $file) {
        foreach ($file as $index => $file_value) {
            $file_array[$index][$file_key] = $file_value;
        }
    }

    return $file_array;
}

Using the above function we will get the rearranged files as follows

Array
(
    [0] => Array
        (
            [name] => tempro1991266572.png
            [type] => image/png
            [tmp_name] => /Applications/MAMP/tmp/php/php3YrETt
            [error] => 0
            [size] => 874
        )
    [1] => Array
        (
            [name] => tempro1913737454.png
            [type] => image/png
            [tmp_name] => /Applications/MAMP/tmp/php/phpm1vF67
            [error] => 0
            [size] => 880
        )
)

Ah! This looks clean, very understandable, and much easier to work with.


5) Simple Multiple File Upload

Now let’s add the bits and pieces to make the basic multiple file upload work.

<?php

session_start();

require_once 'db.php';

function rearrange_files($files)
{
    /** function code added above **/
}

if ($_SERVER['REQUEST_METHOD'] == 'POST') {

    /** Array variable to hold errors */
    $errors = [];

    $product_id = $_POST['product_id'];
    $product_images = $_FILES['product_images'];

    /** Add form validation */
    if (empty($product_images)) {
        $errors[] = 'Product image files are required';
    }

    if (empty($product_id)) {
        $errors[] = 'Select the product you want to add images to';
    }

    /** Check if the product exists in your database */
    $product_stmt = $pdo->prepare("
        SELECT id, name
        FROM `products`
        WHERE id = :product_id
    ");
    $product_stmt->execute([
        ':product_id' => $product_id,
    ]);
    $product = $product_stmt->fetchObject();

    if (!$product) {
        $errors[] = 'Selected product does not exist!';
    }

    /** If there are any form errors then redirect back */
    if (count($errors) > 0) {
        $_SESSION['errors'] = $errors;
        header('Location: index.php');
        exit;
    }

    /** $_FILES has the uploaded file details; rearrange them into per-file arrays */
    $arranged_files = rearrange_files($_FILES['product_images']);

    foreach ($arranged_files as $product_image) {
        $file_name = $product_image['name'];
        $file_size = $product_image['size'];
        $file_tmp = $product_image['tmp_name'];

        /** Since I want to rename the file, I need its extension,
         * which is obtained with pathinfo */
        $pathinfo = pathinfo($file_name);
        $extension = $pathinfo['extension'];

        $file_extensions = ['pdf', 'xls', 'jpeg', 'jpg', 'png', 'svg', 'webp'];

        /** Generate a random image name */
        $new_file_name = rand(0, 10000000).time().md5(time()).'.'.$extension;

        move_uploaded_file($file_tmp, './uploads/product_images/'. $new_file_name);

        $pdo->prepare("
            INSERT INTO `product_images` (`product_id`, `product_image`)
            VALUES (:product_id, :product_image)
        ")->execute([
            ':product_id' => $product->id,
            ':product_image' => $new_file_name,
        ]);
    }

    $_SESSION['success'] = 'Product images added successfully';
    header('Location: index.php');
    exit;
} else {
    header('Location: index.php');
    exit;
}

6) File Upload With Validation

In the above code, validation is not performed on each file. While doing the upload, we can perform simple validation as follows.

/** File strict validations */

/** File exists */
if (!file_exists($file_tmp)) {
    $errors[] = 'The file you are trying to upload does not exist';
}

/** Check that the file actually came through an HTTP upload */
if (!is_uploaded_file($file_tmp)) {
    $errors[] = 'File not uploaded properly';
}

/** Check the file size: 1024 * 1024 bytes is 1 MB */
if ($file_size > (1024 * 1024)) {
    $errors[] = 'Uploaded file is greater than 1MB';
}

/** Check file extension */
if (!in_array($extension, $file_extensions)) {
    $errors[] = 'Allowed file extensions: '. implode(', ', $file_extensions);
}

if (count($errors) > 0) {
    $_SESSION['errors'] = $errors;
    header('Location: index.php');
    exit;
}

This validation code should run inside the foreach loop, as shown below:

foreach ($arranged_files as $product_image) {
    /** The above validation code goes here */
}

Conclusion

Hope you got some idea of how to upload multiple files in PHP.

WHAT’S NEXT?

You might be interested in learning more; please find my other articles below:

How To Upload Image In PHP?

How To Uploads Files In PHP?

How To Install Packages Parallel For Faster Development In Composer

What Is Composer? How Does It Work? Useful Composer Commands And Usage

composer.json v/s composer.lock

Composer Install v/s Composer Update

Route Model Binding In Laravel & Change Default Column id To Another Column

How To Run Raw Queries Securely In Laravel

Laravel 7.x Multiple Database Connections, Migrations, Relationships & Querying

How To Install Apache Web Server On Ubuntu 20.04 / Linux & Manage It

How To Create / Save / Download PDF From Blade Template In PHP Laravel

How To Add Free SSL Certificate In cPanel With ZeroSSL & Certbot

How To Securely SSH Your Server & Push Files With FileZilla

How To Push Files To CPanel / Remote Server using FTP Software FileZilla

How To Install Linux, Apache, MYSQL, PHP (LAMP Stack) on Ubuntu

How To Cache Static Files With NGINX Server

Redirect www to a non-www website or vice versa

How To Create Free SSL Certificate With Lets Encrypt/Certbot In Linux (Single / Multiple Domains)

How To Install Linux, NGINX, MYSQL, PHP (LEMP Stack) on Ubuntu

PHP Built-In Web Server & Testing Your Development Project In Mobile Without Any Software

How To Do Google reCAPTCHA Integration In PHP Laravel Forms

Happy Coding 🙂

programming

via Laravel News Links https://ift.tt/2dvygAJ

May 21, 2020 at 04:09PM

How Motorsport Tires Are Made

https://ift.tt/2WPSr3M

Everyday car tires are made mostly by machine, but the high-end tires used for racing are made by hand. In this clip from Street FX Motorsport TV, they take us inside Michelin Motorsport’s HQ in France for a look at the tire-making process, building up layer by layer of rubber, textiles, steel, and adhesive on spinning drums.

fun

via The Awesomer https://theawesomer.com

May 20, 2020 at 09:30AM

LA-based Brainbase raises another $8 million for IP-licensing management

https://ift.tt/2Zy5711

Brainbase, the rights management platform that’s helping Hollywood studios manage the licensing rights to their cultural icons, has picked up another $8 million in financing.

Behind every popular story is an attempt to make money off of it, and Brainbase helps Hollywood find new ways to make money off of consumer tastes.

The money came from new investors Bessemer Venture Partners and Nosara Capital, with participation from previous investors Alpha Edison, Struck Capital, Bonfire Ventures, and FJ Labs. Individual investors included Spencer Lazar; Michael Stoppelman, the former senior vice president of engineering at Yelp; Jenny Fleiss, co-founder of Rent The Runway; and David Fraga, president of InVision.

The Los Angeles-based company said the new money would be used to build a payments feature to speed up the process of wringing payments from licensees and to continue building its Marketplace product that connects celebrities, athletes and social media stars of all stripes with new and emerging brands.

“We need to stay focused on building the best platform for brands that own and license their IP,” said Brainbase co-founder and CEO Nate Cavanaugh, in a statement. “With a strong bench of investors and advisors who believe in our vision to make the intellectual property industry more open, efficient and accessible, we are prepared for our next stage of growth. In 2020, Brainbase plans to nearly double in size, making key hires across sales, product, and engineering in the U.S. and Europe.”

The new financing comes as Brainbase brings new brands and spokespeople into the fold including Buzzfeed, the model-turned-shopping network celebrity and brand ambassador extraordinaire Kathy Ireland, MDR Brand Management, and Bonnier. These new branding megaliths join a roster that includes Sanrio, the owner of the ubiquitous Hello Kitty character.

“Brainbase is bringing the archaic, paper shuffling world of IP management into the 21st century. We’re thrilled to partner with this team as they help owners of IP assets capture more value while saving a boatload of time and effort,” stated Kent Bennett, partner at Bessemer Venture Partners.

technology

via TechCrunch https://techcrunch.com

May 20, 2020 at 10:11AM

6 Enterprise Mobile Application Development Platforms in 2020

https://ift.tt/2yXNcWD

Which mobile application development platform should I opt for?

What are the prominent advantages of choosing that platform?

Will it be the best choice for my app?

I am sure there are many questions that arise when it comes to choosing an enterprise mobile app development platform. Given the abundance of available options, one is bound to feel baffled. But selecting the most appropriate platform is of the utmost importance.

To help you out, we have whittled down a list of the top six enterprise mobile application development platforms that are leading the charts in 2020. You can learn about these in detail to choose the best one for your app. Let’s begin.

Appcelerator

Appcelerator makes use of a single JavaScript codebase to build strong native apps. It has an open and extensible environment that allows you to produce apps for Android, iOS, and BlackBerry, as well as HTML5 and hybrid apps. Its open-source SDK supports over 5,000 devices.

Pros

  • It offers rapid prototyping. The app development process is greatly accelerated, and a prototype can be built with minimal time and effort to evaluate user interaction with the UI.
  • It comprises ArrowDB, a schema-less data store that seeks to deploy data models with almost no setup efforts.
  • You can seamlessly integrate it with existing delivery systems such as MDM and SCM solutions.
  • It consists of pre-built connectors for MS SQL, MongoDB, Box, Salesforce, MS Azure and many more.

Cons

  • It is quite buggy. Even though the newer versions are more stable, it is not very suitable for production use. The more complex your app gets, the more often you will face technical issues such as annoying bugs, random crashes, and weird behaviour.
  • Support from Appcelerator’s developer community is poor.

PhoneGap

PhoneGap is an amazing cross-platform framework, allowing app developers to build apps that operate smoothly on multiple mobile platforms. It has a powerful backend system that greatly accelerates the development process. It is best suited for developing simple mobile apps that do not extensively use the mobile’s native features.

The PhoneGap community maintains the latest modules and code, available for free owing to its open-source license. It offers tremendous flexibility, and app developers with a basic knowledge of JavaScript, HTML5, and CSS3 can get started with development without needing to learn any additional languages.

Pros

  • A great level of uniformity is maintained, as the apps developed can be used on multiple mobile platforms. The apps exhibit minimal differences when viewed on different platforms.
  • PhoneGap works on JavaScript, HTML5 and CSS3, the most common and very popular web technologies. 
  • It allows you to use in-app integrated payment systems via Google Play Store for Android, App Store for iOS, etc.
  • App developers can make use of plain JavaScript or other libraries such as Prototype, jQuery, MooTools, Sencha Touch, and more to manage the interaction.

Cons

  • PhoneGap doesn’t support all functionalities
  • It may prove to be ineffective at times, such as, while working with native apps
  • The capacity of cross platform apps is somewhat low-key when compared to other apps built for independent platforms
  • With PhoneGap, you can develop one app for free. Thereafter, you will be charged a monthly fee.

Sencha

Sencha is believed to be an ideal framework for developing data-rich cross-platform applications powered by hardware acceleration methods. It is a warehouse of 115+ high-performing integrated UI components, including charts, grids, calendar, etc. 

HTML5 utilization can be easily unleashed on all modern browsers by this platform. Also, developers can use Sencha Ext JS for developing ground-breaking apps that leverage the potential of Business Intelligence for Analytics and data visualization. 

Pros

  • Sencha comes with a plethora of built-in themes that work on all major platforms
  • The platform is supported by a back-end data package that operates independently with different data sources
  • Apps created with Sencha can be easily integrated with PhoneGap / Cordova for packaging and native API access
  • Currently, Sencha is supported on WebKit browsers, which include the popular Google Android and iOS platforms
  • Sencha mobile apps can be easily scaled to different resolutions for achieving maximum compatibility with different devices

Cons

  • Some commercial versions of Sencha come with licensing complexities
  • Animated themes for many targeted platforms are limited

Xamarin

Xamarin helps to develop native apps that work on multiple platforms by using a shared C# code base. The platform enables the developers to use the same IDE, APIs and language everywhere. Also, the Git integration can be directly launched into the Xamarin Studio. Owing to the unprecedented benefits of this platform, it has been adopted by some renowned names like Microsoft, IBM, Foursquare, etc.

Pros

  • Xamarin apps are very neatly written and thus, they can be used for reference as well. 
  • The Xamarin Component Store contains cross-platform libraries, UI controls and third-party libraries. 
  • As much as 75% of the developed code can be shared across major mobile platforms, which reduces time-to-market as well as the cost of development
  • Xamarin offers quality assurance and functionality testing for various devices. This ensures fewer bugs and an efficient deliverable

Cons

  • The free version of the software comprises limited features
  • Developers cannot take full advantage of open-source libraries owing to some compatibility issues

Ionic

Ionic is a 100% free and open-source framework that is best suited for cross-platform mobile app development. The framework helps to create native functionality in apps that can seamlessly operate on multiple devices and operating systems. With native functionalities, exhaustive gestures and highly customizable tools, Ionic apps can help to augment user experience. 

Pros

  • The framework enables the developers to build apps for multiple app stores with a single code base, thus reducing development cost and timeline
  • The use of AngularJS helps to create a powerful SDK for building feature-rich and robust applications
  • The framework comes with many CSS and JavaScript components that account for minimal maintenance

Cons

  • In-app performance is not as efficient and quick as that of native apps
  • The use of AngularJS necessitates the developers to possess a specific skillset needed to build complex apps
  • It is difficult to achieve smooth in-app navigation since the UI router is very tricky

NativeScript

This is an open-source platform that facilitates cross-platform app development with a rich, native-like user interface. With this platform, the developers can easily access native APIs through JavaScript to build highly interactive apps. Native mobile apps for iOS and Android can be created using a single codebase. 

Pros

  • A large number of NativeScript plugins are available that facilitate the creation of native mobile apps
  • Developers can reuse the accessible plugin NPM any number of times in all NativeScript projects
  • NativeScript offers complete support for Angular 2 and TypeScript
  • The platform provides unrestrained access to native libraries, including CocoaPods and Gradle

Cons

  • Multi-threading in NativeScript is a possible issue
  • There is no adequate information available on the use of different features of NativeScript

Final Words

These are the top 6 enterprise mobile application development platforms that are ruling the charts in 2020. You can get in touch with a reliable mobile application development agency to discover the most suitable platform for your precise needs. Choosing the right platform will ensure that you get a technically-sound deliverable as well as save on the time and effort involved in the process.

via Noupe https://www.noupe.com

May 18, 2020 at 03:35AM

Multi-tenancy in Laravel

https://ift.tt/2WXbmZg

I recently started a deep dive into multi-tenancy in Laravel, specifically the database-per-tenant approach, and created a video on how to achieve multi-tenancy without any packages. 

The video covers switching DB connections at runtime, writing migrations, seeding, and testing. You can watch it here:
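In short, the connection-switching piece boils down to something like the following. This is a minimal sketch, not the exact code from the video; the "tenant" connection name and the tenant_{id} database naming convention are assumptions for illustration:

use Illuminate\Support\Facades\Config;
use Illuminate\Support\Facades\DB;

function useTenantDatabase(int $tenantId): void
{
    // Point the pre-defined "tenant" connection at this tenant's database.
    Config::set('database.connections.tenant.database', 'tenant_' . $tenantId);

    // Drop any open connection so the next query hits the new database.
    DB::purge('tenant');
    DB::reconnect('tenant');
}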

Session Hijacking 

After the video was posted, one of the concerns that people shared was related to preventing session data that belongs to one tenant from leaking to another.

Basically, a user in tenant A can modify the session cookie and log himself in as a user on tenant B, if that user has the same user ID. Horrifying!

 I decided to address this issue by making a video that explains the problem, why it happens, and how to deal with it.
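One possible fix, sketched as a middleware that stamps the session with the tenant it was issued under and invalidates it on a mismatch. The app('tenant') binding and the _tenant_id session key are assumptions for illustration, not the exact code from the video:

<?php

namespace App\Http\Middleware;

use Closure;
use Illuminate\Http\Request;

class ScopeSessionToTenant
{
    public function handle(Request $request, Closure $next)
    {
        // Assumes the current tenant was resolved earlier and bound into the container.
        $tenantId = app('tenant')->id;

        // If this session was started under a different tenant, throw it away.
        if ($request->session()->has('_tenant_id')
            && $request->session()->get('_tenant_id') !== $tenantId) {
            $request->session()->invalidate();
        }

        // Stamp the session with the tenant it belongs to.
        $request->session()->put('_tenant_id', $tenantId);

        return $next($request);
    }
}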

 Changing Configuration At Runtime

When you update your application’s configuration at runtime you need to consider two things:

  1. The updates happen before the component is used.
  2. Any cached instance of the component is flushed after the updates.

To explain this in detail, I made a video where I show you how to handle the cache component and make sure each tenant has its separate cache store by applying a prefix.
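In code, the cache handling boils down to something like this sketch (not the exact code from the video; it assumes the current tenant is resolved before the cache is first touched):

use Illuminate\Support\Facades\Cache;

function useTenantCache(int $tenantId): void
{
    // 1. Update the configuration before the component is used.
    config(['cache.prefix' => 'tenant_' . $tenantId]);

    // 2. Flush the cached instances of the component so the new prefix takes effect.
    app()->forgetInstance('cache');
    app()->forgetInstance('cache.store');
    Cache::clearResolvedInstance('cache');
}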

Using the same approach you can update the configuration of any component in your application while switching between tenants. For example, you can change the mail “from” address to match the tenant’s, configure a different Slack channel for each tenant’s error notifications, and so on.

Single Database vs. Database Per Tenant

 The good Laravel community on Twitter started a very interesting discussion on the single DB vs. per-tenant DB approaches.

The db-per-tenant approach guarantees customer isolation, it also makes it easier to move the dataset to different locations since it can be easily extracted. However, it requires more dev-ops work to configure replication, backups, etc… 

The single database approach requires no extra dev-ops work. However, to achieve data isolation you need to be careful with every single query you write, to make sure it’s correctly scoped. In addition, the indices will grow large, and a tenant with a large dataset will affect other tenants with smaller datasets.

It’s a tough call indeed. If it’s a business requirement to have separate databases then the decision is made for you already. But if you’re choosing between the two approaches, I recommend that you go for the single database approach and scope your queries.

The multi-database approach is sexy, I know; it means you’ll just do Order::find() without having to use whereTenantId(). However, if you’re handling dev-ops yourself, having to deal with hundreds of databases is not sweet, I promise. And a global scope gets you most of the way there in a single database, as sketched below.
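If you do go single database, an Eloquent global scope takes most of the sting out of remembering whereTenantId(). Here is a minimal sketch; the BelongsToTenant trait name, the tenant_id column, and the app('tenant') binding are illustrative assumptions:

<?php

namespace App\Models\Concerns;

use Illuminate\Database\Eloquent\Builder;
use Illuminate\Database\Eloquent\Model;

trait BelongsToTenant
{
    protected static function bootBelongsToTenant(): void
    {
        // Every query on the model is automatically scoped to the current tenant.
        static::addGlobalScope('tenant', function (Builder $query) {
            $query->where('tenant_id', app('tenant')->id);
        });

        // New records are stamped with the current tenant automatically.
        static::creating(function (Model $model) {
            $model->tenant_id = $model->tenant_id ?? app('tenant')->id;
        });
    }
}

With that trait on a model like Order, Order::find() is already tenant-scoped, which recovers most of the ergonomics the multi-database approach promises.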

Let me know what you think. Also here’s a link to part of the discussion: https://twitter.com/barryvdh/status/1257750063864045568 

Multi-tenancy Packages & Resources

If you want to use a package instead of building your own way to multi-tenancy, here’s a list of some noteworthy packages:

There’s also a talk by Tom Schlick at Laracon US 2017 that I highly recommend. You can check it out here: https://multitenantlaravel.com/ 

Do I need all this? 

No! You don’t need to change configuration files or jump between databases or use any packages if you’re building a multi-tenant application. You can build a multi-tenant application by just making sure your code is scoped for the tenant, the same way you scope your code for the logged in user.

Multi-tenancy seems confusing mainly because we want to write our code without worrying about the current tenant, and have our application magically figure everything out for us. This sounds cool, but if you want magic, I highly recommend that you spend some time understanding how it works.

The downside of magic is that it makes debugging hard. It also tends to cover only 99% of the use cases; once you hit an edge case that isn’t covered, you’ll find that magic will fail you.

Whatever approach to multi-tenancy you choose, make sure you cover your application with enough tests so you can sleep well at night.

programming

via Laravel News Links https://ift.tt/2dvygAJ

May 14, 2020 at 09:57PM

Outdoor Photography is a Great COVID-19 Pastime

https://ift.tt/2Z71n62

If you are looking for something fun and challenging to do during the COVID-19 sheltering orders, try your hand at some outdoor photography. Every yard, neighborhood, or residential area has plenty of subjects to model for your camera. Songbirds, small animals like rabbits and squirrels, and even bigger game such as white-tailed deer can be caught on digital images throughout the country.

It doesn’t necessarily take professional camera equipment to catch images of wildlife. Close-ups can be captured around the house with a little time and patience. All you need is a decent digital 35mm camera, ideally with some telephoto capability, and that is a great way to get started.

Scan your yard by spying out windows to see where birds and animals seem to gather. A great way to congregate birds is to set up a feeder or a couple of them in different places. Proximity to trees, shrubs, or such will give birds an extra sense of safety. It’s the same with small game if there are high grass areas, heavy shrubbery or bushes to hide in. Watch these areas for animal activity.

If you have a back porch, patio, or outdoor yard sitting area, set up a good seat as an observation post. Have a set of good binoculars to help you spot lofty birds or animals lurking around the yard. You have to remain quiet and develop a strong sense of patience to collect some good photo shots.

Don’t be disappointed if all of your shots don’t turn out every time. Often, birds move so fast or flutter around so much that wing motion causes blurring. If your camera has an automatic focus setting, try that to see the quality of shots you get. Once you advance in your skills, you can try manual settings. If you are a total novice to photography, get a basic book on the subject or look up related topics on the net. Soon you’ll be capturing some neat shots of all your subjects.

When reviewing your photo results, look at them with a critical eye. If you like the photo, forget the critics. This is a fun hobby for yourself or to share photos with family and friends. Look at the photo backgrounds. Did you catch a neighbor photo-bombing you or some other clutter in the background that is out of place? Next time, be aware of what’s beyond your subject.

These are the skills that you can practice over time. Remember, outdoor photography is meant to be fun during otherwise trying times.

guns

via All Outdoor https://ift.tt/2yaNKUu

May 13, 2020 at 09:23AM