Create your own GitHub Actions using Fly Machines

https://fly.io/laravel-bytes/ad-hoc-tasks/assets/on-demand-cover.webp

Machines directing other machines’ tasks

What if you could spin up a VM, do a task, and tear it all down super easily? That’d be neat, right? Let’s see how to spin up a Laravel application on Fly.io and run some ad-hoc tasks!

We’re going to create a setup where we can instruct Fly.io to spin up a VM and run some code. The code will read an instructions.yaml file and follow its instructions. The neat part: we can create the YAML file on the fly when we spin up the VM.

This lets us create one standard VM that can do just about any work we need. I structured the YAML a bit like GitHub Actions.

In fact, I built a similar thing before: it uses Golang, but here we’ll use Laravel 🐐.

Here’s a repository with the code discussed.

The Code

The code is pretty straightforward (partially because I ask you to draw the rest of the owl). Within a Laravel app, we’ll create a console command that does the work we want.

Here’s what I did to spin up a new project and add a console command to it:

composer create-project laravel/laravel ad-hoc-yaml
cd ad-hoc-yaml

composer require symfony/yaml

php artisan make:command --command="ad-hoc" AdHocComputeCommand

We need to parse some YAML, so I also included the symfony/yaml package.

And the command itself is quite simple 🦉:

<?php

namespace App\Console\Commands;

use Symfony\Component\Yaml\Yaml;
use Illuminate\Console\Command;
use Illuminate\Support\Facades\Log;

class AdHocComputeCommand extends Command
{
    protected $signature = 'ad-hoc';

    protected $description = 'Blindly follow some random YAML';

    public function handle()
    {
        try {
            $instructions = Yaml::parseFile("/opt/instructions.yaml");
        } catch(\Exception $e) {
            Log::error($e);
            $this->error($e->getMessage());
            return self::FAILURE;
        }


        foreach($instructions['steps'] as $step) {
            // draw the rest of the <bleep>ing owl
        }

        return self::SUCCESS;
    }
}

For a slightly more fleshed-out version of this code, see this command class here.
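
To give a flavor of the owl-drawing, here’s a minimal sketch of what the loop body might do, assuming each step carries either a run shell command or a uses handler name (matching the YAML later in this post). It uses Laravel 10’s Process facade; StepHandlerRegistry is a hypothetical class you’d write yourself:

use Illuminate\Support\Facades\Process;

foreach ($instructions['steps'] as $step) {
    if (isset($step['run'])) {
        // Shell step: run the command with any per-step env vars
        $result = Process::env($step['env'] ?? [])->run($step['run']);
        $this->line($result->output());
        continue;
    }

    if (isset($step['uses'])) {
        // "uses" step: resolve a handler for names like hookflow/print-payload
        // and hand it the step's "with" parameters (hypothetical registry)
        StepHandlerRegistry::resolve($step['uses'])
            ->handle($step['with'] ?? []);
    }
}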

Package it Up

Fly uses Docker images to create (real, non-container) VMs. Let’s package our app into a Docker image that we can deploy to Fly.io.

We’ll borrow the base image Fly.io uses to run Laravel apps (fideloper/fly-laravel:${PHP_VERSION}). We’ll just assume PHP 8.2:

FROM fideloper/fly-laravel:8.2

COPY . /var/www/html

RUN composer install --optimize-autoloader --no-dev

CMD ["php", "/var/www/html/artisan", "ad-hoc"]

Once that’s created, we can build that Docker image and push it up to Fly.io. One thing to know here: you can only push images to Fly.io’s registry under the name of an existing app, e.g. docker push registry.fly.io/some-app:latest. You can use any tag (e.g. latest or 1.0) and push multiple tags.

So, to push an image up to Fly.io, we’ll first create an app (an app is just a thing that houses VMs) and authenticate Docker against the Fly Registry. Then, when we spin up a new VM, we’ll use that image that’s already in the registry.

This is different from using fly deploy, which builds an image during deployment and is meant more for hosting your web application. Here we’re using Fly.io for specific tasks rather than hosting a whole application.

The following shows creating an app, building the Docker image, and pushing it up to the Fly Registry:

APP_NAME="my-adhoc-puter"

# Create an app
fly apps create $APP_NAME

# Build the docker image
docker build \
    -t registry.fly.io/$APP_NAME \
    .

# Authenticate with the Fly Registry
fly auth docker

# Push the docker image to the Fly Registry
# so we can use it when creating a new VM
docker push registry.fly.io/$APP_NAME

This article is great at explaining the fun things you can do with the Fly Registry.

We make two assumptions here:

  1. You have Docker installed locally
  2. You’re on an x86-based machine, so the image you build matches the architecture of Fly.io’s VMs

Pro tip: If you’re on an ARM-based machine (M1/M2 Macs), you can actually VPN into your Fly.io private network and use your Docker builder (all accounts have a Docker builder, used for deploys) via DOCKER_HOST=<ipv6 of builder machine> docker build ....

Run It

To run our machine, all we need is a YAML file and to make an API call to Fly.io.

As mentioned before, Machines (VMs) spun up via API call let you create files on the fly! You provide the file name and the base64-encoded file contents, and the file is created on the Machine VM before it runs your stuff.

Here’s what the code to make such an API request would look like within PHP / Laravel:

# Some YAML, similar to GitHub Actions
$rawYaml = '
name: "Test Run"

steps:
  - name: "Print JSON Payload"
    uses: hookflow/print-payload
    with:
      path: /opt/payload.json

  - name: "Print current directory"
    run: "ls -lah $(pwd)"

  - run: "echo foo"

  - uses: hookflow/s3
    with:
      src: "/opt/payload.json"
      bucket: "some-bucket"
      key: "payload.json"
      dry_run: true
    env:
      AWS_ACCESS_KEY_ID: "abckey"
      AWS_SECRET_ACCESS_KEY: "xyzkey"
      AWS_REGION: "us-east-2"
';

$encodedYaml = base64_encode($rawYaml);

# Some random JSON payload that our YAML
# above references
$somePayload = '
{
    "data": {
        "event": "foo-happened",
        "customer": "cs_1234",
        "amount": 1234.56,
        "note": "we in it now!"
    },
    "pages": 1,
    "links": {"next_page": "https://next-page.com/foo", "prev_page": "https://next-page.com/actually-previous-page"}
}
';

$encodedPayload = base64_encode($somePayload);

# Create the payload for our call to the Fly Machines API
$appName = 'my-adhoc-puter';
$requestPayload = json_decode(sprintf('{
    "region": "bos",
    "config": {
        "image": "registry.fly.io/%s:latest",
        "guest": {"cpus": 2, "memory_mb": 2048,"cpu_kind": "shared"},
        "auto_destroy": true,
        "processes": [
            {"cmd": ["php", "/var/www/html/artisan", "ad-hoc"]}
        ],
        "files": [
            {
                "guest_path": "/opt/payload.json",
                "raw_value": "%s"
            },
            {
                "guest_path": "/opt/instructions.yaml",
                "raw_value": "%s"
            }
        ]
    }
}
', $appName, $encodedPayload, $encodedYaml));

use Illuminate\Support\Facades\Http;

// todo 🦉: create config/fly.php
// and set token to env('FLY_TOKEN');
$flyAuthToken = config('fly.token');

Http::asJson()
    ->acceptJson()
    ->withToken($flyAuthToken)
    ->post(
        "https://api.machines.dev/v1/apps/{$appName}/machines",
        $requestPayload
    );

I created an artisan command that does that work here.

In your case, you might want to trigger this in your own code whenever you want some work to be done.
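
The POST call returns the new Machine’s id, so you can capture the response and poll the Machines API until the run finishes. A sketch reusing the variables above (Machines report states like started and stopped):

$machineId = Http::asJson()
    ->acceptJson()
    ->withToken($flyAuthToken)
    ->post("https://api.machines.dev/v1/apps/{$appName}/machines", $requestPayload)
    ->json('id');

// Check on the Machine's state afterwards
$state = Http::acceptJson()
    ->withToken($flyAuthToken)
    ->get("https://api.machines.dev/v1/apps/{$appName}/machines/{$machineId}")
    ->json('state');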

After you run some tasks, you should see the Machine VM spin up and do its work! Two things to make this more fun:

  1. Liberally use Log::info() in your code so Fly’s logs can capture what’s going on (helpful for debugging)
  2. Set your logger to use the stderr channel so Fly’s logging mechanism can pick up the output (see below)
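
Laravel ships with a stderr channel in config/logging.php, so (assuming the default config) this is a one-line change in .env:

 LOG_CHANNEL=stderr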

Assuming that’s set up, you can then run fly logs -a <app-name> to see the log output as the Machine VM boots up, runs your code, and then stops.

Laravel News Links

Use several databases within your Laravel project

https://capsules.codes/storage/canvas/images/himICQw4vdilQhOi08bKqtpSTwFVabBiYq6NyevB.jpg

TL;DR: How to use multiple databases within your Laravel project and manage database separated records.

You can find a sample Laravel Project on our Github Repository.

In an effort to maintain clarity for each of my projects, I separate my databases based on the role they play. This blog, for instance, includes several databases: one specifically for the blog and another for analytics. This article explains how to go about it.

A new Laravel project already contains, in its .env file, information related to the database, including the default mysql connection. We’ll be working with two databases: one and two. The default connection will point to one [ optional ].

.env

Before

DB_CONNECTION=mysql
DB_HOST=127.0.0.1
DB_PORT=3306
DB_DATABASE=<database-name>
DB_USERNAME=
DB_PASSWORD=

After

DB_CONNECTION=one

DB_ONE_HOST=127.0.0.1
DB_ONE_PORT=3306
DB_ONE_DATABASE=one
DB_ONE_USERNAME=
DB_ONE_PASSWORD=

DB_TWO_HOST=127.0.0.1
DB_TWO_PORT=3306
DB_TWO_DATABASE=two
DB_TWO_USERNAME=
DB_TWO_PASSWORD=

The default .env file information is reflected in the database.php configuration file.

config/database.php

'connections' => [

        'mysql' => [
            'driver' => 'mysql',
            'url' => env('DATABASE_URL'),
            'host' => env('DB_HOST', '127.0.0.1'),
            'port' => env('DB_PORT', '3306'),
            'database' => env('DB_DATABASE', 'forge'),
            'username' => env('DB_USERNAME', 'forge'),
            'password' => env('DB_PASSWORD', ''),
            'unix_socket' => env('DB_SOCKET', ''),
            'charset' => 'utf8mb4',
            'collation' => 'utf8mb4_unicode_ci',
            'prefix' => '',
            'prefix_indexes' => true,
            'strict' => true,
            'engine' => null,
            'options' => extension_loaded('pdo_mysql') ? array_filter([
                PDO::MYSQL_ATTR_SSL_CA => env('MYSQL_ATTR_SSL_CA'),
            ]) : [],
        ],
    ...
]

We’ll duplicate this connection information as many times as there are connections.

'connections' => [

        'one' => [
            'driver' => 'mysql',
            'url' => env('DATABASE_URL'),
            'host' => env('DB_ONE_HOST', '127.0.0.1'),
            'port' => env('DB_ONE_PORT', '3306'),
            'database' => env('DB_ONE_DATABASE', 'forge'),
            'username' => env('DB_ONE_USERNAME', 'forge'),
            'password' => env('DB_ONE_PASSWORD', ''),
            'unix_socket' => env('DB_ONE_SOCKET', ''),
            'charset' => 'utf8mb4',
            'collation' => 'utf8mb4_unicode_ci',
            'prefix' => '',
            'prefix_indexes' => true,
            'strict' => true,
            'engine' => null,
            'options' => extension_loaded('pdo_mysql') ? array_filter([
                PDO::MYSQL_ATTR_SSL_CA => env('MYSQL_ATTR_SSL_CA'),
            ]) : [],
        ],

        'two' => [
            'driver' => 'mysql',
            'url' => env('DATABASE_URL'),
            'host' => env('DB_TWO_HOST', '127.0.0.1'),
            'port' => env('DB_TWO_PORT', '3306'),
            'database' => env('DB_TWO_DATABASE', 'forge'),
            'username' => env('DB_TWO_USERNAME', 'forge'),
            'password' => env('DB_TWO_PASSWORD', ''),
            'unix_socket' => env('DB_TWO_SOCKET', ''),
            'charset' => 'utf8mb4',
            'collation' => 'utf8mb4_unicode_ci',
            'prefix' => '',
            'prefix_indexes' => true,
            'strict' => true,
            'engine' => null,
            'options' => extension_loaded('pdo_mysql') ? array_filter([
                PDO::MYSQL_ATTR_SSL_CA => env('MYSQL_ATTR_SSL_CA'),
            ]) : [],
        ],
]

Then, each migration needs to be told which database to target:

  • one: 2023_08_31_000000_create_foos_table.php
  • two: 2023_08_31_000001_create_bars_table.php

The Schema facade’s static connection( '<connection-name>' ) method allows this; we call it in the up() and down() functions.

2023_08_31_000000_create_foos_table.php

<?php

use Illuminate\Database\Migrations\Migration;
use Illuminate\Database\Schema\Blueprint;
use Illuminate\Support\Facades\Schema;

return new class extends Migration
{
    public function up() : void
    {
        Schema::connection( 'one' )->create( 'foos', function( Blueprint $table )
        {
            $table->id();
            $table->timestamps();
        });
    }

    public function down() : void
    {
        Schema::connection( 'one' )->dropIfExists( 'foos' );
    }
};

2023_08_31_000001_create_bars_table.php

<?php

use Illuminate\Database\Migrations\Migration;
use Illuminate\Database\Schema\Blueprint;
use Illuminate\Support\Facades\Schema;

return new class extends Migration
{
    public function up() : void
    {
        Schema::connection( 'two' )->create( 'bars', function( Blueprint $table )
        {
            $table->id();
            $table->timestamps();
        });
    }

    public function down() : void
    {
        Schema::connection( 'two' )->dropIfExists( 'bars' );
    }
};

Next, the models related to the migrations need to be modified to indicate their connection with the database via the $connection attribute.

App\Models\Foo.php

<?php

namespace App\Models;

use Illuminate\Database\Eloquent\Model;

class Foo extends Model
{
    protected $connection = 'one';
}

App\Models\Bar.php

<?php

namespace App\Models;

use Illuminate\Database\Eloquent\Model;

class Bar extends Model
{
    protected $connection = 'two';
}

We can now run the migrations with php artisan migrate. By default, this command uses the connection given by DB_CONNECTION. If it’s not defined in the .env file, the connection has to be passed explicitly: php artisan migrate --database=one.

To test the functionality, we can quickly register a closure on the main route.

web.php

<?php

use Illuminate\Support\Facades\Route;
use App\Models\Foo;
use App\Models\Bar;

Route::get( '/', function()
{
    $foo = Foo::create();
    $bar = Bar::create();

    dd( $foo, $bar );
});

The values are then created in the respective databases and visible in the browser.
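
Besides going through the models, you can also query a specific connection ad hoc via the DB facade, for instance to verify both records landed where expected:

use Illuminate\Support\Facades\DB;

$foos = DB::connection( 'one' )->table( 'foos' )->count();
$bars = DB::connection( 'two' )->table( 'bars' )->count();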

In case a database refresh is needed using the command php artisan migrate:fresh, it’s worth noting that only the default database, i.e. the one specified by DB_CONNECTION, will be refreshed. Unfortunately, Laravel does not yet support the refreshing of multiple databases at the same time.

To refresh a database that is not the default one, it is necessary to use the command php artisan db:wipe --database=<database-name>. This command can be repeated for each additional database. Once all databases have been properly wiped with db:wipe, you can then proceed without errors with php artisan migrate:fresh.

You can also write your own command that automates the various tasks needed to reset your databases, like the sketch below.
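
For instance, a minimal sketch of such a command (the connection names are this example’s one and two):

<?php

namespace App\Console\Commands;

use Illuminate\Console\Command;

class FreshDatabases extends Command
{
    protected $signature = 'migrate:fresh-databases';

    protected $description = 'Wipe every configured database, then migrate';

    public function handle() : void
    {
        // Wipe each connection in turn, since migrate:fresh
        // only touches the default one
        foreach( [ 'one', 'two' ] as $connection )
        {
            $this->call( 'db:wipe', [ '--database' => $connection ] );
        }

        $this->call( 'migrate' );
    }
}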

Glad this helped.

Laravel News Links

Generate Laravel Factories using ChatGPT

https://res.cloudinary.com/benjamin-crozat/image/upload/v1693318154/chatgpt-code-generation_ily1el.png

How to generate Laravel Factories using ChatGPT

Generating quality code using a Large Language Model such as GPT requires a basic understanding of the technology. And you can quickly learn about it here: How do language-based AIs, such as GPT, work?

That being said, you could also follow this tutorial, copy and paste my prompts, and be done with it!

Before I forget, I recommend using GPT-4 for better results, as it’s way smarter than GPT-3.5. Also, remember there’s a lot of randomness, and consistency across prompts cannot be ensured. Even so, the time you save will make up for it!

So, what problem are we trying to solve here?

During my freelance career, I stumbled upon a lot of codebases that weren’t leveraging Laravel Factories at all. That’s a bummer, because they can help you:

  1. Write tests with randomized inputs for your code.
  2. Set up a good local environment filled with generated data.

In a big codebase, there may be dozens of models, and writing factories for each of them all by yourself could take days of hard work.

Unless we leverage the power of AI, right?

By asking ChatGPT to think step by step and detail its reasoning, we can get better-quality answers. But first, the requirements:

  1. The model’s table schema.
  2. The model’s code.

Here’s the prompt:

The model's table schema: <the model's table schema>

The model's code: <the model's code>

Goal: Use the information above to generate a Laravel Factory.

Instructions:
* Don't include attributes that are automatically handled by Laravel.
* Faker no longer recommends calling properties. Instead, call methods. For instance, "$this->faker->paragraph" becomes "$this->faker->paragraph()".
* Include a method for each many-to-many relationship using factory callbacks.

Review each of my instructions and explain step by step how you will proceed in an existing Laravel installation without using Artisan. Then, show me the result.
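
For reference, here’s the shape of factory the prompt is asking for, as a sketch built around a hypothetical Post model with a tags() many-to-many relationship:

<?php

namespace Database\Factories;

use App\Models\Post;
use App\Models\Tag;
use Illuminate\Database\Eloquent\Factories\Factory;

class PostFactory extends Factory
{
    protected $model = Post::class;

    public function definition(): array
    {
        return [
            // Faker methods, not properties, per the prompt's instructions
            'title' => $this->faker->sentence(),
            'body' => $this->faker->paragraphs(3, true),
        ];
    }

    // Many-to-many relationship handled via a factory callback
    public function withTags(int $count = 3): static
    {
        return $this->afterCreating(function (Post $post) use ($count) {
            $post->tags()->attach(Tag::factory()->count($count)->create());
        });
    }
}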

OpenAI enabled ChatGPT users to share their conversations with GPT publicly in read-only mode, which is a great way to share my experiments with you.

See what a Laravel Factory generated by ChatGPT looks like.

Laravel News Links

Multiple OpenAi Functions PHP / Laravel

https://miro.medium.com/v2/resize:fit:1200/1*X4NgzhgmPtOpdDdPPBiDlw.png

This article will hopefully help you to understand how to build a system that can work with multiple OpenAi API function calls!

Laravel News Links

Build Your Own DIY NAS Server Using Raspberry Pi 4

https://static1.makeuseofimages.com/wordpress/wp-content/uploads/2023/08/how-to-build-a-diy-nas-using-raspberry-pi-4-and-owncloud.jpg

Whether you are a professional photographer with thousands of high-resolution images, a small business owner with critical data, or a movie enthusiast with an extensive collection, having a reliable and secure storage solution is essential. The same goes for any individual who wants to safely store and access their data with complete privacy. This is where network-attached storage (NAS) comes into play.

While commercial versions are available, you can also build your own NAS using a Raspberry Pi 4 and ownCloud—which is more cost-effective and customizable.

Why Build Your Own NAS Using Raspberry Pi and ownCloud?

Building your own NAS provides several advantages over buying a pre-built solution:

  • You can customize the storage capacity as per your specific needs.
  • You have complete control over your data, which is stored locally and securely.
  • You can use the NAS server to back up data from all devices and safeguard against accidental data loss.
  • It’s cost-effective and energy-efficient, since the Raspberry Pi 4 consumes at most 15 W.
  • You can also use the server for other services, such as Plex.

ownCloud is a popular open-source software solution that allows you to create your own cloud storage. It provides a secure and easy-to-use interface for managing and accessing your files from anywhere, using any device—including Android, iOS, macOS, Linux, and Windows platforms.

You can also sync your files across multiple devices and share them with others. It also supports a wide range of plugins and extensions, enabling you to extend its functionality and enable two-factor authentication for additional security.

In addition, you can turn it into personal DIY cloud storage with remote access, or run a web server and host a website on your Raspberry Pi 4.

Things You Will Need

To build your own NAS with Raspberry Pi 4 and ownCloud, you will need the following:

  • Raspberry Pi 4 with 4GB or 8GB RAM for optimum performance
  • NVMe or SATA SSD with a USB enclosure/connector
  • Class 10 16GB or 32GB microSD card
  • Power supply for the Raspberry Pi 4
  • Reliable Gigabit network (router) to connect your NAS to your local network for high-speed data transfer

Step 1: Set Up Raspberry Pi 4 for NAS

Firstly, you need to download the official Raspberry Pi Imager tool and then follow these steps to install the operating system.

  1. Launch the Raspberry Pi Imager tool.
  2. Click Choose OS and select Raspberry Pi OS (Other) > Raspberry Pi OS Lite (64-bit).
  3. Click Choose Storage and select your SD card.
  4. Click on the gear icon (bottom right) and enable SSH. Enter a username and password for SSH and click Save.
  5. Click Write. Select Yes to confirm.

After flashing the microSD card, insert it into the Raspberry Pi 4 and connect the power supply. The Raspberry Pi 4 will boot into the Raspberry Pi OS Lite.

You can now check the router’s DHCP setting to find the IP address of the Raspberry Pi, or use the Fing app on your smartphone (iOS and Android). Alternatively, connect a keyboard, mouse, and display to the Pi and then run the following command to find its IP address:

 hostname -I 

Step 2: Install and Configure ownCloud on Raspberry Pi 4

To set up ownCloud on Raspberry Pi 4, you will need to install the following:

  • A web server (NGINX or Apache)
  • PHP
  • MariaDB database

To install these services, install and run the PuTTY app on Windows, or use the Terminal app on macOS, and connect to the Raspberry Pi via SSH.

Then run the following commands:

 sudo apt-get update
sudo apt-get upgrade

Wait for the upgrade to finish. Press Y and hit Enter when prompted. After the update, run the following commands to install the required packages.

 sudo apt install apache2 libapache2-mod-php7.4 openssl php-imagick php7.4-common php7.4-curl php7.4-gd php7.4-imap php7.4-intl php7.4-json php7.4-ldap php7.4-mbstring php7.4-mysql php7.4-pgsql php-smbclient php-ssh2 php7.4-sqlite3 php7.4-xml php7.4-zip

After installing the required packages, restart the Apache server.

 sudo service apache2 restart 

Then run the following command to add the user to the www-data group.

 sudo usermod -a -G www-data www-data 

Next, we can download and install the ownCloud on the Raspberry Pi 4 using the following commands:

 cd /var/www/html
sudo wget https://download.owncloud.org/community/owncloud-complete-latest.zip
sudo unzip owncloud-complete-latest.zip

Create a directory to mount an external SSD and change the ownership of the ownCloud directory:

 sudo mkdir /media/ExternalSSD
sudo chown www-data:www-data /media/ExternalSSD
sudo chmod 750 /media/ExternalSSD

Fix permissions to avoid issues:

 sudo chown -R www-data: /var/www/html/owncloud
sudo chmod 777 /var/www/html/owncloud
sudo mkdir /var/lib/php/session
sudo chmod 777 /var/lib/php/session

Next, you need to configure the Apache web server. Open the config file:

 sudo nano /etc/apache2/conf-available/owncloud.conf

Then add the following lines to it:

 Alias /owncloud "/var/www/html/owncloud/"

<Directory /var/www/html/owncloud/>
  Options +FollowSymlinks
  AllowOverride All

 <IfModule mod_dav.c>
  Dav off
 </IfModule>

 SetEnv HOME /var/www/html/owncloud
 SetEnv HTTP_HOME /var/www/html/owncloud

</Directory>

Save and exit nano with Ctrl + O, Enter, then Ctrl + X. Then enable the Apache modules:

 sudo a2enconf owncloud
sudo a2enmod rewrite
sudo a2enmod headers
sudo a2enmod env
sudo a2enmod dir
sudo a2enmod mime

Install the MariaDB database:

 sudo apt install mariadb-server 

Create a database and user for ownCloud:

 sudo mysql

CREATE DATABASE owncloud;
CREATE USER 'ownclouduser'@'localhost' IDENTIFIED BY 'YourPassword';
GRANT ALL PRIVILEGES ON owncloud.* TO 'ownclouduser'@'localhost';
FLUSH PRIVILEGES;
Exit;

Reboot the Raspberry Pi:

 sudo reboot 

Step 3: Add External Storage

You can add multiple USB storage devices to Raspberry Pi 4 via the USB 3.0 ports. Connect one of your SSDs or hard drives to the USB port and follow the steps below to mount the external storage device to a directory in the file system and add storage to your DIY NAS.

We have already created the /media/ExternalSSD directory for mounting the external storage. Make sure the SSD or HDD is NTFS formatted. Then follow these steps to mount it:

 sudo apt-get install ntfs-3g 

Then get the GID, UID, and UUID:

 id -u www-data
id -g www-data
ls -l /dev/disk/by-uuid

Note down the UUID, GID, and UID. In our example, sda1 is the external NTFS-formatted SSD. Next, we will add the drive to the fstab file.

 sudo nano /etc/fstab 

Add the following line:

 UUID=01D9B8034CE29270 /media/ExternalSSD auto nofail,uid=33,gid=33,umask=0027,dmask=0027,noatime 0 0

To mount the drive manually, you need its device identifier (e.g. /dev/sda1). Use the following command to list all connected block devices:

 lsblk

At this stage, you can restart the Raspberry Pi to auto-mount the external storage, or mount it manually:

 sudo mount /dev/sda1 /media/ExternalSSD 

All your files on the NTFS drive should be visible in the /media/ExternalSSD directory.

The drive currently contains only System Volume Information and RECYCLE.BIN hidden folders. Reboot the system.

 sudo reboot 

Step 4: Configure ownCloud

After the reboot, visit the IP address of the Raspberry Pi in a web browser to access your ownCloud.

Enter a username and password of your choice. Click on Storage & database and enter the MariaDB database details as shown below.

If you are using an external drive to store data, make sure to change the Data folder path to /media/ExternalSSD from the default /var/www/html/owncloud/data. In the future, if you want to add a new drive or more storage, follow this ownCloud guide to update the directory path.

Click Finish Setup. After a while, you can log in to ownCloud.

You can download the ownCloud app on your smartphone or computer to sync your files. But before you start syncing or uploading files, make sure your external HDD or SSD storage is added.

If you have followed each step carefully, you should be good to go and ready to upload the files to your ownCloud NAS.

Using Your New Raspberry Pi 4 NAS

A NAS allows you to centralize and access your data from multiple devices on your local network. It’s a convenient and efficient way to store, share, and back up your files at home or the office. Create more users and give each their own ownCloud account so they can upload and secure their data.

Building your own NAS with Raspberry Pi 4 and ownCloud offers a cost-effective and customizable solution to meet your storage needs and take control of your data!

MakeUseOf