How to Play ‘The Sims 4’ for Free Right Now (and Forever)

https://i.kinja-img.com/gawker-media/image/upload/c_fill,f_auto,fl_progressive,g_center,h_675,pg_1,q_80,w_1200/dd7a10a4d9ca17f669294209c6def75b.jpg

Like many people, I have fond memories of being addicted to The Sims. My game of choice was The Sims 2, but I also dabbled in The Sims 3 for a time. Somehow, I kicked the habit before The Sims 4 dropped back in 2014, and haven’t played much of the franchise since. It seems EA might have pulled me back in, though, as The Sims 4 is now permanently free to play.

EA announced the news last month, proclaiming they’d make the most recent entry in their Sims catalog totally free as of this past Tuesday. This isn’t a trial: You can download and play the base game without spending a dime. It’s a nice, if not shocking, development; Sims fans blasted the game before launch, as it was missing content found in previous entries, and while downloadable content added plenty of features old and new, some felt the base game was never worth the full price. In a way, The Sims 4 was made to be a free game, with the option to add on via expansion and game packs.

Of course, some of us have paid for The Sims 4 already, and EA doesn’t want to leave you hanging out to dry. If you’re an EA Play or EA Play Pro member, you’ll receive special upgrades for The Sims 4. The EA Play Edition of the game features the Get to Work Expansion Pack, while the EA Play Pro Edition features both it and the Toddler Stuff Pack as well.

To be clear, this is the base game of The Sims 4. While that’s enough Sims to get you up and running, you’ll only experience a slice of what EA has to offer. True to The Sims’ formula, EA developed expansion packs to supplement gameplay with additional furniture, clothes, pets, situations, storylines, weather, and much more. Expansion packs include:

  • Cats & Dogs
  • City Living
  • Cottage Living
  • Discover University
  • Eco Lifestyle
  • Get Famous
  • Get Together
  • Get to Work
  • High School Years
  • Island Living
  • Seasons
  • Snowy Escape

There are also twelve game packs, which are more incremental upgrades to your game when compared to expansion packs:

  • Dine Out
  • Dream Home Decorator
  • Jungle Adventure
  • My Wedding Stories
  • Outdoor Retreat
  • Parenthood
  • Realm of Magic
  • Spa Day
  • Star Wars: Journey to Batuu
  • StrangerVille
  • Vampires
  • Werewolves

By opening The Sims 4 to everyone for free, EA will earn itself a pool of potential customers for these expansion and game packs (ah, there’s that other shoe). Sure, you might never intend to spend a dime on the game, but maybe you want to see the seasons while you play, or add restaurants to your experience, or send your sim to high school. All of a sudden, you’re hundreds of dollars (and hours) deep. Classic Sims.


How to download and play The Sims 4 for free

Chances are, you own something that can play The Sims 4. The game is available on PC, Mac, PS5, PS4, Xbox Series X/S, and Xbox One.

On PlayStation and Xbox, downloading the game is obvious. You’ll find the title in both the PlayStation Store and Xbox Store available to download for free. On desktop, however, it’s a little different.

On both Mac and PC, head to EA’s website for The Sims 4 here. Hover your cursor over the blue “Play for Free” button in the top right, then choose “EA app for Windows,” “Origin for Mac,” or “Steam.” If you choose one of the first two options, you’ll need to scroll down, click the appropriate link for your computer (the EA app on Windows or Origin on Mac), then install and open the program.

At this time, The Sims 4 on Steam is only available for Windows. You can install and play The Sims 4 directly through Steam, without downloading a separate launcher first.

Lifehacker

Log Viewer after 2 months


It has only been 2 months since the Log Viewer was launched to the public as an open-source log viewer for the Laravel framework. I did not think much of it in the beginning – maybe it would help 10-20 developers make their lives easier, including myself – but the response was much more significant than I expected.

GitHub – opcodesio/log-viewer: Fast and beautiful Log Viewer for Laravel

Fast and beautiful Log Viewer for Laravel. Contribute to opcodesio/log-viewer development by creating an account on GitHub.

Quick stats so far

  • 1,900+ stars on GitHub
  • ~65,000 installs
  • 64 Pull Requests made (24 of them made by contributors other than me)
  • 28 issues closed (fixed or explained as non-issue)

Not only did the package receive a lot more love than I imagined, it also received contributions from the community to make it better! Thank you all, bravo 👏

New features

I have a limited amount of free time – usually Fridays and weekends – to contribute to this project. The attention this package got definitely kept me going and wanting to improve it even further. I also had tons of fun working on it!

I had some cool ideas for features that no other log viewer for Laravel does. So, for the last 2 months, Log Viewer has been my primary focus during my free time and together with the community we’ve built some cool features 🤓 Here’s a quick look at them.

Dark Mode

Although this didn’t ship with the initial version of the Log Viewer, it quickly became the number one requested feature. Who could’ve guessed, right? 😅

Image of Log Viewer showing how to enable Dark Mode
Turn on Dark Mode by clicking on the Settings button, and selecting a Dark theme

I believe one of the first community PRs was to add a Dark Mode. Log Viewer is a tool for developers – it needs a dark mode.

You can also choose a System theme, which switches between Light and Dark mode based on your system preferences. This is the default.

Improved severity selection

The new dropdown design for selecting log severity levels is much more space-efficient, leaving room for other features later. It also now shows the total count of logs found.

Screenshot of Log Viewer's severity selection dropdown, showing different log severities with their log counts
Click on the dropdown to see a more granular overview, as well as control what logs you see

Log groups (folders)

If you include logs from multiple places, you will see them grouped in the UI. You can expand and collapse groups, and perform any action (download, delete, etc.) on a group – just like on regular log files.

Screenshot of the Log Viewer showing log file groups, which can be expanded, collapsed, deleted, downloaded, and so on.
Log files inside the same folder will be grouped in the UI. The app’s main logs will default to the “root” group.

Grouping is based on the actual folder the logs are in, not on the glob patterns you define. If multiple patterns resolve to the same folder, those log files will still end up in the same group.

On unix systems, the user’s home path will be replaced with a tilde ~ character for a cleaner design.
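
For reference, the folders Log Viewer groups by come from wherever your configured log paths point. Here is a rough sketch of the relevant configuration (the file and key names are assumptions and may differ between package versions):

// config/log-viewer.php (illustrative – check the published config for the exact keys)
return [
    // Glob patterns for the log files Log Viewer should pick up.
    'include_files' => [
        '*.log',                 // the app's own logs – shown under the "root" group
        '/var/log/nginx/*.log',  // files from another folder appear as their own group
    ],

    // Patterns to exclude from the list above.
    'exclude_files' => [],
];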

Searching across all files

Another heavily requested feature, as voted in the Twitter poll.

Screenshot of Log Viewer's search input, with the button that reads "Search all files"
Make sure no particular log file is selected and you’ll be searching across all files.

This is especially useful if you use the daily log driver, which creates a new log file every day: when you’re looking for something, you may not know what day it happened or which log file it is in.

Here’s how it works:

  • If you have selected a log file – it will search that log file only.
  • If you have not selected any log file – it will search across every log file known to Log Viewer.

A global search can take a while if you have gigabytes of logs across hundreds of log files. For this purpose, there’s a progress bar that keeps you updated on the search’s progress:

Screenshot of Log Viewer's search component with a progress bar while searching across a large number of logs.
A progress bar is displayed if there’s more than 200 MB of logs to search.

Chunked log indices

Log Viewer is faster than other log viewers for Laravel because it indexes the logs before displaying them. Once we know the timestamp, severity, and position in the file for each log entry, we can paginate over them much more efficiently – in milliseconds, in fact.

Previously, when working with large log files (more than 1 million logs inside a single file), the Log Viewer’s index could grow really big (reaching well over 100 MB in size for a multi-dimensional array in PHP), increasing the memory usage of each request.

While it’s not very common to have huge log files, especially when using the daily driver, it can still happen – an infinite loop might write millions of logs, or an external service might go down during the night and cause hundreds of thousands of exceptions. You don’t want to lose any of that. Your log files should be readable and accessible no matter the size.

So, to scale even further than before, I’ve rebuilt the indexing engine to split a single large index into smaller, more manageable chunks.

A single 1 GB log file might be indexed with ~200 small chunks stored in the cache. Once we know what severity levels you’re interested in, and what page you’re currently on – we can figure out the exact chunk to pull and read from. Notice the memory usage when browsing a 1 GB log file – it’s only 6 MB! That’s the beauty of not loading the full log file into memory every time you read it.

Screenshot of Log Viewer's memory usage on a large log file.
Paginating over a 1 GB log index takes only 6 MB of RAM
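
To illustrate the idea (this is a conceptual sketch, not the package’s actual code), a chunked index lets the viewer work out which small slice of the file a given page lives in, and read only that slice:

// Illustrative sketch only. Each chunk remembers the byte offset where its
// entries start in the log file and how many entries it contains.
$chunks = [
    ['offset' => 0,       'count' => 5000],
    ['offset' => 5242880, 'count' => 5000],
    // ... roughly 200 chunks for a 1 GB file
];

function chunkForPage(array $chunks, int $page, int $perPage): ?array
{
    $skip = ($page - 1) * $perPage;

    foreach ($chunks as $chunk) {
        if ($skip < $chunk['count']) {
            // Only this chunk's slice of the file needs to be opened and read.
            return ['offset' => $chunk['offset'], 'skip' => $skip];
        }

        $skip -= $chunk['count'];
    }

    return null; // the requested page is past the end of the file
}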

What’s next?

So, what’s next on the roadmap?

There’s still a number of features I’d like to build into the Log Viewer, namely:

  • Date filters. With the global search now available, it makes perfect sense to limit your search to a particular date range. Not only would it be faster, but also give you only the results you need.
  • Mobile-friendly UI. I underestimated the number of developers checking their logs on the go. I never do that myself, so I did not think much of it. It was a “nice to have” feature that I did not prioritize for the first release.
  • Custom Log Parsers. You most likely have logs other than just Laravel. Apache/nginx, MySQL/PostgreSQL, Supervisor, and so on. There are many more logs that you might need to read for debugging. Wouldn’t it be nice if all of these logs could be viewed in the Log Viewer?

If there’s another feature you’d like to see as part of the Log Viewer, be sure to check our GitHub Discussions page where you can upvote existing feature requests or suggest your own!

You can also open up a Pull Request with your feature implemented – I welcome all code suggestions and improvements and would love your help to make the Log Viewer even better.


Don’t forget to visit the Log Viewer’s GitHub page and ⭐️ star it!

GitHub – opcodesio/log-viewer: Fast and beautiful Log Viewer for Laravel

Fast and beautiful Log Viewer for Laravel. Contribute to opcodesio/log-viewer development by creating an account on GitHub.

If you’d like to know more about what I do, subscribe to the mailing list here and follow me on Twitter – https://twitter.com/arukomp

Thank you to all the contributors and supporters, I appreciate you all!

Laravel News Links

Consuming APIs in Laravel with Guzzle


Guzzle is a PHP HTTP client that Laravel uses to send outgoing HTTP requests to communicate with external services. Guzzle’s wrapper in Laravel focuses on the most popular use cases while providing a great development experience. Using Guzzle will save you time and reduce the number of lines of code in your application, making it more readable.

A common use case is when two Laravel applications are developed, and one functions as the server while the other is the client. They will need to make requests to each other. Here are a few things we’ll cover in this article:

  • What is an API?
  • Difference between external and internal APIs.
  • What is Guzzle?
  • Laravel’s HTTP Client.
  • Making requests.
  • Inspecting the response format.

Prerequisites

The following will help you keep up with this tutorial:

What Is an API?

An API is a software-to-software interface that enables two applications to exchange data. APIs are propelling a new era of service-sharing innovation. They are used by nearly every major website you can think of, including Google, Facebook, and Amazon. All of these websites and tools use and provide ways for other websites and products to consume and extend each other’s data and services. You’ve been in the presence of an API if you’ve ever signed into an app or service using your Facebook or Google credentials.

They serve as a bridge between developers and the different programs that people and organizations use on a regular basis, allowing them to create new programmatic interactions. Companies can open up their applications’ data and functionality to external third-party developers, commercial partners, and internal departments through an application programming interface, or API. Through a documented interface, services and products can communicate with one another and benefit from each other’s data and capability.

Here are some common examples of APIs:

  • Pay with PayPal
  • Twitter Bots
  • Sign In with Google

How Do APIs Work?

APIs operate as an intermediary layer between an application and a web server, facilitating data transfer across systems. To retrieve information, a client application makes an API call, often known as a request. This request, which contains a request verb, headers, and sometimes a request body, is sent from an application to the web server via the API’s uniform resource identifier (URI).
The API makes a call to the external program or web server after receiving a valid request. The server responds to the API with the requested data. The data are transferred to the requesting application via the API.

Internal vs. External APIs

Internal APIs provide a safe environment for teams to share data and break down traditional data silos. An internal API is an interface that allows developers to gain access to a company’s backend information and application functionality. The new apps built by these developers can subsequently be shared with the public, although the interface is hidden from anyone who isn’t working directly with the API publisher.
Internal APIs can help cut down on the time and resources required to develop apps. Developers can leverage a standardized pool of internal software assets instead of designing apps from scratch.

External APIs provide secure communication and content sharing outside of an organization, resulting in a more engaging customer experience. An external or open API is an API designed for access by a larger population, as well as by web developers. Thus, an external API can be easily used by developers inside the organization (that published the API) and any other developer from the outside who desires to register with the interface.

Benefits of APIs

Abstraction

An API makes programming easier by abstracting the implementation and just exposing the objects or actions that the developer needs.

Security

They enable secure communication of abstracted data to be displayed or used as required.

Automation

APIs allow computers to manage tasks rather than individuals. Agencies can use APIs to update workflows to make them more efficient.

Personalization

An API can be used to establish an application layer for use in disseminating information and services to new audiences and can be customized to create unique user experiences.

What Is a REST API?

REST is an acronym for representational state transfer. The first distinction to make is that API is the superset, whereas REST API is the subset. This implies that while all REST APIs are APIs, not all APIs are REST APIs. REST is a software architectural style developed to guide the design and development of the World Wide Web’s architecture. REST API is an API that follows the design principles of the REST architectural style. REST APIs are sometimes referred to as RESTful APIs because of this. For developers, REST provides a considerable amount of flexibility and independence. REST APIs have become a popular approach for connecting components and applications in a microservices architecture because of their flexibility.

REST APIs use HTTP requests to perform common database activities, including creating, reading, updating, and deleting records (also known as CRUD). A REST API might, for example, use a GET request to retrieve a record, a POST request to create one, a PUT request to update one, and a DELETE request to delete one.
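
As a quick hypothetical sketch (the /users endpoints are made up, and the Laravel HTTP client used here is introduced later in this article), those verbs map onto calls like:

use Illuminate\Support\Facades\Http;

$base = 'https://api.example.com';

Http::post("{$base}/users", ['name' => 'Ada']);     // Create a record
Http::get("{$base}/users/1");                        // Read a record
Http::put("{$base}/users/1", ['name' => 'Ada L.']);  // Update a record
Http::delete("{$base}/users/1");                     // Delete a record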

What is Guzzle?

Guzzle is a PHP HTTP client that makes sending HTTP requests with data and headers easy. It also makes integrating with web services simple. It offers a simple yet powerful interface for sending POST requests, streaming massive uploads and downloads, using HTTP cookies, and uploading JSON data, among other things. One amazing feature is that it allows you to make synchronous and asynchronous requests from the same interface.

Previously, we relied on cURL for similar tasks. However, as time passed, more advancements have been made. Guzzle is a result of one of these advancements.

Laravel’s HTTP Client

Laravel wraps the Guzzle HTTP client in an expressive, minimal API, allowing your application to quickly make outgoing HTTP requests and communicate with other web apps efficiently. Requests can be sent and responses retrieved using the HTTP client. Guzzle is a powerful HTTP client, but when all you need is a simple HTTP GET or to retrieve data from a JSON API, the 80% use case can feel harder than it should. Here are some popular features the Laravel HTTP client provides:

  • JSON response data is easily accessible.
  • There’s no need for a boilerplate setup to make a simple request.
  • Failed requests are retried.

You may need to use Guzzle directly for more complicated HTTP client tasks. However, Laravel’s HTTP Client includes everything you’ll need for most of your applications.
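
To see the difference in ergonomics, here is the same JSON GET request written with Guzzle directly and then with Laravel’s wrapper (the URL is a placeholder):

use GuzzleHttp\Client;
use Illuminate\Support\Facades\Http;

// Plain Guzzle: build a client, make the request, decode the body yourself.
$client = new Client();
$guzzleResponse = $client->get('https://api.example.com/users');
$users = json_decode($guzzleResponse->getBody()->getContents(), true);

// Laravel's HTTP client: one fluent call, with JSON decoding built in.
$users = Http::get('https://api.example.com/users')->json();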

Making HTTP Requests to an API Using Laravel’s HTTP Client

For us to explore various options and use cases for Laravel’s HTTP Client, we’ll build a Laravel application that shows how to make requests to an external API using Laravel’s HTTP Client.

Note: The external API used here is hosted on my local server. You can use any API you like; it’s important to make sure you have the correct URL to make requests to the API.

The base URL we will be hitting for the user API in this article is the local development URL, http://127.0.0.1:8000.

Create a New Laravel Project

You can create a new Laravel project via the Composer command or the Laravel installer:

laravel new project_name
# or
composer create-project laravel/laravel project_name

Install the Package

You need to make sure that the Guzzle package is installed as a dependency of your application. You can install it using Composer:

composer require guzzlehttp/guzzle

Set Up the URL in .env and in config/app.php

Considering that you’ll be using this URL more than once, it’s a good idea to store it as one of your environment variables. This is beneficial because, if the URL needs to be changed later on, only the .env file will be modified rather than all the points at which the URL is called in the application. Store the URL as a variable in the .env file.

GUZZLE_TEST_URL=http://127.0.0.1:8000

Add it to the returned array in config/app.php:

'guzzle_test_url' => env('GUZZLE_TEST_URL'),

Next, refresh your configuration cache and clear the application cache by running these commands:

php artisan config:cache
php artisan cache:clear

Set up the Controller

You’ll be making GET, POST, and DELETE requests to the API in the controller.

Create the controller with this Artisan command:

php artisan make:controller TestController

Import the Http Facade

use Illuminate\Support\Facades\Http;

Note: The base URL is being fetched from config/app.php, and the specific endpoint for a request is appended to it.

Make a POST Request

public function createUser(Request $request)
{
    $theUrl = config('app.guzzle_test_url').'/api/users/create';

    $response = Http::post($theUrl, [
        'name' => $request->name,
        'email' => $request->email,
    ]);

    return $response;
}

Make a GET Request

public function getUsers()
{
    $theUrl = config('app.guzzle_test_url').'/api/users/';

    $users = Http::get($theUrl);

    return $users;
}

Make a DELETE Request

public function deleteUser($id)
{
    $theUrl = config('app.guzzle_test_url').'/api/users/delete/'.$id;

    $delete = Http::delete($theUrl);

    return $delete;
}

Set Up the Route

Since this is an API, the routes are defined in the routes/api.php file.

use App\Http\Controllers\TestController;

Route::post('post/users', [TestController::class, 'createUser']);
Route::get('users/', [TestController::class, 'getUsers']);
Route::delete('delete/users/{id}', [TestController::class, 'deleteUser']);

Serve the Project

Run this Artisan command to serve the project:

Note: I am using port 9090; you can use any port you like.

php artisan serve --port 9090

Testing

We’ll be using Postman to test the API. These are the results for the requests, and they return successful for every request.

Screenshots: the POST, GET, and DELETE requests in Postman.

Inspect the Response Format

Laravel’s HTTP client can return the response in several formats. Further queries can be performed on the response, depending on the format it is returned in. By default, the response body is returned as a string.

String

Object

JSON
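
As a rough guide, the response object exposes a helper for each of these formats (the example below reuses the config value from earlier):

$response = Http::get(config('app.guzzle_test_url').'/api/users/');

$response->body();    // String – the raw response body
$response->object();  // Object – the JSON body decoded into a PHP object
$response->json();    // JSON – the body decoded into an associative array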

Check out the official Laravel documentation to learn more about Laravel’s HTTP client. It provides various options, such as authentication with bearer tokens, headers, timeouts, and many more, to make requests more flexible.

Conclusion

In this tutorial, you’ve learned how to consume APIs in Laravel using Laravel’s HTTP Client. Consuming APIs in Laravel is a broad concept on its own, but this tutorial can serve as a great starter guide. More information can be found in the official Laravel documentation. The code for this project is open-source and available on GitHub.

I am open to questions, contributions, and conversations on better ways to implement APIs, so please comment on the repository or DM me on Twitter.

Thanks for reading 🤝.


Laravel News Links

Require Signatures and Associate Them With Eloquent Models

https://laravelnews.imgix.net/images/laravel-signature-pad.jpg?ixlib=php-3.3.1

Laravel pad signature is a package to require a signature associated with an Eloquent model and optionally generate certified PDFs.

This package works by using a RequiresSignature trait (provided by the package) on an Eloquent model you want to associate with a signature. Taken from the readme, that might look like the following:

namespace App\Models;

use Creagia\LaravelSignPad\Concerns\RequiresSignature;
use Creagia\LaravelSignPad\Contracts\CanBeSigned;
use Illuminate\Database\Eloquent\Model;

class Delivery extends Model implements CanBeSigned
{
    use RequiresSignature;

    // ...
}

You can also generate PDF documents with the signature by implementing the package’s ShouldGenerateSignatureDocument interface. You can use a blade file or a PDF file as the basis for the signature document template. See the readme for details.

Once you have the package set up, a blade component provides the needed HTML to render the signature pad (shown with customizations):

<x-creagia-signature-pad
    border-color="#eaeaea"
    pad-classes="rounded-xl border-2"
    button-classes="bg-gray-100 px-4 py-2 rounded-xl mt-4"
    clear-name="Clear"
    submit-name="Submit"
/>

Once you’ve collected signatures from users, you can access the signature image and document on the model like so:

// Get the signature image path
$myModel->signature->getSignatureImagePath();

// Get the signed document path
$myModel->signature->getSignedDocumentPath();

This package also supports certifying the PDF, and instructions are provided in the readme. You can learn more about this package, get full installation instructions, and view the source code on GitHub.


This package was submitted to our Laravel News Links section. Links is a place the community can post packages and tutorials around the Laravel ecosystem. Follow along on Twitter @LaravelLinks

Laravel News

Laravel 9 Model Events Every Developer Should Know


Posted by Mahedi Hasan in Laravel 9 on September 14, 2022

Hello artisan,

If you have worked with React, Vue, or WordPress, you will have noticed that each provides basic hooks for reacting to lifecycle changes. Laravel has model events in the same spirit: they are called automatically when a model changes.

In this tutorial, I will share some knowledge of Laravel model events and how we can use them to make our code more dynamic. Suppose you would like to trigger something whenever a model’s data changes – that is exactly the situation model events are for.

There are many model events in Laravel 9, such as creating, created, updating, updated, saving, saved, deleting, deleted, restoring, and restored. Events allow you to easily execute code each time a specific model class is saved or updated in the database.

Take a brief look:

  • creating: called before a record is created.
  • created: called after a record is created.
  • updating: called before a record is updated.
  • updated: called after a record is updated.
  • deleting: called before a record is deleted.
  • deleted: called after a record is deleted.
  • retrieved: called after a record is retrieved from the database.
  • saving: called before a record is created or updated.
  • saved: called after a record is created or updated.
  • restoring: called before a record is restored.
  • restored: called after a record is restored.
  • replicating: called when a record is replicated.

 


 

Let’s see an example of the use case of model events in Laravel:

namespace App\Models;
  
use Illuminate\Database\Eloquent\Factories\HasFactory;
use Illuminate\Database\Eloquent\Model;
use Log;
use Str;
   
class Product extends Model
{
    use HasFactory;

    public static function boot() {
        parent::boot();

        static::creating(function($item) {            
            Log::info('This event will call before creating data'); 

            // You can write any logic here, like:
            $item->slug = Str::slug($item->name);
        });
  
        static::created(function($item) {           
            Log::info('This event will call after creating data'); 
        });
  
        static::updating(function($item) {            
            Log::info('This event will call before updating data'); 
        });
  
        static::updated(function($item) {  
            Log::info('This event will call after updating data'); 
        });

        static::deleted(function($item) {            
            Log::info('This event will call after deleting data'); 
        });
    }
}

 

You can also use the $dispatchesEvents property of a model to map a model event to your own event class, like:

protected $dispatchesEvents = [
   'saved' => \App\Events\TestEvent::class,
];

 

This TestEvent will be dispatched after the data is saved. You write your logic inside the TestEvent class.
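
For context, here is a minimal sketch of what such an event class might look like (the class name comes from the snippet above; everything else is an assumption):

namespace App\Events;

use App\Models\Product;
use Illuminate\Foundation\Events\Dispatchable;
use Illuminate\Queue\SerializesModels;

class TestEvent
{
    use Dispatchable, SerializesModels;

    // Eloquent passes the saved model instance to the event's constructor.
    public function __construct(public Product $product)
    {
    }
}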

 

Read also: How to Use Laravel Model Events

 

Hope it can help you.

 

Laravel News Links

Top Gun but with Cats

https://theawesomer.com/photos/2022/10/top_gun_with_cats_t.jpg


When the local pigeons wreak havoc on his hometown, Prince Michael and his furry pals decide to take action. After a failed attempt at a ground offensive, the cats (and dog) of Aaron’s Animals take to the skies in tiny fighter jets on a mission to defeat their avian, poop-bombing foes.

The Awesomer

Learn how to upload files in Laravel like a Pro

https://laravelnews.imgix.net/images/uploading-files-like-a-pro.png?ixlib=php-3.3.1

One of the things that I see many people struggling with is file uploads. How do we upload a file in Laravel? What is the best way to upload a file? In this tutorial, I will walk through from a basic version using blade and routes, getting more advanced, and then how we might handle it in Livewire too.

To begin with, let’s look at how we might do this in standard Laravel and Blade. There are a few packages that you can use for this – however, I am not a fan of installing a package for something as simple as uploading a file. However, suppose you want to upload a file and associate it with a model and have different media collections for your model. In that case, Spatie has a great package called MediaLibrary and MediaLibrary Pro, which takes a lot of the hassle out of this process for you.

Let’s assume we want to do this ourselves and not lean on a package for this. We will want to create a form that allows us to upload a file and submit it – and a controller will accept this form, validate the input and handle the upload.

Before that, however, let us create a database table in which we can store our uploads. Imagine a scenario where we want to upload and attach files to different models. We wish to have a centralized table for our media that we can attach instead of uploading multiple versions for each model.

Let’s create this first using the following artisan command:

php artisan make:model Media -m

This will create the model and migration for us to get started. Let’s take a look at the up method on the migration so we can understand what we will want to store and understand:

Schema::create('media', function (Blueprint $table) {
    $table->id();

    $table->string('name');
    $table->string('file_name');
    $table->string('mime_type');
    $table->string('path');
    $table->string('disk')->default('local');
    $table->string('file_hash', 64)->unique();
    $table->string('collection')->nullable();

    $table->unsignedBigInteger('size');

    $table->timestamps();
});

Our media will require a name so that we can pull the client’s original name from the upload. We then want a file name, which will be a generated name. Storing uploaded files using the original file name can be a significant issue regarding security, especially if you are not validating strongly enough. The mime type is then required so that we can understand what was uploaded, whether it was a CSV or an image. The path to the upload is also handy to store, as it allows us to reference it more easily. We record the disk we are storing this on so that we can dynamically work with it within Laravel, however we might be interacting with our application. We store the file hash as a unique column to ensure we do not upload the same file more than once. If the file changes, this would be a new variation and fine to upload again. Finally, we have collection and size, where we can save a file to a collection such as “blog posts”, creating a virtual directory/taxonomy structure. Size is there for informational purposes mainly but will allow you to ensure that your digital assets aren’t too large.

Now we know where we want to store these uploads, we can look into how we want to upload them. We will start with a simple implementation inside a route/controller and expand from there.

Let’s create our first controller using the following artisan command:

php artisan make:controller UploadController --invokable

This will be where we route uploads to, for now, an invokable controller that will synchronously handle the file upload. Add this as a route in your web.php like so:

Route::post('upload', App\Http\Controllers\UploadController::class)->name('upload');
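
A minimal Blade form posting to this route might look like the following sketch (the file field name matches the validation rules used later on):

<form method="POST" action="{{ route('upload') }}" enctype="multipart/form-data">
    @csrf

    <input type="file" name="file">

    @error('file')
        <p>{{ $message }}</p>
    @enderror

    <button type="submit">Upload</button>
</form>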

Then we can look at how we want this process to work. To start with, like all other endpoints – we want to validate the input early on. I like to do this in a form request as it keeps things encapsulated nicely. You can do this part however you feel appropriate; I will show you the rules below I use:

use Illuminate\Validation\Rules\File;

return [
    'file' => [
        'required',
        File::types(['png', 'jpg'])
            ->max(5 * 1024),
    ],
];

So we must send a file across in our request, and it must be either a PNG or JPG and no bigger than 5MB (the max rule is expressed in kilobytes). You can use configuration to store your default rules for this if you find it more accessible. However, I usually create a Validator class specific to each use case, for example:

class UserUploadValidator
{
    public function avatars(): array
    {
        return [
            'required',
            File::types(['png', 'jpg'])
                ->max(5 * 1024),
        ];
    }
}

Once you have your validation in place, you can handle this in your controller however you need to. Assume that I am using a Form Request and injecting this into my controller, though. Now we have validated, we need to process. My general approach to controllers is:

In an API, I process in the background, which usually means dispatching a job – but on the web, that isn’t always convenient. Let’s look at how we might process a file upload.

class UploadController
{
    public function __invoke(UploadRequest $request)
    {
        Gate::authorize('upload-files');

        $file = $request->file('file');
        $name = $file->hashName();

        $upload = Storage::put("avatars/{$name}", $file);

        Media::query()->create(
            attributes: [
                'name' => "{$name}",
                'file_name' => $file->getClientOriginalName(),
                'mime_type' => $file->getClientMimeType(),
                'path' => "avatars/{$name}",
                'disk' => config('app.uploads.disk'),
                'file_hash' => hash_file(
                    config('app.uploads.hash'),
                    storage_path(
                        path: "avatars/{$name}",
                    ),
                ),
                'collection' => $request->get('collection'),
                'size' => $file->getSize(),
            ],
        );

        return redirect()->back();
    }
}

So what we are doing here is first ensuring that the logged-in user is authorized to upload files. Then we want to get the file uploaded and the hashed name to store. We then upload the file and store the record in the database, getting the information we need for the model from the file itself.

I would call this the default approach to uploading files, and I will admit there is nothing wrong with this approach. If your code looks something like this already, you are doing a good job. However, we can, of course, take this further – in a few different ways.

The first way we could achieve this is by extracting the upload logic to an UploadService where it generates everything we need and returns a Domain Transfer Object (which I call Data Objects) so that we can use the properties of the object to create the model. First, let us make the object we want to return.

class File
{
    public function __construct(
        public readonly string $name,
        public readonly string $originalName,
        public readonly string $mime,
        public readonly string $path,
        public readonly string $disk,
        public readonly string $hash,
        public readonly null|string $collection = null,
        // size is included so it lines up with the media table and the services below
        public readonly null|int $size = null,
    ) {}

    public function toArray(): array
    {
        return [
            'name' => $this->name,
            'file_name' => $this->originalName,
            'mime_type' => $this->mime,
            'path' => $this->path,
            'disk' => $this->disk,
            'file_hash' => $this->hash,
            'collection' => $this->collection,
            'size' => $this->size,
        ];
    }
}

Now we can look at the upload service and figure out how we want it to work. If we look at the logic within the controller, we know we will want to generate a new name for the file and get the upload’s original name. Then we want to put the file into storage and return the Data Object. As with most code I write, the service should implement an interface that we can then bind to the container.

class UploadService implements UploadServiceContract
{
    public function avatar(UploadedFile $file): File
    {
        $name = $file->hashName();

        Storage::put("avatars/{$name}", $file);

        return new File(
            name: "{$name}",
            originalName: $file->getClientOriginalName(),
            mime: $file->getClientMimeType(),
            path: "avatars/{$name}",
            disk: config('app.uploads.disk'),
            hash: hash_file(
                config('app.uploads.hash'),
                storage_path(
                    path: "avatars/{$name}",
                ),
            ),
            collection: 'avatars',
            size: $file->getSize(),
        );
    }
}

Let us refactor our UploadController now so that it is using this new service:

class UploadController
{
    public function __construct(
        private readonly UploadServiceContract $service,
    ) {}

    public function __invoke(UploadRequest $request)
    {
        Gate::authorize('upload-files');

        $file = $this->service->avatar(
            file: $request->file('file'),
        );

        Media::query()->create(
            attributes: $file->toArray(),
        );

        return redirect()->back();
    }
}

Suddenly our controller is a lot cleaner, and our logic has been extracted to our new service – so it is repeatable no matter where we need to upload a file. We can, of course, write tests for this, too, because why do anything you cannot test?

it('can upload an avatar', function () {
    Storage::fake('avatars');

    $file = UploadedFile::fake()->image('avatar.jpg');

    post(
        action(UploadController::class),
        [
            'file' => $file,
        ],
    )->assertRedirect();

    Storage::disk('avatars')->assertExists($file->hashName());
});

We are faking the storage facade, creating a fake file to upload, and then hitting our endpoint and sending the file. We then asserted that everything went ok, and we were redirected. Finally, we want to assert that the file now exists on our disk.

How could we take this further? This is where we are getting into the nitty gritty, depending on your application. Let’s say, for example, that in your application, there are many different types of uploads that you might need to do. We want our upload service to reflect that without getting too complicated, right? This is where I use a pattern I call the “service action pattern”, where our service calls an action instead of handling the logic. This pattern allows you to inject a single service but call multiple actions through it – keeping your code clean and focused, and your service is just a handy proxy.

Let us first create the action:

class UploadAvatar implements UploadContract
{
    public function handle(UploadedFile $file): File
    {
        $name = $file->hashName();

        Storage::put("avatars/{$name}", $file);

        return new File(
            name: "{$name}",
            originalName: $file->getClientOriginalName(),
            mime: $file->getClientMimeType(),
            path: "avatars/{$name}",
            disk: config('app.uploads.disk'),
            hash: hash_file(
                config('app.uploads.hash'),
                storage_path(
                    path: "avatars/{$name}",
                ),
            ),
            collection: 'avatars',
            size: $file->getSize(),
        );
    }
}

Now we can refactor our service to call the action, acting as a useful proxy.

class UploadService implements UploadServiceContract
{
    public function __construct(
        private readonly UploadContract $avatar,
    ) {}

    public function avatar(UploadedFile $file): File
    {
        return $this->avatar->handle(
            file: $file,
        );
    }
}

This feels like over-engineering for a minor application. Still, for more extensive media-focused applications, this will enable you to handle uploads through one service that can be well-documented instead of fragmented knowledge throughout your team.

Where can we take it from here? Let’s step into user-land for a moment and assume we are using the TALL stack (because why wouldn’t you!?). With Livewire, we have a slightly different approach where Livewire will handle the upload for you and store this as a temporary file, which gives you a somewhat different API to work with when it comes to storing the file.

Firstly we need to create a new Livewire component that we can use for our file upload. You can create this using the following artisan command:

php artisan livewire:make UploadForm --test

Now we can add a few properties to our component and add a trait so that the component knows it handles file uploads.

final class UploadForm extends Component
{
    use WithFileUploads;

    public null|string|TemporaryUploadedFile $file = null;

    public function upload()
    {
        $this->validate();
    }

    public function render(): View
    {
        return view('livewire.upload-form');
    }
}

Livewire comes with a handy trait that allows us to work with File uploads straightforwardly. We have a file property that could be null, a string for a path, or a Temporary File that has been uploaded. This is perhaps the one part about file uploads in Livewire that I do not like.

Now that we have a basic component available, we can look at moving the logic from our controller over to the component. One thing we would do here is to move the Gate check from the controller to the UI so that we do not display the form if the user cannot upload files. This simplifies our component logic nicely.

Our next step is injecting the UploadService into our upload method, which Livewire can resolve for us. Alongside this, we will want to handle our validation straight away. Our component should now look like the following:

final class UploadForm extends Component
{
    use WithFileUploads;

    public null|string|TemporaryUploadedFile $file;

    public function upload(UploadServiceContract $service)
    {
        $this->validate();
    }

    public function rules(): array
    {
        return (new UserUploadValidator())->avatars();
    }

    public function render(): View
    {
        return view('livewire.upload-form');
    }
}

Our validation rules method returns our avatar validation rules from our validation class, and we have injected the service from the container. Next, we can add our logic for actually uploading the file.

final class UploadForm extends Component
{
    use WithFileUploads;

    public null|string|TemporaryUploadedFile $file;

    public function upload(UploadServiceContract $service)
    {
        $this->validate();

        try {
            $file = $service->avatar(
                file: $this->file,
            );
        } catch (Throwable $exception) {
            throw $exception;
        }

        Media::query()->create(
            attributes: $file->toArray(),
        );
    }

    public function rules(): array
    {
        return (new UserUploadValidator())->avatars();
    }

    public function render(): View
    {
        return view('livewire.upload-form');
    }
}

We need minimal changes to how our logic works – we can move it almost straight into place, and it will work.

This is how I find uploading files works for me; there are, of course, many ways to do this same thing – and some/most of them are a little simpler. It wouldn’t be a Steve tutorial if I didn’t go a little opinionated and overboard, right?

How do you like to handle file uploads? Have you found a way that works well for your use case? Let us know on Twitter!

Laravel News

MySQL Workbench Keys

https://blog.mclaughlinsoftware.com/wp-content/uploads/2022/10/lookup_erd.png

As I teach students how to create tables in MySQL Workbench, it’s always important to review the meaning of the checkbox keys. Then, I need to remind them that every table requires a natural key from our prior discussion on normalization. I explain that a natural key is a compound candidate key (made up of two or more column values), and that it naturally defines uniqueness for each row in a table.

Then, we discuss surrogate keys, which are typically ID column keys. I explain that surrogate keys are driven by sequences in the database. While a number of databases disclose the name of sequences, MySQL treats the sequence as an attribute of the table. In Object-Oriented Analysis and Design (OOAD), that makes the sequence a member of the table by composition rather than aggregation. Surrogate keys are also unique in the table but should never be used to determine uniqueness like the natural key. Surrogate keys are also candidate keys, like a VIN number uniquely identifies a vehicle.

In a well-designed table, you always have two candidate keys: one describes the unique row, and the other assigns a number to it. While you can perform joins using either candidate key, you should always use the surrogate key in join statements. This means you elect, or choose, the surrogate candidate key as the primary key. Then, you build a unique index for the natural key, which lets you query any unique row with human-decipherable words.

The column attribute table for MySQL Workbench is:

  • PK – Designates a primary key column.
  • NN – Designates a NOT NULL column constraint.
  • UQ – Designates a column that contains a unique value for every row.
  • BIN – Designates a VARCHAR data type column whose values are stored in a case-sensitive fashion. You can’t apply this constraint to other data types.
  • UN – Designates a column that contains an unsigned numeric data type. The possible values run from 0 to the maximum of the data type, whether integer, float, or double. The value 0 isn’t possible when you also check the PK and AI boxes, which make the column auto-increment up to the maximum value of the data type.
  • ZF – Designates zero fill, which pads a numeric value with leading zeros until all of the allotted space is consumed, acting like a left-pad function with zeros.
  • AI – Designates AUTO_INCREMENT, and should only be checked for a surrogate primary key column.

All surrogate key columns should have the PK, NN, UN, and AI checkboxes checked. The default behavior checks only the PK and NN checkboxes and leaves the UN and AI boxes unchecked. You should check the UN checkbox along with the AI checkbox for all surrogate key columns. The AI checkbox enables AUTO_INCREMENT behavior, and the UN checkbox ensures you have the maximum range of integers available before you would need to migrate the table to a double-precision number.
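
As a rough sketch of what those checkboxes produce in DDL (the table and column names here are hypothetical), a surrogate key column with PK, NN, UN, and AI checked, plus a unique index over the natural key, looks something like this:

CREATE TABLE contact
( contact_id  INT UNSIGNED NOT NULL AUTO_INCREMENT  -- PK, NN, UN, AI
, first_name  VARCHAR(20)  NOT NULL                 -- NN
, last_name   VARCHAR(20)  NOT NULL                 -- NN
, PRIMARY KEY (contact_id)
, UNIQUE INDEX contact_uq (first_name, last_name)); -- unique index over the natural key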

Active tables grow quickly, and using a signed INT means you run out of rows sooner. This is an important design consideration because using a signed INT adds a maintenance task later. The maintenance task will require changing the data type of all dependent foreign key columns before changing the primary key column’s data type. Assuming your design uses referential integrity constraints, implemented as foreign keys, you will need to:

  • Remove any foreign key constraints before changing the referenced primary key and dependent foreign key column’s data types.
  • Change the primary and foreign key column’s data types.
  • Add back foreign key constraints after changing the referenced primary key and dependent foreign key column’s data types.

While fixing a less optimal design is a relatively simple scripting exercise for most data engineers, you can avoid this maintenance task by implementing all surrogate primary key columns and foreign key columns with an unsigned INT as their initial data type.

The following small ERD displays a multi-language lookup table, which is preferable to a monolingual enum data type:

A design uses a lookup table when there are known lists of selections to make. There are known lists that occur in most if not all business applications. Maintaining that list of values is an application setup task and requires the development team to build an entry and update form to input and maintain the lists.

Some MySQL examples demonstrate these types of lists by using the MySQL enum data type. However, the enum type doesn’t support multilingual implementations, isn’t readily portable to other relational databases, and has a number of limitations.

A lookup table is the better solution to using an enum data type. It typically follows this pattern:

  • Identify the target table and column where a list is useful. Use the table_name and column_name columns as a super key to identify the location where the list belongs.
  • Identify a unique type identifier for the list. Store the unique type value in the type column of the lookup table.
  • Use a lang column to enable multilingual lists.

The combination of the table_name, column_name, type, and lang let you identify unique sets. You can find a monolingual implementation in these two older blog posts:

The column view of the lookup table shows the appropriate design checkboxes:

While most foreign keys use copies of surrogate keys, there are instances when you copy the natural key value from another table rather than the surrogate key. This is done when your application will frequently query the dependent lookup table without a join to the lang table, which means the foreign key value should be a human friendly foreign key value that works as a super key.

A super key is a column or set of columns that uniquely identifies a row within the scope of a relation. For this example, the lang column identifies rows that belong to a language in a multilingual data model. Belonging to a language is the relation between the lookup and language tables. It is also a key when filtering rows with a specific lang value from the lookup table.

You navigate to the foreign key tab to create a lookup_fk foreign key constraint, like:
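
In DDL terms, the constraint is roughly equivalent to the following (column names are assumed from the ERD, and the referenced lang column must be indexed in the language table):

ALTER TABLE lookup
  ADD CONSTRAINT lookup_fk FOREIGN KEY (lang)
  REFERENCES language (lang);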

With this type of foreign key constraint, you copy the lang value from the language table when inserting the lookup table values. Then, your HTML forms can use the lookup table’s meaning column in any of the supported languages, like:

SELECT lookup_id
,      type
,      meaning
FROM   lookup
WHERE  table_name = 'some_table_name'
AND    column_name = 'some_column_name'
AND    lang = 'some_lang_name';

The type column value isn’t used in the WHERE clause to filter the data set because it is unique within the relation of the table_name, column_name, and lang column values. It is always non-unique when you exclude the lang column value, and potentially non-unique for another combination of the table_name and column_name column values.

If I’ve left you with questions, let me know. Otherwise, I hope this helps clarify a best design practice.

Planet MySQL