Top Gun: Maverick spoiler-free review: A worthy return to the danger zone


Tom Cruise, still crazy after all these years.


Skydance Productions

As I walked out of my review screening of Top Gun: Maverick, coming down from its adrenaline-filled finale, a small part of my brain began looking for dents in the film’s armor. Maybe it’s the critic in me, but my thoughts didn’t need long to land on stuff from the original film—a plot point, a stylistic choice, a particular character—that didn’t return this time.

I chewed on those thoughts for a second, but before I could lose myself in cataloging them at length, a sensation came over me. It landed like a massive G-force blast, as if I were a jet fighter pilot attempting a seemingly impossible climb: a feeling of great satisfaction with this sequel, and admiration that it pulled off the tricky feat of adhering to the old while doing something new.

Returning to old haunts.


Skydance Productions

The series’ predilection for steering military theater toward Hollywood-style silliness is arguably more tolerable this time, tempered as it is by a savvy script and cutting-edge stunt work. The character development hits important notes for both Pete “Maverick” Mitchell and the people in his high-speed orbit, and the film’s focused supporting cast mostly lands.

Perhaps most important of all, an aging-yet-excited Tom Cruise never steps beyond his pay grade. The Top Gun star of roughly 35 years ago ruled movie theaters for different reasons than the man he is today, yet this film never sees his character Maverick betray his beloved traits or feel like an old man pretending to be a 20-something hotshot.

A few of the series’ moving parts have been jettisoned so many years later, and lifetime fans of the film will definitely notice. But Top Gun‘s core tenets—incredible fighter-jet combat, enjoyable cheese, and Cruise’s big grin—have returned in arguably finer form than in the original.

“Don’t think, just do”

Skydance has only released traditional theater ratio footage of the film for consumption outside of theaters, so you'll have to trust me when I say that shots like this look doubly incredible inside a 16:10 ratio container.


Skydance Productions

Top Gun: Maverick has the added benefit of looking incredible on a large screen, and it’s perhaps the best IMAX format showcase of the past five years. Cruise and co. were clearly eager to take cinematic air combat to the next level, and there’s no getting around it: If you have to stitch three hospital-grade masks together or rent out a private room to feel comfortable in a public movie theater in 2022, you should consider doing so for this film.

Every major flight scene includes per-cockpit camera rigs that emphasize the added height of IMAX’s 16:10 ratio, and in these moments, flying is choreographed to let this camera angle showcase Top Gun-caliber stuff. You might see another plane in view, or vapor trails, or dumped flares dancing and billowing smoke, or a glancing shadow of the jet against the Earth’s surface because the F/A-18 Hornet is actually flying that freaking low in real life. In these moments, the actors don’t hesitate to explode with emotion, whether shrinking back or splashing their palms on the cockpit glass that extends across the entire IMAX screen.

In Top Gun: Maverick, all buzzing is essential—and it’s always portrayed with incredible detail.


Skydance Productions

Top Gun: Maverick spends a lot of time in this perspective, so it’s good to see the stunt teams and cinematographers repeatedly strike a hot beach volleyball high-five over this collaboration. Yet the crew also makes up for lost time since the first film by letting external cameras, including expertly staged drones, linger over death-defying flight sequences, or by using wide-angle shots to show how foolishly close its stunt flyers zip past each other. The 1986-style hard camera cuts used to stitch together a shot-down bogey are gone. This time, we get to watch full dogfights lead up to each climactic kaboom.

Really, the lengths to which this film goes to favor real-life stunts over green-screen trickery are incredible. Everyone will have a different favorite on this front, but mine is a dramatic fly-by somewhat early in the film that I won’t spoil for you, except to say that it was reportedly filmed with actors taking the real-life brunt of its buzz. You’ll know it (and feel it) when you see it.

My only shoulder-shrug about the air-combat content comes from a few CGI-filled briefings. In each of these, commanding officers point at holograms and break down each step of a mission or exercise—as if Cruise insisted that this film resemble the Mission: Impossible series in one way or another. While these moments are tolerable, I felt they were explanation overkill that took time away from getting the film’s cameras up into the danged skies.

Ars Technica

Laravel Real-Time performance monitoring & alerting using Inspector


Hi, I’m Valerio: software engineer, founder, and CTO at Inspector.

As a product owner, I learned how hard an application issue can be to fix once it is negatively impacting the users’ experience, or worse, blocking new potential customers during onboarding.

I publish new code changes almost every day. Unfortunately, it’s impossible to anticipate all the problems that could happen after each release. Furthermore, users don’t spend their time reporting bugs. They stop using our application if it doesn’t work as expected. Then they look for another one that better fits their needs.

In most of the projects I’ve worked on, 50% of the problems users encounter are due to simple code mistakes. And the more the application grows (more lines of code, new developers at work), the more difficult it is to avoid incidents.

When I started to share the idea behind Inspector, I realized that many developers know the problem: they spend too much time investigating strange behaviors inside their applications. Still, they didn’t know there was a solution that could end this complexity in two minutes, avoiding customer complaints or even the loss of the customer.

Be the first to know if your application is in trouble, before users stumble onto the problem, and drastically reduce the negative impact on their experience. This gives you the proper foundation to run a successful user acquisition process and to increase engagement with fewer interruptions.

Laravel Code Execution Monitoring: how it works

Inspector is a composer package to add real-time code execution monitoring to your Laravel application. It allows you to work on continuous code changes while catching bugs and bottlenecks in real-time. Before users do.

It takes less than one minute to get started. Let’s see how it works.

Install the composer package

Run the composer command below in your terminal to install the latest version:

composer require inspector-apm/inspector-laravel

Configure the Ingestion Key

Get a new Ingestion key by signing up for Inspector (https://app.inspector.dev/register) and creating a new project; it only takes a few seconds.

You’ll see installation instructions directly in the app screen:

Put the API key into your environment file:

INSPECTOR_API_KEY=xxxxxxxxxxxxxxx

Test everything is working

Execute the test command to check if your app sends data to Inspector correctly:

php artisan inspector:test

Go to https://app.inspector.dev/home to explore the demo data.


By default Inspector monitors:

  • Database interactions
  • Queued Jobs execution
  • Artisan commands
  • Emails sent
  • Unhandled Exceptions

But so far we have only turned on the light in the ~50% of our app that executes in the background. The next step is to monitor all the execution cycles generated by user interactions.

Monitor Incoming HTTP Requests

To activate HTTP request monitoring, you can use the WebRequestMonitoring middleware as an independent component. You are then free to decide which routes need to be monitored, based on your routes configuration or your monitoring preferences.

Attach the middleware in the App\Http\Kernel class:

/**
 * The application's route middleware groups.
 *
 * @var array
 */
protected $middlewareGroups = [
    'web' => [
       …,
       \Inspector\Laravel\Middleware\WebRequestMonitoring::class,
    ],

    'api' => [
       …,
       \Inspector\Laravel\Middleware\WebRequestMonitoring::class,
    ],
];

Deploy your code and navigate the execution flow

The next step is to deploy your code to the production environment. Next, check out how Inspector creates a visual representation of what happens inside your code.

You will see transaction streams in your dashboard. And for each transaction, you can monitor what your application executes in real-time:

Enrich the Inspector timeline

Inspector monitors database queries, background jobs, and artisan commands by default. Still, there might be many critical statements in your code that need monitoring for performance and errors:

  • HTTP calls to external services
  • Functions that deal with files (PDF, Excel, images)

Thanks to Inspector, you can add custom segments in your timeline besides those detected by default. This allows you to measure the impact that a hidden code block has on a transaction’s performance.

Let me show you a real-life example.

Suppose you have a queued job that executes some database queries and an HTTP request to an external service in the background.

Inspector detects the job and the database queries by default. Still, it could be interesting to monitor and measure the execution of the HTTP request to the external service, and then activate alerts if something goes wrong.

Use the inspector() helper function:

class TagUserAsActive extends Job
{
    /** 
     * @var User $user 
     */
    protected $user;

    // Monitoring an external HTTP request ($this->guzzle is an injected Guzzle client)
    public function handle()
    {
        inspector()->addSegment(function () {            
            $this->guzzle->post('/mail-marketing/add_tag', [
                'email' => $this->user->email,
                'tag' => 'active',
            ]);        
        }, 'http');
    }
}

You will be able to identify the impact of the new segment in the transaction timeline:

Laravel Errors & Exceptions Alerting

By default, every exception fired in your Laravel app is reported. This ensures you’re alerted to unpredictable errors in real-time.

I wish that every change I make to my code could be perfect. But the reality is that this is not always the case. Some errors appear immediately after an update, while others pop up unexpectedly. It’s an unfortunate fact of life for developers. And it often also depends on problems caused by the connection between our application and other services.

Yet, Inspector makes the job easier. It automates the detection of unknown issues, so you no longer need to manually check the status of your apps. You no longer wait for reports from users. If something goes wrong, you’ll receive a notification in real-time. And after each release, you can stay informed about the impact of the latest code refactor.

If your code fires an exception, but you don’t want to block the execution, manually report the error to Inspector for personal monitoring.

try {

    // Your dangerous code here...

} catch (GuzzleException $exception) {
    inspector()->reportException($exception);
}

Furthermore, if the HTTP request fails, you are alerted in real-time via your inbox to examine the error.

You even get access to detailed information gathered by Inspector in real time:

Conclusion

When a customer reports that something isn’t working, it forces you to drop whatever you are doing, start trying to reproduce the scenario, then recapture and reanalyze the logs in your toolset.

Getting an accurate picture of what’s happening can take hours or even days. Inspector can make a massive difference in efficiency, productivity, and customer happiness.

New to Inspector?

Get a monitoring environment specifically designed for software developers. Avoid any server or infrastructure configuration that many developers hate to deal with.

Thanks to Inspector, you will never need to install things at the server level or make complex configurations in your cloud infrastructure.

Inspector works with a lightweight software library that you install in your application like any other dependency. Try the official Laravel package.

Create an account, or visit our website for more information: https://inspector.dev/laravel/

Laravel News Links

How to scale an agency while managing 2,000 client websites


From ensuring that you hire the right people and are retaining employees, to onboarding long-term clients that will allow your business to grow, there’s no doubt that scaling any agency comes with its challenges. Once you reach a certain level in your agency, serving and managing multiple clients and their websites, things can get even more demanding. 

As a creative agency owner with over 25 years of experience, I can sincerely say that scaling an agency while managing 2,000 websites is no easy feat, but with the right know-how and tools, it is possible. 

Simplify the most important processes

When you’re managing a myriad of different elements, simplifying all areas of how your agency operates is essential. To achieve this, agencies must first begin by assessing which critical tasks are taking up the most time or require the most input from the large majority of their teams. Essentially, business owners need to strategically lower the impact that the most burdensome and important work has on the operations team. 

An agency specializing in designing websites, such as my own, will most likely realize that they need to understand their team’s strengths and design logistics to optimize the business. In my own business, we came to understand that we needed a software solution that would simplify and facilitate our agency’s ability to easily produce professional websites at a faster rate for our clients. Our thinking at the time was that if we reduced the effort and time it took to fulfill our most critical task, we could free up time and resources to onboard new clients and ultimately grow our business. 

Utilizing a low-code/no-code website building solution such as Duda helped us and will help growing agencies simplify the production and the workflow of their development, creative, and management teams. As a result, an agency’s core employees can rapidly create and finalize 10-page websites – which would normally take 20 hours to develop – within three to four hours. With up to 17 additional hours freed up per website build, agencies that are just starting out can rapidly grow their business. More established agencies that manage many accounts will also benefit greatly from having additional hours to spare, as they can utilize these hours to manage even more clients and deliver even more products and services. For example, my agency has only one person in charge of maintaining 2,000 sites, because the website builder we use, Duda, allows us to easily take care of updates and keep websites modern and constantly upgraded to the latest best practices.

Deliver a product that’s easy to use and unmatched in quality

The quality of the final product delivered to a client will greatly affect whether an agency will receive repeat business and word-of-mouth referrals. While spending money on marketing to bring in new clients is a great strategy in the short term, giving existing customers an incredible user experience and product will ensure that clients become free brand ambassadors, referring people to your business and plugging your service on social media.

Agencies that are managing a significant number of clients and want to drive high volumes of growth must utilize the most effective product support solutions to give their customers the best treatment. Pivoting to superior software, sooner, will help agencies deliver high-quality products and services. With a plethora of software solutions on the market, agencies must set aside time to investigate and test new software. Finding a solution that enhances the quality of the final product and makes the product easy to use might take time, but agencies should see this time as a necessary investment that will have good returns. 

Nurture revenue-building relationships with excellent support

Every relationship has the potential to be the key to an agency’s next big deal and growth. I’ll refer to my own agency as an example. In 2010, we started out with only eight clients. By keeping our clients happy, and with no sales or business development team, we grew to over 1,300 clients and counting. Clients who are well taken care of will reward you, and those who feel that your agency is not meeting their needs will warn their networks about your service. 

A major factor in maintaining a good relationship is the quality of support and communication they receive. When there is a request for their website to be updated, how long will it take for your agency to respond? If an average of a hundred service requests are received each week, can all requests be answered within two to three hours? Does your agency have a post-launch communication plan? These are the questions that agency owners need to ask themselves in order to assess the quality of their support. Agencies should never underestimate the power of calling clients regularly, solving their problems expeditiously, and sharing helpful information and insights without being prompted. 

Good service almost always leads to gaining a client’s trust. Once an agency has earned the trust of its clients, it is in a better position to offer additional services and will likely see clients remaining customers for a long time. While some may argue that retaining customers for a long period of time is insignificant, the reality, according to a survey conducted by conversion rate optimization experts Invesp, is that acquiring a new customer is five times more expensive than keeping an existing one. Mistreating or ignoring existing clients won’t get agencies any closer to actualizing their goal of scaling their business. 

A very important caveat is that not all clients are worth keeping. Most agencies will at some point encounter a client who cannot be satisfied, no matter what you do. To illustrate how we deal with high-stress clients in my own business, I’ll refer to a quarterly Board of Directors meeting which took place in 2018. At the meeting, one of the Board members asked what our client turnover rate was, and I proudly replied: “less than one percent.” To my surprise, the entire Board was adamant that the client churn rate should be higher, as keeping difficult clients was bound to hinder the agency’s continuous growth. Today, we are able to identify which clients are worth keeping and which aren’t – a skill that all growing agencies should adopt. While we have only had to let go of about 10 to 15 clients, the shift in thinking resulted in increased productivity and, more importantly, a much better atmosphere in the workplace. No client is worth keeping if they bring unhappiness and unnecessary stress to an agency’s employees.

Quality begets quantity

Growing an agency while managing thousands of clients, while extremely challenging, is possible. Agencies that want to grow must simplify processes, deliver a high-quality service, and excel at customer support to effectively and seamlessly scale. When an agency specializes in a specific product offering, it’s critical to streamline the process of how the product is built. Quality will result in quantity: the higher the quality of the final product, the more revenue an agency will see. Furthermore, and most importantly, offering memorable and outstanding customer service will guarantee that clients spread the word and drive significant business growth.

The post How to scale an agency while managing 2,000 client websites appeared first on noupe.

noupe

Exploring Aurora Serverless V2 for MySQL


Aurora Serverless V2 recently became generally available (on 21-04-2022) for MySQL 8 and PostgreSQL, with promising features that overcome the V1 disadvantages. The major features are below.

Features

  • Online auto instance upsize (vertical scaling)
  • Read scaling (supports up to 15 read replicas)
  • Supports mixed-configuration clusters, i.e., the writer can be normal (provisioned) Aurora with Serverless v2 readers, and vice versa
  • Multi-AZ capability (HA)
  • Aurora global databases (DR)
  • Scaling based on memory pressure
  • Scales vertically while SQL is running
  • Public IP allowed
  • Works with a custom port
  • Compatible only with Aurora version 3.02.0 and above, i.e., >= MySQL 8.0.23
  • Supports binlog
  • Support for RDS Proxy
  • High cost savings

Now let’s proceed to get our hands dirty by launching Serverless v2 for MySQL.

Launching Serverless V2

It’s time to choose the Engine & Version for launching our serverless v2

Engine type : Amazon Aurora

Edition : Amazon Aurora MySQL-Compatible Edition (only MySQL is used here)

Filters : Turn ON “Show versions that support Serverless v2” (saves time)

Version : Aurora MySQL 3.02.0 (compatible with MySQL 8.0.23)

Instance configuration & Availability

DB instance class : Serverless ‘Serverless v2 – new’

Capacity Range : Set based on your requirements and cost (1 to 64 ACUs)

Aurora capacity unit (ACU) : 2 GiB of RAM plus the corresponding CPU and network

Availability & Durability : Create an Aurora replica

While choosing the capacity range, the minimum ACU defines the lowest capacity to which the instance scales down (here, 1 ACU), and the maximum ACU defines the highest capacity to which it can scale up.

Connectivity and Misc setting:

Choose the below settings based on your application needs

  • VPC
  • Subnet
  • Public access (avoid it, for basic security)
  • VPC security group
  • Additional configuration (cluster group, parameter group, custom DB port, Performance Insights, backup config, auto minor version upgrade, deletion protection)

To keep it short, I accepted all the defaults and proceeded to “Create database”.

Once you click “Create database” you can see the cluster getting created. Initially, both of the nodes in the cluster will be marked as “Reader instance”; don’t panic, it’s quite normal.

Once the first instance becomes available, it is promoted to “Writer” and the cluster is ready to accept connections. After that, the reader gets created in an adjacent AZ; refer to the image below.

Connectivity & End-point:

A Serverless v2 cluster also provides three endpoints: a highly available cluster endpoint, a read-only endpoint, and individual instance endpoints.

  • Cluster endpoint – This endpoint connects your application to the current primary DB instance of the Serverless v2 cluster. Your application can perform both read & write operations through it.
  • Reader endpoint – A Serverless v2 cluster has a single built-in reader endpoint which is used only for read-only connections. It also balances connections across up to 15 read-replica instances.
  • Instance endpoints – Each DB instance in a Serverless v2 cluster has its own unique instance endpoint.

You should always map the cluster and read-only endpoints to your applications for high availability.

Monitoring:

Though CloudWatch covers the needed metrics, I set up PMM to get a deep & granular insight into DB behavior (I used this link for a quick installation). In short, for serverless I wanted to watch the following:

  • DB uptime, to see if DB reboots during scale-up or scale-down
  • Connection failures
  • Memory resize (InnoDB buffer pool)

Here I took a t2.large machine to install & configure PMM.

Now let’s take Serverlessv2 for a spin:

The beauty of Aurora Serverless V2 is that it supports both vertical scaling, i.e., automatic instance upsize, and horizontal scaling with read replicas.

The remaining portion of this blog will cover the vertical scaling feature of Serverless V2.

Vertical scaling:

With most clusters out there, the most difficult part is upsizing the writer instance on the fly without interrupting existing connections. Even with proxies/DNS for failover, there would be connection failures.

I was most curious about testing the vertical scaling feature, since AWS claimed it to be online and non-disruptive to existing connections, i.e., it happens even while SQL is running. Wow!! Fingers crossed.

Come on, let’s begin the test. I decided to remove the “reader instance” first; below is the view of our cluster now.

My initial buffer pool allocation was 672 MB: at our minimum of 1 ACU we have 2 GB of memory, out of which ¾ is allocated as the InnoDB buffer pool.
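If you want to watch these resize events yourself from the client side, here is a minimal sketch of my own (not part of the original test setup) that polls the buffer pool size once a minute over a plain PDO connection; the endpoint and credentials are placeholders:

<?php
// Poll innodb_buffer_pool_size once a minute to observe Serverless v2
// scaling the buffer pool up and down. Stop it with Ctrl+C.
$pdo = new PDO(
    'mysql:host=<your-cluster-endpoint>;port=3306', // placeholder endpoint
    'mydbops',                                      // placeholder user
    'your-password'                                 // placeholder password
);

while (true) {
    $bytes = $pdo->query('SELECT @@innodb_buffer_pool_size')->fetchColumn();
    printf("%s  buffer_pool = %.2f GB\n", date('H:i:s'), $bytes / (1024 ** 3));
    sleep(60);
}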

Test Case:

The test case is quite simple: I am imposing an insert-only workload (writes) using the simple load emulator tool sysbench.

Below is the command used

# sysbench /usr/share/sysbench/oltp_insert.lua --threads=8 --report-interval=1 --rate=20 --mysql-host=mydbops-serverlessv2.cluster-cw4ye4iwvr7l.ap-south-1.rds.amazonaws.com --mysql-user=mydbops --mysql-password=XxxxXxXXX --mysql-port=3306 --tables=8 --table-size=10000000 prepare

I started to load 8 tables in parallel with 8 threads and a dataset of 10M records per table (--table-size=10000000).

Observations and Timeline:

Scale-up:

Below are my observations during the scale-up process

  • Inserts started at 03:57:40, with COM_INSERT reaching 12.80/sec while Serverless was running with the 672 MB buffer pool. About 10 seconds later, the first scaling process kicked in and the buffer pool memory was raised to 2 GB. Let’s have a closer look.
  • A minute later, at 03:58:40, the second scaling process kicked in and the buffer pool size leaped to ~9 GB.
  • I was keenly watching the uptime of MySQL during each scale-up, and also watching for thread failures, but to my surprise both were intact: memory (the buffer pool) was scaling linearly at regular intervals of 60 secs and reached a max of 60 GB at 04:11:40.
  • The data loading completed at 04:10:50 (graphical stats below).

Scale Down:

  • After the inserts completed there was a brief idle period of about 5 minutes, since in production scale-down has to happen in a slow and steady fashion. The DB was completely idle and connections were closed; at 04:16:40 the buffer pool memory dropped from 60 GB to 48 GB.
  • The scale-down process kicked in at regular intervals of 3 minutes from the previous scale-down operation, and finally, at 04:34:40, the serverless instance was back at its minimum capacity.

Adaptive Scale-up & Down

I would say this entire scale-up and scale-down process is a very adaptive, intelligent, and well-organized one:

  • No lag in DB performance.
  • A linear increase & decrease of resources is maintained.
  • No DB reboots, and connection failures were kept at bay.

Below is the complete snapshot of the buffer pool memory scale-up & scale-down process along with the INSERT throughput stats; both processes together took around ~40 mins.

Along with the buffer pool, serverless also auto-tunes the below variables specific to MySQL:

  • innodb_buffer_pool_size
  • innodb_purge_threads
  • table_definition_cache
  • table_open_cache

AWS recommends keeping these values at their defaults in the custom parameter group of Serverless V2.

Below is the image summary of the entire scale-up and scale-down process.

AWS has nailed vertical scaling with Aurora Serverless; from my point of view it is production-ready, though it’s still in the early GA phase.

Summary:

  • The upsize happens gradually, on demand, every 1 min.
  • The downsize happens gradually, on idle load, every 3 min.
  • Supported from MySQL 8.0.23 onwards.
  • Leave the above-mentioned MySQL variables untouched; serverless auto-tunes them.

Use Cases:

Below are some of the use cases where Aurora serverless V2 fits in perfectly

  • Applications such as gaming, retail, and online gambling apps, where usage is high for a known period (say, daytime or during a match) and idle or underutilized otherwise
  • Testing and development environments
  • Multi-tenant applications where the load is unpredictable
  • Batch job processing

This is just a starting point; there are still a lot of conversations pending on Aurora ServerlessV2, such as horizontal scaling (read scaling), migration, parameters, DR, Multi-AZ failover, and pricing. Stay tuned here!!

If you would love to test Serverless V2 in your production environment, Mydbops database engineers are happy to assist.

Planet MySQL

Following Supreme Court Precedent, Federal Court Says Unexpected Collection Of Data Doesn’t Violate The CFAA


Last summer, the Supreme Court finally applied some common sense to the Computer Fraud and Abuse Act (CFAA). The government has long read this law to apply to pretty much any computer access it (or federal court litigants) doesn’t like, jeopardizing the livelihood of security researchers, app developers, and anyone who might access a system in ways the owner did not expect.

Allowing the government’s interpretation of the CFAA to move forward wasn’t an option, as the Supreme Court explained:

If the “exceeds authorized access” clause criminalizes every violation of a computer-use policy, then millions of otherwise law-abiding citizens are criminals. Take the workplace. Employers commonly state that computers and electronic devices can be used only for business purposes. So on the Government’s reading of the statute, an employee who sends a personal e-mail or reads the news using her work computer has violated the CFAA.

Or consider the Internet. Many websites, services, and databases “which provide ‘information’ from ‘protected computer[s],’ §1030(a)(2)(C)” authorize a user’s access only upon his agreement to follow specified terms of service. If the “exceeds authorized access” clause encompasses violations of circumstance-based access restrictions on employers’ computers, it is difficult to see why it would not also encompass violations of such restrictions on website providers’ computers. And indeed, numerous amici explain why the Government’s reading of subsection (a)(2) would do just that: criminalize everything from embellishing an online-dating profile to using a pseudonym on Facebook.

A decision [PDF] handed down by a New York federal court follows the Van Buren ruling to dismiss a lawsuit brought against a third-party app that collects and shares TikTok data to provide app users with another way to interact with the popular video sharing app. (h/t Orin Kerr)

Triller may exceed users’ expectations about what will be collected or shared, but it makes it pretty obvious it’s in the collection/sharing business. To utilize Triller, users have to opt in to data sharing right up front, as the court points out:

“To post, comment, or like videos, or to watch certain content on the App, users must create a Triller account.” ¶¶ 8, 30. When creating an account, a user is presented with a screen, depicted below, that provides various ways to sign up for an account:

This first step makes it clear Triller will need access to other social media services. Users can go the email route, but that won’t stop the app’s interaction with TikTok data. Hyperlinks on the sign-up screen direct users to the terms of service and privacy policy — something few users will (understandably) actually read.

But all the processes are in place to inform users about their interactions with Triller and its access to other social media services’ data. The court spends three pages describing the contents of these policies the litigant apparently did not read.

This is not to say users should be victimized by deliberately obtuse and convoluted terms of service agreements. If anything, more service providers should be required to explain, in plain English, what data will be collected and how it will be shared. But that’s a consumer law issue, not a CFAA issue, which is supposed to be limited to malicious hacking efforts.

Being unaware of what an app intends to do with user data is not a cause for action under the CFAA, especially now that some guardrails have been applied by the nation’s top court.

Wilson alleges that Triller exceeded its authorized access by causing users “to download and install the App” to their mobile devices “without informing users that the App contained code that went beyond what users expected the App to do,” by collecting and then disclosing the users’ information. However, as Triller argues, even assuming that Wilson is not bound by the Terms and thus did not authorize Triller to collect and disclose her information, it is not the case that Triller collects this information by accessing parts of her device that she expected or understood to be “off limits” to Triller. Van Buren, 141 S. Ct. at 1662. Rather, Wilson merely alleges that Triller collects and then shares information about the manner in which she and other users interact through the App with Triller’s own servers. Thus, at most, Wilson alleges that Triller misused the information it collected about her, which is insufficient to state a claim under the CFAA.

Wilson can appeal. But she cannot revive this lawsuit at this level. The federal court says the Van Buren ruling — along with other facts in this case — make it impossible to bring an actionable claim.

Accordingly, Wilson’s CFAA claim is dismissed with prejudice.

That terminates the CFAA claims. Other arguments were raised, but the court isn’t impressed by any of them. The Video Privacy Protection Act (VPPA) is exhumed from Blockbuster’s grave because TikTok content is, after all, recorded video. Violations of PII (personally identifiable information) dissemination restrictions are alleged. These are tied together and they both fail as well.

While the complaint alleges what sort of information could be included on a user’s profile and then ultimately disclosed to the third parties, it contains no allegation as to what information was actually included on Wilson’s profile nor how that information could be used by a third party to identify Wilson. Indeed, the complaint lacks any allegation that would allow the Court to infer a “firm and readily foreseeable” connection between the information disclosed and Wilson’s identity, thus failing to state a claim under the VPPA even assuming the broader approach set out in Yershov.

These claims are dismissed as well, though without prejudice. The same goes for Wilson’s claim of unjust enrichment under New York state law — something predicated almost entirely on the size of the hyperlinks directing users to Triller’s privacy policy and terms of service. Those claims can be amended, but there’s nothing in the decision that suggests they’ll survive dismissal again.

Wilson also brings a claim under Illinois’ more restrictive state law concerning user data (the same one used to secure a settlement from Clearview over its web scraping tactics), but it’s unclear how this law applies to an Illinois resident utilizing a service that is a Delaware corporation being sued in a New York federal court. It appears the opt-in process will be the determining factor, and that’s definitely going to weigh against the plaintiff. Unlike Clearview, which scrapes the web without obtaining permission from anyone or any site, Triller requires access to other social media sites to even function.

It’s a good decision that makes use of recent Supreme Court precedent to deter bogus CFAA claims. While Wilson may have legit claims under federal and state consumer laws (although this doesn’t appear to be the case here…), the CFAA should be limited to prosecution and lawsuits directed against actual malicious hacking, rather than app developers who are voluntarily given access to user information by users. This doesn’t mean entities like Triller should be let off the hook for obscuring data demands and sharing info behind walls of legal text. But the CFAA is the wrong tool to use to protect consumers from abusive apps.

Techdirt

The Ultimate Guide to Getting Started With Laravel


Disclaimer: This article is long. I recommend you code along as we progress. If you are not in front of a development computer, bookmark this URL and come back later.

Is this tutorial for you?

☑️ If you know PHP and wish to start using Laravel, this is for you.

☑️ If you know any PHP framework and wish to try Laravel, this is for you.

🙅 If you already know Laravel (well), this tutorial may not help that much.

Why this tutorial?

The official Laravel documentation is some of the best technical documentation out there. But it is not designed to get beginners started with the framework step by step; it doesn’t follow the natural way of learning a new thing.

In this tutorial, we will first discuss what needs to be done next, and then see what Laravel offers to achieve that. In some cases, we’ll discuss why something exists.

It’s important to note that for getting into the depths of a topic/feature, the Laravel documentation is the only resource you’ll need.

Assumptions

  • You know about Object-oriented programming (OOP)
  • You have an idea about the Model-View-Controller pattern (MVC)
  • You are okay with me not offering a designed demo project with this tutorial.

What is the goal?

We’ll start from scratch and code a simple Laravel web app together to learn the building blocks of Laravel and boost your confidence.

As you may have read:

The only way to learn a new programming language is by writing programs in it.

— Dennis Ritchie (creator of the C programming language)

Same way, you will learn a new framework by doing a small test project today.

Keep in mind that we do not plan to use all the features of Laravel in a single project. We will cover all the important features though.

On your mark, Get set, Go!

You should be ready to get in the active mode from this point onwards. No more passive reading. I want you to code along, break things, witness magic, and call yourself a Laravel developer by the end of this journey.

Come on a ride with me. It’ll be fun, I promise!

On your mark, Get set, Go

Computer requirements

The things that you need to make a Laravel web app run:

  • Nginx (Or a similar webserver to handle HTTP requests)
  • PHP (We’ll use PHP 8.1)
  • MySQL (Laravel supports MariaDB, PostgreSQL, SQLite, and SQL Server too)
  • A bunch of PHP extensions
  • Composer

Depending on your OS, there are a few options available:

Mac

I recommend Valet. It is lightweight and the go-to choice of many devs.

Laravel Sail is a great choice too for Docker fans.

Linux

Laravel Sail is easier for new Linux users.

If you are comfortable with Linux, like me, go with the direct installations.

Windows

I recommend Laragon. It is not from the official Laravel team but works well.

Other good choices are Laravel Sail, if you are comfortable with Docker, or Homestead, if you have a powerful computer that can run a virtual environment.

A note on Composer

If you haven’t come across Composer before, it’s the dependency manager for PHP. Our project depends on the Laravel framework. And the framework has a lot of other dependencies.

Using Composer allows us to easily install and update all these direct and nested dependencies. That kind of code reuse is the reason frameworks can offer such a huge number of features.

Setup the project

Once your computer is prepared to run a Laravel web app, fire your first command to create the project.

composer create-project laravel/laravel my-blog

This command does three things:

  1. Creates a new directory named my-blog
  2. Clones the Laravel skeleton application code into it
  3. Runs the composer install command to install the dependencies

As you may have guessed from the project name, we are developing a simple blog as our sample Laravel web app today.

A core PHP (non-framework) project starts from an empty directory, but a Laravel project starts with the skeleton app to get you up and running.

View in the browser

Many developers use Laravel’s local development server to view the web app during development. For that, open a terminal, cd into the project directory, and fire:

php artisan serve

You will be presented with a URL. Open that and you’ll see Laravel’s default page.

Laravel default page

Yay. It’s running. Great start.

I recommend using nginx to manage your local sites. You need to use it for production anyway.

Relax

There are many files and directories in the default Laravel web app structure. Please do not get overwhelmed. They are there for a reason and you do not need to know all of them to get started. We will cover them soon.

Create a database

You may use your favorite GUI tool or log in to the database server via console and create a database. We will need the DB name and user credentials soon.

While a Laravel web app can run even without a DB, why would you use Laravel then?
It’s like going to McDonald’s and ordering only a Coke. Okay, some people do it. 😵‍💫

Understanding the project configuration

The configuration values of a Laravel project can be managed from various files inside the config directory. Things like database credentials, logging setup, email sending credentials, third-party API credentials, etc.

The config values of your local environment will be different from the ones on the production server. And if multiple team members are working on the project, they may also need to set different config values on their local computers. But, the files are common for all.

Hmmm… How could that be managed then?

Laravel uses PHP dotenv to let you configure project config values specific to your environment. The .env file is used for the same. It is not committed to the version control system (git) and everyone has their local copy of it.

The Laravel skeleton app creates the .env file automatically when you run the composer create-project command. But when you clone an existing project, you need to duplicate it from the .env.example file yourself.
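For a cloned project, that bootstrap is just two commands, run from the project root:

cp .env.example .env
php artisan key:generate

The second command fills in the APP_KEY value, which is covered below.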

Working with the .env file

While there are many config values you can set in the .env file, we will discuss a few important ones only:

  • APP_NAME: A sane default. You may wish to change it to ‘My Awesome Blog’
  • APP_ENV: Nothing to change. Set this to ‘production’ on the live site.
  • APP_KEY: Used for encryption. Automatically set by the composer create-project command. If you clone an existing project, run the php artisan key:generate command to set it.
  • DB_*: Set respective config values here.

Setting these should get you started. You may update others in the future.
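For example, the database block of a local .env might look like the following; the database name and credentials are placeholders for whatever you created in the previous step:

DB_CONNECTION=mysql
DB_HOST=127.0.0.1
DB_PORT=3306
DB_DATABASE=my_blog
DB_USERNAME=root
DB_PASSWORD=secret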

The public directory

Laravel instructs that the public directory must be the entry point for all HTTP requests to your project. That is for security reasons: your .env file must not be accessible from the outside.

Please do not try any hack to avoid pointing the web server’s document root at the public directory. If your host doesn’t allow it, change your host.

You are doing great


You already know a bunch of fundamental Laravel concepts now. Doing well!


Your first database table

Let us start with the articles table. The schema can be:

id integer
title varchar
content text
is_published boolean (default: false)
created_at datetime
updated_at datetime

Challenges

In a non-framework project, we create and manage the tables manually. A few of the issues that we face are:

  • It is hard to track exactly when a column was added/updated in a table.
  • You must make necessary DB schema changes to the production site manually during deployments.
  • If multiple team members are working on the project, each one has to manually perform schema updates to their local database to keep the project working.

With Laravel and other backend frameworks, Migrations solve these issues. Did you ask how? Read on.

Database migrations

Migrations are nothing but PHP files that define the schema using a pre-defined syntax. The Laravel framework provides a command to ‘migrate’ the database, i.e., apply the latest schema changes.

Let us not stress over the theory much. It will make sense once you practically make one. Please open the terminal (you should be inside the project directory) and fire the following command.

php artisan make:migration create_articles_table

The framework should have created a brand new file inside the database/migrations directory with some default content. The name of the file would be in the format of DATE_TIME_create_articles_table.php.

Please open the file in your favorite text editor/IDE (mine is VS Code). And make changes in the up() method to make it look like the following:

Schema::create('articles', function (Blueprint $table) {
    $table->id();
    $table->string('title');
    $table->text('content');
    $table->boolean('is_published')->default(false);
    $table->timestamps();
});

The code should speak for itself. That is the beauty of Laravel.

Table generation in a snap

Ready to witness magic?

Run the following command to migrate the database:

php artisan migrate

The articles table will appear in your database now. Yeah, you saw it right. And the same command can be run on production and by the team members to create the table. They do not need to ask you for the table name or column details. We have all been there.

Tables can be created, updated, and removed with artisan commands in Laravel. You’ll never need to manually manage the tables in your database.

Fun fact: You can run the php artisan migrate command any number of times without affecting your DB.
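A few sibling commands are worth knowing while we are here, all built into Laravel:

php artisan migrate:status     # list migrations and whether each has run
php artisan migrate:rollback   # revert the last batch of migrations
php artisan migrate:fresh      # drop all tables and re-run every migration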

Note: You’d find that many other tables were also generated. That is because Laravel comes with a few migration files out of the box. We won’t get into the details but you can update/remove them as needed in your actual projects.

Eloquent ORM

I am excited to share one of the most loved Laravel weapons with you: Eloquent

Eloquent power

If you’re new to the term ‘ORM’, it’s a technique where each database table is represented as a PHP class, and each table record is represented as an object of that class.

If you haven’t experienced the power of objects before, fasten your seatbelt. It’ll be an incredible ride with Eloquent.

Please fire the following command now:

php artisan make:model Article

A new file called Article.php should have been created inside the app/Models directory. It’s a blank class with one trait which we can skip for the time being.

Tip: You can create the model and migration files by firing a single command: php artisan make:model Article --migration

All the database interaction is generally done using Eloquent models in Laravel web apps. There is no need to play directly with the database tables in most cases.
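To give you a quick taste of what that looks like (we will use queries like these shortly), here are a couple of illustrative one-liners against our model:

use App\Models\Article;

$article = Article::find(1);                       // fetch one record by primary key
$published = Article::where('is_published', true)  // fluent query builder
    ->latest()                                     // newest first, by created_at
    ->get();                                       // returns a Collection of models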

Convention over configuration

Laravel, like many other frameworks, follows some conventions. You may not find a complete list of conventions, but following the code examples in the official documentation goes a long way.

Our model name is Article, and Eloquent will assume the table name to be its plural (and snake-cased) version, i.e., articles. We can still specify the table name explicitly in the exceptional cases where the convention can’t be followed.

Similar conventions are followed for the primary key (id) and the timestamps (created_at and updated_at) as most of the tables need them. You need to specify them inside the Eloquent model only if they are different from the default values. Again, no need to write that boilerplate code in most cases. 🤗
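If you ever do need to break a convention, the overrides are just properties on the model. A sketch for illustration only (our Article model needs none of these):

use Illuminate\Database\Eloquent\Model;

class Article extends Model
{
    protected $table = 'my_articles';      // table name that ignores the plural convention
    protected $primaryKey = 'article_id';  // primary key other than 'id'
    public $timestamps = false;            // table has no created_at/updated_at columns
}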


All clear till now? Good.. Next plan is to add articles and then list them.


Routes

The endpoints (or URLs) in a core PHP project are decided by files. If the user opens ‘domain.com/about.php’ in the browser, the about.php file gets executed.

But for Laravel web apps, the HTTP server (nginx/apache) is instructed to serve all requests to public/index.php. That file boots the framework and looks at the routes to check whether there is a respective endpoint defined for the requested URL.

There are numerous benefits to this route pattern compared to the traditional way. One of the main benefits is control: just as buildings keep only one main entrance gate for better security, the route pattern allows us to control the HTTP traffic at any moment in time.

Your first route

We have the articles table but no articles yet. Let’s add some. Please open the routes/web.php file in the text editor and add:


use App\Http\Controllers\ArticleController;



Route::get('/articles/create', [ArticleController::class, 'create']);

You just defined a new route where the endpoint is /articles/create. When the user opens that URL in the browser, the create() method of the ArticleController will be used to handle the request.

I have already assumed that you know a bit about the MVC architecture so let’s jump straight into the code.

The controllers

You can either create a controller class in the app/Http/Controllers directory manually or use this command:

php artisan make:controller ArticleController

The controller code is a one-liner for this endpoint. All we need to do is display the add article page to the user.

<?php

namespace App\Http\Controllers;

class ArticleController extends Controller
{
    public function create()
    {
        return view('articles.create');
    }
}

Simple enough, we let the view file deliver the page.

The add article page view

The view files of Laravel are powered by the Blade templating engine. The extension of the view files is .blade.php, and you can write HTML, PHP, and Blade-syntax code in them.

Getting into the details of Blade is out of the scope of this article. For now, we will write simple HTML to keep things moving. As mentioned, we will also stay away from the designing part.

Please create a new file named create.blade.php in the resources/views/articles directory and add the following code:

<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <meta http-equiv="X-UA-Compatible" content="IE=edge">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>Add Article</title>
</head>
<body>
    <form action="/articles" method="POST">
        @csrf

        <input type="text" name="title" placeholder="Title">
        <textarea name="content" placeholder="Content"></textarea>
        <input type="checkbox" name="is_published" value="1"> Publish?
        <input type="submit" value="Add">
    </form>
</body>
</html>

If the CSRF thing is new for you, the CSRF protection page inside the Laravel documentation does a great job explaining it.

The respective POST route

Route::post('/articles', [ArticleController::class, 'store']);

This route will handle the HTML form submission.

We are following the resource controller convention here. It is not compulsory but most people do follow it.
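For the curious: once a controller follows the resource convention, all seven conventional routes can be registered in a single line. We are defining ours one by one in this tutorial for clarity, but this shorthand is equivalent:

Route::resource('articles', ArticleController::class);
// registers the index, create, store, show, edit, update, and destroy routes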


Congrats! You’re more than halfway there and believe me, it’s easier from here.


Request input validation

You’ll fall in love with the validation features of Laravel. It is so easy to validate input data that you will actually want to do it.

Let’s see how:

use Illuminate\Http\Request;



public function store(Request $request)
{
    $validated = $request->validate([
        'title' => ['required', 'string', 'max:255', 'unique:articles'],
        'content' => ['required', 'string'],
        'is_published' => ['sometimes', 'boolean'],
    ]);
}

We are adding the store() method to the controller and enforcing the following rules with just four lines of code:

  • The ‘title’ and ‘content’ input fields must be provided, must be strings, and cannot be empty.
  • A maximum of 255 characters can be passed for the ‘title’ input.
  • The ‘title’ cannot be the same as the title of any other existing articles (Yes, it fires a DB query).
  • The ‘is_published’ input may be passed (only if checked) and the value has to be a boolean when passed.

It still blows my mind 🤯 how it can be so easy to validate the inputs of a request.

Laravel offers an optional Form Request feature to move the validation (and authorization) code out of the controller to a separate class.
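As a sketch of that feature: you would generate a class with php artisan make:request StoreArticleRequest (the name is up to you) and move the rules into it, something like:

<?php

namespace App\Http\Requests;

use Illuminate\Foundation\Http\FormRequest;

class StoreArticleRequest extends FormRequest
{
    public function authorize(): bool
    {
        return true; // no authorization logic for this demo
    }

    public function rules(): array
    {
        return [
            'title' => ['required', 'string', 'max:255', 'unique:articles'],
            'content' => ['required', 'string'],
            'is_published' => ['sometimes', 'boolean'],
        ];
    }
}

The controller’s store() method would then type-hint StoreArticleRequest instead of Request and read the clean data from $request->validated().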

Display validation errors

The surprise is not over yet. If the validation fails, Laravel redirects the user back to the previous page and sets the validation errors in the session automatically. You can loop through them and display them on the page as per your design.

Generally, the code to display the validation errors is put in the layout files for reusability. But for now, you can put it in the create.blade.php file.

<body>
    @if ($errors->any())
        <ul>
            @foreach ($errors->all() as $error)
                <li>{{ $error }}</li>
            @endforeach
        </ul>
    @endif

    // ...

This piece of code uses Blade directives @if and @foreach to display the validation errors in a list.

If you were to do the same code without Blade directives, it would look like:

<body>
    <?php if ($errors->any()) { ?>
        <ul>
            <?php foreach ($errors->all() as $error) { ?>
                <li><?php echo $error; ?></li>
            <?php } ?>
        </ul>
    <?php } ?>

    // ...

Feel the difference? You get it!

Save article details to the database

Open the controller again and append the following code to the store() method.

use App\Models\Article;



public function store(Request $request)
{
    // ... the validation code from the previous step ...

    Article::create($validated);
}

That’s it! Did you expect the article saving code to be 10 lines long? 😃

This single line will add the new article, with the submitted form details, to the table. Impressive! I know how you feel.

I invite you to go ahead and submit that form now.

Do not worry if you face an error like the following. Ours is a developer’s life.

Mass Assignment Error

Mass assignment

The create() method of the Eloquent model accepts an array where keys are the column names and values are, well, the values.

In our example, we are passing the array of validated inputs to the create() method. But many developers pass all of the inputs from the POST request, which results in a security vulnerability ⚠️: users can submit the form with some extra columns that are not supposed to be controlled by them (like id, timestamps, etc.).

Laravel Eloquent enables mass assignment protection out-of-the-box to prevent that. It cares for you.

And you need to specifically inform Eloquent about how you wanna deal with mass assignments. Please open the model file and append the following line:

protected $fillable = ['title', 'content', 'is_published'];

Cross your fingers again, open the browser, and submit that form. With luck, the article should successfully be stored in the database table now. Bingo!

Redirect and notification

You would want to give feedback to the user with a notification after the successful creation of a new article.

Laravel makes it a cinch. Open the store() method of the controller again:



return back()->with('message', 'Article added successfully.');

This is what makes developers fall in love with Laravel. It feels like you’re reading plain English.

The user gets redirected back and a flash session variable is set. A flash session variable is available for one request only and that is what we want.

Again, the handling of notifications is generally done in the layout files using some Javascript plugins. For now, you may put the following code in the create.blade.php for the demo.

<body>
    @if (session('message'))
        <div>
            {{ session('message') }}
        </div>
    @endif

    // ...

Add one more article using the form and you would be greeted with a success message this time.

List of articles

I assume you would have added some articles while testing the add article functionality. Let’s list them on the page now. First, please add a new route:

Route::get('/articles', [ArticleController::class, 'index']);

You read it right. We are using the same URL but the HTTP method is different. When the user opens the ‘/articles’ page in the browser (a GET request), the controller’s index() method will be called. But the store() method is used when the form is submitted to add a new article (a POST request).

The controller code

Here’s how you fetch records from the database and pass them to the view file.

public function index()
{
    $articles = Article::all();

    return view('articles.index', compact('articles'));
}

I am sure you are not surprised this time. You already know the power of Laravel Eloquent now.

The $articles variable is an instance of Collection (not an array) and we will briefly cover that soon.
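As a small preview, a Collection lets you chain transformations without writing loops; an illustrative example:

$titles = $articles
    ->filter(fn ($article) => $article->is_published) // keep only published articles
    ->sortByDesc('created_at')                        // newest first
    ->pluck('title');                                 // a new collection of just the titles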

Simple table to display articles

Please create another file named index.blade.php in the resources/views/articles directory. I will cover just the body tag:

<body>
    <table border="1">
        <thead>
            <tr>
                <th>Title</th>
                <th>Content</th>
                <th>Published</th>
            </tr>
        </thead>
        <tbody>
            @foreach ($articles as $article)
                <tr>
                    <td>{{ $article->title }}</td>
                    <td>{{ Str::limit($article->content, 100) }}</td>
                    <td>{{ $article->is_published ? 'Yes' : 'No' }}</td>
                </tr>
            @endforeach
        </tbody>
    </table>
</body>

The code is straightforward, and you may now view the articles you have added on the /articles page.

The string limit helper (Str::limit()) is provided by Laravel.

Tip: It is better to display paginated records (articles) when there are hundreds of them. Laravel has got your back for that too.
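A sketch of how that would look, if you want to try it: swap all() for paginate() in the controller, and Laravel handles the page query string for you.

public function index()
{
    $articles = Article::paginate(10); // 10 articles per page

    return view('articles.index', compact('articles'));
}

Then render the pager in index.blade.php by adding {{ $articles->links() }} after the table.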


Pat yourself on the back. You did a great job. You can call yourself a Laravel developer now.


Limiting the scope of this article

Okay, everything comes to an end, and so does this tutorial.

Implementing the authentication feature in this project is one of the things I wished to include in this article, but it is already too long.

And before discussing authentication, we have to learn about password hashing, middleware, and sessions in Laravel.

Another topic I wanted to touch on is Eloquent relationships. This feature is so powerful you’d never want to go back to the old days.

It goes without saying that tests are a must for your production-level projects. And Laravel supports you with testing too.

In short, we have barely scratched the surface here. Drop a comment if you want me to write on the subject more. We can continue this example project and make a series of articles.

Meanwhile, let me share some other Laravel goodies you may wanna explore.

Extras

Collections: Arrays with superpowers. Offers various methods to transform your arrays into almost any format.

Mails: Thanks to Laravel, I have never sent an email to a real person from the development site, something I had done multiple times before it.

Queues: Delegating tasks to the background processes gives a huge performance boost. Laravel Horizon + Redis combo provides scalability with simplicity.

Logging: Enabled by default. All the app errors are logged inside the storage/logs directory. Helps with debugging a lot.

File Storage: Managing user-uploaded files doesn’t have to be complex. This feature is built on top of the stable and mature Flysystem PHP package.

Factories and Seeders: Quick generation of dummy data for your tables, for demos and tests.

Artisan commands: You’ve already used some. There are many more in the basket. And you can create custom ones too. Quite helpful when combined with the scheduler.

Task scheduling: I bet you’d agree that setting up cron jobs the right way is hard. Not in Laravel. You have to see it to believe it.

And many more…

What next?

There are a thousand and seventy more things we can do for this project. If you are interested, here are a few options:

  • Add register/login functionality
  • Let the user enter markdown for content for better formatting
  • Generate slug for the articles automatically
  • Allow users to edit/delete articles
  • Let users attach multiple tags to the articles

Bye bye

Okay, stop.

You have created a small project in Laravel and learned the basics. Time to celebrate. 🥳

I hope you enjoyed the journey as much as I did. Feel free to ask any questions below.

And if you know someone who might be interested in learning Laravel, share this article with them right away.

Bye.

PS – I would be very happy if you could push your code to a repository and share the link with me.

Laravel News Links