Short URL v2.0.0 Released! – Add short URLs to your web app

https://ift.tt/2xBnFkK

Short URL v2.0.0

New Features

Added referer URL tracking options (#26)

There is now a new tracking option for the short URLs that allows you to track the referer URL of the visitor. For example, if the short URL is placed on a web page with the URL https://ift.tt/2UeanlN, that URL will be recorded if referer URL tracking is enabled.

If you want to override the default config option for whether referer URL tracking is enabled when creating a shortened URL, you can use the ->trackRefererURL() method.

The example below shows how to enable referer URL tracking for the URL and override the default config variable:

$builder = new \AshAllenDesign\ShortURL\Classes\Builder();

$shortURLObject = $builder->destinationUrl('https://destination.com')
                          ->trackVisits()
                          ->trackRefererURL()
                          ->make();

Added device type tracking options (#27)

There is now a new tracking option for the short URLs that allows you to track the device type of the visitor. There are four possible device types: mobile, desktop, tablet, and robot. This can be particularly useful for analytical purposes.

If you want to override the default config option for whether device type tracking is enabled when creating a shortened URL, you can use the ->trackDeviceType() method.

The example below shows how to enable device type tracking for the URL and override the default config variable:

$builder = new \AshAllenDesign\ShortURL\Classes\Builder();

$shortURLObject = $builder->destinationUrl('https://destination.com')
                          ->trackVisits()
                          ->trackDeviceType()
                          ->make();

Added functionality to set the tracking options for individual short URLs (#29)

Up until now, the tracking options were set in the config and affected all new and existing short URLs. In this release, the tracking options in the config are now used for defining the defaults. These values can now be overridden when creating your short URLs.

Updating the tracking options in the config now won’t affect short URLs that have already been created.

The example below shows how to enable IP address tracking for the URL and override the default config variable:

$builder = new \AshAllenDesign\ShortURL\Classes\Builder();

$shortURLObject = $builder->destinationUrl('https://destination.com')
                          ->trackVisits()
                          ->trackIPAddress()
                          ->make();

Learn more about setting the tracking options in the README.

Added functionality to set the redirect status code for individual short URLs (#25)

Up until now, all short URLs have redirected (if using the package’s provided controller) with a 301 HTTP status code. But, this can now be overridden when building the shortened URL using the ->redirectStatusCode() method.

The example below shows how to create a shortened URL with a redirect HTTP status code of 302:

$builder = new \AshAllenDesign\ShortURL\Classes\Builder();

$shortURLObject = $builder->destinationUrl('http://destination.com')
                          ->redirectStatusCode(302)
                          ->make();

Added a ShortURLVisited event (#24)

Each time a short URL is visited, the following event is fired, which can be listened for:

AshAllenDesign\ShortURL\Events\ShortURLVisited 

This is useful if you want to trigger some code via listeners whenever a short URL is visited, without needing to override the package’s provided controller.
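As a rough sketch of how that might be wired up (the listener class name below is hypothetical, not something the package ships with; only the event class comes from the package):

// app/Providers/EventServiceProvider.php (excerpt)
protected $listen = [
    \AshAllenDesign\ShortURL\Events\ShortURLVisited::class => [
        \App\Listeners\LogShortURLVisit::class,
    ],
];

// app/Listeners/LogShortURLVisit.php (hypothetical listener)
namespace App\Listeners;

use AshAllenDesign\ShortURL\Events\ShortURLVisited;

class LogShortURLVisit
{
    public function handle(ShortURLVisited $event): void
    {
        // React to the visit here, e.g. log it or send a notification.
    }
}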

Added trackingEnabled() helper method (#30)

To check if tracking is enabled for a short URL, you can use the ->trackingEnabled() method. It will return true if tracking is enabled, and false if not.

The following example shows how to check if a short URL has tracking enabled:

$shortURL = \AshAllenDesign\ShortURL\Models\ShortURL::first();

$shortURL->trackingEnabled();

Added trackingFields() helper method (#30)

To check which fields are enabled for tracking for a short URL, you can use the ->trackingFields() method. It will return an array with the names of each field that is currently enabled for tracking.

The following example shows how to get an array of all tracking-enabled fields for a short URL:

$shortURL = \AshAllenDesign\ShortURL\Models\ShortURL::first();

$shortURL->trackingFields();

programming

via Laravel News Links https://ift.tt/2dvygAJ

February 25, 2020 at 09:25AM

Bowling Ball Trick Shots

https://ift.tt/2SVDCdU

Bowling Ball Trick Shots

Skateboarding site The Berrics dusted off and remastered this classic clip of trick shot master Billy Marks tossing around a bowling ball in a skate park. He might only knock down one pin at a time, but he does it with so much style and grace.

fun

via The Awesomer https://theawesomer.com

February 25, 2020 at 11:00AM

Limit Access to Authorized Users

https://ift.tt/390vTAQ

For any typical web application, some actions should be limited to authorized users. Perhaps only the creator of a conversation may select which reply best answered their question. If this is the case, we’ll need to write the necessary authorization logic. I’ll show you how in this lesson!
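As a loose sketch of the kind of check involved (the model, policy, and column names here are illustrative assumptions, not taken from the lesson), a Laravel policy can express this rule:

// app/Policies/ConversationPolicy.php (illustrative example)
namespace App\Policies;

use App\Conversation;
use App\User;

class ConversationPolicy
{
    // Only the conversation's creator may mark a reply as the best answer.
    public function markBestReply(User $user, Conversation $conversation): bool
    {
        return $conversation->user_id === $user->id;
    }
}

// In a controller, before performing the action:
// $this->authorize('markBestReply', $conversation);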

Published on Feb 24th, 2020.

programming

via Laracasts https://ift.tt/1eZ1zac

February 24, 2020 at 04:06PM

How Stop Signs are Made

https://ift.tt/2vY5oNY

How Stop Signs are Made

New York City sees many of its stop signs and other street signs vandalized or stolen each year. Between replacements and other projects, the Department of Transportation’s Queens sign shop makes over 100,000 new signs each year. Insider takes us inside the facility for a look at the work that goes into this laborious process.

fun

via The Awesomer https://theawesomer.com

February 24, 2020 at 05:45PM

AndBeyond Ngala Treehouse

https://ift.tt/2HTQtXA

AndBeyond Ngala Treehouse

This luxury safari getaway in South Africa is a four-story treehouse in the wilderness of the Ngala Private Game Reserve, adjacent to the Kruger National Park, where visitors can spot the Big Five including prides of lions. The solar-powered treehouse has a third-floor bedroom and a rooftop platform for a genuinely wild night.

fun

via The Awesomer https://theawesomer.com

February 24, 2020 at 03:30PM

5 ways to write Laravel code that scales (sponsor)

https://ift.tt/2PkWb9j

5 ways to write Laravel code that scales

Well hello there, Laravel News reader, it’s Jack Ellis & Paul Jarvis, the founders of Fathom Analytics. Before we dive into the goods, allow us to introduce ourselves. We run Fathom Analytics, a simple, privacy-focused analytics platform used by forward-thinking website owners who care about their visitors’ privacy. Our application is built with Laravel and deployed on Laravel Vapor. Jack is also the creator of Serverless Laravel, a course for mastering Laravel Vapor, and Paul is also the author of Company of One, a book that questions traditional business growth in search of a better definition of success. Together, we make up the Fathom Analytics team.

Fathom Analytics is used extensively throughout the Laravel community. Some of our fantastic Laravel customers include:

  • Matt Stauffer (Partner at Tighten)
  • James Brooks (Developer at Laravel LLC & Happy Dev FM Host)
  • Dries Vints (Developer at Laravel LLC & Founder of Laravel.io)
  • Jack McDade (Creator of Statamic)
  • Justin Jackson (Cofounder of Transistor)
  • Stefan Bauer (Founder of PingPing)

And many others.

The following post is not us selling Fathom. Instead, it aims to help you be a better Laravel developer. Our only plug: If you ever need simple analytics or know someone who does, give Fathom Analytics a try.

So now that the introduction is done, I’m (Jack) going to go over some code tips for scaling. I’m going to be focusing on the code, not the infrastructure.

Be prepared for database downtime

When databases go offline, application front ends typically follow, because apps often can’t live without the database. But what happens behind the scenes? Whilst you’re replying to angry tweets, your queue workers are still working away, getting nowhere and potentially losing all of your job data.

When we write jobs, we need to understand that they’re sometimes going to fail. We’re not mad about this; we understand it’s the nature of a job. Imagine we’re using Redis for our queue because we want something highly scalable, and we set our worker up like so:

php artisan queue:work redis --tries=3 --delay=3

Everything is running beautifully. Our jobs are queuing up fast, thanks to super-low latency from Redis, and our users love us (no angry tweets in sight!).

But we would be silly to assume that our database is always going to be available.

Imagine that it goes offline for 20 minutes… what happens to our jobs? They continue to run since Redis is still online. And if we’ve not touched the default configuration, they’ll retry after 90 seconds and, based on the code above, there’ll be 3 attempts. After those attempts, the failed jobs go into the failed_jobs table in our database. Wait, hold on, our database is offline… so the jobs can’t be inserted into the failed_jobs table.

Here’s what we can do to prevent this:

try {
    // Check to see if the database is online
    DB::connection()->getPdo();
} catch (\Exception $e) {
    // Push it back onto the Redis queue for 20 mins
    $this->release(1200);
}

With this piece of code, we can run it inside some job middleware or add it to the start of a job. At the risk of being called Captain Obvious, let me explain what it does. Before it does anything in the job, it checks to make sure the database connection is online. If it’s not, it releases the job for an explicit amount of time (20 minutes). If you’re set up to try your jobs 3 times, that’ll get you around 40 minutes in the first 2 attempts. If your database isn’t back online within that timeframe then, crikey, you have bigger problems.
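As a sketch of the job middleware route mentioned above (the class name is made up for this example):

// app/Jobs/Middleware/EnsureDatabaseIsOnline.php (hypothetical class)
namespace App\Jobs\Middleware;

use Illuminate\Support\Facades\DB;

class EnsureDatabaseIsOnline
{
    public function handle($job, $next)
    {
        try {
            // Check to see if the database is online
            DB::connection()->getPdo();
        } catch (\Exception $e) {
            // Push the job back onto the queue for 20 mins and bail out
            $job->release(1200);

            return;
        }

        $next($job);
    }
}

// Then attach it from the job class:
// public function middleware()
// {
//     return [new \App\Jobs\Middleware\EnsureDatabaseIsOnline];
// }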

Now, you might decide that having a 20-minute delay is stupid. Calm down, I have another approach. Set your tries up to something higher:

php artisan queue:work redis --tries=15 --delay=3

And then roll with this code:

try {
    // Check to see if the database is online
    DB::connection()->getPdo();
} catch (\Exception $e) {
    if ($this->attempts() <= 13) {
        $this->release(60);
    } else {
        $this->release(1200);
    }
}

With this, you get the best of both worlds. The first 13 attempts lead to a 60-second delay, which is great if your database had a tiny blip and was offline for 20ms, since your job will be completed much sooner, and you also have the 20-minute delay for when your database has been offline for 15 minutes or longer. This isn’t production code, this is just a concept for this lovely Laravel News article, but this can be modified, tested & implemented beautifully. So give it a go.

Assume all external services will go offline at some point

Developers can be complacent sometimes, can’t we? Throw off a job to the queue, it’ll be fine. Check the cache, it’ll be online. But what happens when these pieces are offline? Sure, if you’re running these things inside of jobs, the jobs will fail / retry and you’ll live to code another day. But if you’re queuing up jobs or checking cache when the user makes an HTTP request to your application, it’ll be the end of the world as we know it and everybody will hurt. But we can be shiny happy people if we use the following technique:

// Adding fault tolerance
retry(20, function () use ($request) {
    dispatch(new JobThatUsesTheRequest($request));
}, 200);

The beauty here is that we retry the queueing of the job 20 times, with a 200ms delay between each attempt. This is a great way to absorb any temporary downtime from your queue. Yes, it increases the response time for the user but, guess what, the request gets fulfilled, so who’s the victim?

Whilst the above works great with high-availability, fully managed queues such as SQS, what do you do when you have low-availability queues? Ideally, you shouldn’t have one. If your boss or client won’t let you spend more money to get a high-availability queue solution, here’s some code that’ll help with that:

try {
    retry(20, function () use ($request) {
        dispatch(new JobThatUsesTheRequest($request));
    }, 200);
} catch (\Exception $e) {
    Mail::raw('Error with low-availability queue, increase budget please', function ($message) {
        $message->to('yourboss@yourcompany.com');
        $message->subject('Look what you did');
    });
}

Well, that’s what I’d do 😉

Use a faster session driver

One of the things that I see in a lot of applications is people using the default file session driver or the database driver. That’s fine at a small scale, but it’s not going to deliver the best results at scale. A better option would be to use an in-memory store like Redis.

Before you do anything, get Redis set up, grab the connection details and set the appropriate environment variables (that’s all you get here, this isn’t an “adding Redis to Laravel” guide :P).

Once that’s all set up and ready to go, open up config/database.php and scroll down to the redis section. Copy the default entry and change its key to 'session'. Then change the database value to env('REDIS_SESSION_DB', 3) and add an environment variable for it. The redis section should look something like this:

'redis' => [

    'client' => env('REDIS_CLIENT', 'predis'),

    'options' => [
        'cluster' => env('REDIS_CLUSTER', 'redis'),
        'prefix' => env('REDIS_PREFIX', Str::slug(env('APP_NAME', 'laravel'), '_').'_database_'),
    ],

    'default' => [
        'url' => env('REDIS_URL'),
        'host' => env('REDIS_HOST', '127.0.0.1'),
        'password' => env('REDIS_PASSWORD', null),
        'port' => env('REDIS_PORT', 6379),
        'database' => env('REDIS_DB', 0),
    ],

    'cache' => [
        'url' => env('REDIS_URL'),
        'host' => env('REDIS_HOST', '127.0.0.1'),
        'password' => env('REDIS_PASSWORD', null),
        'port' => env('REDIS_PORT', 6379),
        'database' => env('REDIS_CACHE_DB', 1),
    ],

    'session' => [
        'url' => env('REDIS_URL'),
        'host' => env('REDIS_HOST', '127.0.0.1'),
        'password' => env('REDIS_PASSWORD', null),
        'port' => env('REDIS_PORT', 6379),
        'database' => env('REDIS_SESSION_DB', 3),
    ],

],

Now you want to make sure you have the following variables in your .env file:

  • SESSION_DRIVER=redis
  • SESSION_CONNECTION=session

And you’ll be ready to rock & roll. Response times go down tremendously. You’re welcome, friend.

Don’t waste queries on pointless stuff

Let’s look at something that doesn’t matter much at low scale but starts to matter more as you grow: caching your queries. Most people already do this, which is fantastic, but a lot of people don’t. Some people cache the wrong stuff, and others get it just right. Others run into all sorts of stale cache issues, and they can spend hours debugging problems caused by cache. Heck, we’ve all been there.

So what can we do if we want to live a happy life where we use our resources efficiently?

Cache static data

If you have a table in your database, something like Countries, which is seldom going to be updated, you can cache that without any stale cache drama.

$countries = Cache::remember('countries:all', 86400, function () {
    return Country::orderBy('name', 'asc')->get();
});

And I’d typically go for 24 hours. Whilst there aren’t many new countries popping up each day, there’s still a chance a country may rename itself, etc. If we’re being realistic, you could also cache it for a week. But why don’t we use rememberForever? We could. I just prefer to set Redis’ eviction policy to one of the LRU options (this isn’t a Redis lesson, so we’ll stop here!).

Cache dynamic data

Back in the early, early days, a lot of us stayed away from caching user objects and other pieces. “What if the user changes their email and it’s wrong in the cache?” God forbid. But it doesn’t have to be like this. If we take responsibility for keeping our cache fresh, there’s no issue. In Fathom Analytics, we use caching extensively for dynamic data, and we use observers to make sure the cache is kept up to date.

We use functions such as Site::loadFromCache($id) and then, whenever the site changes, we make sure we call Site::updateCache($id, $site). And, of course, we also use Site::deleteFromCache($id). You can only imagine the database calls we save ourselves, allowing us to never worry about database load.

This can also be really beneficial for updates to the database. Instead of doing a findOrFail on a model, you can just check the cache and then run the update. When you’re handling 100 updates, the effects of this change are negligible, but once you get into the hundreds of thousands to millions, it can make a big difference.
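As a loose illustration of that approach (the observer below is an assumption built around the helper methods named above, not Fathom’s actual code):

// app/Observers/SiteObserver.php (illustrative only)
namespace App\Observers;

use App\Site;

class SiteObserver
{
    // Refresh the cache entry whenever a site is created or updated.
    public function saved(Site $site)
    {
        Site::updateCache($site->id, $site);
    }

    // Remove the cache entry when a site is deleted.
    public function deleted(Site $site)
    {
        Site::deleteFromCache($site->id);
    }
}

// Register the observer (e.g. in AppServiceProvider::boot()):
// Site::observe(SiteObserver::class);

// Reads then only hit the database on a cache miss:
// $site = Site::loadFromCache($id);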

Do less in your commands

Final one, I promise. Also, hey, you’ve read 1,500 words of my ramblings, I appreciate it. I’d love to hear from you, and so would Paul, we’re @jackellis and @pjrvs on Twitter. Even if you hated this article, tell us how much you hated it. You know what they say: any press is good press.

One of the things I’ve seen a lot of people do is try to do too much in their commands. For example, they might make their commands process & send emails whenever they’re executed, or they’ll perform some logic on various data. This is fine at small scale, but what happens when your data increases or you start sending out many more emails? Your commands will time out. You need to break up your background tasks. Please. If you won’t do it for yourself, do it for us.

When using commands, you should use them to dispatch jobs. Jobs can be dispatched in a matter of milliseconds, and the processing can be done in isolation. This means your command won’t time out or hit some silly memory limit. And yes, this isn’t always the case, but it’s relevant when you’re working with data loads that will scale.

$userChunks->each(function ($users) {
    SendUsersAnEmail::dispatch($users);
});

By doing this, we break up our workload. We can break our users up into chunks of 100 and have our jobs handle emailing them. Imagine we’re doing this with 500,000 users: we’ve moved from processing all 500,000 in a single command to handling them across 5,000 separate jobs. Much nicer. We could do more than 100 users in a job, obviously; this is just an example.
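A command along those lines might build the chunks like this (a hypothetical sketch, assuming the SendUsersAnEmail job accepts a collection of users):

// Inside a console command's handle() method (hypothetical example)
public function handle()
{
    // Dispatch a job per chunk of 100 users instead of emailing them all here.
    \App\User::query()
        ->select('id', 'email')
        ->chunk(100, function ($users) {
            \App\Jobs\SendUsersAnEmail::dispatch($users);
        });
}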

And as my favourite bunny once said… that’s all folks. If you have any questions, you can always tweet us.

And now that we’re all done, I’ll re-plug a few things:

***

Many thanks to Fathom Analytics for sponsoring Laravel News this week.

Filed in: News

programming

via Laravel News https://ift.tt/14pzU0d

February 24, 2020 at 09:20AM

Netflix’s first ‘Transformers’ teaser reveals a hopeless war

https://ift.tt/2T1Y0c8

Netflix has posted the first teaser trailer for its Transformers animated series, and it’s now clear just how the service will take on the robots’ origins. The clip for the first part of the War For Cybertron trilogy, Siege, portrays the Autobots fighting a seemingly hopeless war to prevent the Decepticons from finding the Allspark (the source of the machines’ power) and destroying the essence of what the Transformers are. It’s not the most complex narrative, although it’s surprisingly bleak for kids’ fare. It even draws eerie parallels with human politics by having Megatron spin the Autobots’ efforts as "aggression."

The teaser also gives a peek at the "new look" animation style, which ultimately boils down to cel-shaded CG but fits well with larger-than-life battling robots. On that note, you can expect plenty of fights — Rooster Teeth and Polygon Pictures haven’t forgotten one of the reasons why kids (and curious adults) latched on to Transformers in the first place.

There’s still no specific date for War For Cybertron‘s debut later this year. It’s still listed as "coming soon." Even so, this might tell you if it’s worth getting your hopes up for the prequel to the classic Transformers movie. If nothing else, this shows that Netflix is fully committed to its ’80s nostalgia push.

Source: Netflix (YouTube)

geeky,Tech,Database

via Engadget http://www.engadget.com

February 22, 2020 at 08:15PM

FizzBuzz 2.0: Pragmatic Programming Questions For Software Engineers

https://ift.tt/2PhMCrk

A former YC partner co-founded a recruiting company for technical hiring, and one of its software engineers is long-time Slashdot reader compumike. He now writes:
Like the decade-old Fizz Buzz Test, there are some questions that are trivial for anyone who can build software at a professional level, but are likely to stump anyone who can’t hack it. I analyzed the data from over 100,000 programmers to reveal how five multiple-choice questions easily separate the real software engineers from the rest.

The questions (and the data about correct answers) come from Triplebyte’s own coder-recruiting quiz, and "98% of successful engineers answer at least 4 of 5 correctly," explains Mike’s article. ("Successful" engineers are defined as those who went on to receive an inbound message from a company matching their preferences through Triplebyte’s platform.) "I’m confident that if you’re an engineering manager running an interview, you wouldn’t give an offer to someone who performed below that line."

Question 1: What kind of SQL statement retrieves data from a table?

  • LOOKUP
  • READ
  • FETCH
  • SELECT


geeky

via Slashdot https://slashdot.org/

February 23, 2020 at 10:53AM

Download “Becoming The Hacker” For FREE (Worth $32)

https://ift.tt/2VfOlRY

If you’d like to delve into web penetration testing, Becoming the Hacker is a clear guide to approaching this lucrative and growing industry.

This free book (worth $32) takes you through commonly encountered vulnerabilities and how to take advantage of them to achieve your goal. You’ll then go on to put your “newly learned techniques into practice, going over scenarios where the target may be a popular content management system or a containerized application and its network”.

Download This Ebook for Free

Topics covered include:

  • Introduction to attacking web applications
  • Advanced brute-forcing
  • File inclusion attacks
  • Out-of-band exploitation
  • Automated testing
  • Practical client-side and server-side attacks
  • Attacking APIs
  • Attacking CMS
  • And more

By developing a strong understanding of how an attacker approaches a web application, you’re placed in a strong position to help companies protect their own applications from these vulnerabilities.

This book is aimed at readers with basic experience with digital security, such as running a network, or coming across security issues when developing an application.

Want to download your free copy? Simply click here to download Becoming the Hacker from TradePub. You will have to complete a short form to access the ebook, but it’s well worth it!

Note: this free offer expires 3 Mar 2020.

non critical

via MakeUseOf.com https://ift.tt/1AUAxdL

February 22, 2020 at 10:55AM