Bowling Ball Trick Shots

https://ift.tt/2SVDCdU

Skateboarding site The Berrics dusted off and remastered this classic clip of trick shot master Billy Marks tossing around a bowling ball in a skate park. He might only knock down one pin at a time, but he does it with so much style and grace.

fun

via The Awesomer https://theawesomer.com

February 25, 2020 at 11:00AM

How Stop Signs are Made

https://ift.tt/2vY5oNY

New York City sees many of its stop signs and other street signs vandalized or stolen each year. Between replacements and other projects, the Department of Transportation’s Queens sign shop makes over 100,000 new signs each year. Insider takes us inside the facility for a look at the work that goes into this laborious process.

fun

via The Awesomer https://theawesomer.com

February 24, 2020 at 05:45PM

Limit Access to Authorized Users

https://ift.tt/390vTAQ

For any typical web application, some actions should be limited to authorized users. Perhaps only the creator of a conversation may select which reply best answered their question. If this is the case, we’ll need to write the necessary authorization logic. I’ll show you how in this lesson!
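
As a taste of what that logic might look like, here’s a minimal sketch using a Laravel policy; the Conversation model and method name are my own assumptions, not necessarily the lesson’s code:

<?php

namespace App\Policies;

use App\Conversation;
use App\User;

class ConversationPolicy
{
    // Only the conversation's creator may pick the best reply.
    public function update(User $user, Conversation $conversation)
    {
        return $conversation->user_id === $user->id;
    }
}

A controller can then call $this->authorize('update', $conversation) before marking a reply as the best answer.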

Published on Feb 24th, 2020.

programming

via Laracasts https://ift.tt/1eZ1zac

February 24, 2020 at 04:06PM

AndBeyond Ngala Treehouse

https://ift.tt/2HTQtXA

This luxury safari getaway in South Africa is a four-story treehouse in the wilderness of the Ngala Private Game Reserve, adjacent to the Kruger National Park, where visitors can spot the Big Five including prides of lions. The solar-powered treehouse has a third-floor bedroom and a rooftop platform for a genuinely wild night.

fun

via The Awesomer https://theawesomer.com

February 24, 2020 at 03:30PM

5 ways to write Laravel code that scales (sponsor)

https://ift.tt/2PkWb9j

Well hello there, Laravel News reader, it’s Jack Ellis & Paul Jarvis, the founders of Fathom Analytics. Before we dive into the goods, allow us to introduce ourselves. We run Fathom Analytics, a simple, privacy-focused analytics platform used by forward-thinking website owners who care about their visitors’ privacy. Our application is built with Laravel and deployed on Laravel Vapor. Jack is the creator of Serverless Laravel, a course on mastering Laravel Vapor, and Paul is the author of Company of One, a book that questions traditional business growth in search of a better definition of success. Together, we make up the Fathom Analytics team.

Fathom Analytics is used extensively throughout the Laravel community. Some of our fantastic Laravel customers include:

  • Matt Stauffer (Partner at Tighten)
  • James Brooks (Developer at Laravel LLC & Happy Dev FM Host)
  • Dries Vints (Developer at Laravel LLC & Founder of Laravel.io)
  • Jack McDade (Creator of Statamic)
  • Justin Jackson (Cofounder of Transistor)
  • Stefan Bauer (Founder of PingPing)

And many others.

The following post is not us selling Fathom. Instead, it aims to help you be a better Laravel developer. Our only plug: If you ever need simple analytics or know someone who does, give Fathom Analytics a try.

So now that the introduction is done, I’m (Jack) going to go over some code tips for scaling. I’m going to be focusing on the code, not the infrastructure.

Be prepared for database downtime

When databases go offline, application front ends typically follow, because apps often can’t live without the database. But what happens behind the scenes? Whilst you’re replying to angry tweets, your queue workers are still working away, getting nowhere and potentially losing all of your job data.

When we write jobs, we need to understand that they’re sometimes going to fail. We’re not mad about this; we understand it’s the nature of a job. Imagine we’re using Redis for our queue because we want something highly scalable. We set our worker up:

php artisan queue:work redis --tries=3 --delay=3

Everything is running beautifully. Our jobs are queuing up fast, thanks to super-low latency from Redis, and our users love us (no angry tweets in sight!).

But we would be silly to assume that our database is always going to be available.

Imagine that it goes offline for 20 minutes… what happens to our jobs? They continue to run since Redis is still online. And if we’ve not touched the default configuration, they’ll retry after 90 seconds and, based on the code above, there’ll be 3 attempts. After those attempts, the failed jobs go into the failed_jobs table in our database. Wait, hold on, our database is offline… so the jobs can’t be inserted into the failed_jobs table.

Here’s what we can do to prevent this:

try {
    // Check to see if the database is online
    DB::connection()->getPdo();
} catch (\Exception $e) {
    // Push it back onto the Redis queue for 20 mins
    $this->release(1200);
}

We can run this piece of code inside some job middleware or add it to the start of a job. At the risk of being called Captain Obvious, let me explain what it does. Before the job does anything, it checks that the database connection is online. If it’s not, it releases the job back onto the queue for an explicit amount of time (20 minutes). If you’re set up to try your jobs 3 times, that’ll get you around 40 minutes from the first 2 attempts. If your database isn’t back online within that timeframe then, crikey, you have bigger problems.

Now, you might decide that having a 20-minute delay is stupid. Calm down, I have another approach. Set your tries to something higher:

php artisan queue:work redis --tries=15 --delay=3

And then roll with this code:

try {
    // Check to see if the database is online
    DB::connection()->getPdo();
} catch (\Exception $e) {
    if ($this->attempts() <= 13) {
        $this->release(60);
    } else {
        $this->release(1200);
    }
}

With this, you get the best of both worlds. The first 13 attempts lead to a 60-second delay, which is great if your database had a tiny blip and was offline for 20ms, since your job will be completed much sooner; you also have the 20-minute delay for when your database has been offline for 15 minutes or longer. This isn’t production code, it’s just a concept for this lovely Laravel News article, but it can be modified, tested & implemented beautifully. So give it a go.
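
To make the job middleware option mentioned above concrete, here’s a minimal sketch of the check as reusable middleware. This is my own illustration, not the article’s code; the class name is hypothetical:

<?php

namespace App\Jobs\Middleware;

use Illuminate\Support\Facades\DB;

// Hypothetical middleware: release the job when the database is unreachable.
class EnsureDatabaseIsOnline
{
    public function handle($job, $next)
    {
        try {
            // Check to see if the database is online
            DB::connection()->getPdo();
        } catch (\Exception $e) {
            // Mirror the logic above: short delays first, then a long one.
            $job->attempts() <= 13 ? $job->release(60) : $job->release(1200);
            return;
        }

        $next($job);
    }
}

A job opts in by returning [new EnsureDatabaseIsOnline] from its middleware() method.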

Assume all external services will go offline at some point

Developers can be complacent sometimes, can’t we? Throw off a job to the queue, it’ll be fine. Check the cache, it’ll be online. But what happens when these pieces are offline? Sure, if you’re running these things inside of jobs, the jobs will fail / retry and you’ll live to code another day. But if you’re queuing up jobs or checking cache when the user makes an HTTP request to your application, it’ll be the end of the world as we know it and everybody will hurt. But we can be shiny happy people if we use the following technique:

// Adding fault tolerance
retry(20, function () use ($request) {
    dispatch(new JobThatUsesTheRequest($request));
}, 200);

The beauty here is that we retry the queueing of the job 20 times, with a 200ms delay between each attempt. This is a great way to absorb any temporary downtime from your queue. Yes, it increases the response time for the user but, guess what, the request gets fulfilled, so who’s the victim?

Whilst the above works great with high-availability, fully managed queues such as SQS, what do you do when you have a low-availability queue? Ideally, you shouldn’t have one. But if your boss or client won’t let you spend more money on a high-availability queue solution, here’s some code that’ll help:

try {
    retry(20, function () use ($request) {
        dispatch(new JobThatUsesTheRequest($request));
    }, 200);
} catch (\Exception $e) {
    Mail::raw('Error with low-availability queue, increase budget please', function ($message) {
        $message->to('yourboss@yourcompany.com');
        $message->subject('Look what you did');
    });
}

Well, that’s what I’d do 😉

Use a faster session driver

One of the things I see in a lot of applications is people using the default file session driver or their database. That’s fine at a small scale, but it’s not going to deliver the best results at scale. A better option is an in-memory store like Redis.

Before you do anything, get Redis set up, grab the connection details, and set the appropriate environment variables (that’s all you get here, this isn’t an “adding Redis to Laravel” guide :P).

Once that’s all set up and ready to go, open up config/database.php and scroll down to the Redis section. Copy the default entry and change its key to 'session'. Then change the database value to env('REDIS_SESSION_DB', 3) and add an environment variable for it. The Redis area should look something like this:

'redis' => [

    'client' => env('REDIS_CLIENT', 'predis'),

    'options' => [
        'cluster' => env('REDIS_CLUSTER', 'redis'),
        'prefix' => env('REDIS_PREFIX', Str::slug(env('APP_NAME', 'laravel'), '_').'_database_'),
    ],

    'default' => [
        'url' => env('REDIS_URL'),
        'host' => env('REDIS_HOST', '127.0.0.1'),
        'password' => env('REDIS_PASSWORD', null),
        'port' => env('REDIS_PORT', 6379),
        'database' => env('REDIS_DB', 0),
    ],

    'cache' => [
        'url' => env('REDIS_URL'),
        'host' => env('REDIS_HOST', '127.0.0.1'),
        'password' => env('REDIS_PASSWORD', null),
        'port' => env('REDIS_PORT', 6379),
        'database' => env('REDIS_CACHE_DB', 1),
    ],

    'session' => [
        'url' => env('REDIS_URL'),
        'host' => env('REDIS_HOST', '127.0.0.1'),
        'password' => env('REDIS_PASSWORD', null),
        'port' => env('REDIS_PORT', 6379),
        'database' => env('REDIS_SESSION_DB', 3),
    ],

],

Now you want to make sure you have the following variables in your .env file:

  • SESSION_DRIVER=redis
  • SESSION_CONNECTION=session

And you’ll be ready to rock & roll. Response times go down tremendously. You’re welcome, friend.

Don’t waste queries on pointless stuff

Let’s look at something that doesn’t matter much at low scale but starts to matter more as you grow: caching your queries. Many people already do this, which is fantastic, but plenty don’t. Some cache the wrong stuff, others get it just right, and still others run into all sorts of stale cache issues and spend hours debugging problems caused by the cache. Heck, we’ve all been there.

So what can we do if we want to live a happy life where we use our resources efficiently?

Cache static data

If you have a table in your database, something like Countries, which is seldom going to be updated, you can cache that without any stale cache drama.

$countries = Cache::remember('countries:all', 86400, function () {
    return Country::orderBy('name', 'asc')->get();
});

And I’d typically go for 24 hours. Whilst there aren’t many new countries popping up each day, there’s still a chance a country may rename itself, etc. If we’re being realistic, you could also cache it for a week. But why don’t we use rememberForever? We could. I just prefer to set Redis’ eviction policy to one of the LRU options (this isn’t a Redis lesson, so we’ll stop here!).

Cache dynamic data

Back in the early, early days, a lot of us stayed away from caching user objects and other pieces. “What if the user changes their email and it’s wrong in the cache?” God forbid. But it doesn’t have to be like this. If we take responsibility for keeping our cache fresh, there’s no issue. In Fathom Analytics, we use caching extensively for dynamic data, and we use observers to make sure the cache is kept up to date.

We use functions such as Site::loadFromCache($id) and then, whenever the site changes, we make sure we call Site::updateCache($id, $site). And, of course, we also use Site::deleteFromCache($id). You can only imagine how many database calls we save ourselves, which means we never have to worry about database load.
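
To illustrate the observer half of this, here’s a minimal sketch under my own assumptions; the Site model, cache key, and TTL are placeholders, not Fathom’s actual code:

<?php

namespace App\Observers;

use App\Site;
use Illuminate\Support\Facades\Cache;

// Hypothetical observer that keeps the cached Site in sync with the database.
class SiteObserver
{
    public function saved(Site $site)
    {
        // Fires on both create and update, so the cached copy never goes stale.
        Cache::put('site:'.$site->id, $site, 86400);
    }

    public function deleted(Site $site)
    {
        Cache::forget('site:'.$site->id);
    }
}

Register it with Site::observe(SiteObserver::class) in a service provider’s boot() method.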

This can also be really beneficial for updates to the database. Instead of doing a findOrFail on a model, you can just check the cache and then run the update. When you’re handling 100 updates, the effects of this change are negligible, but once you get into the hundreds of thousands or millions, it can make a big difference.
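
As a rough sketch of that idea, reusing the helper names mentioned above (their internals are Fathom’s own, so treat this as an assumption, not their implementation):

// Skip the SELECT that findOrFail() would run; the model comes from cache.
$site = Site::loadFromCache($id);
$site->fill($attributes)->save();
// Keep the cached copy fresh after the write.
Site::updateCache($id, $site);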

Do less in your commands

Final one, I promise. Also, hey, you’ve read 1,500 words of my ramblings, I appreciate it. I’d love to hear from you, and so would Paul, we’re @jackellis and @pjrvs on Twitter. Even if you hated this article, tell us how much you hated it. You know what they say: any press is good press.

One of the things I’ve seen a lot of people do is try to do too much in their commands. For example, they might make their commands process & send emails whenever they’re executed, or they’ll perform some logic on various data. This is fine at small scale, but what happens when your data increases or you start sending out many more emails? Your commands will time out. You need to break up your background tasks. Please. If you won’t do it for yourself, do it for us.

When using commands, you should use them to dispatch jobs. Jobs can be dispatched in a matter of milliseconds, and the processing can be done in isolation. This means your command won’t time out or hit some silly memory limit. And yes, this isn’t always the case, but it’s relevant when you’re working with data loads that will scale.

$userChunks->each(function ($users) {
    SendUsersAnEmail::dispatch($users);
});

By doing this, we break up our workload. We can break our users into chunks of 100 and have our jobs handle emailing them. Imagine we’re doing this with 500,000 users: we’ve moved from processing all 500,000 in a single command to spreading the work across 5,000 individual jobs. Much nicer. We could do more than 100 users per job, obviously; this is just an example.
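
For completeness, here’s a minimal sketch of how the command might build those chunks, assuming a standard User model and the SendUsersAnEmail job from above:

// In the command's handle() method: dispatch one job per 100 users.
User::query()->chunkById(100, function ($users) {
    SendUsersAnEmail::dispatch($users);
});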

And as my favourite bunny once said… that’s all folks. If you have any questions, you can always tweet us.

And now that we’re all done, I’ll re-plug a few things:

***

Many thanks to Fathom Analytics for sponsoring Laravel News this week.

Filed in: News

programming

via Laravel News https://ift.tt/14pzU0d

February 24, 2020 at 09:20AM

FizzBuzz 2.0: Pragmatic Programming Questions For Software Engineers

https://ift.tt/2PhMCrk

A former YC partner co-founded a recruiting company for technical hiring, and one of its software engineers is long-time Slashdot reader compumike. He now writes:
Like the decade-old Fizz Buzz Test, there are some questions that are trivial for anyone who can build software at a professional level, but are likely to stump anyone who can’t hack it. I analyzed the data from over 100,000 programmers to reveal how five multiple-choice questions easily separate the real software engineers from the rest. The questions (and the data about correct answers) come from Triplebyte’s own coder-recruiting quiz, and "98% of successful engineers answer at least 4 of 5 correctly," explains Mike’s article. ("Successful" engineers are defined as those who went on to receive an inbound message from a company matching their preferences through Triplebyte’s platform.) "I’m confident that if you’re an engineering manager running an interview, you wouldn’t give an offer to someone who performed below that line." Question 1: What kind of SQL statement retrieves data from a table?
LOOKUPREADFETCHSELECT


Read more of this story at Slashdot.

geeky

via Slashdot https://slashdot.org/

February 23, 2020 at 10:53AM

Netflix’s first ‘Transformers’ teaser reveals a hopeless war

https://ift.tt/2T1Y0c8

Netflix has posted the first teaser trailer for its Transformers animated series, and it’s now clear just how the service will take on the robots’ origins. The clip for the first part of the War For Cybertron trilogy, Siege, portrays the Autobots fighting a seemingly hopeless war to prevent the Decepticons from finding the Allspark (the source of the machines’ power) and destroying the essence of what the Transformers are. It’s not the most complex narrative, although it’s surprisingly bleak for kids’ fare. It even draws eerie parallels with human politics by having Megatron spin the Autobots’ efforts as "aggression."

The teaser also gives a peek at the "new look" animation style, which ultimately boils down to cel-shaded CG but fits well with larger-than-life battling robots. On that note, you can expect plenty of fights — Rooster Teeth and Polygon Pictures haven’t forgotten one of the reasons why kids (and curious adults) latched on to Transformers in the first place.

There’s still no specific date for War For Cybertron‘s debut later this year; it’s simply listed as "coming soon." Even so, this might tell you if it’s worth getting your hopes up for the prequel to the classic Transformers movie. If nothing else, this shows that Netflix is fully committed to its ’80s nostalgia push.

Source: Netflix (YouTube)

geeky,Tech,Database

via Engadget http://www.engadget.com

February 22, 2020 at 08:15PM

Download “Becoming The Hacker” For FREE (Worth $32)

https://ift.tt/2VfOlRY

If you’d like to delve into web penetration testing, Becoming the Hacker is a clear guide to approaching this lucrative and growing industry.

This free book (worth $32) takes you through commonly encountered vulnerabilities and how to take advantage of them to achieve your goal. You’ll then go on to put your “newly learned techniques into practice, going over scenarios where the target may be a popular content management system or a containerized application and its network”.

Topics covered include:

  • Introduction to attacking web applications
  • Advanced brute-forcing
  • File inclusion attacks
  • Out-of-band exploitation
  • Automated testing
  • Practical client-side and server-side attacks
  • Attacking APIs
  • Attacking CMS
  • And more

By developing a strong understanding of how an attacker approaches a web application, you’re placed in a strong position to help companies protect their own applications from these vulnerabilities.

This book is aimed at readers with basic experience with digital security, such as running a network, or coming across security issues when developing an application.

Want to download your free copy? Simply click here to download Becoming the Hacker from TradePub. You will have to complete a short form to access the ebook, but it’s well worth it!

Note: this free offer expires 3 Mar 2020.

non critical

via MakeUseOf.com https://ift.tt/1AUAxdL

February 22, 2020 at 10:55AM

Keeping Records of CRM Pipeline Sales Leads to Success

https://ift.tt/2x35Jza

Keeping good records of CRM pipeline sales and learning to analyze those records can reveal surprising details about how best to run your business.

CRM is an acronym for “customer relationship management.” The term “pipeline sales” refers to where your customers are in the buying process. What do CRM and pipeline sales have in common? And how can you use these tools to learn more about your business? Keep reading to find out.

Make Full Use of Both Your CRM and Your Pipeline

Your company’s CRM is the lifeblood of your business. This is because your profits are driven by customer purchases. Moreover, taking full advantage of the combined elements of your CRM and good record-keeping of your sales pipeline will translate into success for your business.

What is the Difference Between a Sales Funnel and a Sales Pipeline?

The sales pipeline differs slightly from the sales funnel. This is because the sales funnel is about leads, and the sales pipeline is about sales.

The funnel is wide at the top to capture as many prospects as possible. As the potential for sales increases, the funnel narrows toward the bottom, separating potential customers from actual customers. The desired outcomes of both the sales funnel and the sales pipeline are greater profits and happier customers.

Let’s look at seven reasons why maintaining good CRM pipeline sales records will benefit you.

Good Record-Keeping Will Help You Identify Threats and Opportunities

With a systematic record-keeping process, you remain continuously updated about the status of your CRM pipeline. Moreover, using an efficient system to move prospects through the sales funnel in an integrated process enables you to mitigate risks and harness opportunities at every step of the way.

Reducing threats means knowing how to keep prospects from slipping back out of the sales funnel. Leveraging opportunities translates into encouraging prospects to keep moving forward through the sales pipeline.

Learn Your Company’s Strengths and Weaknesses by Observing CRM Pipeline Sales

You can gain a good understanding of your company’s strengths and weaknesses by learning to analyze the records of your CRM pipeline sales. It’s best to do this by reviewing your sales funnel records.

This is because funnel records show conversion rates at each point. This will provide greater clarity about the improvements you need to make to your sales and marketing efforts. Then, you’ll know where your staff needs more training. Obviously, more training in weak areas will become key to making system improvements.

Set Goals and Achieve Them by Observing CRM Pipeline Sales

You’ll be better able to set effective goals when you understand more about where you need additional sales team training.

Naturally, you will then rely on precise record-keeping to adjust those goals as well as your sales strategies. This will lead to more successful conversions.

Develop Your Goal-Setting Skills

You want to slow or stop prospect movement toward the top of your funnel at each point. This is because you don’t want leads to leave your sales funnel but to continue downward toward the narrow end of the funnel. In other words, you want leads to progress toward sales, not toward leaving your funnel altogether.

Therefore, you need to learn more about your overall CRM sales pipeline. Additionally, you want to hone your goal-setting skills at each step. These new skills will correspondingly drive conversion rates throughout your sales pipeline. One point at which you have the best chance of achieving this goal is at first contact.

Good Record-Keeping of CRM Pipeline Sales Will Improve Company Response Times

Researchers write in the Harvard Business Review about a study that illustrates the importance of shortening lead response times.

Their research showed that companies that responded to leads within one hour were seven times more likely to achieve successful movement through the sales funnel than those that contacted the customer in the second hour.

Even more surprisingly, initial contacts made in the first hour were 60 times more successful than those made after 24 hours.

Always Remember That Prospects Are People Who Can Become Customers

Additionally, the Harvard study underscores the fact that people are social creatures. They appreciate personalized services and interactions. They also value rapid response times.

Therefore, the right technical systems and CRM record-keeping software will make all the difference to your sales and profits.

However, along with all this technical savvy, your business success also depends on how well you build your relationships with your customers. You can gain the knowledge you need by choosing an effective CRM sales pipeline record-keeping system. On the other hand, though, the effective building of human relations will require continual staff training.

Employ Enhanced CRM with Advanced VoIP

Using the advanced functionality of an innovative VoIP system as your record-keeping software will further enhance your CRM. This is because an advanced VoIP system will give you online website chat capabilities, surveys, SMS functionality, and video conferencing. This includes both inbound and outbound CRM support.

Additionally, the leads from this VoIP system will all be captured in your sales pipeline. This will enable better sales tracking and a shortened sales cycle timeline. Then your staff will have more time to make sales. In short, your company will enjoy more productivity, improved customer response times, and greater customer satisfaction.

A CRM Sales Pipeline Translates into Free Marketing and More Sales

The CRM sales pipeline differs between companies but typically ranges from four to eight stages in the customer’s buying process. The range begins with an ocean of potential prospects who are simply looking for something to purchase and ends with successful sales to happy customers.

Happy customers are open to upselling, which happens when employees are able to persuade the customer to purchase a more expensive product, or an additional one.

What’s more, happy customers refer friends and family to your business, promoting additional sales and profits.

Satisfied customers often write positive testimonials about your business. These testimonials can persuade others of the value customers received from your company. Pleased customers can even become fans who voluntarily promote your business and products, creating another free marketing channel.

Key Takeaways

  • CRM systems translate into accurate records that highlight the sales status.
  • Record keeping supports the identification of strengths, weaknesses, opportunities, and threats.
  • Understanding and analyzing your records empowers goal creation and achievement.
  • Training fills in the gaps, allowing for improved response times, people management, and forecasting.

Conduct research to determine the CRM sales pipeline software that works best for you. Purchasing the right product for your business will result in improved business systems, well-trained employees, greater conversions, happier customers, and a sustainable, successful business.

business

via Business Opportunities Weblog https://ift.tt/2CJxLjg

February 22, 2020 at 12:29PM