Livewire File Uploads to Amazon S3

https://ift.tt/2JfRldj


Many multi-tenant apps require image uploads and may need to store those files in Amazon S3. Let’s create an Amazon S3 bucket from scratch and get it connected to our app. Then we’ll leverage the powerful and simple file-uploading functionality that Livewire provides.

programming

via Laracasts https://ift.tt/1eZ1zac

October 27, 2020 at 02:41PM

Wyze launches version 3 of its $20 security camera

https://ift.tt/37Npg7k

Wyze first made a name for itself when it launched its $20 indoor security camera a few years ago. Since then, the company has branched out into other smart home products, ranging from doorbells to scales. Today, it’s going back to its origins with the launch of the Wyze Cam V3, the third generation of its flagship camera.

The new version is still $20 (though that’s without shipping, unless there’s a free-shipping promotion in the Wyze store), but the company has redesigned both the outside and a lot of the hardware inside. The camera is now also IP65 rated, so you can use it outdoors, too.


The Cam V3 also features new sensors that enable color night vision, thanks to an f/1.6 aperture lens that captures 40 percent more light than the previous version. That lens now covers a 130-degree field of view (up from 110 degrees on the V2), and the company pushed the frame rate up from 15fps during the day and 10fps at night to 20fps and 15fps, respectively.

The company also enhanced the classic black and white night vision mode — which you’ll still need when it’s really dark outside or in the room you are monitoring — by adding a second set of infrared lights to the camera.

Other new features include an 80dB siren to deter unwanted visitors. The siren is triggered by Wyze’s AI-powered person-detection capability, though that’s a feature the company recently moved behind its $2/month CamPlus paywall after originally offering it for free. That’s not going to break the bank (and you get a generous free trial period), but it would have been nice if the company had kept this relatively standard feature free and instead charged only for extra cloud storage or more advanced features (though you do get free 14-day rolling cloud storage for 12-second clips).

Wyze Cam V2 (left) and V3 (right).

Wyze provided me with a review unit ahead of today’s launch (and a Cam V2 to compare it with). The image quality of the new camera is clearly better, and the larger field of view makes a difference, even though the distortion at the edges is a bit more noticeable now (given the use case, that’s not an issue). The new color night vision mode works as promised, and I like that you can set the camera to switch modes automatically based on lighting conditions.

The person detection has been close to 100% accurate — and unlike with some competing cameras that lack this capability, I didn’t get any false alarms during rain or when the wind started blowing leaves across the ground.

If you already have a Wyze Cam V2, you don’t need to upgrade to this new one — the core features haven’t changed all that much, after all. But if you’re in the market for this kind of camera and aren’t locked into a particular security system, it’s hard to beat the new Wyze Cam.

technology

via TechCrunch https://techcrunch.com

October 27, 2020 at 01:08PM

MagSafe 15W fast charging restricted to Apple 20W adapter

https://ift.tt/2Tup1FS


New testing shows Apple’s MagSafe charging puck does peak at 15W with iPhone 12, but only when paired with the company’s 20W adapter.

The apparent restriction was discovered by Aaron Zollo of YouTube channel Zollotech. In a comprehensive evaluation of Apple’s MagSafe device posted on Monday, Zollo found two Apple adapters — a new standalone 20W USB-C device and the 18W unit that came with iPhone 11 Pro handsets — achieved high charge rates.

Measuring energy throughput with an inline digital meter revealed MagSafe hits the advertised 15W peak charging rate (up to 16W in the video) when paired with Apple’s branded 20W adapter. Speeds drop to about 13W with the 18W adapter, and Zollo notes the system takes some time to ramp up to that level.

Older adapters and third-party models with high output ratings did not fare well in the test. Apple’s own 96W MacBook Pro USB-C adapter eked out 10W with MagSafe, matching the high seen with Anker’s PowerPort Atom PD1. Likewise, charging rates hovered between 6W and 9W when attached to Aukey’s 65W adapter, Google’s Pixel adapter, and Samsung’s Note 20 Ultra adapter.

It appears third-party devices will need to adopt a MagSafe-compatible power delivery (PD) profile to ensure fast, stable energy delivery when connected to iPhone 12 series devices.

As can be expected with any charging solution, temperature plays a significant role in potential throughput. Zollo found MagSafe throttles speeds as temperatures rise, meaning actual rates are not a constant 15W even when using the 20W adapter. As heat builds, energy output decreases to protect sensitive hardware components and the battery itself. In some cases, this could prompt users to remove their iPhone from its case, including Apple-branded MagSafe models, to help the phone stay cool and charge at full speed.

Zollo also confirms older Qi-compatible iPhone models, like iPhone 8 Plus and iPhone 11 Pro Max, charge at about 5W with MagSafe. Apple previously said Qi devices would charge at 7.5W.

macintosh

via AppleInsider https://ift.tt/3dGGYcl

October 26, 2020 at 08:38PM

Device Tracking in Laravel

https://ift.tt/3otT1jj


Laravel Device Tracking is a package by Ivano Matteo that allows you to track the different devices used by the users of your application. You can use this package as a base for functionality like detecting users on new devices and managing the verified status between a device and a user. It could also help you spot possible device hijacking.

The package works by adding the UseDevices trait to your application’s User model:

use Illuminate\Database\Eloquent\Factories\HasFactory;
use Illuminate\Foundation\Auth\User as Authenticatable;
use Illuminate\Notifications\Notifiable;
use IvanoMatteo\LaravelDeviceTracking\Traits\UseDevices;

class User extends Authenticatable
{
    use HasFactory, Notifiable, UseDevices;
    // ...
}

The UseDevices trait gives you access to a devices() (belongs to many) relationship to get verified devices:

$user->devices()

Here are some examples of methods you can access via the package’s facade:

use IvanoMatteo\LaravelDeviceTracking\LaravelDeviceTrackingFacade as DeviceTracker;

// Detect the current device, then find or create its record and update it
DeviceTracker::detectFindAndUpdate();

// Flag the current device as verified for the current user
DeviceTracker::flagCurrentAsVerified();

// Flag a device as verified for a specific user
DeviceTracker::flagAsVerified($device, $user_id);

// Flag a device as verified for a specific user by device UUID
DeviceTracker::flagAsVerifiedByUuid($device_uuid, $user_id);

You can learn more about this package, get full installation instructions, and view the source code on GitHub at ivanomatteo/laravel-device-tracking.


This package was submitted to our Laravel News Links section. Links is a place where the community can post packages and tutorials from around the Laravel ecosystem. Follow along on Twitter @LaravelLinks.

Filed in: News / laravel / packages

programming

via Laravel News https://ift.tt/14pzU0d

October 27, 2020 at 09:22AM

Memes that made me laugh 29

https://ift.tt/3ms69U7

 

Gathered up over the past week on the Internet.

More next week!

Peter

non critical

via Bayou Renaissance Man https://ift.tt/1ctARFa

October 26, 2020 at 05:59AM

Stack Abuse: What Does if __name__ == “__main__”: Do in Python?

https://ift.tt/37IeoYk

Introduction

It’s common to see if __name__ == "__main__" in Python scripts we find online, or one of the many we write ourselves.

Why do we use that if-statement when running our Python programs? In this article, we explain the mechanics behind its usage, the advantages, and where it can be used.

The __name__ Attribute and the __main__ Scope

The __name__ attribute is one of the names present by default in the current namespace. The Python interpreter sets its value automatically, whether we are running a Python script or importing our code as a module.

Try out the following command in your Python interpreter. You’ll find that __name__ is among the names listed by dir():

dir()

Output:

['__annotations__', '__builtins__', '__doc__', '__loader__', '__name__', '__package__', '__spec__']

The __name__ variable in Python is a special variable that holds the name of the current module, or of the script from which it gets accessed.
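For an imported module, __name__ is simply the module’s name. Here’s a quick illustration using the standard library:

import math

print(math.__name__)  # prints "math"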

Create a new folder called name_scripts so we can write a few scripts to understand how this all works. In that folder create a new file, script1.py with the following code:

print(f'The __name__ from script1 is "{__name__}"')

Output of script1.py:

The __name__ from script1 is "__main__"

That’s a curveball! We’d expect the name to be script1, matching our file name. What does the output __main__ mean?

By default, when a script is executed directly, the interpreter reads the script and assigns the string "__main__" to the __name__ variable.

It gets even more interesting when the above script gets imported to another script. Consider a Python file named script2.py with the following code:

import script1  # The print statement gets executed upon import

print(f'The __name__ from script2 is "{__name__}"')

Output of script2.py:

The __name__ from script1 is "script1"
The __name__ from script2 is "__main__"

As you can see, when script2.py is executed, the output from the import reads "script1", the name of the imported module. The final print statement is in the scope of script2, which is being run directly, so its output prints as: __main__.

Now that we understand how Python sets the __name__ variable and when it gives it the value "__main__", let’s look at why we check its value before executing code.

if __name__ == "__main__" in Action

We use the if-statement to run blocks of code only if our program is the main program being executed. This allows our program to be executable by itself, but friendly to other Python modules that may want to import some functionality without having to run the whole script.

Consider the following Python programs:

a) script3.py contains a function called add() which gets invoked only from the main context.

def add(a, b):
    return a+b


if __name__ == "__main__":
    print(add(2, 3))

Here’s the output when script3.py gets invoked:

5

As the script was executed directly, the __name__ variable was assigned the value "__main__", and the block of code under the if __name__ == "__main__" condition was executed.

b) Here’s what happens when this snippet is imported into script4.py:

import script3

print(f"{script3.__name__}")

Output of script4.py:

script3

The block under if __name__ == "__main__" from script3.py did not execute, as expected. This happened because the __name__ variable is now assigned the name of the module: script3. This can be verified by the print statement in script4.py, which prints the value assigned to script3’s __name__.

How Does __name__ == "__main__" Help in Development?

Here are some use cases for that if-statement when creating your script:

  • Testing is a good practice that helps not only catch bugs but ensure your code behaves as required. Test files have to import the functions or objects they exercise, and in those cases we typically don’t want the imported script to run as the main module.
  • You’re creating a library but would like to include a demo or other special run-time cases for users. By using this if-statement, the Python modules that import your code as a library are unaffected (see the sketch after this list).
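As a minimal sketch (the file and function names here are hypothetical), a library module can keep its demo behind the guard, and a test file can then import from it without triggering that demo:

# mylib.py: a hypothetical library module
def greet(name):
    return f"Hello, {name}!"

if __name__ == "__main__":
    # Demo code: runs only via `python3 mylib.py`, never on import
    print(greet("world"))

# test_mylib.py: a hypothetical test file
from mylib import greet  # importing mylib does not run the demo

def test_greet():
    assert greet("Ada") == "Hello, Ada!"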

Creating a __main__.py File for Modules

The point of having the if __name__ == "__main__" block is that the code under the condition executes only when the script runs in the __main__ scope. When creating packages in Python, however, it’s better if the code to be executed under the __main__ context is written in a separate file.

Let’s consider the following example – a package for performing calculations. The file tree structure for such a scenario can be visualized as:

calc                 # --> Root directory
├── __main__.py
├── script1.py
├── script2.py
├── script3.py
├── script4.py
└── src              # --> Sub-directory
    ├── add.py
    └── sub.py

The tree structure contains calc as the root directory and a sub-directory known as src. The __main__.py under the calc directory contains the following content:

from src.add import add
from src.sub import sub

a, b = input("Enter two numbers separated by commas: ").split(',')
a, b = int(a), int(b)

print(f"The sum is: {add(a, b)}")
print(f"The difference is: {sub(a, b)}")

The add.py contains:

def add(a, b):
    return a+b

And sub.py contains:

def sub(a, b):
    return a-b

From just outside the calc directory, the package can be executed, running the logic inside __main__.py, by invoking:

python3 calc

A sample run, entering 2,3:

Enter two numbers separated by commas: 2,3
The sum is: 5
The difference is: -1

This structure also gives the workspace a cleaner look: the directories are clearly organized, and the entry point is defined inside a separate file called __main__.py.

Conclusion

The if __name__ == "__main__" check runs blocks of code only when our Python script is executed directly by a user. This is powerful, as it allows our code to behave differently when it’s executed as a program than when it’s imported as a module.

When writing large modules, we can opt for the more structured approach of having a __main__.py file to run the module. For a stand-alone script, including the if __name__ == "__main__" guard is a simpler way to separate the API from the program.

Python

via Planet Python https://ift.tt/1dar6IN

October 26, 2020 at 09:50AM

Why the iPhone 12 Pro is worth the upgrade cost

https://ift.tt/31EV7TN


Putting the iPhone 12 Pro through its paces in the real world really shows why it’s worth the extra cost over an iPhone 12.

It’s more than surface deep

The new iPhone 12 Pro of course offers more features than its predecessors, but before you even notice any of those, you immediately see — and feel — how it has been physically redesigned. As with the rest of the iPhone 12 range, it has iPad Pro-style flat edges, and they make it remarkably appealing to hold.

With the iPhone 12 Pro, Apple retained the stainless steel frame but offers four new colors. What’s been less well reported, though, is that even the colors we thought we’d seen before, such as silver and gold, have a subtly different — and better — look.

For instance, the silver version, which has the white glass back, is now lighter than before. The gold has a new finish to make the color more substantial around the edge, and this also makes it more resistant to fingerprints. Unfortunately, the darker colors remain fingerprint magnets.

Graphite iPhone 12 Pro and space gray iPhone 11 Pro

Pacific Blue, meanwhile, is entirely new. It replaces last year’s green and, at least anecdotally, appears to be a particularly popular option. There’s a slight slate tint to the blue on the iPhone 12 Pro, and it’s gorgeous enough that you will keep staring at it until you put the phone in a case.

To go with these brand-new colors and improved existing ones, there are new exclusive wallpapers. Apple has created four new live wallpapers for the iPhone 12 Pro line that match the phone colors. Hold your finger on the lock screen and these images animate as if they have lens flares.

Massive camera updates

You can point to the finer color and, actually, to the brighter screen, to say there are variations between the iPhone 12 and the iPhone 12 Pro, but the real differences are in the new photo and video capabilities on the new iPhone 12 Pro.

Most of the best new features are reserved for the iPhone 12 Pro Max, though. That model has yet to be released, but in the meantime, the iPhone 12 Pro has some key new features of note.

One example is the addition of Dolby Vision recording at 60 frames per second, as opposed to the 30fps of the iPhone 12. The inclusion of Dolby Vision at all is a feat, and it means that these two smartphones are the first in the world on which you can shoot, edit, and share 4K Dolby Vision HDR video.

However, if you are going to benefit from Dolby Vision, it feels wrong to hamper yourself with the 30fps version. The iPhone 12 Pro’s 60fps is certainly better, and makes greater use of the potential of Dolby Vision recording.

What’s more, in real-world use, it is as easy as you’d want and expect it to be.

When you come to play or edit it, you can immediately tell that footage was shot in Dolby Vision because it is marked with an HDR watermark in the top-left corner of the video app. Similarly, if you edit in the Photos app, you’ll see the display get brighter as it starts to display this footage.

It all looks very good when played on an HDR-capable display, but can be toggled off if you don’t wish to capture it and take up all the storage space it requires.

Night shoots

Another frankly amazing feature we explored was night mode portraits on the iPhone 12 Pro. Night mode came with the iPhone 11 line, and it already allowed you to take long-exposure shots in very low-light situations. With the iPhone 12 Pro, though, that same functionality comes to portrait shots.

When you switch to portrait mode in the Camera app and go to take a pic in a very low-light environment, you will see the night mode icon in the lower-left corner where the 1X and 2X indicators are.

You can’t zoom in and keep the portrait effect; you have to take the shot at 1X, so explain to your subject that you have to step closer. That’s because this type of shot needs the new, faster aperture of the wide-angle camera rather than the 2X telephoto lens.

For the iPhone 12 Pro, Apple increased the aperture from f/1.8 to f/1.6 which allows more light in and allows the shutter to fire faster. The new LiDAR scanner is also used because it allows the camera to focus in near pitch-black environments.

iPhone 11 Pro low-light portrait shot versus night mode portrait on iPhone 12 Pro

We will have a more comprehensive comparison soon, but we did take a quick set of example shots using portrait mode on our iPhone 11 Pro and iPhone 12 Pro. The iPhone 11 Pro wasn’t able to enable portrait mode at all so it just captured a normal image.

Naturally, that image came out very, very dark and completely unusable. On the other hand, iPhone 12 Pro captured a very impressive image in almost no light.

Ultra-wide lens correction on iPhone 12 Pro

Aside from night mode coming to all cameras — notably including the front-facing TrueDepth (selfie) camera — Apple has improved the ultra-wide lens. There’s also a new lens correction that’s applied to deal with the quite excessive distortion that could be present before. Once more, see our sample shots taken on the iPhone 11 Pro and iPhone 12 Pro to see how much of a difference this has made.

As important and visibly improved as the new lens and camera systems are, it’s this combination of corrections and software control that makes the iPhone 12 Pro such a good buy for photographers. That’s only going to become truer when the promised Apple ProRAW format comes out.

We’ll know for sure when it’s released and we can test it in the real world. However, Apple ProRAW is claimed to combine all the advantages of shooting RAW (uncompressed images) with Apple’s computational photography algorithms to get the very finest results possible.

Internal upgrades

Powering all of these new features is Apple’s latest A14 Bionic processor. Last year, the A13 Bionic processor on the iPhone 11 Pro scored 1334 and 3543 on the single-core and multi-core tests. This year, the iPhone 12 Pro pulled a 1598 and a 4180.

That represents about a 20 percent improvement in the single-core score and about an 18 percent gain in the multi-core score. These are the kinds of improvements that don’t just sound good on paper; you can actually appreciate them in real use.

Geekbench scores for iPhone 12 Pro

That’s going to apply to everything you do on the phone, as most tasks are single-core, so the iPhone 12 Pro feels snappier in daily use. But it’s particularly noticeable in video and photo editing, which is faster even when you’re dealing with 4K 60fps content.

Most of these internal differences are also in the iPhone 12, but Apple has given the iPhone 12 Pro an extra 2GB of RAM, bringing it to 6GB. This directly aids specific tasks like loading apps from the background, keeping many Safari tabs open, and more. Storage was doubled too, going from 64GB on the base model to 128GB at the same price point.

Of course, 5G is also an internal upgrade, supporting both sub-6GHz and mmWave 5G here in the US, and sub-6GHz elsewhere.

MagSafe

MagSafe charger on iPhone 12 Pro

In terms of what it means for the iPhone 12 Pro, though, MagSafe is poised to be a massive new feature. You’re going to see a huge expansion of the iPhone accessory ecosystem, with cases, chargers, mounts, wallets, folios, PopSockets, and more, all on their way.

Right now, our real world tests with the iPhone 12 Pro have been using Apple’s own cases, and its own MagSafe charger.

Even based on these, though, MagSafe is a hit. The longer lead is a boon: you can pick up the phone without disconnecting it from the charger.

And the magnets really do instantly center the iPhone 12 Pro on the right spot to make sure it gets charged properly.

Look to the future

That’s the thing about an Apple device. You can review it as it’s launched, and you can properly test it out in the real world, but then it changes.

We’re going to see the addition of more MagSafe devices — such as Apple’s own forthcoming device that charges both the iPhone 12 Pro and the Apple Watch — and we’re going to see Apple ProRAW soon.

Right now, the iPhone 12 Pro is an exceptional phone. It’s going to be interesting to see just how significant the extra camera improvements are in the iPhone 12 Pro Max. But regardless of that, this iPhone 12 Pro is a good buy that is going to keep on getting better.

macintosh

via AppleInsider https://ift.tt/3dGGYcl

October 26, 2020 at 01:18PM

The Most Flammable Dust

https://ift.tt/3jsZOWJ

You wouldn’t think that something as innocuous as corn starch could cause a massive fireball, but you’d be wrong. The Beyond the Press channel conducted a series of experiments to show just how flammable various kinds of dust and powder can be when exposed to a flame. They didn’t try non-dairy creamer, though.

fun

via The Awesomer https://theawesomer.com

October 26, 2020 at 02:45PM

The No-Code Generation is arriving

https://ift.tt/3kuyetu

In the distant past, there was a proverbial “digital divide” that bifurcated workers into those who knew how to use computers and those who didn’t.[1] Young Gen Xers and their later millennial companions grew up with Power Macs and Wintel boxes, and that experience made them native users who knew how to make these technologies do productive work. Older generations were going to be wiped out by younger workers who were more adaptable to the needs of the modern digital economy, upending our routine notion that professional experience equals value.

Of course, that was just a narrative. Facility with computers was determined by the ability to turn one on and log in, a bar so low that it can be shocking to the modern reader to think that a “divide” existed at all. Software engineering, computer science, and statistics remained quite unpopular compared to other academic programs, even in universities, let alone in primary through secondary schools. Most Gen Xers and millennials never learned to code, or frankly, even to make a pivot table or calculate basic statistical averages.

There’s a sociological change underway though, and it’s going to make the first divide look quaint in hindsight.

Over the past two or so years, we have seen the rise of a whole class of software that has been broadly (and quite inaccurately) dubbed “no-code platforms.” These tools are designed to make it much easier for users to harness the power of computing in their daily work. That could be anything from calculating the most successful digital ad campaigns given some sort of objective function, to integrating a computer vision library into a workflow that counts the number of people entering or exiting a building.

The success and notoriety of these tools comes from the feeling that they grant superpowers to their users. Projects that once took a team of engineers hours to build can now be stitched together in a couple of clicks through a user interface. That’s why young startups like Retool can raise at nearly a $1 billion valuation and Airtable at $2.6 billion, while others like Bildr, Shogun, Bubble, Stacker, and dozens more are getting traction among users.

Of course, no-code tools often require code, or at least the sort of deductive logic that is intrinsic to coding. You have to know how to design a pivot table, or understand what a machine learning capability is and what it might be useful for. You have to think in terms of data, about inputs, transformations, and outputs.

The key here is that no-code tools aren’t successful just because they are easier to use — they are successful because they are connecting with a new generation that understands precisely the sort of logic these platforms require in order to function. Today’s students don’t just see their computers and mobile devices as consumption screens they merely know how to turn on. They are widely using them as tools of self-expression, research, and analysis.

Take the popularity of platforms like Roblox and Minecraft. Easily derided as just a generation’s obsession with gaming, both platforms teach kids how to build entire worlds using their devices. Even better, as kids push the frontiers of the toolsets offered by these games, they are inspired to build their own tools. There has been a proliferation of guides and online communities to teach kids how to build their own games and plugins for these platforms (Lua has never been so popular).

These aren’t tiny changes. 150 million people play Roblox games across 40 million user-created experiences, and the platform has nearly 350,000 developers. Minecraft, for its part, has more than 130 million active users. These are generation-defining experiences for young people today.

That excitement to harness computers is also showing up in educational data. Advanced Placement tests for Computer Science have grown from around 20,000 in 2010 to more than 70,000 this year according to the College Board, which administers the high school proficiency exams. That’s the largest increase among all of the organization’s dozens of tests. Meanwhile at top universities, computer science has emerged as the top or among the top majors, pulling in hundreds of new students per campus per year.

The specialized, almost arcane knowledge of data analysis and engineering is being widely democratized for this new generation, and that’s precisely where a new digital divide is emerging.

In business today, it’s not enough to just open a spreadsheet and make some casual observations anymore. Today’s new workers know how to dive into systems, pipe different programs together using no-code platforms, and solve problems with much more comprehensive — and real-time — answers.

It’s honestly striking to see the difference. Whereas just a few years ago, a store manager might (and strong emphasis on might) put their sales data into Excel and then let it linger there for the occasional perusal, this new generation is prepared to connect multiple online tools together to build an online storefront (through no-code tools like Shopify or Squarespace), calculate basic LTV scores using a no-code data platform, and prioritize their best customers with marketing outreach through basic email delivery services. And it’s all reproducible, since it is in technology and code and not produced by hand.

There are two important points here. First is to note the degree of fluency these new workers have for these technologies, and just how many members of this generation seem prepared to use them. They just don’t have the fear to try new programs out, and they know they can always use search engines to find answers to problems they are having.

Second, the productivity difference between basic computer literacy and a bit more advanced expertise is profound. Even basic but accurate data analysis on a business can raise performance substantially compared to gut instinct and expired spreadsheets.

This second digital divide is only going to get more intense. Consider students today in school, who are forced by circumstance to use digital technologies in order to get their education. How many more students are going to become even more capable of using these technologies? How much more adept are they going to be at remote work? While the current educational environment is a travesty and deeply unequal, the upshot is that ever more students are going to be forced to become deeply fluent in computers.[2]

Progress in many ways is about raising the bar. This generation is raising the bar on how data is used in the workplace, in business, and in entrepreneurship. They are better than ever at bringing together various individual services and cohering them into effective experiences for their customers, readers, and users. The No-Code Generation has the potential to finally fill that missing productivity gap in the global economy, making our lives better while saving time for everyone.

[1] Probably worth pointing out that the other “digital divide” at the time was describing households who had internet access and households who did not. That’s a divide that unfortunately still plagues America and many other rich, industrialized countries.

[2] Important to note that access to computing is still an issue for many students and represents one of the most easily fixable inequalities today in America. Providing equal access to computing should be an absolute imperative.

technology

via TechCrunch https://techcrunch.com

October 26, 2020 at 12:26PM