Your Obligatory Friday Read: THE 2020 ELECTION: FUCKERY IS AFOOT – Larry Correia

https://ift.tt/2GymI1u

When you are auditing you see mistakes happen all the time. Humans make errors. Except in real life, mistakes usually go in different directions. When all the mistakes go in the same direction and benefit the same parties, they probably aren’t mistakes. They’re malfeasance.

THE 2020 ELECTION: FUCKERY IS AFOOT – Larry Correia. 

Go read.

When he says “what is potentially fatal for America is half the populace believing that their elections are hopelessly rigged and they’re eternally fucked,” he is right on the money. I saw it happen, and it will perpetuate the Dems in power, because “Why should we bother to vote if they get to cheat so blatantly and get away with it?” By then they won’t even have to bother to cheat; there won’t be enough opposition votes to make it worthwhile.

 

guns

via https://gunfreezone.net

November 6, 2020 at 07:50AM

Dataquest: Beginner Python Tutorial: Analyze Your Personal Netflix Data

https://ift.tt/32eVZik

How much time have I spent watching The Office?

That’s a question that has run through my head repeatedly over the years. The beloved sitcom has been my top "comfort show/background noise" choice for a long time.

It used to be a question I couldn’t answer, because the data Netflix allowed users to download about their activity was extremely limited.

Now, though, Netflix allows you to download a veritable treasure-trove of data about your account. With just a little Python and pandas programming, we can now get a concrete answer to the question: how much time have I spent watching The Office?

Want to find out how much time you have spent watching The Office, or any other show on Netflix?

In this tutorial, we’ll walk you through exactly how to do it step by step!

Having a little Python and pandas experience will be helpful for this tutorial, but it’s not strictly necessary. You can sign up and try our interactive Python for beginners course for free.

But first, let’s answer a quick question . . .

Can’t I Just Use Excel? Why Do I Need to Write Code?

Depending on how much Netflix you watch and how long you’ve had the service, you might be able to use Excel or some other spreadsheet software to analyze your data.

But there’s a good chance that will be tough.

The dataset you’ll get from Netflix includes every time a video of any length played — that includes those trailers that auto-play as you’re browsing your list.

So, if you use Netflix often or have had the streaming service for a long time, the file you’re working with is likely to be pretty big. My own viewing activity data, for example, was over 27,000 rows long.

Opening a file that big in Excel is no problem. But to do our analysis, we’ll need to do a bunch of filtering and perform calculations. With that much data, Excel can get seriously bogged down, especially if your computer isn’t particularly powerful.

Scrolling through such a huge dataset trying to find specific cells and formulas can also become confusing fast.

Python can handle large datasets and calculations like this much more smoothly because it doesn’t have to render everything visually. And since we can do everything with just a few lines of code, it’ll be really easy to see everything we’re doing, without having to scroll through a big spreadsheet looking for cells with formulas.

Step 1: Download Your Netflix Data

For the purposes of this tutorial, I’ll be using my own Netflix data. To grab your own, make sure you’re logged in to Netflix and then visit this page. From the main Netflix screen, you can also find this page by clicking your account icon in the top right, clicking "Account", and then clicking "Download your personal information" on the page that loads.

On the next page, you should see this:

Click the red button to submit your data download request.

Click "Submit a Request." Netflix will send you a confirmation email, which you’ll need to click.

Then, unfortunately, you’ll have to wait. Netflix says preparing your data report can take up to 30 days. I once got one report within 24 hours, but another one took several weeks. Consider bookmarking this page so that you can come back once you’ve got your data.

If you’d like, I’ve also made a small sample from my own data available for download here. You can download that file and use it to work through this project. Then, when your own data becomes available, simply substitute your own file, run the code again, and you’ll get your answers almost instantly!

When Netflix says it may take a month to get your data.

Netflix will email you when your report is available to download. When it is, act fast because the download will "expire" and disappear again after a couple of weeks!

The download will arrive as a .zip file that contains roughly a dozen folders, most of which contain data tables in .csv format. There are also two PDFs with additional information about the data.
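If you’re curious what’s in the zip before digging through it by hand, Python’s built-in zipfile module can list the report’s contents. A small sketch (the entry names in the usage note below are illustrative, not the exact names Netflix uses):

```python
import zipfile

def list_report_csvs(zip_path):
    """Return the names of the .csv files inside a downloaded report zip."""
    with zipfile.ZipFile(zip_path) as archive:
        return [name for name in archive.namelist() if name.endswith(".csv")]
```

Calling list_report_csvs("netflix-report.zip") on your download would then return paths like "Content_Interaction/ViewingActivity.csv".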

Step 2: Familiarize Yourself with the Data

This is a critical step in the data analysis process. The better we understand our data, the better our chances are of producing meaningful analysis.

Let’s take a look at what we’ve got. Here’s what we’ll see when we unzip the file:

Our goal here is to figure out how much time I’ve spent watching Netflix. Content Interaction seems like the most likely folder to contain that data. If we open it, we’ll find a file called ViewingActivity.csv that looks exactly like what we want — a log of everything we’ve viewed over the history of the account.

A sample of what the data looks like as a spreadsheet.

Looking at the data, we can quickly spot one potential challenge. There’s a single column, Title, that contains both show and episode titles, so we’ll need to do a little extra work to filter for only episodes of The Office.

At this point, it would be tempting to dive right into the analysis using that data, but let’s make sure we understand it first! In the downloaded zip file, there’s a file called Cover sheet.pdf that contains data dictionaries for all of the .csv files, including ViewingActivity.csv.

This data dictionary can help us answer questions and avoid errors. For example, consulting the dictionary for ViewingActivity.csv, we can see that the column Start Time uses the UTC timezone. If we want to analyze which times of day we most often watch Netflix, for example, we’ll need to convert this column to our local timezone.

Take some time to look over the data in ViewingActivity.csv and the data dictionary in Cover sheet.pdf before moving on to the next step!

Step 3: Load Your Data into a Jupyter Notebook

For this tutorial, we’ll be analyzing our data using Python and pandas in a Jupyter notebook. If you don’t already have that set up, you can find a quick, beginner-friendly guide at the beginning of this tutorial, or check out a more in depth Jupyter Notebook for Beginners post.

Once we’ve got a notebook open, we’ll import the pandas library and read our Netflix data CSV into a pandas dataframe we’ll call df:

import pandas as pd

df = pd.read_csv('ViewingActivity.csv')

Now, let’s do a quick preview of the data to make sure everything looks correct. We’ll start with df.shape, which will tell us the number of rows and columns in the dataframe we’ve just created.

df.shape
(27354, 10)

That result means we have 27,354 rows and 10 columns. Now, let’s see what it looks like by previewing the first few rows of data using df.head().

To maintain some privacy, I’ll be adding the additional argument 1 inside the .head() parentheses so that only a single row prints in this blog post. In your own analysis, however, you can use the default .head() to print the first five rows.

df.head(1)
Profile Name Start Time Duration Attributes Title Supplemental Video Type Device Type Bookmark Latest Bookmark Country
0 Charlie 2020-10-29 3:27:48 0:00:02 NaN The Office (U.S.): Season 7: Ultimatum (Episod… NaN Sony PS4 0:00:02 0:00:02 US (United States)

Perfect!

Step 4: Preparing the Data for Analysis

Before we can do our number-crunching, let’s clean up this data a bit to make it easier to work with.

Dropping Unnecessary Columns (Optional)

First, we’ll start by dropping the columns we’re not planning to use. This is totally optional, and it’s probably not a good idea for large-scale or ongoing projects. But for a small-scale personal project like this, it can be nice to work with a dataframe that includes only columns we’re actually using.

In this case, we’re planning to analyze how much and when I’ve watched The Office, so we’ll need to keep the Start Time, Duration, and Title columns. Everything else can go.

To do this, we’ll use df.drop() and pass it two arguments:

  1. A list of the columns we’d like to drop
  2. axis=1, which tells pandas to drop columns

Here’s what it looks like:

df = df.drop(['Profile Name', 'Attributes', 'Supplemental Video Type', 'Device Type', 'Bookmark', 'Latest Bookmark', 'Country'], axis=1)
df.head(1)

Start Time Duration Title
0 2020-10-29 3:27:48 0:00:02 The Office (U.S.): Season 7: Ultimatum (Episod…

Great! Next, let’s work with the time data.

Converting Strings to Datetime and Timedelta in Pandas

The data in our two time-related columns certainly looks correct, but what format is this data actually being stored in? We can use df.dtypes to get a quick list of the data types for each column in our dataframe:

df.dtypes
Start Time    object
Duration      object
Title         object
dtype: object

As we can see here, all three columns are stored as object, which means they’re strings. That’s fine for the Title column, but we need to change the two time-related columns into the correct datatypes before we can work with them.

Specifically, we need to do the following:

  • Convert Start Time to datetime (a date and time format pandas can understand and perform calculations with)
  • Convert Start Time from UTC to our local timezone
  • Convert Duration to timedelta (a time duration format pandas can understand and perform calculations with)

So, let’s approach those tasks in that order, starting with converting Start Time to datetime using pandas’s pd.to_datetime().

We’ll also add the optional argument utc=True so that our datetime data has the UTC timezone attached to it. This is important, since we’ll need to convert it to a different timezone in the next step.

We’ll then run df.dtypes again just to confirm that this has worked as expected.

df['Start Time'] = pd.to_datetime(df['Start Time'], utc=True)
df.dtypes
Start Time    datetime64[ns, UTC]
Duration                   object
Title                      object
dtype: object

Now we’ve got that column in the correct format, it’s time to change the timezone so that when we do our analysis, we’ll see everything in local time.

We can convert datetimes to any timezone using the .tz_convert() method and passing it a string for the timezone we want to convert to. In this case, that’s 'US/Eastern'. To find your specific timezone, here’s a handy reference of TZ timezone options.

The tricky bit here is that .tz_convert() works on a DatetimeIndex, so we need to set our Start Time column as the index using set_index() before we perform the conversion.

In this tutorial, we’ll then use reset_index() to turn it back into a regular column afterwards. Depending on your preference and goals, this may not be necessary, but for the purposes of simplicity here, we’ll try to do our analysis with all of our data in columns rather than having some of it as the index.

Putting all of that together looks like this:

# change the Start Time column into the dataframe's index
df = df.set_index('Start Time')

# convert from UTC timezone to eastern time
df.index = df.index.tz_convert('US/Eastern')

# reset the index so that Start Time becomes a column again
df = df.reset_index()

#double-check that it worked
df.head(1)

Start Time Duration Title
0 2020-10-28 23:27:48-04:00 0:00:02 The Office (U.S.): Season 7: Ultimatum (Episod…

We can see this is correct because the previous first row in our dataset had a Start Time of 2020-10-29 03:27:48. During Daylight Saving Time, the U.S. Eastern time zone is four hours behind UTC, so we can see that our conversion has happened correctly!
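As an aside, pandas can also convert a timezone-aware column directly through the .dt accessor, which skips the set_index()/reset_index() round-trip. This is just an alternative, not a required step; a minimal sketch:

```python
import pandas as pd

# a one-row stand-in for the Start Time column after pd.to_datetime(..., utc=True)
start = pd.Series(pd.to_datetime(["2020-10-29 03:27:48"], utc=True))

# convert the column itself -- no index juggling required
eastern = start.dt.tz_convert("US/Eastern")
print(eastern.iloc[0])  # 2020-10-28 23:27:48-04:00
```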

Now, let’s deal with our Duration column. This is, as the name suggests, a duration — a measure of a length of time. So, rather than converting it to a datetime, we need to convert it to a timedelta, which is a measure of time duration that pandas understands.

This is very similar to what we did when converting the Start Time column. We’ll just need to use pd.to_timedelta() and pass it the column we want to convert as an argument.

Once again, we’ll use df.dtypes to quickly check our work.

df['Duration'] = pd.to_timedelta(df['Duration'])
df.dtypes
Start Time    datetime64[ns, US/Eastern]
Duration                 timedelta64[ns]
Title                             object
dtype: object

Perfect! But we’ve got one more data preparation task to handle: filtering that Title column so that we can analyze only views of The Office.

Filtering Strings by Substring in pandas Using str.contains

There are many ways we could approach filtering The Office views. For our purposes here, though, we’re going to create a new dataframe called office and populate it only with rows where the Title column contains 'The Office (U.S.)'.

We can do this using str.contains(), giving it two arguments:

  • 'The Office (U.S.)', which is the substring we’re using to pick out only episodes of The Office.
  • regex=False, which tells the function that the previous argument is a string and not a regular expression.

Here’s what it looks like in practice:

# create a new dataframe called office that takes from df
# only the rows in which the Title column contains 'The Office (U.S.)'
office = df[df['Title'].str.contains('The Office (U.S.)', regex=False)]
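To see why regex=False matters here: the parentheses and dots in 'The Office (U.S.)' are regex metacharacters, so interpreted as a pattern they’d act as a capture group and wildcards rather than literal text. A quick illustration with made-up titles:

```python
import pandas as pd

titles = pd.Series([
    "The Office (U.S.): Season 1: Pilot",
    "The Office US",                      # no literal "(U.S.)"
    "Stranger Things: Chapter One",
])

# regex=False matches the parentheses and dots literally
print(titles.str.contains("The Office (U.S.)", regex=False).tolist())
# [True, False, False]
```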

Once we’ve done this, there are a few ways we could double-check our work. For example, we could use office.sample(20) to inspect a random twenty rows of our new office dataframe. If all twenty rows contained Office episodes, we could be pretty confident things worked as expected.

For the purposes of preserving a little privacy in this tutorial, though, I’ll run office.shape to check the size of the new dataframe. Since this dataframe should contain only my views of The Office, we should expect it to have significantly fewer rows than the 27,000+ row df dataset.

office.shape
(5479, 3)

Filtering Out Short Durations Using Timedelta

Before we really dig in and analyze, we should probably take one final step. We noticed in our data exploration that when something like an episode preview auto-plays on the homepage, it counts as a view in our data.

However, watching two seconds of a trailer as you scroll past isn’t the same as actually watching an episode! So let’s filter our office dataframe down a little bit further by limiting it to only rows where the Duration value is greater than one minute. This should effectively count the watchtime for partially watched episodes, while filtering out those short, unavoidable "preview" views.
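One note on how this comparison works: pandas will compare a timedelta column against either a parseable string like '0 days 00:01:00' or an explicit pd.Timedelta, and the two are interchangeable. A tiny sketch:

```python
import pandas as pd

durations = pd.Series(pd.to_timedelta(["0:00:02", "0:05:00", "0:22:10"]))

# an explicit Timedelta behaves the same as the string '0 days 00:01:00'
mask = durations > pd.Timedelta(minutes=1)
print(mask.tolist())  # [False, True, True]
```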

Again, office.head() or office.sample() would be good ways to check our work here, but to maintain some semblance of privacy, I’ll again use office.shape just to confirm that some rows were removed from the dataframe.

office = office[(office['Duration'] > '0 days 00:01:00')]
office.shape
(5005, 3)

That looks good, so let’s move on to the fun stuff!

Analyzing the Data

When you realize how much time you’ve spent watching the same show.

How much time have I spent watching The Office?

First, let’s answer the big question: How much time have I spent watching The Office?

Since we’ve already got our Duration column in a format that pandas can compute, answering this question is quite straightforward. We can use .sum() to add up the total duration:

office['Duration'].sum()
Timedelta('58 days 14:03:33')

So, I’ve spent a total of 58 days, 14 hours, 3 minutes and 33 seconds watching The Office on Netflix. That is . . . a lot.
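If days-hours-minutes is hard to picture, a Timedelta can be collapsed into a single unit with total_seconds(); the figure above works out to roughly 1,406 hours:

```python
import pandas as pd

total = pd.Timedelta("58 days 14:03:33")  # the sum computed above
hours = total.total_seconds() / 3600
print(round(hours, 1))  # 1406.1
```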

In my defense, that’s over the course of a decade, and a good percentage of that time wasn’t spent actively watching! When I’m doing brain-off work, working out, playing old video games, etc., I’ll often turn The Office on as a kind of background noise that I can zone in and out of. I also used to use it as a kind of white noise while falling asleep.

But we’re not here to make excuses for my terrible lifestyle choices! Now that we’ve answered the big question, let’s dig a little deeper into my The Office-viewing habits:

When do I watch The Office?

Let’s answer this question in two different ways:

  • On which days of the week have I watched the most Office episodes?
  • During which hours of the day do I most often start Office episodes?

We’ll start with a little prep work that’ll make these tasks a little more straightforward: creating new columns for "weekday" and "hour".

We can use the .dt.weekday and .dt.hour methods on the Start Time column to do this and assign the results to new columns named weekday and hour:

office['weekday'] = office['Start Time'].dt.weekday
office['hour'] = office['Start Time'].dt.hour

# check to make sure the columns were added correctly
office.head(1)

Start Time Duration Title weekday hour
1 2020-10-28 23:09:43-04:00 0 days 00:18:04 The Office (U.S.): Season 7: Classy Christmas:… 2 23

Now, let’s do a little analysis! These results will be easier to understand visually, so we’ll start by using the %matplotlib inline magic to make our charts show up in our Jupyter notebook. Then, we’ll import matplotlib.

%matplotlib inline
import matplotlib

Now, let’s plot a chart of my viewing habits by day of the week. To do this, we’ll need to work through a few steps:

  • Tell pandas the order we want to chart the days in using pd.Categorical — by default, it will plot them in descending order based on the number of episodes watched on each day, but when looking at a graph, it’ll be more intuitive to see the data in Monday-Sunday order.
  • Count the number of episodes I viewed on each day in total
  • Sort and plot the data

(There are also many other ways we could approach analyzing and visualizing this data, of course.)

Let’s see how it looks step by step:

# set our categorical and define the order so the days are plotted Monday-Sunday
office['weekday'] = pd.Categorical(office['weekday'], categories=
    [0,1,2,3,4,5,6],
    ordered=True)

# create office_by_day and count the rows for each weekday, assigning the result to that variable
office_by_day = office['weekday'].value_counts()

# sort the index using our categorical, so that Monday (0) is first, Tuesday (1) is second, etc.
office_by_day = office_by_day.sort_index()

# optional: update the font size to make it a bit larger and easier to read
matplotlib.rcParams.update({'font.size': 22})

# plot office_by_day as a bar chart with the listed size and title
office_by_day.plot(kind='bar', figsize=(20,10), title='Office Episodes Watched by Day')

The Office views by day, Mon-Sun.

As we can see, I’ve actually tended to watch The Office more during the week than on weekends. This makes sense based on my habits, since it’s often background noise during evening work, workouts, etc.

Now, let’s take a look at the same data by hour. The process here is very similar to what we just did above:

# set our categorical and define the order so the hours are plotted 0-23
office['hour'] = pd.Categorical(office['hour'], categories=
    [0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23],
    ordered=True)

# create office_by_hour and count the rows for each hour, assigning the result to that variable
office_by_hour = office['hour'].value_counts()

# sort the index using our categorical, so that midnight (0) is first, 1 a.m. (1) is second, etc.
office_by_hour = office_by_hour.sort_index()

# plot office_by_hour as a bar chart with the listed size and title
office_by_hour.plot(kind='bar', figsize=(20,10), title='Office Episodes Watched by Hour')

The Office views by hour, AM-PM

From the data, it looks like 12 a.m. and 1 a.m. were the hours during which I most often started episodes of The Office. This is due to my (unhealthy) habit of using the show as white noise while going to sleep — many of these episodes probably auto-played while I was already asleep!

Outside of that, it’s no surprise to see that most of my viewing happened during the evenings.

(Note: This data actually may not reflect my real habits very well, because I lived in China for a significant portion of my Netflix account ownership. We didn’t account for that in this tutorial because it’s a unique situation that won’t apply for most people. If you’ve spent significant time in different timezones during your Netflix usage, then you may need to do some additional date filtering and timezone conversion in the data cleaning stage before analysis.)
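For anyone in that multi-timezone boat, one approach is to split the data at the date you moved and convert each segment to its own local timezone. The cutover date and zones below are entirely hypothetical, just to show the shape of the idea:

```python
import pandas as pd

times = pd.Series(pd.to_datetime(
    ["2017-05-01 12:00:00", "2020-10-29 03:27:48"], utc=True))

cutover = pd.Timestamp("2018-06-01", tz="UTC")  # hypothetical move date

# before the cutover, treat viewing as Chinese local time; after, U.S. Eastern
local = [t.tz_convert("Asia/Shanghai") if t < cutover
         else t.tz_convert("US/Eastern")
         for t in times]
print([t.hour for t in local])  # [20, 23]
```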

What’s Next?

In this tutorial, we’ve taken a quick dive into some personal Netflix data and learned that — among other things — I watch The Office too much. But there are tons of places you could go from here! Here are some ideas for expanding this project for yourself:

  • Do the same or similar analysis for another show.
  • See if you can create separate columns for show titles and episode titles using regular expressions (learn to use those in our Advanced Data Cleaning course)
  • Figure out which specific episodes you’ve watched most and least
  • Create prettier charts (our Storytelling with Data Visualization course can help with that)
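For the “which episodes” idea, one simple approach (a sketch, not the regex solution the course teaches) is to split the Title column on its “: ” separator and count the resulting episode names:

```python
import pandas as pd

# made-up rows in the same format as the Title column
titles = pd.Series([
    "The Office (U.S.): Season 7: Ultimatum (Episode 1)",
    "The Office (U.S.): Season 7: Ultimatum (Episode 1)",
    "The Office (U.S.): Season 2: The Dundies (Episode 1)",
])

# split into show / season / episode on the ": " separator
parts = titles.str.split(": ", n=2, expand=True)
parts.columns = ["show", "season", "episode"]

print(parts["episode"].value_counts().idxmax())  # Ultimatum (Episode 1)
```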

When you realize your Netflix viewing habits have led to you finishing a cool project.

You can also try out some other fun projects using your own personal data. For example:

Want to learn to do this kind of project on your own, whenever you want? Our interactive data science courses will teach you to do all of this — and a whole lot more! — right in your browser window.

Charlie Custer

Charlie is a student of data science, and also a content marketer at Dataquest. In his free time, he’s learning to mountain bike and making videos about it.

The post Beginner Python Tutorial: Analyze Your Personal Netflix Data appeared first on Dataquest.

Python

via Planet Python https://ift.tt/1dar6IN

November 5, 2020 at 08:51PM

The Revolution is here and it’s called Statamic 3

https://ift.tt/3mUCLGt


I’ve finally realized and come to appreciate the power of a CMS. After maintaining my own blog built on Laravel Nova. After publishing on Medium. After hosting a static site from various free and open source software that got abandoned by its maintainer or never really had much of a community in the first place. Publishing content is paramount to any web operation. The internet is media, and how you manage your media can determine a lot of the success of your business. If you need a developer to commit to the codebase or write entries to the database to publish a blog post or update copy, there’s a lot of overhead there. Pre-release, when I found out that you’d be able to install Statamic 3 into a new or existing Laravel application, I was intrigued.

The Statamic team is grounded in the Laravel community. Their copy is hilarious. Their leader Jack McDade is a far out designer. His Radical Design course is expected to be out soonish and his personal website reads I’m Jack McDade and I’m tired of boring websites. Statamic has been around since 2012. In addition to Jack it was cofounded by repeat Product Hunt maker of the year Mubashar Iqbal (aka Mubs). Statamic 3 was launched with a magical unicorn on June 11th, 2020. You can read the announcement blog post titled Everything You Need to Know About Statamic 3.

Let’s start with the end in mind shall we?

I’m typing this on a beautiful editor in my browser. There are no code changes I need to make to get this post out. I don’t need to write it in a Google Doc and paste it over. The Statamic dashboard gives me powers. Hell, Mr. McDade even live streamed building an AirBnB clone with Statamic. It’ll be the first video result when you google “Statamic Airbnb for chairs“. Though I’m familiar with Laravel and love to code, I don’t want to start with anything too crazy, because with great power comes great responsibility. P.S. they’re casting Tobey Maguire and Andrew Garfield in the new Spiderman movie with Doctor Strange. It’s gunna be lit. In this tutorial we’ll go over how I built and launched this very site you’re on using Statamic 3. We’re hosted on Netlify and bought the domain name with Google Domains.

How it all began

This project began on the internet. The internet is a series of interconnected tubes. Birds fly in some of these tubes and they come out on an internet website called twitter dot com.

Epic Laravel origin story on twitter

I’d followed William since purchasing his book on how to Break Into Tech With Twitter. To be perfectly frank I have not started the book, but I’m looking forward to reading it. As another side project I run a site called Employbl that is a resource for job seekers. I figured the book would be good reading to learn about how people break in and get their start in the tech industry. Everyone’s experience is different! I’d categorize myself as a Laravel fanboi. I use it in my day job. I build my side project with it. I like to learn about it. I follow Laravel devs on Twitter. There’s lots I like about it. I’ve blogged a fair bit about Laravel but never really dedicated a site to it. I run a Full Stack Developer Meetup group, but even there we don’t really have a blog. Until recently I published my Laravel tutorials and blog posts on Employbl. That’s fine, but it’s not strictly related to the company mission. I’d like Employbl to serve the tech industry more broadly, even departments outside of engineering. It’s ultimately about giving you the tools to help you get hired. I needed a space for Laravel developers. Plus, Statamic 3 was out and I wanted to give it a test drive. I bought the domain epiclaravel.dev on Google Domains for $12 for one year of registration.

Create a project

I had a domain; then I needed to create a Statamic website. Here was the first “AHA moment”. Statamic 3 has a Static Site Generator package, open source on GitHub. This enables us to host our Statamic sites anywhere we can store a bunch of flat files. That could be S3, Netlify, GitHub Pages, or Vercel (formerly Zeit). It doesn’t require us to spin up a server like we would if we were hosting a PHP application on something like Digital Ocean, EC2, Laravel Forge, Ploi or Render. I was excited. It simplified the whole process, reduced cost, and would be easier to maintain and set up a deployment process for.

The Statamic team has built some starter templates for our ease of use. There are only a couple right now, but I could see this being a growth area. They already have a Marketplace for Addons and display copy saying a starter kit section is coming soon. Start building Statamic starter kits now and you could have one of the first themes available on the platform!

Potential aside, today we have a few options:

Going with the Doogie Browser theme was tempting, but making my website look like a 90s PC was a bit too much to swallow, so Starter’s Creek it was. Once I’d picked a starter kit I could generate my project. Of course I could have started from complete scratch, but I’m hella lazy like that and would like to be up and blogging / building ChairBnB, so I used a starter kit. The only difference being an argument when generating the project:

git clone git@github.com:statamic/starter-kit-starters-creek.git epiclaravel
cd epiclaravel
rm -rf .git
composer install
cp .env.example .env && php artisan key:generate

You can view the source code for the starter template here. The Statamic team has ingeniously named their main php worker file “please”. To use the command line interface that’s included with Laravel you use “artisan“. To use the command line interface that’s included with Statamic you use “please“. So we create a user:

php please make:user

Fun Fact: Ecamm Live is a streaming tool for Mac.

I have Laravel Valet set up on my local machine, so I’ll use that for running the website locally. The site is visible on my local machine through the browser at http://epiclaravel.test. Setting up Laravel Valet can be a little tricky if you’re completely unfamiliar, but it’s worth it! For Laravel applications you can serve sites with Laravel Valet and use Takeout Docker containers to host other services your app needs, like Postgres, Redis, Meilisearch, ElasticSearch and more. Takeout is built and maintained by the Tighten team. For my purposes, having composer installed and Valet configured is enough to run the site in development.

Enter the dashboard

To log in to the dashboard, head to your domain at /cp. From there you’re off and running. It’s probably best to start by reading the documentation. Statamic is really powerful and I’ll probably write some more blog posts as I explore it more. Statamic Collections are very promising, and I’m looking forward to implementing search and learning about how Statamic handles it. Their documentation reads “There are three components — coincidentally the same number of Hanson brothers — whose powers combine to provide you the power of search. The form, the index, and the driver.” With the site running locally and my user created I see this:

Open the project in a code editor (for example PHPStorm or VSCode) and you can play with the values or the HTML/CSS. The Starter’s Creek starter kit is built with TailwindCSS. I’m excited to play with that. Previously I’d been plagued by build process errors when trying to set up Tailwind. I’d stuck to Bootstrap 4 out of habit. For now though we have the template, not a lot of feature development to be done. Let’s deploy!

Deploy

One of the awesome new features of Statamic 3 (along with being able to install Statamic into any Laravel project as a composer package) is the Static Site Generator. Why is this awesome you ask? Static sites are easier to host than running your own server. When a site is “static” it pretty much just means it’s a bunch of files sitting on a server somewhere. All the computer needs to do is serve the files (HTML, CSS and Javascript) to the end users, in most cases a web browser. The alternative is having your own server that you maintain or doing “serverless” things (still involves servers). Static sites you can host with Netlify, Amazon S3, GitHub Pages or Vercel. If an app requires a server (and probably a database) you’re more in the Digital Ocean / Google Cloud / AWS / Azure space. Render and Heroku are great options too 🙂

We could deploy our Statamic site using a server and a database like a normal Laravel app, but I think it's going to be easier to deploy a static site to start off. All we want to do is host content for now. I've used Netlify before, so I'm going to stick with that.

We first need to require the static site generator composer package into our app:

composer require statamic/ssg

We’ll publish the config file to be explicit about what we have going on:

php artisan vendor:publish --provider="Statamic\StaticSite\ServiceProvider"

This normally generates a file in the config directory; here it lives in the config/statamic directory. It looks like the starter template already had this ready to go. You can view the config file here if that's what floats your boat 🚣
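For reference, the published config file looks roughly like the sketch below. The exact keys can differ between versions of statamic/ssg, so treat this as an approximation and check your own copy:

```php
<?php

// config/statamic/ssg.php (sketch; keys may vary by package version)
return [
    // The URL the generated site will be served from
    'base_url' => config('app.url'),

    // Where the generated static files get written
    'destination' => storage_path('app/static'),
];
```

The destination path matters later: it's the directory your static host needs to publish.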

Now we can build our static site: ✨

php please ssg:generate

This is the output I got:

The Statamic team outlined some Deployment Examples for us. It looks pretty straightforward and awesome:

Here are the steps to deploy a static Statamic site. Your app will be powered by flat files and stored safely in version control.

Deployment Step 1: Deploy to a GitHub repo

You could also deploy to GitLab or Bitbucket. Honestly, I've heard great things about GitLab but use GitHub mostly out of habit and for the platform's social aspects. Maybe GitLab has that too, idk. Anywho, create your repo. From the root of your project run:

git init
git add -A
git commit -m 'initial commit'
# hook up the empty GitHub repo you created (swap in your own URL) and push
git remote add origin git@github.com:<your-username>/<your-repo>.git
git push -u origin master

Deployment Step 2: Deploy with Netlify

We can link Netlify to our git repo, configure the build command, set the PHP version as an environment variable, and set the publish directory:
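If you'd rather keep these settings in version control than in the Netlify UI, the same configuration can live in a netlify.toml at the project root. This is a sketch based on my setup: the publish directory assumes the Static Site Generator's default output path, and the PHP version is whatever your app needs, so adjust both to match your project:

```toml
[build]
  # Build the static site with Statamic's generator
  command = "php please ssg:generate"
  # The generator's default output directory (change if you customized it)
  publish = "storage/app/static"

[build.environment]
  # Tell Netlify's build image which PHP version to use
  PHP_VERSION = "7.4"
```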

This deploys our site to a Netlify URL like: https://boring-noyce-0f134b.netlify.app/ 

Woohoo! It’s live on the internet with continuous deployment set up. Pushing to the master branch with git will redeploy our site. We also need to set some environment variables on the Netlify dashboard. The .env file is not stored in git. The Netlify dashboard provides space to specify these variables for production.

Deployment Step 3: Hook up domain name

I bought my domain name through Google Domains. In hindsight, this was a mistake. The Google Domains UI is easy to navigate and I have other domains there, but if I'm hosting through Netlify I shoulda just bought the domain through them too. To point the domain name at Netlify's servers, we'll be using "Netlify DNS".

This can take up to 48 hours to propagate, so let's hope it worked! We can view the propagation status in the Netlify dashboard under Settings > Domain Management.

After the DNS changes propagate, your site will be live. The flow for future updates is: log in to the control panel on your test domain, write content, make edits, and do CMS things. This will change the flat files in your project. When your site is looking good locally, push the changes up to GitHub and your site will automatically be deployed! That's what I'm doing for Epic Laravel and it's working great 😉

Conclusion

In this post we've gone from no website to a functional one with a CMS. The most complicated or technical part is probably setting up Laravel Valet for local development. Once the site is running, we can do lots of editing from the Control Panel. We can also use our Laravel, PHP and Tailwind knowledge to build custom functionality, or buy pre-built solutions from the Statamic marketplace. Next, I'm looking forward to exploring the Statamic core concepts as I build out the site, and maybe even installing Statamic into my existing Laravel projects.

programming

via Laravel News Links https://ift.tt/2dvygAJ

November 3, 2020 at 02:24PM

Laravel Has Many Through

Laravel Has Many Through

https://ift.tt/3k0db11

Laravel Has Many Through generates the code for your has many through relationships by asking a few simple questions.

programming

via Laravel News Links https://ift.tt/2dvygAJ

November 3, 2020 at 02:24PM

23,600 Hacked Databases Have Leaked From a Defunct ‘Data Breach Index’ Site

23,600 Hacked Databases Have Leaked From a Defunct ‘Data Breach Index’ Site

https://ift.tt/2I1x4HB

More than 23,000 hacked databases have been made available for download on several hacking forums and Telegram channels in what threat intel analysts are calling the biggest leak of its kind. From a report: The database collection is said to have originated from Cit0Day.in, a private service advertised on hacking forums to other cybercriminals. Cit0day operated by collecting hacked databases and then providing access to usernames, emails, addresses, and even cleartext passwords to other hackers for a daily or monthly fee. Cybercriminals would then use the site to identify possible passwords for targeted users and then attempt to breach their accounts at other, more high-profile sites. The idea behind the site isn’t unique, and Cit0Day could be considered a reincarnation of similar "data breach index" services such as LeakedSource and WeLeakInfo, both taken down by authorities in 2018 and 2020, respectively.


Read more of this story at Slashdot.

geeky

via Slashdot https://slashdot.org/

November 4, 2020 at 12:56PM

Why 2A Supporters Love The Mandalorian

Why 2A Supporters Love The Mandalorian

https://ift.tt/3jNgW9L


Why 2A Supporters Love The Mandalorian
This image released by Disney Plus shows Pedro Pascal, as Din Djarin, right, with The Child, in a scene from “The Mandalorian,” premiering its second season on Friday. (Disney Plus via AP)

One of the first movies I saw in the theater was when I wasn’t even old enough for kindergarten. I was just four years old when Star Wars premiered. My uncle, then all of 16 and a newly licensed driver, took me to see what was becoming a cultural phenomenon. I became a massive science fiction fan at that moment, a genre I still love all these many years later.

Then the prequels came, and they were…not good.

Then we got the new movies. While I liked The Force Awakens, The Last Jedi was freaking awful. Rise of Skywalker was better, but that was a low bar.

Disney had all but destroyed my beloved Star Wars.

Then Disney Plus launched and premiered The Mandalorian. It showed that the issue wasn’t Disney, but something else.

What I noticed, though, were just how many of my fellow Second Amendment lovers also loved The Mandalorian.

Now in its second season–which premiered this past Friday–the show is continuing where it left off, and I think the show’s popularity with the Second Amendment crowd will continue to grow. In fact, I expect to start seeing Mandalorian-themed stuff begin to replace Punisher skulls any day now.

But the question is, why? Here are a few reasons I’ve seen.

In one season one episode, the Mandalorian has to talk to Jawas about parts for his ship. He's advised to leave his guns behind. The character, Din Djarin, simply replies that he's a Mandalorian and that "weapons are part of my religion."

While guns aren't religious for most of us, the refusal to leave our guns behind speaks to a part of the Second Amendment supporter's soul. Guns are for self-defense, and leaving them behind exposes you to danger. Djarin has more reason than most of us to be concerned (he's a bounty hunter, after all), but at this point no one is actively hunting him so far as he's aware. He simply won't leave his weapons behind.

It’s kind of hard not to look at that and think about how similar it feels to how many of us approach things. A “Gun Free Zone” sign is basically telling us to go away, conduct business somewhere else. An espoused anti-Second Amendment opinion is much the same thing.

While guns aren't necessarily part of our religion, they're a part of our life, and we recognize that danger doesn't go away just because you wish it would.

Over the course of the show, there are a couple of episodes that show evil people preying on the peaceful but disarmed folks just trying to get by in life. It takes someone with a gun to make armed bad guys go away.

Of course, while this is fiction, the reality of it appears everywhere in real life. Criminals prey on the innocent citizen unless that citizen is armed. Some who can afford it hire private security to bring their guns, but many of us can’t afford to outsource it.

Whether it’s protecting a village as Djarin did in season one or watching a would-be marshal put slavers down, the only thing that really stops bad people with guns is good people with guns.

I mean, I don’t have to lay out why that appeals to the Second Amendment crowd.

More important than the symbolism, of course, is the story. One of the worst things about much of modern science fiction is the idea that politics should trump telling a good story.

In The Mandalorian, story doesn’t play second fiddle to anything. The plot is engaging and entertaining. It fully embraces the idea of it being a space western in a way that no show has since Firefly. In fact, there’s some debate as to which is better, but since I absolutely love both, I’m staying out of that one.

For fans of westerns, you'll recognize the familiar themes. For example, there's the episode with MMA legend Gina Carano that's reminiscent of The Magnificent Seven. Episode one of season two gives a bit of a shout-out to Justified and Deadwood with guest star Timothy Olyphant showing up.

And through it all, there’s a weapon on his side.

See, while it tells great stories, it doesn’t beat you over the head with all the ways you suck like so much of modern media tries to do. Instead, it just entertains you while, admittedly, showing all the things that Second Amendment fans have been saying for years.

I know that a lot of people aren’t fans of Disney, and I get that. However, let’s be better than the other side and not try to destroy businesses that disagree with us on stuff.

Instead, support good fiction that maybe shows a bit of what we believe. Do that enough and they’ll start making more of it, especially when so much of their other stuff isn’t getting that support. You win the culture war surrounding the Second Amendment by making sure to support stuff that might not be intended to be pro-2A but actually is.

Author’s Bio:

Tom Knighton


Tom Knighton is a Navy veteran, a former newspaperman, a novelist, and a blogger and lifetime shooter. He lives with his family in Southwest Georgia. He’s also the host of Unloaded TV on YouTube.

More posts from Tom Knighton

guns

via Bearing Arms https://ift.tt/2WiVJN5

November 2, 2020 at 06:05PM

PyCharm – A Simple Illustrated Guide

PyCharm – A Simple Illustrated Guide

https://ift.tt/3oLo6PC

PyCharm is one of the most popular and widely used IDEs for Python. This tutorial is a complete walkthrough of the PyCharm Integrated Development Environment to help Python programmers use PyCharm and its features.

I have researched the topic extensively and compiled this PyCharm walkthrough for you so that you get a firm grip on the most popular IDE for programming in Python. Not only have I added screenshots and images for the numerous topics discussed in this tutorial, but I have also added numerous videos for your convenience and better understanding. So, are you ready to learn the ins and outs of PyCharm?

❖ Introduction to Integrated Development Environments (IDE)

A common question asked by most Python beginners is –

What environment should I prefer while programming in Python?

Answer: You can either use an IDE or a text editor for coding. You need an IDE or a text editor for writing/modifying code.

We have a plethora of choices when it comes to text editors; however, some are more popular than others, largely because of their ease of use and the features they provide. Let us have a look at some of them.

➠ Some commonly used text editors for programming are:

  1. Sublime Text
  2. Atom
  3. Vim
  4. Visual Studio Code
  5. Notepad++

➠ Now, here is a list of some of the most commonly used IDEs for coding in Python:

  1. PyCharm
  2. IDLE
  3. Spyder
  4. PyDev
  5. Wing

Now that brings us to the next question –

Should we use an IDE or a Text editor?

Answer: This is one of the most debated questions among programmers. I prefer using an IDE over a text editor. The reason: IDEs provide numerous advantages over a simple text editor, though one might argue that IDEs can be used as text editors and text editors can be used as IDEs. Strictly speaking, though, a text editor is used for writing/modifying text/code, whereas an IDE enables us to do a lot more within a single program: running, debugging, version control, etc.

An IDE or Integrated Development Environment can be considered as a programming tool that integrates several specialized tools into a cohesive environment. These specialized tools may include:

  • A text editor
  • A code autocomplete function
  • A build procedure that includes a compiler, linker, etc.
  • A debugger
  • A file or project manager
  • A performance profiler
  • A deployment tool
  • and so on.

Advantages of using an IDE

  • Provides an interactive interface that makes life easier for programmers by ensuring that syntactic or semantic errors are detected during development without any hassle.
  • Reduces debugging time.
  • Provides an inbuilt version control mechanism.
  • Facilitates visual programming through flow-charts, block diagrams, etc.

Therefore, it makes more sense to use an IDE instead of a text editor. To use a text editor like an IDE, you must install numerous plugins so that it behaves the way an IDE does, but all of that is already taken care of by an IDE without the need for extra plugins.

IDE Selection

Selecting an IDE is purely based on the developer's requirements. Some of the factors governing the selection of an IDE are:

  • Does the developer have to code in multiple languages?
  • Is an integrated debugger required?
  • Is a drag-and-drop GUI layout builder required?
  • Are features like autocomplete and class browsers required? And so on.

Having said that, the most commonly used and preferred IDE by Python programmers is PyCharm.

❖ Introduction To PyCharm

As mentioned earlier PyCharm is the most popular IDE used by Python programmers. It is a cross-platform IDE developed by the Czech company JetBrains.

PyCharm Features

PyCharm offers the following features:

  • Syntax highlighting
  • Auto-Indentation and code formatting
  • Code completion
  • Line and block commenting
  • On-the-fly error highlighting
  • Code snippets
  • Code folding
  • Easy code navigation and search
  • Code analysis
  • Configurable language injections
  • Python refactoring
  • Documentation

What makes PyCharm special and more efficient than most other IDEs?

🧠 Intelligent Python Assistance

PyCharm provides:

  • smart code completion,
  • code inspections,
  • on-the-fly error highlighting and quick-fixes,
  • automated code refactoring and rich navigation capabilities.

🌐 Web Development Frameworks

PyCharm offers framework-specific support for modern web development frameworks such as Django, Flask, Google App Engine, Pyramid, and web2py.

🔬 Scientific Tools

PyCharm integrates with IPython Notebook, has an interactive Python console, and supports Anaconda as well as multiple scientific packages including matplotlib and NumPy.

🔀 Cross-technology Development

In addition to Python, PyCharm supports JavaScript, CoffeeScript, TypeScript, Cython, SQL, HTML/CSS, template languages, AngularJS, Node.js, and more.

💻 Remote Development Capabilities

With PyCharm you can run, debug, test, and deploy applications on remote hosts or virtual machines, with remote interpreters, an integrated ssh terminal, and Docker and Vagrant integration.

🛠 Built-in Developer Tools

PyCharm contains a huge collection of out-of-the-box tools:

  • An integrated debugger and test runner;
  • Python profiler;
  • A built-in terminal;
  • Integration with major VCSs;
  • Built-in Database Tools.

PyCharm Editions

PyCharm is available in three editions:

  1. Community (open-source)
  2. Professional (paid)
  3. Educational (open-source)

Let’s compare the Community and Professional editions in the table given below:

  Feature  PyCharm Professional Edition  PyCharm Community Edition
Intelligent Python editor  ✔  ✔
Graphical debugger and test runner  ✔  ✔
Navigation and refactorings  ✔  ✔
Code inspections  ✔  ✔
VCS support  ✔  ✔
Scientific tools  ✔  ❌
Web development  ✔  ❌
Python web frameworks  ✔  ❌
Python profiler  ✔  ❌
Remote development capabilities  ✔  ❌
Database & SQL support  ✔  ❌

Now that we have gone through the basics of PyCharm, let us have a look at how we can install PyCharm.

❖ Installing PyCharm

✨ Installing PyCharm on Windows

1. The first step is to download the latest version of PyCharm, either the Professional or the Community edition, from the official website:

2. After the download is complete, run the executable installer file and follow the wizard steps that follow.

✨ Installing PyCharm on Mac

Step 1: Go to the PyCharm download page and download PyCharm for Mac, either the Community or the Professional edition.

Step 2: Once the .dmg file has been downloaded, double click on the file to begin your installation.

Step 3: After the .dmg file is launched, drag PyCharm into your Applications folder.

Step 4: In the Applications Folder, double click on PyCharm to open the Application.

Step 5: On the first launch you will be asked to import settings. Tick the box: ☑ I do not have a previous version of PyCharm or I do not want to import my settings. Click on OK and Accept the Privacy Policy. Keep the Install Config as it is set by default. Click OK.

💡 On the Welcome screen, you can do the following:

  • Create a New Project.
  • Open an existing project or file.
  • Check out an existing project from a version control system.

✨ Installing PyCharm on Linux

Method 1: Using Snap Package 

PyCharm is available as a Snap package. If you’re on Ubuntu 16.04 or later, you can install PyCharm from the command line.

sudo snap install [pycharm-professional|pycharm-community] --classic

Note: If you are on some other Linux distribution, you can enable snap support first and then use the snap command to install the PyCharm Community Edition.

Method 2: Using official Linux installer from JetBrains 

1. Download the latest version of PyCharm (tar.gz file), either the Professional or the Community edition.

2. Go to the folder where you have downloaded your file.

cd ~/Downloads

3. Extract the tar.gz file.

tar -xzf pycharm-community-2020.1.1.tar.gz

4. Move into the extracted PyCharm folder and then into the bin folder.

cd pycharm-community-2020.1.1/bin

5. Add executable permissions to the script file inside the bin folder.

chmod u+x pycharm.sh

6. Then run the script file.

sh pycharm.sh

7. PyCharm starts running, and on the first run you will be asked to accept the privacy policy. Then you will be asked whether you would like to send data about features, plugins, and other usage. If you wish to send the data, you can hit the "Send Anonymous Statistics" button, or you can click the "Don't Send" button. Finally, PyCharm will ask you to set up the IDE. Start by choosing the UI theme, creating a launcher script, and adding plugins.

Now that brings us to the end of the first section of this comprehensive guide on PyCharm. In the next section, we will learn how to write our first code in Python using PyCharm. We will also discuss how to run, debug, and test your code. Let’s begin the next phase of our PyCharm journey!

Please click on the Next button/link given below to move onto the next section of this tutorial!

The post PyCharm – A Simple Illustrated Guide first appeared on Finxter.

Python

via Finxter https://ift.tt/2HRc2LV

November 1, 2020 at 03:56PM

Check Out These Extensive Breakdowns of Alita: Battle Angel’s Visual Effects

Check Out These Extensive Breakdowns of Alita: Battle Angel’s Visual Effects

https://ift.tt/2TFQUuA


Alita wrecking.
Image: 20th Century Fox

As a woman who is also at least part robot, I have a deep-held affinity for Alita. Her first film outing (hopefully not her last) didn’t check every box, but it was a lot of fun. And it sure was visually striking, with some intense and innovative visual effects work.

If you were also transfixed by the look of Alita: Battle Angel, you're in luck, because Weta Digital recently posted a bunch of glimpses at its visual effects work. Ranging from its compositing to the lush dreg heap of Iron City to Alita herself, Weta is eager to show off how Alita: Battle Angel made a story of cyborg self-actualization so visually unique.

What an absolutely striking film. It had foul luck, going up against Captain Marvel and also becoming the site of a weird reactionary crusade to keep people from seeing Captain Marvel, but it remains a unique, absolutely all-in big-budget experience. I am among those hoping for a sequel, but in the meantime, these breakdowns are a mighty satisfying watch.



geeky,Tech

via Gizmodo https://gizmodo.com

November 1, 2020 at 04:57PM

Codementor: How I learned Python

Codementor: How I learned Python

https://ift.tt/2JqdYvz

This is the story of how I started off with Python.

Python

via Planet Python https://ift.tt/1dar6IN

November 1, 2020 at 01:52AM