Undeniable Mathematical Evidence the 2020 Election is Being Stolen

Undeniable Mathematical Evidence the 2020 Election is Being Stolen

https://ift.tt/2UebpyA

Reprinted with permission from The Red Elephants. www.theredelephants.com .
AmmoLand Editor's Note: This reprint is only part of the complete article documenting the theft of the 2020 Presidential Election. Check the master page for full and continuing updates. It is important that gun owners are made aware of this, as the Biden Administration will be an existential threat to your right to keep and bear arms. All images property of theredelephants.com.

Undeniable Mathematical Evidence the Election is Being Stolen, img The Red Elephants

USA – -(AmmoLand.com)- According to CBS News, President Trump does not plan to concede in the event that the media declares Joe Biden the winner of the election and the 46th president of the United States. The Trump campaign and its top advisers called for multiple lawsuits on the grounds that the ongoing vote count would result in tallying illegally cast ballots.

The lawsuits will amount to an aggressive effort to highlight anomalies, statistical impossibilities, or other perceived problems that could affect vote counts before a final presidential winner is declared.

Many reporters at press conferences that took place in Arizona, Pennsylvania, and Michigan on Thursday asked Trump's political appointees and supporters for evidence of the wide-scale problems they alleged had occurred.

If it is mathematical evidence Americans are looking for, there is plenty of it. Here are just the facts.

Statistical Impossibilities in Wisconsin and Michigan: 

In Wisconsin, voter turnout matched the record high of 2004. The Wisconsin Elections Commission uses the estimated voting-age population as the denominator when calculating statewide voter turnout numbers. According to the Elections Commission, there was a 73 percent turnout in this Wisconsin election.

Turnout was 67 percent in 2016; 70 percent in 2012; 69 percent in 2008; and 73 percent in 2004. Apparently Joe Biden smashed Barack Obama’s 2008 turnout in most places in the country.

In both Michigan and Wisconsin, several vote dumps occurred at approximately 4am on Wednesday morning, which showed that Joe Biden received almost 100 percent of the votes. President Trump was leading by hundreds of thousands of votes in both states as America went to sleep, and turnout in the state of Wisconsin seems to be particularly impossible.

With absentee ballots, former vice president Joe Biden was also up 60 points in Pennsylvania and almost 40 points in Michigan, according to the New York Times. By comparison, Biden was only up single digits in absentee voting in most other battleground states. Wisconsin has not yet been reported.

New York Times 2020 Absentee Ballot Counts

 

Elections officials in Michigan and Wisconsin could not explain Democratic presidential nominee Joe Biden’s sudden and dramatic vote tally increase that occurred in both states Wednesday morning.

When asked at a Wednesday press conference how this occurred, Michigan Department of State spokesperson Aneta Kiersnowski told reporters “We cannot speculate as to why the results lean one way or another.”

This is particularly concerning considering Republicans led in mail-in ballots requested, and in mail-in and in-person ballots returned, leading up to and at the start of election day.

According to NBC News, on election day before the polls opened, Republicans in Michigan led 41% to 39% in mail-in ballots requested. Republicans also led 42% to 39% in mail-in and in-person ballots returned.

In Wisconsin on election day before the polls opened, Republicans led in mail-in ballots requested 43% to 35%, and in mail-in and early in-person ballots returned 43% to 35%. Almost ALL of the ballots found overnight, after officials stated they would stop counting and while most of the country was sleeping, were for Joe Biden.

NBC News Michigan 2020 Election Results

 

NBC News Wisconsin 2020 Election Results

Some statistically savvy observers noticed other mathematical red flags. Naturally occurring vote tallies tend to follow predictable digit distributions, so falsified numbers can be easy to detect.

The increase in Democratic votes relative to Republican votes was significantly higher where the Democrat was doing worse overall in early counting. Within each ward, late votes broke heavily to the Democrat in exactly the races where they were likely to affect the result.

Biden’s Vote Tallies Violate Benford’s Law:

According to some analysts, Biden's vote tallies violate Benford's law. All of the other candidates' tallies follow Benford's law across the country, except for Biden's in tight races. Biden's totals pretty clearly fail an accepted test for catching election fraud, one used by the State Department and by forensic accountants.

Biden’s Vote Tallies Milwaukee, WI

 

Biden’s Vote Tallies Allegheny, PA

Biden’s Vote Tallies Chicago IL

Analysts ran the Allegheny data through the Mebane second-digit test for Trump vs. Biden, and the difference was significant. Biden's distribution is fishy, with many significant deviations; Trump's showed only two deviations, neither significant at the 5% level. In the charts below, the X-axis is the digit in question and the Y-axis is the percentage of observations with that digit.

Bidens Mebane 2nd digit test Allegheny PA

Trumps Mebane 2nd digit test Allegheny PA

So as an example, if the total votes for Biden is 100 in a precinct, “0” is the second digit. If the total votes were 110, “1” is the second digit, and so on.

For Biden in Allegheny absentee ballots, there are multiple significant deviations. For Trump, none of the deviations are significant at the 5% level.
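To make the mechanics concrete, here is a minimal sketch, in Python, of how a second-digit tabulation like the one described above can be computed. This is not the analysts' code, and it is not the full Mebane test (which adds a formal significance test on top); the precinct totals below are made up purely for illustration.

# A rough second-digit check: compare the observed second-digit frequencies in a set
# of vote totals against the frequencies expected under Benford's law.
import math
from collections import Counter

def second_digit_frequencies(totals):
    # Share of totals whose second digit is d, for d = 0..9 (totals under 10 are skipped).
    digits = [int(str(t)[1]) for t in totals if t >= 10]
    counts = Counter(digits)
    return {d: counts.get(d, 0) / len(digits) for d in range(10)}

def benford_second_digit():
    # Expected P(second digit = d) = sum over first digits d1 of log10(1 + 1/(10*d1 + d)).
    return {d: sum(math.log10(1 + 1 / (10 * d1 + d)) for d1 in range(1, 10))
            for d in range(10)}

# Hypothetical precinct totals, for illustration only:
totals = [110, 243, 387, 1045, 92, 518, 76, 1310, 605, 288]
observed = second_digit_frequencies(totals)
expected = benford_second_digit()
for d in range(10):
    print(d, round(observed[d], 3), round(expected[d], 3))

A real analysis would run this over every precinct or ward in a county and then ask whether the observed deviations from the expected curve are larger than chance alone would allow.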

Biden Absentee Ballots Allegheny PA 2020

Biden’s Vote Tallies Chicago IL

Senate and House Races Compared to Presidential Seem Curious

Others have taken a look at the number of ballots with no down-ballot votes in important swing states, compared with states that are not swing states, and noticed a disturbing trend.

In Michigan, Trump received 2,637,173 votes while the GOP Senate candidate received 2,630,042 votes. The difference here is only 7,131, which is not far off from what we see historically. In the same state, Joe Biden received 2,787,544 votes while the Democratic Senate candidate received 2,718,451. The difference is 69,093 votes, which is much higher than the historical norm.

In Barack Obama's 2008 victory, he received a total of 2,867,680 votes, while the Democratic Senate candidate received 3,033,000 votes. Somehow Joe Biden picked up over 60,000 ballots with no down-ballot vote.

In Georgia, it’s even worse. President Trump gained 2,432,799 votes, while the GOP Senate candidate tallied 2,433,617 votes. This is a difference of only 818 votes. Joe Biden in contrast gained 2,414,651 votes, while his Democratic Senate candidate tallied 2,318,850 votes.  This is a difference of 95,801 votes.

In many counties and states across the country, including in those called for Biden or where Trump is now trailing, Republicans substantially overperformed what was projected. NBC estimates Republicans may ultimately end up gaining as many as eight seats.

In Cuyahoga County, Ohio, Joe Biden had a net gain of only 4,000 votes compared to Hillary Clinton's 2016 performance, yet at the same time had a net gain of almost 70,000 in Wayne County, Michigan. Numbers like this are unprecedented and highly questionable. Some economists have even expressed their confusion regarding what happened in Wayne County and Milwaukee in the dead of night.

Joe Biden apparently ended up with millions more votes than Barack Obama received in his historic 2008 election where he ended up with 365 electoral votes, winning Florida, North Carolina, and even Indiana.  This is even with millions more votes to be counted yet in 2020.

Massive Enthusiasm Gap:

Joe Biden, with almost record-low enthusiasm, underperformed Hillary Clinton's 2016 numbers across many major cities.

In New York City, Chicago, and Miami, he was down 201,408, 260,835, and 6,945 respectively.

However, in the states Biden needed to overtake Trump in 2020, he gained massively. According to the Associated Press vote total data, in Atlanta, Milwaukee, and Pittsburgh, he was up 76,518, 67,630, 28,429, and 29,150 respectively.
According to polling, the difference in enthusiasm for the candidates is significant. Trump leads 52.9 percent to 45 percent among the 51.2 percent of registered Rust Belt voters who say they are “Extremely Enthusiastic” about voting for their preferred candidate. Among likely voters who are extremely enthusiastic, the president enjoys a double-digit advantage: 60.5 percent to 44.9 percent.

Massive Enthusiasm Gap Chart

Less than half of former vice president Joe Biden's supporters (46.9 percent) said they were voting for Joe Biden because they like the candidate. Approximately 8 in 10 Trump voters said they voted for President Trump because they wanted him as President.
Preferred Candidate For or Against Chart

 

Pennsylvania Chaos:

In Pennsylvania, Trump led by almost 800,000 votes on election night after most Americans headed to bed. Over the following 72 hours, President Trump's lead shrank to fewer than 95,000 votes in the Keystone State, and then Joe Biden took the lead.

Over the past couple of days, batches of votes started flowing into the final tally, mostly in favor of Joe Biden. FiveThirtyEight recently reported that “Two more batches of Pennsylvania vote were reported: 23,277 votes in Philadelphia, all for Biden.”

The Pennsylvania Democratic Party predicted the remaining 580,000 uncounted mail-in ballots will go resoundingly for Joe Biden, projecting that the former vice president will carry the state by about 175,000 votes.

“Based on the Party distribution of the ballots cast in each county, we believe that 75% of the remaining ballots will go to Joe Biden,” state Sen. Sharif Street wrote in a statement. “We project Biden will win by about 175,000 votes.”

Biden is leading in Pennsylvania by just a few thousand votes.

According to Politico, it was the ballots found in postal facilities that put Biden over the top in Pennsylvania. Postal workers found more than 1,000 ballots in Philadelphia facilities Thursday and 300 in Pittsburgh. The Philadelphia and Pittsburgh ballots were part of more than 2,000 ballots discovered in dozens of postal facilities across the two states and expedited to election officials, pursuant to a judge’s court order.

For what it's worth, more 90-year-olds registered to vote in a single year, during a pandemic, than at any point in Pennsylvania history.

New 90 Year Old Voter Registration Pennsylvania 2020

Confirmed Errors: 

On election day, there were several confirmed reporting errors that were fixed upon being revealed by journalists as they watched the numbers roll in.

Arizona

In Arizona, according to Politico, an error in Edison Research data, identified by a journalist, showed that 98% of the vote had been counted when in fact only 84% had been. Officials corrected this mistake when it was pointed out.

Georgia

In Georgia, voters were unable to cast machine ballots for a couple of hours in Morgan and Spalding counties after the electronic devices crashed, state officials said.

The companies “uploaded something last night, which is not normal, and it caused a glitch,” said Marcia Ridley, elections supervisor at Spalding County Board of Election. That glitch prevented poll workers from using the pollbooks to program smart cards that the voters insert into the voting machines.

“That is something that they don’t ever do. I’ve never seen them update anything the day before the election,” Ridley said. Ridley said she did not know what the upload contained.

Michigan

There was also something suspicious about the vote reporting in Antrim County, Michigan, where Trump beat Hillary Clinton by 30 points in 2016. Initial vote totals there showed Biden ahead of Trump by 29 points, a result that can’t possibly be accurate, as plenty of journalists noted.

When a New York Times journalist pointed this out on Twitter, officials corrected it and called it an ‘error.’

According to the Detroit Free Press, a USA Today affiliate, officials investigated the wonky election results in Antrim County.

Antrim County Clerk Sheryl Guy said results on electronic tapes and a computer were somehow scrambled after the cards were transported in sealed bags from township precincts to county offices and uploaded onto a computer.

In 2016, Trump won Antrim County with about 62% of the vote, compared with about 33% for Democrat Hillary Clinton. Trump beat Clinton by about 4,000 votes.

Wednesday morning, Antrim results showed Democrat Joe Biden leading Trump by slightly more than 3,000 votes, with 98% of precincts reporting.

More in Michigan

In Oakland County's 15th county commission district, fixing a computer glitch turned a losing Republican into a winner. A computer error led election officials in Oakland County to hand an upset victory Wednesday to a Democrat, only to switch the win back to an incumbent Republican a day later. The incumbent, Adam Kochenderfer, appeared to lose by a few hundred votes, an outcome that seemed odd to many in his campaign. After the apparent computer error was found and fixed, Kochenderfer ended up winning by over 1,000 votes.

There were many other confirmed errors, including in Virginia where 100,000 extra votes were tallied for Joe Biden, and more may be revealed as the weeks go on.

There were also many processing delays, specifically in Fulton County, Georgia, where a pipe suddenly burst in the processing center.

On November 4th, at approximately 6:07 a.m., the staff at State Farm Arena notified Fulton County Registration and Elections of a burst pipe affecting the room where absentee ballots were being tabulated.

As of 7 p.m. on Wednesday Fulton County Elections officials said 30,000 absentee ballots were not processed due to a pipe burst. Officials reassured voters that none of the ballots were damaged and the water was quickly cleaned up.

But the emergency delayed officials from processing ballots between 5:30 a.m. and 9:30 a.m.

Former Politicians of Blue Cities Chime In:

President Trump was leading big until certain Rust Belt states froze their ballot return reports. When reporting resumed, Trump began to lose steadily.
In Mexico in 1988, the PRI was losing handily until ballot returns froze, only to resume in a massive pro-PRI turnaround.
“In an autobiography that began circulating in Mexico this week, de la Madrid sheds more light on that dark night in Mexico’s history. What he reveals is not new, political analysts said. But in 850 pages, de la Madrid’s memoirs give the firmest confirmation to date of one of this country’s biggest open secrets: the presidential elections of 1988 were rigged.

Political analysts and historians have described that election as one of the most egregious examples of the fraud that allowed the Institutional Revolutionary Party to control this country for more than seven decades, and the beginning of the end of its authoritarian rule.”

Rod Blagojevich explained that there is no question that this is what is happening in Philadelphia, Milwaukee, Detroit, and other cities. “In big cities where they control the political apparatus and they control the apparatus that counts the votes, and they control the polling places and the ones who count the votes, it’s widespread and it’s deep,” Blagojevich said on Friday.

I guess even corrupt politicians who went to jail understand the impossibility that Biden outperformed Hillary Clinton only in cities located in Michigan, Pennsylvania, Georgia, and Wisconsin, all while the GOP lost zero House races and won 8 of 11 governors' races.

As Nicaraguan dictator Anastasio Somoza once said, “Indeed, you won the elections, but I won the count.”

Ballots Received After Election Day

A federal judge on Wednesday said he may call Postmaster General Louis DeJoy to testify about why the U.S. Postal Service missed an Election Day deadline to sweep locations in several states for left-behind mail-in ballots.

The order, issued by U.S. District Judge Emmet G. Sullivan, was meant to trigger sweeps of facilities in six key battleground states: Pennsylvania, Michigan, Georgia, Texas, Arizona, and Florida. Some of the 12 districts included in the order have legislation against accepting ballots after midnight on election night.

“Now you can tell your clients this in no uncertain terms,” Sullivan said in a Wednesday hearing. “I am not pleased about this 11th-hour development last night. You can tell your clients that someone may have a price to pay for that.”

According to the Washington Post, more than 150,000 ballots were caught in U.S. Postal Service processing facilities and not delivered by Election Day.

In a pair of decisions, the Supreme Court on Wednesday let election officials in two key battleground states, Pennsylvania and North Carolina, accept absentee ballots for several days after Election Day.

In the Pennsylvania case, the court refused a plea from Republicans in the state that it decide before Election Day whether election officials can continue receiving absentee ballots until November 12th.

In the North Carolina case, the court let stand lower court rulings that allowed the state’s board of elections to extend the deadline to nine days after Election Day, up from the three days called for by the state legislature.

On October 26th, the Supreme Court declined to extend the deadline for counting of mail-in votes in Wisconsin, a victory for Republicans who brought the legal challenge. This particular extension order originally came from a federal judge in September, a crucial point that the conservatives on the court all agreed on: federal courts shouldn’t micromanage state-run elections.

The Supreme Court ordered Pennsylvania Democrats to respond by Thursday evening in a case challenging the state’s three-day extension for counting mail-in ballots.

President Trump has moved to intervene in a lawsuit brought by Pennsylvania Republicans, arguing the state’s Democratic Party and Secretary of State violated the law by extending the time for counting mail-in ballots to Nov. 6 at 5 p.m., despite the state legislature setting the deadline as Election Day.

The lawsuit takes issue with a state Supreme Court ruling that postmarked ballots be presumed to have been mailed before Nov. 3, even if not clearly postmarked to that effect.

North Carolina will not finish counting votes in the presidential and state elections until local elections boards process outstanding mail-in and provisional ballots next week, according to state elections officials.

The process, spelled out in state law, means the winner of North Carolina’s 15 electoral votes for president likely won’t be known until next Friday, Nov. 13.

The court, at Trump’s request, recently issued an interim order to election boards to set aside certain mail ballots that lack identifying info for the voter, and to not count those votes until after the court rules further.

Thousands of Deceased Confirmed to Be Registered and Some Even Voting:

An observer noticed something curious about some of the names on the ballots recorded in the state of Michigan. Upon further review, one particular name on the list, confirmed to have cast a ballot, belonged to someone born in 1902 who passed away in 1984.

A pollster noticed the list and video, and used Social Security Death Index data to confirm the deceased registered voter.

Here is the website where you can verify for yourself. Note that you have to guess-check the month of birth.

Another poll watcher, who was later kicked out for taking photographs, noticed a decent-sized list of Michigan residents who have also been confirmed to have cast their ballots. All of the names on the list he reviewed show their birthdates in chronological order. Apparently, there are many voters born in the early 1900s in the Great Lakes State.

Michigan voters born in the early 1900

A lawsuit filed by the Public Interest Legal Foundation (PILF) alleges that there are at least 21,000 dead people on Pennsylvania’s voter rolls. The lawsuit claims that Pennsylvania failed to “reasonably maintain” their voter registration records under federal and state law in time for the 2020 presidential election.

“As of October 7, 2020, at least 9,212 registrants have been dead for at least five years, at least 1,990 registrants have been dead for at least ten years, and at least 197 registrants have been dead for at least twenty years,” the lawsuit states.

“Pennsylvania still left the names of more than 21,000 dead individuals on the voter rolls less than a month before one of the most consequential general elections for federal officeholders in many years,” the lawsuit continues.

According to the lawsuit, about 92 percent of the 21,000 dead people on Pennsylvania’s voter rolls died sometime before October 2019. About 216 dead people show voting credits after federally listed dates of death in 2016 and 2018, the lawsuit alleges.

Studies Find Hundreds of Thousands of Illegal Ballots, and More Votes than Existing Registered Voters:

According to CBS LA, Ellen Swensen with the Election Integrity Project California says they found more than 277,000 questionable ballots were mailed this election year in L.A. County before election day.

That’s 63% of all the questionable ballots mailed statewide.

It includes more than 4,800 duplicate ballots mailed to the same person, and 728 ballots mailed to people who likely have died. In 2016 their investigation found many dead voters still registered. Now four years later, there are more.

If this is happening in LA, it's certainly happening in major cities nationwide. This is just the tip of the iceberg; click the link below to read the complete article.

Editor's Note: As we stated at the top of this page, the proof of a stolen election continues to pile up. Please follow the link to see the original page for “There is Undeniable Mathematical Evidence the Election is Being Stolen,” where the Red Elephants team continues to add more and more glaring examples. This fight is just beginning.


About The Red Elephants

The Red Elephants is an organization of like-minded conservatives that have come together to spread awareness and truth. Each member of The Red Elephants organization represents the liberties, freedoms, and constitutional rights of the American people.

Our goal is to spread the truth to the citizens of this great nation by reporting news and promoting free-thinking. We will present a new brand of reporting that will be used to give conservatives a voice in the media that’s dominated by the left.

”We Are Spreading Conservative Truth For The Greater Good.” – The Red Elephants (www.theredelephants.com)

The Red Elephants

The post Undeniable Mathematical Evidence the 2020 Election is Being Stolen appeared first on AmmoLand.com.

guns

via AmmoLand.com https://ift.tt/2okaFKE

November 9, 2020 at 03:27PM

Laravel Jetstream: How it Works and Example How to Customize [VIDEO]

Laravel Jetstream: How it Works and Example How to Customize [VIDEO]

https://www.youtube.com/watch?v=d8YgWApHMfA

Laravel Jetstream arrived as a new Auth solution with Laravel 8, bringing a tech stack that may be new to many Laravel users.

programming

via Laravel News Links https://ift.tt/2dvygAJ

November 9, 2020 at 07:48PM

The Python RegEx Cheat Sheet for Budding Programmers | MakeUseOf

The Python RegEx Cheat Sheet for Budding Programmers | MakeUseOf

https://ift.tt/3laVPzF

The use of Python to solve various tech problems and its easy learning curve have made it one of the most popular modern programming languages. Despite being quick to learn, its regular expressions can be tricky, especially for newcomers.

Although Python has a lot of libraries, it's wise to know your way around its regular syntax. Even if you're an expert, there's a chance you still need to occasionally look up some Python commands to refresh your memory.

For that reason, we’ve prepared this Python regular expressions cheat sheet to help you get a better hold of your syntaxes.

FREE DOWNLOAD: This cheat sheet is available as a downloadable PDF from our distribution partner, TradePub. You will have to complete a short form to access it for the first time only. Download the Python RegEx Cheat Sheet for Budding Programmers.

The Python RegEx Cheat Sheet for Budding Programmers

Get Creative When Using Python

Learning Python's regular expressions is a big step toward becoming a better Python programmer, but it's just one of many things you need to do.

However, playing around with the syntax and getting creative with it polishes your coding skills. So beyond learning the syntax, use it in real-life projects and you will become a better Python programmer.
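Before the full table below, here is a quick warm-up of our own (not part of the original MakeUseOf sheet) that exercises a few of the regex entries you will meet later, so you can paste it into a shell and watch the patterns behave:

# Quick warm-up using patterns that appear in the cheat sheet below.
import re

text = "Hello regular expression"

print(re.findall("regular|Hello", text))   # either word -> ['Hello', 'regular']
print(re.findall("^Hello", text))          # anchored to the start -> ['Hello']
print(re.findall("expression$", text))     # anchored to the end -> ['expression']
print(re.findall(r"\w+", text))            # every word -> ['Hello', 'regular', 'expression']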

Expression Action Examples
print() Display the result of a command x="Hello world"
print(x)

output: Hello world

input() Collect inputs from users print(input("what is your name?"))

output: what is your name?

type() Find the type of a variable x="Regular expressions"
type(x)

output: <class 'str'>

len() Find the number of items in a variable len([1, 2, 3])

output: 3

\ Escape a character that changes the intent of a line of code print("I want you to add\"\"")

output: I want you to add""

\n Break a string character to start on the next line print("This is a line \n This is a second line")

output:
This is a line
This is a second line

def function_name(parameter):
commands
Initiate a function with an optional parameter def yourName(x):
print(x+1)
lambda Call an anonymous function add_3_to = lambda y: y+3
print(add_3_to(4))

output: 7

return Return a result from a function def yourName(x):
return x+1
class Create a Python object class myClass:
def myFunc(x):
def __init__ Initialize the attributes of a class class myClass:
def __init__(self, attributes…)
__init__.py Save a file containing a module so that it’s read successfully in another Python file Rename a file containing a module as:

__init__.py

int() Convert a variable to integer int(1.234)

output: 1

str() Convert a variable to string str(1.234)

output: '1.234'

float() Convert a variable to float float(23)

output: 23.0

dict(Counter()) Convert a list or a tuple into a dictionary of counts using the Python built-in Counter from collections import Counter
dict(Counter([1,1,2,1,2,3,3,4]))

output: {1: 3, 2: 2, 3: 2, 4: 1}

round() Round up the output of an operation to the nearest whole number round(23.445)

output: 23

round(operation or number, decimal places) Round up the output of an operation to a specific number of decimal places round(23.4568, 2)

output: 23.46

if: Initiate a conditional statement if 2<3:
print("Two is smaller")
elif: Make a counterstatement when the if statement is False if 2<3:
print("Two is smaller")
elif 2==3:
print("Go on")
else: Make a final counterstatement if other conditions are False if 2<3:
print("Two is smaller")
elif 2==3:
print("Go on")
else:
print("Three is greater")
continue Ignore a condition and execute the rest of the loop a=[1, 4, -10, 6, 8]
for b in a:
if b<=0:
continue
print(b)

output:
1
4
6
8

break Terminate the flow of a loop with a given condition a=[1, 4, -10, 6, 8]
for b in a:
if b>=6:
break
print(b)

output:
1
4
-10

pass Ignore a set of prior instructions for b in a:
pass
try, except Try a block of code, else, raise a defined exception try:
print(a)

except:
print("An error occurred!")

output: An error occurred!

finally Execute a final code when the try and the except blocks fail try:
print(a)

except:
print(d)
finally:
print("You can't print an undefined variable")

output: You can't print an undefined variable

raise Exception() Raise an exception that stops the command when execution isn’t possible a=7+2
if a<10:
raise Exception("Oh! You didn't get a score of 10")
import x Import a whole module or library import math
from x import y Import a library x from a file, or a class y from scipy.stats import mode
as Customize an expression to your preferred name import pandas as pd
in Check if a value is present in a variable x=[1, 4, 6, 7]
if 5 in x:
print("There is a five")
else:
print("There is no five")

output: There is no five

is Check if two variables refer to the same object x=[1, 4, 6, 7]
b=x
print(x is b)

output: True
None Declare a null value x=None
< Check if one value is lesser than another 5<10

output: True

> Check if one value is more than another 5>10

output: False

<= Check if a value is lesser or equal to another 2*2<=3

output: False

>= Check if a value is greater or equal to another 2*2>=3

output: True

== Check if a value is exactly equal to the other 3==4

output: False

!= Ascertain that a value is not equal to the other 3!=4

output: True

import re Import Python’s built-in regular expressions import re
re.findall("strings", variable)
a|b Check if either of two elements is present in a string import re
someText="Hello regular expression"
a=re.findall("regular|Hello", someText)
print(a)

output: ['Hello', 'regular']

string$ Check if a variable ends with a set of strings import re
someText="Hello regular expression"
a=re.findall("expression$", someText)
print(a)

output: ['expression']

^string Check if a variable starts with a set of strings import re
someText="Hello regular expression"
a=re.findall("^Hello", someText)
print(a)

output: ['Hello']

string.index() Check the index position of a string character a= "Hello World"
a.index('H')

output: 0

string.capitalize() Capitalize the first character of a string and lowercase the rest a= "Hello World"
a.capitalize()

output: 'Hello world'

string.swapcase() Swap the case of every character in a string a= "Hello World"
a.swapcase()

output:
'hELLO wORLD'

string.lower() Convert all the characters in a string to lowercase a= "Hello World"
a.lower()

output: 'hello world'

string.upper() Convert all the characters in a string to uppercase a= "Hello World"
a.upper()

output: 'HELLO WORLD'

string.startswith() Check if a string starts with a particular character a= "Hello World"
a.startswith('a')

output: False

string.endswith() Check if a string ends with a particular character a= "Hello World"
a.endswith('d')

output: True

string.split() Separate each word into a list a= "Hello World"
a.split()

output: ['Hello', 'World']

'{}'.format() Display an output as a formatted string a=3+4
print("The answer is {}".format(a))

output: The answer is 7

is not None Check if the value of a variable is not empty def checknull(a):
if a is not None:
return "its full!"
else:
return "its empty!"
x%y Find the remainder (modulus) of a division 9%4

output: 1

x//y Find the quotient of a division 9//4

output: 2

= Assign a value to a variable a={1:5, 3:4}
+ Add elements together ["a two"] + ["a one"]

output: ['a two', 'a one']

1+3

output=4

- Find the difference between a set of numbers 3-4

output=-1

* Find the product of a set of numbers 3*4

output:12

a+=x Add x to variable a without assigning its value to a new variable a=2
a+=3

output: 5

a-=x Subtract x from variable a without assigning it to a new variable a=3
a-=2

output: 1

a*=x Find the product of variable a and x without assigning the result to a new variable a=[1, 3, 4]
a*=2

output: [1, 3, 4, 1, 3, 4]

x**y Raise base x to power y 2**3

output: 8

pow(x, y) Raise x to the power of y pow(2, 3)

output: 8

abs(x) Convert a negative integer to its absolute value abs(-5)

output: 5

x**(1/nth) Find the nth root of a number 8**(1/3)

output: 2

a=b=c=d=x Assign the same value to multiple variables a=b=c=d="Hello world"
x, y = y, x Swap variables x = [1, 2]
y = 3
x, y = y, x
print(x, y)

output:
3 [1, 2]

for Loop through the elements in a variable a=[1, 3, 5]
for b in a:
print(b, "x", "2", "=", b*2)

output:
1 x 2 = 2
3 x 2 = 6
5 x 2 = 10

while Keep looping through a variable, as far as a particular condition remains True a=4
b=2
while b<=a:
print(b, "is lesser than", a)
b+=1

output:
2 is lesser than 4
3 is lesser than 4
4 is lesser than 4

range() Create a range of positive integers between x and y x=range(4)
print(x)
range(0, 4)
for b in x:
print(b)

output:
0
1
2
3

sum() Add up the elements in a list print(sum([1, 2, 3]))

output: 6

sum(list, start) Return the sum of a list plus a starting value print(sum([1, 2, 3], 3))

output: 9

[] Make a list of elements x=['a', 3, 5, 'h', [1, 3, 3], {'d':3}]
() Create a tuple (tuples are immutable) x=(1, 2, 'g', 5)
{} Create a dictionary a={'x':6, 'y':8}
x[a:b] Slice through a list x=[1, 3, 5, 6]
x[0:2]

output: [1, 3]

x[key] Get the value of a key in dictionary x a={'x':6, 'y':8}
print(a['x'])

output: 6

x.append() Append an element (here, a list) to the end of a list x=[1]
x.append([1,2,3])
print(x)

output: [1, [1,2,3]]

x.extend() Add a list of values to continue an existing list without necessarily creating a nested list x=[1,2]
x.extend([3,4,6,2])
print(x)

output:
[1, 2, 3, 4, 6, 2]

del(x[a:b]) Delete a slice of items from a list x=[1,2,3,5]
del(x[0:2])
print(x)

output: [3, 5]

del(x[key]) Delete a key and a value completely from a dictionary at a specific index y={1:3, 2:5, 4:6, 8:2}
del(y[1], y[8])
print(y)

output= {2:5, 4:6}

dict.pop() Pop out the value of a key and remove it from a dictionary at a specific index a={1:3, 2:4, 5:6}
a.pop(1)

output: 3

dict.popitem() Pop out the last item from a dictionary and delete it a={1:2, 4:8, 3:5}
a.popitem()

output: (3, 5)
print(a)
output: {1:2, 4:8}

list.pop() Pop out a given index from a list and remove it from a list a=[1, 3, 2, 4, 1, 6, 6, 4]
a.pop(-2)

output: 6
print(a)
output: [1, 3, 2, 4, 1, 6, 4]

clear() Empty the elements of a list or a dictionary x=[1, 3, 5]
x.clear()
print(x)

output: []

remove() Remove an item from a list x=[1, 5, 6, 7]
x.remove(1)

output: [5, 6, 7]

insert() Insert an element into a list at a specific index x=[3, 5, 6]
x.insert(1, 4)
print(x)

output: [3, 4, 5, 6]

sort(reverse=condition) Sort the elements of a list in reverse (descending) order x=[1, 3, 5, 6]
x.sort(reverse=True)
print(x)

output: [6, 5, 3, 1]

update() Update a dictionary, overwriting values for existing keys and adding any new key-value pairs x={1:3, 5:6}
x.update({1:4, 8:7, 4:4})
print(x)

output: {1: 4, 5: 6, 8: 7, 4: 4}

keys() Show all the keys in a dictionary a={1:2, 4:8}
a.keys()

output: dict_keys([1, 4])

values() Show all the values in a dictionary a={1:2, 4:8}
a.values()

output: dict_values([2, 8])

items() Display the keys and the values in a dictionary a={1:2, 4:8}
a.items()

output: dict_items([(1, 2), (4, 8)])

get(key) Get the value of an item in a dictionary by its key a={1:2, 4:8, 3:5}
a.get(1)

output: 2

setdefault(key) Return the value of a key if it exists, otherwise insert the key with a default value a.setdefault(2)
f={**a, **b} Merge two dictionaries a={'x':6, 'y':8}
b={'c':5, 'd':3}
f={**a, **b}
print(f)

output: {'x': 6, 'y': 8, 'c': 5, 'd': 3}

remove() Remove the first matching value of an element from a list without minding its index a=[1, 3, 2, 4, 4, 1, 6, 6, 4]
a.remove(4)
print(a)

output: [1, 3, 2, 4, 1, 6, 6, 4]

memoryview(x) Access the internal buffers of an object a=memoryview(object)
bytes() Convert a memory buffer protocol into bytes bytes(a[0:2])
bytearray() Return an array of bytes bytearray(object)
# Write a single line of comment or prevent a line of code from being executed # Python regex cheat sheet
""" """ Write a multi-line comment """The Python regex cheat sheet is good for beginners
It’s equally a great refresher for experts"""
Command Line
pip install package Install an online library pip install pandas
virtualenv name Use virtualenv to create a virtual environment virtualenv myproject
mkvirtualenv name Use virtualenvwrapper to create a virtual environment mkvirtualenv myproject
python file.py Run the commands in a Python file python my_file.py
pip freeze List out all the installed packages in a virtual environment pip freeze
pip freeze > somefile Copy all installed libraries into a single file pip freeze > requirements.txt
where Find the installation path of Python where python
--version Check the version of Python python --version
.exe Run a Python shell python.exe
with open(file, 'w') Write to an existing file and overwrite its existing content with open('regex.txt', 'w') as wf:
wf.write("Hello World!")
with open(file, 'r') Open a file as read-only with open('regex.txt', 'r') as rf:
print(rf.read())
with open(file, 'a') Write to a file without overwriting its existing content with open('regex.txt', 'a') as af:
af.write("\nHello Yes!")
file.close() Close a file if it's not in use af=open('regex.txt')
af.close()
exit Exit the Python shell exit()
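To round things off, here is a short example of our own (not from the MakeUseOf sheet) that strings a few of the entries above together: writing and reading a file with open(), matching with re.findall, and counting with Counter. The file name is made up for the demo.

# Combine file handling, re.findall, and Counter from the sheet above.
import re
from collections import Counter

with open('regex_demo.txt', 'w') as wf:
    wf.write("Hello regular expression\nHello again, regular reader")

with open('regex_demo.txt', 'r') as rf:
    text = rf.read()

print(re.findall("^Hello", text, re.MULTILINE))  # start-of-line anchor -> ['Hello', 'Hello']
print(dict(Counter(re.findall(r"\w+", text))))   # word counts, e.g. {'Hello': 2, 'regular': 2, ...}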

non critical

via MakeUseOf.com https://ift.tt/1AUAxdL

November 7, 2020 at 03:10PM

Your Obligatory Friday Read: THE 2020 ELECTION: FUCKERY IS AFOOT – Larry Correia

Your Obligatory Friday Read: THE 2020 ELECTION: FUCKERY IS AFOOT – Larry Correia

https://ift.tt/2GymI1u

When you are auditing you see mistakes happen all the time. Humans make errors. Except in real life, mistakes usually go in different directions. When all the mistakes go in the same direction and benefit the same parties, they probably aren’t mistakes. They’re malfeasance.

THE 2020 ELECTION: FUCKERY IS AFOOT – Larry Correia. 

Go read.

When he says “what is potentially fatal for America is half the populace believing that their elections are hopelessly rigged and they're eternally fucked,” he is right on the money. I saw it happen, and it will perpetuate the Dems in power, because “Why should we bother to vote if they get to cheat so blatantly and get away with it?” By then they don't even have to bother to cheat; there won't be enough opposition votes to make it worthwhile.

 

guns

via https://gunfreezone.net

November 6, 2020 at 07:50AM

Dataquest: Beginner Python Tutorial: Analyze Your Personal Netflix Data

Dataquest: Beginner Python Tutorial: Analyze Your Personal Netflix Data

https://ift.tt/32eVZik

how much have i watched the office analyzing netflix data

How much time have I spent watching The Office?

That’s a question that has run through my head repeatedly over the years. The beloved sitcom has been my top "comfort show/background noise" choice for a long time.

It used to be a question I couldn’t answer, because the data Netflix allowed users to download about their activity was extremely limited.

Now, though, Netflix allows you to download a veritable treasure-trove of data about your account. With just a little Python and pandas programming, we can now get a concrete answer to the question: how much time have I spent watching The Office?

Want to find out how much time you have spent watching The Office, or any other show on Netflix?

In this tutorial, we’ll walk you through exactly how to do it step by step!

Having a little Python and pandas experience will be helpful for this tutorial, but it’s not strictly necessary. You can sign up and try our interactive Python for beginners course for free.

But first, let’s answer a quick question . . .

Can’t I Just Use Excel? Why Do I Need to Write Code?

Depending on how much Netflix you watch and how long you’ve had the service, you might be able to use Excel or some other spreadsheet software to analyze your data.

But there’s a good chance that will be tough.

The dataset you’ll get from Netflix includes every time a video of any length played — that includes those trailers that auto-play as you’re browsing your list.

So, if you use Netflix often or have had the streaming service for a long time, the file you’re working with is likely to be pretty big. My own viewing activity data, for example, was over 27,000 rows long.

Opening a file that big in Excel is no problem. But to do our analysis, we’ll need to do a bunch of filtering and performing calculations. With that much data, Excel can get seriously bogged-down, especially if your computer isn’t particularly powerful.

Scrolling through such a huge dataset trying to find specific cells and formulas can also become confusing fast.

Python can handle large datasets and calculations like this much more smoothly because it doesn’t have to render everything visually. And since we can do everything with just a few lines of code, it’ll be really easy to see everything we’re doing, without having to scroll through a big spreadsheet looking for cells with formulas.

Step 1: Download Your Netflix Data

For the purposes of this tutorial, I’ll be using my own Netflix data. To grab your own, make sure you’re logged in to Netflix and then visit this page. From the main Netflix screen, you can also find this page by clicking your account icon in the top right, clicking "Account", and then clicking "Download your personal information" on the page that loads.

On the next page, you should see this:

download-personal-netflix-data

Click the red button to submit your data download request.

Click "Submit a Request." Netflix will send you a confirmation email, which you’ll need to click.

Then, unfortunately, you’ll have to wait. Netflix says preparing your data report can take up to 30 days. I once got one report within 24 hours, but another one took several weeks. Consider bookmarking this page so that you can come back once you’ve got your data.

I've also made a small sample from my own data available for download here. If you'd like, you can download that file and use it to work through this project. Then, when your own data becomes available, simply substitute your file for the sample, run your code again, and you'll get your answers almost instantly!

When Netflix says it may take a month to get your data.

Netflix will email you when your report is available to download. When it is, act fast because the download will "expire" and disappear again after a couple of weeks!

The download will arrive as a .zip file that contains roughly a dozen folders, most of which contain data tables in .csv format. There are also two PDFs with additional information about the data.

Step 2: Familiarize Yourself with the Data

This is a critical step in the data analysis process. The better we understand our data, the better our chances are of producing meaningful analysis.

Let’s take a look at what we’ve got. Here’s what we’ll see when we unzip the file:

Our goal here is to figure out how much time I’ve spent watching Netflix. Content Interaction seems like the most likely folder to contain that data. If we open it, we’ll find a file called ViewingActivity.csv that looks exactly like what we want — a log of everything we’ve viewed over the history of the account.

A sample of what the data looks like as a spreadsheet.

Looking at the data, we can quickly spot one potential challenge. There’s a single column, Title, that contains both show and episode titles, so we’ll need to do a little extra work to filter for only episodes of The Office.

At this point, it would be tempting to dive right into the analysis using that data, but let’s make sure we understand it first! In the downloaded zip file, there’s a file called Cover sheet.pdf that contains data dictionaries for all of the .csv files, including ViewingActivity.csv.

This data dictionary can help us answer questions and avoid errors. For example, consulting the dictionary for ViewingActivity.csv, we can see that the column Start Time uses the UTC timezone. If we want to analyze which times of day we most often watch Netflix, for example, we’ll need to convert this column to our local timezone.

Take some time to look over the data in ViewingActivity.csv and the data dictionary in Cover sheet.pdf before moving on to the next step!

Step 3: Load Your Data into a Jupyter Notebook

For this tutorial, we’ll be analyzing our data using Python and pandas in a Jupyter notebook. If you don’t already have that set up, you can find a quick, beginner-friendly guide at the beginning of this tutorial, or check out a more in depth Jupyter Notebook for Beginners post.

Once we’ve got a notebook open, we’ll import the pandas library and read our Netflix data CSV into a pandas dataframe we’ll call df:

import pandas as pd

df = pd.read_csv('ViewingActivity.csv')

Now, let’s do a quick preview of the data to make sure everything looks correct. We’ll start with df.shape, which will tell us the number of rows and columns in the dataframe we’ve just created.

df.shape
(27354, 10)

That result means we have 27,354 rows and 10 columns. Now, let's see what it looks like by previewing the first few rows of data using df.head().

To maintain some privacy, I’ll be adding the additional argument 1 inside the .head() parentheses so that only a single row prints in this blog post. In your own analysis, however, you can use the default .head() to print the first five rows.

df.head(1)

Profile Name Start Time Duration Attributes Title Supplemental Video Type Device Type Bookmark Latest Bookmark Country
0 Charlie 2020-10-29 3:27:48 0:00:02 NaN The Office (U.S.): Season 7: Ultimatum (Episod… NaN Sony PS4 0:00:02 0:00:02 US (United States)

Perfect!

Step 4: Preparing the Data for Analysis

Before we can do our number-crunching, let’s clean up this data a bit to make it easier to work with.

Dropping Unnecessary Columns (Optional)

First, we’ll start by dropping the columns we’re not planning to use. This is totally optional, and it’s probably not a good idea for large-scale or ongoing projects. But for a small-scale personal project like this, it can be nice to work with a dataframe that includes only columns we’re actually using.

In this case, we’re planning to analyze how much and when I’ve watched The Office, so we’ll need to keep the Start Time, Duration, and Title columns. Everything else can go.

To do this, we’ll use df.drop() and pass it two arguments:

  1. A list of the columns we’d like to drop
  2. axis=1, which tells pandas to drop columns

Here’s what it looks like:

df = df.drop(['Profile Name', 'Attributes', 'Supplemental Video Type', 'Device Type', 'Bookmark', 'Latest Bookmark', 'Country'], axis=1)
df.head(1)

Start Time Duration Title
0 2020-10-29 3:27:48 0:00:02 The Office (U.S.): Season 7: Ultimatum (Episod…

Great! Next, let’s work with the time data.

Converting Strings to Datetime and Timedelta in Pandas

The data in our two time-related columns certainly looks correct, but what format is this data actually being stored in? We can use df.dtypes to get a quick list of the data types for each column in our dataframe:

df.dtypes
Start Time    object
Duration      object
Title         object
dtype: object

As we can see here, all three columns are stored as object, which means they’re strings. That’s fine for the Title column, but we need to change the two time-related columns into the correct datatypes before we can work with them.

Specifically, we need to do the following:

  • Convert Start Time to datetime (a data and time format pandas can understand and perform calculations with)
  • Convert Start Time from UTC to our local timezone
  • Convert Duration to timedelta (a time duration format pandas can understand and perform calculations with)

So, let’s approach those tasks in that order, starting with converting Start Time to datetime using pandas’s pd.to_datetime().

We’ll also add the optional argument utc=True so that our datetime data has the UTC timezone attached to it. This is important, since we’ll need to convert it to a different timezone in the next step.

We’ll then run df.dtypes again just to confirm that this has worked as expected.

df['Start Time'] = pd.to_datetime(df['Start Time'], utc=True)
df.dtypes
Start Time    datetime64[ns, UTC]
Duration                   object
Title                      object
dtype: object

Now we’ve got that column in the correct format, it’s time to change the timezone so that when we do our analysis, we’ll see everything in local time.

We can convert datetimes to any timezone using the .tz_convert() and passing it an argument with the string for the timezone we want to convert to. In this case, that’s 'US/Eastern'. To find your specific timezone, here’s a handy reference of TZ timezone options.

The tricky bit here is that we can only use .tz_convert() on a DatetimeIndex, so we need to set our Start Time column as the index using set_index() before we perform the conversion.

In this tutorial, we’ll then use reset_index() to turn it back into a regular column afterwards. Depending on your preference and goals, this may not be necessary, but for the purposes of simplicity here, we’ll try to do our analysis with all of our data in columns rather than having some of it as the index.

Putting all of that together looks like this:

# change the Start Time column into the dataframe's index
df = df.set_index('Start Time')

# convert from UTC timezone to eastern time
df.index = df.index.tz_convert('US/Eastern')

# reset the index so that Start Time becomes a column again
df = df.reset_index()

#double-check that it worked
df.head(1)

Start Time Duration Title
0 2020-10-28 23:27:48-04:00 0:00:02 The Office (U.S.): Season 7: Ultimatum (Episod…

We can see this is correct because the previous first row in our dataset had a Start Time of 2020-10-29 03:27:48. During Daylight Savings Time, the U.S. Eastern time zone is four hours behind UTC, so we can see that our conversion has happened correctly!

Now, let’s deal with our Duration column. This is, as the name suggests, a duration — a measure of a length of time. So, rather than converting it to a datetime, we need to convert it to a timedelta, which is a measure of time duration that pandas understands.

This is very similar to what we did when converting the Start Time column. We’ll just need to use pd.to_timedelta() and pass it the column we want to convert as an argument.

Once again, we’ll use df.dtypes to quickly check our work.

df['Duration'] = pd.to_timedelta(df['Duration'])
df.dtypes
Start Time    datetime64[ns, US/Eastern]
Duration                 timedelta64[ns]
Title                             object
dtype: object

Perfect! But we’ve got one more data preparation task to handle: filtering that Title column so that we can analyze only views of The Office.

Filtering Strings by Substring in pandas Using str.contains

There are many ways we could approach filtering The Office views. For our purposes here, though, we’re going to create a new dataframe called office and populate it only with rows where the Title column contains 'The Office (U.S.)'.

We can do this using str.contains(), giving it two arguments:

  • 'The Office (U.S.)', which is the substring we’re using to pick out only episodes of The Office.
  • regex=False, which tells the function that the previous argument is a string and not a regular expression.

Here’s what it looks like in practice:

# create a new dataframe called office that that takes from df
# only the rows in which the Title column contains 'The Office (U.S.)'
office = df[df['Title'].str.contains('The Office (U.S.)', regex=False)]

Once we've done this, there are a few ways we could double-check our work. For example, we could use office.sample(20) to inspect a random 20 rows of our new office dataframe. If all 20 rows contained Office episodes, we could be pretty confident things worked as expected.

For the purposes of preserving a little privacy in this tutorial, though, I’ll run office.shape to check the size of the new dataframe. Since this dataframe should contain only my views of The Office, we should expect it to have significantly fewer rows than the 27,000+ row df dataset.

office.shape
(5479, 3)

Filtering Out Short Durations Using Timedelta

Before we really dig in and analyze, we should probably take one final step. We noticed in our data exploration that when something like an episode preview auto-plays on the homepage, it counts as a view in our data.

However, watching two seconds of a trailer as you scroll past isn’t the same as actually watching an episode! So let’s filter our office dataframe down a little bit further by limiting it to only rows where the Duration value is greater than one minute. This should effectively count the watchtime for partially watched episodes, while filtering out those short, unavoidable "preview" views.

Again, office.head() or office.sample() would be good ways to check our work here, but to maintain some semblance of privacy, I'll again use office.shape just to confirm that some rows were removed from the dataframe.

office = office[(office['Duration'] > '0 days 00:01:00')]
office.shape
(5005, 3)

That looks good, so let’s move on to the fun stuff!

Analyzing the Data

When you realize how much time you’ve spent watching the same show.

How much time have I spent watching The Office?

First, let’s answer the big question: How much time have I spent watching The Office?

Since we’ve already got our Duration column in a format that pandas can compute, answering this question is quite straightforward. We can use .sum() to add up the total duration:

office['Duration'].sum()
Timedelta('58 days 14:03:33')

So, I’ve spent a total of 58 days, 14 hours, 3 minutes and 33 seconds watching The Office on Netflix. That is . . . a lot.

In my defense, that’s over the course of a decade, and a good percentage of that time wasn’t spent actively watching! When I’m doing brain-off work, working out, playing old video games, etc., I’ll often turn The Office on as a kind of background noise that I can zone in and out of. I also used to use it as a kind of white noise while falling asleep.

But we’re not here to make excuses for my terrible lifestyle choices! Now that we’ve answered the big question, let’s dig a little deeper into my The Office-viewing habits:

When do I watch The Office?

Let’s answer this question in two different ways:

  • On which days of the week have I watched the most Office episodes?
  • During which hours of the day do I most often start Office episodes?

We’ll start with a little prep work that’ll make these tasks a little more straightforward: creating new columns for "weekday" and "hour".

We can use the .dt.weekday and .dt.hour methods on the Start Time column to do this and assign the results to new columns named weekday and hour:

office['weekday'] = office['Start Time'].dt.weekday
office['hour'] = office['Start Time'].dt.hour

# check to make sure the columns were added correctly
office.head(1)

Start Time Duration Title weekday hour
1 2020-10-28 23:09:43-04:00 0 days 00:18:04 The Office (U.S.): Season 7: Classy Christmas:… 2 23

Now, let’s do a little analysis! These results will be easier to understand visually, so we’ll start by using the %matplotlib inline magic to make our charts show up in our Jupyter notebook. Then, we’ll import matplotlib.

%matplotlib inline
import matplotlib

Now, let’s plot a chart of my viewing habits by day of the week. To do this, we’ll need to work through a few steps:

  • Tell pandas the order we want to chart the days in using pd.Categorical — by default, it will plot them in descending order based on the number of episodes watched on each day, but when looking at a graph, it’ll be more intuitive to see the data in Monday-Sunday order.
  • Count the number of episodes I viewed on each day in total
  • Sort and plot the data

(There are also many other ways we could approach analyzing and visualizing this data, of course.)

Let’s see how it looks step by step:

# set our categorical and define the order so the days are plotted Monday-Sunday
office['weekday'] = pd.Categorical(office['weekday'], categories=
    [0,1,2,3,4,5,6],
    ordered=True)

# create office_by_day and count the rows for each weekday, assigning the result to that variable
office_by_day = office['weekday'].value_counts()

# sort the index using our categorical, so that Monday (0) is first, Tuesday (1) is second, etc.
office_by_day = office_by_day.sort_index()

# optional: update the font size to make it a bit larger and easier to read
matplotlib.rcParams.update({'font.size': 22})

# plot office_by_day as a bar chart with the listed size and title
office_by_day.plot(kind='bar', figsize=(20,10), title='Office Episodes Watched by Day')

The Office views by day, Mon-Sun.

As we can see, I’ve actually tended to watch The Office more during the week than on weekends. This makes sense based on my habits, since it’s often background noise during evening work, workouts, etc.

Now, let’s take a look at the same data by hour. The process here is very similar to what we just did above:

# set our categorical and define the order so the hours are plotted 0-23
office['hour'] = pd.Categorical(office['hour'], categories=
    [0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23],
    ordered=True)

# create office_by_hour and count the rows for each hour, assigning the result to that variable
office_by_hour = office['hour'].value_counts()

# sort the index using our categorical, so that midnight (0) is first, 1 a.m. (1) is second, etc.
office_by_hour = office_by_hour.sort_index()

# plot office_by_hour as a bar chart with the listed size and title
office_by_hour.plot(kind='bar', figsize=(20,10), title='Office Episodes Watched by Hour')

The Office views by hour, AM-PM

From the data, it looks like 12 a.m. and 1 a.m. were the hours during which I most often started episodes of The Office. This is due to my (unhealthy) habit of using the show as white noise while going to sleep — many of these episodes probably auto-played while I was already asleep!

Outside of that, it’s no surprise to see that most of my viewing happened during the evenings.

(Note: This data actually may not reflect my real habits very well, because I lived in China for a significant portion of my Netflix account ownership. We didn’t account for that in this tutorial because it’s a unique situation that won’t apply for most people. If you’ve spent significant time in different timezones during your Netflix usage, then you may need to do some additional date filtering and timezone conversion in the data cleaning stage before analysis.)
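For what it’s worth, here is a rough, purely illustrative sketch of what that extra step might look like. The move date and the 'Asia/Shanghai' timezone are hypothetical, and it assumes the timezone-aware Start Time column plus the integer weekday and hour columns from earlier (so you’d run it before the Categorical conversions above):

import pandas as pd

# hypothetical: rows before this date were watched while living in another timezone
move_date = pd.Timestamp('2018-01-01', tz='US/Eastern')
abroad = office['Start Time'] < move_date

# recompute weekday and hour for those rows in the timezone you actually lived in
local = office.loc[abroad, 'Start Time'].dt.tz_convert('Asia/Shanghai')
office.loc[abroad, 'weekday'] = local.dt.weekday
office.loc[abroad, 'hour'] = local.dt.hour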

What’s Next?

In this tutorial, we’ve taken a quick dive into some personal Netflix data and learned that — among other things — I watch The Office too much. But there are tons of places you could go from here! Here are some ideas for expanding this project for yourself:

  • Do the same or similar analysis for another show.
  • See if you can create separate columns for show titles and episode titles using regular expressions (you can learn to use those in our Advanced Data Cleaning course); there’s a rough sketch after this list
  • Figure out which specific episodes you’ve watched most and least
  • Create prettier charts (our Storytelling with Data Visualization course can help with that)
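As a starting point for the title-splitting and most-watched-episode ideas above, here is a rough sketch. The regular expression is just a guess based on the "Show: Season N: Episode title" pattern visible in this data, so treat it as something to adapt rather than a finished solution:

# rough sketch: split "Show: Season N: Episode title" into separate columns
parts = office['Title'].str.extract(r'^(?P<show>.+?): Season (?P<season>\d+): (?P<episode>.+)$')
office = office.join(parts)

# then, for example, see which episodes you've watched most often
office['episode'].value_counts().head(10)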

When you realize your Netflix viewing habits have led to you finishing a cool project.

You can also try out some other fun projects using your own personal data.

Want to learn to do this kind of project on your own, whenever you want? Our interactive data science courses will teach you to do all of this — and a whole lot more! — right in your browser window.

Charlie Custer

Charlie is a student of data science, and also a content marketer at Dataquest. In his free time, he’s learning to mountain bike and making videos about it.

The post Beginner Python Tutorial: Analyze Your Personal Netflix Data appeared first on Dataquest.

Python

via Planet Python https://ift.tt/1dar6IN

November 5, 2020 at 08:51PM

The Revolution is here and it’s called Statamic 3

The Revolution is here and it’s called Statamic 3

https://ift.tt/3mUCLGt


I’ve finally come to appreciate the power of a CMS. After maintaining my own blog built on Laravel Nova. After publishing on Medium. After hosting a static site on various free and open source tools that got abandoned by their maintainers or never really had much of a community in the first place. Publishing content is paramount to any web operation. The internet is media, and how you manage your media can determine a lot of the success of your business. If you need a developer to commit to the codebase or write entries to the database just to publish a blog post or update copy, there’s a lot of overhead. So when I found out pre-release that you’d be able to install Statamic 3 into a new or existing Laravel application, I was intrigued.

The Statamic team is grounded in the Laravel community. Their copy is hilarious. Their leader, Jack McDade, is a far-out designer. His Radical Design course is expected to be out soonish, and his personal website reads, “I’m Jack McDade and I’m tired of boring websites.” Statamic has been around since 2012. In addition to Jack, it was co-founded by repeat Product Hunt maker of the year Mubashar Iqbal (aka Mubs). Statamic 3 launched, complete with a magical unicorn, on June 11th, 2020. You can read the announcement blog post titled Everything You Need to Know About Statamic 3.

Let’s start with the end in mind shall we?

I’m typing this in a beautiful editor in my browser. There are no code changes I need to make to get this post out. I don’t need to write it in a Google Doc and paste it over. The Statamic dashboard gives me powers. Hell, Mr. McDade even live streamed building an Airbnb clone with Statamic; it’ll be the first video result when you google “Statamic Airbnb for chairs.” Though I’m familiar with Laravel and love to code, I don’t want to start with anything too crazy, because with great power comes great responsibility. (P.S. They’re casting Tobey Maguire and Andrew Garfield in the new Spider-Man movie with Doctor Strange. It’s gunna be lit.) In this tutorial we’ll go over how I built and launched this very site you’re on using Statamic 3. We’re hosted on Netlify and bought the domain name with Google Domains.

How it all began

This project began on the internet. The internet is a series of interconnected tubes. Birds fly in some of these tubes and they come out on an internet website called twitter dot com.

Epic Laravel origin story on twitter

I’d followed William since purchasing his book on how to Break Into Tech With Twitter. To be perfectly frank, I haven’t started the book yet, but I’m looking forward to reading it. As another side project I run a site called Employbl, a resource for job seekers, and I figured the book would be good reading to learn about how people break in and get their start in the tech industry. Everyone’s experience is different! I’d categorize myself as a Laravel fanboi. I use it in my day job. I build my side project with it. I like to learn about it. I follow Laravel devs on Twitter. There’s lots I like about it. I’ve blogged a fair bit about Laravel but never really dedicated a site to it. I run a Full Stack Developer Meetup group, but even there we don’t really have a blog. Until recently I published my Laravel tutorials and blog posts on Employbl. That’s fine, but it’s not strictly related to the company mission: I’d like Employbl to serve the tech industry more broadly, even departments outside of engineering, because it’s ultimately about giving you the tools to help you get hired. I needed a space for Laravel developers. Plus, Statamic 3 was out and I wanted to give it a test drive. I bought the domain epiclaravel.dev on Google Domains for $12 for the first year.

Create a project

I had a domain; then I needed to create a Statamic website. Here was the first “aha moment”: Statamic 3 has a Static Site Generator package, open source on GitHub. This enables us to host our Statamic sites anywhere we can store a bunch of flat files. That could be S3, Netlify, GitHub Pages, or Vercel (formerly Zeit). It doesn’t require us to spin up a server like we would if we were hosting a PHP application on something like Digital Ocean, EC2, Laravel Forge, Ploi or Render. I was excited. It simplified the whole process, reduced cost, and would be easier to maintain and set up a deployment process for.

The Statamic team has built some starter templates for our ease of use. There are only a couple right now, but I could see this being a growth area. They already have a Marketplace for Addons and display copy saying that a starter kit section is coming soon. Start building Statamic starter kits now and yours could be one of the first themes available on the platform!

Potential aside, today we have a few options.

Going with the Doogie Browser theme was tempting, but making my website look like a ’90s PC was a bit too much to swallow, so Starter’s Creek it was. Once I’d picked a starter kit I could generate my project. Of course I could have started from complete scratch, but I’m hella lazy like that and wanted to be up and blogging (and building ChairBnB) quickly, so I used a starter kit. The only real difference is which repository you clone when generating the project:

git clone git@github.com:statamic/starter-kit-starters-creek.git epiclaravel
cd epiclaravel
rm -rf .git
composer install
cp .env.example .env && php artisan key:generate

You can view the source code for the starter template here. The Statamic team has ingeniously named their main php worker file “please”. To use the command line interface that’s included with Laravel you use “artisan“. To use the command line interface that’s included with Statamic you use “please“. So we create a user:

php please make:user

Fun Fact: Ecamm Live is a streaming tool for Mac.

I have Laravel Valet set up on my local machine, so I’ll use that for running the website locally. The site is visible locally through the browser at http://epiclaravel.test. Setting up Laravel Valet can be a little tricky if you’re completely unfamiliar, but it’s worth it! For Laravel applications you can serve sites with Laravel Valet and use Takeout Docker containers to host other services your app needs, like Postgres, Redis, Meilisearch, ElasticSearch and more. Takeout is built and maintained by the Tighten team. For my purposes, having composer installed and Valet configured is enough to run the site in development.

Enter the dashboard

To log in to the dashboard, head to /cp on your domain. From there you’re off and running. It’s probably best to start by reading the documentation. Statamic is really powerful, and I’ll probably write more blog posts as I explore it further. Statamic Collections are very promising, and I’m looking forward to implementing search and learning how Statamic handles it. Their documentation reads: “There are three components — coincidentally the same number of Hanson brothers — whose powers combine to provide you the power of search. The form, the index, and the driver.” With the site running locally and my user created, I see this:

Open the project in a code editor (for example PhpStorm or VS Code) and you can play with the values or the HTML/CSS. The Starter’s Creek starter kit is built with TailwindCSS, which I’m excited to play with. Previously I’d been plagued by build process errors when trying to set up Tailwind, so I’d stuck to Bootstrap 4 out of habit. For now, though, we have the template and not a lot of feature development to be done. Let’s deploy!

Deploy

One of the awesome new features of Statamic 3 (along with being able to install Statamic into any Laravel project as a composer package) is the Static Site Generator. Why is this awesome you ask? Static sites are easier to host than running your own server. When a site is “static” it pretty much just means it’s a bunch of files sitting on a server somewhere. All the computer needs to do is serve the files (HTML, CSS and Javascript) to the end users, in most cases a web browser. The alternative is having your own server that you maintain or doing “serverless” things (still involves servers). Static sites you can host with Netlify, Amazon S3, GitHub Pages or Vercel. If an app requires a server (and probably a database) you’re more in the Digital Ocean / Google Cloud / AWS / Azure space. Render and Heroku are great options too 🙂

We could deploy our Statamic site using a server and a database, the way we normally would. But I think it’s going to be easier to deploy a static site to start off; all we want to do is host content for now. I’ve used Netlify before, so I’m going to stick with that.

We first need to require the static site generator composer package into our app:

composer require statamic/ssg

We’ll publish the config file to be explicit about what we have going on:

php artisan vendor:publish --provider="Statamic\StaticSite\ServiceProvider"

This normally generates a file in the config directory; in this case there’s a config/statamic directory, and it looks like the starter template already had it ready to go. You can view the config file here if that’s what floats your boat 🚣

Now we can build our static site: ✨

php please ssg:generate

This is the output I got:

The Statamic team outlined some Deployment Examples for us. It looks pretty straightforward and awesome:

Here are the steps to deploy a static Statamic site. Your app will be powered by flat files and stored safely in version control.

Deployment Step 1: Deploy to a GitHub repo

You could also deploy to GitLab or Bitbucket. Honestly, I’ve heard great things about GitLab, but I use GitHub mostly out of habit and for the platform’s social aspects. Maybe GitLab has that too, idk. Anywho, create your repo. Then, from the root of your project, run:

git init
git add -A
git commit -m 'initial commit'
# then point the repo at GitHub and push (the remote URL below is a placeholder for your own)
git remote add origin <your-github-repo-url>
git push -u origin master

Deployment Step 2: Deploy with Netlify

We can link Netlify to our git repo, configure the build command, set the PHP version as an environment variable, and set the publish directory:

This deploys our site to a Netlify URL like: https://boring-noyce-0f134b.netlify.app/ 

Woohoo! It’s live on the internet with continuous deployment set up. Pushing to the master branch with git will redeploy our site. We also need to set some environment variables on the Netlify dashboard. The .env file is not stored in git. The Netlify dashboard provides space to specify these variables for production.

Deployment Step 3: Hook up domain name

I bought my domain name through Google Domains. In hindsight this was a mistake: the Google Domains UI is easy to navigate and I have other domains there, but since I’m hosting through Netlify I shoulda just bought the domain through them too. Either way, we need to point the domain name at Netlify’s servers, so we’ll be using “Netlify DNS”.

This takes up to 48 hours to propagate over so let’s hope it worked! We can view the propagation status in the Netlify dashboard under Settings > Domain Management.

After the DNS changes propagate, your site will be live. The flow for future updates is to log in to the control panel on your local .test domain, write content, make edits, and do CMS things. This will change the flat files in your project. When your site is looking good locally, push the changes up to GitHub and your site will automatically be deployed! That’s what I’m doing for Epic Laravel and it’s working great 😉

Conclusion

In this post we’ve gone from no website to a functional one with a CMS. The most complicated or technical part is probably setting up Laravel Valet for local development. Once the site is running, we can make lots of edits from the Control Panel. We can also use our Laravel, PHP and Tailwind knowledge to build custom functionality, or buy pre-built solutions from the Statamic marketplace. Moving forward, I’m excited to explore the Statamic core concepts to build out the site, and maybe even install Statamic into my existing Laravel projects.

programming

via Laravel News Links https://ift.tt/2dvygAJ

November 3, 2020 at 02:24PM

Laravel Has Many Through

Laravel Has Many Through

https://ift.tt/3k0db11

Laravel Has Many Through generates the code for your has many through relationships by asking a few simple questions.

programming

via Laravel News Links https://ift.tt/2dvygAJ

November 3, 2020 at 02:24PM

23,600 Hacked Databases Have Leaked From a Defunct ‘Data Breach Index’ Site

23,600 Hacked Databases Have Leaked From a Defunct ‘Data Breach Index’ Site

https://ift.tt/2I1x4HB

More than 23,000 hacked databases have been made available for download on several hacking forums and Telegram channels in what threat intel analysts are calling the biggest leak of its kind. From a report: The database collection is said to have originated from Cit0Day.in, a private service advertised on hacking forums to other cybercriminals. Cit0day operated by collecting hacked databases and then providing access to usernames, emails, addresses, and even cleartext passwords to other hackers for a daily or monthly fee. Cybercriminals would then use the site to identify possible passwords for targeted users and then attempt to breach their accounts at other, more high-profile sites. The idea behind the site isn’t unique, and Cit0Day could be considered a reincarnation of similar "data breach index" services such as LeakedSource and WeLeakInfo, both taken down by authorities in 2018 and 2020, respectively.



geeky

via Slashdot https://slashdot.org/

November 4, 2020 at 12:56PM

Why 2A Supporters Love The Mandalorian

Why 2A Supporters Love The Mandalorian

https://ift.tt/3jNgW9L


Why 2A Supporters Love The Mandalorian
This image released by Disney Plus shows Pedro Pascal, as Din Djarin, right, with The Child, in a scene from “The Mandalorian,” premiering its second season on Friday. (Disney Plus via AP)

I saw one of my first movies in a theater before I was even old enough for kindergarten. I was just four years old when Star Wars premiered. My uncle, then all of 16 and a newly licensed driver, took me to see what was becoming a cultural phenomenon. I became a massive science fiction fan at that moment, a genre I still love all these many years later.

Then the prequels came, and they were…not good.

Then we got the new movies. While I liked The Force Awakens, The Last Jedi was freaking awful. The Rise of Skywalker was better, but that was a low bar.

Disney had all but destroyed my beloved Star Wars.

Then Disney Plus launched and premiered The Mandalorian. It showed that the issue wasn’t Disney, but something else.

What I noticed, though, were just how many of my fellow Second Amendment lovers also loved The Mandalorian.

Now in its second season–which premiered this past Friday–the show is continuing where it left off, and I think the show’s popularity with the Second Amendment crowd will continue to grow. In fact, I expect to start seeing Mandalorian-themed stuff begin to replace Punisher skulls any day now.

But the question is, why? Here are a few reasons I’ve seen.

In one season one episode, the Mandalorian has to talk to jawas about parts for his ship. He’s advised to leave his guns behind. The character, Din Djarin, simply replies that he’s a Mandalorian and that “weapons are part of my religion.”

While guns aren’t religious for most of us, the refusal to leave our guns behind speaks to a part of the Second Amendment supporter’s soul. Guns are for self-defense. Leaving them behind exposes you to danger. Djarin has more reason than most of us to be concerned (he’s a bounty hunter, after all), but at this point no one is actively hunting him so far as he’s aware. He simply won’t leave his weapons behind.

It’s kind of hard not to look at that and think about how similar it feels to how many of us approach things. A “Gun Free Zone” sign is basically telling us to go away, conduct business somewhere else. An espoused anti-Second Amendment opinion is much the same thing.

While guns aren’t necessarily part of our religion, they’re a part of our life, and we recognize that danger doesn’t go away just because you wish it would.

Over the course of the show, there are a couple of episodes that show evil people preying on the peaceful but disarmed folks just trying to get by in life. It takes someone with a gun to make armed bad guys go away.

Of course, while this is fiction, the reality of it appears everywhere in real life. Criminals prey on the innocent citizen unless that citizen is armed. Some who can afford it hire private security to bring their guns, but many of us can’t afford to outsource it.

Whether it’s protecting a village as Djarin did in season one or watching a would-be marshal put slavers down, the only thing that really stops bad people with guns is good people with guns.

I mean, I don’t have to lay out why that appeals to the Second Amendment crowd.

More importantly than the symbolism, of course, is the story. One of the worst things about much of modern science fiction is the idea that politics should trump telling a good story.

In The Mandalorian, story doesn’t play second fiddle to anything. The plot is engaging and entertaining. It fully embraces the idea of it being a space western in a way that no show has since Firefly. In fact, there’s some debate as to which is better, but since I absolutely love both, I’m staying out of that one.

For fans of westerns, you’ll recognize the familiar themes. For example, there’s the episode with MMA legend Gina Carano that is reminiscent of The Magnificent Seven. Episode one of season two gives a bit of a shout-out to Justified and Deadwood, with guest star Timothy Olyphant showing up.

And through it all, there’s a weapon on his side.

See, while it tells great stories, it doesn’t beat you over the head with all the ways you suck like so much of modern media tries to do. Instead, it just entertains you while, admittedly, showing all the things that Second Amendment fans have been saying for years.

I know that a lot of people aren’t fans of Disney, and I get that. However, let’s be better than the other side and not try to destroy businesses that disagree with us on stuff.

Instead, support good fiction that maybe shows a bit of what we believe. Do that enough and they’ll start making more of it, especially when so much of their other stuff isn’t getting that support. You win the culture war surrounding the Second Amendment by making sure to support stuff that might not be intended to be pro-2A but actually is.

Author’s Bio:

Tom Knighton


Tom Knighton is a Navy veteran, a former newspaperman, a novelist, and a blogger and lifetime shooter. He lives with his family in Southwest Georgia. He’s also the host of Unloaded TV on YouTube.

More posts from Tom Knighton

guns

via Bearing Arms https://ift.tt/2WiVJN5

November 2, 2020 at 06:05PM