If you use SQL on a regular basis, then you are well aware that Window Functions are powerful. They allow us to simplify queries that would otherwise be quite a mess. We can provide meaningful insight across rows of data without collapsing the results into a single value. I have written numerous blog posts on Window Functions, many of them quite recently. I decided to make this blog post a compilation of all the Window Function posts I have written, providing a one-stop source for any readers interested in learning more about Window Functions.
Like what you have read? See anything incorrect? Please comment below and thank you for reading!!!
A Call To Action!
Thank you for taking the time to read this post. I truly hope you discovered something interesting and enlightening. Please share this post with someone else you know who would get the same value out of it.
Visit the Portfolio-Projects page to see blog posts/technical writing I have completed for clients.
To receive email notifications (Never Spam) from this blog (“Digital Owl’s Prose”) for the latest blog posts as they are published, please subscribe (of your own volition) by clicking the ‘Click To Subscribe!’ button in the sidebar on the homepage! (Feel free at any time to review the Digital Owl’s Prose Privacy Policy Page for any questions you may have about: email updates, opt-in, opt-out, contact forms, etc…)
Be sure to visit the “Best Of” page for a collection of my best blog posts.
Josh Otwell has a passion to study and grow as a SQL Developer and blogger. Other favorite activities find him with his nose buried in a good book, an article, or the Linux command line. He also shares a love of tabletop RPG games, reading fantasy novels, and spending time with his wife and two daughters.
Disclaimer: The examples presented in this post are hypothetical ideas of how to achieve similar types of results. They are not necessarily the best solutions. Most, if not all, of the examples provided are performed on a personal development/learning workstation environment and should not be considered production quality or ready. Your particular goals and needs may vary. Use those practices that best benefit your needs and goals. Opinions are my own.
AWS goes after Microsoft’s SQL Server with Babelfish for Aurora PostgreSQL
https://ift.tt/36pmIuY
AWS today announced a new database product that is clearly meant to go after Microsoft’s SQL Server and make it easier — and cheaper — for SQL Server users to migrate to the AWS cloud. The new service is Babelfish for Aurora PostgreSQL. The tagline AWS CEO Andy Jassy used for this service in his re:Invent keynote today is probably telling: “Stop paying for SQL Server licenses you don’t need.” And to show how serious it is about this, the company is even open-sourcing the tool.
What Babelfish does is provide a translation layer for SQL Server’s proprietary SQL dialect (T-SQL) and communications protocol so that businesses can switch to AWS’ Aurora relational database at will (though they’ll still have to migrate their existing data). It provides translations not only for the dialect itself but also for SQL commands, cursors, catalog views, data types, triggers, stored procedures, and functions.
The promise here is that companies won’t have to replace their database drivers or rewrite and verify their database requests to make this transition.
“We believe Babelfish stands out because it’s not another migration service, as useful as those can be. Babelfish enables PostgreSQL to understand database requests—both the command and the protocol—from applications written for Microsoft SQL Server without changing libraries, database schema, or SQL statements,” AWS’s Matt Asay writes in today’s announcement. “This means much faster ‘migrations’ with minimal developer effort. It’s also centered on ‘correctness,’ meaning applications designed to use SQL Server functionality will behave the same on PostgreSQL as they would on SQL Server.”
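To make the “no driver or code changes” claim concrete, here is a minimal, hypothetical sketch of what it could look like from application code: the same SQL Server driver and the same T-SQL query, pointed at a Babelfish-enabled Aurora PostgreSQL endpoint rather than a SQL Server host. The endpoint, database, credentials, and table below are placeholders, and it is an assumption that the Babelfish endpoint accepts connections over SQL Server's wire protocol on its usual port.

import pyodbc  # the same SQL Server ODBC driver an existing application would already use

# Hypothetical Babelfish-enabled Aurora PostgreSQL endpoint -- in principle only the
# SERVER value changes; the connection string format and the T-SQL below stay the same.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=my-aurora-cluster.cluster-example.us-east-1.rds.amazonaws.com,1433;"
    "DATABASE=orders_db;UID=app_user;PWD=example_password"
)

cursor = conn.cursor()
# T-SQL constructs (TOP, GETDATE()) that Babelfish is meant to translate for PostgreSQL.
cursor.execute("SELECT TOP 5 order_id, total FROM dbo.orders WHERE created_at < GETDATE()")
for row in cursor.fetchall():
    print(row.order_id, row.total)
conn.close()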
PostgreSQL, AWS rightly points out, is one of the most popular open-source databases in the market today. A lot of companies want to migrate their relational databases to it — or at least use it in conjunction with their existing databases. This new service is going to make that significantly easier.
The open-source Babelfish project will launch in 2021 and will be available on GitHub under the Apache 2.0 license.
“It’s still true that the overwhelming majority of relational databases are on-premise,” AWS CEO Andy Jassy said. “Customers are fed up with and sick of incumbents.” As is tradition at re:Invent, Jassy also got a few swipes at Oracle into his keynote, but the real target of the products the company is launching in the database area today is clearly Microsoft.
There’s not much new to report on the fight over the results of the 2020 Presidential election. It becomes clearer by the day that Joe Biden did not win it; his votes were largely obtained by criminal means, vote-rigging and outright electoral fraud. It’s no longer possible for any objective individual to doubt that. Two recent summaries point out the highlights of the facts of the matter. If you need ammunition, read them for yourselves, and follow the links.
7. In the Rust Belt, Biden lost black support everywhere except in Detroit, Philadelphia, and Milwaukee. In those cities, every single black person apparently voted for Biden.
9. The fact that Pennsylvania, Wisconsin, Arizona, Nevada, and Georgia simultaneously pretended to halt ballot-counting while continuing to count is evidence of election fraud collusion.
12. In the contested states, the voting machines were alleged to have processed hundreds of thousands of ballots within a short time, which is a physical impossibility.
19. Over 100,000 Pennsylvania absentee ballots were returned a day after they were mailed out, on the day they were mailed out, or on the day before they were mailed out.
20. In all the contested areas, and at Dominion’s website, Democrats have been systematically failing to create or have destroyed all data that could be used to demonstrate fraud. This creates the legal presumption that the data do, in fact, show fraud.
A growing body of evidence ranging from straightforward ballot audits to complex quantitative analyses suggests that the tabulation of the votes was characterized by enough chicanery to alter the outcome of the election. Consequently, a consensus has gradually developed among the auditors of publicly available information released by the states, and it contradicts the narrative promulgated by the Democrats and the media. The more data experts see, the less convinced they are that Biden won.
Among the analysts who question the legitimacy of Biden’s victory is Dr. Navid Keshavarz-Nia, a cybersecurity expert whose technical expertise was touted by the New York Times last September and who has been described as a hero in the Washington Monthly … His nine-page affidavit describes how it is possible to manipulate votes, where this occurred, and sums up his findings as follows:
I conclude with high confidence that the election 2020 data were altered in all battleground states resulting in hundreds of thousands of votes that were cast for President Trump to be transferred to Vice President Biden. These alterations were the result of systemic and widespread exploitable vulnerabilities in DVS, Scytl/SOE Software and Smartmatic systems that enabled operators to achieve the desired results. In my view, the evidence is overwhelming and incontrovertible.
. . .
Meanwhile, no discussion of 2020 election skulduggery is complete without a discussion of the Democrat precincts that record more votes than registered voters. Rep. Bill Posey (R-Fla.) tweeted the following on that perennial topic: “According to an affidavit in the MI lawsuit, one Michigan precinct/twnship had 781.91% turnout. How does this happen?”
Good question. No fewer than six precincts listed by Rep. Posey experienced turnout exceeding 120 percent. Another 10 allegedly enjoyed 100 percent turnout. This is an insult to the electorate’s intelligence, and it happened in Democrat precincts all across the nation.
I’m dumbfounded by the sheer chutzpah of the Democratic Party in expecting us to surrender to such naked, unmistakable chicanery. They really seem to believe that Americans will "roll over and play dead" in the face of a political fait accompli. Millions of us will not. I wasn’t joking when I warned, some weeks ago, that civil war was now a real possibility. If it comes, it’s this electoral fraud that will have struck the spark and ignited the flame.
What’s equally astonishing is the complicity of the mainstream media in all this. They seem to think that people still believe them: that they can sway public opinion through their propaganda. For a great many Americans, that’s no longer the case. We don’t trust the news media at all, and regard journalists as no more trustworthy than politicians.
The journalists themselves don’t seem to get it. They write articles with titles like "US election results: Why the most accurate bellwether counties were wrong" – but they never stop to consider that it’s the (false) election results that were wrong, not the bellwether counties. The bellwethers voted for President Trump, and according to any authentic, non-criminally-influenced count of the votes, he did win. They were right.
This is far from over. President Trump’s biggest challenge is to get his evidence in front of the Supreme Court. If he can do that, I can’t believe that our highest court will disregard or overrule the quantity and quality of that evidence.
If Joe Biden becomes President, it’ll be a sham, a fake and a public lie. Our constitutional republic will effectively have ceased to exist – so it’ll be up to Americans who support the constitution to restore it to its rightful place. If we can no longer trust the ballot box to produce an accurate, verifiable election result, then other means will gain stronger support. May God preserve us from that!
A perfect storm has basically made cheap ammunition as scarce as hen’s teeth:
The state and federal threats to our gun rights are driving people to buy more ammunition to keep in reserve. COVID-19 is limiting the number of employees able to make ammunition, and a wave of millions of new gun owners who need to feed their recent firearms purchases for training and carry is eating up or holding back the supply lines.
Ammunition manufacturers customarily have a pretty good handle on how much of each type of ammunition they are going to sell annually. Knowing this, they tool up and crank out x number of rounds of say, .22 LR. They then retool and make y number of rounds of 9mm, then z rounds of .223, etc. The perfect storm has thrown a monkey wrench into those calculations and the industry is trying desperately to catch up to the exploding demand. See this related article “How Much Ammunition is Produced for the United States Market?“.
Even components for reloading are in short supply.
Where Can I Find Ammunition That is Cheap?
ROFL! Good luck on that cheap part. If you do find some cheap ammo at 2019 prices, you might also stumble on a unicorn or two in your backyard.
OK, OK. Where Can I Find Ammunition, Even If Pricey?
Fortunately, the free market can help. When a commodity is in short supply relative to demand, its price tends to rise, which helps lower demand and keeps the product at least available to those who need it and are willing to pay extra to have it.
There are three places I go to look for the best prices and availability on ammunition:
AmmoLand’s Gun Deal (go to the “Daily Deals Page” to see the deals; each deal links to the seller’s website for that item)
AmmoSeek at www.ammoseek.com (links each deal to the seller’s website for that item)
Virginia Gun Trader at www.vaguntrader.com (the ammunition section has private sellers, with their locality, in Virginia who have ammunition they want to sell – there are often good deals to be had this way)
Good luck and happy hunting!
Editor’s Note: Lots of folks are looking for 9mm ammunition; check out these fast links for checking select ammunition retailers’ inventory online.
About Virginia Citizens Defense League, Inc. (VCDL):
Virginia Citizens Defense League, Inc. (VCDL) is an all-volunteer, non-partisan grassroots organization dedicated to defending the human rights of all Virginians. The Right to Keep and Bear Arms is a fundamental human right.
Cyber Monday deal: get up to 43% off Anker charging accessories for iPhone, iPad, and Mac
https://ift.tt/3fNwPwC
Now’s the time to make sure you never run out of charge on your iPhone, iPad, or even your Mac, as Anker chargers and cables are on sale for Cyber Monday.
Apple devices now feature longer battery life than before, but it would still be great if we didn’t have to keep charging them, or if we could just forget about charging. We’re not there yet, but get an Anker battery charging accessory and you’ll be close.
Your iPhone has run out of power at a key moment, and so have the iPhones of everyone you know. So now is the time to fix that for yourself by taking advantage of Anker’s Cyber Monday sale. Once you’ve got yours, you could stock up on Christmas presents for everyone else, too.
Anker chargers for iPhone and iPad
The greatest saving in Anker’s Cyber Monday sale is on its Anker PowerCore 26800 Portable Charger. This is now $37.49 instead of $65.98 — a saving of 43%.
Charge up this device once and it can then recharge your iPhone more than six times. Alternatively, it can recharge certain iPads at least twice.
The PowerCore 26800 Portable Charger has three USB ports so you can recharge multiple devices at once. You also get a micro USB cable for charging the PowerCore, and a travel pouch.
Also in the Cyber Monday sale is Anker’s PowerCore Slim 10000 PD, USB-C Power Bank (18W). Usually $29.99, the black version is on sale for $19.99, or a 33% saving.
It’s a slimline charger, handy for travelling, which can still recharge two devices at once. Anker claims that it will recharge an iPhone XS, for example, more than twice – and provide almost a full charge for an 11-inch iPad Pro.
Anker chargers for iPhone, iPad — and Mac
Plug the Anker USB C Fast Charger into a power adapter and you can simultaneously recharge four devices — including your USB-C-powered Mac. The Anker USB C Fast Charger is on sale for Cyber Monday at $39.99, a 31% saving on its regular price of $57.99.
It’s another slimline Anker charger, this time meant to be used at your desk instead of travelling, and offers 45W via its USB-C port for the Mac, or 18W for iPhones and iPads.
Anker cables
If you can never have enough power, you also can never have enough power cables. Anker is offering its twin-pack of USB-C to Lightning cables for $19.99.
That’s a 43% saving on its usual price of $34.99, and both cables are the full 6ft long.
In this tutorial, we will explore numerous examples of using the BeautifulSoup library in Python. For a better understanding, let us follow a few guidelines/steps that will help us simplify things and produce efficient code. Please have a look at the framework/steps that we are going to follow in all the examples mentioned below:
Inspect the HTML and CSS code behind the website/webpage.
Import the necessary libraries.
Create a User Agent (Optional).
Send get() request and fetch the webpage contents.
Check the Status Code after receiving the response.
Create a Beautiful Soup Object and define the parser.
Implement your logic.
❖Disclaimer: This article considers that you have gone through the basic concepts of web scraping. The sole purpose of this article is to list and demonstrate examples of web scraping. The examples mentioned have been created only for educational purposes. In case you want to learn the basic concepts before diving into the examples, please follow the tutorial at this link.
Without further delay let us dive into the examples. Let the games begin!
Example 1: Scraping An Example Webpage
Let’s begin with a simple example where we are going to extract data from a given table in a webpage. The webpage from which we are going to extract the data has been mentioned below:
The code to scrape the data from the table in the above webpage has been given below.
# 1. Import the necessary LIBRARIES
import requests
from bs4 import BeautifulSoup

# 2. Create a User Agent (Optional)
headers = {"User-Agent": "Mozilla/5.0 (Linux; U; Android 4.2.2; he-il; NEO-X5-116A Build/JDQ39) AppleWebKit/534.30 ("
                         "KHTML, like Gecko) Version/4.0 Safari/534.30"}

# 3. Send get() Request and fetch the webpage contents
response = requests.get("https://shubhamsayon.github.io/python/demo_html.html", headers=headers)
webpage = response.content

# 4. Check Status Code (Optional)
# print(response.status_code)

# 5. Create a Beautiful Soup Object
soup = BeautifulSoup(webpage, "html.parser")

# 6. Implement the Logic.
for tr in soup.find_all('tr'):
    topic = "TOPIC: "
    url = "URL: "
    values = [data for data in tr.find_all('td')]
    for value in values:
        print(topic, value.text)
        topic = url
    print()
Output:
TOPIC: __str__ vs __repr__ In Python
URL: https://blog.finxter.com/python-__str__-vs-__repr__/
TOPIC: How to Read a File Line-By-Line and Store Into a List?
URL: https://blog.finxter.com/how-to-read-a-file-line-by-line-and-store-into-a-list/
TOPIC: How To Convert a String To a List In Python?
URL: https://blog.finxter.com/how-to-convert-a-string-to-a-list-in-python/
TOPIC: How To Iterate Through Two Lists In Parallel?
URL: https://blog.finxter.com/how-to-iterate-through-two-lists-in-parallel/
TOPIC: Python Scoping Rules – A Simple Illustrated Guide
URL: https://blog.finxter.com/python-scoping-rules-a-simple-illustrated-guide/
TOPIC: Flatten A List Of Lists In Python
URL: https://blog.finxter.com/flatten-a-list-of-lists-in-python/
Video Walkthrough of The Above Code:
Example 2: Scraping Data From The Finxter Leaderboard
This example shows how we can easily scrape data from the Finxter dashboard which lists the elos/points. The image given below depicts the data that we are going to extract from https://app.finxter.com.
The code to scrape the data from the table in the above webpage has been given below.
# import the required libraries
import requests
from bs4 import BeautifulSoup

# create User-Agent (optional)
headers = {"User-Agent": "Mozilla/5.0 (CrKey armv7l 1.5.16041) AppleWebKit/537.36 (KHTML, like Gecko) "
                         "Chrome/31.0.1650.0 Safari/537.36"}

# get() Request
response = requests.get("https://app.finxter.com/learn/computer/science/", headers=headers)

# Store the webpage contents
webpage = response.content

# Check Status Code (Optional)
print(response.status_code)

# Create a BeautifulSoup object out of the webpage content
soup = BeautifulSoup(webpage, "html.parser")

# The logic
for table in soup.find_all('table', class_='w3-table-all', limit=1):
    for tr in table.find_all('tr'):
        name = "USERNAME: "
        elo = "ELO: "
        rank = "RANK: "
        for td in tr.find_all('td'):
            print(name, td.text.strip())
            name = elo
            elo = rank
        print()
Output: Please download the file given below to view the extracted data as a result of executing the above code.
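If you would rather persist the scraped rows yourself instead of printing them, a small variation on the code above can write them to a CSV file. The sketch below assumes the same table structure as in the example; the output file name (finxter_leaderboard.csv) is purely hypothetical.

import csv
import requests
from bs4 import BeautifulSoup

headers = {"User-Agent": "Mozilla/5.0 (CrKey armv7l 1.5.16041) AppleWebKit/537.36 (KHTML, like Gecko) "
                         "Chrome/31.0.1650.0 Safari/537.36"}
response = requests.get("https://app.finxter.com/learn/computer/science/", headers=headers)
soup = BeautifulSoup(response.content, "html.parser")

# Collect each table row as a list of cell texts, skipping rows without <td> cells.
rows = []
for table in soup.find_all('table', class_='w3-table-all', limit=1):
    for tr in table.find_all('tr'):
        cells = [td.text.strip() for td in tr.find_all('td')]
        if cells:
            rows.append(cells)

# Write the rows to a CSV file (hypothetical file name; column order assumed from the example above).
with open('finxter_leaderboard.csv', 'w', newline='') as f:
    writer = csv.writer(f)
    writer.writerow(['USERNAME', 'ELO', 'RANK'])
    writer.writerows(rows)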
Example 3: Scraping Data From A Job Website
Data scraping can prove to be extremely handy while automating searches on job websites. The example given below is a complete walkthrough of how you can scrape data from job websites. The image given below depicts the website whose data we shall be scraping.
In the code given below, we will try and extract the job title, location, and company name for each job that has been listed. Please feel free to run the code on your system and visualize the output.
import requests
from bs4 import BeautifulSoup

# create User-Agent (optional)
headers = {"User-Agent": "Mozilla/5.0 (CrKey armv7l 1.5.16041) AppleWebKit/537.36 (KHTML, like Gecko) "
                         "Chrome/31.0.1650.0 Safari/537.36"}

# get() Request
response = requests.get("http://pythonjobs.github.io/", headers=headers)

# Store the webpage contents
webpage = response.content

# Check Status Code (Optional)
# print(response.status_code)

# Create a BeautifulSoup object out of the webpage content
soup = BeautifulSoup(webpage, "html.parser")

# The logic
for job in soup.find_all('section', class_='job_list'):
    title = [a for a in job.find_all('h1')]
    for n, tag in enumerate(job.find_all('div', class_='job')):
        company_element = [x for x in tag.find_all('span', class_='info')]
        print("Job Title: ", title[n].text.strip())
        print("Location: ", company_element[0].text.strip())
        print("Company: ", company_element[3].text.strip())
        print()
Output:
Job Title: Software Engineer (Data Operations)
Location: Sydney, Australia / Remote
Company: Autumn Compass
Job Title: Developer / Engineer
Location: Maryland / DC Metro Area
Company: National Institutes of Health contracting company.
Job Title: Senior Backend Developer (Python/Django)
Location: Vienna, Austria
Company: Bambus.io
Video Walkthrough Of Above Code:
Example 4: Scraping Data From An Online Book Store
Web scraping is used on a large scale when it comes to extracting information about products from shopping websites. In this example, we shall see how we can extract data about books/products from alibris.com.
The image given below depicts the webpage from which we are going to scrape data.
# import the required libraries
import requests
from bs4 import BeautifulSoup

# create User-Agent (optional)
headers = {"User-Agent": "Mozilla/5.0 (Linux; U; Android 4.2.2; he-il; NEO-X5-116A Build/JDQ39) AppleWebKit/534.30 (KHTML, like Gecko) Version/4.0 Safari/534.30"}

# get() Request
response = requests.get(
    "https://www.alibris.com/search/books/subject/Fiction", headers=headers)

# Store the webpage contents
webpage = response.content

# Check Status Code (Optional)
# print(response.status_code)

# Create a BeautifulSoup object out of the webpage content
soup = BeautifulSoup(webpage, "html.parser")

# The logic
for parent in soup.find_all('ul', {'class': 'primaryList'}):
    for n, tag in enumerate(parent.find_all('li')):
        title = [x for x in tag.find_all('p', class_='bookTitle')]
        author = [x for x in tag.find_all('p', class_='author')]
        price = [x for x in tag.find_all('a', class_='buy')]
        for item in title:
            print("Book: ", item.text.strip())
        for item in author:
            author = item.text.split("\n")
            print("AUTHOR: ", author[2])
        for item in price:
            if 'eBook' in item.text.strip():
                print("eBook PRICE: ", item.text.strip())
            else:
                print("PRICE: ", item.text.strip())
        print()
Output: Please download the file given below to view the extracted data as a result of executing the above code.
Example 5: Scraping Data From A Website With Hyperlinks
Until now we have seen examples where we scraped data directly from a webpage. Now, we will find out how we can extract data from websites that have hyperlinks. In this example, we shall extract data from https://codingbat.com/. Let us try and extract all the questions listed under the Python category in codingbat.com.
The demonstration given below depicts sample data that we are going to extract from the website.
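The full code for this example is not reproduced here, but the general approach is to fetch the Python category page, collect the hyperlinks to each group of problems, follow each link, and gather the question names from the linked pages. Below is a minimal sketch along those lines; the URL paths used to identify group pages (/python/...) and question pages (/prob/...) are assumptions about the site's structure and may need adjusting.

import requests
from bs4 import BeautifulSoup

headers = {"User-Agent": "Mozilla/5.0"}
base_url = "https://codingbat.com"

# 1. Fetch the Python category page and collect links to the problem-group pages.
#    (Assumption: group pages live under /python/..., e.g. /python/Warmup-1.)
response = requests.get(base_url + "/python", headers=headers)
soup = BeautifulSoup(response.content, "html.parser")
group_links = {a['href'] for a in soup.find_all('a', href=True)
               if a['href'].startswith('/python/')}

# 2. Follow each group link and collect the individual question links.
#    (Assumption: question pages live under /prob/...)
for group in sorted(group_links):
    group_page = requests.get(base_url + group, headers=headers)
    group_soup = BeautifulSoup(group_page.content, "html.parser")
    print("GROUP:", group)
    for a in group_soup.find_all('a', href=True):
        if a['href'].startswith('/prob/'):
            print("  QUESTION:", a.text.strip(), "->", base_url + a['href'])
    print()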
I hope you enjoyed the examples discussed in the article. Please subscribe and stay tuned for more articles and video content in the future!
Where to Go From Here?
Enough theory, let’s get some practice!
To become successful in coding, you need to get out there and solve real problems for real people. That’s how you can become a six-figure earner easily. And that’s how you polish the skills you really need in practice. After all, what’s the use of learning theory that nobody ever needs?
Practice projects are how you sharpen your saw in coding!
Do you want to become a code master by focusing on practical code projects that actually earn you money and solve problems for people?
Then become a Python freelance developer! It’s the best way of approaching the task of improving your Python skills—even if you are a complete beginner.
It might seem like zooming around the galaxy with Baby Yoda is all action and adventure. The Mandalorian’s Din Djarin explains how much work it is to take care of a 50-year-old poop machine in this clip that we’re sure many parents can relate to. A fun and charming fan parody from the guys at The Warp Zone.