Someone had a warning for New Yorkers visiting former President Donald Trump’s new hometown — leave if you are “woke.”
Palm Beach police are investigating after someone placed fliers over the weekend on New York-licensed cars parked on the wealthy island reading, “If you are one of those ‘woke’ people — leave Florida. You will be happier elsewhere, as will we.”
You wrote a Python script that you’re proud of, and now you want to show it off to the world. But how? Most people won’t know what to do with your .py file. Converting your script into a Python web application is a great solution to make your code usable for a broad audience.
In this course, you’ll learn how to go from a local Python script to a fully deployed Flask web application that you can share with the world.
By the end of this course, you’ll know:
What web applications are and how you can host them online
How to convert a Python script into a Flask web application
How to improve user experience by adding HTML to your Python code
How to deploy your Python web application to Google App Engine
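As a taste of the second bullet, converting a script into a Flask app can look roughly like this. This is a minimal sketch with a hypothetical temperature-conversion function, not code from the course itself:

```python
from flask import Flask, request

app = Flask(__name__)

def celsius_to_fahrenheit(celsius):
    # The original script's logic, now reusable from a web route.
    return celsius * 9 / 5 + 32

@app.route("/")
def index():
    # Read ?celsius=... from the query string, defaulting to 0.
    celsius = float(request.args.get("celsius", 0))
    return f"{celsius} C = {celsius_to_fahrenheit(celsius)} F"

# To serve it locally you would call app.run(), e.g. app.run(port=8080).
```

The script's logic lives in a plain function, so it stays testable on its own while the route merely adapts web input and output around it.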
In this blog post, we will compare the performance of backing up a MySQL database using mysqldump, the MySQL Shell Instance Dump utility, mysqlpump, mydumper, and Percona XtraBackup. All of these options are open source and free to use for the entire community.
To start, let’s see the results of the test.
Benchmark Results
The benchmark was run on an m5dn.8xlarge instance with 128 GB of RAM, 32 vCPUs, and two NVMe disks of 600 GB each (one for the backup and the other for the MySQL data). The MySQL version was 8.0.26, configured with an 89 GB buffer pool, a 20 GB redo log, and a sample database of 177 GB (more details below).
We can observe the results in the chart below:
And if we analyze the chart only for the multi-threaded options:
For each tool, I ran the command three times in order to experiment with 16, 32, and 64 threads. The exception is mysqldump, which has no parallel option and runs only in single-threaded mode.
We can observe interesting outcomes:
When using zstd compression, mydumper really shines in terms of performance. This option was added not long ago (MyDumper 0.11.3).
When mydumper is using gzip, MySQL Shell is the fastest backup option.
In third place, we have Percona XtraBackup.
mysqlpump is the 4th fastest, followed closely by mydumper when using gzip.
mysqldump is the classic, old-school way to perform dumps and is the slowest option of all.
In a server with more CPUs, the potential parallelism increases, giving even more advantage to the tools that can benefit from multiple threads.
Before starting the comparison, I ran mysqldump once and discarded the results to warm up the cache, otherwise our test would be biased because the first backup would have to fetch data from the disk and not the cache.
With everything set, I started the mysqldump with the following options:
PS: To use zstd, there are no changes in the command line, but you need to download the zstd binaries.
For mysqlpump:
$ time mysqlpump --default-parallelism=16 --all-databases > backup.out
For xtrabackup:
$ time xtrabackup --backup --parallel=16 --compress --compress-threads=16 --datadir=/mysql_data/ --target-dir=/backup/
Analyzing the Results
And what do the results tell us?
Parallel methods have similar performance throughput. The mydumper tool cut the execution time by 50% when using zstd instead of gzip, so the compression method makes a big difference when using mydumper.
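To see why the compression method matters so much, here is a small illustration using only Python's standard library. The gzip and lzma modules stand in for the gzip and zstd algorithms of the benchmark; the data and the resulting numbers are illustrative, not from the test machine:

```python
import gzip
import lzma
import time

# Repetitive "table-like" sample data, ~3 MB.
data = b"row-0123456789," * 200_000

for name, compress in [("gzip", gzip.compress), ("lzma", lzma.compress)]:
    start = time.perf_counter()
    out = compress(data)
    elapsed = time.perf_counter() - start
    print(f"{name}: {len(out):>8} bytes in {elapsed:.3f}s")
```

On highly repetitive data, a stronger algorithm can reduce the output by orders of magnitude, and the speed difference between algorithms often dwarfs the I/O cost, which matches the gap mydumper showed between gzip and zstd.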
For the util.dumpInstance utility, one advantage is that the tool stores data in both binary and text format and uses zstd compression by default. Like mydumper, it uses multiple files to store the data and has a good compression ratio.
XtraBackup took third place, within a few seconds of MySQL Shell. The main advantage of XtraBackup is its flexibility, providing point-in-time recovery (PITR) and encryption, for example.
Next, mysqlpump is more efficient than mydumper with gzip, but only by a small margin. Both are logical backup methods and work in much the same way. I tested mysqlpump with zstd compression, but the results were the same, which is why I didn’t add it to the chart. One possible explanation is that mysqlpump streams the data to a single file.
Lastly, mysqldump has the most predictable behavior, with similar execution times across different runs. The lack of parallelism and compression is a disadvantage; however, since mysqldump has been present since the earliest MySQL versions, it is, based on Percona support cases, still widely used as a logical backup method.
Please leave in the comments below what you thought about this blog post, if I missed something, or if it helped you. I will be glad to discuss it!
Useful Resources
Finally, you can reach us through social networks, our forum, or access our material using the links presented below:
If you’ve been spending more time at home during the pandemic, you may not realize how important proper footwear is for keeping certain injuries at bay.
Sean Peden, an orthopaedic foot and ankle specialist from Yale University Medicine, says not wearing supportive footwear on a regular basis can lead to foot pain and other problems.
“Many people are continuing to work at home part- or full-time, which for some can mean wearing slippers or walking around barefoot,” Peden says. “And because of that, many patients are coming to us with foot problems.”
Taking good care of your feet will not only help you avoid common injuries like tendonitis and plantar fasciitis, but it can also prevent other issues with your hips, knees, and back from developing, he adds.
Here, Peden shares some of the most common foot problems he sees—and simple treatments to get relief:
Think of shoes as shock absorbers
Just as you would pick out an appropriate shoe for your commute into the office, it’s important to put the same level of thought into selecting an at-home shoe.
Walking barefoot at home is not recommended for the same reason walking barefoot outside is ill-advised, Peden says.
“All kinds of footwear protect your feet. Over the course of weeks or months, the strain of walking barefoot can add significant stress to your arches, tendons, plantar fascia, and joints,” he says. “This can lead to a range of complications, from minor conditions such as calluses to major issues such as arch collapse.”
It may help to think of footwear as shock absorbers and, based on body type and gait, some of us need more shock absorption than others, Peden says.
“If you have sore feet—or have had foot problems in the past—wearing a pair of what I call ‘house shoes,’ or ‘house slippers,’ is a good idea.”
By that, Peden means a hard-soled, slip-on shoe or slipper that is worn exclusively inside the home (ideally) to avoid bringing in dirt or bacteria.
“To be practical, I suggest a slip-on clog or slipper without laces. That way, you don’t have to tie and untie your shoes 10 times a day,” Peden says. “A hard sole is important because the harder the sole, the less stress the joints and tendons in your foot experience with each step. The hard sole transfers that stress to the shoe rather than to the foot.”
In general, avoid fluffy, formless slippers, he advises. “If you are at home, you might go up and down stairs dozens of times a day—or do chores around the house. And those are not activities to do with footwear that doesn’t have any support,” Peden says. “A good rule of thumb is if it isn’t something you could walk in for a few blocks comfortably, you shouldn’t wear it around the house all day, either.”
Painful tendonitis
One of the most common foot problems Peden has seen in patients since the pandemic started is Achilles tendonitis, or inflammation of a tendon (a thick cord of tissue that connects muscles to bones). The Achilles tendon runs from the back of your calf to your heel bone. Achilles tendonitis can cause pain and swelling in the foot and ankle.
An injury, overuse, and flat feet are all causes of Achilles tendonitis, Peden says. “It can be an issue especially if people with flat feet spend six months to a year not wearing supportive shoes on a regular basis,” he says. “The tendon in the arch of the foot becomes inflamed as the foot gets flatter. It is quite painful and can be debilitating.”
Peden says he is also seeing more patients with posterior tibial tendonitis, which causes a collapsed arch or flat foot.
The Fix: For acute pain, the first things to try are rest, ice, and staying off your feet as much as possible. Finding footwear with good arch support is another must, Peden says.
“Some people might need an ankle brace or additional inserts for their shoes, but for the vast majority, proper footwear is the answer. These tendon flares generally last a few months, but patients usually see improvement within a week or two.”
People with tendon issues should get proper treatment, Peden says. “You want to avoid developing a chronic tendon issue, because those are harder to cure.”
Plantar fasciitis: ‘stabbing’ heel pain
Many patients have developed plantar fasciitis, inflammation of the band of tissue on the bottom of your foot.
A common symptom is a stabbing pain in the heel that can be the most intense when you first step out of bed in the morning. That’s because the plantar fascia, which runs from the heel to the base of your toes, tightens overnight.
The plantar fascia supports the arch of the foot and absorbs stress. Too much stress—from standing on your feet on a hard surface for a long time, improper shoes, or running—can cause irritation and tiny tears in the band of tissue.
“The pain is usually on the bottom part of the heel,” Peden says. “It’s associated with tight Achilles tendons and calf muscles. If people spend a lot of their day sitting, for example, the muscles can tighten up, and wearing improper footwear can exacerbate the issue.
“For people who work outside the home and are on their feet all day, including nurses, they should wear a supportive shoe—and not something too soft or flexible. This can include sneakers, a hard clog, or a work shoe, depending on personal preference.”
The Fix: Besides supportive footwear and avoiding walking around barefoot, treatment should include a home stretching program to address the tightness in the calf muscles and Achilles tendons, Peden says.
Another effective treatment is to wear a soft, flexible splint that holds your foot at a 90-degree angle while you are sleeping; this keeps the plantar fascia stretched out. You can also wear a splint while lying on the couch watching TV.
As painful as plantar fasciitis can be, Peden says it is not a progressive condition. “People often worry it’s the start of something like arthritis, which continues to get worse,” he says. “It might take a few months of conservative, noninvasive, nonsurgical treatments, but patients with plantar fasciitis typically get better.”
Physical therapy and lifestyle
Exercise, physical therapy, and weight loss can all make a difference in addressing foot pain, too.
“One pound of additional weight on your body leads to six pounds of additional pressure on your foot. So, if you lose 10 pounds, that is really taking 60 pounds of pressure off your foot,” Peden says.
With the pandemic, many people have gained weight, which compounds the problem. But the key is not to do too much too quickly to try to reverse it, Peden says.
“If you try to lose weight by suddenly walking too much, that’s hard on your feet, too, and may lead to other foot problems. So, I often recommend cross-training, including low-impact cardio activities like biking or swimming. You can walk, but try to take it easy and, as always, wear good, supportive shoes.”
Hiking shoes are often a good option, particularly if you walk on uneven surfaces, including trails. “They are a little safer than sneakers, and protect your foot and ankle better,” he says.
In certain cases, physical therapy is recommended for lingering foot issues. “Physical therapists have many techniques that can speed up the recovery process,” Peden says.
Surgery is rarely needed for chronic conditions like tendonitis or plantar fasciitis. “We always treat our patients first with nonsurgical options to hopefully manage the condition before we ever talk about surgery,” Peden says.
But if you are feeling foot pain, don’t be afraid to seek medical help, Peden advises.
“I know people have different comfort levels right now about seeking medical care during the pandemic, but if you have a foot issue and it’s been hurting for a while, you should go see your doctor. There are likely easy solutions.”
The Pandas DataFrame has several methods concerning Computations and Descriptive Stats. When applied to a DataFrame, these methods evaluate the elements and return the results.
Part 1 focuses on the DataFrame methods abs(), all(), any(), clip(), corr(), and corrwith().
Part 2 focuses on the DataFrame methods count(), cov(), cummax(), cummin(), cumprod(), cumsum().
Part 3 focuses on the DataFrame methods describe(), diff(), eval(), kurtosis().
Part 4 focuses on the DataFrame methods mad(), min(), max(), mean(), median(), and mode().
Getting Started
Remember to add the Required Starter Code to the top of each code snippet. This snippet will allow the code in this article to run error-free.
Required Starter Code
import pandas as pd
import numpy as np
Before any data manipulation can occur, two new libraries will require installation.
The pandas library enables access to/from a DataFrame.
The numpy library supports multi-dimensional arrays and matrices in addition to a collection of mathematical functions.
To install these libraries, navigate to an IDE terminal and execute the commands below. The command prompt in this example is a dollar sign ($); your terminal prompt may be different.
$ pip install pandas
Hit the <Enter> key on the keyboard to start the installation process.
$ pip install numpy
Hit the <Enter> key on the keyboard to start the installation process.
Feel free to check out the correct ways of installing those libraries here:
Line [1] creates a DataFrame from a Dictionary of Lists and saves it to df_teams.
Line [2] uses the mad() method with the axis parameter set to columns to calculate the mean absolute deviation (MAD) across the DataFrame. The lambda function formats the output to three (3) decimal places. This output saves to the result variable.
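The listing these steps describe was not preserved, so here is a sketch with assumed data. Note that DataFrame.mad() was deprecated in pandas 1.5 and removed in 2.0, so the sketch computes the mean absolute deviation explicitly (per column, for simplicity):

```python
import pandas as pd

# Assumed sample data; the original listing was lost.
df_teams = pd.DataFrame({'Bruins': [4, 5, 9],
                         'Oilers': [3, 6, 14],
                         'Leafs':  [2, 7, 11],
                         'Flames': [8, 9, 21]})

# Equivalent of the removed df.mad(): mean absolute deviation per column,
# formatted to three (3) decimal places with a lambda.
result = (df_teams - df_teams.mean()).abs().mean().apply(lambda x: round(x, 3))
print(result)
```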
Line [1] creates a DataFrame from a dictionary of lists and saves it to df_teams.
Line [2] uses the min() method with the axis parameter set to columns to retrieve the minimum value(s) from the DataFrame. This output saves to the result variable.
Line [3] outputs the result to the terminal.
Output:
Bruins     4
Oilers     3
Leafs      2
Flames     8
dtype: int64
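The listing these steps refer to was lost; below is a sketch with assumed data that reproduces the printed minima:

```python
import pandas as pd

# Assumed sample data chosen to match the output shown above.
df_teams = pd.DataFrame({'Bruins': [4, 5, 9],
                         'Oilers': [3, 6, 14],
                         'Leafs':  [2, 7, 11],
                         'Flames': [8, 9, 21]})

result = df_teams.min()  # minimum value in each column
print(result)
```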
This example uses two (2) arrays and retrieves the minimum value(s) of the Series.
Line [1] creates a DataFrame from a Dictionary of Lists and saves it to df_teams.
Line [2] uses max() with the axis parameter set to columns to retrieve the maximum value(s) from the DataFrame. This output saves to the result variable.
Line [3] outputs the result to the terminal.
Output:
Bruins      9
Oilers     14
Leafs      11
Flames     21
dtype: int64
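As with min(), the original max() listing was lost; a sketch with assumed data that reproduces the printed maxima:

```python
import pandas as pd

# Assumed sample data chosen to match the output shown above.
df_teams = pd.DataFrame({'Bruins': [4, 5, 9],
                         'Oilers': [3, 6, 14],
                         'Leafs':  [2, 7, 11],
                         'Flames': [8, 9, 21]})

result = df_teams.max()  # maximum value in each column
print(result)
```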
This example uses two (2) arrays and retrieves the maximum value(s) of the Series.
Lines [1-2] create lists of random grades and assign them to the appropriate variables.
Line [3] uses the NumPy maximum() function to compare the two (2) arrays element-wise. This output saves to the result variable.
Line [4] outputs the result to the terminal.
Output:
[73 84 83 93]
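The grade lists themselves were lost in extraction; the assumed values below (hypothetical names and numbers) reproduce the printed element-wise maxima:

```python
import numpy as np

# Hypothetical grade arrays; only their element-wise maximum is known.
grades_sem1 = np.array([63, 84, 75, 93])
grades_sem2 = np.array([73, 80, 83, 90])

# Element-wise maximum of the two arrays.
result = np.maximum(grades_sem1, grades_sem2)
print(result)
```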
DataFrame mean()
The mean() method returns the average of the DataFrame/Series across a requested axis. If a DataFrame is used, the results will return a Series. If a Series is used, the result will return a single number (float).
Line [1] creates a DataFrame from a Dictionary of Lists and saves it to df_teams.
Line [2] uses the mean() method with the axis parameter set to columns to calculate means (averages) from the DataFrame. The lambda function formats the output to two (2) decimal places. This output saves to the result variable.
Line [3] outputs the result to the terminal.
Output:
Bruins     6.00
Oilers     7.67
Leafs      6.67
Flames    12.00
dtype: float64
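The mean() listing was also lost; here is a sketch with assumed data that reproduces the printed means:

```python
import pandas as pd

# Assumed sample data chosen to match the output shown above.
df_teams = pd.DataFrame({'Bruins': [4, 5, 9],
                         'Oilers': [3, 6, 14],
                         'Leafs':  [2, 7, 11],
                         'Flames': [8, 7, 21]})

# Mean of each column, rounded to two (2) decimal places with a lambda.
result = df_teams.mean().apply(lambda x: round(x, 2))
print(result)
```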
For this example, Alice Accord, an employee of Rivers Clothing, has logged her hours for the week. Let’s calculate the mean (average) hours worked per day.
Code Example 2:
hours = pd.Series([40.5, 37.5, 40, 55])
result = hours.mean()
print(result)
Line [1] creates a Series of hours worked for the week and saves to hours.
Line [2] uses the mean() method to calculate the mean (average). This output saves to the result variable.
Line [3] outputs the result to the terminal.
Output:
43.25
DataFrame median()
The median() method calculates and returns the median of DataFrame/Series elements across a requested axis. In other words, the median determines the middle number(s) of the dataset.
To fully understand median from a mathematical point of view, watch this short tutorial:
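A quick sketch of median() on assumed data (the article's own listing for this method is not shown here):

```python
import pandas as pd

# Assumed sample data for illustration.
df_teams = pd.DataFrame({'Bruins': [4, 5, 9],
                         'Oilers': [3, 6, 14],
                         'Leafs':  [2, 7, 11],
                         'Flames': [8, 9, 21]})

result = df_teams.median()  # middle value of each column
print(result)
```

With an odd number of rows the median is the middle value itself; with an even number, pandas averages the two middle values.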
Part 5 focuses on the DataFrame methods pct_change(), quantile(), rank(), round(), prod(), and product().
If the installations were successful, a message displays in the terminal indicating the same.
DataFrame pct_change()
The pct_change() method calculates and returns the percentage change between the current and prior element(s) in a DataFrame. The return value is an object of the same type as the caller.
To fully understand this method and other methods in this tutorial from a mathematical point of view, feel free to watch this short tutorial:
These parameters apply to the rank() method:
axis
If zero (0) or index, apply the function to each column (the default). If one (1) or columns, apply the function to each row.
method
Determines how to rank identical values: average – the average rank of the group; min – the lowest rank of the group; max – the highest rank of the group; first – ranks assigned in the order the values appear in the array; dense – like min, but the rank increases by one (1) between groups.
numeric_only
Only include columns that contain integers, floats, or Boolean values.
na_option
Determines how NaN values rank: keep – assigns NaN to the rank values; top – assigns the lowest ranks to any NaN values found; bottom – assigns the highest ranks to any NaN values found.
ascending
Determines whether the elements/values rank in ascending or descending order.
pct
If set to True, the results return in percentile form. By default, this value is False.
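A small sketch of rank() and the method parameter on hypothetical data:

```python
import pandas as pd

points = pd.Series([7, 2, 7, 5])

# Default method='average': the tied 7s would take ranks 3 and 4,
# so each receives (3 + 4) / 2 = 3.5.
print(points.rank())

# method='dense': ties share one rank, and the rank increases by
# one (1) between groups.
print(points.rank(method='dense'))
```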
For this example, a CSV file is read in and is ranked on Population and sorted. Click here to download and move this file to the current working directory.
Andre Antunes improves another viral video by adding a heavy metal soundtrack to it. This time, he amped up the already formidable footage of New Zealand’s All Blacks rugby team performing a traditional Maori war dance known as the Haka. Original video here.
Webhooks are a common integration mechanism between systems. A small detail I don’t enjoy about them is
their ambiguity, so let me start by specifying what I’ll be talking about. At my organization, we call
them Outgoing Webhooks when we are a Webhook Provider sending our system’s data out, and Incoming
Webhooks when we are the receiver of data from other systems. This post describes my journey as a
Webhook Provider.
The Context
When talking about providing real-time data out of the system, we have a few scope definitions established
upfront. It’s a process that goes well with Pub/Sub systems, where we internally notify that an event
happened and have a subscriber loop through all configured webhooks to send out the event.
Webhooks can have different forms depending on whether the project is developer-facing like Algolia, GitHub,
AWS or Netlify. In my context, the goal is to provide Outgoing Webhooks for business people that are
looking to save Engineering time and hook things up themselves as easily and simply as possible. This means
that it’s more important to be able to send out data to off-the-shelf API solutions. Nobody is going to be
writing custom code on the receiving end to be able to receive my webhook calls. The business value is
that users can request our system to send one of their other systems a piece of data in real time.
The Stack
The ingredients for this project are:
Pub/Sub
HTTP Client
Payload Transformation
3rd-party Authentication
For the sake of understanding, let’s give a concrete goal for the pub/sub: every time a user logs in, we should
be able to notify another system. The event here may be meaningless in a lot of contexts, but the
implementation for other events would roughly be the same. With the event published, we can then
prepare an HTTP Client so that we may perform an authentication on a 3rd party system and send
an API call providing the data configured. Here the most powerful tech decision I made was to support
OAuth2. This means a business user can log into our platform, fill out an OAuth2 Form which looks
exactly like Postman OAuth2 configuration and configure an endpoint from a system like Microsoft Dynamics
to be the receiver of the Webhook call. Microsoft Dynamics doesn’t need to be prepared to be a Webhook
Receiver from my small company. Their public native REST HTTP APIs will do.
The Flow
We first identify which part of our system is responsible for fulfilling the event and then publish
that as an event. We may use Laravel Pub/Sub or something else. In my case we opted for AWS SNS.
Once the subscriber kicks in we need to load all webhooks stored in our database and trigger one by one.
This is a simple data query and looping.
For a great user experience, I like to create a record of the webhook execution prior to starting the
actual execution. This means that we can wrap the execution in an eager try/catch and always
update the delivery record with how it went. If something goes wrong on the remote server,
we can record the 400/500 status so that our users can be aware of what’s happening.
To actually execute the call to an external system, we’ll need an HTTP Client. The beauty here is to
either build a clean, standard HTTP Client or, if needed, one with an OAuth2 Bearer Token already
configured. We can do that roughly in the following way:
Here we’re checking if there is an OAuth2 relation on the webhook configured and if so we try to
load a Bearer Token to be used in the Authorization Header. But if the authentication fails, we can
already return a failed webhook delivery. If there is no OAuth2 configured, we can deliver a regular
API call.
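The original PHP factory snippet is not reproduced here, but the branching can be sketched in Python (function and parameter names are hypothetical):

```python
# Hypothetical sketch of the client factory decision: plain headers for a
# regular delivery, or headers carrying an OAuth2 Bearer token when the
# webhook has one configured.
def build_headers(oauth_token=None, custom_headers=None):
    headers = {"Content-Type": "application/json"}
    headers.update(custom_headers or {})
    if oauth_token:
        headers["Authorization"] = f"Bearer {oauth_token}"
    return headers
```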
If a customer wants our system to make an API call with a signature for verification (much like GitHub does)
we can easily allow for that with the following snippet:
PHP’s hash_hmac will compute a signature of the array $body using the user’s secret if they provided one.
Here I stored the user-provided secret encrypted with AWS Secret.
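The same signing idea can be sketched in Python (the post itself uses PHP's hash_hmac; the function name and payload shape here are hypothetical):

```python
import hashlib
import hmac
import json

def sign_payload(body, secret):
    # Serialize the body deterministically, then sign it with the
    # user-provided secret so the receiver can verify authenticity.
    payload = json.dumps(body, sort_keys=True, separators=(",", ":")).encode()
    return hmac.new(secret.encode(), payload, hashlib.sha256).hexdigest()
```

The receiver recomputes the same HMAC over the raw body and compares; a mismatch means the payload was tampered with or signed with a different secret.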
The Body of the Request
The most interesting part is the body of the request. Here I combined Eloquent with Twig. The choice of
Twig as opposed to Blade is because Twig has an Array Environment
and a secure Sandbox.
Eloquent is great for it because we can write Accessor methods that will act as the data source and
transform the data into an array so that Twig can parse it. Here is a sample of a Request Body:
This is what we store as the body of the webhook request. A frontend application can offer some
dropdown options and build this JSON automatically for the users. In the backend we will parse it
with the following script:
Twig’s ArrayLoader is perfect for inline, user-provided templates; Laravel Blade, unfortunately, is
tightly tied to the file system, which is what led me to choose Twig over Blade. The Security Policies
will disallow any attempt at remote code execution or scripting attacks from users. In order for this
to work, we only need the Eloquent model to have attributes called email or accessors such as the following:
The last important bit is the TwigSecurityPolicyForWebhooks. I wrote it because Twig doesn’t have any
sort of wildcard * character to allow property access.
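As a rough stdlib-Python illustration of the merge-tag idea (the actual implementation uses Twig's ArrayLoader plus a sandbox; the template and data below are hypothetical), string.Template substitutes user-provided tags without ever executing user code:

```python
from string import Template

# A user-provided request body with merge tags (hypothetical shape).
body_template = '{"email": "$email", "signup_date": "$signup_date"}'

# Data produced by the model layer (Eloquent accessors in the real project).
data = {"email": "alice@example.com", "signup_date": "2022-01-15"}

# safe_substitute never evaluates code, so a malicious template can at
# worst produce odd text, not run anything.
rendered = Template(body_template).safe_substitute(data)
print(rendered)
```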
With all of this setup, all we have left to do is to actually send out the webhook call and record
any result from it. The next snippet represents that portion:
I enjoyed working on this project A LOT. It combines a lot of simple and straightforward tech to give
a huge business benefit with easy drag-and-drop system integration. Users are able to pick virtually any
API out there and call it from our system in real time as things happen. The webhook configuration
consists of allowing users to define the body of the request, custom headers, an endpoint, and merge tags
for body transformation. Our users can call an API that expects a URL token, Basic Auth, OAuth2, or custom
headers. We are also able to sign the body of the request in case the receiving end wants/is able to
validate it. Twig security policies protect us against code injection and our users are able to build
their request body as the target system expects it. And finally the Webhook Delivery list will
always be up-to-date with every API call we made and their response status, timestamp and any relevant
diagnostic information.
This is an extremely simple yet powerful webhook provider implementation that empowers businesses to
seamlessly integrate data flows without having to write code.
As always, hit me up on Twitter with any
questions.