https://media.notthebee.com/articles/61e8300dc80ee61e8300dc80ef.jpg
What… in… the… world?
Not the Bee
https://static1.makeuseofimages.com/wordpress/wp-content/uploads/2022/01/children-having-fun-with-linux-commands.jpg
The Linux terminal is a powerful utility. You can use it to control the whole system, crafting and typing commands as you go about doing your everyday tasks. But it can quickly become overwhelming to keep staring at a command line and carry on with your work.
Lucky for you, the terminal is also a source of fun. You can play around with commands, listen to music, and even play games. While expecting a great deal of entertainment from a window full of text would be a stretch, there are plenty of utilities to help you kill some time when you're bored.
Here are some fun and entertaining commands every Linux user should try at least once.
Starting off the list with a fun tool every Linux user loves, CMatrix is a command-line utility that generates the classic “The Matrix” animation from the popular movie franchise of the same name. You can expect to see some great animations in different colors that you also get to customize.
Although CMatrix uses regular characters instead of the film's original Japanese glyphs, you'll definitely enjoy every moment you spend with the tool. Use it as a terminal screensaver or include it in your window manager rice screenshots; the choice is yours. You can even go to extremes and set up a dedicated laptop that runs CMatrix 24/7.
To install CMatrix on Debian-based distros like Ubuntu:
sudo apt install cmatrix
On Arch Linux and its derivatives:
sudo pacman -S cmatrix
On Fedora, CentOS, and RHEL:
sudo dnf install cmatrix
What does the cow say? Definitely, not just “moo.”
cowsay is an ASCII-art-based command-line utility that displays the specified input alongside a neat ASCII cow. While there's not much to this program, you can have it greet you with a random quote whenever you launch a terminal instance.
cowsay "Mooooo"
To install cowsay on Debian and Ubuntu:
sudo apt install cowsay
On Arch Linux:
sudo pacman -S cowsay
On Fedora, CentOS, and RHEL:
sudo dnf install cowsay
Everyone loves trains, especially steam locomotives. The Linux utility sl brings your favorite steam locomotive to your desk, using the terminal of course.
Running the sl command is very simple.
sl
Installing sl on Ubuntu and Debian is easy.
sudo apt install sl
Similarly, on Arch-based distributions:
sudo pacman -S sl
On Fedora, CentOS, and RHEL:
sudo dnf install sl
Have you ever seen a Linux terminal with beautifully crafted ASCII art at the top? You can achieve the same results using FIGlet, a command-line tool that converts user input into ASCII banners.
Unlike some other ASCII art generators, FIGlet doesn't impose a character limit, which is what sets it apart. You can create ASCII art of any length with the tool, although lines might wrap awkwardly if you supply lengthier strings.
FIGlet uses the following command syntax:
figlet "Your string here"
You can install FIGlet on Debian/Ubuntu using:
sudo apt install figlet
To install FIGlet on Arch-based distributions:
sudo pacman -S figlet
On Fedora, CentOS, and RHEL:
sudo dnf install figlet
Want to read a quote? Maybe something funny, or perhaps an educational message? The excitement is there every time you run fortune, as you don’t know what’s going to hit you next. fortune is a Linux utility that returns random messages and quotes on execution.
fortune
It’s easy to get engrossed in the command, reading the entertaining (mostly funny) quotes that fortune outputs. The best thing about the tool? You can pipe its output into cowsay and similar programs to produce an engaging terminal greeting for yourself.
fortune | cowsay
To install fortune on Ubuntu/Debian:
sudo apt install fortune
On Arch Linux and similar distributions:
sudo pacman -S fortune-mod
On Fedora, CentOS, and RHEL:
sudo dnf install fortune-mod
If you are someone who likes to have a pair of eyes on you every time you need to get something done, xeyes might be the best Linux tool for you. Literally, xeyes brings a pair of eyes to your desktop. The best part? The eyeballs move depending on your mouse pointer’s position.
Launching the program is easy. Simply type xeyes in the terminal and hit Enter. By default, the position of the eyes will be the top left, but you can easily change it using the -geometry flag.
On Ubuntu and Debian-based distros, you can install xeyes with APT.
sudo apt install x11-apps
To install xeyes on Arch-based distros:
sudo pacman -S xorg-xeyes
On Fedora, CentOS, and RHEL:
sudo dnf install xeyes
Want to make your Linux desktop lit? You need aafire. It is a terminal-based utility that starts an ASCII art fire right inside your terminal. Although you won’t physically feel the heat aafire brings to the table, it’s definitely a “cool” Linux program to have on your system.
To install aafire on Ubuntu and Debian:
sudo apt install libaa-bin
On Arch Linux and its derivatives:
sudo pacman -S aalib
On Fedora, CentOS, and other RHEL-based distros:
sudo dnf install aalib
Have you ever wanted your Linux desktop to speak exactly what you want it to? espeak is a text-to-speech utility that converts a specified string to speech in real time. You can play around with espeak by feeding it song lyrics or movie dialogue.
For the test run, you can try specifying a basic string first. Don’t forget to turn up your desktop’s speaker volume.
espeak "Hello World"
You can also change the amplitude and word gap, and experiment with different voices. Writers can use the tool to hear their words read aloud, which is a handy way to proofread content by ear.
On Ubuntu/Debian:
sudo apt install espeak
You can install espeak on Arch Linux from the AUR.
yay -S espeak
On Fedora, CentOS, and RHEL:
sudo dnf install espeak
For those who wish to own an aquarium someday, here’s your chance. As the name aptly suggests, asciiquarium creates a virtual aquarium inside your terminal using ASCII characters.
The fish and plants are colorized, which makes them come to life against the dull terminal screen. You'll also see ducks swimming across the water occasionally.
To install asciiquarium on Ubuntu and Debian:
sudo add-apt-repository ppa:ytvwld/asciiquarium
sudo apt install asciiquarium
On Arch-based distributions:
sudo pacman -S asciiquarium
Installing asciiquarium on RHEL-based distros is also easy.
sudo dnf install asciiquarium
Want to quickly generate a fake identity for testing? rig is what you need. As a command-line utility, it returns output in a format that's easy to read for both users and computers. You can also call rig from scripts to test functions that require user information in bulk.
To install rig on Ubuntu and Debian:
sudo apt install rig
On Arch-based distributions:
yay -S rig
On RHEL-based distros like Fedora and CentOS:
sudo dnf install rig
All the tools mentioned above guarantee a moment of fun amidst a busy day. You can install these utilities simply to play around with, or put them to productive use in your own scripts. There are several other programs that every Linux user should know about.
Whether you’re new to Linux or you’re a seasoned user, here are the best Linux software and apps you should be using today.
Deepesh is the Junior Editor for Linux at MUO. He writes informational guides on Linux, aiming to provide a blissful experience to all newcomers. Not sure about movies, but if you want to talk about technology, he’s your guy.
MUO – Feed
https://cdn0.thetruthaboutguns.com/wp-content/uploads/2022/01/IMG_0461-scaled.jpg
Holosun just introduced their new RML Rail Mounted Laser. It’s tiny and affordable and will come in both red and green laser versions. The RML will come in five models with MSRPs ranging from $105 to $162 and is expected to hit stores in March or April.
Here’s their press release . . .
Lasers are becoming invaluable to verify an accurate and effective aim, especially in low-light environments. Pistols and rifles fixed with lasers have been shown to improve fast target acquisition. With the growth of red dot optics, lasers have fast been growing in the industry as an alternative to mounted optics. Not only does this help to improve users’ response time, but it also makes a potential Point of Impact clear.
Holosun is known for optics and lasers. This year, Holosun releases the RML (Rail-Mounted Laser). The RML comes in at a very manageable 1.97″×1.18″×0.91″ and 1.3 ounces. Made with a durable polymer housing, the RML is IPX8 rated for water and dust resistance. Additionally, Holosun tests each unit to 2,000G shock resistance. This guarantees that the RML is suited for use in extreme environments.
The RML is available in either a red or green laser version, both of which are class 3R and <5mW output power. The RML package includes one CR1/3N lithium battery. The laser can be adjusted by 4MOA per click and can travel a total of +/-60 MOA. The rate of travel makes it ideal for a primary or even secondary zero, providing an alternate distance point of aim from iron sights or a pistol mounted optic.
With many features, it is easy to see why the RML is a strong contender. Holosun has made it easy to utilize the laser in multiple roles with multiple color options. For the hiker who carries a defensive pistol, the uniformed officer that relies on an alternate color laser and red dot, and everything in between, the RML fills their needs.
Specifications:
The Truth About Guns
http://img.youtube.com/vi/uEepEyrHmtE/0.jpg
Are you ready to head back to Middle Earth? Amazon Studios revealed the title and trailer Wednesday for its highly anticipated prequel to the “Lord of the Rings” series, called “The Lord of the Rings: The Rings of Power.”
The series will debut on Prime Video on Sept. 2.
“The Rings of Power” is set in the Second Age of Middle Earth, thousands of years before the events of J.R.R. Tolkien’s “The Hobbit” and “The Lord of the Rings.”
The series “will take viewers back to an era in which great powers were forged, kingdoms rose to glory and fell to ruin, unlikely heroes were tested, hope hung by the finest of threads, and the greatest villain that ever flowed from Tolkien’s pen threatened to cover all the world in darkness,” Amazon said in its YouTube description for the trailer.
Amazon founder Jeff Bezos tweeted an image of himself holding a big slab of wood with the series title on it. “Can’t wait for you to see it,” he wrote.
IGN has behind-the-scenes details on how the title sequence was created, and it wasn’t with CGI, but rather with molten metal and a “hunk of reclaimed redwood.”
Amazon first announced that it had acquired the rights to adapt Tolkien’s work in 2017.
“’The Lord of the Rings’ is a cultural phenomenon that has captured the imagination of generations of fans through literature and the big screen,” Sharon Tal Yguado, head of Scripted Series for Amazon Studios, said in a statement at the time.
Tolkien’s book series was named Amazon customers’ favorite book of the millennium in 1999. Director Peter Jackson’s theatrical adaptations included “The Fellowship of the Ring” (2001); “The Two Towers” (2002); and “The Return of the King” (2003). The films grossed nearly $6 billion worldwide and won a combined 17 Academy Awards, including Best Picture for “King.”
GeekWire
https://theawesomer.com/photos/2022/01/our_flag_means_death_t.jpg
Rhys Darby (Murray from Flight of the Conchords) stars in this high-seas comedy adventure series about a wealthy man who abandons his life of privilege to become a pirate. Taika Waititi, the busiest man in Hollywood, does double duty as Executive Producer and performs as Blackbeard. Premieres 3.2022 on HBO Max.
The Awesomer
http://img.youtube.com/vi/PMKuZoQoYE0/0.jpg
The Pandas DataFrame/Series has several methods to reshape, re-order, and sort data. When applied to a DataFrame/Series, these methods modify the structure or ordering of the elements.
This is Part 13 of the DataFrame methods series:
- abs(), all(), any(), clip(), corr(), and corrwith()
- count(), cov(), cummax(), cummin(), cumprod(), and cumsum()
- describe(), diff(), eval(), and kurtosis()
- mad(), min(), max(), mean(), median(), and mode()
- pct_change(), quantile(), rank(), round(), prod(), and product()
- add_prefix(), add_suffix(), and align()
- at_time(), between_time(), drop(), drop_duplicates(), and duplicated()
- equals(), filter(), first(), last(), head(), and tail()
- reset_index(), sample(), set_axis(), set_index(), take(), and truncate()
- backfill(), bfill(), fillna(), dropna(), and interpolate()
- isna(), isnull(), notna(), notnull(), pad(), and replace()
- droplevel(), pivot(), pivot_table(), reorder_levels(), sort_values(), and sort_index() (covered in this part)
Remember to add the Required Starter Code to the top of each code snippet. This snippet will allow the code in this article to run error-free.
Required Starter Code
import pandas as pd
import numpy as np
Before any data manipulation can occur, two libraries require installation:
- The pandas library enables access to/from a DataFrame.
- The numpy library supports multi-dimensional arrays and matrices, in addition to a collection of mathematical functions.
To install these libraries, navigate to an IDE terminal and execute the commands below. For the terminal used in this example, the command prompt is a dollar sign ($). Your terminal prompt may be different.
$ pip install pandas
Hit the <Enter>
key on the keyboard to start the installation process.
$ pip install numpy
Hit the <Enter>
key on the keyboard to start the installation process.
Feel free to check out the correct ways of installing those libraries here:
If the installations were successful, a message displays in the terminal indicating the same.
The droplevel() method removes the specified level from a DataFrame/Series multi-level index or column axis. This method returns a DataFrame/Series with the said level removed.
The syntax for this method is as follows:
DataFrame.droplevel(level, axis=0)
Parameter | Description |
---|---|
level |
If the level is a string, this level must exist. If a list, the elements must exist and be a level name/position of the index. |
axis |
If zero (0) or index is selected (default), the level is removed from the row index. If one (1) or columns, it is removed from the column labels. |
For this example, we generate random stock prices and then drop (remove) level Stock-B from the DataFrame.
nums = np.random.uniform(low=0.5, high=13.3, size=(3,4))
df_stocks = pd.DataFrame(nums).set_index([0, 1]).rename_axis(['Stock-A', 'Stock-B'])
print(df_stocks)
result = df_stocks.droplevel('Stock-B')
print(result)
The random numbers (size=(3,4)) save to nums, the DataFrame built from them saves to df_stocks, and the droplevel('Stock-B') output saves to the result variable.
Output:
df_stocks
2 | 3 | ||
Stock-A | Stock-B | ||
12.327710 | 10.862572 | 7.105198 | 8.295885 |
11.474872 | 1.563040 | 5.915501 | 6.102915 |
result
2 | 3 | |
Stock-A | ||
12.327710 | 7.105198 | 8.295885 |
11.474872 | 5.915501 | 6.102915 |
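droplevel() also works on a multi-level column axis via axis=1. As a minimal sketch (the two-level column index below is made-up data, not from the article's example):

```python
import pandas as pd

# Build a tiny DataFrame with a two-level column index.
cols = pd.MultiIndex.from_tuples([('A', 'x'), ('A', 'y')])
df = pd.DataFrame([[1, 2], [3, 4]], columns=cols)

# axis=1 removes the specified level from the columns, not the index.
result = df.droplevel(1, axis=1)
print(result.columns.tolist())  # ['A', 'A']
```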
The pivot()
method reshapes a DataFrame/Series and produces/returns a pivot table based on column values.
The syntax for this method is as follows:
DataFrame.pivot(index=None, columns=None, values=None)
Parameter | Description |
---|---|
index |
This parameter can be a string, object, or a list of strings and is optional. This option makes up the new DataFrame/Series index. If None , the existing index is selected. |
columns |
This parameter can be a string, object, or a list of strings and is optional. Makes up the new DataFrame/Series column(s). |
values |
This parameter can be a string, object, or a list of the previous and is optional. |
For this example, we generate 3-day sample stock prices for Rivers Clothing. The column headings display the following characters.
cdate_idx = ['01/15/2022', '01/16/2022', '01/17/2022'] * 3
group_lst = list('AAABBBCCC')
vals_lst = np.random.uniform(low=0.5, high=13.3, size=(9))
df = pd.DataFrame({'dates': cdate_idx, 'group': group_lst, 'value': vals_lst})
print(df)
result = df.pivot(index='dates', columns='group', values='value')
print(result)
The dates save to cdate_idx and the group letters to group_lst. np.random.uniform creates a random list of nine (9) numbers between the set range, which saves to vals_lst. These combine into the DataFrame df, and the pivot() output saves to result.
Output:
df
dates | group | value | |
0 | 01/15/2022 | A | 9.627767 |
1 | 01/16/2022 | A | 11.528057 |
2 | 01/17/2022 | A | 13.296501 |
3 | 01/15/2022 | B | 2.933748 |
4 | 01/16/2022 | B | 2.236752 |
5 | 01/17/2022 | B | 7.652414 |
6 | 01/15/2022 | C | 11.813549 |
7 | 01/16/2022 | C | 11.015920 |
8 | 01/17/2022 | C | 0.527554 |
result
group | A | B | C |
dates | |||
01/15/2022 | 9.627767 | 2.933748 | 11.813549 |
01/16/2022 | 11.528057 | 2.236752 | 11.015920 |
01/17/2022 | 13.296501 | 7.652414 | 0.527554 |
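One behavior worth noting: pivot() cannot aggregate, so duplicate index/column pairs raise an error, while pivot_table() (covered next) aggregates them. A small sketch with made-up data:

```python
import pandas as pd

# Two rows share the same (dates, group) pair.
df = pd.DataFrame({'dates': ['01/15', '01/15'],
                   'group': ['A', 'A'],
                   'value': [1.0, 2.0]})

# pivot() raises ValueError on duplicate entries...
try:
    df.pivot(index='dates', columns='group', values='value')
except ValueError as err:
    print('pivot() failed:', err)

# ...whereas pivot_table() aggregates them (mean by default).
result = pd.pivot_table(df, index='dates', columns='group', values='value')
print(result.loc['01/15', 'A'])  # 1.5
```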
The pivot_table()
method streamlines a DataFrame to contain only specific data (columns). For example, say we have a list of countries with associated details. We only want to display one or two columns. This method can accomplish this task.
The syntax for this method is as follows:
DataFrame.pivot_table(values=None, index=None, columns=None, aggfunc='mean', fill_value=None, margins=False, dropna=True, margins_name='All', observed=False, sort=True)
Parameter | Description |
---|---|
values |
This parameter is the column to aggregate and is optional. |
index |
If the parameter is an array, it must be the same length as the data. It may contain any other data types (but not a list). |
columns |
If an array, it must be the same length as the data. It may contain any other data types (but not a list). |
aggfunc |
This parameter can be a list of functions. These name(s) will display at the top of the relevant column names (see Example 2). |
fill_value |
This parameter is the value used to replace missing values in the table after the aggregation has occurred. |
margins |
If set to True , this parameter will add the row/column data to create subtotal(s) or total(s). False , by default. |
dropna |
This parameter will not include any columns where the value(s) are NaN . True by default. |
margins_name |
This parameter is the name of the row/column containing the totals if margins parameter is True . |
observed |
Applies only when groupers are categoricals. If True , show only observed values; if False , show all values. |
sort |
By default, sort is True . The values automatically sort. If False , no sort is applied. |
For this example, a comma-delimited CSV file is read in. A pivot table is created based on selected parameters.
Code – Example 1:
df = pd.read_csv('countries.csv')
df = df.head(5)
print(df)
result = pd.pivot_table(df, values='Population', columns='Capital')
print(result)
The CSV file reads into df, which is then trimmed to the top five (5) rows (over-writing df). The pivot_table() output saves to result.
Output:
df
Country | Capital | Population | Area | |
0 | Germany | Berlin | 83783942 | 357021 |
1 | France | Paris | 67081000 | 551695 |
2 | Spain | Madrid | 47431256 | 498511 |
3 | Italy | Rome | 60317116 | 301338 |
4 | Poland | Warsaw | 38383000 | 312685 |
result
Capital | Berlin | Madrid | Paris | Rome | Warsaw |
Population | 83783942 | 47431256 | 67081000 | 60317116 | 38383000 |
For this example, a comma-delimited CSV file is read in. A pivot table is created based on selected parameters. Notice the max
function.
Code – Example 2
df = pd.read_csv('countries.csv')
df = df.head(5)
result = pd.pivot_table(df, values='Population', columns='Capital', aggfunc=[max])
print(result)
The CSV file reads into df, which is then trimmed to the top five (5) rows (over-writing df). The pivot_table() call aggregates with aggfunc=[max], and the output saves to result.
Output:
result
max | |||||
Capital | Berlin | Madrid | Paris | Rome | Warsaw |
Population | 83783942 | 47431256 | 67081000 | 60317116 | 38383000 |
The reorder_levels() method re-arranges the levels of a DataFrame/Series multi-level index. The new order may not drop or duplicate any level.
The syntax for this method is as follows:
DataFrame.reorder_levels(order, axis=0)
Parameter | Description |
---|---|
order |
This parameter is a list containing the new order levels. These levels can be a position or a label. |
axis |
If zero (0) or index is selected (default), the levels of the row index are re-ordered. If one (1) or columns, the column levels are re-ordered. |
For this example, there are five (5) students, each with some associated data. Grades are generated using np.random.randint().
index = [(1001, 'Micah Smith', 14), (1001, 'Philip Jones', 15),
         (1002, 'Ben Grimes', 16), (1002, 'Alicia Heath', 17),
         (1002, 'Arch Nelson', 18)]
m_index = pd.MultiIndex.from_tuples(index)
grades_lst = np.random.randint(45, 100, size=5)
df = pd.DataFrame({"Grades": grades_lst}, index=m_index)
print(df)
result = df.reorder_levels([1, 2, 0])
print(result)
The List of Tuples saves to index and converts to a MultiIndex saved to m_index. The random grades save to grades_lst, the DataFrame built from them saves to df, and the reorder_levels([1, 2, 0]) output saves to result.
Output:
df
Grades | |||
1001 | Micah Smith | 14 | 52 |
Philip Jones | 15 | 65 | |
1002 | Ben Grimes | 16 | 83 |
Alicia Heath | 17 | 99 | |
Arch Nelson | 18 | 78 |
result
Grades | |||
Micah Smith | 14 | 1001 | 52 |
Philip Jones | 15 | 1001 | 65 |
Ben Grimes | 16 | 1002 | 83 |
Alicia Heath | 17 | 1002 | 99 |
Arch Nelson | 18 | 1002 | 78 |
The sort_values()
method sorts (re-arranges) the elements of a DataFrame.
The syntax for this method is as follows:
DataFrame.sort_values(by, axis=0, ascending=True, inplace=False, kind='quicksort', na_position='last', ignore_index=False, key=None)
Parameter | Description |
---|---|
by |
This parameter is a string or a list of strings. These comprise the index levels/columns to sort. Dependent on the selected axis. |
axis |
If zero (0) or index is selected (default), the sort applies along the index. If one (1) or columns, it applies along the columns. |
ascending |
By default, True . Sort is conducted in ascending order. If False , descending order. |
inplace |
If False , create a copy of the object. If True , the original object updates. By default, False . |
kind |
Available options are quicksort , mergesort , heapsort , or stable . By default, quicksort . See numpy.sort for additional details. |
na_position |
Available options are first and last (default). If the option is first , all NaN values move to the beginning, last to the end. |
ignore_index |
If True , the axis numbering is 0, 1, 2, etc. By default, False . |
key |
This parameter applies the function to the values before a sort. The data must be in a Series format and applies to each column. |
For this example, a comma-delimited CSV file is read in. This DataFrame sorts on the Capital column in descending order.
df = pd.read_csv('countries.csv')
result = df.sort_values(by=['Capital'], ascending=False)
print(result)
The CSV file reads into df, and the sorted output saves to result.
Output:
Country | Capital | Population | Area | |
6 | USA | Washington | 328239523 | 9833520 |
4 | Poland | Warsaw | 38383000 | 312685 |
3 | Italy | Rome | 60317116 | 301338 |
1 | France | Paris | 67081000 | 551695 |
5 | Russia | Moscow | 146748590 | 17098246 |
2 | Spain | Madrid | 47431256 | 498511 |
8 | India | Delhi | 1352642280 | 3287263 |
0 | Germany | Berlin | 83783942 | 357021 |
7 | China | Beijing | 1400050000 | 9596961 |
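The key parameter from the table above can be sketched with a case-insensitive sort (hypothetical mixed-case data, not the countries.csv file):

```python
import pandas as pd

df = pd.DataFrame({'Capital': ['Warsaw', 'berlin', 'Madrid']})

# A plain sort puts 'berlin' last: uppercase letters sort before
# lowercase in ASCII order.
print(df.sort_values(by='Capital')['Capital'].tolist())

# key= receives each column as a Series and transforms it before
# comparison; lower-casing gives a case-insensitive sort.
result = df.sort_values(by='Capital', key=lambda col: col.str.lower())
print(result['Capital'].tolist())  # ['berlin', 'Madrid', 'Warsaw']
```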
The sort_index() method sorts the DataFrame by its index labels.
The syntax for this method is as follows:
DataFrame.sort_index(axis=0, level=None, ascending=True, inplace=False, kind='quicksort', na_position='last', sort_remaining=True, ignore_index=False, key=None)
Parameter | Description |
---|---|
axis |
If zero (0) or index is selected (default), the sort applies along the index. If one (1) or columns, it applies along the columns. |
level |
This parameter is an integer, level name, or a list of integers/level name(s). If not empty, a sort is performed on values in the selected index level(s). |
ascending |
By default, True . Sort is conducted in ascending order. If False , descending order. |
inplace |
If False , create a copy of the object. If True , the original object updates. By default, False . |
kind |
Available options are quicksort , mergesort , heapsort , or stable . By default, quicksort . See numpy.sort for additional details. |
na_position |
Available options are first and last (default). If the option is first , all NaN values move to the beginning, last to the end. |
ignore_index |
If True , the axis numbering is 0, 1, 2, etc. By default, False . |
key |
This parameter applies the function to the values before a sort. The data must be in a Series format and applies to each column. |
For this example, a comma-delimited CSV file is read into a DataFrame. This DataFrame sorts on the index Country column.
df = pd.read_csv('countries.csv')
df = df.set_index('Country')
result = df.sort_index()
print(result)
The CSV file reads into df, the Country column becomes the index (over-writing the original df), and sort_index() sorts the DataFrame on the indexed Country column in ascending order (the default). The output saves to result.
Output:
Country | Capital | Population | Area |
China | Beijing | 1400050000 | 9596961 |
France | Paris | 67081000 | 551695 |
Germany | Berlin | 83783942 | 357021 |
India | Delhi | 1352642280 | 3287263 |
Italy | Rome | 60317116 | 301338 |
Poland | Warsaw | 38383000 | 312685 |
Russia | Moscow | 146748590 | 17098246 |
Spain | Madrid | 47431256 | 498511 |
USA | Washington | 328239523 | 9833520 |
Finxter
https://www.futurity.org/wp/wp-content/uploads/2022/01/alzheimers-disease-neurodegeneration-1600.jpg
Boosting levels of the neurotransmitter norepinephrine with atomoxetine, a repurposed ADHD medication, may be able to stall neurodegeneration in people with early signs of Alzheimer’s disease, according to a new study.
The results appear in the journal Brain.
This is one of the first published clinical studies to show a significant effect on the protein tau, which forms neurofibrillary tangles in the brain in Alzheimer’s. In 39 people with mild cognitive impairment (MCI), six months of treatment with atomoxetine reduced levels of tau in study participants’ cerebrospinal fluid (CSF), and normalized other markers of neuro-inflammation.
The study points toward an alternative drug strategy against Alzheimer’s that does not rely on antibodies against tau or another Alzheimer’s-related protein, beta-amyloid. A recently FDA-approved drug, aducanumab, targets beta-amyloid, but its benefits are controversial among experts in the field.
Larger and longer studies of atomoxetine in MCI and Alzheimer’s are warranted, the researchers conclude. The drug did not have a significant effect on cognition or other clinical outcomes, which was expected given the relatively short study duration.
“One of the major advantages of atomoxetine is that it is already FDA-approved and known to be safe,” says senior author David Weinshenker, professor of human genetics at Emory University School of Medicine. “The beneficial effects of atomoxetine on both brain network activity and CSF markers of inflammation warrant optimism.”
“We are encouraged by the results of the trial,” says lead author Allan Levey, professor of neurology at Emory University School of Medicine and director of the Goizueta Institute @Emory Brain Health. “The treatment was safe, well tolerated in individuals with mild cognitive impairment, and modulated the brain neurotransmitter norepinephrine just as we hypothesized. Moreover, our exploratory studies show promising results on imaging and spinal fluid biomarkers which need to be followed up in larger studies with longer period of treatment.”
The researchers picked atomoxetine, which is commercially available as Strattera, with the goal of boosting brain levels of norepinephrine, which they thought could stabilize a vulnerable region of the brain against Alzheimer’s-related neurodegeneration.
Norepinephrine is produced mainly by the locus coeruleus, a region of the brainstem that appears to be the first to show Alzheimer’s-related pathology—even in healthy, middle-aged people. Norepinephrine is thought to reduce inflammation and to encourage trash-removing cells called microglia to clear out aggregates of proteins such as beta-amyloid and tau. Increasing norepinephrine levels has positive effects on cognition and pathology in mouse and rat models of Alzheimer’s.
“Something that might seem obvious, but was absolutely essential, was our finding that atomoxetine profoundly increased CSF norepinephrine levels in these patients,” Weinshenker says. “For many drugs and trials, it is very difficult to prove target engagement. We were able to directly assess target engagement.”
Weinshenker also emphasizes that the trial grew out of pre-clinical research conducted in animal models, which demonstrated the potential for norepinephrine.
The researchers conducted the study between 2012 and 2018 with a cross-over design, such that half the group received atomoxetine for the first six months and the other half received placebo—then individuals switched. It is possible that participants who received atomoxetine for the first six months experienced carryover effects after treatment stopped, so their second six month period wasn’t necessarily a pure placebo.
Study participants were all diagnosed with mild cognitive impairment and had markers of potential progression to Alzheimer’s in their CSF, based on measuring tau and beta-amyloid. More information about inclusion criteria is available at clinicaltrials.gov.
The researchers measured levels of dozens of proteins in participants’ CSF; the reduction of tau from atomoxetine treatment was small—about 5% over six months—but if sustained, it could have a larger effect on Alzheimer’s pathology. No significant effect on beta-amyloid was seen.
In addition, in participants taking atomoxetine, researchers were able to detect an increase in metabolism in the medial temporal lobe, critical for memory, via PET (positron emission tomography) brain imaging.
Study participants started with a low dose of atomoxetine and ramped up to a higher dose, up to 100mg per day. Participants did experience weight loss (4 pounds, on average) and an increase in heart rate (about 5 beats per minute) while on atomoxetine, but they did not display a significant increase in blood pressure. Some people reported side effects such as gastrointestinal symptoms, dry mouth, or dizziness.
The FDA approved atomoxetine in 2002 for ADHD (attention deficit hyperactivity disorder) in children and adults, and the drug has been shown to be safe in older adults. It is considered to have low abuse potential, compared with conventional stimulants that are commonly prescribed for ADHD.
Looking ahead, it is now possible to visualize the integrity of the locus coeruleus in living people using MRI techniques, so that could be an important part of a larger follow-up study, Weinshenker says. Atomoxetine’s effects were recently studied in people with Parkinson’s disease—the benefits appear to be greater in those who have reduced integrity of the locus coeruleus.
Funding for the study was provided by the Cox and Kenan Family foundations and the Alzheimer’s Drug Discovery Foundation.
Source: Emory University
The post ADHD drug may protect against Alzheimer’s neurodegeneration appeared first on Futurity.
Futurity
https://www.youtube.com/embed/r9Gaauyf1Qk?feature=oembed
The Pandas DataFrame/Series has several methods to handle Missing Data. When applied to a DataFrame/Series, these methods evaluate and modify the missing elements.
This is Part 12 of the DataFrame methods series:
- abs(), all(), any(), clip(), corr(), and corrwith()
- count(), cov(), cummax(), cummin(), cumprod(), and cumsum()
- describe(), diff(), eval(), and kurtosis()
- mad(), min(), max(), mean(), median(), and mode()
- pct_change(), quantile(), rank(), round(), prod(), and product()
- add_prefix(), add_suffix(), and align()
- at_time(), between_time(), drop(), drop_duplicates(), and duplicated()
- equals(), filter(), first(), last(), head(), and tail()
- reset_index(), sample(), set_axis(), set_index(), take(), and truncate()
- backfill(), bfill(), fillna(), dropna(), and interpolate()
- isna(), isnull(), notna(), notnull(), pad(), and replace() (covered in this part)
Remember to add the Required Starter Code to the top of each code snippet. This snippet will allow the code in this article to run error-free.

Required Starter Code

import pandas as pd
import numpy as np
Before any data manipulation can occur, two new libraries will require installation.

The pandas library enables access to/from a DataFrame.
The numpy library supports multi-dimensional arrays and matrices in addition to a collection of mathematical functions.

To install these libraries, navigate to an IDE terminal and execute the commands below. For the terminal used in this example, the command prompt is a dollar sign ($). Your terminal prompt may be different.
$ pip install pandas
Hit the <Enter> key on the keyboard to start the installation process.
$ pip install numpy
Hit the <Enter> key on the keyboard to start the installation process.
Feel free to check out the correct ways of installing those libraries here:
If the installations were successful, a message displays in the terminal indicating the same.
The DataFrame isna() and isnull() methods return Boolean (True/False) values in the same shape as the DataFrame/Series passed. Empty values of any of the following types resolve to True:

None
NaN
NaT
NA

All other values (valid data) resolve to False.

Note: Empty strings and numpy.inf are not considered empty unless use_inf_as_na is set to True.
The syntax for these methods is as follows:
DataFrame.isna() DataFrame.isnull()
Parameters:
These methods contain no parameters.
For this example, three (3) temperatures over three (3) days for Anchorage, Alaska are saved to a DataFrame. Unfortunately, some temperatures were not recorded accurately.
The code below returns a new DataFrame containing True
values in the same position as the missing temperatures and False
in the remainder.
Code – isna():

df_temps = pd.DataFrame({'Day-1': [np.nan, 11, 12],
                         'Day-2': [13, 14, pd.NaT],
                         'Day-3': [None, 15, 16]},
                        index=['Morning', 'Noon', 'Evening'])
print(df_temps)

result = df_temps.isna()
print(result)

This code creates the df_temps DataFrame and calls isna() to set the empty values (np.nan, pd.NaT, None) to True and the remainder (valid values) to False. This output saves to the result variable.

Output:
original df_temps

         Day-1 Day-2  Day-3
Morning    NaN    13    NaN
Noon      11.0    14   15.0
Evening   12.0   NaT   16.0

result

         Day-1  Day-2  Day-3
Morning   True  False   True
Noon     False  False  False
Evening  False   True  False
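Because isna() returns Booleans, a common follow-up (not shown in the original example) is to chain .sum(), since True counts as 1, to count the missing cells:

```python
import pandas as pd
import numpy as np

df_temps = pd.DataFrame({'Day-1': [np.nan, 11, 12],
                         'Day-2': [13, 14, pd.NaT],
                         'Day-3': [None, 15, 16]},
                        index=['Morning', 'Noon', 'Evening'])

# Summing the Boolean mask counts missing cells per column,
# and summing again gives the grand total.
print(df_temps.isna().sum().tolist())    # [1, 1, 1] missing per column
print(int(df_temps.isna().sum().sum()))  # 3 missing values in total
```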
Code – isnull():

df_temps = pd.DataFrame({'Day-1': [np.nan, 11, 12],
                         'Day-2': [13, 14, pd.NaT],
                         'Day-3': [None, 15, 16]},
                        index=['Morning', 'Noon', 'Evening'])
print(df_temps)

result = df_temps.isnull()
print(result)

This code creates the df_temps DataFrame and calls isnull() to set the empty values (np.nan, pd.NaT, None) to True and the remainder (valid values) to False. This output saves to the result variable.

Output:
original df_temps

         Day-1 Day-2  Day-3
Morning    NaN    13    NaN
Noon      11.0    14   15.0
Evening   12.0   NaT   16.0

result

         Day-1  Day-2  Day-3
Morning   True  False   True
Noon     False  False  False
Evening  False   True  False
Note: The isnull() method is an alias of the isna() method. The output from both examples is identical.
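You can confirm the alias relationship yourself; this short sketch compares the two masks with equals():

```python
import pandas as pd
import numpy as np

df_temps = pd.DataFrame({'Day-1': [np.nan, 11, 12],
                         'Day-2': [13, 14, pd.NaT],
                         'Day-3': [None, 15, 16]},
                        index=['Morning', 'Noon', 'Evening'])

# isnull() is just another name for isna(), so the masks are identical.
print(df_temps.isna().equals(df_temps.isnull()))  # True
```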
The DataFrame notna() and notnull() methods also return Boolean (True/False) values in the same shape as the DataFrame/Series passed. Empty values of any of the following types resolve to False:

None
NaN
NaT
NA

All other values (valid data) resolve to True.
The syntax for these methods is as follows:
DataFrame.notna() DataFrame.notnull()
Parameters:
These methods contain no parameters.
For this example, three (3) temperatures over three (3) days for Anchorage, Alaska are saved to a DataFrame. Unfortunately, some temperatures were not recorded accurately.

The code below returns a new DataFrame containing False values in the same position as the missing temperatures and True in the remainder.
Code – notna():

df_temps = pd.DataFrame({'Day-1': [np.nan, 11, 12],
                         'Day-2': [13, 14, pd.NaT],
                         'Day-3': [None, 15, 16]},
                        index=['Morning', 'Noon', 'Evening'])
print(df_temps)

result = df_temps.notna()
print(result)

This code creates the df_temps DataFrame and calls notna() to set the empty values (np.nan, pd.NaT, None) to False and the remainder (valid values) to True. This output saves to the result variable.

Output:
original df_temps

         Day-1 Day-2  Day-3
Morning    NaN    13    NaN
Noon      11.0    14   15.0
Evening   12.0   NaT   16.0

result

         Day-1  Day-2  Day-3
Morning  False   True  False
Noon      True   True   True
Evening   True  False   True
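A practical use of notna() (an extra sketch, not part of the original example) is Boolean row filtering, keeping only the rows with a valid reading in a given column:

```python
import pandas as pd
import numpy as np

df_temps = pd.DataFrame({'Day-1': [np.nan, 11, 12],
                         'Day-2': [13, 14, pd.NaT],
                         'Day-3': [None, 15, 16]},
                        index=['Morning', 'Noon', 'Evening'])

# Keep only the rows where the Day-1 reading is valid.
valid = df_temps[df_temps['Day-1'].notna()]
print(valid.index.tolist())  # ['Noon', 'Evening']
```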
Code – notnull():

df_temps = pd.DataFrame({'Day-1': [np.nan, 11, 12],
                         'Day-2': [13, 14, pd.NaT],
                         'Day-3': [None, 15, 16]},
                        index=['Morning', 'Noon', 'Evening'])
print(df_temps)

result = df_temps.notnull()
print(result)

This code creates the df_temps DataFrame and calls notnull() to set the empty values (np.nan, pd.NaT, None) to False and the remainder (valid values) to True. This output saves to the result variable.

Output:
original df_temps

         Day-1 Day-2  Day-3
Morning    NaN    13    NaN
Noon      11.0    14   15.0
Evening   12.0   NaT   16.0

result

         Day-1  Day-2  Day-3
Morning  False   True  False
Noon      True   True   True
Evening   True  False   True
Note: The notnull() method is an alias of the notna() method. The output from both examples is identical.
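Since notna() is the logical complement of isna(), inverting one mask with ~ reproduces the other. A quick sanity check:

```python
import pandas as pd
import numpy as np

df_temps = pd.DataFrame({'Day-1': [np.nan, 11, 12],
                         'Day-2': [13, 14, pd.NaT],
                         'Day-3': [None, 15, 16]},
                        index=['Morning', 'Noon', 'Evening'])

# notna() is the element-wise inverse of isna().
print(bool((df_temps.notna() == ~df_temps.isna()).all().all()))  # True
```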
The pad() method is an alias for DataFrame/Series fillna() with the method parameter set to 'ffill'.
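To see what that forward fill does, here is a minimal sketch using ffill(), the operation pad() aliases; each gap is filled with the last valid value above it:

```python
import pandas as pd
import numpy as np

s = pd.Series([1.0, np.nan, np.nan, 4.0])

# Forward fill copies the last valid value downward into each gap.
print(s.ffill().tolist())  # [1.0, 1.0, 1.0, 4.0]
```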
The replace() method substitutes values in a DataFrame/Series with a different assigned value. This operation is performed dynamically on the object passed.

Note: The .loc/.iloc accessors differ slightly from replace() in that they require a specific location in order to change the value(s).
The syntax for this method is as follows:
DataFrame.replace(to_replace=None, value=None, inplace=False, limit=None, regex=False, method='pad')
| Parameter | Description |
|---|---|
| to_replace | Determines how to locate the values to replace. Accepts a numeric, string, or regex; a list of strings, regexes, or numerics; or a dictionary, DataFrame dictionary, or nested dictionary. Each entry must exactly match to cause any change. |
| value | The value to replace any values that match. |
| inplace | If set to True, the changes apply to the original DataFrame/Series. If False, the changes apply to a new DataFrame/Series. By default, False. |
| limit | The maximum number of elements to backward/forward fill. |
| regex | A regex expression to match. Matches resolve to the value parameter. |
| method | The replacement method to use: pad, ffill, bfill, or None. |
Possible Errors Raised:

| Error | When Does It Occur? |
|---|---|
| AssertionError | If regex is not a Boolean (True/False), or the to_replace parameter is None. |
| TypeError | If to_replace is not in a valid format, such as: not a scalar, an array, a dictionary, or None; if to_replace is a dictionary and the value parameter is not a list; or if multiple Booleans or date objects in to_replace fail to match the value parameter. |
| ValueError | Returned if to_replace is a list/ndarray and value do not have the same length. |
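Before the numbered examples, one hedged sketch of the dictionary form of to_replace, mapping several old values to new ones in a single call (the labels here are made up for illustration):

```python
import pandas as pd

sizes = pd.Series(['low', 'mid', 'high', 'mid'])

# A dictionary maps each matched value to its replacement.
result = sizes.replace({'low': 1, 'mid': 2, 'high': 3})
print(result.tolist())  # [1, 2, 3, 2]
```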
The examples below show how versatile the replace()
method is. We recommend you spend some time reviewing the code and output.
In this example, we have five (5) grades for a student. Notice that one (1) grade is a failing grade. To rectify this, run the following code:
Code – Example 1:

grades = pd.Series([55, 64, 52, 76, 49])
print(grades)

result = grades.replace(49, 51)
print(result)

This code creates the grades Series, replaces the failing grade of 49 with a passing grade of 51, and saves the new Series to the result variable. The result outputs to the terminal.

Output:
0    55
1    64
2    52
3    76
4    51
dtype: int64
This example shows a DataFrame of three (3) product lines for Rivers Clothing. They want the price of 11.35 changed to 12.95. Run the code below to change the pricing.
Code – Example 2:

df = pd.DataFrame({'Tops':   [10.12, 12.23, 11.35],
                   'Tanks':  [11.35, 13.45, 14.98],
                   'Sweats': [11.35, 21.85, 35.75]})
result = df.replace(11.35, 12.95)
print(result)

This code creates the df DataFrame and replaces every occurrence of 11.35 with 12.95. The new DataFrame saves to the result variable.

Output:
    Tops  Tanks  Sweats
0  10.12  12.95   12.95
1  12.23  13.45   21.85
2  12.95  14.98   35.75
Code – Example 3:

This example shows a DataFrame with two (2) teams. Each team contains three (3) members. This code replaces one (1) member's name on each team with the word quit.

df = pd.DataFrame({'Team-1': ['Barb', 'Todd', 'Taylor'],
                   'Team-2': ['Arch', 'Bart', 'Alex']})
result = df.replace(to_replace=r'^Bar.$', value='quit', regex=True)
print(result)

This code creates the df DataFrame, then uses a regex to match any names that start with Bar and contain one (1) additional character (.). Each match changes to the word quit. The output saves to the result variable.
.Finxter
https://static1.makeuseofimages.com/wordpress/wp-content/uploads/2022/01/shaking_hands_interview.jpg
You’re on your way to an important job interview and suddenly your hands start sweating, your heart rate skyrockets, and your mouth is drier than the Sahara Desert. It’s completely normal to be a nervous wreck before a big moment in your life, like an interview, and the anxiety you’re feeling means that you want to do well.
However, anxiety can also trip you up and prevent you from having a successful interview. If you’re struggling to calm your nerves, try these 10 helpful tips.
When people don’t know what to expect in a situation, they become nervous. That’s why there’s always so much stress and nervousness surrounding job interviews. If you prepare for the interview beforehand, you’ll be able to handle your nerves a lot better.
Preparing can be anything from researching the company, rehearsing answers to important questions, or coming up with some questions of your own. By doing your research and being prepared, you’ll know what to expect and get rid of that anxiety.
Related: Common Job Interview Questions and How to Answer Them
Your day will go a lot smoother if you plan it around the interview. To ensure you’re not rushed, anxious, and stressed out the entire day, schedule your interview to be held in the morning.
Once you’ve planned out your day to avoid unnecessary stress, like traffic, make sure you get enough sleep the night before and stick to the timetable the next day. By doing this, you’ll feel more productive and the job interview anxiety will fade away.
If you’d like your interview to go positively, you need to start the day on a positive note, so why not eat a great meal? Choose your favorite breakfast food, whether it’s something healthy like a smoothie or comfort food like bacon and eggs.
As long as you eat something that you enjoy before the interview, you’ll have the energy to do a good job, and you won’t have to worry about a growling stomach.
The way you speak to yourself will affect your actions, so it’s always best to avoid negative thoughts and focus more on the positive ones. Embracing positive self-talk before an interview can be the difference between getting the job and being rejected, so instead of thinking negatively about the interview, turn it into a positive experience.
It’s important to concentrate on being excited about going for a job interview. After all, you’re not going to get every job you apply for, but you can learn from the experience.
Before going to a job interview, listen to your favorite uplifting music, whatever pumps you up, be it Taylor Swift or Beyonce. Can’t find your favorite song? Simply download it before the big interview by using one of these music download apps for Android and iPhone. Listening to music not only enhances your mindset, but also does wonders for your confidence.
Plus, putting on your favorite soundtrack can distract you from feeling the nerves as the interview draws nearer. Fill your ears with excitement and energy to get you in the right mood before your interview, and the anxiety will disappear. Maybe you can even dance away the nerves.
Doing some exercise before an important job interview can do wonders in terms of getting rid of anxiety and stress. Whether you just take a brisk walk around the block, go for a lengthy jog, or do some yoga in your living room, it’ll release positive endorphins and calm your nerves.
Even just a short stroll can clear your head, plus, you’ll get a healthy dose of fresh air and vitamin D.
Related: Free Fitness Apps to Build an Exercise Habit of Regular Workouts
According to science, negative emotions, like anxiety and stress, can be reduced if you’re anticipating a positive event. This is why planning to treat yourself after an interview is so important.
Think of a reward that would make you eager to get the interview done. Is it lunch out with a friend? Your favorite movie? A visit to the beauty salon? Whatever you choose, plan to do it once the interview is over, so you have something exciting to look forward to.
The STOP Technique is a mindfulness trick to calm you down during a stressful situation. Here’s how it works:
S: Stop. Stop whatever you’re doing, and pause.
T: Take. Take a few deep breaths, and follow your breath in and out of your nose.
O: Observe. Observe what’s happening inside and outside of your body, mind, and emotions.
P: Proceed. Proceed to do what you were doing or change course depending on what you observed.
This technique is vital if you’re feeling overwhelmed before an interview because it allows you to stop and take control, and not allow the stress and anxiety to overcome you.
There is nothing that will help you get rid of pre-interview anxiety more than a few words with a caring friend or family member. Sometimes, because we’re so nervous, we get wrapped up in negative thoughts. That’s why it’s best to turn to our loved ones, who will shower us with positive words.
Fundamentally, if you cannot give yourself enough positive self-talk to boost your confidence before the interview, turn to your loved ones to do it for you.
Is your breathing shallow or shaky? If you do feel like you’re getting overcome with anxiety, don’t panic. Breathe in slowly through your mouth and out through your nose a couple of times. This simple breathing exercise will help you to calm your nerves and feel less jittery.
By using an easy breathing technique to control your breathing, you can regain your focus on the interview and get your head back in the game.
It’s impossible not to feel a level of anxiety and nervousness before a job interview, and even though anxiety can sometimes be motivational and give you a boost of energy, it can also cause your interview to go bad.
So use these helpful tips to stay calm and collected, and if that overwhelming feeling comes over you, stop, breathe, and center yourself. You can do it!
About The Author
Christine Romans
(6 Articles Published)
Christine is a content creator with over five years of experience writing about tech as well as a ridiculously wide range of other topics. She is a proud home cook, plant mom, and self-proclaimed wine taster.
MUO – Feed
https://i.kinja-img.com/gawker-media/image/upload/c_fill,f_auto,fl_progressive,g_center,h_675,pg_1,q_80,w_1200/b5f236eb6fc1522972935ce44db5c088.gif
Despite the show’s finale airing almost 20 years ago, the technology in Star Trek: Voyager (and even TNG) still looks convincingly futuristic, and we’d happily trade our folding smartphones like the Galaxy Z Fold 3 or the Surface Duo 2 for this incredible recreation of one of Voyager’s tricorders.
Producing a sci-fi TV series based on one of the most beloved franchises of all time isn’t cheap. You not only have to build standing sets recreating the interior of a giant starship; there are also alien worlds to construct, loads of special effects, and mountains of futuristic props for the cast to interact with. According to Hackaday, for Star Trek: Voyager, the second follow-up to the wildly successful Star Trek: The Next Generation, there were plans to introduce an updated design for the ubiquitous tricorder—a futuristic PDA that can do almost anything a script requires of it—but concept sketches were replaced with hand-me-down props from TNG to keep costs down.
At least one Star Trek: Voyager fan felt that was a great injustice, but instead of voicing their concerns during a Q&A session at a Star Trek convention, they set out to build the Voyager Tricorder, as they call it, in real life. The first version, which YouTuber Mangy_Dog (a UI designer who’s also skilled at electronics) took over a year to build, was impressively capable and looked straight out of the 24th century. But when a friend commissioned a replica of the tricorder for themselves, Mangy_Dog took the opportunity to thoroughly update the prop inside and out, and while it took several years to complete, the results look better than anything Hollywood has ever delivered.
Mangy_Dog has delved into the design and engineering process behind the Voyager Tricorder V2 build in three videos. The first goes into some of the challenges of the hardware itself, including custom PCBs and problems with sourcing high-quality displays, while the second delves into the custom user interface and animations created for the prop, which are all generated and rendered on the fly instead of being pre-rendered videos played back on cue. The third goes much deeper into the internal hardware, including the custom PCB created for the project and the extensive code that powers it.
In addition to LCD screens displaying what appear to be Starfleet-standard user interfaces, the Voyager Tricorder V2 includes countless touch-sensitive buttons used to switch modes or activate secret features after a long press. There are also blinking, flashing, and pulsing LEDs all over the device, making it look like the tricorder is actually scanning and interacting with its environment, when in reality the only thing this replica can actually do is make other Star Trek fans incredibly envious.
Gizmodo