A comprehensive guide on how to design future-proof controllers: Part 1

Introduction

When you are still in the learning phase with any technology, your focus is on making your app work, but I think the exponential progress starts when you begin to ask yourself “How can I make this better?”. One simple principle that you can immediately apply to your existing or new codebase for cleaner and more maintainable code is the Separation of Concerns principle.

Most of the server-side codebases I have come across have controllers that contain code specifying the real-world business rules on how data can be created, stored, and changed on the system. If you want to learn how to build controllers that are clean, concise, and easily maintainable, then this series is for you.

What you will learn after reading this article

  1. You will have a solid understanding of the Separation of concerns principle

  2. You will be able to identify the major steps involved in the lifecycle of a request on the server-side

  3. You will understand the role of the controller on the server side. This will ensure that the lines of code present in your controller functions are the ones that absolutely need to be in the controller

Prerequisites

  1. Understanding of client-server architecture
  2. Familiarity with model-view-controller architecture
  3. A basic understanding of Object-Oriented Programming

With all that out of the way, let’s move 🚀

Understanding Separation of concerns

What is a concern?

A concern is a section of a feature or software that handles a particular functionality in the system. A good example of a concern in a well-designed backend system is request validation, which means there is a part of the code that accepts the data coming from the client to make sure all the information is valid or at least in the right format before sending it to the other parts of the system.

What does the term Separation of concerns mean?

Since we know what a concern is, understanding the idea behind Separation of concerns will not be difficult. Separation of concerns promotes building software in such a way that the code is broken down into separate components or layers, each handling a specific concern. An example is a feature that retrieves data from the database and then formats the data based on the client’s request. Placing both pieces of logic in the same function is a really bad idea, since retrieving data from the database is one concern and formatting the retrieved data is another.

Lifecycle of a request on the server

I have worked on building many backend systems that provide services to clients, and most of them follow a similar pattern with three major steps:

  1. Request validation: This refers to the part of your code that ensures that the data sent by a client is in a valid and acceptable format. A simple example is making sure that a value sent from the client as an email is actually a valid email address.
  2. Business logic execution: This is the section of your codebase that contains the code that enforces real-world business rules. Let’s use an app that allows a customer to transfer money from one account to another. A valid business rule is that you cannot transfer an amount that is greater than your current balance. For this app to work properly, there has to be a section of your code that compares the amount you are trying to transfer and your current balance and makes sure the business rule is obeyed. That is what we refer to as business logic.
  3. Response formatting and return: This refers to the section of your codebase responsible for making sure the data returned to the client after business logic execution is properly formatted and well presented, e.g. as JSON or XML.

(Image: Backend receives request)

I have come across a lot of codebases that perform these three steps in a single function or method, where lines 10 to 15 handle request validation, lines 16 to 55 handle all the business logic with long if-else statements, loops, and so on, lines 56 to 74 format the response based on certain conditions, and finally line 75 returns the data to the client. That’s about 65 lines of code in a single function! That is a ticking time bomb waiting to explode when a new engineer joins the team or when you come back to make more changes to the code.
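
To make this concrete, here is a minimal sketch of such a do-everything controller action. It is written in Python purely for illustration (the series itself will use Laravel), and every name in it is hypothetical:

# A hypothetical "do-everything" action: validation, business logic,
# and response formatting all crammed into one function.
def transfer(request):
    # Request validation, inline
    amount = request.data.get("amount")
    if amount is None or amount <= 0:
        return {"status": 400, "error": "Invalid amount"}

    # Business logic, inline
    sender = request.user.account
    if sender.balance < amount:
        return {"status": 422, "error": "Insufficient balance"}
    sender.balance -= amount
    # ...dozens more lines of rules, loops, and if-else branches...

    # Response formatting, inline
    return {"status": 200, "data": {"balance": sender.balance}}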

Understanding the role of the controller in the backend request lifecycle

Imagine we have 3 tasks involving the same feature.

  1. Fix a bug in request validation
  2. Change the way data is retrieved from the database (use Eloquent ORM instead of raw queries)
  3. Add extra meta-data to the response returned to the client

If our controllers are designed in a way where each method contains all three of the major steps involved in fulfilling a request on the server side, then the flow looks something like the image below.

(Image: Backend receives request 1)

Handling these tasks becomes a nightmare because everyone on the team will be modifying the same function, and good luck merging all those changes without having to resolve merge conflicts 🙄.

So, what exactly should the controller do?

The controller should serve as a delegator: it accepts a request with the associated data from the client, assigns the different tasks involved in fulfilling that request to different parts of the codebase, and finally sends a proper response (success or failure) to the client depending on the result of the executed code.
I have illustrated this using the image below.

(Image: controller delegator)
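
As a rough sketch of the same action rebuilt as a delegator (again in Python with hypothetical class names; the Laravel version comes later in the series):

# The controller only delegates: each concern lives in its own class.
class TransferController:
    def __init__(self, validator, transfer_service, presenter):
        self.validator = validator                # request validation concern
        self.transfer_service = transfer_service  # business logic concern
        self.presenter = presenter                # response formatting concern

    def transfer(self, request):
        data = self.validator.validate(request)       # may raise a validation error
        result = self.transfer_service.execute(data)  # enforces the business rules
        return self.presenter.success(result)         # formats the success response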

If our controllers are built this way, making changes will be incredibly easy, bugs will be easy to trace, and the responsibility of each class, along with the other classes it depends on to fulfill its tasks, will be immediately visible even to someone looking at the code for the first time.

There is more juicy stuff to come 😋

Like I said at the beginning of the article, this is going to be a series because there is a lot of information to digest and I want to make sure you can remember a lot after reading each article. I want to separate the concerns you know 😂, so that this article does not become massive like the controllers we will be refactoring starting from the next part. If you didn’t get that joke, then maybe next time.

In the next article, Part 2, we will refactor a controller function with a specific focus on the request validation concern. We will build a Request validator class and abstract all the logic involved in validating the client’s request away from the controller. I will be using Laravel for the rest of the series.

Quick Recap

  1. Separation of concerns is simply a concept that promotes the idea that you should always look at your code to identify the different functionalities involved, and think of how you can break down the code into smaller components for clarity, easy debugging, and maintenance among other benefits.
  2. All the logic involved in fulfilling a client’s request on the server-side should not be in a single location, like a controller.
  3. The controllers should serve as classes that delegate tasks to other parts of the codebase and get back a response from those parts. Depending on that response, the controller decides whether to send the client a success or a failure response.
  4. More code might be involved when trying to break down a feature into smaller components but remember, it pays off in the long run.

It’s a wrap 🎉

I sincerely hope that you have learned something, no matter how small, after reading this. If you did, kindly drop a thumbs up.

Thanks for sticking with me till the end. If you have any suggestions or feedback, kindly drop them in the comment section. Enjoy the rest of your day…bye 😊.

Laravel News Links

A Star Trek: Voyager Fan Built a Replica Tricorder That’s Better Than Any Prop Hollywood Has Ever Made

Despite the show’s finale airing almost 20 years ago, the technology in Star Trek: Voyager (and even TNG) still looks convincingly futuristic, and we’d happily trade our folding smartphones like the Galaxy Z Fold 3 or the Surface Duo 2 for this incredible recreation of one of Voyager’s tricorders.

Producing a sci-fi TV series based on one of the most beloved franchises of all time isn’t cheap. You not only have to build standing sets recreating the interior of a giant starship; there are also alien worlds to construct, loads of special effects, and mountains of futuristic props for the cast to interact with. According to Hackaday, for Star Trek: Voyager, the second follow-up to the wildly successful Star Trek: The Next Generation, there were plans to introduce an updated design for the ubiquitous tricorder—a futuristic PDA that can do almost anything a script requires of it—but concept sketches were replaced with hand-me-down props from TNG to keep costs down.

At least one Star Trek: Voyager fan felt that was a great injustice, but instead of voicing their concerns during a Q&A session at a Star Trek convention, they set out to build the Voyager Tricorder, as they call it, in real life. The first version that YouTuber Mangy_Dog (a UI designer who’s also skilled at electronics) took over a year to build was impressively capable and looked straight out of the 24th century. But when a friend commissioned a replica of the tricorder for themselves, Mangy_Dog took the opportunity to thoroughly update the prop inside and out, and while it took several years to complete, the results look even better than anything Hollywood has ever delivered.

Mangy_Dog has delved into the design and engineering process behind the Voyager Tricorder V2 build in three videos. The first goes into some of the challenges of the hardware itself, including custom PCBs and problems with sourcing high-quality displays, while the second delves into the custom user interface and animations created for the prop, which are all generated and rendered on the fly instead of being pre-rendered videos played back on cue. The third video goes much deeper into the internal hardware, including the custom PCB created for the project and the extensive code that powers it.

In addition to LCD screens showing what appear to be standard Starfleet user interfaces, the Voyager Tricorder V2 includes countless touch-sensitive buttons used to switch modes or activate secret features after a long press. There are also blinking, flashing, and pulsing LEDs all over the device, making it look like the tricorder is actually scanning and interacting with its environment, when in reality the only thing this replica tricorder can actually do is make other Star Trek fans incredibly envious.


Gizmodo

10 Ways to Get Rid of Anxiety Before a Job Interview

You’re on your way to an important job interview and suddenly your hands start sweating, your heart rate skyrockets, and your mouth is drier than the Sahara Desert. It’s completely normal to be a nervous wreck before a big moment in your life, like an interview, and the anxiety you’re feeling means that you want to do well.

However, anxiety can also trip you up and prevent you from having a successful interview. If you’re struggling to calm your nerves, try these 10 helpful tips.

1. Be Prepared

When people don’t know what to expect in a situation, they become nervous. That’s why there’s always so much stress and nervousness surrounding job interviews. If you prepare for the interview beforehand, you’ll be able to handle your nerves a lot better.

Preparing can be anything from researching the company, rehearsing answers to important questions, or coming up with some questions of your own. By doing your research and being prepared, you’ll know what to expect and get rid of that anxiety.

Related: Common Job Interview Questions and How to Answer Them

2. Plan Your Day

Your day will go a lot smoother if you plan it around the interview. To ensure you’re not rushed, anxious, and stressed out the entire day, schedule your interview to be held in the morning.

Once you’ve planned out your day to avoid unnecessary stress, like traffic, make sure you get enough sleep the night before and stick to the timetable the next day. By doing this, you’ll feel more productive and the job interview anxiety will fade away.

3. Eat Breakfast

If you’d like your interview to go positively, you need to start the day on a positive note, so why not eat a great meal? Choose your favorite breakfast food, whether it’s something healthy like a smoothie or comfort food like bacon and eggs.

As long as you eat something that you enjoy before the interview, you’ll have the energy to do a good job, and you won’t have to worry about a growling stomach.

4. Positive Self-Talk

The way you speak to yourself will affect your actions, so it’s always best to avoid negative thoughts and focus more on the positive ones. Embracing positive self-talk before an interview can be the difference between getting the job and being rejected, so instead of thinking negatively about the interview, turn it into a positive experience.

It’s important to concentrate on being excited about going for a job interview. After all, you’re not going to get every job you apply for, but you can learn from the experience.

5. Listen to Music

Before going to a job interview, listen to your favorite uplifting music, whatever pumps you up, be it Taylor Swift or Beyonce. Can’t find your favorite song? Simply download it before the big interview by using one of these music download apps for Android and iPhone. Listening to music not only enhances your mindset, but also does wonders for your confidence.

Plus, putting on your favorite soundtrack can distract you from feeling the nerves as the interview draws nearer. Fill your ears with excitement and energy to get you in the right mood before your interview, and the anxiety will disappear. Maybe you can even dance away the nerves.

6. Do Some Exercise

Doing some exercise before an important job interview can do wonders in terms of getting rid of anxiety and stress. Whether you just take a brisk walk around the block, go for a lengthy jog, or do some yoga in your living room, it’ll release positive endorphins and calm your nerves.

Even just a short stroll can clear your head, plus, you’ll get a healthy dose of fresh air and vitamin D.

Related: Free Fitness Apps to Build an Exercise Habit of Regular Workouts

7. Plan Something Post-Interview

Research shows that negative emotions like anxiety and stress can be reduced if you’re anticipating a positive event. This is why planning to treat yourself after an interview is so important.

Think of something that would make you eager to get the interview over with. Lunch out with a friend? Your favorite movie? A visit to the beauty salon? Whatever you choose to do post-interview, plan it for when you’re done, so you have something exciting to look forward to.

8. Try the STOP Technique

The STOP Technique is a mindfulness trick to calm you down during a stressful situation. Here’s how it works:

S: Stop. Stop whatever you’re doing, and pause.

T: Take. Take a few deep breaths, and follow your breath in and out of your nose.

O: Observe. Observe what’s happening inside and outside of your body, mind, and emotions.

P: Proceed. Proceed to do what you were doing or change course depending on what you observed.

This technique is vital if you’re feeling overwhelmed before an interview because it allows you to stop and take control, and not allow the stress and anxiety to overcome you.

9. Call a Loved One

There is nothing that will help you get rid of pre-interview anxiety more than a few words with a caring friend or family member. Sometimes, because we’re so nervous, we get wrapped up in negative thoughts. That’s why it’s best to turn to our loved ones, who will shower us with positive words.

Fundamentally, if you cannot give yourself enough positive self-talk to boost your confidence before the interview, turn to your loved ones to do it for you.

10. Breathe

Is your breathing shallow or shaky? If you do feel like you’re getting overcome with anxiety, don’t panic. Breathe in slowly through your mouth and out through your nose a couple of times. This simple breathing exercise will help you to calm your nerves and feel less jittery.

By using an easy breathing technique to control your breathing, you can regain your focus on the interview and get your head back in the game.

Tackle That Interview Anxiety Head-On

It’s impossible not to feel some level of anxiety and nervousness before a job interview, and even though anxiety can sometimes be motivational and give you a boost of energy, it can also cause your interview to go badly.

So use these helpful tips to stay calm and collected, and if that overwhelming feeling comes over you, stop, breathe, and center yourself. You can do it!

MUO – Feed

Pandas DataFrame Missing Data Handling – isna(), isnull(), notna(), notnull(), pad() and replace()

https://www.youtube.com/embed/r9Gaauyf1Qk?feature=oembed

The Pandas DataFrame/Series has several methods to handle Missing Data. When applied to a DataFrame/Series, these methods evaluate and modify the missing elements.

This is Part 12 of the DataFrame methods series:

  • Part 1 focuses on the DataFrame methods abs(), all(), any(), clip(), corr(), and corrwith().
  • Part 2 focuses on the DataFrame methods count(), cov(), cummax(), cummin(), cumprod(), cumsum().
  • Part 3 focuses on the DataFrame methods describe(), diff(), eval(), kurtosis().
  • Part 4 focuses on the DataFrame methods mad(), min(), max(), mean(), median(), and mode().
  • Part 5 focuses on the DataFrame methods pct_change(), quantile(), rank(), round(), prod(), and product().
  • Part 6 focuses on the DataFrame methods add_prefix(), add_suffix(), and align().
  • Part 7 focuses on the DataFrame methods at_time(), between_time(), drop(), drop_duplicates() and duplicated().
  • Part 8 focuses on the DataFrame methods equals(), filter(), first(), last(), head(), and tail()
  • Part 9 focuses on the DataFrame methods equals(), filter(), first(), last(), head(), and tail()
  • Part 10 focuses on the DataFrame methods reset_index(), sample(), set_axis(), set_index(), take(), and truncate()
  • Part 11 focuses on the DataFrame methods backfill(), bfill(), fillna(), dropna(), and interpolate()
  • Part 12 focuses on the DataFrame methods isna(), isnull(), notna(), notnull(), pad() and replace()

Getting Started

Remember to add the Required Starter Code to the top of each code snippet. This snippet will allow the code in this article to run error-free.

Required Starter Code

import pandas as pd
import numpy as np 

Before any data manipulation can occur, two new libraries will require installation.

  • The pandas library enables access to/from a DataFrame.
  • The numpy library supports multi-dimensional arrays and matrices in addition to a collection of mathematical functions.

To install these libraries, navigate to an IDE terminal and execute the commands below at the command prompt. The prompt is shown here as a dollar sign ($); yours may differ.

$ pip install pandas

Hit the <Enter> key on the keyboard to start the installation process.

$ pip install numpy

Hit the <Enter> key on the keyboard to start the installation process.

If the installations were successful, a message displays in the terminal indicating the same.

DataFrame isna() & Dataframe isnull()

The DataFrame isna() and isnull() methods return Boolean (True/False) values in the same shape as the DataFrame/Series they are called on. Empty values of the following types resolve to True:

  • None
  • NaN
  • NaT
  • NA

All other values (valid data) will resolve to False.

💡 Note: Empty strings are not considered empty (NA) values, and numpy.inf is only treated as one if use_inf_as_na is set to True.

The syntax for these methods is as follows:

DataFrame.isna()
DataFrame.isnull()

Parameters:

These methods contain no parameters.

For this example, three (3) temperatures over three (3) days for Anchorage, Alaska are saved to a DataFrame. Unfortunately, some temperatures were not recorded accurately.

The code below returns a new DataFrame containing True values in the same position as the missing temperatures and False in the remainder.

Code – isna():

df_temps = pd.DataFrame({'Day-1':  [np.nan, 11, 12], 
                         'Day-2':  [13, 14, pd.NaT],
                         'Day-3':  [None, 15, 16]},
                         index=['Morning', 'Noon', 'Evening'])
print(df_temps)

result = df_temps.isna()
print(result)
  • Line [1] creates a dictionary of lists and saves it to df_temps.
  • Line [2] outputs the DataFrame to the terminal.
  • Line [3] uses isna() to set the empty values (np.nan, pd.NaT, None) to True and the remainder (valid values) to False. This output saves to the result variable.
  • Line [4] outputs the result to the terminal.

Output:

original df_temps

         Day-1 Day-2  Day-3
Morning    NaN    13    NaN
Noon      11.0    14   15.0
Evening   12.0   NaT   16.0

result

         Day-1  Day-2  Day-3
Morning   True  False   True
Noon     False  False  False
Evening  False   True  False

Code – isnull():

df_temps = pd.DataFrame({'Day-1':  [np.nan, 11, 12], 
                         'Day-2':  [13, 14, pd.NaT],
                         'Day-3':  [None, 15, 16]},
                         index=['Morning', 'Noon', 'Evening'])
print(df_temps)

result = df_temps.isnull()
print(result)
  • Line [1] creates a dictionary of lists and saves it to df_temps.
  • Line [2] outputs the DataFrame to the terminal.
  • Line [3] uses isnull() to set the empty values (np.nan, pd.NaT, None) to True and the remainder (valid values) to False. This output saves to the result variable.
  • Line [4] outputs the result to the terminal.

Output:

original df_temps

         Day-1 Day-2  Day-3
Morning    NaN    13    NaN
Noon      11.0    14   15.0
Evening   12.0   NaT   16.0

result

         Day-1  Day-2  Day-3
Morning   True  False   True
Noon     False  False  False
Evening  False   True  False

💡 Note: The isnull() method is an alias of the isna() method. The output from both examples is identical.

DataFrame notna() & notnull()

The DataFrame notna() and notnull() methods also return Boolean (True/False) values in the same shape as the DataFrame/Series they are called on. Empty values of the following types resolve to False:

  • None
  • NaN
  • NaT
  • NA

All other values that are not of the above type (valid data) will resolve to True.

The syntax for these methods is as follows:

DataFrame.notna()
DataFrame.notnull()

Parameters:

These methods contain no parameters.

For this example, three (3) temperatures over three (3) days for Anchorage, Alaska are saved to a DataFrame. Unfortunately, some temperatures were not recorded accurately.

The code below returns a new DataFrame containing False values in the same position as the missing temperatures and True in the remainder.

Code – notna():

df_temps = pd.DataFrame({'Day-1':  [np.nan, 11, 12], 
                         'Day-2':  [13, 14, pd.NaT],
                         'Day-3':  [None, 15, 16]},
                         index=['Morning', 'Noon', 'Evening'])
print(df_temps)

result = df_temps.notna()
print(result)
  • Line [1] creates a dictionary of lists and saves it to df_temps.
  • Line [2] outputs the DataFrame to the terminal.
  • Line [3] uses notna() to set the empty values (np.nan, pd.NaT, None) to False and the remainder (valid values) to True. This output saves to the result variable.
  • Line [4] outputs the result to the terminal.

Output:

original df_temps

         Day-1 Day-2  Day-3
Morning    NaN    13    NaN
Noon      11.0    14   15.0
Evening   12.0   NaT   16.0

result

         Day-1  Day-2  Day-3
Morning  False   True  False
Noon      True   True   True
Evening   True  False   True

Code – notnull():

df_temps = pd.DataFrame({'Day-1':  [np.nan, 11, 12], 
                         'Day-2':  [13, 14, pd.NaT],
                         'Day-3':  [None, 15, 16]},
                         index=['Morning', 'Noon', 'Evening'])
print(df_temps)

result = df_temps.notnull()
print(result)
  • Line [1] creates a dictionary of lists and saves it to df_temps.
  • Line [2] outputs the DataFrame to the terminal.
  • Line [3] uses notnull() to set the empty values (np.nan, pd.NaT, None) to False and the remainder (valid values) to True. This output saves to the result variable.
  • Line [4] outputs the result to the terminal.

Output:

original df_temps

         Day-1 Day-2  Day-3
Morning    NaN    13    NaN
Noon      11.0    14   15.0
Evening   12.0   NaT   16.0

result

         Day-1  Day-2  Day-3
Morning  False   True  False
Noon      True   True   True
Evening   True  False   True

💡 Note: The notnull() method is an alias of the notna() method. The output from both examples is identical.

DataFrame pad()

The pad() method is an alias for DataFrame/Series fillna() with the parameter method set to 'ffill'.
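
A minimal sketch of that equivalence, using a hypothetical one-column DataFrame (not from the examples above) and assuming the Required Starter Code imports:

df = pd.DataFrame({'A': [1.0, np.nan, 3.0]})

# pad() forward-fills, exactly like fillna(method='ffill')
print(df.pad())                   # column A becomes 1.0, 1.0, 3.0
print(df.fillna(method='ffill'))  # identical output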

DataFrame replace()

The replace() method substitutes values in a DataFrame/Series with a different, assigned value. By default it returns a new object with the replacements applied; set inplace=True to modify the original object instead.

💡 Note: The .loc/.iloc accessors differ slightly from replace(), as they require a specific location for the value(s) to change.

The syntax for this method is as follows:

DataFrame.replace(to_replace=None, value=None, 
                  inplace=False, limit=None, 
                  regex=False, method='pad')

Parameters:

  • to_replace: Determines how to locate the values to replace. It accepts:
    – a numeric, string, or regex value;
    – a list of strings, regexes, or numerics;
    – a dictionary (a plain dictionary, a DataFrame dictionary, or a nested dictionary).
    Values must match to_replace exactly to be changed.
  • value: The value to replace any matching values with.
  • inplace: If set to True, the changes apply to the original DataFrame/Series. If False, the changes apply to a new DataFrame/Series. By default, False.
  • limit: The maximum number of consecutive elements to backward/forward fill.
  • regex: A regex expression to match. Matches resolve to the value parameter.
  • method: The replacement method to use: pad, ffill, bfill, or None.

Possible Errors Raised:

  • AssertionError: Raised if regex is not a Boolean (True/False), or if the to_replace parameter is None.
  • TypeError: Raised if to_replace is not in a valid format, for example:
    – it is not a scalar, an array, a dictionary, or None;
    – it is a dictionary and the value parameter is not a list;
    – it contains multiple Booleans or date objects and fails to match the value parameter.
  • ValueError: Raised if a list/ndarray and value are not the same length.

The examples below show how versatile the replace() method is. We recommend you spend some time reviewing the code and output.

In this example, we have five (5) grades for a student. Notice that one (1) grade is a failing grade. To rectify this, run the following code:

Code – Example 1

grades = pd.Series([55, 64, 52, 76, 49])
print(grades)

result = grades.replace(49, 51)
print(result)
  • Line [1] creates a Series of grades and saves it to grades.
  • Line [2] outputs the Series to the terminal.
  • Line [3] replaces the failing grade of 49 with a passing grade of 51. The output saves to result.
  • Line [4] outputs the result to the terminal.

Output:

0    55
1    64
2    52
3    76
4    51
dtype: int64

This example shows a DataFrame of three (3) product lines for Rivers Clothing. They want the price of 11.35 changed to 12.95. Run the code below to change the pricing.

Code – Example 2:

df = pd.DataFrame({'Tops':     [10.12, 12.23, 11.35],
                   'Tanks':    [11.35, 13.45, 14.98],
                   'Sweats':  [11.35, 21.85, 35.75]})

result = df.replace(11.35, 12.95)
print(result)
  • Line [1] creates a dictionary of lists and saves it to df.
  • Line [2] replaces each occurrence of 11.35 with 12.95. The output saves to result.
  • Line [3] outputs the result to the terminal.

Output:

    Tops  Tanks  Sweats
0  10.12  12.95   12.95
1  12.23  13.45   21.85
2  12.95  14.98   35.75

This example shows a DataFrame with two (2) teams. Each team contains three (3) members. This code removes one (1) member from each team and replaces it with quit.

Code – Example 3:

df = pd.DataFrame({'Team-1': ['Barb', 'Todd', 'Taylor'],
                   'Team-2': ['Arch', 'Bart', 'Alex']})

result = df.replace(to_replace=r'^Bar.$', value='quit', regex=True)
print(result)
  • Line [1] creates a dictionary of lists and saves it to df.
  • Line [2] replaces any value that starts with Bar followed by exactly one additional character (.). Each match is replaced with the word quit. The output saves to result.
  • Line [3] outputs the result to the terminal.
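
Since the expected output is not shown above, running this example should print something like the following (quit replaces Barb in Team-1 and Bart in Team-2):

   Team-1 Team-2
0    quit   Arch
1    Todd   quit
2  Taylor   Alex

One form the numbered examples do not cover is passing a dictionary as to_replace. A minimal sketch (not from the original examples), where each dictionary key is replaced by its value:

df = pd.DataFrame({'Tops':  [10.12, 12.23, 11.35],
                   'Tanks': [11.35, 13.45, 14.98]})

# dictionary form: replace 10.12 with 9.99 and 11.35 with 12.95 everywhere
result = df.replace({10.12: 9.99, 11.35: 12.95})
print(result)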

Finxter

ADHD drug may protect against Alzheimer’s neurodegeneration

(Image: White pills form the shape of a brain on a black background)

Boosting levels of the neurotransmitter norepinephrine with atomoxetine, a repurposed ADHD medication, may be able to stall neurodegeneration in people with early signs of Alzheimer’s disease, according to a new study.

The results appear in the journal Brain.

This is one of the first published clinical studies to show a significant effect on the protein tau, which forms neurofibrillary tangles in the brain in Alzheimer’s. In 39 people with mild cognitive impairment (MCI), six months of treatment with atomoxetine reduced levels of tau in study participants’ cerebrospinal fluid (CSF), and normalized other markers of neuro-inflammation.

The study points toward an alternative drug strategy against Alzheimer’s that does not rely on antibodies against tau or another Alzheimer’s-related protein, beta-amyloid. A recent FDA-approved drug, aducanumab, targets beta-amyloid, but its benefits are controversial among experts in the field.

Larger and longer studies of atomoxetine in MCI and Alzheimer’s are warranted, the researchers conclude. The drug did not have a significant effect on cognition or other clinical outcomes, which was expected given the relatively short study duration.

“One of the major advantages of atomoxetine is that it is already FDA-approved and known to be safe,” says senior author David Weinshenker, professor of human genetics at Emory University School of Medicine. “The beneficial effects of atomoxetine on both brain network activity and CSF markers of inflammation warrant optimism.”

“We are encouraged by the results of the trial,” says lead author Allan Levey, professor of neurology at Emory University School of Medicine and director of the Goizueta Institute @Emory Brain Health. “The treatment was safe, well tolerated in individuals with mild cognitive impairment, and modulated the brain neurotransmitter norepinephrine just as we hypothesized. Moreover, our exploratory studies show promising results on imaging and spinal fluid biomarkers which need to be followed up in larger studies with longer period of treatment.”

The researchers picked atomoxetine, which is commercially available as Strattera, with the goal of boosting brain levels of norepinephrine, which they thought could stabilize a vulnerable region of the brain against Alzheimer’s-related neurodegeneration.

Norepinephrine is produced mainly by the locus coeruleus, a region of the brainstem that appears to be the first to show Alzheimer’s-related pathology—even in healthy, middle-aged people. Norepinephrine is thought to reduce inflammation and to encourage trash-removing cells called microglia to clear out aggregates of proteins such as beta-amyloid and tau. Increasing norepinephrine levels has positive effects on cognition and pathology in mouse and rat models of Alzheimer’s.

“Something that might seem obvious, but was absolutely essential, was our finding that atomoxetine profoundly increased CSF norepinephrine levels in these patients,” Weinshenker says. “For many drugs and trials, it is very difficult to prove target engagement. We were able to directly assess target engagement.”

Weinshenker also emphasizes that the trial grew out of pre-clinical research conducted in animal models, which demonstrated the potential for norepinephrine.

The researchers conducted the study between 2012 and 2018 with a cross-over design, such that half the group received atomoxetine for the first six months and the other half received placebo—then individuals switched. It is possible that participants who received atomoxetine for the first six months experienced carryover effects after treatment stopped, so their second six-month period wasn’t necessarily a pure placebo.

Study participants were all diagnosed with mild cognitive impairment and had markers of potential progression to Alzheimer’s in their CSF, based on measuring tau and beta-amyloid. More information about inclusion criteria is available at clinicaltrials.gov.

The researchers measured levels of dozens of proteins in participants’ CSF; the reduction of tau from atomoxetine treatment was small—about 5% over six months—but if sustained, it could have a larger effect on Alzheimer’s pathology. No significant effect on beta-amyloid was seen.

In addition, in participants taking atomoxetine, researchers were able to detect an increase in metabolism in the medial temporal lobe, critical for memory, via PET (positron emission tomography) brain imaging.

Study participants started with a low dose of atomoxetine and ramped up to a higher dose, up to 100mg per day. Participants did experience weight loss (4 pounds, on average) and an increase in heart rate (about 5 beats per minute) while on atomoxetine, but they did not display a significant increase in blood pressure. Some people reported side effects such as gastrointestinal symptoms, dry mouth, or dizziness.

The FDA approved atomoxetine in 2002 for ADHD (attention deficit hyperactivity disorder) in children and adults, and the drug has been shown to be safe in older adults. It is considered to have low abuse potential, compared with conventional stimulants that are commonly prescribed for ADHD.

Looking ahead, it is now possible to visualize the integrity of the locus coeruleus in living people using MRI techniques, so that could be an important part of a larger follow-up study, Weinshenker says. Atomoxetine’s effects were recently studied in people with Parkinson’s disease—the benefits appear to be greater in those who have reduced integrity of the locus coeruleus.

Funding for the study was provided by the Cox and Kenan Family foundations and the Alzheimer’s Drug Discovery Foundation.

Source: Emory University

Futurity

The Biblical Basis for Self-Defense – Kevin Creighton

I believe that armed self-defense is an extension of the Christian’s mandate to protect the innocent, to “watch over widows and orphans in their distress.” No one doubts that a policeman who carries a gun and watches over society can do such things and still be a Christian: Why, therefore, is there any doubt that an armed individual like me can carry a gun and watch over a small portion of society (my family) and yet still have a deep, abiding faith in God?

The Biblical Basis for Self-Defense – Ricochet

Go read the whole thing.

I like to say I am the most peaceful man you will ever know, but I am not a pacifist. Not understanding that difference can be hazardous to your health and to your ability to remain above room temperature. None of this runs counter to the doctrine of the Catholic Church or any real Christian belief. It does, however, mightily piss off the Lefties who use alleged Christian values as political jumping-off points for the most antichrist behavior they can think of.

Comic for January 16, 2022

Dilbert Daily Strip

Why Is Python Popular for Data Science?

Python is a popular high-level programming language used mainly for data science, automation, web development, and Artificial Intelligence. It is a general-purpose programming language supporting functional, object-oriented, and procedural programming. Over the years, Python has earned a reputation as the best programming language for data science, and big tech companies commonly use it for data science tasks.

In this tutorial, you will learn why Python is so popular for data science and why it will stay popular in the future.

What Can Python Be Used For?

As said earlier, Python is a general-purpose programming language, which means that it can be used for almost everything.

One common application of Python in web development is where Django or Flask is used as the backend for a website. For example, Instagram’s backend runs on Django, and it’s one of the largest deployments of Django.

You can also use Python for game development with Pygame, Kivy, Arcade, and others, though it is rarely used for that. Mobile app development is not left out either: Python offers app development libraries such as Kivy and KivyMD, which you can use to build multiplatform apps, as well as GUI toolkits like Tkinter and PyQt.

The main focus of this tutorial is the application of Python in data science. Python has proven to be the best programming language for data science, and in this tutorial you will learn why.

What Is Data Science?

According to Oracle, data science combines multiple fields, including statistics, scientific methods, artificial intelligence (AI), and data analysis, to extract value from data. It encompasses preparing data for analysis, including cleansing, aggregating, and manipulating the data to perform advanced data analysis.

Data science is applicable in different industries, and it’s helping to solve problems and discover more about the universe. In the health industry, data science helps doctors use past data to make decisions, for example about a diagnosis or the right treatment for a disease. The education sector is not left out either: thanks to data science, schools can now predict which students are at risk of dropping out.

Python Has a Simple Syntax

What else can make programming a lot easier than having an intuitive syntax? In Python, you need just one line to run your first program: simply type print(“Hello World!”) and run – it’s that easy.

Python has a very simple syntax, and it makes programming a lot easier and faster. There are no curly braces to wrap your functions in, no semicolons waiting to trip you up, and you don’t even need to import libraries before you write basic code.

This is one advantage Python has over other programming languages: you are less likely to make errors, and you can spot bugs easily.

Python Has a Wide Community

Data science is a complex field, and you can’t get far in it without help. Python offers all the help you need through its wide community: whenever you get stuck, a quick search will usually turn up the answer. Stack Overflow is a very popular website where questions and answers to programming problems are posted.

If your problem is new, which is rare, you can ask questions and people would be willing to provide answers.

Python Offers All the Libraries

Imagine you badly need water and there are two cups on the table: one is a quarter full, the other almost full. Both contain water, but you’d reach for the fuller cup because you really need water. The same applies to Python: it offers all the libraries you’d ever need for data science, so you would not want to settle for a programming language with only a few libraries available.

You will have a great experience working with these libraries because they are really easy to use. If you need to install any library, search for the library name at PyPI.org and follow the instructions towards the end of this article to install the library.

Related: Data Science Libraries for Python Every Data Scientist Should Use

Numerical Python – NumPy

NumPy is one of the most commonly used data science libraries. It lets you handle numeric and scientific tasks in Python. Data is represented using arrays (similar to Python lists, but faster and uniform in type), which can have any number of dimensions: a 1-dimensional (1D) array, a 2-dimensional (2D) array, a 3-dimensional (3D) array, and so on.
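
As a tiny illustration (the temperature values here are made up):

import numpy as np

temps = np.array([[21.5, 22.0], [19.8, 20.4]])  # a 2D (2x2) array
print(temps.shape)   # (2, 2)
print(temps.mean())  # 20.925, the mean of all four values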

Pandas

Pandas is also a popular data science library, used for data preparation, data processing, and data visualization. With Pandas, you can import data in different formats, such as CSV (comma-separated values) or TSV (tab-separated values). Like Matplotlib, Pandas allows you to make different types of plots (its plotting is in fact built on top of Matplotlib). Another cool feature is that Pandas can read the results of SQL queries, so if you have connected to your database and want to write and run SQL queries from Python, Pandas is a great choice.
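
For instance, a minimal sketch (sales.csv is a hypothetical file):

import pandas as pd

df = pd.read_csv('sales.csv')  # pass sep='\t' instead for TSV data
print(df.head())      # the first five rows
print(df.describe())  # quick summary statistics for numeric columns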

Matplotlib and Seaborn

Matplotlib is another awesome library Python offers. Its plotting interface was modeled on MATLAB, a programming environment used mainly for scientific computing and visualization. Matplotlib allows you to plot different kinds of graphs with just a few lines of code.

You can plot graphs to visualize any data, helping you to gain insights from your data, or giving you a better representation of the data. Other libraries like Pandas, Seaborn, and OpenCV also use Matplotlib for plotting sophisticated graphs.

Seaborn (not Seaborne) is built on top of Matplotlib and gives you more options, such as assigning different colors or hues to different parts of your graphs. You can plot nice graphs and customize their look to represent the data better.
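
A minimal sketch of a Matplotlib plot, using made-up points:

import matplotlib.pyplot as plt

x = [1, 2, 3, 4]
y = [1, 4, 9, 16]

plt.plot(x, y, marker='o')  # a simple line plot with point markers
plt.xlabel('x')
plt.ylabel('x squared')
plt.show()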

Open Computer Vision – OpenCV

If you want to build an Optical Character Recognition (OCR) system, a document scanner, an image filter, a motion sensor, a security system, or anything else related to computer vision, you should try OpenCV. This amazing, free library with Python bindings lets you build computer vision systems in just a few lines of code, and it can work with images, videos, or even your webcam feed.
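
As a minimal sketch (photo.jpg is a hypothetical input file):

import cv2

img = cv2.imread('photo.jpg')                 # load a hypothetical image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)  # convert it to grayscale
cv2.imwrite('photo_gray.jpg', gray)           # save the processed copy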

Scikit-learn – Sklearn

Scikit-learn is the most popular library used specifically for machine learning tasks in data science. Sklearn offers all the utilities you need to make use of your data and build machine learning models in just a few lines of code.

There is support for a wide range of machine learning tasks: linear regression (simple and multiple), logistic regression, k-nearest neighbors, naive Bayes, support vector regression, random forest regression, and polynomial regression, as well as classification and clustering tasks.
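
A minimal sketch of how few lines a model takes, fitting a simple linear regression on made-up data:

from sklearn.linear_model import LinearRegression

X = [[1], [2], [3], [4]]  # one feature per sample
y = [3, 5, 7, 9]          # made-up target following y = 2x + 1

model = LinearRegression().fit(X, y)
print(model.predict([[5]]))  # approximately [11.]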

Python Offers Data Science Tools

Beyond its simple syntax, Python has tools designed specifically with data science in mind. The first is Jupyter Notebook, an interactive development environment (shipped with the Anaconda distribution) for writing Python code for data science tasks. You can write and instantly run code in cells, group cells together, and even include documentation using its Markdown support.

A popular alternative is Google Colaboratory, also known as Google Colab. The two are similar and serve the same purpose, but Google Colab has the added advantage of cloud support: you get more storage space without having to worry about filling up your own computer. You can also share your notebooks, log in on any device to access them, or even save your notebooks to GitHub.

How to Install Any Data Science Library in Python

Assuming you already have Python installed on your computer, this step-by-step section will guide you through installing any data science library on a Windows computer. NumPy will be installed in this case; follow the steps below:

  1. Press Start and type cmd. Right-click the result and choose Run as administrator.
  2. You need PIP to install Python libraries from PyPI. If you already have it, feel free to skip this step; if not, please read how to install PIP on your computer.
  3. Type pip install numpy and press Enter to run. This installs NumPy on your computer, and you can then import and use it. You can safely ignore any warnings printed during installation. (If you use Linux or macOS, simply open a terminal and enter the same pip install command.)

It’s Time to Use Python for Data Science

Among programming languages like R, C++, and Java, Python stands out as the best for data science. This tutorial has walked you through why Python is so popular for data science. You now know what Python offers and why big companies like Google, Meta, NASA, and Tesla use Python.

Did this tutorial succeed in convincing you that Python will remain the best programming language for data science? If yes, go on and build nice data science projects; help make life easier.

MUO – Feed