Capture Your Family History With These Tools

https://i.kinja-img.com/gawker-media/image/upload/c_fill,f_auto,fl_progressive,g_center,h_675,pg_1,q_80,w_1200/b2dea49a7c3b0f4b22fdf23d46d9b7d6.jpg

Imagine popping your head into the attic of your childhood home and finding a box overflowing with recordings, photos, documents, and diaries compiled by relatives who have long passed. Congratulations: you’ve won the family history lottery.

Even if no such stash has been left behind for you, you can easily become the person who starts documenting family stories and events for curious progeny to discover one day. We’ve listed the best tools to capture your family history—from fill-in-the-blank workbooks to decent recording tools for saving audio and video.

Grab a notebook

For starters, you might want a book to record historical facts and connections you discover in your research and conversations.

One simple option is the Genealogy Organizer, a notebook to record facts like vital statistics and family connections with space for photos and notes. It’s a portable size (6×9 inches) with 100 pages for records and additional pages for notes.

For a little more guidance, try the Family Tree Workbook: 30+ Step-by-Step Worksheets to Build Your Family History. It’s the starting point for a heftier research project you can work on with other family members both older and younger than you are.

Record your relatives’ voices

You probably already have the only tool you need to start recording conversations (with proper legal consent, of course): your phone. Congratulations, you are a documentarian.

If you’re recording an in-person conversation, just center the phone between you and your subject(s) and start recording through the Voice Memo app on your iPhone. Android phones have a similar built-in audio recording app called Recorder or Voice Recorder, depending on your phone. To up your audio quality a little more, try a wireless lapel mic like the Maybesta Professional Wireless Lavalier Lapel Microphone.

Recording phone calls is trickier. Because phones can’t record a call in progress (and apps that claim to are unreliable), you probably need one device to make the call and a second device to record. Ethically, it’s nice to inform the person you are talking to that the conversation is being recorded, but it’s not always legally required. Check here to see what’s required by law in your state.

If you find yourself recording a lot of audio, you might want to pick up a separate digital recorder like the Sony ICD-PX370 Mono Digital Voice Recorder. It has space for 59 hours of recorded audio and room for an SD card if you need more. Transfer your recordings with the built-in USB.

Capture video too

If you want to record the look on Dad’s face when he talks about returning to his hometown after hitchhiking across the country, your phone will, again, do the trick. For face-to-face interviews, at least set up a small, flexible tripod so you are not fumbling with the phone.

You may also want to record videos when speaking to family members long-distance. Zoom, Skype, and Google Meet all support recording calls as a basic feature. You can also use these apps to record audio, if your family member doesn’t want video recorded for whatever reason.

Apps for recording memories

StoryCorps (available for Apple and Android)

You may recognize StoryCorps, the NPR staple that has been helping people record and share their stories for 20 years. The free app will help you with interview guidance and recording instructions. You have the option to add photos and keywords to your stories and save to your device or upload recordings to the StoryCorps Archive, which is administered by the Library of Congress.

Storycatcher (available for Apple, $4.99)

Feel like a real filmmaker when you record, edit, and share your work with others. The app comes with story prompts for your interviews and includes access to learning modules that teach you the whole process from interview tips to adding music and captions.

Remento (free for Apple phones)

Remento also includes prompts and recording tools for capturing conversations with family members. The spirit of Remento is to deepen connections with family members in the present by recording their stories and personalities as you discuss memories. Your recordings are stored locally on your device, and you decide who to share with.

More resources for starting conversations

“Tell me your life story” would be an overwhelming prompt at the start of your family history journey. Take advantage of StoryCorps’ experience by choosing some options from their list of great questions. It includes interview questions sorted by topic like raising children, religion, working, love and relationships. Compile your questions in advance and consider giving your interviewee a preview so they can give it some thought.

Even if you’re not interested in DNA tests or going down the rabbit hole of deep family genealogy, consider browsing Ancestry.com with a 14-day trial. Browsing genealogy and DNA records with a family member should elicit some stories. If that doesn’t work, try their conversation prompts to get the conversation rolling.

If you do get the itch to look more deeply into your family’s genealogy, check out the National Archives for research tips.

Tips for getting reticent family members to open up

You might get a few strange looks or “no, thanks” when you start asking relatives to share stories. For family members who are hesitant to be recorded or clam up face to face, try these sideways approaches to uncovering family history:

  • Ask them to help the kids with a family tree project. Every kid does one eventually. Let that be your opportunity to sneak in some questions about Great Aunt Beulah.
  • Give them a guided journal they can respond to in private, at their own pace. There are many versions of this concept, like the Tell Me Your Life Story series. You can even create your own guided journal by taking the questions from resources above and adding them to a blank notebook.
  • Try a service like Storyworth. It’s an investment—for $99 you get a year of weekly story prompts emailed to your family member and one printed book that compiles all the stories and photos. You can also buy additional copies of the book. This is ideal if you have a family member who dreams of writing a book, but needs some encouragement. They will receive one email per week and can respond in their own time. Everything is compiled at the end of the year. You get to choose what questions are sent each week.

Finally, when you go in search of family stories, don’t leave out your own generation. Cousins and siblings may have different recollections of events you consider canon, and they can be great co-conspirators for getting parents and grandparents to open up.

Lifehacker

How to use the new Kanban feature in Reminders on macOS Sonoma

https://photos5.appleinsider.com/gallery/55951-113589-000-lead-Kanban-Reminders-xl.jpg

Reminders now optionally features this visual layout of tasks, known as a Kanban view


Apple has added a view to Reminders that shows your tasks in the Kanban column style. Here’s how to use it on the Mac and the iPad, plus why it’s not worth bothering with on the iPhone.

Kanban has come to Reminders in macOS Sonoma, iPadOS 17, and iOS 17, and it’s yet another tucked-away feature in an app that now only pretends to be simple. It pretends very well, but Reminders is ever more powerful, and for some people this Kanban feature could be what makes them choose Apple’s app over third-party alternatives.

Apple doesn’t actually call this new feature Kanban within the Reminders app, though, so more than ever, it’s hard to find it. But if you’ve ever seen another Kanban app, such as Trello, then you’ll immediately recognize it when you see it in Reminders.

Instead of a straight list of tasks, a Kanban layout shows each To Do as its own separate graphic. It’s a very simple graphic: just the text of the To Do written over a grey, elongated lozenge-shaped background.

Kanban lets you drag tasks around between columns to give you a visual sense of your project

But you can click and grab that lozenge and drag the task around. So, pretty much invariably, a Kanban task list will have columns for tasks that are just started, in progress, or completed.

You can tick any task as done, just like with any other task in Reminders. But you can instead drag them from column to column so that you have a visual sense of everything that’s going on.

How to set up a Kanban list in Reminders on macOS Sonoma

  1. Open Reminders and click Add List at the bottom left
  2. Give the list a name and optionally set the icon and a color
  3. Leave the List Type set to Standard and click OK
  4. Right click in the empty list and choose New Section
  5. Give that a name, and add more New Sections as you want
  6. Choose the View menu and select as Columns

You don’t have to right-click to add a New Section. You can alternatively choose New Section from the File menu, or press Option-Command-N.

Apple thinks that sections will be useful in regular Reminders lists, too, and that may be so. But changing the view to as Columns is what makes this a Kanban list — at least on the Mac and the iPad.

Using the Kanban list

Each column in a Reminders Kanban list has a blank To Do already present. You can’t tick it as done, even though it has the Done circle icon, but you can click in it to write a new task.

You can also paste tasks in that you’ve copied from other lists.

You can edit and rename the Kanban columns that Apple calls Sections

When you drag a task from one Kanban column to another, it can be a little fiddly. You tend to have to drag it to just above another task, or just above the top blank one, before you can let go.

As for sections, you can choose File, Edit Sections and get a dialog listing each of them that you’ve created. Next to every section listed, there’s an edit icon and that lets you rename the sections.

Where Reminders scores and falls short

No question, this new feature lets you present your tasks more like a project plan than, say, a shopping list. You can work the tasks as normal, but you can also much more readily see that you’re ahead or behind.

You can see, perhaps, that one particular aspect of your work is somehow being slowed down and you need to look at it again. Or you can see more clearly that you have several similar tasks and so you could do them all together.

So it’s an excellent addition to Reminders, but it isn’t perfect.

For instance, you’ll tend to stop ticking tasks as done. That’s because when you tick a task in Reminders, it vanishes from the list, and once you’ve gone to the trouble of setting up all these columns, you’ll want to see the final one growing.

Then there’s the fact that each column in the Kanban list automatically has a blank task in it, which is a little untidy for Apple. It only happens when you have created sections, whether in the Kanban column view or in a regular list.

Otherwise, a Reminders list will stay completely blank, except for the title you’ve given it. And that just seems neater.

Then the whole point of Kanban is to have columns that you can drag tasks between, and the iPhone won’t show columns. It will solely show the sections as headings in a list.

Still, the more intensely you use To Do apps, the more likely it is that you do your most serious work on the Mac or perhaps the iPad. The iPhone is perfect for adding new tasks, and it can be used to find what your next task is, but you won’t tend to want to study everything visually like a project manager.

AppleInsider News

Laser Pointer x Glow-in-the-dark Record

https://theawesomer.com/photos/2023/08/laser_record_t.jpg

Laser Pointer x Glow-in-the-dark Record

Link

Compact discs use a laser to read data and convert that to music. Vinyl records, on the other hand, use a needle to pick up vibrations. Artist Tee Ken Ng used a laser pointer to do something else with a record, exposing a glow-in-the-dark record to laser light as it spun on a turntable. He made some more complicated patterns in this second video.

@teekenng Drawing with a laser pointer on a record that I covered with glow in the dark vinyl. The laser energises the phosphors in the #photoluminescent vinyl causing it to glow. As they lose their charge the older lines fade creating the trail effect. #hypnotic #vinylart #laser #spiral #liveanimation ♬ Space Walk – Lemon Jelly

The Awesomer

How To Use systemd in Linux to Configure and Manage Multiple MySQL Instances

https://www.percona.com/blog/wp-content/uploads/2023/08/Use-systemd-in-Linux-to-Configure-and-Manage-Multiple-MySQL-Instances-200×115.jpeg

This blog describes how to configure systemd for multiple instances of MySQL. With package installations of MySQL using YUM or APT, it’s easy to manage MySQL with systemctl, but how will you manage it when you install from the generic binaries?

Here, we will configure multiple MySQL instances from the generic binaries and manage them using systemd.

Why do you need multiple instances on the same server?

We will do that, but why would you need multiple instances on the same host in the first place? Why not just create another database on the same instance? In some cases, you will need multiple instances on the host. 

  1. You can have a host with two or three instances configured as delayed replicas of the source server, with an SQL delay of, say, 24, 12, and 6 (or 3) hours.
  2. Backup testing. You can run multiple instances on a server to test your backups with the correct version and configs.
  3. We split databases by function/team to give each team full autonomy over their schema, and if someone screws up, it breaks their cluster, not all databases. However, larger instances are more economical, as not all MySQL servers will always need maximum resources. So you put multiple MySQL servers on a single machine instead of multiple databases inside one MySQL instance: better failure handling, similar cost. Just don’t put all nodes of the same cluster on the same host; instead, place nodes from different clusters together on one host.
  4. In very large sharded deployments, a user may install multiple mysqlds per server to reduce contention, i.e., they get more performance from a 2-socket server with four or eight mysqlds than with one. AFAIK, Facebook does this.

The original motivation for FB was different hardware generations, especially between regions/data centers. For example, an older data center may have smaller/less powerful machines, so they run fewer mysqlds per host there to compensate. There were other exceptions too, like an abnormally large special-case shard needing dedicated machines.

That said, the other performance motivations mentioned above did play into it, especially before the days of multi-threaded replication. And I agree that in the modern age of cloud and huge flash storage, the vast majority of companies will never need to consider doing this in prod, but there is always a chance you will need it.

Install MySQL

To install and use a MySQL binary distribution, the command sequence looks like this:

# On RHEL-family systems the packages are libaio/libaio-devel and numactl
# (libaio1/libaio-dev are the Debian/Ubuntu equivalents)
yum install libaio libaio-devel numactl
# Create the mysql group before the user that joins it
groupadd mysql
useradd -r -g mysql -s /bin/false mysql
cd /usr/local/
tar xvfz /root/Percona-Server-8.0.19-10-Linux.x86_64.ssl101.tar.gz
ln -s /usr/local/Percona-Server-8.0.19-10-Linux.x86_64.ssl101/ mysql
# Data and tmp directories used by the instance configs below
mkdir -p /data/mysql/{3306,3307}/tmp
chown -R mysql:mysql /data
chmod -R 750 /data/mysql/{3306,3307}

Create MySQL configuration for each instance

Below is the configuration I placed in /etc/prod3306.cnf; it contains a group for each instance. My naming convention is prod3306 and prod3307, and I carry it into the configuration filename, /etc/prod3306.cnf. I could have done my.cnf.instance or instance.my.cnf instead.

[root@ip-172-31-128-38 share]# cat  /etc/prod3306.cnf

[mysqld@prod3306]
datadir=/data/mysql/3306
socket=/data/mysql/3306/prod3306.sock
mysqlx_socket=/data/mysql/3306/prod3306x.sock
log-error=/data/mysql/prod3306.err
port=3306
mysqlx_port=33060
server-id=1336
slow_query_log_file=/data/mysql/3306/slowqueries.log
innodb_buffer_pool_size = 50G
lower_case_table_names=0
tmpdir=/data/mysql/3306/tmp/
log_bin=/data/mysql/3306/prod3306-bin
relay_log=/data/mysql/3306/prod3306-relay-bin
lc_messages_dir=/usr/local/mysql/share


[mysqld@prod3307]
datadir=/data/mysql/3307
socket=/data/mysql/3307/prod3307.sock
mysqlx_socket=/data/mysql/3307/prod3307x.sock
log-error=/data/mysql/prod3307.err
port=3307
mysqlx_port=33070
server-id=2337
slow_query_log_file=/data/mysql/3307/slowqueries.log
innodb_buffer_pool_size = 50G
lower_case_table_names=0
lc_messages_dir=/usr/local/mysql/share
tmpdir=/data/mysql/3307/tmp/
log_bin=/data/mysql/3307/prod3307-bin
relay_log=/data/mysql/3307/prod3307-relay-bin

The lc_messages_dir=/usr/local/mysql/share setting is required when your MySQL base directory is not the default one, so I had to pass the path explicitly; otherwise, MySQL won’t start.

Initialize instance

Initialize each data directory. Because the commands below use --initialize-insecure, the root account is created with an empty password, so log in and set proper passwords as soon as the MySQL instances are started. (With --initialize instead, a temporary password is written to the error log file.)

ln -s /usr/local/mysql/bin/mysqld /usr/bin
mysqld --no-defaults --initialize-insecure --user=mysql --datadir=/data/mysql/3307 --lower_case_table_names=0
mysqld --no-defaults --initialize-insecure --user=mysql --datadir=/data/mysql/3306 --lower_case_table_names=0

Configure the systemd service

Create the systemd template unit mysqld@.service (shown below under /usr/lib/systemd/system/; /etc/systemd/system/ also works and takes precedence) and place the following contents inside. This is where the naming convention of the MySQL instances comes into effect: in the unit file, %I is replaced with the instance name that follows the @ when you start the service, such as prod3306.

[root@ip-172-31-128-38 share]# cat /usr/lib/systemd/system/mysqld@.service
# Copyright (c) 2016, 2021, Oracle and/or its affiliates.
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License, version 2.0,
# as published by the Free Software Foundation.
#
# This program is also distributed with certain software (including
# but not limited to OpenSSL) that is licensed under separate terms,
# as designated in a particular file or component or in included license
# documentation.  The authors of MySQL hereby grant you an additional
# permission to link the program and your derivative works with the
# separately licensed software that they have included with MySQL.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License, version 2.0, for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA  02110-1301 USA
#
# systemd service file for MySQL forking server
#

[Unit]
Description=MySQL Server
Documentation=man:mysqld(8)
Documentation=http://dev.mysql.com/doc/refman/en/using-systemd.html
After=network.target
After=syslog.target

[Install]
WantedBy=multi-user.target
[Service]
User=mysql
Group=mysql
Type=forking
PIDFile=/data/mysql/mysqld-%i.pid
# Disable service start and stop timeout logic of systemd for mysqld service.
TimeoutSec=0
# Execute pre and post scripts as root
PermissionsStartOnly=true
# Needed to create system tables
#ExecStartPre=/usr/bin/mysqld_pre_systemd %I
# Start main service
ExecStart=/usr/bin/mysqld --defaults-file=/etc/prod3306.cnf --defaults-group-suffix=@%I --daemonize --pid-file=/data/mysql/mysqld-%i.pid $MYSQLD_OPTS

# Use this to switch malloc implementation
EnvironmentFile=-/etc/sysconfig/mysql
# Sets open_files_limit
LimitNOFILE = 65536
Restart=on-failure
RestartPreventExitStatus=1
Environment=MYSQLD_PARENT_PID=1
PrivateTmp=false
[root@ip-172-31-128-38 share]#

Reload daemon

systemctl daemon-reload

Start MySQL

systemctl start mysqld@prod3307

systemctl start mysqld@prod3306

Enable MySQL service

systemctl enable mysqld@prod3307

systemctl enable mysqld@prod3306
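
Verify both instances

As a quick sanity check, confirm both services are running and query each instance through its own socket. This is only a sketch: the socket paths come from the configuration above, and the client path assumes the generic binaries installed under /usr/local/mysql earlier (with --initialize-insecure, root initially has an empty password).

systemctl status mysqld@prod3306 mysqld@prod3307

/usr/local/mysql/bin/mysql -uroot -S /data/mysql/3306/prod3306.sock -e "SELECT @@port, @@datadir"

/usr/local/mysql/bin/mysql -uroot -S /data/mysql/3307/prod3307.sock -e "SELECT @@port, @@datadir"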

Error log for each instance

[root@ip-172-31-128-38 3307]# tail -5 /data/mysql/prod3306.err

2023-07-10T05:26:42.521994Z 0 [System] [MY-010910] [Server] /usr/bin/mysqld: Shutdown complete (mysqld 8.0.19-10)  Percona Server (GPL), Release 10, Revision f446c04.

2023-07-10T05:26:48.210107Z 0 [System] [MY-010116] [Server] /usr/bin/mysqld (mysqld 8.0.19-10) starting as process 20477

2023-07-10T05:26:52.094196Z 0 [Warning] [MY-010068] [Server] CA certificate ca.pem is self signed.

2023-07-10T05:26:52.112887Z 0 [System] [MY-010931] [Server] /usr/bin/mysqld: ready for connections. Version: '8.0.19-10'  socket: '/data/mysql/3306/prod3306.sock'  port: 3306  Percona Server (GPL), Release 10, Revision f446c04.

2023-07-10T05:26:52.261062Z 0 [System] [MY-011323] [Server] X Plugin ready for connections. Socket: '/data/mysql/3306/prod3306x.sock' bind-address: '::' port: 33060

[root@ip-172-31-128-38 3307]# tail -5 /data/mysql/prod3307.err

2023-07-10T05:26:36.032160Z 0 [System] [MY-010910] [Server] /usr/bin/mysqld: Shutdown complete (mysqld 8.0.19-10)  Percona Server (GPL), Release 10, Revision f446c04.

2023-07-10T05:26:58.328962Z 0 [System] [MY-010116] [Server] /usr/bin/mysqld (mysqld 8.0.19-10) starting as process 20546

2023-07-10T05:27:02.179449Z 0 [Warning] [MY-010068] [Server] CA certificate ca.pem is self signed.

2023-07-10T05:27:02.198092Z 0 [System] [MY-010931] [Server] /usr/bin/mysqld: ready for connections. Version: '8.0.19-10'  socket: '/data/mysql/3307/prod3307.sock'  port: 3307  Percona Server (GPL), Release 10, Revision f446c04.

2023-07-10T05:27:02.346514Z 0 [System] [MY-011323] [Server] X Plugin ready for connections. Socket: '/data/mysql/3307/prod3307x.sock' bind-address: '::' port: 33070

[root@ip-172-31-128-38 3307]#

Conclusion

Utilizing systemctl to control MySQL significantly simplifies the management of MySQL instances. This approach facilitates the easy configuration of multiple instances, extending beyond two, and streamlines the overall administration process. However, it is essential to be mindful of memory allocation when setting up multiple MySQL instances on a single server. Allocating memory appropriately for each MySQL instance ensures sufficient overhead and optimal performance.


Planet MySQL

Welcome to Dolphie !

https://i0.wp.com/lefred.be/wp-content/uploads/2023/08/Screenshot-from-2023-08-18-15-01-29.png?w=1688&ssl=1

There are plenty of GUI and web applications used to monitor a MySQL server. But if you are a long-time MySQL DBA, you might have used (and abused) Innotop!

I loved it! And I even became a maintainer of it. That particular task became more and more complicated with the different forks and their differences. Also, let’s be honest, Perl saved my life so many times in the past… but that was in the past. These days, having Perl on a system is more complicated.

But Innotop is still very popular in the MySQL world, and to help me maintain it, I would like to welcome a new member to the maintainer group: yoku0825. Tsubasa Tanaka has been a long-time user of and contributor to Innotop, and I’m sure he will keep up the good work.

I’ve tried to find an alternative to Innotop, and I even wrote my own clone in Go for MySQL 8.0: innotopgo. But some limitations of the framework I used affected my motivation…

But some time ago, Charles Thompson contacted me about a new tool he was writing. He was looking for feedback.

The tool was very promising, and this week he finally released it!

The tool is written in Python 3, and it’s very easy to modify and contribute code to.

Dolphie, the name of the tool, is available on GitHub and can easily be installed using pip:

$ pip install dolphie

Dolphie is already very complete and supports several new features available in MySQL 8.0.

For example, I really like the Transaction History view, which displays the statements that were executed inside a running transaction:

Initial Dashboard

Dolphie also integrates the error log from performance_schema:

And it also allows searches:

Trending

Dolphie also provides some very interesting trending graphs that can be used to look at performance issues.

This is an example:

The best way to discover all its possibilities is to install and test it.

Conclusion

Dolphie is a brand new Open Source (GPLv3) tool for MySQL DBAs, made for the Community by the Community. It’s very easy to get involved, as Dolphie is written in Python, and Charles, its author, is very responsive in implementing features and solving problems.

I really encourage you to test it and submit bugs, feature requests and, of course, contributions!

Welcome, Dolphie, and long life!

Planet MySQL

How to Monitor Network Usage for Processes on Linux

https://static1.makeuseofimages.com/wordpress/wp-content/uploads/2023/08/network-switch-cables.jpg

Internet access is essential, but you may wonder which Linux processes use your connection the most on your computer. Fortunately, with some common Linux utilities, monitoring which processes use your bandwidth is easy. Here are some of them:

1. nethogs

nethogs is a program that does for internet connections what htop or top does for CPU and memory usage. It shows you a snapshot of which processes are accessing the network.

Like top, htop, or atop, nethogs is a full-screen program that updates after a few seconds to show you the current network connections by processes.

Installing nethogs is simple. You just go through your package manager.

For example, on Debian and Ubuntu:

 sudo apt install nethogs 

And on Arch Linux:

 sudo pacman -S nethogs 

On the Red Hat family:

 sudo dnf install nethogs 

To run nethogs, you’ll need to be root:

 sudo nethogs 
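
By default, nethogs watches all of your network interfaces. You can also pass a refresh interval in seconds with -d and limit monitoring to specific interfaces; the interface name below is only an example (yours may be eth0, wlan0, enp3s0, and so on):

 sudo nethogs -d 5 eth0 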

It’s possible to set it so that you can run nethogs as a regular user using this command:

 sudo setcap "cap_net_admin,cap_net_raw+pe" /path/to/nethogs 

You should replace “/path/to/nethogs” with the absolute pathname of nethogs. You can find this with the which command:

 which nethogs 
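
You can also combine the two into a single command, assuming which resolves to the actual nethogs binary rather than an alias or wrapper:

 sudo setcap "cap_net_admin,cap_net_raw+pe" "$(which nethogs)" 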

2. lsof

While lsof is a utility for listing open files, it can also list open network connections. The -i option lists internet connections attached to running processes on the system. On Linux, everything is a file, after all.

To see current internet connections, use this command:

 lsof -i 

lsof will show you the name of any commands with open internet connections, the PID, the file descriptor, the type of internet connection, the size, the protocol, and the formal file name of the connection.

Using the -i4 and -i6 options allows you to view connections using IPv4 or IPv6.
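
For example, to limit the listing to one protocol family:

 lsof -i4 

 lsof -i6 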

There’s a good chance you have lsof installed already. It’s also easy to install on major Linux distros if it isn’t.

On Debian and Ubuntu, type:

 sudo apt install lsof 

And on Arch:

 sudo pacman -S lsof 

On the Red Hat family of distros:

 sudo dnf install lsof 

3. netstat

netstat is a powerful program in its own right, letting you see network connections on your system. By default, it doesn’t show you which processes those connections belong to, but as with lsof, a command-line option adds that information.

netstat is part of the net-tools package. You can install it on most Linux distros using the default package manager.

For example, on Debian or Ubuntu:

 sudo apt install net-tools

On Arch Linux:

 sudo pacman -S net-tools 

To install netstat on Fedora, CentOS, and RHEL, run:

 sudo dnf install net-tools 

You can run netstat at the command line. By default, it will show you information such as the protocol, the address, and the state of the connection, but the -p option adds a column that shows the process ID and the command name.

 netstat -p 

When you run it, netstat will just list all the network connections and then exit. With the -c option, you can see a continually updated list of connections:

 netstat -pc 

This would be similar to using a screen-oriented program like nethogs, but the advantage of doing it this way is that you can pipe the output into another program like grep or a pager to examine it:

 netstat -p | grep 'systemd' 

To see all of the processes with network connections on your system, you may have to run netstat as root:

 sudo netstat -p 

Now You Can See Which Linux Apps Are Gobbling Up Your Bandwidth

Linux, like many modern OSes, is intimately connected to the internet. It can be difficult at times to track down which processes are using your bandwidth. With tools like nethogs, lsof, and netstat, you can track down processes that have open connections.

Processes sometimes go haywire, even with connections. On Linux, you can easily terminate any rogue processes.

MakeUseOf

11 MongoDB Queries and Operations You Must Know

https://static1.makeuseofimages.com/wordpress/wp-content/uploads/2023/08/mongodb-queries-you-must-know.jpg

MongoDB is one of the most desired and admired NoSQL databases for professional development. Its flexibility, scalability, and ability to handle large volumes of data make it a top choice for modern applications. If you want to master MongoDB’s regular queries and operations, you’re in the right place.

Whether you’re looking to efficiently retrieve and manipulate data, implement robust data models, or build responsive applications, acquiring a deep understanding of common MongoDB queries and operations will undoubtedly enhance your skills.

1. Create or Switch Databases

Creating a database via the MongoDB Shell is straightforward, whether you’re working locally or you’ve set up a remote cluster. You can create a new database in MongoDB with the use command:

 use db_name 

The above command switches to db_name, creating the database if it doesn’t already exist (it is only materialized once you insert data into it).
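
You can see this lazy creation in action in the shell; test here is just a throwaway collection name used for illustration:

 use db_name
 // db_name will not appear in "show dbs" until it contains data
 show dbs
 db.test.insertOne({"x": 1})
 // now the database is listed
 show dbs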

2. Drop Database

First, switch to the database you want to drop using the use command as done previously. Then drop the database using the dropDatabase() command:

 use db_name
db.dropDatabase()

3. Create a Collection

To create a collection, switch to the target database. Then use the createCollection() method to make a new MongoDB collection:

 db.createCollection("collection_name")

Replace collection_name with your chosen collection name.

4. Insert Document Into a Collection

While sending data to a collection, you can insert a single document or an array of documents.

To insert a single document:

 db.collection_name.insertOne({"Name":"Idowu", "Likes":"Chess"})

The insertOne() method accepts only a single document, so if you want to store several related objects under one document ID, nest them in an array field inside that document:

 db.collection_name.insertOne({"docs": [{"Name":"Idowu", "Likes":"Chess"}, {"Language": "Mongo", "is_admin": true}]})

To insert many documents at once, with each getting its own ID, use the insertMany() method:

 db.collection_name.insertMany([{"Name":"Idowu", "Likes":"Chess"}, {"Name": "Paul", "Likes": "Wordle"}])

5. Get All Documents From a Collection

You can query all documents from a collection using the find() keyword:

 db.collection_name.find()

The above returns all the documents inside the specified collection:

You can also limit the returned data to a specific number. For instance, you can use the following command to get only the first two documents:

 db.collection_name.find().limit(2)

6. Filter Documents in a Collection

There are many ways to filter documents in MongoDB. Consider the following data, for instance:

To filter on a specific field, pass a query document to the find method; the second argument controls which fields are returned:

 db.collection_name.find({"Likes":"Wordle"}, {"_id":0, "Name":1})

The above returns all documents where the value of Likes is Wordle. It only outputs the names and ignores the document ID.

You can also filter a collection by a numerical factor. Say you want to get the names of all users older than 21, use the $gt operator:

 db.collection_name.find({"Likes":"Chess", "Age":{"$gt":21}}, {"_id":0, "Name":1})

The output looks like so:

Try replacing find with findOne to see what happens. There are many other filtering operators, too (an example follows the list below):

  • $lt: All values less than the specified one.
  • $gte: Values equal to or greater than the specified one.
  • $lte: Values that are less than or equal to the defined one.
  • $eq: Gets all values equal to the specified one.
  • $ne: All values not equal to the specified one.
  • $in: Use this when querying based on an array. It gets all values matching any of the items in the array. The $nin keyword does the opposite.
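
For instance, the $in operator matches any value in a given array; here is a quick sketch against the same sample data used above:

 db.collection_name.find({"Likes":{"$in":["Chess", "Wordle"]}}, {"_id":0, "Name":1})

This returns the names of everyone whose Likes value is either Chess or Wordle.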

7. Sort Queries

Sorting arranges query results in a specific order, either ascending or descending. The number you pass for each field is a direction flag: 1 sorts in ascending order and -1 in descending order.

For instance, to sort in ascending order:

 db.collection_name.find({"Likes":"Chess"}).sort({"Age":1})

To sort out the above query in descending order, replace “1” with “-1.”

 db.collection_name.find({"Likes":"Chess"}).sort({"Age":-1})

8. Update a Document

MongoDB updates require atomic operators to specify how you want the update done. Here is a list of commonly used atomic operators you can pair with an update query:

  • $set: Add a new field or change an existing field.
  • $push: Insert a new item into an array. Pair it with the $each operator to insert many items at once.
  • $pull: Remove an item from an array. Use it with $in to remove many items at one go.
  • $unset: Remove a field from a document.

To update a document and add a new field, for example:

 db.collection_name.updateOne({"Name":"Sandy"}, {"$set":{"Name":"James", "email":"example@gmail.com"}})

The above updates the specified document as shown:

Removing the email field is straightforward with the $unset operator:

 db.collection_name.updateOne({"Name":"Sandy"}, {"$unset":{"email":"example@gmail.com"}})

Consider the following sample data:

You can insert an item into the existing items array field using the $push operator:

 db.collection_name.updateOne({"Name":"Pete"}, {"$push":{"items":"Plantain"}})

Here’s the output:

Use the $each operator to insert many items at once:

 db.collection_name.updateOne({"Name":"Pete"}, {"$push":{"items": {"$each":["Almond", "Melon"]}}})

Here’s the output:

As mentioned, the $pull operator removes an item from an array:

 db.collection_name.updateOne({"Name":"Pete"}, {"$pull":{"items":"Plantain"}})

The updated data looks like so:

Include the $in keyword to remove many items in an array at one go:

 db.collection_name.updateOne({"Name":"Pete"}, {"$pull":{"items": {"$in":["Almond", "Melon"]} }}) 

9. Delete a Document or a Field

The deleteOne and deleteMany methods remove documents from a collection. Use deleteOne to remove a single document based on a specified field:

 db.collection_name.deleteOne({"Name":"IDNoble"})

If you want to delete many documents with keys in common, use deleteMany instead. The query below deletes all documents containing Chess as their Likes.

 db.collection.deleteMany({"Likes":"Chess"})

10. Indexing Operation

Indexing improves query performance by reducing the number of documents MongoDB needs to scan. It’s often best to create an index on the fields you query most frequently.

MongoDB indexing is similar to how you use indexes to optimize SQL queries. For instance, to create an ascending index on the Name field:

 db.collection.createIndex({"Name":1})

To list your indexes:

 db.collection.getIndexes()

The above only scratches the surface; there are several other methods for creating an index in MongoDB.

11. Aggregation

The aggregation pipeline, an improved version of MapReduce, allows you to run and store complex calculations from inside MongoDB. Unlike MapReduce, which requires writing the map and the reduce functions in separate JavaScript functions, aggregation is straightforward and only uses built-in MongoDB methods.

Consider the following sales data, for example:

Using MongoDB’s aggregation, you can calculate and store the total number of products sold for each category as follows:

 db.sales.aggregate([{$group:{"_id":"$Section", "totalSold":{$sum:"$Sold"}}}, {$project:{"_id":0, "totalSold":1, "Section":"$_id"}}])

The above query returns the following:

Master MongoDB Queries

MongoDB offers many querying methods, including features to improve query performance. Regardless of your programming language, the above query structures are fundamental to interacting with a MongoDB database.

There may be some discrepancies in syntax, though. For example, while drivers for some programming languages like Python use snake case, others, including JavaScript, use camel case. Ensure you research what works for your chosen technology.

MakeUseOf

Stack Abuse: How to Select Columns in Pandas Based on a String Prefix


Introduction

Pandas is a powerful Python library for working with and analyzing data. One operation that you might need to perform when working with data in Pandas is selecting columns based on their string prefix. This can be useful when you have a large DataFrame and you want to focus on specific columns that share a common prefix.

In this Byte, we’ll explore a few methods to achieve this, including creating a series to select columns and using DataFrame.loc.

Select All Columns Starting with a Given String

Let’s start with a simple DataFrame:

import pandas as pd

data = {
    'item1': [1, 2, 3],
    'item2': [4, 5, 6],
    'stuff1': [7, 8, 9],
    'stuff2': [10, 11, 12]
}
df = pd.DataFrame(data)
print(df)

Output:

   item1  item2  stuff1  stuff2
0      1      4       7      10
1      2      5       8      11
2      3      6       9      12

To select columns that start with ‘item’, you can use list comprehension:

selected_columns = [column for column in df.columns if column.startswith('item')]
print(df[selected_columns])

Output:

   item1  item2
0      1      4
1      2      5
2      3      6

Creating a Series to Select Columns

Another approach to select columns based on their string prefix is to create a Series object from the DataFrame columns, and then use the str.startswith() method. This method returns a boolean Series where a True value means that the column name starts with the specified string.

selected_columns = pd.Series(df.columns).str.startswith('item')
print(df.loc[:, selected_columns])

Output:

   item1  item2
0      1      4
1      2      5
2      3      6

Using DataFrame.loc to Select Columns

The DataFrame.loc indexer is primarily label-based, but it may also be used with a boolean array. (The older ix indexer was deprecated and has since been removed because of a number of problems.) .loc will raise a KeyError when the requested labels are not found.

Consider the following example:

selected_columns = df.columns[df.columns.str.startswith('item')]
print(df.loc[:, selected_columns])

Output:

   item1  item2
0      1      4
1      2      5
2      3      6

Here, we first create a boolean array that is True for columns starting with ‘item’. Then, we use this array to select the corresponding columns from the DataFrame using the .loc indexer. This method is more efficient than the previous ones, especially for large DataFrames, as it avoids creating an intermediate list or Series.

Applying DataFrame.filter() for Column Selection

The filter() function in pandas DataFrame provides a flexible and efficient way to select columns based on their names. It is especially useful when dealing with large datasets with many columns.

The filter() function allows us to select columns based on their labels. We can use the like parameter to specify a string pattern that matches the column names. However, if we want to select columns based on a string prefix, we can use the regex parameter.

Here’s an example:

import pandas as pd

# Create a DataFrame
df = pd.DataFrame({
    'product_id': [101, 102, 103, 104],
    'product_name': ['apple', 'banana', 'cherry', 'date'],
    'product_price': [1.2, 0.5, 0.75, 1.3],
    'product_weight': [150, 120, 50, 60]
})

# Select columns that start with 'product'
df_filtered = df.filter(regex='^product')

print(df_filtered)

This will output:

   product_id product_name  product_price  product_weight
0         101        apple           1.20             150
1         102       banana           0.50             120
2         103       cherry           0.75              50
3         104         date           1.30              60

In the above code, the ^ symbol is a regular expression that matches the start of a string. Therefore, '^product' will match all column names that start with ‘product’.
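
If a plain substring match is enough, the like parameter gives the same result here without writing a regex; this is a small sketch reusing the df defined above:

df_filtered = df.filter(like='product')
print(df_filtered)

Because every column in this DataFrame starts with ‘product’, the output is identical to the regex version.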

Note: The filter() function returns a new DataFrame, so any modifications to the new DataFrame will not affect the original DataFrame.

Conclusion

In this Byte, we explored different ways to select columns in a pandas DataFrame based on a string prefix. We learned how to create a Series and use it to select columns, how to use the DataFrame.loc function, and how to apply the DataFrame.filter() function. Of course, each of these methods has its own advantages and use cases. The choice of method depends on the specific requirements of your data analysis task.

Planet Python

The Ryobi Telescoping Power Scrubber Is the Best at Keeping My Tile Floors Clean

https://i.kinja-img.com/gawker-media/image/upload/c_fill,f_auto,fl_progressive,g_center,h_675,pg_1,q_80,w_1200/d8dad39f888525d8adec05e95b0dbfda.png

The white tiles in the living area of my home are an abomination: I track in dirt from the garden constantly, my dog is forever bursting through the doggie door with muddy glee, and I seem to cook with the spirit of Ratatouille, absentmindedly splashing food all about. I find myself in constant pursuit of cleaning tools that will make my home seem like less of a disaster, and as a result, I’ve bought an embarrassing number of devices that promised to truly scrub my floor clean.

Among them, I’ve tried the Hoover SpinScrub (a precursor to this model), various steam cleaners, many tonics and potions, and even your plain old handheld scrub brush, because it finally seemed that every product that promised to really scrub away serious dirt on your tiles paled in comparison to just getting down and scrubbing the floor yourself. But one night while perusing a Home Depot ad, I saw it: It gleamed bright yellow, and it promised to answer all my problems. And it actually lived up to that promise.

This is the best floor scrubber

The best floor scrubber is the Ryobi Telescoping Power Scrubber. Just look at it. It is quite literally a cordless powerhouse. Although it comes with a medium hard brush, you can also buy soft and hard brush heads for it. Ostensibly, it’s for scrubbing your car or boat exterior, perhaps your roof or house siding.

But if you’re looking for clean tile, there is nothing on the market like this tool.

How a non-expert (me) uses a power scrubber on floors

I use the medium hardness brush, and I work the floor in sections, with a spray bottle of water in one hand, a container of Bar Keepers Friend, and a towel. (The only advantage that more traditional floor scrubbers have is their onboard water source. The Ryobi power scrubber has none of that, but to me, that’s a non-issue given the way it performs.)

The towel is on the floor, and I stand on it. You spray the floor in front of you, sprinkle it with Bar Keepers Friend, and then go to town with your scrubber. As you move forward, keep the towel under your feet, using it to mop up any water as you go. When you get to the end of the hall or room, you may need to give the wall trim a quick wipe for any splatter, but it’s pretty minimal.

The upside of this is incredibly satisfyingly clean tile. Every groove, every niche is clean. The downside is that you’ve likely taken off any sealer on the tile, so that might be worth refreshing with a sealer, which is easy enough. (You can even do so with the scrubber by swapping the head for one of the soft heads like the microfiber cloth.) In between serious cleanings, you can skip the Bar Keepers Friend and use water alone or a mopping solution, but really, the scrubber is doing the majority of the work.

Maintaining the Ryobi Power Scrubber

To wash the scrubber, you disconnect the head and throw it in the dishwasher. Disconnect the battery and recharge it. I can even use one of my smaller 1.5 Ah batteries with the scrubber and get a full house clean at once.

People tend to be loyalists when they get into a line of tools. If they start with Makita, they’ll stick with it, and DeWalt folks are die-hards. Like a lot of people, I started with Ryobi because of the price point and its absurdly wide selection of tools in the cordless series. I’ve stuck with the line because I’ve genuinely had a lot of success with it as I’ve grown my collection. I find the batteries stay well charged (and I haven’t had one die yet). I recommend buying bare tools (without battery packs) as soon as you’ve acquired a few chargers, and only getting the higher-end batteries. I have two 4 Ah batteries and I almost never find myself needing another. Ryobi has really expanded the line into a lot of consumer-friendly pieces like fans and air compressors, and it has invested in its brushless cordless line, a series of tools that is less likely to burn out a motor while also being more powerful. All this to say, I wasn’t surprised Ryobi had a great tool solution here.

For what it’s worth, Ryobi also makes a handheld scrubber, and if I hadn’t previously picked up some brush heads that I can just throw on my Ryobi brushless hammer drill for scrubbing smaller surfaces like sinks and bathtubs, I’d have picked that up as well.

Lifehacker