Welcome to Dolphie!

https://i0.wp.com/lefred.be/wp-content/uploads/2023/08/Screenshot-from-2023-08-18-15-01-29.png?w=1688&ssl=1

There are plenty of GUI and web applications used to monitor a MySQL server. But if you are a long-time MySQL DBA, you might have used (and abused) Innotop!

I loved it! I even became a maintainer of it. That task became more and more complicated with the different forks and their differences. Also, let’s be honest, Perl saved my life so many times in the past… but that was in the past. These days, having Perl on a system is more complicated.

But Innotop is still very popular in the MySQL world, and to help me maintain it, I would like to welcome a new member to the maintainer group: yoku0825. Tsubasa Tanaka has been a long-time user of and contributor to Innotop, and I’m sure he will keep up the good work.

I’ve tried to find an alternative to Innotop, and I even wrote my own clone in Go for MySQL 8.0: innotopgo. But some limitations of the framework I used affected my motivation…

Then, some time ago, Charles Thompson contacted me about a new tool he was writing. He was looking for feedback.

The tool was very promising, and this week he finally released it!

The tool is written in Python 3, and it’s very easy to modify and contribute code to.

Dolphie, the name of the tool, is available on GitHub and can easily be installed using pip:

$ pip install dolphie
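
Once installed, you point Dolphie at a server much like the standard mysql client. The exact flags are an assumption here (they appear to mirror the usual client conventions), so check dolphie --help for the options your version supports:

$ dolphie --help
$ dolphie -u root -p mypassword -h 127.0.0.1 -P 3306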

Dolphie is already very complete and supports several new features available in MySQL 8.0.

For example, I really like the Transaction History view, which displays the statements that were executed inside a running transaction:

Initial Dashboard

Dolphie also integrates the error log from Performance Schema:

And it also allows searches:

Trending

Dolphie also provides some very interesting trending graphs that can be used to look at performance issues.

This is an example:

The best way to discover all its possibilities is to install and test it.

Conclusion

Dolphie is a brand new Open Source (GPLv3) tool for MySQL DBAs, made for the Community by the Community. It’s very easy to get involved, as Dolphie is written in Python, and Charles, its author, is very responsive in implementing features and solving problems.

I really encourage you to test it and submit bugs, feature requests and, of course, contributions!

Welcome, Dolphie, and long life!

Planet MySQL

How To Use systemd in Linux to Configure and Manage Multiple MySQL Instances

https://www.percona.com/blog/wp-content/uploads/2023/08/Use-systemd-in-Linux-to-Configure-and-Manage-Multiple-MySQL-Instances-200×115.jpeg

This blog describes how to configure systemd for multiple instances of MySQL. With package installations of MySQL using YUM or APT, it’s easy to manage MySQL with systemctl, but how will you manage it when you install from the generic binaries?

Here, we will configure multiple MySQL instances from the generic binaries and manage them using systemd.

Why do you need multiple instances on the same server?

We will get to that, but why would you need multiple instances on the same host in the first place? Why not just create another database on the same instance? In some cases, you really do need multiple instances on a host:

  1. You can have a host with two or three instances configured as delayed replicas of the source server, with SQL delays of, let’s say, 24, 12, and 6 or 3 hours.
  2. Backup testing. You can run multiple instances on a server to test your backups with the correct version and configs.
  3. We split databases by function/team to give each team full autonomy over their schema, and if someone screws up, it breaks their cluster, not all databases. However, larger instances are more economical, as not all MySQL servers will always need maximum resources. So you put multiple MySQL servers on a single machine instead of multiple databases inside one MySQL instance: better failure handling, similar cost. Do not put all nodes of the same cluster on the same host, though; instead, have nodes of different clusters share a host.
  4. Cases where (in very large sharded deployments) a user will install multiple mysqlds per server to reduce contention, i.e., they get more performance per 2-socket server with four or eight mysqlds than with one.  AFAIK, Facebook does this.

The original motivation for FB was due to different hardware generations, especially between regions/data centers. For example, an older data center may have smaller/less powerful machines, so they run fewer mysqld per host there to compensate. There were other exceptions, too, like abnormally large special-case-shard needing dedicated machines.

That said, other performance motivations mentioned above did play into it, especially before the days of multi-threaded replication. And I agree that in the modern age of cloud and huge flash storage, the vast majority of companies will never need to consider doing this in prod, but there is always a chance of its need. 

Install MySQL

To install and use a MySQL binary distribution, the command sequence looks like this:

yum install libaio numactl
groupadd mysql
useradd -r -g mysql -s /bin/false mysql
cd /usr/local/
tar xvfz /root/Percona-Server-8.0.19-10-Linux.x86_64.ssl101.tar.gz
ln -s /usr/local/Percona-Server-8.0.19-10-Linux.x86_64.ssl101/ mysql
mkdir -p /data/mysql/{3306,3307}
chown -R mysql:mysql /data
chmod -R 750 /data/mysql/{3306,3307}

Create MySQL configuration for each instance

Below is an example of the configuration I placed in /etc/prod3306.cnf. My naming convention is prod3306 and prod3307, and I carry it into the configuration filename /etc/prod3306.cnf (I could have done my.cnf.instance or instance.my.cnf instead). Note that this single file contains the option groups for both instances; the systemd unit below picks the right group per instance via --defaults-group-suffix.

[root@ip-172-31-128-38 share]# cat  /etc/prod3306.cnf

[mysqld@prod3306]
datadir=/data/mysql/3306
socket=/data/mysql/3306/prod3306.sock
mysqlx_socket=/data/mysql/3306/prod3306x.sock
log-error=/data/mysql/prod3306.err
port=3306
mysqlx_port=33060
server-id=1336
slow_query_log_file=/data/mysql/3306/slowqueries.log
innodb_buffer_pool_size = 50G
lower_case_table_names=0
tmpdir=/data/mysql/3306/tmp/
log_bin=/data/mysql/3306/prod3306-bin
relay_log=/data/mysql/3306/prod3306-relay-bin
lc_messages_dir=/usr/local/mysql/share


[mysqld@prod3307]
datadir=/data/mysql/3307
socket=/data/mysql/3307/prod3307.sock
mysqlx_socket=/data/mysql/3307/prod3307x.sock
log-error=/data/mysql/prod3307.err
port=3307
mysqlx_port=33070
server-id=2337
slow_query_log_file=/data/mysql/3307/slowqueries.log
innodb_buffer_pool_size = 50G
lower_case_table_names=0
lc_messages_dir=/usr/local/mysql/share
tmpdir=/data/mysql/3307/tmp/
log_bin=/data/mysql/3307/prod3307-bin
relay_log=/data/mysql/3307/prod3307-relay-bin

The lc_messages_dir=/usr/local/mysql/share setting is required when your MySQL binaries’ base directory is not the default one, so I had to pass the path explicitly; otherwise, MySQL won’t start.

Initialize instance

Initialize each instance’s database. Note that with --initialize-insecure no temporary password is written to the error log; the root account is created with an empty password, so log in and set proper passwords as soon as the MySQL instances are started. (Plain --initialize would instead generate a temporary password in the error log.)

ln -s /usr/local/mysql/bin/mysqld /usr/bin
mysqld --no-defaults --initialize-insecure --user=mysql --datadir=/data/mysql/3307 --lower_case_table_names=0
mysqld --no-defaults --initialize-insecure --user=mysql --datadir=/data/mysql/3306 --lower_case_table_names=0
# create the tmpdir referenced by each instance's config before starting
mkdir -p /data/mysql/{3306,3307}/tmp
chown -R mysql:mysql /data/mysql/{3306,3307}/tmp

Configure the systemd service

Create the systemd unit file mysqld@.service (shown here under /usr/lib/systemd/system; custom units can also be placed in /etc/systemd/system) and place the following contents inside. This is where the naming convention of the MySQL instances comes into effect: in the unit file, %I is replaced with the instance name you pass after the @ when starting the service.

[root@ip-172-31-128-38 share]# cat /usr/lib/systemd/system/mysqld@.service
# Copyright (c) 2016, 2021, Oracle and/or its affiliates.
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License, version 2.0,
# as published by the Free Software Foundation.
#
# This program is also distributed with certain software (including
# but not limited to OpenSSL) that is licensed under separate terms,
# as designated in a particular file or component or in included license
# documentation.  The authors of MySQL hereby grant you an additional
# permission to link the program and your derivative works with the
# separately licensed software that they have included with MySQL.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License, version 2.0, for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA  02110-1301 USA
#
# systemd service file for MySQL forking server
#

[Unit]
Description=MySQL Server
Documentation=man:mysqld(8)
Documentation=http://dev.mysql.com/doc/refman/en/using-systemd.html
After=network.target
After=syslog.target

[Install]
WantedBy=multi-user.target
[Service]
User=mysql
Group=mysql
Type=forking
PIDFile=/data/mysql/mysqld-%i.pid
# Disable service start and stop timeout logic of systemd for mysqld service.
TimeoutSec=0
# Execute pre and post scripts as root
PermissionsStartOnly=true
# Needed to create system tables
#ExecStartPre=/usr/bin/mysqld_pre_systemd %I
# Start main service
ExecStart=/usr/bin/mysqld --defaults-file=/etc/prod3306.cnf --defaults-group-suffix=@%I --daemonize --pid-file=/data/mysql/mysqld-%i.pid $MYSQLD_OPTS

# Use this to switch malloc implementation
EnvironmentFile=-/etc/sysconfig/mysql
# Sets open_files_limit
LimitNOFILE = 65536
Restart=on-failure
RestartPreventExitStatus=1
Environment=MYSQLD_PARENT_PID=1
PrivateTmp=false
[root@ip-172-31-128-38 share]#

Reload daemon

systemctl daemon-reload

Start MySQL

systemctl start mysqld@prod3307

systemctl start mysqld@prod3306

Enable MySQL service

systemctl enable mysqld@prod3307

systemctl enable mysqld@prod3306
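
To verify that both instances came up, check each unit’s status and connect through each instance’s socket, using the paths defined in the configuration above:

systemctl status mysqld@prod3306

mysql -uroot -S /data/mysql/3306/prod3306.sock

mysql -uroot -S /data/mysql/3307/prod3307.sock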

Error log for each instance

[root@ip-172-31-128-38 3307]# tail -5 /data/mysql/prod3306.err

2023-07-10T05:26:42.521994Z 0 [System] [MY-010910] [Server] /usr/bin/mysqld: Shutdown complete (mysqld 8.0.19-10)  Percona Server (GPL), Release 10, Revision f446c04.

2023-07-10T05:26:48.210107Z 0 [System] [MY-010116] [Server] /usr/bin/mysqld (mysqld 8.0.19-10) starting as process 20477

2023-07-10T05:26:52.094196Z 0 [Warning] [MY-010068] [Server] CA certificate ca.pem is self signed.

2023-07-10T05:26:52.112887Z 0 [System] [MY-010931] [Server] /usr/bin/mysqld: ready for connections. Version: '8.0.19-10'  socket: '/data/mysql/3306/prod3306.sock'  port: 3306  Percona Server (GPL), Release 10, Revision f446c04.

2023-07-10T05:26:52.261062Z 0 [System] [MY-011323] [Server] X Plugin ready for connections. Socket: '/data/mysql/3306/prod3306x.sock' bind-address: '::' port: 33060

[root@ip-172-31-128-38 3307]# tail -5 /data/mysql/prod3307.err

2023-07-10T05:26:36.032160Z 0 [System] [MY-010910] [Server] /usr/bin/mysqld: Shutdown complete (mysqld 8.0.19-10)  Percona Server (GPL), Release 10, Revision f446c04.

2023-07-10T05:26:58.328962Z 0 [System] [MY-010116] [Server] /usr/bin/mysqld (mysqld 8.0.19-10) starting as process 20546

2023-07-10T05:27:02.179449Z 0 [Warning] [MY-010068] [Server] CA certificate ca.pem is self signed.

2023-07-10T05:27:02.198092Z 0 [System] [MY-010931] [Server] /usr/bin/mysqld: ready for connections. Version: '8.0.19-10'  socket: '/data/mysql/3307/prod3307.sock'  port: 3307  Percona Server (GPL), Release 10, Revision f446c04.

2023-07-10T05:27:02.346514Z 0 [System] [MY-011323] [Server] X Plugin ready for connections. Socket: '/data/mysql/3307/prod3307x.sock' bind-address: '::' port: 33070

[root@ip-172-31-128-38 3307]#

Conclusion

Utilizing systemctl to control MySQL significantly simplifies the management of MySQL instances. This approach makes it easy to configure multiple instances (more than two, if needed) and streamlines the overall administration process. However, it is essential to be mindful of memory allocation when setting up multiple MySQL instances on a single server: size each instance’s buffers so that the combined footprint still leaves sufficient overhead for the operating system.

Planet MySQL

Stack Abuse: How to Select Columns in Pandas Based on a String Prefix

Introduction

Pandas is a powerful Python library for working with and analyzing data. One operation that you might need to perform when working with data in Pandas is selecting columns based on their string prefix. This can be useful when you have a large DataFrame and you want to focus on specific columns that share a common prefix.

In this Byte, we’ll explore a few methods to achieve this, including creating a series to select columns and using DataFrame.loc.

Select All Columns Starting with a Given String

Let’s start with a simple DataFrame:

import pandas as pd

data = {
    'item1': [1, 2, 3],
    'item2': [4, 5, 6],
    'stuff1': [7, 8, 9],
    'stuff2': [10, 11, 12]
}
df = pd.DataFrame(data)
print(df)

Output:

   item1  item2  stuff1  stuff2
0      1      4       7      10
1      2      5       8      11
2      3      6       9      12

To select columns that start with ‘item’, you can use list comprehension:

selected_columns = [column for column in df.columns if column.startswith('item')]
print(df[selected_columns])

Output:

   item1  item2
0      1      4
1      2      5
2      3      6

Creating a Series to Select Columns

Another approach to select columns based on their string prefix is to create a Series object from the DataFrame columns, and then use the str.startswith() method. This method returns a boolean Series where a True value means that the column name starts with the specified string.

selected_columns = pd.Series(df.columns).str.startswith('item')
print(df.loc[:, selected_columns])

Output:

   item1  item2
0      1      4
1      2      5
2      3      6

Using DataFrame.loc to Select Columns

The DataFrame.loc indexer is primarily label-based, but it may also be used with a boolean array. (The older ix indexer is deprecated and has since been removed, as it had a number of problems.) .loc will raise a KeyError when the items are not found.

Consider the following example:

selected_columns = df.columns[df.columns.str.startswith('item')]
print(df.loc[:, selected_columns])

Output:

   item1  item2
0      1      4
1      2      5
2      3      6

Here, df.columns.str.startswith('item') produces a boolean array, which we use to index df.columns and obtain the matching column names. Those names are then passed to the .loc indexer to select the corresponding columns. This approach avoids creating an intermediate Python list or separate Series, which can be a modest efficiency win on DataFrames with many columns.

Applying DataFrame.filter() for Column Selection

The pandas DataFrame filter() method provides a flexible and efficient way to select columns based on their names. It is especially useful when dealing with large datasets with many columns.

The filter() function allows us to select columns based on their labels. We can use the like parameter to specify a string pattern that matches the column names. However, if we want to select columns based on a string prefix, we can use the regex parameter.

Here’s an example:

import pandas as pd

# Create a DataFrame
df = pd.DataFrame({
    'product_id': [101, 102, 103, 104],
    'product_name': ['apple', 'banana', 'cherry', 'date'],
    'product_price': [1.2, 0.5, 0.75, 1.3],
    'product_weight': [150, 120, 50, 60]
})

# Select columns that start with 'product'
df_filtered = df.filter(regex='^product')

print(df_filtered)

This will output:

   product_id product_name  product_price  product_weight
0         101        apple           1.20             150
1         102       banana           0.50             120
2         103       cherry           0.75              50
3         104         date           1.30              60

In the above code, the ^ symbol is a regular expression that matches the start of a string. Therefore, '^product' will match all column names that start with ‘product’.

Note: The filter() function returns a new DataFrame, so any modifications to the new DataFrame will not affect the original DataFrame.

Conclusion

In this Byte, we explored different ways to select columns in a pandas DataFrame based on a string prefix. We learned how to create a Series and use it to select columns, how to use the DataFrame.loc function, and how to apply the DataFrame.filter() function. Of course, each of these methods has its own advantages and use cases. The choice of method depends on the specific requirements of your data analysis task.

Planet Python

How to Monitor Network Usage for Processes on Linux

https://static1.makeuseofimages.com/wordpress/wp-content/uploads/2023/08/network-switch-cables.jpg

Internet access is essential, but you may wonder which Linux processes on your computer use your connection the most. Fortunately, with some common Linux utilities, monitoring which processes use your bandwidth is easy. Here are some of them:

1. nethogs

nethogs is a program that does for internet connections what htop or top does for CPU and memory usage. It shows you a snapshot of which processes are accessing the network.

Like top, htop, or atop, nethogs is a full-screen program that updates after a few seconds to show you the current network connections by processes.

Installing nethogs is simple. You just go through your package manager.

For example, on Debian and Ubuntu:

 sudo apt install nethogs 

And on Arch Linux:

 sudo pacman -S nethogs 

On the Red Hat family:

 sudo dnf install nethogs 

To run nethogs, you’ll need to be root:

 sudo nethogs 

It’s possible to set it so that you can run nethogs as a regular user using this command:

 sudo setcap "cap_net_admin,cap_net_raw+pe" /path/to/nethogs 

You should replace “/path/to/nethogs” with the absolute pathname of nethogs. You can find this with the which command:

 which nethogs 

2. lsof

While lsof is a utility for listing open files, it can also list open network connections. The -i option lists internet connections attached to running processes on the system. On Linux, everything is a file, after all.

To see current internet connections, use this command:

 lsof -i 

lsof will show you the name of any commands with open internet connections, the PID, the file descriptor, the type of internet connection, the size, the protocol, and the formal file name of the connection.

Using the -i4 and -i6 options allows you to view connections using IPv4 or IPv6.
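
For example, to narrow the listing to IPv4 connections, or to a single port (the port number below is just an illustration):

 lsof -i4 

 lsof -i TCP:3306 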

There’s a good chance you have lsof installed already. It’s also easy to install on major Linux distros if it isn’t.

On Debian and Ubuntu, type:

 sudo apt install lsof 

And on Arch:

 sudo pacman -S lsof 

On the Red Hat family of distros:

 sudo dnf install lsof 

3. netstat

netstat is a powerful program on its own, letting you see network connections on your system. By default, it doesn’t show you which processes those connections are attached to, but as with lsof, a command-line option adds that information.

netstat is part of the net-tools package. You can install it on most Linux distros using the default package manager.

For example, on Debian or Ubuntu:

 sudo apt install net-tools

On Arch Linux:

 sudo pacman -S net-tools 

To install netstat on Fedora, CentOS, and RHEL, run:

 sudo dnf install net-tools 

You can run netstat at the command line. By default, it will show you information such as the protocol, the address, and the state of the connection, but the -p option adds a column that shows the process ID and the command name.

 netstat -p 

When you run it, netstat will just list all the network connections and then exit. With the -c option, you can see a continually updated list of connections:

 netstat -pc 

This would be similar to using a screen-oriented program like nethogs, but the advantage of doing it this way is that you can pipe the output into another program like grep or a pager to examine it:

 netstat -p | grep 'systemd' 

To see all of the processes with network connections on your system, you may have to run netstat as root:

 sudo netstat -p 

Now You Can See Which Linux Apps Are Gobbling Up Your Bandwidth

Linux, like many modern OSes, is intimately connected to the internet. It can be difficult at times to track down which processes are using your bandwidth. With tools like nethogs, lsof, and netstat, you can track down processes that have open connections.

Processes sometimes go haywire, even with connections. On Linux, you can easily terminate any rogue processes.

MakeUseOf

11 MongoDB Queries and Operations You Must Know

https://static1.makeuseofimages.com/wordpress/wp-content/uploads/2023/08/mongodb-queries-you-must-know.jpg

MongoDB is one of the most desired and admired NoSQL databases for professional development. Its flexibility, scalability, and ability to handle large volumes of data make it a top choice for modern applications. If you want to master MongoDB’s regular queries and operations, you’re in the right place.

Whether you’re looking to efficiently retrieve and manipulate data, implement robust data models, or build responsive applications, acquiring a deep understanding of common MongoDB queries and operations will undoubtedly enhance your skills.

1. Create or Switch Databases

Creating a database via the MongoDB Shell is straightforward, whether you’re running MongoDB locally or against a remote cluster. You can create a new database in MongoDB with the use command:

 use db_name 

The use command switches to the named database, creating it lazily if it doesn’t exist (the database only materializes once you store data in it). So the same command both creates new databases and switches to existing ones.

2. Drop Database

First, switch to the database you want to drop using the use command as done previously. Then drop the database using the dropDatabase() command:

 use db_name
db.dropDatabase()

3. Create a Collection

To create a collection, switch to the target database. Use the createCollection() keyword to make a new MongoDB collection:

 db.createCollection("collection_name")

Replace collection_name with your chosen collection name.

4. Insert Document Into a Collection

While sending data to a collection, you can insert a single document or an array of documents.

To insert a single document:

 db.collection_name.insertOne({"Name":"Idowu", "Likes":"Chess"})

Note that insertOne() accepts only a single document, so you cannot pass it an array of documents. If you want several related values stored under one document ID, embed them as an array field within a single document.

To insert many documents at once, with each having separate IDs, use the insertMany keyword:

 db.collection_name.insertMany([{"Name":"Idowu", "Likes":"Chess"}, {"Name": "Paul", "Likes": "Wordle"}])

5. Get All Documents From a Collection

You can query all documents from a collection using the find() keyword:

 db.collection_name.find()

The above returns all the documents inside the specified collection:

You can also limit the returned data to a specific number. For instance, you can use the following command to get only the first two documents:

 db.collection_name.find().limit(2)

6. Filter Documents in a Collection

There are many ways to filter documents in MongoDB. Consider the following data, for instance:

If querying only a specific field in a document, use the find method:

 db.collection_name.find({"Likes":"Wordle"}, {"_id":0, "Name":1})

The above returns all documents where the value of Likes is Wordle. It only outputs the names and ignores the document ID.

You can also filter a collection by a numerical factor. Say you want to get the names of all users older than 21, use the $gt operator:

 db.collection_name.find({"Likes":"Chess", "Age":{"$gt":21}}, {"_id":0, "Name":1})

The output looks like so:

Try replacing find with findOne to see what happens. There are also many other filtering operators:

  • $lt: All values less than the specified one.
  • $gte: Values equal to or greater than the specified one.
  • $lte: Values that are less than or equal to the defined one.
  • $eq: Gets all values equal to the specified one.
  • $ne: All values not equal to the specified one.
  • $in: Use this when querying based on an array. It gets all values matching any of the items in the array. The $nin keyword does the opposite.

7. Sort Queries

Sorting helps arrange query results in a specific order, either descending or ascending. You specify the direction with a numeric flag: 1 for ascending, -1 for descending.

For instance, to sort in ascending order:

 db.collection_name.find({"Likes":"Chess"}).sort({"Age":1})

To sort the above query in descending order, replace "1" with "-1":

 db.collection_name.find({"Likes":"Chess"}).sort({"Age":-1})

8. Update a Document

MongoDB updates use update operators to specify how you want the update done. Here is a list of commonly used operators you can pair with an update query:

  • $set: Add a new field or change an existing field.
  • $push: Insert a new item into an array. Pair it with the $each operator to insert many items at once.
  • $pull: Remove an item from an array. Use it with $in to remove many items at one go.
  • $unset: Remove a field from a document.

To update a document and add a new field, for example:

 db.collection_name.updateOne({"Name":"Sandy"}, {"$set":{"Name":"James", "email":"example@gmail.com"}})

The above updates the specified document as shown:

Removing the email field is straightforward with the $unset operator:

 db.collection_name.updateOne({"Name":"Sandy"}, {"$unset":{"email":"example@gmail.com"}})

Consider the following sample data:

You can insert an item into the existing items array field using the $push operator:

 db.collection_name.updateOne({"Name":"Pete"}, {"$push":{"items":"Plantain"}})

Here’s the output:

Use the $each operator to insert many items at once:

 db.collection_name.updateOne({"Name":"Pete"}, {"$push":{"items": {"$each":["Almond", "Melon"]}}})

Here’s the output:

As mentioned, the $pull operator removes an item from an array:

 db.collection_name.updateOne({"Name":"Pete"}, {"$pull":{"items":"Plantain"}})

The updated data looks like so:

Include the $in keyword to remove many items in an array at one go:

 db.collection_name.updateOne({"Name":"Pete"}, {"$pull":{"items": {"$in":["Almond", "Melon"]} }}) 

9. Delete a Document or a Field

The deleteOne or deleteMany keyword removes a document from a collection. Use deleteOne to remove a single document based on a specified field:

 db.collection_name.deleteOne({"Name":"IDNoble"})

If you want to delete many documents with keys in common, use deleteMany instead. The query below deletes all documents containing Chess as their Likes.

 db.collection.deleteMany({"Likes":"Chess"})

10. Indexing Operation

Indexing improves query performance by reducing the number of documents MongoDB needs to scan. It’s often best to create an index on the fields you query most frequently.

MongoDB indexing is similar to how you use indexes to optimize SQL queries. For instance, to create an ascending index on the Name field:

 db.collection.createIndex({"Name":1})

To list your indexes:

 db.collection.getIndexes()

The above only scratches the surface; there are several other methods for creating an index in MongoDB.

11. Aggregation

The aggregation pipeline, an improved version of MapReduce, allows you to run and store complex calculations from inside MongoDB. Unlike MapReduce, which requires writing the map and the reduce functions in separate JavaScript functions, aggregation is straightforward and only uses built-in MongoDB methods.

Consider the following sales data, for example:

Using MongoDB’s aggregation, you can calculate and store the total number of products sold for each category as follows:

 db.sales.aggregate([{$group:{"_id":"$Section", "totalSold":{$sum:"$Sold"}}}, {$project:{"_id":0, "totalSold":1, "Section":"$_id"}}])

The above query returns the following:

Master MongoDB Queries

MongoDB offers many querying methods, including features to improve query performance. Regardless of your programming language, the above query structures are fundamental for interacting with a MongoDB database.

There may be some discrepancies in base syntax, though. For example, while some programming languages like Python use snake case, others, including JavaScript, use camel case. Ensure you research what works for your chosen technology.

MakeUseOf

Accidental DBA’s Guide to MySQL Management

So, you’ve been tasked with managing the MySQL databases in your environment, but you’re not sure where to start.

Here’s the quick & dirty guide. Oh yeah, and for those who love our stuff, take a look to your right.

Steps to MySQL Management

Here are the steps that are required for MySQL management as a DBA:

1. Installation

The “yum” tool is your friend.  If you’re using Debian, you’ll use apt-get but it’s very similar. You can do a “yum list” to see what packages are available. We prefer to use the Percona distribution of MySQL.

It’s fully compatible with stock MySQL distribution, but usually a bit ahead in terms of tweaks and fixes. Also, if you’re not sure, go with MySQL 5.5 for new installations.

$ rpm -Uhv http://www.percona.com/downloads/percona-release/percona-release-0.0-1.x86_64.rpm
$ yum install Percona-Server-client-55
$ yum install Percona-Server-shared-55
$ yum install Percona-Server-shared-compat
$ yum install Percona-Server-server-55

The last command will create a fresh database for you as well.

Already have data in an existing database? Then you can migrate between MySQL and Oracle.

2. Setup Replication

MySQL replication is a process you’ll need to set up over and over again. It’s statement-based in MySQL: INSERT, UPDATE, DELETE & CREATE statements are transferred to the slave database and applied by a thread running on that box.

The steps to setup are as follows:

  1. lock the primary with FLUSH TABLES WITH READ LOCK;
  2. issue SHOW MASTER STATUS and note the current file & position
  3. make a copy of the data. You can dump the data:

$ mysqldump -A --single-transaction > full_primary.mysql

Alternatively, you can use xtrabackup to set up replication without locking (see the sketch at the end of this section)!

  4. copy the dump to the slave database (scp works, but rsync is even better as it can restart if the connection dies).
  5. import the dump on the slave box (it overwrites everything, so make sure you got your boxes straight!)

$ mysql < full_primary.mysql

  6. point to the master

mysql> change master to
    -> master_user='rep',
    -> master_password='rep',
    -> master_host='10.20.30.40',
    -> master_log_file='bin-log.001122',
    -> master_log_pos=11995533;

  7. start replication & check

mysql> start slave;
mysql> show slave status\G

You should see something like this:

Slave_IO_Running: Yes
Slave_SQL_Running: Yes
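
As for the xtrabackup alternative mentioned in step 3, a minimal sketch looks like this (paths and credentials are illustrative, and older releases expose the same workflow through the innobackupex wrapper):

$ xtrabackup --backup --user=root --password=mypw --target-dir=/data/backups/full
$ xtrabackup --prepare --target-dir=/data/backups/full

Copy the prepared backup to the slave, restore it into the datadir, and use the binlog file & position recorded in xtrabackup_binlog_info for the CHANGE MASTER TO statement.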

3. Analyze Slow Query & Tune

If you’re managing an existing MySQL database and you hit a performance blip, it’s likely due to something that has changed. You may be getting a spike in user traffic, that’s new! Or you may have some new application code that has been recently deployed, that’s new SQL that’s running in your database. What to do?

If you haven’t already, enable the slow query log:

mysql> set global slow_query_log=1;
mysql> set global long_query_time=0.50;

Now wait a while. A few hours perhaps, or a few days. The file should default to

/var/lib/mysql/server-slow.log

Now analyze it. You’ll use a tool from Percona Toolkit to do that. If you haven’t already done so, install Percona Toolkit as well.

$ yum install percona-toolkit
$ pt-query-digest /var/lib/mysql/server-slow.log > /tmp/server-report.txt

Once you’ve done that “less” the file, and review. You’ll likely see the top five queries account for 75% of the output. That’s good news because it means less query tuning. Concentrate on those five and you’ll get the most bang for your buck.

Bounce your opinions about the queries off of the developers who build application code. Ask them where the code originates. What are those pages doing?

Check the tables, are there missing indexes? Look at the EXPLAIN output. Consider tuning the table data structures, multi-column, or covering indexes. There is typically a lot that can improve these troublesome queries.
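
For instance, with a hypothetical orders table (table and column names are made up for illustration), the tuning loop looks like this:

mysql> explain select * from orders where customer_id = 42 order by created_at;
mysql> alter table orders add index idx_cust_created (customer_id, created_at);
mysql> explain select * from orders where customer_id = 42 order by created_at;

If the first EXPLAIN shows a full table scan, the multi-column index should let the second one use the index for both the lookup and the sort.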

4. Monitoring Command Line Tools

You’ll want to have a battery of day-to-day tools at your disposal for interactive monitoring of the database. Don’t go overboard. Obsessive tuning means obsessively turning knobs and dials. If there are no problems, you’re likely to create some. So, keep that in mind.

innotop is a “top” like utility for monitoring what’s happening inside your little database universe.  It’s probably already available through yum and the “epel” repository:

$ yum install innotop

First edit the .my.cnf file and add:
[client]
user=root
password=mypw

From there you should be able to just fire up innotop without problems.

mysqltuner is a catch-all tool that does a once-over of your server and gives you some nice feedback.  Get a copy as follows:

$ wget http://mysqltuner.pl/ -O mysqltuner.pl

Then run it:
$ chmod +x mysqltuner.pl
$ ./mysqltuner.pl

Here are a couple of useful mysql shell commands to get database information:

mysql> show processlist;
mysql> show engine innodb status;
mysql> show status;

There is also one last tool which can come in handy for reviewing a new MySQL server. Also, from percona toolkit, the summary tool. Run it as follows:

$ pt-summary

5. Backups

You absolutely need to know about backups if you want to sleep at night. Hardware and database servers fail, software has bugs that bite. And if all that doesn’t get you, people make mistakes. So-called operator error will surely get you at some point. There are three main types:

  1. cold backups

With the database shutdown, make a complete copy of the /var/lib/mysql directory, along with perhaps the /etc/my.cnf file. That together amounts to a cold backup of your database.

  2. hot backups

There has been an enterprise tool for MySQL that provides this for some time. But we’re all very lucky to also have the open source Percona xtrabackup at our disposal. Here’s a howto using it for replication setup.

  3. logical backups

These will generate a file containing all the CREATE statements to recreate all your objects, and then INSERT statements to add data.

$ mysqldump -A > my_database_dump.mysql

6. Review existing servers

The percona toolkit summary tool is a great place to start.

$ pt-summary

Want to compare the my.cnf files of two different servers?

$ pt-config-diff h=localhost h=10.20.30.40

Of course, you’ll want to review the my.cnf file overall. Be sure you have looked at these variables:

tmp_table_size
max_heap_table_size
default_storage_engine
read_buffer_size
read_rnd_buffer_size
sort_buffer_size
join_buffer_size
log_slow_queries
log_bin
innodb_log_buffer_size
innodb_log_file_size
innodb_buffer_pool_size
key_buffer_size (for MyISAM)
query_cache_size
max_allowed_packet
max_connections
table_cache
thread_cache_size
thread_concurrency

7. Security essentials

The output of the pt-summary and mysqltuner.pl scripts should give you some useful information here. Be sure to have passwords set on all accounts. Use fewer privileges by default, and only add additional ones to accounts as necessary.

You can use wildcards for the IP address, but try to be as specific as possible. Allow a subnet, not the whole internet: '10.20.30.%', for example, instead of just '%'.

Also keep in mind that at the operating system or command line level, anyone with root access can really mess up your database. Writing to the wrong datafile or changing permissions can hose a running database very quickly.

8. Monitoring

Use a monitoring system such as Nagios to keep an eye on things.  At minimum check for:

  1. connect to db
  2. server load average
  3. disk partitions have free space
  4. replication running – see above IO & SQL running status messages (a minimal scripted check is sketched after this list)
  5. no swapping – plenty of free memory
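
For item 4, a minimal scripted check, suitable for cron or a Nagios plugin wrapper (assuming credentials in .my.cnf), could be:

$ mysql -e 'show slave status\G' | grep -E 'Slave_(IO|SQL)_Running'
Slave_IO_Running: Yes
Slave_SQL_Running: Yes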

9. Ongoing Maintenance

Periodically it’s a good idea to review your systems even when they’re running smoothly. Don’t go overboard with this however. As they say if it isn’t broke, don’t fix it.

  1. check for unused & duplicate indexes
  2. check for table fragmentation
  3. perform table checks (if using MyISAM)

10. Manage the Surprises

MySQL is full of surprises. In the Oracle world you might be surprised at how arcane some things are to setup, or how much babysitting they require. Or you might be surprised at how obscure some tuning & troubleshooting techniques are. In the MySQL world there are big surprises too. Albeit sometimes of a different sort.

  1. Replication Checksums

One area that continues to defy my expectations is replication. Even if it is running without error, you still have more checking to do. Unfortunately, many DBAs don’t even know this!

That’s because MySQL replication can drift out of sync without error. We go into specific details of what things can cause this, but more importantly how to check and prevent it, by bulletproofing MySQL with table checksums.

  2. Test & Confirm Restores of Backups

Spin up a cloud server in Amazon EC2, and restore your logical dump or hot backup onto that box. Point a test application at that database and verify that all is well. It may seem obvious that a backup will do all this.

But besides the trouble when a filesystem fills up, or some command had the wrong flag or option included, there can be even bigger problems if some piece or section of the database was simply overlooked.

It’s surprising how easy it is to run into this trouble. Testing also gives you a sense of what restoring time looks like in the real world. A bit of information your boss is sure to appreciate.

Conclusion

That’s all about how to manage MySQL as a DBA. Hopefully, you have found this guide exceptional from other ordinary guides to MySQL management. For any further queries, our comment box is always open for you. Thanks for reading!

The post Accidental DBA’s Guide to MySQL Management appeared first on iheavy.

Planet MySQL

The Ryobi Telescoping Power Scrubber Is the Best at Keeping My Tile Floors Clean

https://i.kinja-img.com/gawker-media/image/upload/c_fill,f_auto,fl_progressive,g_center,h_675,pg_1,q_80,w_1200/d8dad39f888525d8adec05e95b0dbfda.png

The white tiles in the living area of my home are an abomination: I track in dirt from the garden constantly, my dog is forever bursting through the doggie door with muddy glee, and I seem to cook with the spirit of Ratatouille, absentmindedly splashing food all about. I find myself in constant pursuit of cleaning tools that will make my home seem like less of a disaster, and as a result, I’ve bought an embarrassing number of devices that promised to truly scrub my floor clean.

Among them, I’ve tried the Hoover SpinScrub (a precursor to this model), various steam cleaners, many tonics and potions, and even your plain old handheld scrub brush, because it finally seemed that every product that promised to really scrub away serious dirt on your tiles paled in comparison to just getting down and scrubbing the floor yourself. But one night while perusing a Home Depot ad, I saw it: It gleamed bright yellow, and it promised to answer all my problems. And it actually lived up to that promise.

This is the best floor scrubber

The best floor scrubber is the Ryobi Telescoping Power Scrubber. Just look at it. It is quite literally a cordless powerhouse. Although it comes with a medium-hardness brush, you can also buy soft and hard brush heads for it. Ostensibly, it’s for scrubbing your car or boat exterior, perhaps your roof or house siding.

But if you’re looking for clean tile, there is nothing on the market like this tool.

How a non-expert (me) uses a power scrubber on floors

I use the medium hardness brush, and I work the floor in sections, with a spray bottle of water in one hand, a container of Bar Keepers Friend, and a towel. (The only advantage that more traditional floor scrubbers have is their onboard water source. The Ryobi power scrubber has none of that, but to me, that’s a non-issue given the way it performs.)

The towel is on the floor, and I stand on it. You spray the floor in front of you, sprinkle it with Bar Keepers Friend, and then go to town with your scrubber. As you move forward, keep the towel under your feet, using it to mop up any water as you go. When you get to the end of the hall or room, you may need to give the wall trim a quick wipe for any splatter, but it’s pretty minimal.

The upside of this is incredibly satisfyingly clean tile. Every groove, every niche is clean. The downside is that you’ve likely taken off any sealer on the tile, so that might be worth refreshing with a sealer, which is easy enough. (You can even do so with the scrubber by swapping the head for one of the soft heads like the microfiber cloth.) In between serious cleanings, you can skip the Bar Keepers Friend and use water alone or a mopping solution, but really, the scrubber is doing the majority of the work.

Maintaining the Ryobi Power Scrubber

To wash the scrubber, you disconnect the head and throw it in the dishwasher. Disconnect the battery and recharge it. I can even use one of my smaller 1.5 Ah batteries with the scrubber and get a full house clean at once.

People tend to be loyalists when they get into a line of tools. If they start with Makita, they’ll stick with it, and DeWalt folks are die-hards. Like a lot of people, I started with Ryobi because of the price point and its absurdly wide selection of tools in the cordless series. I’ve stuck with the line because I genuinely have a lot of success with it as I’ve grown my collection. I find the batteries stay well charged (and I haven’t had one die yet).

I recommend buying bare tools (without battery packs) as soon as you’ve acquired a few chargers, and only getting the higher-end batteries. I have two 4 Ah batteries and I almost never find myself needing another. Ryobi has really expanded the line into a lot of consumer-friendly pieces like fans and air compressors, and it has invested in its brushless cordless line, a series of tools with less likelihood of burning out the motor while also being more powerful. All this to say, I wasn’t surprised Ryobi had a great tool solution here.

For what it’s worth, they also have a handheld scrubber, and if I hadn’t previously picked up some brush heads that I can just throw on my Ryobi brushless hammer drill for scrubbing smaller surfaces like sinks and bathtubs, I’d have picked that up as well.

Lifehacker

Top MySQL DBA Interview Questions (Part 2)

https://www.iheavy.com/wp-content/uploads/2023/08/Top-MySQL-DBA-Interview-Questions.jpg

Continuing from our Top MySQL DBA interview questions (Part 1) here are five more questions that test a MySQL DBA’s knowledge, with two that will help suss out some personality traits.

Top MySQL DBA Interview Questions

6. Disk I/O

Disk performance should be an ever-present concern to a DBA. So, although they don’t need to be a storage specialist, they should have a working knowledge. Ask them about RAID versions, mirroring versus striping, and so forth. Mirroring combines two disks as a unit. Every write is duplicated on both disks.

If you lose one disk, you have an immediate copy, like a tandem truck with spare tires running in parallel: lose one, and you don’t have to pull over immediately to replace it. Striping spreads I/O over multiple disks, so on the one hand you increase throughput linearly as you add disks.

That’s because you have more disks working for you.  At the same time, you increase risk with each new disk you add, because the failure rate is then the sum total of all those disks.

For relational databases, the best RAID level is 10, which is striping over mirrored sets. You use more disks, but disks are cheap compared to the hassle of any outage.

If you’re deploying on Amazon, your candidate should be familiar with the Elastic Block Storage offering also known as EBS. This is virtualized storage, so it introduces a whole world of operational flexibility.

No longer do you have to jump through hoops to attach, add or reconfigure storage on your servers. It can all be done through command-line API calls. That said, EBS suffers from variability problems, as with any other shared resource.

Although Amazon guarantees your average throughput, the I/O you get at a given time can swing wildly from low to high. Consider Linux software RAID across multiple EBS volumes to mitigate against this.

7. How Would You Setup Master/Slave & Master/Master Replication?

A basic replication setup involves creating a full dump of the primary database while its tables are locked. The DBA should capture the master status, logfile & position at that time. She should then copy the dump file to the secondary machine & import the full dump.

Then the CHANGE MASTER TO statement should be run to point this database instance at its master, and lastly START SLAVE should be issued.  If all goes well, SHOW SLAVE STATUS should show YES for both of these status variables:

Slave_IO_Running: Yes
Slave_SQL_Running: Yes

Master-master replication is similar, except for one additional step. After the above steps have been run, you know that your application is not pointing at the slave database. If you’re not sure, verify that fact first.

Now determine the logfile name & position on the slave with SHOW MASTER STATUS. Return to the primary box, and run the CHANGE MASTER TO command to make it slave from the secondary box. You’ve essentially asked MySQL to create a circular loop of replication.

How does MySQL avoid getting into an infinite loop in this scenario?  The server_id variable must be set, and be unique for all MySQL instances in your replication topology.
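
A quick way to demonstrate this at the prompt is to check each instance’s identity; every server in the loop must return a different value:

mysql> select @@server_id;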

For extra credit, ask the candidate about replication integrity checking. As important as this piece is to a solid reliable replication setup, many folks in the MySQL world are not aware of the necessity.

Though replication can be set up and running properly, that does not mean it will keep your data clean and perfect.

Due to the nature of statement-based replication, and non-deterministic functions and/or non-transactional tables, statements can make their way into the binary logs, without completing. What this means is they may then complete on the slave, resulting in a different row set on the same table in the master & slave instance.

Percona’s pt-table-checksum is the preventative tool to use.  It can build checksums of all your tables, and then propagate those checksums through replication to the slave.  An additional check can then be run on the slave side to confirm consistency or show which rows & data are different.

8. How Are Users & Grants Different In MySQL Than Other DBS?

Creating a grant in MySQL can effectively create the user as well.  MySQL users are implemented in a very rudimentary fashion. The biggest misunderstanding in this area surrounds the idea of a user.

In most databases a username is unique by itself.  In MySQL it is the *combination* of user & hostname that must be unique.

So, for example, if I create user sean@localhost, sean@server2 and sean@server3, they are actually three distinct users, which can have distinct passwords, and privileges. It can be very confusing that sean logging in from the local command line has different privileges or password than sean logging in from server2 and server3. So that’s an important point.
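
To make this concrete, here is a sketch creating two distinct ‘sean’ accounts with different passwords and privileges (hostnames, database name, and passwords are illustrative):

mysql> create user 'sean'@'localhost' identified by 'secret1';
mysql> create user 'sean'@'server2' identified by 'secret2';
mysql> grant all privileges on app.* to 'sean'@'localhost';
mysql> grant select on app.* to 'sean'@'server2';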

9. How Might You Hack A MySQL Server?

This is a good opportunity for the candidate to show some creativity with respect to operations and Linux servers.  There are all sorts of ways into a database server:

  1. bad, weak or unset passwords
  2. files with incorrect permissions – modifying or deleting filesystem files can take a database down or corrupt data
  3. intercepting packets – could reveal unencrypted data inside the database
  4. unpatched software – bugs often reveal vulnerabilities that allow unauthorized entry
  5. moving, disabling or interrupting the backup scripts – a possible timebomb until you need to restore
  6. DNS spoofing – could allow login as a different user
  7. generous permissions – may allow an unprivileged user access to protected data

There are endless possibilities here.  Listening for creative thinking reveals how thoroughly and effectively that person will think about protecting your systems from those same threats.

10. Brain Teasers, Riddles, and Coding Problems

Google was for a long time a fan of these types of tests at interviews, but I’m not at all.  For one thing, you filter for good test takers, and for another, the candidate has no resources – either books or the internet – at their disposal.

Why not instead ask them to tell a story? Storytelling conveys a lot of things. It conveys a bit of teaching ability, which extends far beyond internalizing some multiple-choice questions.

It tells you more about their personality, which as I’ve said is very important. It shows how they solve problems, as they’ll take you through their process. And it gives them an opportunity to tell you about a real-world triumph they presided over.

Personality Questions

In my experience, some of the most important traits of a new hire center around personality traits, and how they might mix with your existing team. Being punctual for an interview, for instance, sets a precedent for many things. But that door swings both ways, so if you want to hire these types of folks, don’t keep them waiting either!

Pay attention to whether or not the candidate takes some lead in the conversation at all. This can indicate the person is a self-starter.  Obviously, a great candidate will also listen carefully and patiently to what you have to say, but may then take the ball and run with it somewhat.

Listen for signals that the person is active in the field, posting on forums, and attending conferences, meetups, and forums on technology topics. You might also ask them if they blog, and what topics interest them.

Frequently Asked Questions (FAQs)

How Do I Prepare for A DBA Interview?

As a job seeker, you first need a foundation of knowledge and experience in database administration. While preparing for an interview, review the job description carefully, research the company and its industry, and then refresh your technical knowledge of DBMS concepts and relevant programming languages accordingly.

What Questions Are Asked in A DBA Interview?

Here are the general questions that are asked in a DBA interview:

  • What purpose does the Model Database Serve?
  • Explain your SQL Server DBA Experience?
  • What is DCL?
  • What is Replication?
  • Why would you use SQL Agent?
  • What is DBCC?
  • What are the recovery models for a database?
  • What is the importance of a recovery model?

What Are the Questions Asked in MySQL Interview?

Here are the questions which are generally asked in MySQL interview:

  • What is MySQL?
  • What are some of the advantages of using MySQL?
  • What do you mean by ‘databases’?
  • What does SQL in MySQL stand for?
  • What does a MySQL database contain?
  • How can you interact with MySQL?
  • What are MySQL Database Queries?
  • What are some of the common MySQL commands?

What Is the Role of DBA In MySQL?

The first role of a database administrator in MySQL is to administer MySQL Server data systems and structures. A DBA uses software to store and organize data, records, or information. They also need to ensure that the data is protected securely from unauthorized access and that users can easily access the information they need.

Conclusion

The basics you need to know as a MySQL DBA are discussed in this article, and we’ll return with another part discussing more top MySQL DBA interview questions. Hopefully, you now have a basic idea of how to prepare for a DBA interview. Read our next article on the same topic to learn more. Till then, have a great day!

The post Top MySQL DBA Interview Questions (Part 2) appeared first on iheavy.

Planet MySQL

HOW TO BOOST MYSQL SCALABILITY | 5 EFFECTIVE WAYS

With increasing data and user demand, ensuring the scalability of your MySQL database has become crucial to maintaining optimal performance. With it, your database can handle growing amounts of data, traffic, and user requests. But how do you boost MySQL scalability?

This is a trending question among MySQL users, and if you are one of them, this article is just for you. In short, you boost MySQL scalability by optimizing your queries, database schema, and server configuration.

In this article, we’ll explore five effective ways to boost MySQL scalability and handle your database’s growth effectively. So, what are you waiting for? Let’s explore them below!

5 Ways To Boost MySQL Scalability

There are a lot of scalability challenges we see with clients over and over. The list could easily include 20, 50, or even 100 items, but we narrowed it down to the biggest five issues we see.

1. Tune those queries

By far the biggest bang for your buck is query optimization. Queries can be functionally correct and meet business requirements without ever being stress-tested for high traffic and high load. This is why we often see clients with growing pains and scalability challenges as their site becomes more popular.

This also makes sense. It wouldn’t necessarily be a good use of time to tune a query for some page off in a remote corner of your site, that didn’t receive real-world traffic. So, some amount of reactive tuning is common and appropriate.

Enable the slow query log and watch it. Use mk-query-digest, the great tool from Maatkit (now continued as pt-query-digest in Percona Toolkit), to analyze the log. Also, make sure the log_queries_not_using_indexes flag is set.

Once you’ve found a heavy resource-intensive query, optimize it! Use the EXPLAIN facility, use a profiler, look at index usage and create missing indexes, and understand how it is joining and/or sorting.

2. Employ Master-Master Replication

Master-master active-passive replication, otherwise known as circular replication, can be a boon for high availability, but also for scalability. That’s because you immediately have a read-only slave for your application to hit as well.

Many web applications exhibit an 80/20 split, where 80% of the activity is read or SELECT and the remainder is INSERT and UPDATE. Configure your application to send read traffic to the slave or rearchitect so this is possible. This type of horizontal scalability can then be extended further, adding additional read-only slaves to the infrastructure as necessary.

If you’re setting up replication for the first time, we recommend you do it using hotbackups. Here’s how.

Keep in mind MySQL’s replication has a tendency to drift, often silently from the master. Data can really get out of sync without throwing errors! Be sure to bulletproof your setup with checksums.

3. Use Your Memory

It sounds very basic and straightforward, yet there are often details overlooked. At least be sure to set these:

  • innodb_buffer_pool_size
  • key_buffer_size (MyISAM index caching)
  • query_cache_size – though beware of issues on large SMP boxes
  • thread_cache & table_cache
  • innodb_log_file_size & innodb_log_buffer_size
  • sort_buffer_size, join_buffer_size, read_buffer_size, read_rnd_buffer_size
  • tmp_table_size & max_heap_table_size

4. RAID Your Disk I/O

RAID 5 is slow for inserts and updates. It is also almost non-functional during a rebuild after you lose a disk, with very, very slow performance. What should you use instead?

RAID 10: mirroring and striping, with as many disks as you can fit in your server or RAID cabinet.  A database does a lot of disk I/O, even if you have enough memory to hold the entire database.

Why?  Sorting requires rearranging rows, as does group by, joins, and so forth. Plus, the transaction log is disk I/O as well!

Are you running on EC2? In that case, EBS volumes are already fault-tolerant and redundant, so give your performance a boost by striping only (RAID 0) across a number of EBS volumes using the Linux md software RAID, as sketched below.
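
A sketch of a four-volume RAID 0 stripe with mdadm (the device names are assumptions; yours will differ):

$ mdadm --create /dev/md0 --level=0 --raid-devices=4 /dev/xvdf /dev/xvdg /dev/xvdh /dev/xvdi
$ mkfs.xfs /dev/md0
$ mount /dev/md0 /var/lib/mysql

Keep in mind that RAID 0 multiplies throughput but also the failure domain; here you are deliberately leaning on EBS’s own redundancy underneath the stripe.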

5. Tune Key Parameters

These additional parameters can also help a lot with performance.

innodb_flush_log_at_trx_commit=2

This speeds up inserts and updates dramatically by being a little bit lazy about flushing the InnoDB log buffer: the log is written at each commit but only flushed to disk about once per second, so an OS crash or power loss can cost you up to roughly one second of committed transactions. Do your own research, but for most environments that trade-off is acceptable and this setting is recommended.

innodb_file_per_table

InnoDB was designed, like Oracle, around a shared tablespace model for storage. In practice, the default of keeping everything in a single shared tablespace turns out to be a performance bottleneck.

There is contention for file descriptors and internal structures. This setting makes InnoDB create a tablespace and underlying data file for each table, much as MyISAM does; it has also been the default since MySQL 5.6.6.
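
Putting both settings together in my.cnf (a sketch; check the defaults of your MySQL version before changing anything):

[mysqld]
innodb_flush_log_at_trx_commit = 2    # flush the redo log about once per second instead of per commit
innodb_file_per_table          = ON   # one tablespace file per table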

Frequently Asked Questions (FAQs)

What Is Scalability in MySQL?

Scalability is crucial to prevent your database from collapsing under an increased amount of traffic. A scalable database can handle large data volumes and heavy queries in a short period; where reads and writes against big data would otherwise take too long, a scalable setup reduces that time significantly.

How To Make MySQL Database Scalable?

To make MySQL scalable, start with the basics covered above: query tuning, memory configuration, and read scaling with replicas. If you go further and write to multiple masters to scale horizontally, make sure concurrent requests cannot generate duplicate IDs, for example by staggering auto_increment_increment and auto_increment_offset on each master; see the sketch below.
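
A minimal sketch for two masters, using the standard interleaving pattern:

# my.cnf on master 1
auto_increment_increment = 2
auto_increment_offset    = 1

# my.cnf on master 2
auto_increment_increment = 2
auto_increment_offset    = 2

With this, master 1 generates odd auto-increment values and master 2 generates even ones, so concurrent inserts on the two masters never collide.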

Conclusion

The importance of scalability in MySQL needs no introduction. The five most effective ways we know to boost MySQL scalability are explained above, and in most cases a combination of these strategies will be the best solution for your specific situation. For other questions on this topic, don’t hesitate to ask in the comment box below. Thanks for reading!

The post How to Boost MySQL Scalability | 5 Effective Ways appeared first on iheavy.

Planet MySQL

Laravel Vapor – Serverless Deployment Solution For Laravel Apps

https://q8q7r7w8.rocketcdn.me/wp-content/uploads/2023/07/IMG_2875.png

laravel vapor

Laravel Vapor is another of the Laravel products that we deeply love. We don’t actively use it within the team, because our significant DevOps capacity works on an infrastructure-as-code basis, but it is an excellent tool if you are a solo developer with limited DevOps expertise. Essentially, Vapor is a serverless deployment platform designed specifically for Laravel applications. What makes Laravel Vapor a good choice is its simplicity: it integrates seamlessly with Amazon Web Services (AWS), tapping into serverless technologies like AWS Lambda, API Gateway, and more.

One of the key benefits of using Laravel Vapor is its ability to eliminate traditional server management tasks. Developers no longer need to worry about provisioning servers, configuring load balancers, or managing scaling rules. Vapor handles all these aspects behind the scenes, allowing developers to focus on writing code and building features.

Deploying the application with Vapor is as easy as waving a wand. A simple Laravel Vapor CLI (Command-Line Interface) tool allows developers to deploy their projects in seconds. It takes care of packaging your code, managing resources, and updating your application environment.

Now let’s talk more about how to get started with Laravel Vapor. We’ll explore its best features, the common challenges you might face while using Vapor and some handy tips for overcoming them.

Setting up a Laravel Project for Vapor

To get started with Laravel Vapor, you’ll first need a Laravel project. If you don’t have one yet, create a new Laravel project or use an existing one. Make sure you have Laravel and Composer installed on your local machine.

Once your Laravel project is ready, the next step is installing the Vapor packages. Open your terminal or command prompt, navigate to your project’s directory, and run the following commands to install the Vapor CLI globally and the runtime package into your project:

composer global require laravel/vapor-cli

composer require laravel/vapor-core --update-with-dependencies

This installs the vapor command-line tool plus the package your application needs in order to run on Vapor.

Configuring and Deploying the Application

With Laravel Vapor installed, you need to configure your project for deployment. Laravel Vapor utilizes a vapor.yml configuration file to specify deployment settings and other details. 

To generate the vapor.yml file, run the Vapor CLI’s init command from your project’s root in your terminal or command prompt:

vapor init

This command will create the vapor.yml file in the root directory of your Laravel project and link the project to your Vapor account.

Open the vapor.yml file and configure your environments: environment variables, memory allocation, build and deploy steps, and other deployment settings. (Your AWS credentials and region are linked to the project through the Vapor dashboard rather than stored in this file.) This file allows you to customize most aspects of how your application is deployed.
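
As a rough sketch, a minimal vapor.yml might look like this (the id, name, and hook commands here are illustrative, not defaults):

id: 12345
name: my-app
environments:
    production:
        memory: 1024
        build:
            - 'composer install --no-dev'
        deploy:
            - 'php artisan migrate --force'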

Exploring Laravel Vapor’s Deployment Workflow

Now that your Laravel project is set up for Vapor, it’s time to deploy it. Vapor provides a straightforward deployment workflow that simplifies the process.

To deploy your application to Vapor, run the deploy command with the target environment in your terminal or command prompt:

vapor deploy production

This command will initiate the deployment process. Vapor will package your application code, upload it to AWS, and create the necessary infrastructure to run your Laravel application in a serverless environment.

During deployment, Vapor displays progress updates, allowing you to track the process. Once the deployment is complete, Vapor will provide you with a URL where your application is accessible.

Following these steps, you can easily set up a Laravel project for Vapor, configure its deployment settings, and deploy your application to a serverless environment. 

If you find it challenging to set up or deploy a Laravel project on Vapor, we recommend looking at the course created by a Laravel team member to gain an in-depth understanding of the platform, its usage, and its features. It can serve as a valuable resource, especially for beginners, when issues come up.

Exploring Vapor’s Features


Laravel Vapor brings a lot of powerful features to the table that make it an excellent choice for deploying and managing Laravel applications in a serverless environment. So let’s take a closer look at some of the most significant features that set Vapor apart:

  • Laravel Vapor CLI and its capabilities: The CLI serves as an invaluable assistant tool that boosts developers’ experience by providing greater flexibility and efficiency in using Laravel Vapor. This command-line interface (CLI) allows developers to interact with and manage their Vapor environments and deployments. With the Vapor CLI, you can easily deploy your applications, manage Laravel Vapor environment variables, view logs, and perform various other tasks related to your Laravel Vapor projects. 
  • Setting Up Scheduler: Vapor’s heavy reliance on Lambda functions might suggest that many Laravel components, including the scheduler, would be severely limited. In reality, Vapor addresses this concern efficiently. The scheduler feature allows you to define and manage scheduled tasks for your application: recurring work such as database backups, sending emails, and processing queues, executed at specified intervals. The scheduler is fully integrated with the underlying AWS infrastructure, which ensures reliable execution of tasks.
  • Vapor UI with Monitoring Capabilities: Vapor provides a web-based user interface with monitoring capabilities for your applications. Through the Vapor UI, you can access and view essential metrics such as error logs, queued jobs, HTTP requests, and scheduled tasks. This visibility into the performance and behaviour of your application helps you identify issues and monitor its health way more effectively.
  • Setting up a database with Vapor: Vapor makes it straightforward to set up and manage databases for your Laravel applications. It integrates with Amazon RDS (Relational Database Service) and supports MySQL and PostgreSQL databases. You can create and configure database instances using the Laravel Vapor CLI, allowing you to easily scale your database resources as needed (see the sketch after this list). Vapor takes care of the database infrastructure, backups, and maintenance, letting you focus on building your application.
  • Jumpboxes to Connect to Private Databases: A jumpbox is a small server that acts as an intermediary, allowing you to securely connect to your private databases. It provides a secure path to the private database instances, so your Laravel Vapor database remains protected and isolated from external access while you can still interact with it seamlessly when needed.
  • Metrics: Vapor provides detailed metrics and insights into the performance of your Laravel applications. These metrics can include information such as request/response times, function execution, and resource utilization. Analyzing these metrics helps you optimize your application’s performance and resource consumption.
  • Alarms: With Vapor, you can set up alarms to get notified about specific events or thresholds that exceed predefined limits. Alarms can be configured to trigger notifications via various channels, such as email or SMS. By setting up alarms, you can proactively monitor and respond to critical issues in your serverless Laravel application.

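As a quick, hedged sketch of the database workflow (the name my-app-db is illustrative): create an RDS database from the Vapor CLI, then attach it to an environment in vapor.yml:

vapor database my-app-db

# in vapor.yml
environments:
    production:
        database: my-app-db

On the next deploy, Vapor injects the connection settings into your application’s environment, so Laravel can use the database without manual configuration.
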
All of these features make using Laravel Vapor comfortable and user-friendly. While performing certain actions from the AWS console can be complicated and counterintuitive, Vapor simplifies the deployment process, letting developers ship applications without dedicated DevOps or server-administration work. This flexibility is probably one of the greatest benefits of using Vapor.

Cost Optimization with Laravel Vapor

Another notable thing about Laravel Vapor is that it offers a more granular and often more cost-effective billing model than traditional server-based setups. Since Laravel Vapor runs on AWS Lambda, its cost is directly driven by the volume of executed requests and their processing times. If the application suffers performance issues or slow responses, AWS Lambda costs rise accordingly.

Where an application receives heavy traffic, say more than 100k users, with significant delays between request and response, Vapor might not be the most cost-effective or optimal option. For smaller projects, or applications that do not expect large spikes in traffic, Vapor is one of the most viable options available, alongside alternatives such as Laravel Forge.

Limitations and Considerations

While Laravel Vapor offers many benefits, it’s essential to be aware of some potential challenges and limitations you might encounter. One limitation is that Vapor currently supports only AWS as the underlying infrastructure, so if you prefer a different cloud provider, you might need to explore alternative serverless platforms to Laravel Vapor.

Another consideration is that the serverless architecture may require adjustments to your application’s code and architecture. Certain Laravel features, such as long-running processes or file storage, may require modifications to align with the stateless nature of serverless environments.

Scaling considerations are also important. While Vapor handles automatic scaling for you, sudden spikes in traffic may require adjusting scaling rules and ensuring your application can handle the increased load.

To make the most of Laravel Vapor and overcome common challenges, here are some valuable tips:

  1. Optimize your code and architecture: Embrace serverless best practices by optimizing your codebase and architecture. Consider minimizing long-running processes, optimizing database queries, and utilizing caching mechanisms to maximize performance and efficiency.
  2. Monitor and debug: Leverage the monitoring and debugging tools provided by Vapor and AWS. Monitor your application’s performance, identify and address any errors or bottlenecks, and make configuration adjustments as needed.
  3. Stay informed and up to date: Keep yourself updated with the latest Vapor documentation, release notes, and community resources. Regularly check for updates, new features, and bug fixes to ensure you’re taking advantage of the latest improvements and optimizations.
  4. Engage with the community: Join Laravel forums, social media groups, and other developer communities to connect with fellow Vapor users. Share your experiences, exchange insights, and seek advice from others who have overcome similar challenges.
  5. Plan for scalability: Design your application with scalability in mind. Utilize Vapor’s automatic scaling capabilities and plan for potential traffic spikes by implementing efficient caching strategies, utilizing queue workers, and optimizing database configurations.

By understanding and addressing potential limitations and following these tips, you can navigate common challenges and make the most of Laravel Vapor’s serverless deployment capabilities for your Laravel applications.


Laravel News Links