Deploying Soketi to Laravel Forge – Part 2

In Part 1 of this tutorial we learnt how to install and deploy Soketi to our Laravel Forge servers.
Currently, Soketi is accessible over our server’s IP address, behind port 6001. In this post we’re going to modify our setup so that we can access our socket server via socket.my-domain.com. We’ll do this by using an Nginx reverse proxy.

Laravel

Deploying Soketi to Laravel Forge

Soketi is a simple, fast, and resilient open-source WebSockets server written in TypeScript. It’s fully compatible with the Pusher v7 protocol, which makes it a great replacement for Pusher when using Laravel Echo.

Laravel

Efficient Data Archiving in MySQL


Recently I have been working with a few customers with multiple terabytes of transactional data on their MySQL clusters. These very large datasets are not really needed for their daily operations, but they are very convenient because they allow them to query historical data easily. The convenience comes at a high price, though: you pay a lot more for storage, and backups and restores take much longer and are, of course, much larger. So, the question is: how can they perform “efficient data archiving”?

Let’s try to define what an efficient data archiving architecture would look like. We can lay out some key requirements:

  • The archive should be on an asynchronous replica
  • The archive replica should be using a storage configuration optimized for large datasets
  • The regular cluster should just be deleting data normally
  • The archiving system should remove delete statements from the replication stream and keep only the inserts and updates
  • The archiving system should be robust and able to handle failures and resume replication

Key elements

Our initial starting point is something like this:

The cluster is composed of a source (S) and two replicas (R1 and R2) and we are adding a replica for the archive (RA). The existing cluster is pretty much irrelevant in all the discussions that will follow, as long as the row-based replication format is used with full-row images.
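
A quick way to confirm these settings on the source (the expected values follow the requirement above):

mysql> SELECT @@binlog_format;    -- should return ROW
mysql> SELECT @@binlog_row_image; -- should return FULL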

The above setup is in theory sufficient to archive data, but in order to do so, we must not allow the delete statements on the tables we want to archive to flow through the replication stream. The deletions must be executed with sql_log_bin = 0 on all the normal servers. Although this may look simple, it has a number of drawbacks. A cron job or a SQL event must be called regularly on all the servers, and these jobs must delete the same data on all the production servers. This process will likely introduce some differences between the tables, and verification tools like pt-table-checksum may start to report false positives. As we’ll see, there are other options.
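
For illustration, the kind of purge job that would have to run on every server looks roughly like this (the table, column, and retention period are made up for the example; setting sql_log_bin requires a privileged account):

-- Run locally on each server; sql_log_bin = 0 keeps the deletes out of the binary log
SET SESSION sql_log_bin = 0;
DELETE FROM mydb.t
 WHERE created_at < NOW() - INTERVAL 90 DAY
 LIMIT 10000;
SET SESSION sql_log_bin = 1;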

Capturing the changes (CDC)

An important component we need is a way to capture the changes going to the table we want to archive. The MySQL binary log, when used with the row-based format and full row image, is perfect for the purpose. We need a tool that can connect to a database server like a replica, convert the binary log events into a usable form, and keep track of its position in the binary log.

For this project, we’ll use Maxwell, a tool developed by Zendesk. Maxwell connects to a source server like a regular replica and outputs the row-based events in JSON format. It keeps track of its replication position in a table on the source server.
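
To give an idea of what that looks like, an insert event emitted by Maxwell’s stdout producer is a single JSON document per row, roughly of this shape (the values here are illustrative):

{"database":"tpcc","table":"orders1","type":"insert","ts":1661452800,"xid":48291,"data":{"o_id":3001,"o_d_id":1,"o_w_id":1,"o_c_id":152}}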

Removing deletions

Since the CDC component will output the events in JSON format, we just need to filter for the tables we are interested in and then ignore the delete events. You can use any programming language that has decent JSON and MySQL support. In this post, I’ll be using Python.

Storage engine for the archives

InnoDB is great for transactional workloads but far less optimal for archiving data. MyRocks is a much better option, as it is write-optimized and much more efficient at data compression.
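
If the archiving replica runs Percona Server for MySQL, the MyRocks engine can be enabled with the ps-admin helper (this assumes Percona Server; other distributions package MyRocks differently):

# ps-admin --enable-rocksdb -u root -p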

Architectures for efficient data archiving

Shifted table

We have a few architectural options for our archiving replica. The first architecture, shown below, hooks the CDC to the archiving replica. This means if we are archiving table t, we’ll need to have on the archiving replica both the production t, from which data is deleted, and the archived copy tA, which keeps its data long term.

[Figure: the “Shifted table” architecture]

The main advantage of this architecture is that all the components related to the archiving process only interact with the archiving replica. The negative side is, of course, the presence of duplicate data on the archiving replica as it has to host both t and tA. One could argue that the table t could be using the blackhole storage engine but let’s not dive down such a rabbit hole.

Ignored table

Another architectural option is to use two different replication streams from the source. The first stream is the regular replication link, but the replica has the replication option replicate-ignore-table=t. The replication events for table t are handled by a second replication link controlled by Maxwell. The deletion events are removed, and the inserts and updates are applied to the archiving replica.
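
On the archiving replica, the regular link’s filter is just a my.cnf entry along these lines (the database and table names are placeholders):

[mysqld]
replicate-ignore-table=mydb.t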

[Figure: the “Ignored table” architecture with Maxwell]

While this latter architecture stores only a single copy of t on the archiving replica, it needs two full replication streams from the source.

Example

The application

My goal here is to provide an example that is as simple as possible while still functional. I’ll be using the Shifted table approach with the sysbench TPC-C script. This script has an option, enable_purge, that removes old orders that have been processed. Our goal is to create the table tpccArchive.orders1, which contains all the rows, even the deleted ones, while the table tpcc.orders1 is the regular orders table. They have the same structure, but the archive table is using MyRocks.

Let’s first prepare the archive table:

mysql> create database tpccArchive;
Query OK, 1 row affected (0,01 sec)

mysql> use tpccArchive;
Database changed

mysql> create table orders1 like tpcc.orders1;
Query OK, 0 rows affected (0,05 sec)

mysql> alter table orders1 engine=rocksdb;
Query OK, 0 rows affected (0,07 sec)
Records: 0  Duplicates: 0  Warnings: 0

Capturing the changes

Now, we can install Maxwell. Maxwell is a Java-based application so a compatible JRE is needed. It will also connect to MySQL as a replica so it needs an account with the required grants.  It also needs its own maxwell schema in order to persist replication status and position.

root@LabPS8_1:~# apt-get install openjdk-17-jre-headless 
root@LabPS8_1:~# mysql -e "create user maxwell@'localhost' identified by 'maxwell';"
root@LabPS8_1:~# mysql -e 'create database maxwell;'
root@LabPS8_1:~# mysql -e 'grant ALL PRIVILEGES ON maxwell.* TO maxwell@localhost;'
root@LabPS8_1:~# mysql -e 'grant SELECT, REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO maxwell@localhost;'
root@LabPS8_1:~# curl -sLo - https://github.com/zendesk/maxwell/releases/download/v1.37.6/maxwell-1.37.6.tar.gz| tar zxvf -
root@LabPS8_1:~# cd maxwell-1.37.6/
root@LabPS8_1:~/maxwell-1.37.6# ./bin/maxwell -help
Help for Maxwell:

Option                   Description                                                                       
------                   -----------                                                                       
--config <String>        location of config.properties file                                                
--env_config <String>    json object encoded config in an environment variable                             
--producer <String>      producer type: stdout|file|kafka|kinesis|nats|pubsub|sns|sqs|rabbitmq|redis|custom
--client_id <String>     unique identifier for this maxwell instance, use when running multiple maxwells   
--host <String>          main mysql host (contains `maxwell` database)                                     
--port <Integer>         port for host                                                                     
--user <String>          username for host                                                                 
--password <String>      password for host                                                                 
--help [ all, mysql, operation, custom_producer, file_producer, kafka, kinesis, sqs, sns, nats, pubsub, output, filtering, rabbitmq, redis, metrics, http ]


In our example, we’ll use the stdout producer to keep things as simple as possible. 

Filtering script

In order to add and update rows in the tpccArchive.orders1 table, we need a piece of logic that identifies events for the table tpcc.orders1 and ignores the delete statements. Again, for simplicity, I chose to use a Python script. I won’t present the whole script here; feel free to download it from my GitHub repository. It is essentially a loop over the lines written to stdin: each line is loaded as a JSON document, and then some decisions are made based on the values found. Here’s a small section of code at its core:

...
for line in sys.stdin:
    j = json.loads(line)
    if j['database'] == dbName and j['table'] == tableName:
        debug_print(line)
        if j['type'] == 'insert':
            # Let's build an insert ignore statement
            sql += 'insert ignore into ' + destDbName + '.' + tableName
...

The above section creates an “insert ignore” statement when the event type is ‘insert’. The script connects to the database using the user archiver and the password tpcc and then applies the event to the table tpccArchive.orders1.

root@LabPS8_1:~# mysql -e "create user archiver@'localhost' identified by 'tpcc';"
root@LabPS8_1:~# mysql -e 'grant ALL PRIVILEGES ON tpccArchive.* TO archiver@localhost;'
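
For completeness, here is a minimal sketch of what the apply side of such a script could look like. It is a simplified illustration, not the actual script from the repository: it assumes the mysql-connector-python package and applies the full row image with REPLACE, which covers both inserts and updates.

import json
import sys
import mysql.connector   # assumes the mysql-connector-python package is installed

conn = mysql.connector.connect(user='archiver', password='tpcc',
                               host='127.0.0.1', database='tpccArchive')
cur = conn.cursor()

for line in sys.stdin:
    j = json.loads(line)
    if j.get('database') != 'tpcc' or j.get('table') != 'orders1':
        continue
    if j['type'] not in ('insert', 'update'):
        continue                       # delete events are simply ignored
    row = j['data']                    # full row image provided by Maxwell
    cols = ', '.join(row.keys())
    marks = ', '.join(['%s'] * len(row))
    # REPLACE applies the full row image for both inserts and updates
    cur.execute('REPLACE INTO orders1 (' + cols + ') VALUES (' + marks + ')',
                list(row.values()))
    conn.commit()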

All together

Just to make it easy to reproduce the steps, here’s the application (tpcc) side:

yves@ThinkPad-P51:~/src/sysbench-tpcc$ ./tpcc.lua --mysql-host=10.0.4.158 --mysql-user=tpcc --mysql-password=tpcc --mysql-db=tpcc \
        --threads=1 --tables=1 --scale=1 --db-driver=mysql --enable_purge=yes --time=7200 --report-interval=10 prepare
yves@ThinkPad-P51:~/src/sysbench-tpcc$ ./tpcc.lua --mysql-host=10.0.4.158 --mysql-user=tpcc --mysql-password=tpcc --mysql-db=tpcc \
        --threads=1 --tables=1 --scale=1 --db-driver=mysql --enable_purge=yes --time=7200 --report-interval=10 run

The database is running in a VM whose IP is 10.0.4.158. The enable_purge option causes old rows in orders1 to be deleted. For the archiving side, running on the database VM:

root@LabPS8_1:~/maxwell-1.37.6# bin/maxwell --user='maxwell' --password='maxwell' --host='127.0.0.1' \
        --producer=stdout 2> /tmp/maxerr | python3 ArchiveTpccOrders1.py

After the two-hour tpcc run we have:

mysql> select  TABLE_SCHEMA, TABLE_ROWS, DATA_LENGTH, INDEX_LENGTH, ENGINE from information_schema.tables where table_name='orders1';
+--------------+------------+-------------+--------------+---------+
| TABLE_SCHEMA | TABLE_ROWS | DATA_LENGTH | INDEX_LENGTH | ENGINE  |
+--------------+------------+-------------+--------------+---------+
| tpcc         |      48724 |     4210688 |      2310144 | InnoDB  |
| tpccArchive  |    1858878 |    38107132 |     14870912 | ROCKSDB |
+--------------+------------+-------------+--------------+---------+
2 rows in set (0,00 sec)

A more realistic architecture

The above example is, well, an example. Any production system will need to be hardened much more than my example. Here are a few requirements:

  • Maxwell must be able to restart and continue from the correct replication position
  • The Python script must be able to restart and continue from the correct replication position
  • The Python script must be able to reconnect to MySQL and retry a transaction if the connection is dropped.

Maxwell already takes care of the first point: it uses the database to store its current position.

The next logical step would be to add a more robust queuing system than a simple process pipe between Maxwell and the Python script. Maxwell supports many queuing systems like Kafka, Kinesis, RabbitMQ, Redis, and many others. For our application, I tend to like a solution using Kafka and a single partition. Kafka doesn’t manage the offset of the message; it is up to the application. This means the Python script could update a row of a table as part of every transaction it applies, to keep track of its position in the Kafka stream. If the archive tables are using RocksDB, the queue position tracking table should also use RocksDB so that the database transaction does not span storage engines.
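
A rough sketch of such a position-tracking table, and of how each apply transaction could include it (the table name and offset value are made up):

-- Hypothetical position-tracking table, same storage engine as the archive
CREATE TABLE tpccArchive.kafka_position (
  id           TINYINT UNSIGNED PRIMARY KEY,
  kafka_offset BIGINT UNSIGNED NOT NULL
) ENGINE=ROCKSDB;

-- Each apply transaction updates the offset together with the archived rows
START TRANSACTION;
-- ... REPLACE / INSERT IGNORE of the archived rows goes here ...
UPDATE tpccArchive.kafka_position SET kafka_offset = 123456 WHERE id = 1;
COMMIT;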

Conclusion

In this post, I provided a solution to archive data using the MySQL replication binary logs. Archiving fast-growing tables is a frequent need and, hopefully, such a solution can help. It would be great to have a MySQL plugin on the replica able to filter the replication events directly. This would remove the need for an external solution like Maxwell and my Python script. Generally speaking, however, this archiving solution is just a specific case of a summary table. In a future post, I hope to present a more complete solution that will also maintain a summary.

Percona Database Performance Blog

Athena (Trailer)


Athena (Trailer)


After a soldier loses his youngest brother in an apparent police altercation, he returns home to help bring justice to the fallen. But things quickly escalate as the community wants revenge. Director Romain Gavras’ immersive Athena drops us smack-dab into the middle of a tense and chaotic scene. Coming to Netflix 9.23.2022.

The Awesomer

Two Extremely Useful Tools (pt-upgrade and checkForServerUpgrade) for MySQL Upgrade Testing


My last blog, Percona Utilities That Make Major MySQL Version Upgrades Easier, detailed the tools available from the Percona Toolkit that assist us with major MySQL version upgrades. The pt-upgrade tool aids in testing application queries and generates reports on how each query performs on servers running various versions of MySQL.

MySQL Shell Upgrade Checker, part of the mysql-shell utilities, helps with compatibility testing when upgrading MySQL 5.7 instances to MySQL 8.0. The util.checkForServerUpgrade() function checks whether a MySQL 5.7 instance is ready for the MySQL 8.0 upgrade and generates a report with warnings, errors, and notices for preparing the current MySQL 5.7 setup for upgrading to MySQL 8.0.

We can run this Upgrade Checker Utility in the current MySQL 5.7 environment to generate the report; I would recommend running it on one of the replica instances that has the same configuration as production.

The user account used to execute the upgrade checker tool must have ALL privileges up to MySQL Shell 8.0.20. As of MySQL Shell 8.0.21, the account requires only the RELOAD, PROCESS, and SELECT privileges.
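
For example, on MySQL Shell 8.0.21 or later, a dedicated account could be created roughly like this (the user name and password are placeholders):

mysql> CREATE USER 'upgrade_chk'@'localhost' IDENTIFIED BY 'S3cret!';
mysql> GRANT RELOAD, PROCESS, SELECT ON *.* TO 'upgrade_chk'@'localhost';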

How to generate a report using Upgrade Checker Utility

To generate a report using the Upgrade Checker Utility, we may either log in to the mysqlsh prompt or execute the check directly from the command line.

mysqlsh -- util checkForServerUpgrade 'root@localhost:3306' --target-version=8.0.29 --config-path=/etc/my.cnf > CheckForServerUpgrade_Report.txt
Please provide the password for 'mysqluser@localhost:3306':

$ mysqlsh
MySQL  JS > util.checkForServerUpgrade('root@localhost:3306', { "targetVersion":"8.0.29", "configPath":"/etc/my.cnf"})
Please provide the password for 'mysqluser@localhost:3306':

To quit the mysqlsh command prompt, type \exit.

MySQL  JS > \exit
Bye!

Do pt-upgrade and the Upgrade Checker Utility do the same tests? No!

Don’t confuse the Upgrade Checker Utility with the pt-upgrade tool since they are used for different kinds of major version upgrade testing. The Upgrade Checker Utility performs a variety of tests on the selected MySQL server to ascertain whether the upgrade will be successful; however, the tool does not confirm whether the upgrade is compatible with the application queries or routines.
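
For that kind of query testing, pt-upgrade is the right tool; a typical invocation replays a query log against two servers, one per version, and compares the results (the hosts, credentials, and log path below are illustrative):

pt-upgrade slow.log \
    h=127.0.0.1,P=5717,u=percona,p=secret \
    h=127.0.0.1,P=8017,u=percona,p=secret > pt-upgrade-report.txt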

Does it check both my.cnf file and the MySQL server variables?

The utility can look for system variables declared in the configuration file (my.cnf) but removed in the target MySQL Server release, as well as system variables not defined in the configuration file but with a different default value in the target MySQL Server release. You must give the path to the configuration file when executing checkForServerUpgrade() for these checks. However, the tool is unable to identify variables that have been removed from the my.cnf file but are still set in the running MySQL server.

Let us remove query_cache_type from /etc/percona-server.conf.d/mysqld.cnf and run the command.

]# mysql -uroot -p -e "SHOW VARIABLES WHERE Variable_Name IN ('query_cache_type','query_cache_size')"
Enter password:
+------------------+---------+
| Variable_name    | Value   |
+------------------+---------+
| query_cache_size | 1048576 |
| query_cache_type | ON      |
+------------------+---------+

]# cat /etc/my.cnf
#
# The Percona Server 5.7 configuration file.
#
#
# * IMPORTANT: Additional settings that can override those from this file!
#   The files must end with '.cnf', otherwise they'll be ignored.
#   Please make any edits and changes to the appropriate sectional files
#   included below.
#
!includedir /etc/my.cnf.d/
!includedir /etc/percona-server.conf.d/
]#

Remove query_cache_type variable from mysqld.cnf:

]# sed -i '/query_cache_type/d' /etc/percona-server.conf.d/mysqld.cnf
]#

]# grep -i query /etc/my.cnf /etc/percona-server.conf.d/mysqld.cnf
/etc/percona-server.conf.d/mysqld.cnf:query_cache_size=5058320
]#

As the query_cache_type variable has been deleted from my.cnf, the tool is unable to detect it:

#  mysqlsh -- util checkForServerUpgrade 'root@localhost:3306' --target-version=8.0.29 --config-path=/etc/my.cnf | grep  -B 6  -i "query_cache"
15) Removed system variables
  Error: Following system variables that were detected as being used will be
    removed. Please update your system to not rely on them before the upgrade.
  More information:
    https://dev.mysql.com/doc/refman/8.0/en/added-deprecated-removed.html#optvars-removed

  query_cache_size - is set and will be removed
ERROR: 1 errors were found. Please correct these issues before upgrading to avoid compatibility issues.

In JSON format, the report looks like this:

Note: To make the blog more readable, I shortened the report.

# mysqlsh -- util checkForServerUpgrade 'root@localhost:3306' --target-version=8.0.29 --config-path=/etc/my.cnf --output-format=JSON
{
    "serverAddress": "localhost:3306",
    "serverVersion": "5.7.39-42 - Percona Server (GPL), Release 42, Revision b0a7dc2da2e",
    "targetVersion": "8.0.29",
    "errorCount": 1,
    "warningCount": 27,
    "noticeCount": 1,
    "summary": "1 errors were found. Please correct these issues before upgrading to avoid compatibility issues.",
    "checksPerformed": [
        {
            "id": "oldTemporalCheck",
            "title": "Usage of old temporal type",
            "status": "OK",
            "detectedProblems": []
        },
        {
            "id": "reservedKeywordsCheck",
            "title": "Usage of db objects with names conflicting with new reserved keywords",
            "status": "OK",
            "detectedProblems": []
        },
…
        {
            "id": "sqlModeFlagCheck",
            "title": "Usage of obsolete sql_mode flags",
            "status": "OK",
            "description": "Notice: The following DB objects have obsolete options persisted for sql_mode, which will be cleared during upgrade to 8.0.",
            "documentationLink": "https://dev.mysql.com/doc/refman/8.0/en/mysql-nutshell.html#mysql-nutshell-removals",
            "detectedProblems": [
                {
                    "level": "Notice",
                    "dbObject": "global system variable sql_mode",
                    "description": "defined using obsolete NO_AUTO_CREATE_USER option"
                }
            ]
        },
        {
            "id": "enumSetElementLenghtCheck",
            "title": "ENUM/SET column definitions containing elements longer than 255 characters",
            "status": "OK",
            "detectedProblems": []
        },
…
        {
            "id": "removedSysVars",
            "title": "Removed system variables",
            "status": "OK",
            "description": "Error: Following system variables that were detected as being used will be removed. Please update your system to not rely on them before the upgrade.",
            "documentationLink": "https://dev.mysql.com/doc/refman/8.0/en/added-deprecated-removed.html#optvars-removed",
            "detectedProblems": [
                {
                    "level": "Error",
                    "dbObject": "query_cache_size",
                    "description": "is set and will be removed"
                }
            ]
        },
        {
            "id": "sysVarsNewDefaults",
            "title": "System variables with new default values",
            "status": "OK",
            "description": "Warning: Following system variables that are not defined in your configuration file will have new default values. Please review if you rely on their current values and if so define them before performing upgrade.",
            "documentationLink": "https://mysqlserverteam.com/new-defaults-in-mysql-8-0/",
            "detectedProblems": [
                {
                    "level": "Warning",
                    "dbObject": "back_log",
                    "description": "default value will change"
                },
                {
                    "level": "Warning",
                    "dbObject": "innodb_max_dirty_pages_pct",
                    "description": "default value will change from 75 (%)  90 (%)"
                }
            ]
        },
        {
            "id": "zeroDatesCheck",
            "title": "Zero Date, Datetime, and Timestamp values",
            "status": "OK",
            "detectedProblems": []
        },
…
    ],
    "manualChecks": [
        {
            "id": "defaultAuthenticationPlugin",
            "title": "New default authentication plugin considerations",
            "description": "Warning: The new default authentication plugin 'caching_sha2_password' offers more secure password hashing than previously used 'mysql_native_password' (and consequent improved client connection authentication). However, it also has compatibility implications that may affect existing MySQL installations.  If your MySQL installation must serve pre-8.0 clients and you encounter compatibility issues after upgrading, the simplest way to address those issues is to reconfigure the server to revert to the previous default authentication plugin (mysql_native_password). For example, use these lines in the server option file:\n\n[mysqld]\ndefault_authentication_plugin=mysql_native_password\n\nHowever, the setting should be viewed as temporary, not as a long term or permanent solution, because it causes new accounts created with the setting in effect to forego the improved authentication security.\nIf you are using replication please take time to understand how the authentication plugin changes may impact you.",
            "documentationLink": "https://dev.mysql.com/doc/refman/8.0/en/upgrading-from-previous-series.html#upgrade-caching-sha2-password-compatibility-issues\nhttps://dev.mysql.com/doc/refman/8.0/en/upgrading-from-previous-series.html#upgrade-caching-sha2-password-replication"
        }
    ]
}

Please read Daniel Guzmán Burgos’ blog post to find out more about the Upgrade Checker Utility, and follow the link to learn more about pt-upgrade testing.

Prior to a major version upgrade, application query testing and configuration checks are unavoidable tasks, and pt-upgrade and the Upgrade Checker Utility are quite helpful for them.

Planet MySQL

Mining the MySQL Performance Schema for Transactions

The MySQL Performance Schema is a gold mine of valuable data.
Among the many nuggets you can extract from it is an historical report of transactions: how long a transaction took to execute, what queries were executed in it (with query metrics), and idle time between queries.
Mining this information is not trivial, but it’s fun and this blog post shows how to start.
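
As a starting point, a query along these lines ties each recorded transaction to the statements executed inside it; it is only a rough sketch and assumes the relevant Performance Schema consumers (events_transactions_* and events_statements_*) are enabled:

SELECT t.THREAD_ID,
       t.EVENT_ID                 AS trx_event_id,
       t.TIMER_WAIT / 1e12        AS trx_seconds,
       s.SQL_TEXT,
       s.TIMER_WAIT / 1e12        AS stmt_seconds
  FROM performance_schema.events_transactions_history_long AS t
  JOIN performance_schema.events_statements_history_long   AS s
       ON s.THREAD_ID = t.THREAD_ID
      AND s.NESTING_EVENT_ID = t.EVENT_ID
      AND s.NESTING_EVENT_TYPE = 'TRANSACTION'
 ORDER BY t.THREAD_ID, t.EVENT_ID, s.EVENT_ID;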

Planet MySQL

Why You Should Use Administrative Interfaces to Manage Linux Servers


The biggest problem for Linux system and server administrators is troubleshooting the errors they encounter. Fixing these issues, managing security problems, and analyzing the root cause behind such issues from the command line can sometimes pose serious challenges.

Linux itself is a command-line universe. It is not easy to learn all the commands and their parameters, let alone use them to troubleshoot errors.

That’s why there are Linux management interfaces to keep everything in sight. Most system and server administrators prefer these administrative interfaces for managing their Linux systems instead. Here’s why you should consider using an admin interface to manage a Linux server.

Why Use an Admin Interface for Linux Management?

For Linux system administrators, it is important to learn how these interfaces work, in addition to knowing how to use them properly. In short, you can think of management interfaces as tools that sit between your network management station and the object you want to manage, in this case, a Linux machine.

To picture it better, imagine you have a Linux server. To manage this server and access its various objects, you need to use some management protocol. Management interfaces let you monitor the relationship between these management protocols and the objects being managed.

It is quite difficult to do all this tracking from the command line. You need to spend a lot of time at the terminal and master the Linux networking commands. Moreover, even if you do all this, there is an increased possibility of making mistakes. As a result, managing a system manually using commands is risky and difficult.

Using a Web Interface for Linux Administration

Web interfaces are accessible and easy to use. If you’re managing a system using a web interface, you can often find databases, customer information, user agreements, uploaded files, IP addresses, and even error logs, all in one place. Since everything will be in front of your eyes, you can perform your management operations with just a few mouse clicks.

What Is Webmin?

It is very practical to manage web-based systems with Webmin. If you have used environments such as cPanel and Plesk before, you will feel right at home with Webmin. Moreover, Webmin is open source and has a lot of features.

Webmin allows you to manage the accounts of all registered users in the system from a single location. Furthermore, no coding abilities are required. You also don’t need shell commands to configure your network or change network files, as Webmin can assist you with network configuration as well.

Another management issue that Linux users are closely familiar with is disk partitioning. Webmin comes with partitioning and automatic backup features. It also takes care of security protocols, so you don’t have to worry about SSL renewal. In addition, there is a command shell feature that lets you issue Linux and Unix commands from within Webmin.

Today, cloud technologies continue to grow at a very rapid pace. If you are considering using a cloud computing service or want to build your system on a cloud, Webmin also has a cloud installation feature.

Another very useful feature of Webmin is that it has different modules. Since it is open source, you can write your own modules and can even benefit from ready-made modules on the internet. For example, using the Virtualmin GPL module, you can control your hosting service. Moreover, it is possible to manage virtual hosts and DNS from here.

If you have more than one virtual server, Virtualmin GPL creates a Webmin user for each virtual server. Each server manages only its own virtual server with Webmin. Thus, it is possible to have independent mailboxes, websites, applications, database servers, and software in each of these virtual servers.

Package Configuration in Linux System Management

Another topic that Linux system administrators should be familiar with is package configuration and management. When installing a package on your system, you only follow what is happening on the screen: the download takes place, the installed files are listed, and you are given information about the installation. However, this adventure is not that simple.

When you want to install a package, it needs to be configured system-wide. To give an example from Debian and Ubuntu systems, the configuration tool that does this is debconf, and already-installed packages can be reconfigured later with the dpkg-reconfigure command.

It would make sense to examine an example to better understand why you should consider debconf one of the management interfaces. You can query the packages present in your debconf database using a simple command: with the --listowners parameter, debconf-show returns the packages (owners) that have entries in the database:

sudo debconf-show --listowners

Now try to reconfigure an item of your choice using dpkg-reconfigure:

sudo dpkg-reconfigure wireshark-common

As you can see, a configuration interface for wireshark-common opens, and the configuration operations become easier through the debconf interface. There is no separate debconf command to run here, though, because debconf is already integrated into dpkg.

If you are going to write your own Linux packages and use them in system administration, it is useful to be familiar with debconf, because it provides an interface for talking to the users who install your package and collecting input from them. For this, you use the frontend and backend APIs that debconf provides.
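
As a rough illustration, a package’s config script can talk to debconf through the shell confmodule; the package and question names below are invented for the example:

#!/bin/sh
# Hypothetical config script for a package named "mypkg"
set -e
. /usr/share/debconf/confmodule

# Ask a medium-priority question defined in the package's templates file
db_input medium mypkg/enable_feature || true
db_go

# Read the answer back; db_get places the value in $RET
db_get mypkg/enable_feature
if [ "$RET" = "true" ]; then
    echo "mypkg: feature will be enabled" >&2
fi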

Importance of Admin Interfaces in Linux System Management

There are a lot of commands you can use when managing Linux systems and servers. Each of these commands has dozens of different parameters. Of course, it is very valuable for you to become familiar with and learn about them. However, you can’t ignore the convenience and accessibility provided by management interfaces.

Even just to change a basic configuration setting, you need to make some changes to the files. Moreover, these changes can damage your system. In a large-scale project, such configuration issues can cause huge problems in terms of both expenses and security. However, the management interfaces will save you from this whole pile of commands and parameters.

The main purpose here is to reduce the workload and save time. Webmin and debconf are just examples; you may also want to learn technologies such as Cockpit and Nagios. These are powerful, frequently used Linux system and server administration tools that will serve you well.

MUO – Feed