Generate Invisible Primary Key (GIPK) in MySQL 8.0

https://lh3.googleusercontent.com/kzS5w9VmGnAz2f5H70FnI8TiFbepEQM4940WQlEPz5R6nWEZ8axWFMRPIGL4Z2pKnFTvadFdtG4KLYt5MXGzIoUZZZhjAPD5K7SPdB-9ZXHiPM9yzts2NrcKjoI0s6kJ2kAt05Mx1ix95ACq6LHOTOsibcOjAiiDxkVblglYOOetWRbD0SXdaGTJR63eug

The primary key is the hero of a row: it underpins most operations performed on a table and brings a number of benefits.

Every DBA knows the importance of a primary key in a table and how to handle it.

  1. Notable features of having a primary key:
  2. Requirements:
  3. Enabling GIPK:
  4. Handling GIPK:
  5. Benchmarking
    1. Data loading :
  6. Limitations:
  7. Conclusion:

Notable features of having a primary key:

  1. Performing online alters or archival safely
  2. Faster replication (row-based replication)
  3. Table partitioning
  4. A primary key is mandatory in cluster environments (InnoDB Cluster / Galera / Percona XtraDB Cluster).
  5. Better query performance

From MySQL 8.0.30, there is no need to maintain a separate primary key column in the table. We have the sql_generate_invisible_primary_key (GIPK) feature, controlled by a dynamic global variable, which can be enabled online without any downtime.

When this variable is enabled, an invisible primary key column is automatically added to any table that is created without a primary key. The default name of the auto-generated column is my_row_id.

The main advantage of this feature is that it eases cluster migrations and speeds up replication synchronization.

The generated table structure is replicated to the replicas only with ROW-based replication.

Requirements:

Binlog format ROW
MySQL Version >= 8.0.30
Engine InnoDB
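
Before enabling GIPK, these prerequisites can be verified from a connected mysql session. A quick check along these lines (the `mydbops` schema name follows the examples later in this post):

```sql
-- Check server version (must be >= 8.0.30) and binlog format (must be ROW).
SELECT @@version        AS server_version,
       @@binlog_format  AS binlog_format;

-- Confirm the target tables use the InnoDB engine.
SELECT TABLE_NAME, ENGINE
  FROM INFORMATION_SCHEMA.TABLES
 WHERE TABLE_SCHEMA = 'mydbops';
```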

Enabling GIPK:

It is a dynamic global variable, so we can enable it without any downtime. By default, it is OFF.

+------------------------------------+-------+
| Variable_name                      | Value |
+------------------------------------+-------+
| sql_generate_invisible_primary_key | OFF   |
+------------------------------------+-------+

mysql> set global sql_generate_invisible_primary_key=1;
Query OK, 0 rows affected (0.00 sec)

+------------------------------------+-------+
| Variable_name                      | Value |
+------------------------------------+-------+
| sql_generate_invisible_primary_key | ON    |
+------------------------------------+-------+
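
Note that SET GLOBAL does not survive a server restart. To make the setting permanent, either add it to my.cnf or use SET PERSIST; a sketch (SET PERSIST writes the value to mysqld-auto.cnf):

```sql
-- Persist the setting across server restarts (stored in mysqld-auto.cnf).
SET PERSIST sql_generate_invisible_primary_key = ON;

-- The variable also has session scope (requires the appropriate privilege),
-- so it can be tried out on a single connection first:
SET SESSION sql_generate_invisible_primary_key = ON;
```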

Working with GIPK:

mysql> CREATE TABLE `gipk` (`name` varchar(50) DEFAULT NULL,  `number` int DEFAULT NULL ) ENGINE=InnoDB;
Query OK, 0 rows affected (0.07 sec)

mysql>  show create table gipk\G
*************************** 1. row ***************************
       Table: gipk
Create Table: CREATE TABLE `gipk` (
  `my_row_id` bigint unsigned NOT NULL AUTO_INCREMENT /*!80023 INVISIBLE */,
  `name` varchar(50) DEFAULT NULL,
  `number` int DEFAULT NULL,
  PRIMARY KEY (`my_row_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci

1 row in set (0.00 sec)

After enabling GIPK, reconnect (start a new session) for the setting to take effect.

In the above example, the CREATE statement mentions only two columns, name and number, but MySQL has automatically added one more column: the invisible primary key my_row_id.

We can make the column visible or invisible based on our use case; a simple ALTER statement switches between the two states.

mysql> ALTER TABLE gipk ALTER COLUMN my_row_id SET VISIBLE;
Query OK, 0 rows affected (0.00 sec)
Records: 0  Duplicates: 0  Warnings: 0

mysql>  show create table gipk\G
*************************** 1. row ***************************
       Table: gipk
Create Table: CREATE TABLE `gipk` (
  `my_row_id` bigint unsigned NOT NULL AUTO_INCREMENT,
  `name` varchar(50) DEFAULT NULL,
  `number` int DEFAULT NULL,
  PRIMARY KEY (`my_row_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci

1 row in set (0.00 sec)

Even though it is an auto-generated column, it is visible in the SHOW CREATE TABLE output and in information_schema.columns as well.

mysql>  show create table gipk\G
*************************** 1. row ***************************
       Table: gipk
Create Table: CREATE TABLE `gipk` (
  `my_row_id` bigint unsigned NOT NULL AUTO_INCREMENT /*!80023 INVISIBLE */,
  `name` varchar(50) DEFAULT NULL,
  `number` int DEFAULT NULL,
  PRIMARY KEY (`my_row_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci

1 row in set (0.00 sec)
mysql> SELECT COLUMN_NAME,DATA_TYPE FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_NAME = "gipk";
+-------------+-----------+
| COLUMN_NAME | DATA_TYPE |
+-------------+-----------+
| my_row_id   | bigint    |
| name        | varchar   |
| number      | int       |
+-------------+-----------+
3 rows in set (0.00 sec)

By turning off show_gipk_in_create_table_and_information_schema, we can hide it completely: the column details disappear from the SHOW CREATE TABLE output as well as from information_schema.columns.

It is a dynamic variable, and it is ON by default.

+--------------------------------------------------+-------+
| Variable_name                                    | Value |
+--------------------------------------------------+-------+
| show_gipk_in_create_table_and_information_schema | ON    |
+--------------------------------------------------+-------+

mysql> set global show_gipk_in_create_table_and_information_schema=0;
Query OK, 0 rows affected (0.00 sec)

+--------------------------------------------------+-------+
| Variable_name                                    | Value |
+--------------------------------------------------+-------+
| show_gipk_in_create_table_and_information_schema | OFF   |
+--------------------------------------------------+-------+
mysql> SELECT COLUMN_NAME,DATA_TYPE FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_NAME = "gipk";
+-------------+-----------+
| COLUMN_NAME | DATA_TYPE |
+-------------+-----------+
| name        | varchar   |
| number      | int       |
+-------------+-----------+
2 rows in set (0.00 sec)

mysql> show create table gipk\G
*************************** 1. row ***************************
       Table: gipk
Create Table: CREATE TABLE `gipk` (
  `name` varchar(50) DEFAULT NULL,
  `number` int DEFAULT NULL
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci

1 row in set (0.00 sec)

Now the column is completely invisible.
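
Hidden from metadata does not mean gone: the column still exists and can be queried by naming it explicitly. Hiding only affects SELECT * expansion and the metadata views. A quick sketch against the gipk table above:

```sql
-- SELECT * does not expand invisible columns, so my_row_id is absent here.
SELECT * FROM gipk LIMIT 1;

-- Naming the column explicitly still returns it.
SELECT my_row_id, name, number FROM gipk LIMIT 1;
```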

Handling GIPK:

We can’t rename the column while it is in the invisible state.

mysql> ALTER TABLE gipk RENAME COLUMN my_row_id to id;

ERROR 4110 (HY000): Altering generated invisible primary key column 'my_row_id' is not allowed.

To achieve this, we first need to make the column visible, and then we can rename it to whatever name suits us.

mysql>  show create table gipk\G
*************************** 1. row ***************************
       Table: gipk
Create Table: CREATE TABLE `gipk` (
  `my_row_id` bigint unsigned NOT NULL AUTO_INCREMENT,
  `name` varchar(50) DEFAULT NULL,
  `number` int DEFAULT NULL,
  PRIMARY KEY (`my_row_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci

1 row in set (0.00 sec)

mysql> ALTER TABLE gipk RENAME COLUMN my_row_id to id;
Query OK, 0 rows affected (0.02 sec)
Records: 0  Duplicates: 0  Warnings: 0

mysql> show create table gipk\G
*************************** 1. row ***************************
       Table: gipk
Create Table: CREATE TABLE `gipk` (
  `id` bigint unsigned NOT NULL AUTO_INCREMENT,
  `name` varchar(50) DEFAULT NULL,
  `number` int DEFAULT NULL,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci

1 row in set (0.00 sec)

Benchmarking

We ran a benchmark to identify whether any issues arise after enabling GIPK.

Table structure:

mysql> show create table gipk\G
*************************** 1. row ***************************
       Table: gipk
Create Table: CREATE TABLE `gipk` (
  `my_row_id` bigint unsigned NOT NULL AUTO_INCREMENT /*!80023 INVISIBLE */,
  `id` int unsigned NOT NULL,
  `k` int unsigned NOT NULL DEFAULT '0',
  `c` char(120) NOT NULL DEFAULT '',
  `pad` char(60) NOT NULL DEFAULT '',
  PRIMARY KEY (`my_row_id`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1

1 row in set (0.01 sec)

mysql> show create table non_gipk\G
*************************** 1. row ***************************
       Table: non_gipk
Create Table: CREATE TABLE `non_gipk` (
  `id` int unsigned NOT NULL,
  `k` int unsigned NOT NULL DEFAULT '0',
  `c` char(120) NOT NULL DEFAULT '',
  `pad` char(60) NOT NULL DEFAULT ''
) ENGINE=InnoDB DEFAULT CHARSET=latin1

1 row in set (0.00 sec)

Data loading:

Table size:

+----------+----------+------------+
| Database | Table    | Size in GB |
+----------+----------+------------+
| mydbops  | non_gipk |      20.76 |
+----------+----------+------------+

+----------+-------+---------------+
| Database | Table | Table size GB |
+----------+-------+---------------+
| mydbops  | gipk  |         21.83 |
+----------+-------+---------------+
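
The sizes above can be obtained with a query along these lines against information_schema (rounded to GB; mydbops is the schema used in this test):

```sql
-- Approximate on-disk size (data + indexes) of the benchmark tables.
SELECT TABLE_SCHEMA AS `Database`,
       TABLE_NAME   AS `Table`,
       ROUND((DATA_LENGTH + INDEX_LENGTH) / 1024 / 1024 / 1024, 2) AS `Size in GB`
  FROM INFORMATION_SCHEMA.TABLES
 WHERE TABLE_SCHEMA = 'mydbops'
   AND TABLE_NAME IN ('gipk', 'non_gipk');
```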

We created one table with GIPK and one without a primary key, and used mysql_random_data_load to populate them. Surprisingly, data loading took about the same time with GIPK as without a primary key, so bulk loading incurs little extra latency even with GIPK enabled.

Full table scan:

mysql> select * from gipk order by id limit 1;
+----+------------+------------------------------------------------------------------------+------------------------------+
| id | k          | c                                                                      | pad                          |
+----+------------+------------------------------------------------------------------------+------------------------------+
|  9 | 1542554247 | fugit sapiente consectetur ab non repudiandae ducimus laboriosam quas! | dolore veritatis asperiores. |
+----+------------+------------------------------------------------------------------------+------------------------------+
1 row in set (2 min 56.14 sec)

mysql> select * from non_gipk order by id limit 1;
+----+------------+---------------------------------------+--------------------------------------+
| id | k          | c                                     | pad                                  |
+----+------------+---------------------------------------+--------------------------------------+
|  9 | 1542554247 | voluptas facere sed dolore iure nisi. | at ipsam id voluptatem et excepturi. |
+----+------------+---------------------------------------+--------------------------------------+
1 row in set (4 min 22.99 sec)

We executed the same full-table query on both the GIPK table and the table without a primary key, and the GIPK table showed a clear improvement: execution time dropped by roughly a third (from about 4 minutes 23 seconds down to 2 minutes 56 seconds).

Online alter and archival:

The Percona Toolkit plays a vital role in performing safer online alters and archiving data chunk by chunk. For Percona Toolkit operations (pt-online-schema-change / pt-archiver), the basic requirement is a primary key; if the table has none, the tools refuse to work on it.

The advantage of enabling GIPK is that we get an invisible primary key, and with it the Percona tools can perform operations such as online alters or archival.

[root@localhost mydbopslabs]# pt-archiver --source h=localhost,D=mydbops,t=non_gipk,u=root,p='*****' --where "1=1" --limit 5000 --progress 5000 --statistics --no-check-charset --commit-each --bulk-delete --purge --file '/home/mydbopslabs/non_gipk_%d_%m_%Y_%H_%m_%s.csv' --output-format=csv --dry-run
perl: warning: Setting locale failed.
perl: warning: Please check that your locale settings:
	LANGUAGE = (unset),
	LC_ALL = (unset),
	LC_CTYPE = "UTF-8",
	LANG = "en_US.UTF-8"
    are supported and installed on your system.
perl: warning: Falling back to the standard locale ("C").
Cannot find an ascendable index in table at /bin/pt-archiver line 3261.

[root@localhost mydbopslabs]# pt-archiver --source h=localhost,D=mydbops,t=gipk,u=root,p='******' --where "1=1" --limit 5000 --progress 5000 --statistics --no-check-charset --commit-each --bulk-delete --purge --file '/home/mydbopslabs/non_gipk_%d_%m_%Y_%H_%m_%s.csv' --output-format=csv --dry-run
/home/mydbopslabs/non_gipk_06_10_2022_02_10_05.csv
SELECT /*!40001 SQL_NO_CACHE */ `my_row_id`,`id`,`k`,`c`,`pad` FROM `mydbops`.`gipk` FORCE INDEX(`PRIMARY`) WHERE (1=1) AND (`my_row_id` < '100000000') ORDER BY `my_row_id` LIMIT 5000
SELECT /*!40001 SQL_NO_CACHE */ `my_row_id`,`id`,`k`,`c`,`pad` FROM `mydbops`.`gipk` FORCE INDEX(`PRIMARY`) WHERE (1=1) AND (`my_row_id` < '100000000') AND ((`my_row_id` >= ?)) ORDER BY `my_row_id` LIMIT 5000
DELETE FROM `mydbops`.`gipk` WHERE (((`my_row_id` >= ?))) AND (((`my_row_id` <= ?))) AND (1=1) LIMIT 5000

While performing archival, the run failed on the table without a primary key but succeeded on the GIPK table, since it has the invisible primary key.

[root@localhost mydbopslabs]# pt-online-schema-change h=localhost,D=mydbops,t=non_gipk --user='root' --password='*****' --no-check-alter  --critical-load "Threads_running=900" --recursion-method=none --max-load  "Threads_running=1000" --no-check-plan --alter "engine=innodb" --dry-run
# A software update is available:
Operation, tries, wait:
  analyze_table, 10, 1
  copy_rows, 10, 0.25
  create_triggers, 10, 1
  drop_triggers, 10, 1
  swap_tables, 10, 1
  update_foreign_keys, 10, 1
Starting a dry run.  `mydbops`.`non_gipk` will not be altered.  Specify --execute instead of --dry-run to alter the table.
Creating new table...
Created new table mydbops._non_gipk_new OK.
Altering new table...
Altered `mydbops`.`_non_gipk_new` OK.
The new table `mydbops`.`_non_gipk_new` does not have a PRIMARY KEY or a unique index required for the DELETE trigger.
Please check you have at least one UNIQUE and NOT NULLABLE index.
2022-10-06T02:48:59 Dropping new table...
2022-10-06T02:48:59 Dropped new table OK.
Dry run complete.  `mydbops`.`non_gipk` was not altered.
[root@localhost mydbopslabs]# pt-online-schema-change h=localhost,D=mydbops,t=gipk --user='root' --password='*****' --no-check-alter  --critical-load "Threads_running=900" --recursion-method=none --max-load  "Threads_running=1000" --no-check-plan --alter "engine=innodb" --dry-run
Operation, tries, wait:
  analyze_table, 10, 1
  copy_rows, 10, 0.25
  create_triggers, 10, 1
  drop_triggers, 10, 1
  swap_tables, 10, 1
  update_foreign_keys, 10, 1
Starting a dry run.  `mydbops`.`gipk` will not be altered.  Specify --execute instead of --dry-run to alter the table.
Creating new table...
Created new table mydbops._gipk_new OK.
Altering new table...
Altered `mydbops`.`_gipk_new` OK.
Not creating triggers because this is a dry run.
Not copying rows because this is a dry run.
Not swapping tables because this is a dry run.
Not dropping old table because this is a dry run.
Not dropping triggers because this is a dry run.
2022-10-06T02:49:15 Dropping new table...
2022-10-06T02:49:15 Dropped new table OK.
Dry run complete. `mydbops`.`gipk` was not altered.

Likewise, the online alter failed on the table without a primary key but succeeded on the GIPK table, thanks to its invisible primary key.

Limitations:

  • GIPK generation fails if the CREATE TABLE statement contains an AUTO_INCREMENT column.
  • It supports only the InnoDB engine.
  • GIPK supports only row-based replication.
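
For instance, the first limitation can be seen directly. A sketch with GIPK enabled (the exact error text may vary by version):

```sql
-- With sql_generate_invisible_primary_key = ON, a keyless table containing
-- an AUTO_INCREMENT column is rejected instead of receiving a generated
-- my_row_id primary key:
CREATE TABLE gipk_fail (
  seq  int NOT NULL AUTO_INCREMENT,
  name varchar(50) DEFAULT NULL
) ENGINE = InnoDB;
-- This statement fails with an error rather than creating the table.
```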

Conclusion:

There is no more need to worry about creating and maintaining a primary key separately. GIPK also solves the migration problem for InnoDB Cluster, where a primary key is mandatory. By enabling sql_generate_invisible_primary_key, we now have an automatic primary key in place as a lifesaver.

Planet MySQL

15 Best MySQL GUI Clients for macOS

https://blog.devart.com/wp-content/uploads/2022/11/macOS-dbForge-Studio-for-MySQL.png

Well, we can’t argue that Windows is the key platform for database development and management software—but what if you are a Mac user? Who said you can’t have equal opportunities to set up easy daily work with, for instance, MySQL databases? Simply take a closer look and you’ll see an abundance of top-tier MySQL tools […]

The post 15 Best MySQL GUI Clients for macOS appeared first on Devart Blog.

Planet MySQL

Introduction to Laravel Testing

https://laravelnews.s3.amazonaws.com/images/featured-testing-purple.jpg

Building Laravel projects that are well-tested with a high test automation coverage can be a lot of work. At the same time it is realistically the only way for smaller teams without dedicated QA teams to continue adding more features with confidence without the constant risk of breaking existing things. In this article I want to provide an introduction to testing Laravel projects that covers all bases.

Our team is building a test management tool, so naturally our goal is to have a high test coverage with high quality releases ourselves (the saying “The shoemaker always wears the worst shoes” does not apply here). Our users are usually software testers, so they expect a lot from us when it comes to software quality and usability (they are good at finding and breaking even the smallest issues). So we invest a lot of work into our test strategy and below I will give an overview of how we test Laravel projects.

Backend Tests with PHPUnit

The easiest way to get started with testing in Laravel is to build unit tests using PHPUnit. Laravel already comes pre-configured to run unit tests out of the box and there’s great documentation on setting up and running your tests. Before you get started though, I would recommend thinking about what exactly you can and should test in your backend code, as there are different types of tests you can build. We use PHPUnit to implement different, separate test suites that focus on different aspects of our code.

  • Unit tests for helpers and libraries: Code that is completely independent of the rest of the app can be easily tested in small, discrete unit tests. Think about helpers that format data such as dates/times, convert color values or libraries that generate specific export formats.

  • Queue jobs and commands: Background jobs and commands often perform important operations on data such as archiving older entries that are no longer needed, or generating scheduled reports. Having automated tests for jobs and commands is thus quite critical, so you should have separate test suites.

  • Controllers & models: The tests for your API, models and views of your app will likely be one of your largest test suites. Laravel makes it easy to build fast API & controller tests to cover everything.

  • Database migrations: If you are improving your app and adding new features over time, you often need to change the database schema and possibly migrate existing data. This can be tricky to get right without solid automated tests that consider all edge cases, so make sure to take your time to build these.

The good news is that building these tests also usually makes development much easier. For example, our team is often building the backend tests to develop and try the backend functionality first before any UI is added in a later step. It would be difficult to build new functionality without writing these tests first. We use Testmo ourselves for test automation reporting so we can track all our backend tests.

UI Browser Tests with Dusk

We write our end-to-end browser UI tests with Laravel Dusk. Laravel Dusk uses Selenium under the hood, and you can use it to easily test your application automatically against Chrome, Firefox and Edge (plus Safari with some limitations). You could alternatively use a generic browser automation framework such as Cypress or Playwright. But since Dusk gives you access to your app’s database models to set up test data and lets you use the same asserts as your backend tests, I would recommend sticking to PHP-based browser testing.

Here’s a word of warning: writing Laravel Dusk tests (and browser-based tests in general) can be very time consuming (and sometimes frustrating). We have an extensive library of Dusk tests for Testmo that automatically test every feature we add to Testmo. But you don’t necessarily have to do the same. Any browser tests are better than having no tests at all. So if you decide to only test certain happy paths in your app, or build some initial smoke tests to click through the most important features, that’s a great start.

I’ve written about our tips for Laravel Dusk browser testing here before, so you might find this useful.

Frontend Testing

We covered backend testing and UI browser-based testing above. You might be wondering why we now also need additional, separate frontend tests to test our JavaScript code. The reason is quite simple: nowadays more and more code runs in the browser, so we also want to have a way to run unit tests in JavaScript to test such code in addition to our end-to-end UI tests.

Here’s a simple example. In Testmo we allow users to import existing data for test case management. Customers can import data from Excel or migrate from older, legacy products and take their test cases with them.

Customers might have huge existing test case libraries they want to import. To speed up importing and parsing the data, we are actually processing the import files in the browser before submitting them to Testmo. To do this, we have built import parsers for different formats such as CSV/Excel files. It would be difficult and slow to test such code with pure Selenium-based tests, so we have additional frontend tests for our JavaScript libraries and helpers.

There are various options for writing and running JavaScript tests. We ourselves use Mocha/Chai for our tests and have been quite happy with this.

Laravel Testing CI Integration

Tests are only useful if you run them regularly. The best way to ensure that all tests are run when you make changes to your app is to integrate them with your CI pipeline. This is usually implemented with popular platforms such as GitHub Actions, GitLab CI/CD or CircleCI. This has a couple of advantages:

  • If you get in the habit of only deploying builds that pass all the tests, you are automatically motivated to build better (and faster) tests
  • Running your tests in your CI environment usually helps find flaky tests that fail due to timing issues. This is often the case when you are new to writing browser tests, so it’s a good idea to learn this early.
  • You can more easily set up and run your tests with parallel test jobs to significantly speed up test execution. For example, for Testmo, running all our tests sequentially would take hours. With parallel testing we can run all our test suites in less than 30 minutes.

You might also find my previous article on integrating Laravel Dusk with GitHub Actions useful.

Manual & Exploratory Testing

Last but not least, we also still do a fair amount of exploratory testing and manual testing for new features, or as smoke tests for new builds and releases. If you have a dedicated testing team, then using a tool such as Testmo is usually important so you plan, manage, assign and track all your tests. If you are a solo developer or a development team without dedicated testers, then starting with spreadsheets is usually a good alternative.

If I could give only one piece of advice on building better Laravel apps, it would be to start thinking about testing early. It is much easier to build tests in parallel with development than to try to fix things later. Without extensive automated tests, more complex apps quickly become difficult to maintain, so the initial time spent building your test suites will save you a lot of time later.

This guest posting was written by Dennis Gurock, one of the founders of Testmo. Testmo is built using Laravel and helps teams manage all their software tests in one modern platform. If you are not familiar with QA tools, Testmo recently published various tool guides to get started:

Laravel News

Are You Nuts? Know your Fishing Knots! – Arbor Knot

https://www.alloutdoor.com/wp-content/uploads/2022/11/20221124_214308-e1669362336509.jpg

Today we’re going to be covering the Arbor Knot, a very simple knot with only one purpose in fishing. The only reason you will ever tie an Arbor Knot is that you are spooling up a fishing reel with line. As the name implies, it is used to attach line to the “arbor,” which in this case is the center of the fishing reel spool. It is based on a noose knot, so pulling on the mainline tightens it up. It consists of two overhand knots, so if you’re capable of tying your shoes, you can tie the Arbor Knot. When spooling up your reel with monofilament fishing line, you don’t need to use any sort of tape for grip on the spool. But if you are spooling up braided fishing line, you need to put down some sort of backing for grip on the reel or the braided line can slip. For this you can use a small piece of electrical tape; some lines even give you a small sticker to use for just that purpose.

Are You Nuts? Know your Fishing Knots! – Arbor Knot
The plastic “arbor” is a stand-in for a fishing reel spool

Step 1

The first step is getting your main line and running it around the spool, “arbor” of the reel.

Step 2

The next thing to do is, using the tag end of the main line, make an overhand knot around the main line, then cinch it down.

Step 3

After you cinch down the first overhand knot around the main line, use the tag end to make another overhand knot with just the tag of the line. Cinch that second overhand knot down.

Step 4

Once the overhand knots are done, you can pull on the mainline to tighten the Arbor Knot against the spool, then clip the tag end of the line short and you are ready to spool up your fishing reel.

The post Are You Nuts? Know your Fishing Knots! – Arbor Knot appeared first on AllOutdoor.com.

AllOutdoor.com

Dropping an Egg from Space

https://theawesomer.com/photos/2022/11/space_egg_drop_t.jpg

Dropping an Egg from Space

Link

For his latest experiment, rocket scientist and entertainer Mark Rober teamed up with Joe Barnard of BPS Space to launch an egg into space to see if they could catch it safely on a mattress when it dropped back to Earth. But the project proved far more challenging than they thought, requiring huge amounts of trial and error.

The Awesomer

MySQL Variables – Definition and Examples

MySQL variables store data, label data, and let developers create more accurate and efficient code by turning long and complicated strings of characters into one simple variable. This article will explore user-defined variables. User-defined variables let us execute various data sets with one command and use this data whenever needed. Mastering variables in MySQL is […]

The post MySQL Variables – Definition and Examples appeared first on Devart Blog.

Planet MySQL