MySQL InnoDB Cluster GA is Available Now!

The MySQL Development Team is happy to announce the first GA release of InnoDB Cluster, our integrated, native, full-stack HA solution for MySQL. You can see highlights of the changes and improvements made since the RC release here, and you can download the GA packages from our MySQL APT (Ubuntu, Debian) and YUM (Redhat, OEL, Fedora) repositories or from dev.mysql.com.…

via Planet MySQL
MySQL InnoDB Cluster GA is Available Now!

Inmates Stashed Two Homebrew Computers in Ohio Prison’s Ceiling, Used Them to Do More Crimes

The ceiling area once housing the computers. Image: Ohio Inspector General

Adam Johnston and Scott Spriggs may well go down as Ohio’s cleverest inmates.

The pair were incarcerated at Marion Correctional Institution, a low-security, 2,500-capacity facility which used inmate labor to recycle old computers as part of the non-profit RET3 program. Spriggs and Johnston managed to squirrel away dozens of RET3 parts and construct two new machines inside MCI.

According to the 50-page Ohio Inspector General report, the fully functional computers were “hidden on a plywood board in the ceiling above a closet” and subsequently “connected to [Ohio Department of Rehabilitation and Correction’s] computer network.” But wait—there’s more.

Here’s how far they had to carry the stolen parts. Image: Ohio Inspector General

Somehow Spriggs and Johnston were able to run ethernet cables through the ceiling and down to the network switch, where they were connected to port 16, and the inmates were able to obtain internet access via credentials belonging to Ray Canterbury, a retired prison employee who now works for ODRC as a contractor. Once connected, they were able to download articles on “home-made drugs, plastics, explosives, and credit cards.” Johnston, according to the report, also “accessed an article online from the Bloomberg.com site detailing how to submit fraudulent tax returns and have the refunds wired to debit cards,” and stole the identity of another inmate and used his name and social security number to apply for five credit cards.

But wait, there’s more.

As one does with an internet connection, the inmates used their unfettered access to download a shitload of porn, ferrying it to inmates via a thumb drive. But the inmate caught with said thumb drive told investigators that “it was not just pornographic movies. It was like the new releases, TV series” as well as music.

Oh yes, there’s still more.

On these two homebrew machines investigators found a litany of software useful for hacking and encryption, as well as brute force password crackers, an email spamming program, and a Java-based tool used to commit man-in-the-middle attacks. Likely this cornucopia of illicit programs was how the pair were able to issue “passes for inmates to gain access to multiple areas within MCI” and gain access to “unauthorized inmate records including disciplinary records, sentencing data, and inmate locations.” Ho-lee fuck.

Given that oversight at MCI is clearly lacking, the only way these two masterminds were caught at all was an automated bandwidth-usage alert. Remember Ray Canterbury? An automated message informed MCI staff that on Friday July 3, 2015 “a computer operating through the ODRC computer network had exceeded a daily internet usage threshold.” The credentials were tied to Canterbury, who only worked Monday through Thursday. Considering the level of sneakiness required to build computers from scratch, run network cables, and steal someone’s identity, not looking up employee schedules is a spectacular own-goal.

The ring of prisoners involved with this data heist have been shipped off to other facilities, and MCI is shouldering the blame for not only allowing it to happen, but failing to notify the Ohio State Highway Patrol as regulations apparently dictate.

[Watchdog.ohio.gov, Cleveland.com]

via Gizmodo
Inmates Stashed Two Homebrew Computers in Ohio Prison’s Ceiling, Used Them to Do More Crimes

Investigation Finds Inmates Built Computers, Hid Them In Prison Ceiling

An anonymous reader quotes a report from WRGB: The discovery of two working computers hidden in a ceiling at the Marion Correctional Institution prompted an investigation by the state into how inmates got access. In late July 2015, staff at the prison discovered the computers hidden on a plywood board in the ceiling above a training room closet. The computers were also connected to the Ohio Department of Rehabilitation and Correction’s network. Authorities say they were first tipped off to a possible problem in July, when their computer network support team got an alert that a computer "exceeded a daily internet usage threshold." When they checked the login being used, they discovered an employee’s credentials were being used on days he wasn’t scheduled to work. That’s when they tracked down where the connection was coming from and alerted Marion Correctional Institution of a possible problem. Investigators say there was lax supervision at the prison, which gave inmates the ability to build computers from parts, get them through security checks, and hide them in the ceiling. The inmates were also able to run cabling, connecting the computers to the prison’s network. Furthermore, "investigators found an inmate used the computers to steal the identity of another inmate, and then submit credit card applications, and commit tax fraud," reports WRGB. "They also found inmates used the computers to create security clearance passes that gave them access to restricted areas."




Read more of this story at Slashdot.

via Slashdot
Investigation Finds Inmates Built Computers, Hid Them In Prison Ceiling

‘Star Wars Battlefront II’ trailer leaks out a few days early

EA promised to reveal a trailer for its Star Wars Battlefront sequel on April 15th during the Star Wars Celebration event, but it appears to have popped up online a bit early. The 30-second teaser clip shows "game engine footage," with hints at what we can expect from both its single- and multiplayer experience. It appears that the story mode will have players taking on the role of a young woman fighting on the side of the Empire in a post-Return of the Jedi story line attempting to "avenge your emperor."

Still, most people will probably spend much more time in the multiplayer section, which promises to feature action "across all eras," as clips featuring Darth Maul, Yoda, Rey and Kylo Ren flash by. The final shot (shown above) highlights the first two and what we assume is your single-player character — expect to find out more about Star Wars Battlefront II over the next few days.

Via: NeoGAF, r/Battlefront

Source: Vimeo

via Engadget
‘Star Wars Battlefront II’ trailer leaks out a few days early

Updated – Full Restore of a MySQL or MariaDB Galera Cluster from Backup

Performing regular backups of your database cluster is imperative for high availability and disaster recovery. If for any reason you lost your entire cluster and had to do a full restore from backup, you would need a reliable and up-to-date backup to start from.

Best Practices for Backups

Some recommendations to consider for a good scheduled backup regime:

  • You should be able to completely recover from a catastrophic failure using at least the two previous full backups, in case the most recent one is damaged, lost, or corrupt.
  • Your backup regime should contain at least one full backup within a chosen cycle, normally weekly.
  • Store backups away from the current data location, preferably off site.
  • Use a mixture of mysqldump and Xtrabackup for extra safety, rather than relying on a single method.
  • Test-restore your backups on a regular basis, e.g. every two months.

A weekly full backup combined with daily incremental backup is normally enough. Keeping a number of backups for a period of time is always a good plan, maybe keep each weekly backup for one month. This allows you to recover an older database in case of emergencies or if for some reason you have local backup file corruption.

mysqldump or Xtrabackup

mysqldump is very likely the most popular way of backing up MySQL. It does a logical backup of the data, reading from each table using SQL statements then exporting the data into text files. Restoration of a mysqldump is as easy as creating the dump file. The main drawbacks are that it is very slow for large databases, it is not ‘hot’ and it wipes out the InnoDB buffer pool.
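
As a point of reference, a minimal consistent dump of all schemas might look like this (the output file name is a placeholder):

$ mysqldump -uroot -p --single-transaction --routines --triggers --all-databases > full_dump.sql

The --single-transaction option takes a consistent InnoDB snapshot without locking tables for the duration of the dump.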

Xtrabackup performs hot backups, does not lock the database during the backup, and is generally faster. Hot backups are important for high availability, as they run without blocking the application. This is also an important factor when used with Galera, as Galera relies on synchronous replication. However, restoring an Xtrabackup manually can be a little tricky.
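
For comparison, a rough sketch of a full plus incremental run with Percona XtraBackup (target directories are placeholders):

$ xtrabackup --backup --user=root --password --target-dir=/root/backups/full
$ xtrabackup --backup --user=root --password --incremental-basedir=/root/backups/full --target-dir=/root/backups/inc1

The incremental run copies only the pages changed since the backup named in --incremental-basedir, which is why a restore must "prepare" the full backup and each increment in order.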

ClusterControl supports the scheduling of both mysqldump and Xtrabackup (full and incremental), as well as the backup restoration right from the UI.

Full Restore from Backup

In this post, we will show you how to restore Xtrabackup (full + incremental) onto an empty cluster running on MariaDB Galera Cluster. These steps should also work on Percona XtraDB Cluster or Galera Cluster for MySQL from Codership.

In our original cluster, we had a full xtrabackup scheduled daily, with incremental backups created every hour. The backups are stored on ClusterControl as shown in the following screenshot:

Now, let’s assume we have lost our original cluster and have to do a full restore onto a new cluster. The steps include:

  1. Set up a new ClusterControl server.
  2. Set up a new MariaDB Cluster.
  3. Export the backup records and files to the new ClusterControl server.
  4. Start the restoration process.
  5. Start the remaining nodes.

The following diagram illustrates our architecture for this exercise:

Step 1 – Set up New MariaDB Cluster

Install ClusterControl and deploy a new MariaDB Cluster. Go to ClusterControl -> Deploy -> Deploy Database Cluster -> MySQL Galera and specify the required information in the deployment dialog:

Click on the Deploy button and start the deployment. Since we had only one cluster on the old server, the cluster ID should be identical (cluster ID: 1) in this new instance.


Step 2 – Export and import the backup files

Once the cluster is deployed, we will have to import the backups from the old ClusterControl server into the new one. First, export the content of cmon.backup_records to dump files. Since the old cluster ID and the new one are identical, we just need to modify the dump file with the new IP address and import it into the new ClusterControl node. If the cluster ID is different, then you have to change the “cid” value accordingly inside the dump files before importing into the CMON DB on the new node. Also, it is easier to keep the same backup storage location as on the old server, so the new ClusterControl can locate the backup files on the new server.

On the old ClusterControl server, export the backup_records table into dump files:

$ mysqldump -uroot -p --single-transaction --no-create-info cmon backup_records > backup_records.sql

Then, perform remote copy of the backup files from the old server into the new ClusterControl server:

$ scp -r /root/backups 192.168.55.150:/root/
$ scp ~/backup_records.sql 192.168.55.150:~

Next is to modify the dump files to reflect the new ClusterControl server IP address. Don’t forget to escape the dot in the IP address:

$ sed -i "s/192\.168\.55\.170/192\.168\.55\.150/g" backup_records.sql

On the new ClusterControl server, import the dump files:

$ mysql -uroot -p cmon < backup_records.sql
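
Had the cluster IDs differed (say, the new cluster got ID 2), a minimal adjustment after the import, using the “cid” column mentioned earlier, would be:

UPDATE cmon.backup_records SET cid = 2 WHERE cid = 1;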

Verify that the backup list is correct in the new ClusterControl server:

As you can see, all occurrences of the previous IP address (192.168.55.170) have been replaced by the new IP address (192.168.55.150). Now we are ready to perform the restoration on the new server.

Step 3 – Perform the Restoration

Performing restoration through the ClusterControl UI is a simple point-and-click step. Choose which backup to restore and click on the “Restore” button. We are going to restore the latest incremental backup available (Backup: 9). Click on the “Restore” button just below the backup name and you will be presented with the following pre-restoration dialog:

Looks like the backup size is pretty small (165.6 kB). It doesn’t really matter, because ClusterControl will prepare all incremental backups grouped under Backup Set 6, which holds the full Xtrabackup. You also have several restoration options:

  • Restore backup on – Choose the node to restore the backup on.
  • Tmp Dir – The directory on the local ClusterControl server used as temporary storage during backup preparation. It must be at least as big as the estimated MySQL data directory.
  • Bootstrap cluster from the restored node – Since this is a new cluster, we are going to toggle this ON so ClusterControl will bootstrap the cluster automatically after the restoration succeeds.
  • Make a copy of the datadir before restoring the backup – If the restored data is corrupted or not what you expected, you will have a backup of the previous MySQL data directory. Since this is a new cluster, we are going to ignore this one.

Percona Xtrabackup restoration will cause the cluster to be stopped. ClusterControl will:

  1. Stop all nodes in the cluster.
  2. Restore the backup on the selected node.
  3. Bootstrap the selected node.

To see the restoration progress, go to Activity -> Jobs -> Restore Backup and click on the “Full Job Details” button. You should see something like this:

One important thing you need to do is monitor the output of the MySQL error log on the target node (192.168.55.151) during the restoration process. After the restoration completes and during the bootstrapping process, you should see the following lines start to appear:

Version: '10.1.22-MariaDB' socket: '/var/lib/mysql/mysql.sock' port: 3306 MariaDB Server
2017-04-07 18:03:51 140608191986432 [Warning] Access denied for user 'cmon'@'192.168.55.150' (using password: YES)
2017-04-07 18:03:51 140608191986432 [Warning] Access denied for user 'cmon'@'192.168.55.150' (using password: YES)
2017-04-07 18:03:51 140608191986432 [Warning] Access denied for user 'cmon'@'192.168.55.150' (using password: YES)
2017-04-07 18:03:52 140608191986432 [Warning] Access denied for user 'cmon'@'192.168.55.150' (using password: YES)
2017-04-07 18:03:53 140608191986432 [Warning] Access denied for user 'cmon'@'192.168.55.150' (using password: YES)
2017-04-07 18:03:54 140608191986432 [Warning] Access denied for user 'cmon'@'192.168.55.150' (using password: YES)
2017-04-07 18:03:55 140608191986432 [Warning] Access denied for user 'cmon'@'192.168.55.150' (using password: YES)

Don’t panic. This is expected behaviour, because this backup set does not contain the login credentials of the new ClusterControl’s cmon user; the restore brought back the old cmon user instead. What you need to do is re-grant the cmon user by running the following statements on this DB node:

GRANT ALL PRIVILEGES ON *.* to cmon@'192.168.55.150' IDENTIFIED BY 'mynewCMONpassw0rd' WITH GRANT OPTION;
FLUSH PRIVILEGES;

ClusterControl will then be able to connect to the bootstrapped node and determine the node and backup state. If everything is OK, you should see something like this:

At this point, the target node is bootstrapped and running. We can start the remaining nodes under Nodes -> choose node -> Start Node and check the “Perform an Initial Start” checkbox:

The restoration is now complete and you can expect Performance -> DB Growth to report the updated size of our newly restored data set.

Happy restoring!

via Planet MySQL
Updated – Full Restore of a MySQL or MariaDB Galera Cluster from Backup

Home Inspection and Its Powerful Benefits

A celebration is typically in order when you finally get to buy that dream house of yours. It’s a huge prize and achievement after so much hard work and saving. But what if you open that door only to find power surges and broken pipes?

Opening your doors to those things is the most horrible way to start that next phase of your life. One way you can prevent those things from happening is through a home inspection.

Doing a home inspection is an excellent way to find problems that you didn’t get to see during your first visit to the property. By doing a home inspection prior to making a purchase, you’ll be saving yourself tons of headaches, frustrations and the high cost of repair.

Still not convinced? Here are some home inspection benefits you should know about.

Peace of Mind


When we notice something out of place, it bothers us and makes us want to know what’s making us uneasy. A home inspection is one way of understanding that kind of problem.

See Also: 10 Important Home Features That Home Buyers Want

Safety

One of the main reasons you buy a house is safety. It can protect you and your loved ones from the weather, dangerous elements, and other things that can compromise your well-being. An inspection can provide the security you need by catching hazards before they cause personal injuries.

A slight power surge may mean that some wiring is deteriorating or that rats have begun chewing on the wires. That is already a fire hazard, and lots of tragic stories have been born from exactly that.

Water dripping from the ceiling can be solved temporarily with a bucket, but what happens when the bucket overflows and turns into a slipping hazard? Never underestimate what a simple home inspection can do.

Savings


A lot of homeowners disregard home inspection due to its cost. This way of thinking should be completely changed if you want to save a vast fortune. Knowing what needs repairs or fixing as soon as possible can prevent disasters, not only physically, but financially as well.

Having to face an abysmal ton of repairs in the future is a real headache. With a home inspection, you can sort out what needs to be fixed immediately to prevent it from causing even more problems.

Why wait for that rusty and leaky pipe to fall and break that expensive bath tub upstairs? Why wait for that wire to burn down the house when you can change that wire for a small fee?

A home inspection can save you from that headache and that financial drawback that you were so desperately afraid of in the first place.

See Also: Warning: 7 Home Inspection Pitfalls That Can Cost You A Fortune

Takeaway

It’s normal for people to feel euphoric when they get to purchase their dream house. Because of the overwhelming emotions, they can inadvertently skip the home inspection, disregarding the idea on the assumption that newer properties can’t have defects or damage.

Home inspection can provide us with a more comfortable life in the long run. From significant savings to saving the entire house we own, early assessment of our property is the next best thing we can do after popping that bottle of champagne.

The post Home Inspection and Its Powerful Benefits appeared first on Dumb Little Man.


via Dumb Little Man – Tips for Life
Home Inspection and Its Powerful Benefits

InnoDB Page Merging and Page Splitting

Page Merging and Page Splitting

If you ever asked one of the (few) MySQL consultants around the globe to review your queries and/or schemas, I am sure they told you something about the importance of good primary key design. Especially in the case of InnoDB, they probably started to explain index merges and page splits to you. These two notions are closely related to performance, and you should take this relationship into consideration when designing any index (not just PKs).

That may sound like mumbo jumbo to you, and you may be right. This is not easy stuff, especially when talking about internals. This is not something you deal with on a regular basis, and often you don’t want to deal with it at all.

But sometimes it’s a necessity. If so, this article is for you.

In this article, I want to shed some light in explaining some of the most unclear, behind the scenes operations in InnoDB: page index creation, page merging and page splitting.

In InnoDB, all data is an index. You’ve probably heard that as well, right? But what exactly does that mean?

File-Table Components

Let’s say you have MySQL installed, the latest 5.7 version (Percona Server for MySQL, right? 😉 ), and you have a table named wmills in the schema windmills. In the data directory (normally /var/lib/mysql/) you will see that it contains:

data/
  windmills/
      wmills.ibd
      wmills.frm

This is because the parameter innodb_file_per_table has defaulted to 1 since MySQL 5.6. With that setting, each table in your schema is represented by one file (or many files if the table is partitioned).
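
You can confirm the setting on your own server with:

SHOW VARIABLES LIKE 'innodb_file_per_table';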

What is important here is that the physical container is a file named wmills.ibd. This file contains N segments, and each segment is associated with an index.

While a file’s dimensions do not shrink with row-deletions, a segment itself can grow or shrink in relation to a sub-element named extent. An extent can only exist inside a segment and has a fixed dimension of 1MB (in the case of default page size). A page is a sub-element of an extent and has a default size of 16KB.

Given that, an extent can contain a maximum of 64 pages. A page can contain two to N rows, where the number of rows depends on the size of the row as defined by your table schema. There is a rule within InnoDB that says, at minimum, two rows must fit into a page. Therefore, with the default 16KB page we have a row-size limit of 8000 bytes.

If you think this sounds like Matryoshka dolls, you are right! An image might help:

InnoDB uses B-trees to organize your data inside pages across extents, within segments.

Roots, Branches, and Leaves

Each page (leaf) contains 2-N rows organized by the primary key. The tree has special pages to manage the different branch(es). These are known as internal nodes (INodes).

This image is just an example, and is not indicative of the real-world output below.

Let’s see the details:

ROOT NODE #3: 4 records, 68 bytes
 NODE POINTER RECORD ≥ (id=2) → #197
 INTERNAL NODE #197: 464 records, 7888 bytes
 NODE POINTER RECORD ≥ (id=2) → #5
 LEAF NODE #5: 57 records, 7524 bytes
 RECORD: (id=2) → (uuid="884e471c-0e82-11e7-8bf6-08002734ed50", millid=139, kwatts_s=1956, date="2017-05-01", location="For beauty's pattern to succeeding men.Yet do thy", active=1, time="2017-03-21 22:05:45", strrecordtype="Wit")
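
For reference, a tree dump in this format can be produced with Jeremy Cole’s innodb_ruby toolkit (credited at the end of this post); a plausible invocation, assuming the default system tablespace path (check the project’s README for the exact syntax):

$ innodb_space -s /var/lib/mysql/ibdata1 -T windmills/wmills -I PRIMARY index-recurse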

Below is the table structure:

CREATE TABLE `wmills` (
  `id` bigint(11) NOT NULL AUTO_INCREMENT,
  `uuid` char(36) COLLATE utf8_bin NOT NULL,
  `millid` smallint(6) NOT NULL,
  `kwatts_s` int(11) NOT NULL,
  `date` date NOT NULL,
  `location` varchar(50) COLLATE utf8_bin DEFAULT NULL,
  `active` tinyint(2) NOT NULL DEFAULT '1',
  `time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  `strrecordtype` char(3) COLLATE utf8_bin NOT NULL,
  PRIMARY KEY (`id`),
  KEY `IDX_millid` (`millid`)
) ENGINE=InnoDB;

All styles of B-trees have a point of entry known as the root node. We’ve identified that here as page #3. The root page contains information such as index ID, number of INodes, etc. INode pages contain information about the pages themselves, their value ranges, etc. Finally, we have the leaf nodes, which is where we can find our data. In this example, we can see that leaf node #5 has 57 records for a total of 7524 bytes. Below that line is a record, and you can see the row data.

The concept here is that while you organize your data in tables and rows, InnoDB organizes it in branches, pages, and records. It is very important to keep in mind that InnoDB does not work on a single row basis. InnoDB always operates on pages. Once a page is loaded, it will then scan the page for the requested row/record.

Is that clear up to now? Good. Let’s continue.

Page Internals

A page can be empty or fully filled (100%). The row-records will be organized by PK. For example, if your table is using an AUTO_INCREMENT, you will have the sequence ID = 1, 2, 3, 4, etc.

A page also has another important attribute: MERGE_THRESHOLD. The default value of this parameter is 50% of the page, and it plays a very important role in InnoDB merge activity.
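
Since MySQL 5.7, MERGE_THRESHOLD can be lowered from the 50% default through a table (or index) COMMENT clause; a minimal sketch on our example table:

ALTER TABLE windmills.wmills COMMENT='MERGE_THRESHOLD=45';

A value in the table comment applies to all of the table’s indexes; an index-level COMMENT in the index definition overrides it for that index.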

While you insert data, the page is filled up sequentially if the incoming record can be accommodated inside the page.

When a page is full, the next record will be inserted into the NEXT page:

Given the nature of B-trees, the structure is browsable not only top-down following the branches, but also horizontally across the leaf nodes. This is because each leaf node page has a pointer to the page that contains the NEXT record value in the sequence.

For example, Page #5 has a reference to the next page, Page #6. Page #6 has a backward reference to the previous page (Page #5) and a forward reference to the next page (Page #7).

This mechanism of a linked list allows for fast, in-order scans (i.e., Range Scans). As mentioned before, this is what happens when you are inserting and have a PK based on AUTO_INCREMENT. But what happens if I start to delete values?

Page Merging

When you delete a record, the record is not physically deleted. Instead, InnoDB flags the record as deleted, and the space it used becomes reclaimable.

When a page has received enough deletes to match the MERGE_THRESHOLD (50% of the page size by default), InnoDB starts to look at the closest pages (NEXT and PREVIOUS) to see if there is any chance to optimize space utilization by merging the two pages.

In this example, Page #6 is utilizing less than half of its space. Page #5 received many deletes and is also now less than 50% used. From InnoDB’s perspective, they are mergeable:

The merge operation results in Page #5 containing its previous data plus the data from Page #6. Page #6 becomes an empty page, usable for new data.


The same process also happens when we update a record and the size of the new record brings the page below the threshold.

The rule is: merges happen on delete and update operations involving closely linked pages. If a merge operation is successful, the index_page_merge_successful metric in INFORMATION_SCHEMA.INNODB_METRICS is incremented.
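
A quick way to watch these counters (a sketch; on most builds the index page counters are disabled by default and must be enabled first):

SET GLOBAL innodb_monitor_enable = 'index_page_merge%';
SELECT NAME, `COUNT` FROM INFORMATION_SCHEMA.INNODB_METRICS WHERE NAME LIKE 'index_page_merge%';

The same pattern with 'index_page_split%' covers the split counters discussed below.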

Page Splits

As mentioned above, a page can be filled up to 100%. When this happens, the next page takes new records.

But what if we have the following situation?


Page #10 doesn’t have enough space to accommodate the new (or updated) record. Following the next page logic, the record should go on Page #11. However:


Page #11 is also full, and data cannot be inserted out of order. So what can be done?

Remember the linked list we spoke about? At this moment Page #10 has Prev=9 and Next=11.  

What InnoDB will do is (simplifying):

  1. Create a new page
  2. Identify where the original page (Page #10) can be split (at the record level)
  3. Move records
  4. Redefine the page relationships


A new Page #12 is created:


Page #11 stays as it is. The thing that changes is the relationship between the pages:

  • Page #10 will have Prev=9 and Next=12
  • Page #12 Prev=10 and Next=11
  • Page #11 Prev=12 and Next=13

The path of the B-tree still sees consistency since it is following a logical organization. However, physically the page is located out of order, and in most cases in a different extent.

As a rule we can say: page splits happen on insert or update operations, and cause page dislocation (in many cases to different extents).

InnoDB tracks the number of page splits in INFORMATION_SCHEMA.INNODB_METRICS. Look for index_page_splits and index_page_reorg_attempts/successful metrics.

Once the split page is created, the only way to move back is to have the created page drop below the merge threshold. When that happens, InnoDB moves the data from the split page with a merge operation.

The other way is to reorganize the data with OPTIMIZE TABLE. This can be a very heavy and long process, but often it is the only way to recover from a situation where too many pages are located in sparse extents.
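
For InnoDB tables, OPTIMIZE TABLE is implemented as a full table rebuild (it maps to ALTER TABLE ... FORCE) followed by an analyze; on our example table:

OPTIMIZE TABLE windmills.wmills;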

Another aspect to keep in mind is that during merge and split operations, InnoDB acquires an x-latch on the index tree. On a busy system, this can easily become a source of index latch contention. Writes that touch only a single page, requiring no merge or split, are called “optimistic” updates in InnoDB, and take the latch only in S mode. Merges and splits are called “pessimistic” updates, and take the latch in X mode.

My Primary Key

A good Primary Key (PK) is not only important for retrieving data, but also for correctly distributing the data across the extents while writing (which is also relevant in the case of split and merge operations).

To compare the effects, I tested three different PK designs. In the first case, I have a simple auto-increment. In the second, my PK is based on an ID (1-200 range) plus an auto-increment value. In the third, I have the same ID (1-200 range) but associated with a UUID.
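
The exact definitions used for these tests are not shown in this post; a hedged sketch of the three layouts, with hypothetical table names, might look like this:

-- Case 1: plain auto-increment PK (sequential inserts, compact pages)
CREATE TABLE wmills_ai (
  id BIGINT NOT NULL AUTO_INCREMENT,
  millid SMALLINT NOT NULL,
  uuid CHAR(36) NOT NULL,
  PRIMARY KEY (id)
) ENGINE=InnoDB;

-- Case 2: ID (1-200 range) plus an auto-increment value
CREATE TABLE wmills_id_ai (
  id BIGINT NOT NULL AUTO_INCREMENT,
  millid SMALLINT NOT NULL,
  uuid CHAR(36) NOT NULL,
  PRIMARY KEY (millid, id),
  KEY (id)               -- the auto-increment column must lead an index
) ENGINE=InnoDB;

-- Case 3: the same ID associated with a semi-random UUID
CREATE TABLE wmills_id_uuid (
  millid SMALLINT NOT NULL,
  uuid CHAR(36) NOT NULL,
  PRIMARY KEY (millid, uuid)
) ENGINE=InnoDB;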

When inserting, InnoDB must add pages; this shows up as SPLIT operations.


The behavior is quite different depending on the kind of Primary Key I use.

The first two cases will have more “compact” data distribution. This means they will also have better space utilization, while the semi-random nature of the UUID will cause a significant “sparse” page distribution (causing a higher number of pages and related split operations).

In the case of merges, the number of merge attempts differs even more by PK type.


On Insert-Update-Delete operations, auto-increment has fewer page merge attempts and a lower success ratio (9.45%) than the other two types. The PK with UUID, on the other side of the spectrum, has a higher number of merge attempts, but at the same time also a significantly higher success ratio at 22.34%, given that the “sparse” distribution left many pages partially empty.

The PK values with similar numbers also come from a secondary index.

Conclusion

MySQL/InnoDB constantly performs these operations, and you have very limited visibility into them. But they can bite you, and bite hard, especially if you are using spindle storage vs. SSDs (which have different issues of their own, by the way).

The sad story is there is also very little we can do to optimize this on the server side using parameters or some other magic. But the good news is there is A LOT that can be done at design time.

Use a proper Primary Key and design secondary indexes keeping in mind that you shouldn’t abuse them. Plan proper maintenance windows on the tables that you know will see very high levels of inserts/deletes/updates.

This is an important point to keep in mind. In InnoDB you cannot have fragmented records, but you can have a nightmare at the page-extent level. Ignoring table maintenance will cause more work at the IO level, in memory, and in the InnoDB buffer pool.

You must rebuild some tables at regular intervals. Use whatever tricks it requires, including partitioning and external tools (pt-osc, as sketched below). Do not let a table become gigantic and fully fragmented.
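
As an illustration of the external-tool route, a “null” ALTER with Percona’s pt-online-schema-change forces a full online rebuild (connection options omitted):

$ pt-online-schema-change --alter "ENGINE=InnoDB" D=windmills,t=wmills --execute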

Wasting disk space? Needing to load three pages instead of one to retrieve the record set you need? Watching each search cause significantly more reads?
That’s your fault; there is no excuse for being sloppy!

Happy MySQL to everyone!

Acknowledgments

Laurynas Biveinis: who had the time and patience to explain some internals to me.

Jeremy Cole: for his project InnoDB_ruby (that I use constantly).

via MySQL Performance Blog
InnoDB Page Merging and Page Splitting

Staples Tries Co-Working Spaces To Court Millennials And Entrepreneurs

Are there any Slashdot readers who are doing their work in co-working spaces? An anonymous reader writes:
Staples is aggressively repositioning its office-supply brand to entice new customers like tech entrepreneurs and small businesses, reports The New York Times. "A case in point: Staples’ partnership with Workbar, a Boston-based co-working company founded in 2009… Workbar attracts the coveted millennial generation, as well as entrepreneurs, a potential pipeline for new small business customers." Three co-working spaces have now been added to Staples stores, including their original flagship store in Boston, and the Times spotted funky art, skylights, an artificial putting green, as well as gourmet coffee "and — on some nights — happy hours with beer and wine."
"This blend of old and new shows how Staples Inc. is digging up its roots as one of the first, and most successful, big-box retailers. Under Shira Goodman, the company’s new chief executive officer, Staples hopes it can reverse its years of declining sales, unlike so many other retailers left for dead in the internet age." The company also reports online orders already make up 60% of their sales, which they hope to push to 80% by 2020, according to the Motley Fool. "Selling products, 50% of which are outside of traditional office supply categories, to businesses large and small has proven to be a resilient business for Staples."




Read more of this story at Slashdot.

via Slashdot
Staples Tries Co-Working Spaces To Court Millennials And Entrepreneurs

I Hate My Wide Feet

Illustration by Sam Woolley

My feet are big. Not in a potentially good way, the way that might grab the interest of an NBA scout. Or in the way that might set a woman to wondering. No. My feet are wide.

I wear a size 11, width 4E. I can get away with a 2E, but it’s not ideal. According to this handy chart—for big and tall men, goddamnit—that means my foot is three-quarters of an inch wider than yours, a normal human male’s. My foot is an entire 15 percent wider across.

This is not enough to get me a job in the circus, but it is enough to ensure something that has probably never crossed normies’ minds: I cannot wear most shoes. Think of any hot sneaker, or sharp loafer, or even rain and snow boots. They do not make them in my width. (Nike has relatively recently released some sneakers that come in 4E, but please trust that they’re not the Nikes that you’d ever want to buy.) When I see the kids lining up for new releases, or basketball players shilling their signature models, I know that I am looking at a world of which I can never be a part. Coolness is forever paraded before me, and denied me.

There are a small number of companies that do make nearly all their shoes in extra widths. That list of companies is grim. I wear, and have worn since middle school, exclusively New Balances and Rockports. Who wears those shoes? Dads wear those shoes. I have been a footwear dad since I reached puberty.

The world is not made for me. I have never had dress shoes—which I must buy a half-size bigger just to be able to physically put on—that aren’t perpetual agony. I went bowling this week and I am a chafed, blistered mess. I couldn’t even have a proper punk phase because I couldn’t fit into combat boots.

I am sure there are solutions to my problems. I am sure there is an entire community out there of wide-footed men, who share their questions and provide answers and give each other the support denied them by an industry that’d pretend we don’t exist. I can’t join them, because that would make this my identity. And I do not want to be a Wide-Footed Man. I just want to be a man, who has many unique and humanizing qualities, one of which is my wide feet.

And so I suffer, quietly. You see me, and you don’t consider my plight. You cover your children’s eyes to shield them from the sight of my New Balance 515s. I’ll never be one of you.

I guess Asics makes extra-wide shoes too? No, I’d rather be dead.

via Gizmodo
I Hate My Wide Feet