MySQL 8.4 changed the InnoDB adaptive hash index (innodb_adaptive_hash_index) default from ON to OFF, a major shift after years of it being enabled by default. Note that the MySQL adaptive hash index (AHI) feature remains fully available and configurable.
This blog is me going down the rabbit hole so you don’t have to, presenting what you actually need to know. Even if you’re a seasoned MySQL know-it-all tempted to skip ahead, DON’T: there’s a bonus task toward the end.
Note that MariaDB already made this change in 10.5.4 (see MDEV-20487), so MySQL is not breaking new ground here. But why? Let me start with the "what" first!
What is the Adaptive Hash Index (AHI) in MySQL?
This has been discussed so many times, I’ll keep it short.
We know InnoDB uses B-trees for all indexes. A typical lookup requires traversing 3 – 4 levels: root > internal nodes > leaf page. For millions of rows, this is efficient but not instant.
AHI is an in-memory hash table that sits on top of your B-tree indexes. It monitors access patterns in real-time, and when it detects frequent lookups with the same search keys, it builds hash entries that map those keys directly to buffer pool pages.
The next time the same search key is hit, instead of a multi-level B-tree traversal, you get a single hash lookup in the AHI memory area and a direct jump to the buffer pool page, giving you immediate data access.
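To make the idea concrete, here is a minimal conceptual sketch in Python (not InnoDB source code): a hash cache layered over an ordered index, where keys looked up repeatedly get "promoted" into the hash so later lookups skip the tree-style search. The class name, the `promote_after` threshold, and the counters are all illustrative assumptions.

```python
import bisect

class HashOverTree:
    """Toy model of an adaptive hash index over a sorted (B-tree-like) index."""

    def __init__(self, sorted_pairs):
        self.keys = [k for k, _ in sorted_pairs]
        self.values = [v for _, v in sorted_pairs]
        self.hash_cache = {}   # plays the role of the AHI
        self.seen = {}         # per-key lookup counter
        self.tree_lookups = 0
        self.hash_lookups = 0

    def lookup(self, key, promote_after=2):
        if key in self.hash_cache:      # single hash probe, no traversal
            self.hash_lookups += 1
            return self.hash_cache[key]
        self.tree_lookups += 1          # fall back to the "B-tree" path
        i = bisect.bisect_left(self.keys, key)
        value = self.values[i] if i < len(self.keys) and self.keys[i] == key else None
        self.seen[key] = self.seen.get(key, 0) + 1
        if value is not None and self.seen[key] >= promote_after:
            self.hash_cache[key] = value   # "adaptive": hot keys get hashed
        return value
```

After a key has been looked up `promote_after` times, subsequent lookups are served from the hash, which is the essence of the real AHI behavior described above.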
FYI, the AHI lives inside the InnoDB buffer pool.
What is “adaptive” about the “hash index”?
InnoDB watches your workload and adaptively decides what to cache based on access patterns and lookup frequency. You don’t configure which indexes or keys to hash; InnoDB figures it out automatically. High-frequency lookups? AHI builds entries. Access patterns change? AHI rebuilds the hash. It’s a self-tuning optimization that adjusts to your actual runtime behavior and query patterns. That’s the adaptiveness.
Sounds perfect, right? What’s the problem then?
The Problem(s) with AHI
– Overhead of AHI
AHI is optimal for frequently accessed pages, but what about infrequent ones? The lookup path for such a query is:
– Check the AHI
– Check the buffer pool
– Read from disk
For infrequent or random access patterns, the AHI lookup isn’t useful; the query falls through to the regular B-tree path anyway. You spend memory on hash entries, pay for the failed search and comparisons, and burn CPU cycles for nothing.
– There is a latch on the AHI door
The AHI is a shared data structure. Although it is partitioned (innodb_adaptive_hash_index_parts), each partition is protected by a latch for controlled access. As concurrency increases, threads can end up blocking each other on those latches.
– The unpredictability of AHI
This appears to be the main reason for disabling the Adaptive Hash Index in MySQL 8.4. The optimizer needs to predict costs BEFORE the query runs; it has to decide, “Should I use index A or index B?” Because the AHI is built dynamically and depends on how frequently data is accessed, the optimizer cannot predict a consistent query path.
The comments in the IndexLookupCost section of cost_model.h explain it better, and I quote:
“With AHI enabled the cost of random lookups does not appear to be predictable using standard explanatory variables such as index height or the logarithm of the number of rows in the index.”
I’d word it like this: the default change of the InnoDB Adaptive Hash Index in MySQL 8.4 was driven by, one, the realization that favoring predictability matters more than potential gains in specific scenarios, and two, the fact that the feature remains available, so end users can enable it if they know (or believe) it will help them.
In my production experience, AHI frequently becomes a contention bottleneck under certain workloads: write-heavy, highly concurrent, or when the active dataset exceeds the buffer pool size. Disabling AHI ensures consistent response times and eliminates a common source of performance unpredictability.
That brings us to the next segment: what do YOU need to do, and importantly, HOW?
The bottom line: MySQL 8.4 defaults to innodb_adaptive_hash_index=OFF. Before upgrading, verify whether AHI is actually helping your workload or quietly hurting it.
How to track MySQL AHI usage
Using the MySQL CLI
Run the SHOW ENGINE INNODB STATUS command and look for the section titled “INSERT BUFFER AND ADAPTIVE HASH INDEX”:
Here:
– hash searches: lookups served by the AHI
– non-hash searches: regular B-tree lookups (after the AHI search fails)
If your hash search rate is significantly higher, the AHI is actively helping. If the numbers are similar, or hash searches are lower, the AHI isn’t providing much benefit.
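If you want to script this check, here is a hedged Python sketch that pulls the two rates out of SHOW ENGINE INNODB STATUS text and computes the fraction of lookups served by the AHI. It assumes the status output contains the usual "N hash searches/s, N non-hash searches/s" line; the function name is mine.

```python
import re

def ahi_efficiency(status_text):
    """Return the fraction of lookups served by the AHI, or None if the
    'hash searches/s' line is not found in the status output."""
    m = re.search(r"([\d.]+) hash searches/s, ([\d.]+) non-hash searches/s",
                  status_text)
    if not m:
        return None
    hash_rate, nonhash_rate = float(m.group(1)), float(m.group(2))
    total = hash_rate + nonhash_rate
    return hash_rate / total if total else 0.0
```

For example, "1500.00 hash searches/s, 500.00 non-hash searches/s" gives 0.75, i.e., three out of four lookups served by the AHI.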
Is AHI causing contention in MySQL?
In the SHOW ENGINE INNODB STATUS output, look for wait events in the SEMAPHORE section:
– Thread X has waited at btr0sea.ic line … for … seconds the semaphore: S-lock on RW-latch at … created in file btr0sea.cc line …
How about a chart that shows AHI efficiency? Percona Monitoring and Management makes it easy to visualize and decide whether AHI helps your current workload. Here are 1,000 words for you:
Bonus Task
Think you’ve got MySQL AHI figured out? Let’s do this task:
Scroll down to “Innodb Adaptive Hash Index” section
Answer this question in comments section: Which MySQL instances are better off without AHI?
Conclusion
AHI is a great idea, and it works until it doesn’t. Do the homework: track usage, measure impact, then decide, and make sure you’re ready for your upgrade. If your monitoring shows consistently high hash search rates with minimal contention, you’re in the sweet spot and AHI should remain enabled. If not, innodb_adaptive_hash_index is best left OFF. A recent song verse suits MySQL AHI well: “I’m a king but I’m far from a saint” / “It’s a blessing and a curse” (IUKUK)
Have you seen AHI help or hurt in your systems? What’s your plan for MySQL 8.4? I’d love to hear real-world experiences… the database community learns best when we share our war stories.
PS
Open source is beautiful: you can actually read the code (and comments) and understand the “why” behind decisions.
Database performance is often the bottleneck in web applications. This guide covers comprehensive MySQL optimization techniques from query-level improvements to server configuration tuning.
Understanding Query Execution
Before optimizing, understand how MySQL executes queries using EXPLAIN:
Key EXPLAIN columns to watch: type (aim for ref or better), rows (lower is better), Extra (avoid "Using filesort" and "Using temporary").
EXPLAIN Output Analysis
+----+-------------+-------+--------+---------------+---------+---------+------------------+------+-------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+-------+--------+---------------+---------+---------+------------------+------+-------------+
| 1 | SIMPLE | o | range | idx_status | idx_... | 4 | NULL | 5000 | Using where |
| 1 | SIMPLE | u | eq_ref | PRIMARY | PRIMARY | 4 | mydb.o.user_id | 1 | NULL |
| 1 | SIMPLE | oi | ref | idx_order | idx_... | 4 | mydb.o.id | 3 | Using index |
+----+-------------+-------+--------+---------------+---------+---------+------------------+------+-------------+
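Reading EXPLAIN output like the table above becomes mechanical after a while, and the checks can be scripted. Here is a hedged Python sketch that flags the warning signs listed earlier in one EXPLAIN row, given as a dict (as a connector with a dict cursor might return it); the function name and the 10,000-row threshold are illustrative assumptions.

```python
def explain_red_flags(row):
    """Return a list of warning strings for one EXPLAIN row (dict form)."""
    flags = []
    if row.get("type") in ("ALL", "index"):
        flags.append("full scan: add or fix an index")
    if (row.get("rows") or 0) > 10000:
        flags.append("high row estimate: check predicate selectivity")
    extra = row.get("Extra") or ""
    if "Using filesort" in extra:
        flags.append("filesort: consider an index matching ORDER BY")
    if "Using temporary" in extra:
        flags.append("temporary table: revisit GROUP BY/DISTINCT")
    return flags
```

Running it over every row of an EXPLAIN result gives a quick triage list before diving into index design.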
Indexing Strategies
Composite Index Design
Design indexes based on query patterns:
-- Query pattern: filter by status, date range, sort by date
SELECT * FROM orders
WHERE status = 'pending' AND created_at > '2024-01-01'
ORDER BY created_at DESC;

-- Optimal composite index (leftmost prefix rule)
CREATE INDEX idx_orders_status_created ON orders (status, created_at);

-- For queries with multiple equality conditions
SELECT * FROM products
WHERE category_id = 5 AND brand_id = 10 AND is_active = 1;

-- Index with the most selective column first
CREATE INDEX idx_products_brand_cat_active ON products (brand_id, category_id, is_active);
Covering Indexes
Avoid table lookups with covering indexes:
-- Query only needs specific columns
SELECT id, name, price FROM products
WHERE category_id = 5
ORDER BY price;

-- Covering index includes all needed columns
CREATE INDEX idx_products_covering ON products (category_id, price, id, name);

-- MySQL can satisfy the query entirely from the index
-- EXPLAIN shows "Using index" in the Extra column
Index for JOIN Operations
-- Ensure foreign keys are indexed
CREATE INDEX idx_orders_user_id ON orders (user_id);
CREATE INDEX idx_order_items_order_id ON order_items (order_id);
CREATE INDEX idx_order_items_product_id ON order_items (product_id);

-- For complex joins, index the join columns
SELECT p.name, SUM(oi.quantity) AS total_sold
FROM products p
JOIN order_items oi ON p.id = oi.product_id
JOIN orders o ON oi.order_id = o.id
WHERE o.created_at > '2024-01-01'
GROUP BY p.id
ORDER BY total_sold DESC;

-- Indexes needed:
-- orders(created_at)      - for the WHERE filter
-- order_items(order_id)   - for the JOIN
-- order_items(product_id) - for the JOIN
Don’t over-index! Each index slows down INSERT/UPDATE operations. Monitor unused indexes with sys.schema_unused_indexes.
Query Optimization Techniques
Avoiding Full Table Scans
-- Bad: function on an indexed column prevents index use
SELECT * FROM users WHERE YEAR(created_at) = 2024;

-- Good: range query uses the index
SELECT * FROM users
WHERE created_at >= '2024-01-01' AND created_at < '2025-01-01';

-- Bad: leading wildcard prevents index use
SELECT * FROM products WHERE name LIKE '%phone%';

-- Good: trailing wildcard can use the index
SELECT * FROM products WHERE name LIKE 'phone%';

-- For full-text search, use a FULLTEXT index
ALTER TABLE products ADD FULLTEXT INDEX ft_name (name);
SELECT * FROM products WHERE MATCH(name) AGAINST('phone');
Optimizing Subqueries
-- Bad: correlated subquery runs for each row
SELECT * FROM products p
WHERE price > (SELECT AVG(price) FROM products WHERE category_id = p.category_id);

-- Good: JOIN with a derived table
SELECT p.*
FROM products p
JOIN (
    SELECT category_id, AVG(price) AS avg_price
    FROM products
    GROUP BY category_id
) cat_avg ON p.category_id = cat_avg.category_id
WHERE p.price > cat_avg.avg_price;

-- Even better: window function (MySQL 8.0+)
SELECT * FROM (
    SELECT *, AVG(price) OVER (PARTITION BY category_id) AS avg_price
    FROM products
) t
WHERE price > avg_price;
Pagination Optimization
-- Bad: OFFSET scans and discards rows
SELECT * FROM products ORDER BY id LIMIT 10 OFFSET 100000;

-- Good: keyset pagination (cursor-based)
SELECT * FROM products
WHERE id > 100000   -- last seen ID
ORDER BY id
LIMIT 10;

-- For complex sorting, use a deferred join
SELECT p.*
FROM products p
JOIN (
    SELECT id FROM products
    ORDER BY created_at DESC, id DESC
    LIMIT 10 OFFSET 100000
) t ON p.id = t.id;
Server Configuration Tuning
InnoDB Buffer Pool
# my.cnf - for a dedicated database server with 32GB RAM
[mysqld]
# Buffer pool should be 70-80% of available RAM
innodb_buffer_pool_size = 24G
innodb_buffer_pool_instances = 24

# Log file size trades recovery time against write performance
innodb_log_file_size = 2G
innodb_log_buffer_size = 64M

# Flush settings (1 = safest, 2 = faster)
innodb_flush_log_at_trx_commit = 1
innodb_flush_method = O_DIRECT

# Thread concurrency
innodb_thread_concurrency = 0
innodb_read_io_threads = 8
innodb_write_io_threads = 8
Performance Schema Analysis

-- Find the top 10 slowest queries
SELECT
    DIGEST_TEXT,
    COUNT_STAR AS exec_count,
    ROUND(SUM_TIMER_WAIT / 1000000000000, 2) AS total_time_sec,
    ROUND(AVG_TIMER_WAIT / 1000000000, 2) AS avg_time_ms,
    SUM_ROWS_EXAMINED,
    SUM_ROWS_SENT
FROM performance_schema.events_statements_summary_by_digest
ORDER BY SUM_TIMER_WAIT DESC
LIMIT 10;

-- Find tables with the most I/O
SELECT
    object_schema,
    object_name,
    count_read,
    count_write,
    ROUND(sum_timer_read / 1000000000000, 2) AS read_time_sec,
    ROUND(sum_timer_write / 1000000000000, 2) AS write_time_sec
FROM performance_schema.table_io_waits_summary_by_table
ORDER BY sum_timer_wait DESC
LIMIT 10;

-- Find unused indexes
SELECT * FROM sys.schema_unused_indexes;

-- Find redundant indexes
SELECT * FROM sys.schema_redundant_indexes;
Real-time Monitoring
-- Current running queries
SELECT id, user, host, db, command, time, state, LEFT(info, 100) AS query
FROM information_schema.processlist
WHERE command != 'Sleep'
ORDER BY time DESC;

-- InnoDB status
SHOW ENGINE INNODB STATUS\G

-- Buffer pool hit ratio (should be > 99%)
SELECT (1 - (
    (SELECT variable_value FROM performance_schema.global_status
     WHERE variable_name = 'Innodb_buffer_pool_reads') /
    (SELECT variable_value FROM performance_schema.global_status
     WHERE variable_name = 'Innodb_buffer_pool_read_requests')
)) * 100 AS buffer_pool_hit_ratio;
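If you collect those two status counters into a monitoring pipeline, the same hit-ratio arithmetic is easy to reproduce outside SQL. A minimal Python sketch, mirroring the query above (function name is mine):

```python
def buffer_pool_hit_ratio(reads, read_requests):
    """Hit ratio as a percentage, from Innodb_buffer_pool_reads (disk reads)
    and Innodb_buffer_pool_read_requests (logical reads)."""
    if read_requests == 0:
        return 0.0
    return (1 - reads / read_requests) * 100
```

For example, 100 disk reads against 100,000 read requests gives a 99.9% hit ratio, just above the rule-of-thumb threshold.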
Partitioning for Large Tables
-- Range partitioning by date
CREATE TABLE orders (
    id BIGINT AUTO_INCREMENT,
    user_id INT NOT NULL,
    total DECIMAL(10,2),
    status VARCHAR(20),
    created_at DATETIME NOT NULL,
    PRIMARY KEY (id, created_at),
    INDEX idx_user (user_id, created_at)
)
PARTITION BY RANGE (YEAR(created_at)) (
    PARTITION p2022 VALUES LESS THAN (2023),
    PARTITION p2023 VALUES LESS THAN (2024),
    PARTITION p2024 VALUES LESS THAN (2025),
    PARTITION p_future VALUES LESS THAN MAXVALUE
);

-- Queries automatically prune partitions
SELECT * FROM orders
WHERE created_at >= '2024-01-01' AND created_at < '2024-07-01';
-- Only scans the p2024 partition
Connection Pooling
Application-Level Pooling
// Node.js with mysql2
const mysql = require('mysql2/promise');

const pool = mysql.createPool({
    host: 'localhost',
    user: 'app_user',
    password: 'password',
    database: 'myapp',
    waitForConnections: true,
    connectionLimit: 20,
    queueLimit: 0,
    enableKeepAlive: true,
    keepAliveInitialDelay: 10000
});

// Use the pool for queries
async function getUser(id) {
    const [rows] = await pool.execute('SELECT * FROM users WHERE id = ?', [id]);
    return rows[0];
}
Conclusion
MySQL performance optimization is an iterative process. Start by identifying slow queries with the slow query log, analyze them with EXPLAIN, add appropriate indexes, and monitor the results. Server configuration should be tuned based on your workload characteristics and available resources.
Key takeaways:
Design indexes based on actual query patterns
Use EXPLAIN to understand query execution
Avoid functions on indexed columns in WHERE clauses