These days, companies like Stihl and Makita sell multi-heads. These are battery-powered motors that can drive a variety of common landscaping attachments, like string trimmers and hedge cutters.
Uniquely, Makita also offers this Snow Thrower attachment:
The business end is 12" wide and can handle a 6" depth of snow at a time. Tiltable vanes on the inside let you control whether you want to throw the snow to the left, to the right or straight ahead. The company says you can clear about five parking spaces with two 18V batteries.
So how well does it work? Seeing is believing. Here’s Murray Kruger of Kruger Construction putting it through its paces:
Audit logging has become a crucial component of database security and compliance, helping organizations track user activities, monitor data access patterns, and maintain detailed records for regulatory requirements and security investigations. Database audit logs provide a comprehensive trail of actions performed within the database, including queries executed, changes made to data, and user authentication attempts. Managing these logs is more straightforward with a robust storage solution such as Amazon Simple Storage Service (Amazon S3).
In this post, we explore two approaches for exporting MySQL audit logs to Amazon S3: either using batching with a native export to Amazon S3 or processing logs in real time with Amazon Data Firehose.
Solution overview
The first solution involves batch processing: it uses the built-in audit log export feature in Amazon RDS for MySQL or Aurora MySQL-Compatible to export logs to Amazon CloudWatch Logs. Amazon EventBridge periodically triggers an AWS Lambda function, which creates a CloudWatch Logs export task that sends the previous day's audit logs to Amazon S3. The period (one day) is configurable based on your requirements. This solution is the most cost-effective and practical if you don't require the audit logs to be available in near real time in an S3 bucket. The following diagram illustrates this workflow.
The other proposed solution uses Data Firehose to immediately process the MySQL audit logs within CloudWatch Logs and send them to an S3 bucket. This approach is suitable for business use cases that require immediate export of audit logs when they’re available within CloudWatch Logs. The following diagram illustrates this workflow.
Use cases
Once you’ve implemented either of these solutions, you’ll have your Aurora MySQL or RDS for MySQL audit logs stored securely in Amazon S3. This opens up a wealth of possibilities for analysis, monitoring, and compliance reporting. Here’s what you can do with your exported audit logs:
Run Amazon Athena queries: With your audit logs in S3, you can use Amazon Athena to run SQL queries directly against your log data. This allows you to quickly analyze user activities, identify unusual patterns, or generate compliance reports. For example, you could query for all actions performed by a specific user, or find all failed login attempts within a certain time frame.
Create Amazon QuickSight dashboards: Using Amazon QuickSight in conjunction with Athena, you can create visual dashboards of your audit log data. This can help you spot trends over time, such as peak usage hours, most active users, or frequently accessed database objects.
Set up automated alerting: By combining your S3-stored logs with AWS Lambda and Amazon SNS, you can create automated alerts for specific events. For instance, you could set up a system to notify security personnel if there’s an unusual spike in failed login attempts or if sensitive tables are accessed outside of business hours.
Perform long-term analysis: With your audit logs centralized in S3, you can perform long-term trend analysis. This could help you understand how database usage patterns change over time, informing capacity planning and security policies.
Meet compliance requirements: Many regulatory frameworks require retention and analysis of database audit logs. With your logs in S3, you can easily demonstrate compliance with these requirements, running reports as needed for auditors.
By leveraging these capabilities, you can turn your audit logs from a passive security measure into an active tool for database management, security enhancement, and business intelligence.
Comparing solutions
The first solution uses EventBridge to periodically trigger a Lambda function. This function creates a CloudWatch Logs export task that sends a batch of log data to Amazon S3 at regular intervals. This method is well-suited for scenarios where you prefer to process logs in batches to optimize costs and resources.
The second solution uses Data Firehose to create a real-time audit log processing pipeline. This approach streams logs directly from CloudWatch to an S3 bucket, providing near real-time access to your audit data. In this context, "real-time" means that log data is processed and delivered continuously as it is generated, rather than being sent at a predefined interval. This solution is ideal for scenarios requiring immediate access to log data or for high-volume logging environments.
Whether you choose the near real-time streaming approach or the scheduled export method, you will be well-equipped to manage your Aurora MySQL and RDS for MySQL audit logs effectively.
Prerequisites for both solutions
Before getting started, complete the following prerequisites:
Create or have an existing RDS for MySQL instance or Aurora MySQL cluster.
Create an S3 bucket to store the MySQL audit logs using the following AWS CLI command:
aws s3api create-bucket --bucket <bucket_name>
After the command is complete, you will see an output similar to the following:
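The exact output depends on your AWS Region and CLI configuration; for a bucket created in the default us-east-1 Region, it typically looks similar to this (the bucket name is a placeholder):

{
    "Location": "/<bucket_name>"
}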
Note: Each solution has specific service components which are discussed in their respective sections.
Solution 1: Perform audit log batch processing with EventBridge and Lambda
In this solution, we create a Lambda function to export your audit log to Amazon S3 based on the schedule you set using EventBridge Scheduler. This solution offers a cost-efficient way to transfer audit log files within an S3 bucket in a scheduled manner.
Create IAM role for EventBridge Scheduler
The first step is to create an AWS Identity and Access Management (IAM) role responsible for allowing EventBridge Scheduler to invoke the Lambda function we will create later. Complete the following steps to create this role:
Connect to a terminal with the AWS CLI or CloudShell.
Create a file named TrustPolicyForEventBridgeScheduler.json using your preferred text editor:
nano TrustPolicyForEventBridgeScheduler.json
Insert the following trust policy into the JSON file:
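The original policy document isn't reproduced here; the following is a minimal sketch of a trust policy that lets EventBridge Scheduler assume the role, scoped to your account (replace <AccountId> with your AWS account ID):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "scheduler.amazonaws.com"
      },
      "Action": "sts:AssumeRole",
      "Condition": {
        "StringEquals": {
          "aws:SourceAccount": "<AccountId>"
        }
      }
    }
  ]
}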
Note: Make sure to amend SourceAccount before saving the file. The condition prevents unauthorized access from other AWS accounts.
Create a file named PermissionsForEventBridgeScheduler.json using your preferred text editor:
nano PermissionsForEventBridgeScheduler.json
Insert the following permissions into the JSON file:
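As a sketch, the permissions policy only needs to allow invoking the Lambda function; the Region, account ID, and function name below are placeholders:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "lambda:InvokeFunction",
      "Resource": "arn:aws:lambda:<Region>:<AccountId>:function:<LambdaFunctionName>"
    }
  ]
}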
Note: Replace <LambdaFunctionName> with the name of the function you’ll create later.
Use the following AWS CLI command to create the IAM role for EventBridge Scheduler to invoke the Lambda function:
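The exact command isn't shown in this excerpt; a minimal sketch, assuming the role name EventBridgeSchedulerRole and the trust policy file created above:

aws iam create-role \
  --role-name EventBridgeSchedulerRole \
  --assume-role-policy-document file://TrustPolicyForEventBridgeScheduler.json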
Create the IAM policy and attach it to the previously created IAM role:
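For example (the policy and role names are assumptions that match the previous steps; replace <AccountId> with your account ID):

aws iam create-policy \
  --policy-name PermissionsForEventBridgeScheduler \
  --policy-document file://PermissionsForEventBridgeScheduler.json

aws iam attach-role-policy \
  --role-name EventBridgeSchedulerRole \
  --policy-arn arn:aws:iam::<AccountId>:policy/PermissionsForEventBridgeScheduler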
In this section, we created an IAM role with appropriate trust and permissions policies that allow EventBridge Scheduler to securely invoke Lambda functions from your AWS account. Next, we’ll create another IAM role that defines the permissions that your Lambda function needs to execute its tasks.
Create IAM role for Lambda
The next step is to create an IAM role responsible for allowing Lambda to put records from CloudWatch into your S3 bucket. Complete the following steps to create this role:
Connect to a terminal with the AWS CLI or CloudShell.
Create and write to a JSON file for the IAM trust policy using your preferred text editor:
nano TrustPolicyForLambda.json
Insert the following trust policy into the JSON file:
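A minimal sketch of the trust policy, allowing the Lambda service to assume the role:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "lambda.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}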
Use the following AWS CLI command to create the IAM role for Lambda to insert records from CloudWatch to Amazon S3:
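A sketch of the command; the role name LambdaAuditLogExportRole is an assumption:

aws iam create-role \
  --role-name LambdaAuditLogExportRole \
  --assume-role-policy-document file://TrustPolicyForLambda.json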
Create a file named PermissionsForLambda.json using your preferred text editor:
nano PermissionsForLambda.json
Insert the following permissions into the JSON file:
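The original policy isn't reproduced here; as a sketch, the function needs to create and monitor CloudWatch Logs export tasks and write its own function logs. The actual copy to Amazon S3 is performed by the CloudWatch Logs service, which also requires an S3 bucket policy allowing it to write (not shown). The resource ARNs below are placeholders you may want to scope more tightly:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateExportTask",
        "logs:DescribeExportTasks"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": "arn:aws:logs:<Region>:<AccountId>:*"
    }
  ]
}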
Create the IAM policy and attach it to the previously created IAM role:
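For example (names follow the assumptions used above):

aws iam create-policy \
  --policy-name PermissionsForLambda \
  --policy-document file://PermissionsForLambda.json

aws iam attach-role-policy \
  --role-name LambdaAuditLogExportRole \
  --policy-arn arn:aws:iam::<AccountId>:policy/PermissionsForLambda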
Create ZIP file for the Python Lambda function
To create a file with the code the Lambda function will invoke, complete the following steps:
Create and write to a file named lambda_function.py using your preferred text editor:
nano lambda_function.py
Within the file, insert the following code:
import boto3
import os
import datetime
import logging
import time
from botocore.exceptions import ClientError, NoCredentialsError, BotoCoreError

logger = logging.getLogger()
logger.setLevel(logging.INFO)


def check_active_export_tasks(client):
    """Check for any active export tasks"""
    try:
        response = client.describe_export_tasks()
        active_tasks = [
            task for task in response.get('exportTasks', [])
            if task.get('status', {}).get('code') in ['RUNNING', 'PENDING']
        ]
        return active_tasks
    except ClientError as e:
        logger.error(f"Error checking active export tasks: {e}")
        return []


def wait_for_export_task_completion(client, max_wait_minutes=15, check_interval=60):
    """Wait for any active export tasks to complete"""
    max_wait_seconds = max_wait_minutes * 60
    waited_seconds = 0
    while waited_seconds < max_wait_seconds:
        active_tasks = check_active_export_tasks(client)
        if not active_tasks:
            logger.info("No active export tasks found, proceeding...")
            return True
        logger.info(f"Found {len(active_tasks)} active export task(s). Waiting {check_interval} seconds...")
        for task in active_tasks:
            task_id = task.get('taskId', 'Unknown')
            status = task.get('status', {}).get('code', 'Unknown')
            logger.info(f"Active task ID: {task_id}, Status: {status}")
        time.sleep(check_interval)
        waited_seconds += check_interval
    logger.warning(f"Timed out waiting for export tasks to complete after {max_wait_minutes} minutes")
    return False


def lambda_handler(event, context):
    try:
        required_env_vars = ['GROUP_NAME', 'DESTINATION_BUCKET', 'PREFIX', 'NDAYS']
        missing_vars = [var for var in required_env_vars if not os.environ.get(var)]
        if missing_vars:
            error_msg = f"Missing required environment variables: {', '.join(missing_vars)}"
            logger.error(error_msg)
            return {'statusCode': 400, 'body': {'error': error_msg}}

        GROUP_NAME = os.environ['GROUP_NAME'].strip()
        DESTINATION_BUCKET = os.environ['DESTINATION_BUCKET'].strip()
        PREFIX = os.environ['PREFIX'].strip()
        NDAYS = os.environ['NDAYS'].strip()
        MAX_WAIT_MINUTES = int(os.environ.get('MAX_WAIT_MINUTES', '30'))
        CHECK_INTERVAL = int(os.environ.get('CHECK_INTERVAL', '60'))
        RETRY_ON_CONCURRENT = os.environ.get('RETRY_ON_CONCURRENT', 'true').lower() == 'true'

        if not all([GROUP_NAME, DESTINATION_BUCKET, PREFIX, NDAYS]):
            error_msg = "Environment variables cannot be empty"
            logger.error(error_msg)
            return {'statusCode': 400, 'body': {'error': error_msg}}

        try:
            nDays = int(NDAYS)
            if nDays <= 0:
                raise ValueError("NDAYS must be a positive integer")
        except ValueError as e:
            error_msg = f"Invalid NDAYS value '{NDAYS}': {str(e)}"
            logger.error(error_msg)
            return {'statusCode': 400, 'body': {'error': error_msg}}

        try:
            currentTime = datetime.datetime.now()
            StartDate = currentTime - datetime.timedelta(days=nDays)
            EndDate = currentTime - datetime.timedelta(days=nDays - 1)
            fromDate = int(StartDate.timestamp() * 1000)
            toDate = int(EndDate.timestamp() * 1000)
            if fromDate >= toDate:
                raise ValueError("Invalid date range: fromDate must be less than toDate")
        except (ValueError, OverflowError) as e:
            error_msg = f"Date calculation error: {str(e)}"
            logger.error(error_msg)
            return {'statusCode': 400, 'body': {'error': error_msg}}

        try:
            BUCKET_PREFIX = os.path.join(PREFIX, StartDate.strftime('%Y{0}%m{0}%d').format(os.path.sep))
        except Exception as e:
            error_msg = f"Error creating bucket prefix: {str(e)}"
            logger.error(error_msg)
            return {'statusCode': 500, 'body': {'error': error_msg}}

        logger.info(f"Starting export task for log group: {GROUP_NAME}")
        logger.info(f"Date range: {StartDate.strftime('%Y-%m-%d')} to {EndDate.strftime('%Y-%m-%d')}")
        logger.info(f"Destination: s3://{DESTINATION_BUCKET}/{BUCKET_PREFIX}")

        try:
            client = boto3.client('logs')
        except NoCredentialsError:
            error_msg = "AWS credentials not found"
            logger.error(error_msg)
            return {'statusCode': 500, 'body': {'error': error_msg}}
        except Exception as e:
            error_msg = f"Error creating boto3 client: {str(e)}"
            logger.error(error_msg)
            return {'statusCode': 500, 'body': {'error': error_msg}}

        if RETRY_ON_CONCURRENT:
            logger.info("Checking for active export tasks...")
            active_tasks = check_active_export_tasks(client)
            if active_tasks:
                logger.info(f"Found {len(active_tasks)} active export task(s). Waiting for completion...")
                if not wait_for_export_task_completion(client, MAX_WAIT_MINUTES, CHECK_INTERVAL):
                    return {'statusCode': 409, 'body': {
                        'error': f'Active export task(s) still running after {MAX_WAIT_MINUTES} minutes',
                        'activeTaskCount': len(active_tasks)}}

        try:
            response = client.create_export_task(
                logGroupName=GROUP_NAME,
                fromTime=fromDate,
                to=toDate,
                destination=DESTINATION_BUCKET,
                destinationPrefix=BUCKET_PREFIX
            )
            task_id = response.get('taskId', 'Unknown')
            logger.info(f"Export task created successfully with ID: {task_id}")
            return {'statusCode': 200, 'body': {
                'message': 'Export task created successfully',
                'taskId': task_id,
                'logGroup': GROUP_NAME,
                'fromDate': StartDate.isoformat(),
                'toDate': EndDate.isoformat(),
                'destination': f"s3://{DESTINATION_BUCKET}/{BUCKET_PREFIX}"}}
        except ClientError as e:
            error_code = e.response['Error']['Code']
            error_msg = e.response['Error']['Message']
            if error_code == 'ResourceNotFoundException':
                logger.error(f"Log group '{GROUP_NAME}' not found")
                return {'statusCode': 404, 'body': {'error': f"Log group '{GROUP_NAME}' not found"}}
            elif error_code == 'LimitExceededException':
                logger.error(f"Export task limit exceeded (concurrent task running): {error_msg}")
                active_tasks = check_active_export_tasks(client)
                return {'statusCode': 409, 'body': {
                    'error': 'Cannot create export task: Another export task is already running',
                    'details': error_msg,
                    'activeTaskCount': len(active_tasks),
                    'suggestion': 'Only one export task can run at a time. Please wait for the current task to complete or set RETRY_ON_CONCURRENT=true to auto-retry.'}}
            elif error_code == 'InvalidParameterException':
                logger.error(f"Invalid parameter: {error_msg}")
                return {'statusCode': 400, 'body': {'error': f"Invalid parameter: {error_msg}"}}
            elif error_code == 'AccessDeniedException':
                logger.error(f"Access denied: {error_msg}")
                return {'statusCode': 403, 'body': {'error': f"Access denied: {error_msg}"}}
            else:
                logger.error(f"AWS ClientError ({error_code}): {error_msg}")
                return {'statusCode': 500, 'body': {'error': f"AWS error: {error_msg}"}}
        except BotoCoreError as e:
            error_msg = f"BotoCore error: {str(e)}"
            logger.error(error_msg)
            return {'statusCode': 500, 'body': {'error': error_msg}}
        except Exception as e:
            error_msg = f"Unexpected error creating export task: {str(e)}"
            logger.error(error_msg)
            return {'statusCode': 500, 'body': {'error': error_msg}}
    except Exception as e:
        error_msg = f"Unexpected error in lambda_handler: {str(e)}"
        logger.error(error_msg, exc_info=True)
        return {'statusCode': 500, 'body': {'error': 'Internal server error'}}
Zip the file using the following command:
zip function.zip lambda_function.py
Create Lambda function
Complete the following steps to create a Lambda function:
Connect to a terminal with the AWS CLI or CloudShell.
Run the following command, which references the zip file previously created:
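The original command isn't reproduced here; a sketch, assuming the function name ExportAuditLogsToS3, a Python 3.12 runtime, the Lambda role created earlier, and an RDS audit log group of the form /aws/rds/instance/<db_identifier>/audit (for Aurora, the log group is typically /aws/rds/cluster/<cluster_name>/audit):

aws lambda create-function \
  --function-name ExportAuditLogsToS3 \
  --runtime python3.12 \
  --handler lambda_function.lambda_handler \
  --zip-file fileb://function.zip \
  --role arn:aws:iam::<AccountId>:role/LambdaAuditLogExportRole \
  --timeout 900 \
  --environment "Variables={GROUP_NAME=/aws/rds/instance/<db_identifier>/audit,DESTINATION_BUCKET=<bucket_name>,PREFIX=audit-logs,NDAYS=1}"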
The NDAYS variable in the preceding command will determine the dates of audit logs exported per invocation of the Lambda function. For example, if you plan on exporting logs one time per day to Amazon S3, set NDAYS=1, as shown in the preceding command.
Add concurrency limits to keep executions in control:
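For example, assuming the function name from the previous step:

aws lambda put-function-concurrency \
  --function-name ExportAuditLogsToS3 \
  --reserved-concurrent-executions 2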
Note: Reserved concurrency in Lambda sets a fixed limit on how many instances of your function can run simultaneously, like having a specific number of workers for a task. In this database export scenario, we're limiting it to 2 concurrent executions to prevent overwhelming the database, avoid API throttling, and ensure smooth, controlled exports. This limitation helps maintain system stability, prevents resource contention, and keeps costs in check.
In this section, we created a Lambda function that will handle the CloudWatch log exports, configured its essential parameters including environment variables, and set a concurrency limit to ensure controlled execution. Next, we’ll create an EventBridge schedule that will automatically trigger this Lambda function at specified intervals to perform the log exports.
Create EventBridge schedule
Complete the following steps to create an EventBridge schedule to invoke the Lambda function at an interval of your choosing:
Connect to a terminal with the AWS CLI or CloudShell.
Run the following command:
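A sketch of the EventBridge Scheduler command; the schedule name and ARNs are placeholders, and rate(1 day) matches NDAYS=1 from the Lambda configuration above:

aws scheduler create-schedule \
  --name DailyAuditLogExport \
  --schedule-expression "rate(1 day)" \
  --flexible-time-window '{"Mode": "OFF"}' \
  --target '{"Arn": "arn:aws:lambda:<Region>:<AccountId>:function:ExportAuditLogsToS3", "RoleArn": "arn:aws:iam::<AccountId>:role/EventBridgeSchedulerRole"}'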
The schedule-expression parameter in the preceding command must align with the NDAYS environment variable of the previously created Lambda function (for example, rate(1 day) with NDAYS=1).
This solution provides an efficient, scheduled approach to exporting RDS audit logs to Amazon S3 using AWS Lambda and EventBridge Scheduler. By leveraging these serverless components, we’ve created a cost-effective, automated system that periodically transfers audit logs to S3 for long-term storage and analysis. This method is particularly useful for organizations that need regular, batch-style exports of their database audit logs, allowing for easier compliance reporting and historical data analysis.
While the first solution offers a scheduled, batch-processing approach, some scenarios require a more real-time solution for audit log processing. In our next solution, we'll explore how to create a near real-time audit log processing system using Amazon Data Firehose. This approach allows for continuous streaming of audit logs from RDS to Amazon S3, providing almost immediate access to log data.
Solution 2: Create near real-time audit log processing with Amazon Data Firehose
In this section, we review how to create a near real-time audit log export to Amazon S3 using the power of Data Firehose. With this solution, you can directly load the latest audit log files to an S3 bucket for quick analysis, manipulation, or other purposes.
Create IAM role for CloudWatch Logs
The first step is to create an IAM role responsible for allowing CloudWatch Logs to put records into the Firehose delivery stream (CWLtoDataFirehoseRole). Complete the following steps to create this role:
Connect to a terminal with the AWS CLI or CloudShell.
Create and write to a JSON file for the IAM trust policy using your preferred text editor:
nano TrustPolicyForCWL.json
Insert the following trust policy into the JSON file:
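A minimal sketch of the trust policy; depending on the Region you may see the service principal written as logs.amazonaws.com or logs.<region>.amazonaws.com, and scoping with aws:SourceArn is a good practice:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "logs.amazonaws.com"
      },
      "Action": "sts:AssumeRole",
      "Condition": {
        "StringLike": {
          "aws:SourceArn": "arn:aws:logs:<Region>:<AccountId>:*"
        }
      }
    }
  ]
}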
Create and write to a new JSON file for the IAM permissions policy using your preferred text editor:
nano PermissionsForCWL.json
Insert the following permissions into the JSON file:
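As a sketch, the role only needs permission to put records into the delivery stream; the stream name is an assumption that should match the stream you create later:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "firehose:PutRecord",
        "firehose:PutRecordBatch"
      ],
      "Resource": "arn:aws:firehose:<Region>:<AccountId>:deliverystream/<DeliveryStreamName>"
    }
  ]
}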
Use the following AWS CLI command to create the IAM role for CloudWatch Logs to insert records into the Firehose delivery stream:
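For example, using the role name from this section:

aws iam create-role \
  --role-name CWLtoDataFirehoseRole \
  --assume-role-policy-document file://TrustPolicyForCWL.json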
Create the IAM policy and attach it to the previously created IAM role:
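For example (the policy name is an assumption):

aws iam create-policy \
  --policy-name PermissionsForCWL \
  --policy-document file://PermissionsForCWL.json

aws iam attach-role-policy \
  --role-name CWLtoDataFirehoseRole \
  --policy-arn arn:aws:iam::<AccountId>:policy/PermissionsForCWL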
Create IAM role for Firehose delivery stream
The next step is to create an IAM role (DataFirehosetoS3Role) responsible for allowing the Firehose delivery stream to insert the audit logs into an S3 bucket. Complete the following steps to create this role:
Connect to a terminal with the AWS CLI or CloudShell.
Create and write to a JSON file for the IAM trust policy using your preferred text editor:
nano TrustPolicyForFirehose.json
Insert the following trust policy into the JSON file:
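A minimal sketch of the trust policy, allowing the Firehose service to assume the role; the sts:ExternalId condition scoping it to your account is a common safeguard:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "firehose.amazonaws.com"
      },
      "Action": "sts:AssumeRole",
      "Condition": {
        "StringEquals": {
          "sts:ExternalId": "<AccountId>"
        }
      }
    }
  ]
}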
Create and write to a new JSON file for the IAM permissions using your preferred text editor:
nano PermissionsForFirehose.json
Insert the following permissions into the JSON file:
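A sketch of the S3 permissions Firehose typically needs on the destination bucket (the bucket name is the one created in the prerequisites):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:AbortMultipartUpload",
        "s3:GetBucketLocation",
        "s3:GetObject",
        "s3:ListBucket",
        "s3:ListBucketMultipartUploads",
        "s3:PutObject"
      ],
      "Resource": [
        "arn:aws:s3:::<bucket_name>",
        "arn:aws:s3:::<bucket_name>/*"
      ]
    }
  ]
}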
Use the following AWS CLI command to create the IAM role for Data Firehose to perform operations on the S3 bucket:
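For example, assuming the trust policy was saved as TrustPolicyForFirehose.json and using the role name from this section:

aws iam create-role \
  --role-name DataFirehosetoS3Role \
  --assume-role-policy-document file://TrustPolicyForFirehose.json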
Create the IAM policy and attach it to the previously created IAM role:
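For example (the policy and file names are assumptions matching the previous steps):

aws iam create-policy \
  --policy-name PermissionsForFirehose \
  --policy-document file://PermissionsForFirehose.json

aws iam attach-role-policy \
  --role-name DataFirehosetoS3Role \
  --policy-arn arn:aws:iam::<AccountId>:policy/PermissionsForFirehose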
Create the Firehose delivery stream
Now you create the Firehose delivery stream to allow near real-time transfer of MySQL audit logs from CloudWatch Logs to your S3 bucket. Complete the following steps:
Create the Firehose delivery stream with the following AWS CLI command. Setting the buffer interval and size determines how long your data is buffered before being delivered to the S3 bucket. For more information, refer to AWS documentation. In this example, we use the default values:
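A sketch of the command; the stream name and prefix are assumptions, and the BufferingHints shown are the defaults (5 MB or 300 seconds, whichever comes first):

aws firehose create-delivery-stream \
  --delivery-stream-name rds-audit-log-stream \
  --delivery-stream-type DirectPut \
  --s3-destination-configuration '{
      "RoleARN": "arn:aws:iam::<AccountId>:role/DataFirehosetoS3Role",
      "BucketARN": "arn:aws:s3:::<bucket_name>",
      "Prefix": "audit-logs/",
      "BufferingHints": {"IntervalInSeconds": 300, "SizeInMBs": 5}
    }'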
Wait until the Firehose delivery stream becomes active (this might take a few minutes). You can use the Firehose CLI describe-delivery-stream command to check the status of the delivery stream. Note the DeliveryStreamDescription.DeliveryStreamARN value, to use in a later step:
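For example, assuming the stream name used above; the --query expression simply narrows the output to the status and ARN:

aws firehose describe-delivery-stream \
  --delivery-stream-name rds-audit-log-stream \
  --query 'DeliveryStreamDescription.{Status: DeliveryStreamStatus, ARN: DeliveryStreamARN}'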
After the Firehose delivery stream is in an active state, create a CloudWatch Logs subscription filter. This subscription filter immediately starts the flow of near real-time log data from the chosen log group to your Firehose delivery stream. Make sure to provide the log group name that you want to push to Amazon S3 and properly copy the destination-arn of your Firehose delivery stream:
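A sketch of the subscription filter command; the log group name, filter name, and ARNs are placeholders (an empty filter pattern forwards every log event):

aws logs put-subscription-filter \
  --log-group-name "/aws/rds/instance/<db_identifier>/audit" \
  --filter-name "AuditLogsToFirehose" \
  --filter-pattern "" \
  --destination-arn "arn:aws:firehose:<Region>:<AccountId>:deliverystream/rds-audit-log-stream" \
  --role-arn "arn:aws:iam::<AccountId>:role/CWLtoDataFirehoseRole"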
Your near real-time MySQL audit log solution is now properly configured and will begin delivering MySQL audit logs to your S3 bucket through the Firehose delivery stream.
Clean up
To clean up your resources, complete the following steps (depending on which solution you used):
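The original post does not list the exact commands; as a sketch, and assuming the resource names used in the examples above, cleanup might look like this (skip any resources you did not create, and detach/delete the remaining IAM roles and policies the same way):

# Solution 1
aws scheduler delete-schedule --name DailyAuditLogExport
aws lambda delete-function --function-name ExportAuditLogsToS3
aws iam detach-role-policy --role-name EventBridgeSchedulerRole --policy-arn arn:aws:iam::<AccountId>:policy/PermissionsForEventBridgeScheduler
aws iam delete-role --role-name EventBridgeSchedulerRole

# Solution 2
aws logs delete-subscription-filter --log-group-name "/aws/rds/instance/<db_identifier>/audit" --filter-name "AuditLogsToFirehose"
aws firehose delete-delivery-stream --delivery-stream-name rds-audit-log-stream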
In this post, we’ve presented two solutions for managing Aurora MySQL or RDS for MySQL audit logs, each offering unique benefits for different business use cases.
We encourage you to implement these solutions in your own environment and share your experiences, challenges, and success stories in the comments section. Your feedback and real-world implementations can help fellow AWS users choose and adapt these solutions to best fit their specific audit logging needs.
Laravel Toaster Magic is designed to be the only toaster package you’ll need for any type of Laravel project.
Whether you are building a corporate dashboard, a modern SaaS, a gaming platform, or a simple blog, I have crafted a theme that fits perfectly.
"One Package, Many Themes." — No need to switch libraries just to change the look.
This major release brings 7 stunning new themes, full Livewire v3/v4 support, and modern UI enhancements.
🚀 What’s New?
1. 🎨 7 Beautiful New Themes
I have completely redesigned the visual experience. You can now switch between 7 distinct themes by simply updating your config.
Theme | Config Value | Description
--- | --- | ---
Default | 'default' | Clean, professional, and perfect for corporate apps.
Material | 'material' | Google Material Design inspired. Flat and bold.
iOS | 'ios' | (Fan Favorite) Apple-style notifications with backdrop blur and smooth bounce animations.
Glassmorphism | 'glassmorphism' | Trendy frosted glass effect with vibrant borders and semi-transparent backgrounds.
Neon | 'neon' | (Dark Mode Best) Cyberpunk-inspired with glowing neon borders and dark gradients.
Minimal | 'minimal' | Ultra-clean, distraction-free design with simple left-border accents.
Neumorphism | 'neumorphism' | Soft UI design with 3D embossed/debossed plastic-like shadows.
Want your toasts to pop without changing the entire theme? Enable Gradient Mode to add a subtle "glow-from-within" gradient based on the toast type (Success, Error, etc.).
Works best with Default, Material, Neon, and Glassmorphism themes.
4. 🎨 Color Mode
Don’t want themes? Just want solid colors? Color Mode forces the background of the toast to match its type (Green for Success, Red for Error, etc.), overriding theme backgrounds for high-visibility alerts.
v2.0 transforms Laravel Toaster Magic from a simple notification library into a UI-first experience. Whether you’re building a sleek SaaS (use iOS), a gaming platform (use Neon), or an admin dashboard (use Material), there is likely a theme for you.
KeyPort’s latest creation is a modular upgrade system for standard 58mm Swiss Army Knives. At the heart of the Versa58 are its magnetic mounting plates, which let you easily snap tools on and off. The first modules include a mini flashlight, a retractable pen, a USB-C flash drive, a pocket clip, and a multi-purpose holder for a toothpick, tweezers, or ferro rod.
MySQL 8.4 changed the InnoDB adaptive hash index (innodb_adaptive_hash_index) default from ON to OFF, a major shift after years of it being enabled by default. Note that the MySQL adaptive hash index (AHI) feature remains fully available and configurable.
This blog is me going down the rabbit hole so you don't have to, presenting what you actually need to know. I am sure you're a great MySQL know-it-all and might want to skip this, but DON'T: participate in the bonus task toward the end.
Note that MariaDB already made this change in 10.5.4 (see MDEV-20487), so MySQL is doing nothing new! But why? Let me start with What(?) first!
What is Adaptive Hash Index in MySQL (AHI)
This has been discussed so many times, I’ll keep it short.
We know InnoDB uses B-trees for all indexes. A typical lookup requires traversing 3 – 4 levels: root > internal nodes > leaf page. For millions of rows, this is efficient but not instant.
AHI is an in-memory hash table that sits on top of your B-tree indexes. It monitors access patterns in real-time, and when it detects frequent lookups with the same search keys, it builds hash entries that map those keys directly to buffer pool pages.
So the next time the same search key is hit, instead of a multi-level B-tree traversal you get a single hash lookup from the AHI memory section and a direct jump to the buffer pool page, giving you immediate data access.
FYI, AHI is part of the InnoDB buffer pool.
What is “adaptive” in the “hash index”
InnoDB watches your workload and decides what to cache adaptively based on access patterns and lookup frequency. You don't configure which indexes or keys to hash; InnoDB figures it out automatically. High-frequency lookups? AHI builds entries. Access patterns change? AHI rebuilds the hash. It's a self-tuning optimization that adjusts to your actual runtime behavior and query patterns. That's the adaptive-ness.
Sounds perfect, right? What’s the problem then?
The Problem(s) with AHI
– Overhead of AHI
AHI is optimal for frequently accessed pages, but what about infrequently accessed ones? The lookup path for such a query is:
– Check AHI
– Check the buffer pool
– Read from disk
For infrequent or random access patterns the AHI lookup isn't useful; the query falls through to the regular B-tree path anyway. You spend memory, key comparisons, and CPU cycles on a search that finds nothing.
– There is a latch on the AHI door
AHI is a shared data structure. Although it is partitioned (innodb_adaptive_hash_index_parts), each partition is protected by a latch for controlled access. As concurrency increases, AHI can therefore cause threads to block each other.
– The unpredictability of AHI
This appears to be the main reason for disabling the Adaptive Hash Index in MySQL 8.4. The optimizer needs to predict costs BEFORE the query runs. It has to decide: "Should I use index A or index B?" AHI is built dynamically and depends on how frequently keys are accessed, so the optimizer cannot predict a consistent query path.
The comments in the IndexLookupCost section of cost_model.h explain it better, and I quote:
“With AHI enabled the cost of random lookups does not appear to be predictable using standard explanatory variables such as index height or the logarithm of the number of rows in the index.”
I'd word it like this… the default change of the InnoDB Adaptive Hash Index in MySQL 8.4 was driven by two things. One: the realization that favoring predictability is more important than potential gains in specific scenarios. Two: end users still have the feature available and can enable it if they know (or think) it would help them.
In my production experience, AHI frequently becomes a contention bottleneck under certain workloads: write-heavy, highly concurrent, or when the active dataset is larger than the buffer pool. Disabling AHI ensures consistent response times and eliminates a common source of performance unpredictability.
That brings us to our next segment: what do YOU need to do, and importantly, HOW?
The bottom line: MySQL 8.4 defaults to innodb_adaptive_hash_index=OFF. Before upgrading, verify whether AHI is actually helping your workload or quietly hurting it.
How to track MySQL AHI usage
Using the MySQL CLI
Run the SHOW ENGINE INNODB STATUS command and look for the section that says "INSERT BUFFER AND ADAPTIVE HASH INDEX":
Here:
hash searches: lookups served by AHI
non-hash searches: regular B-tree lookups (after the AHI search fails)
If your hash search rate is significantly higher, AHI is actively helping. If the numbers for AHI are similar or lower, AHI isn’t providing much benefit.
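AHI is controlled by a dynamic variable, so you can check the current setting and toggle it without a restart and then watch how the hash vs. non-hash counters respond; a quick sketch:

-- Check the current setting and the number of AHI partitions
SHOW GLOBAL VARIABLES LIKE 'innodb_adaptive_hash_index%';

-- Toggle at runtime and observe the effect on hash vs non-hash searches
SET GLOBAL innodb_adaptive_hash_index = OFF;
-- SET GLOBAL innodb_adaptive_hash_index = ON;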
Is AHI causing contention in MySQL?
In SHOW ENGINE INNODB STATUS, look for wait events in the SEMAPHORES section:
-Thread X has waited at btr0sea.ic line … seconds the semaphore: S-lock on RW-latch at … created in file btr0sea.cc line …
How about watching a chart that shows AHI efficiency? Percona Monitoring and Management (PMM) makes the visualization easy, so you can decide whether AHI is better for your current workload. Here are 1,000 words for you:
Bonus Task
Think you've got MySQL AHI figured out? Let's do this task:
Scroll down to the "Innodb Adaptive Hash Index" section
Answer this question in comments section: Which MySQL instances are better off without AHI?
Conclusion
AHI is a great idea, and it works until it doesn't. You've got to do the homework: track usage, measure impact, then decide, and make sure you're ready for your upgrade. If your monitoring shows consistently high hash search rates with minimal contention, you're in the sweet spot and AHI should remain enabled. If not, innodb_adaptive_hash_index is good to remain OFF. I recall a recent song verse that suits MySQL AHI well: "I'm a king but I'm far from a saint", "It's a blessing and a curse" (IUKUK).
Have you seen AHI help or hurt in your systems? What’s your plan for MySQL 8.4? I’d love to hear real-world experiences… the database community learns best when we share our war stories.
PS
Open source is beautiful, you can actually read the code (and comments) and understand the “why” behind decisions.
In this guide, we explain 150+ SQL commands in simple words, covering everything from basic queries to advanced functions for 2026. We cover almost every SQL command that exists in one single place, so you never have to go search for anything anywhere else. If you master these 150 commands, you will become an SQL […]
MySQL Studio in Oracle Cloud Infrastructure (OCI) is a unified environment for working with MySQL and HeatWave features through a single, streamlined interface. It brings SQL authoring, AI-assisted chat, and Jupyter-compatible notebooks together with project-based organization to help teams get from database setup to productive analytics faster. The same […]
Database performance is often the bottleneck in web applications. This guide covers comprehensive MySQL optimization techniques from query-level improvements to server configuration tuning.
Understanding Query Execution
Before optimizing, understand how MySQL executes queries using EXPLAIN:
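The query behind the sample plan below isn't shown; here is a hypothetical example in the same shape (orders o joined to users u and order_items oi) just to illustrate reading the output:

EXPLAIN
SELECT o.id, u.name, oi.product_id
FROM orders o
JOIN users u ON u.id = o.user_id
JOIN order_items oi ON oi.order_id = o.id
WHERE o.status = 'pending'
  AND o.created_at > '2024-01-01';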
Key EXPLAIN columns to watch: type (aim for ref or better), rows (lower is better), Extra (avoid "Using filesort" and "Using temporary").
EXPLAIN Output Analysis
+----+-------------+-------+--------+---------------+---------+---------+------------------+------+-------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+-------+--------+---------------+---------+---------+------------------+------+-------------+
| 1 | SIMPLE | o | range | idx_status | idx_... | 4 | NULL | 5000 | Using where |
| 1 | SIMPLE | u | eq_ref | PRIMARY | PRIMARY | 4 | mydb.o.user_id | 1 | NULL |
| 1 | SIMPLE | oi | ref | idx_order | idx_... | 4 | mydb.o.id | 3 | Using index |
+----+-------------+-------+--------+---------------+---------+---------+------------------+------+-------------+
Indexing Strategies
Composite Index Design
Design indexes based on query patterns:
-- Query pattern: Filter by status, date range, sort by date
SELECT * FROM orders
WHERE status = 'pending'
  AND created_at > '2024-01-01'
ORDER BY created_at DESC;

-- Optimal composite index (leftmost prefix rule)
CREATE INDEX idx_orders_status_created ON orders (status, created_at);

-- For queries with multiple equality conditions
SELECT * FROM products
WHERE category_id = 5
  AND brand_id = 10
  AND is_active = 1;

-- Index with most selective column first
CREATE INDEX idx_products_brand_cat_active ON products (brand_id, category_id, is_active);
Covering Indexes
Avoid table lookups with covering indexes:
-- Query only needs specific columns
SELECT id, name, price FROM products
WHERE category_id = 5
ORDER BY price;

-- Covering index includes all needed columns
CREATE INDEX idx_products_covering ON products (category_id, price, id, name);

-- MySQL can satisfy query entirely from index
-- EXPLAIN shows "Using index" in Extra column
Index for JOIN Operations
-- Ensure foreign keys are indexed
CREATE INDEX idx_orders_user_id ON orders (user_id);
CREATE INDEX idx_order_items_order_id ON order_items (order_id);
CREATE INDEX idx_order_items_product_id ON order_items (product_id);

-- For complex joins, index the join columns
SELECT p.name, SUM(oi.quantity) AS total_sold
FROM products p
JOIN order_items oi ON p.id = oi.product_id
JOIN orders o ON oi.order_id = o.id
WHERE o.created_at > '2024-01-01'
GROUP BY p.id
ORDER BY total_sold DESC;

-- Indexes needed:
-- orders(created_at)       - for WHERE filter
-- order_items(order_id)    - for JOIN
-- order_items(product_id)  - for JOIN
Don’t over-index! Each index slows down INSERT/UPDATE operations. Monitor unused indexes with sys.schema_unused_indexes.
Query Optimization Techniques
Avoiding Full Table Scans
-- Bad: Function on indexed column prevents index use
SELECT * FROM users WHERE YEAR(created_at) = 2024;

-- Good: Range query uses index
SELECT * FROM users
WHERE created_at >= '2024-01-01'
  AND created_at < '2025-01-01';

-- Bad: Leading wildcard prevents index use
SELECT * FROM products WHERE name LIKE '%phone%';

-- Good: Trailing wildcard can use index
SELECT * FROM products WHERE name LIKE 'phone%';

-- For full-text search, use FULLTEXT index
ALTER TABLE products ADD FULLTEXT INDEX ft_name (name);
SELECT * FROM products WHERE MATCH(name) AGAINST('phone');
Optimizing Subqueries
-- Bad: Correlated subquery runs for each row
SELECT * FROM products p
WHERE price > (
    SELECT AVG(price) FROM products WHERE category_id = p.category_id
);

-- Good: JOIN with derived table
SELECT p.*
FROM products p
JOIN (
    SELECT category_id, AVG(price) AS avg_price
    FROM products
    GROUP BY category_id
) cat_avg ON p.category_id = cat_avg.category_id
WHERE p.price > cat_avg.avg_price;

-- Even better: Window function (MySQL 8.0+)
SELECT * FROM (
    SELECT *, AVG(price) OVER (PARTITION BY category_id) AS avg_price
    FROM products
) t
WHERE price > avg_price;
Pagination Optimization
-- Bad: OFFSET scans and discards rows
SELECT * FROM products ORDER BY id LIMIT 10 OFFSET 100000;

-- Good: Keyset pagination (cursor-based)
SELECT * FROM products
WHERE id > 100000  -- Last seen ID
ORDER BY id
LIMIT 10;

-- For complex sorting, use deferred join
SELECT p.*
FROM products p
JOIN (
    SELECT id FROM products
    ORDER BY created_at DESC, id DESC
    LIMIT 10 OFFSET 100000
) t ON p.id = t.id;
Server Configuration Tuning
InnoDB Buffer Pool
# my.cnf - For dedicated database server with 32GB RAM
[mysqld]
# Buffer pool should be 70-80% of available RAM
innodb_buffer_pool_size = 24G
innodb_buffer_pool_instances = 24

# Log file size affects recovery time vs write performance
innodb_log_file_size = 2G
innodb_log_buffer_size = 64M

# Flush settings (1 = safest, 2 = faster)
innodb_flush_log_at_trx_commit = 1
innodb_flush_method = O_DIRECT

# Thread concurrency
innodb_thread_concurrency = 0
innodb_read_io_threads = 8
innodb_write_io_threads = 8
-- Find top 10 slowest queries
SELECT
    DIGEST_TEXT,
    COUNT_STAR AS exec_count,
    ROUND(SUM_TIMER_WAIT / 1000000000000, 2) AS total_time_sec,
    ROUND(AVG_TIMER_WAIT / 1000000000, 2) AS avg_time_ms,
    SUM_ROWS_EXAMINED,
    SUM_ROWS_SENT
FROM performance_schema.events_statements_summary_by_digest
ORDER BY SUM_TIMER_WAIT DESC
LIMIT 10;

-- Find tables with most I/O
SELECT
    object_schema,
    object_name,
    count_read,
    count_write,
    ROUND(sum_timer_read / 1000000000000, 2) AS read_time_sec,
    ROUND(sum_timer_write / 1000000000000, 2) AS write_time_sec
FROM performance_schema.table_io_waits_summary_by_table
ORDER BY sum_timer_wait DESC
LIMIT 10;

-- Find unused indexes
SELECT * FROM sys.schema_unused_indexes;

-- Find redundant indexes
SELECT * FROM sys.schema_redundant_indexes;
Real-time Monitoring
-- Current running queries
SELECT id, user, host, db, command, time, state, LEFT(info, 100) AS query
FROM information_schema.processlist
WHERE command != 'Sleep'
ORDER BY time DESC;

-- InnoDB status
SHOW ENGINE INNODB STATUS\G

-- Buffer pool hit ratio (should be > 99%)
SELECT (1 - (
    (SELECT variable_value FROM performance_schema.global_status
     WHERE variable_name = 'Innodb_buffer_pool_reads') /
    (SELECT variable_value FROM performance_schema.global_status
     WHERE variable_name = 'Innodb_buffer_pool_read_requests')
)) * 100 AS buffer_pool_hit_ratio;
Partitioning for Large Tables
-- Range partitioning by date
CREATE TABLE orders (
    id BIGINT AUTO_INCREMENT,
    user_id INT NOT NULL,
    total DECIMAL(10,2),
    status VARCHAR(20),
    created_at DATETIME NOT NULL,
    PRIMARY KEY (id, created_at),
    INDEX idx_user (user_id, created_at)
)
PARTITION BY RANGE (YEAR(created_at)) (
    PARTITION p2022 VALUES LESS THAN (2023),
    PARTITION p2023 VALUES LESS THAN (2024),
    PARTITION p2024 VALUES LESS THAN (2025),
    PARTITION p_future VALUES LESS THAN MAXVALUE
);

-- Queries automatically prune partitions
SELECT * FROM orders
WHERE created_at >= '2024-01-01'
  AND created_at < '2024-07-01';
-- Only scans p2024 partition
Connection Pooling
Application-Level Pooling
// Node.js with mysql2
const mysql = require('mysql2/promise');

const pool = mysql.createPool({
    host: 'localhost',
    user: 'app_user',
    password: 'password',
    database: 'myapp',
    waitForConnections: true,
    connectionLimit: 20,
    queueLimit: 0,
    enableKeepAlive: true,
    keepAliveInitialDelay: 10000
});

// Use pool for queries
async function getUser(id) {
    const [rows] = await pool.execute('SELECT * FROM users WHERE id = ?', [id]);
    return rows[0];
}
Conclusion
MySQL performance optimization is an iterative process. Start by identifying slow queries with the slow query log, analyze them with EXPLAIN, add appropriate indexes, and monitor the results. Server configuration should be tuned based on your workload characteristics and available resources.
Key takeaways:
Design indexes based on actual query patterns
Use EXPLAIN to understand query execution
Avoid functions on indexed columns in WHERE clauses
Elosql is a production-grade Laravel package that intelligently analyzes existing database schemas and generates precise migrations and Eloquent models. It supports MySQL, PostgreSQL, SQLite, and SQL Server, making it perfect for legacy database integration, reverse engineering, and rapid application scaffolding.
Epic Spaceman takes us on a journey through a smartphone’s main processing unit by enlarging a computer chip to the size of Manhattan and flying through it with his digital avatar. It’s mind-blowing when you realize just how much computing power and engineering complexity fits inside a chip the size of a fingernail. For more, check out his collab with MKBHD.