How to Systematically Optimize Laravel Databases in Production

https://techversedaily.com/storage/posts/MzQOunegn6DppLlmQlHsC7Mp4l52cV55BAhrUwPY.png

Your Laravel application felt fast during development.

Pages loaded instantly. Queries returned results in milliseconds. Everything seemed under control.

Then you deployed to production.

Traffic increased. Data grew. Users started complaining: “The site feels slow.”

This is a classic Laravel problem — and no, it’s usually not caused by PHP or Blade templates.

👉 The real bottleneck is almost always the database.

In production, inefficient queries don’t just slow down a page — they compound under load, drain server resources, and quietly kill performance.

In this guide, you’ll learn how to systematically optimize Laravel databases in production using three essential tools:

  1. Database Indexes

  2. EXPLAIN (Query Execution Plans)

  3. MySQL Slow Query Log

Used together, these tools turn guessing into measurable optimization.

Why Database Optimization Matters (More Than You Think)

A few uncomfortable truths:

  • A 1-second delay can reduce conversions by 7%

  • Full table scan cost grows with every row you add, and the pain multiplies under concurrent load

  • What works with 10,000 rows fails miserably at 1 million

  • Laravel doesn’t automatically fix bad queries

Your database doesn’t care how elegant your Eloquent code looks — it only cares how much work it has to do.

Optimization is about reducing work.

Database Indexes: The Foundation of Performance

What Is an Index (Really)?

Without an index, MySQL must scan every row to find matching data.

Think of it like the index at the back of a book: instead of reading every page, you jump straight to the entries you need.

Indexes turn O(n) scans into O(log n) lookups.

When Should You Add Indexes?

Create indexes on columns that are:

  • Used in WHERE clauses and JOIN conditions

  • Used for ORDER BY or GROUP BY

  • Foreign keys referenced by your relationships

Avoid indexing:

  • Columns you rarely filter or sort by

  • Low-cardinality columns on their own (booleans, status flags)

  • Very wide text columns (use full-text indexes instead)

Basic Laravel Example

$orders = Order::where('user_id', $userId)->get();

If orders.user_id is not indexed, MySQL scans the entire table.

Fix via Migration

Schema::table('orders', function (Blueprint $table) {
    $table->index('user_id');
});

Now MySQL can jump straight to relevant rows.
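
In a real project it also helps to make the change reversible and give the index an explicit name, so it is easy to spot in EXPLAIN output. A minimal sketch using the anonymous migration class syntax (the index name here is just an illustration):

use Illuminate\Database\Migrations\Migration;
use Illuminate\Database\Schema\Blueprint;
use Illuminate\Support\Facades\Schema;

return new class extends Migration
{
    public function up(): void
    {
        Schema::table('orders', function (Blueprint $table) {
            // Named index: shows up as orders_user_id_index in EXPLAIN's key column
            $table->index('user_id', 'orders_user_id_index');
        });
    }

    public function down(): void
    {
        Schema::table('orders', function (Blueprint $table) {
            $table->dropIndex('orders_user_id_index');
        });
    }
};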

Composite Indexes (Where Most Apps Win)

Real queries rarely filter on just one column.

$orders = Order::where('user_id', $userId)
               ->where('status', 'paid')
               ->orderBy('created_at', 'desc')
               ->get();

Optimal Index

Schema::table('orders', function (Blueprint $table) {
    $table->index(['user_id', 'status', 'created_at']);
});

⚠️ Index order matters

  • MySQL can use (user_id, status)

  • It cannot efficiently use (status, created_at) alone

Always index columns in the same order your queries filter them.
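
To see the leftmost-prefix rule in practice, here is a quick sketch of what the composite index above can and cannot serve (standard MySQL behavior, shown with the same Order model):

// Served by (user_id, status, created_at): filters follow the index order
Order::where('user_id', $userId)
     ->where('status', 'paid')
     ->get();

// Not served efficiently: the leading column (user_id) is missing,
// so MySQL falls back to scanning. If this query is common,
// give it its own index, e.g. $table->index(['status', 'created_at']).
Order::where('status', 'paid')
     ->orderBy('created_at', 'desc')
     ->get();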

Indexing Mistakes to Avoid

❌ Indexing every column
❌ Guessing instead of measuring
❌ Ignoring write performance
❌ Indexing low-cardinality fields alone

Indexes speed up reads but slow down writes. Balance is key.

EXPLAIN: Understanding What MySQL Actually Does

Writing a query doesn’t mean MySQL executes it the way you expect.

EXPLAIN shows the truth.

Using EXPLAIN in Laravel

Raw SQL (Recommended)

$plan = DB::select(
    'EXPLAIN SELECT * FROM orders WHERE user_id = ?',
    [$userId]
);

dd($plan);

MySQL Console

EXPLAIN SELECT * FROM orders WHERE user_id = 10;

The Most Important EXPLAIN Columns

type (Scan Method)

  • ALL = full table scan (worst) ❌

  • index = full index scan

  • range = index range scan

  • ref / eq_ref = index lookup on matching rows (good)

  • const = single-row lookup (best)

key

  • The index actually used

  • NULL = no index used ❌

rows

  • Estimated rows scanned

  • Smaller is always better

Extra

  • Using filesort → Slow sorting

  • Using temporary → Temp table created

  • Using index → Index-only query (excellent)

Example: Query Without Index

EXPLAIN SELECT * FROM products WHERE category_id = 5;

Result:

  • type = ALL

  • key = NULL

  • rows = 600000

🚨 MySQL scanned the entire table.

After Adding Index

CREATE INDEX idx_category_id ON products (category_id);
EXPLAIN SELECT * FROM products WHERE category_id = 5;

Result:

  • type = ref

  • key = idx_category_id

  • rows = 120

✔ Massive improvement with zero code changes.
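
If you manage schema through migrations, the equivalent index can be added from Laravel instead of the MySQL console (the idx_category_id name simply mirrors the raw SQL above):

Schema::table('products', function (Blueprint $table) {
    $table->index('category_id', 'idx_category_id');
});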

EXPLAIN Golden Rules

  • type = ALL on a large table is a red flag

  • key = NULL means no index was used; add one or rewrite the query

  • Keep rows as low as possible

  • Treat Using filesort and Using temporary as optimization targets

  • Re-run EXPLAIN after every index change

Slow Query Log: Catching Problems in Real Traffic

Some performance issues only appear in production.

That’s where Slow Query Log shines.

What Is Slow Query Log?

It records queries that exceed a time threshold.

Think of it as a black box recorder for your database.

Enable Slow Query Log (Temporary)

SET GLOBAL slow_query_log = 1;
SET GLOBAL long_query_time = 1;
SET GLOBAL log_queries_not_using_indexes = 1;

Queries taking longer than 1 second will be logged.
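
You can confirm the current settings from your Laravel application (or any MySQL client); a quick sketch:

use Illuminate\Support\Facades\DB;

// Returns Variable_name / Value rows for slow_query_log, slow_query_log_file, etc.
dump(DB::select("SHOW VARIABLES LIKE 'slow_query%'"));
dump(DB::select("SHOW VARIABLES LIKE 'long_query_time'"));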

Enable Permanently (Recommended)

Edit MySQL config:

[mysqld]
slow_query_log = 1
slow_query_log_file = /var/log/mysql/mysql-slow.log
long_query_time = 1
log_queries_not_using_indexes = 1

Restart MySQL:

sudo systemctl restart mysql

Sample Slow Query Log Entry

Query_time: 2.94
Rows_examined: 184732
SELECT * FROM orders
WHERE user_id = 123
ORDER BY created_at DESC;

This query scanned 184,732 rows to return a few records.

That’s your optimization target.
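
For a query like this, the fix is usually a composite index that matches both the filter and the sort. A minimal sketch for this particular case:

Schema::table('orders', function (Blueprint $table) {
    // user_id serves the WHERE clause, created_at serves the ORDER BY
    $table->index(['user_id', 'created_at']);
});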

Analyze Slow Queries

Built-in Tool

mysqldumpslow -s t -t 10 /var/log/mysql/mysql-slow.log

Professional Tool (Recommended)

pt-query-digest /var/log/mysql/mysql-slow.log

This gives:

  • Query frequency

  • Total execution time

  • Average latency

  • Rows examined

Laravel-Level Monitoring

Laravel Telescope (Great for QA)

composer require laravel/telescope
php artisan telescope:install
php artisan migrate

View query execution time directly in the dashboard.

Debugbar (Local Only)

composer require barryvdh/laravel-debugbar --dev

Never use Debugbar in production.
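
For lightweight application-level visibility, you can also have Laravel itself flag slow queries. A minimal sketch using DB::listen in a service provider (the 1000 ms threshold and the log message are arbitrary choices, not framework defaults):

use Illuminate\Database\Events\QueryExecuted;
use Illuminate\Support\Facades\DB;
use Illuminate\Support\Facades\Log;

// e.g. in AppServiceProvider::boot()
DB::listen(function (QueryExecuted $query) {
    if ($query->time > 1000) { // $query->time is in milliseconds
        Log::warning('Slow query detected', [
            'sql'      => $query->sql,
            'bindings' => $query->bindings,
            'time_ms'  => $query->time,
        ]);
    }
});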

A Real Optimization Workflow

  1. Enable slow query log

  2. Identify worst queries

  3. Run EXPLAIN

  4. Add or adjust indexes

  5. Refactor queries if needed

  6. Measure before & after

  7. Deploy and monitor

Optimization without measurement is guesswork.
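
A simple way to capture rough “before & after” numbers during testing is Laravel’s query log, which records the execution time of every statement. A rough sketch (for precise numbers, prefer EXPLAIN and the slow query log):

use Illuminate\Support\Facades\DB;

DB::enableQueryLog();

$orders = Order::where('user_id', $userId)
               ->where('status', 'paid')
               ->get();

// Each entry contains the SQL, bindings, and time in milliseconds
dump(DB::getQueryLog());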

Real-World Case: Product Search Optimization

The Problem Query

Product::where('name', 'LIKE', '%laptop%')
       ->where('is_active', 1)
       ->orderBy('created_at', 'desc')
       ->paginate(20);

EXPLAIN showed:

  • Full table scan

  • Filesort

  • 800k rows scanned

Fix 1: Full-Text Search

Schema::table('products', function (Blueprint $table) {
    $table->fullText('name');
});

Product::whereFullText('name', 'laptop')
       ->where('is_active', 1)
       ->paginate(20);

Fix 2: Composite Index

Schema::table('products', function (Blueprint $table) {
    $table->index(['is_active', 'created_at']);
});

Results

| Metric       | Before  | After   |
| ------------ | ------- | ------- |
| Rows scanned | 850,000 | 220     |
| Query time   | 3.1s    | 0.04s   |
| CPU usage    | High    | Minimal |

Same app. Same data.
Just smarter database usage.

Laravel News Links

Laravel Debugbar v4.0.0 is released

https://picperf.io/https://laravelnews.s3.amazonaws.com/featured-images/laravel-debugbar-v4.png

Release Date: January 23, 2025

Package Version: v4.0.0

Summary

Laravel Debugbar v4.0.0 marks a major release with package ownership transferring from barryvdh/laravel-debugbar to fruitcake/laravel-debugbar. This version brings php-debugbar 3.x support and includes several new collectors and improvements for modern Laravel applications.

  • HTTP Client collector for tracking outbound API requests
  • Inertia collector for Inertia.js data tracking
  • Improved Livewire support for versions 2, 3, and 4
  • jQuery removed in favor of modern JavaScript
  • Improved performance and delayed rendering
  • Laravel Octane compatibility for long-running processes
  • And more

What’s New

HTTP Client Collector

This release adds a new collector that tracks HTTP client requests made through Laravel’s HTTP client. The collector provides visibility into outbound API calls, making it easier to debug external service integrations and monitor response times.

Inertia Collector

For applications using Inertia.js, the new Inertia collector tracks shared data and props passed to Inertia components. This helps debug data flow in Inertia-powered applications.

Enhanced Livewire Support

The debugbar now includes improved component detection for Livewire versions 2, 3, and 4. This provides better visibility into Livewire component lifecycle events and data updates across all currently supported Livewire versions.

Laravel Octane Compatibility

This version includes better handling for Laravel Octane and other long-running server processes. The debugbar now properly manages state across requests in persistent application environments.

Cache Usage Estimation

The cache widget now displays estimated byte usage, giving developers better insight into cache memory consumption during request processing.

Debugbar Position and Themes

This version also includes many UI improvements and settings, such as debugbar position, auto-hiding empty collectors, and themes (Dark, Light, Auto).

Breaking Changes

Package Ownership and Installation

The package has moved from barryvdh/laravel-debugbar to fruitcake/laravel-debugbar, requiring manual removal and reinstallation:

composer remove barryvdh/laravel-debugbar --dev --no-scripts

composer require fruitcake/laravel-debugbar --dev --with-dependencies

The namespace has changed from the original structure to Fruitcake\LaravelDebugbar. You’ll need to update any direct references to debugbar classes in your codebase.
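
For example, a facade import might change roughly like this; the new class path shown here is an assumption based on the announced Fruitcake\LaravelDebugbar namespace, so check the upgrade docs for the exact class names:

// Before (v3.x):
// use Barryvdh\Debugbar\Facades\Debugbar;

// After (v4.x), assumed location under the new namespace:
use Fruitcake\LaravelDebugbar\Facades\Debugbar;

Debugbar::info('Order processed');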

Removed Features

Several features have been removed in this major version:

  • Socket storage support has been removed
  • Lumen framework support is no longer included
  • PDO extension functionality has been dropped

Configuration Changes

Default configuration values have been updated, and deprecated configuration options have been removed. Review your config/debugbar.php file and compare it with the published configuration from the new package.

Upgrade Notes

This is not a standard upgrade. You must manually remove the old package and install the new one using the commands shown above. After installation, update any namespace references in your code from the old barryvdh namespace to Fruitcake\LaravelDebugbar.

Review your configuration file for deprecated options and compare with the new defaults. The package maintains compatibility with Laravel 9.x through 12.x. See the upgrade docs for details on upgrading from 3.x to 4.x.

References

Laravel News

LarAgent v1.0 – Production-Ready AI Agents for Laravel

https://blog.laragent.ai/content/images/size/w1200/2026/01/ChatGPT-Image-Jan-22–2026–03_15_35-PM.png

This major release takes LarAgent to the next level – focused on structured responses, reliable context management, richer tooling, and production-grade agent behavior.

Designed for both development teams and business applications where predictability, observability, and scalability matter.

🛠️ Structured Outputs with DataModel

LarAgent introduces DataModel-based structured responses, moving beyond arrays to typed, predictable output shapes you can rely on in real apps.

What it means

  • Type-safe outputs — no more guessing keys or parsing unstructured text
  • Responses conform to a defined schema: you receive a DTO-like object for the response as well as for tool arguments
  • Easier integration with UIs, APIs, and automated workflows
  • Full support for nesting, collections, nullables, union types, and everything you need to define structures of any complexity

Example

use LarAgent\Core\Abstractions\DataModel;
use LarAgent\Attributes\Desc;

class WeatherResponse extends DataModel
{
    #[Desc('Temperature in Celsius')]
    public float $temperature;

    #[Desc('Condition (sunny/cloudy/etc.)')]
    public string $condition;
}

class WeatherAgent extends Agent
{
    protected $responseSchema = WeatherResponse::class;
}

$response = WeatherAgent::ask('Weather in Tbilisi?');
echo $response->temperature;

🗄️ Storage Abstraction Layer

v1.0 introduces a pluggable storage layer for chat history and context, enabling persistent, switchable, and scalable storage drivers.

What’s new

  • Eloquent & SimpleEloquent drivers included
  • Swap between memory, cache, or database without rewriting agents
  • Fallback mechanism with one primary and multiple secondary drivers

class MyAgent extends Agent
{
    protected $history = [
        CacheStorage::class,  // Primary: read first, write first
        FileStorage::class,   // Fallback: used if primary fails on read
    ];
}

🔄 Intelligent Context Truncation

Long chats are inevitable, but hitting token limits shouldn’t be catastrophic. LarAgent now provides smart context management strategies.

Available strategies

  • Sliding Window: drop the oldest messages
  • Summarization: compress context using AI summaries
  • Symbolization: replace old messages with symbolic tags

class MyAgent extends Agent
{
    protected $enableTruncation = true;
    protected $truncationThreshold = 50000;
}

👉 Save on token costs while preserving context most relevant to the current conversation.

🧠 Enhanced Session + Identity Management

Context now supports identity-based sessions, built from the user ID, chat name, agent name, and group. Identity storage holds all the identity keys, which makes the context of any agent available to manage via the Context facade. For example:

Context::of(MyAgent::class)
    ->forUser($userId)
    ->clearAllChats();

✔ Better support for multi-tenant SaaS, shared agents, and enterprise apps.

Generate fully formed custom tool classes with boilerplate and an IDE-friendly structure:

php artisan make:agent:tool WeatherTool

This generates a ready tool with name, description, and handle() stub. Ideal for quickly adding capabilities to your agents.

Now the CLI chat shows tool calls as they happen — invaluable when debugging agent behavior.

You: Find me Laravel queue docs
Tool call: web_search
Tool call: extract_content

Agent: Here’s the documentation…

👉 Easier debugging and more transparency into what your agent actually does.

MCP (Model Context Protocol) tools now support automatic caching.

Why it matters

  • First request fetches tool definitions from servers
  • Subsequent requests use cached definitions
  • Significantly faster agent initialization

Add to .env:

MCP_TOOL_CACHE_ENABLED=true
MCP_TOOL_CACHE_TTL=3600
MCP_TOOL_CACHE_STORE=redis

Clear with:

php artisan agent:tool-clear

✔ Great for production systems where latency matters

📊 Usage Tracking

Track prompt tokens, completion tokens, and usage stats per agent — ideal for cost analysis and billing

$agent = MyAgent::for('user-123');
$usage = $agent->usageStorage();

$totalTokens = $usage->getTotalTokens();

Usage tracking is based on session identity, which means you can check token usage by user, by agent, and/or by chat, allowing you to implement comprehensive statistics and reporting.


⚠️ Breaking Changes

v1.0 includes a few breaking API changes. Make sure to check the migration guide.


🧩 Summary — v1.0 Highlights

Production-focused improvements:

  • 🧱 Structured DataModel outputs
  • 📦 Storage abstraction (Eloquent & persistent drivers)
  • 🤖 Truncation strategies for stable contexts
  • 👥 Identity-aware sessions & Context Facade
  • 🔧 Better tooling & CLI observability
  • ⚡ MCP caching and usage tracking

LarAgent v1.0 is all about reliability, predictability, and scale — turning AI agents into first-class citizens of your Laravel application.

Happy coding! 🚀

Laravel News Links

Makita’s Handheld Powered Snow Thrower

https://s3files.core77.com/blog/images/1790938_81_140409_0_G8pt0a3.jpg

These days, companies like Stihl and Makita sell multi-heads. These are battery-powered motors that can drive a variety of common landscaping attachments, like string trimmers and hedge cutters.

Uniquely, Makita also offers this Snow Thrower attachment:

The business end is 12" wide and can handle a 6" depth of snow at a time. Tiltable vanes on the inside let you control whether you want to throw the snow to the left, to the right or straight ahead. The company says you can clear about five parking spaces with two 18V batteries.

So how well does it work? Seeing is believing. Here’s Murray Kruger of Kruger Construction putting it through its paces:

Core77

Automate the export of Amazon RDS for MySQL or Amazon Aurora MySQL audit logs to Amazon S3 with batching or near real-time processing

https://d2908q01vomqb2.cloudfront.net/887309d048beef83ad3eabf2a79a64a389ab1c9f/2026/01/12/DBBLOG-50081-1-1260×597.png

Audit logging has become a crucial component of database security and compliance, helping organizations track user activities, monitor data access patterns, and maintain detailed records for regulatory requirements and security investigations. Database audit logs provide a comprehensive trail of actions performed within the database, including queries executed, changes made to data, and user authentication attempts. Managing these logs is more straightforward with a robust storage solution such as Amazon Simple Storage Service (Amazon S3).

Amazon Relational Database Service (Amazon RDS) for MySQL and Amazon Aurora MySQL-Compatible Edition provide built-in audit logging capabilities, but customers might need to export and store these logs for long-term retention and analysis. Amazon S3 offers an ideal destination, providing durability, cost-effectiveness, and integration with various analytics tools.

In this post, we explore two approaches for exporting MySQL audit logs to Amazon S3: either using batching with a native export to Amazon S3 or processing logs in real time with Amazon Data Firehose.

Solution overview

The first solution involves batch processing: it uses the built-in audit log export feature in Amazon RDS for MySQL or Aurora MySQL-Compatible to export logs to Amazon CloudWatch Logs, and Amazon EventBridge periodically triggers an AWS Lambda function that creates a CloudWatch export task sending the previous day’s audit logs to Amazon S3. The period (one day) is configurable based on your requirements. This solution is the most cost-effective and practical if you don’t require the audit logs to be available in near real time in an S3 bucket. The following diagram illustrates this workflow.

Overview of solution 1

The other proposed solution uses Data Firehose to immediately process the MySQL audit logs within CloudWatch Logs and send them to an S3 bucket. This approach is suitable for business use cases that require immediate export of audit logs when they’re available within CloudWatch Logs. The following diagram illustrates this workflow.

Overview of solution 2

Use cases

Once you’ve implemented either of these solutions, you’ll have your Aurora MySQL or RDS for MySQL audit logs stored securely in Amazon S3. This opens up a wealth of possibilities for analysis, monitoring, and compliance reporting. Here’s what you can do with your exported audit logs:

  • Run Amazon Athena queries: With your audit logs in S3, you can use Amazon Athena to run SQL queries directly against your log data. This allows you to quickly analyze user activities, identify unusual patterns, or generate compliance reports. For example, you could query for all actions performed by a specific user, or find all failed login attempts within a certain time frame.
  • Create Amazon Quick Sight dashboards: Using Amazon Quick Sight in conjunction with Athena, you can create visual dashboards of your audit log data. This can help you spot trends over time, such as peak usage hours, most active users, or frequently accessed database objects.
  • Set up automated alerting: By combining your S3-stored logs with AWS Lambda and Amazon SNS, you can create automated alerts for specific events. For instance, you could set up a system to notify security personnel if there’s an unusual spike in failed login attempts or if sensitive tables are accessed outside of business hours.
  • Perform long-term analysis: With your audit logs centralized in S3, you can perform long-term trend analysis. This could help you understand how database usage patterns change over time, informing capacity planning and security policies.
  • Meet compliance requirements: Many regulatory frameworks require retention and analysis of database audit logs. With your logs in S3, you can easily demonstrate compliance with these requirements, running reports as needed for auditors.

By leveraging these capabilities, you can turn your audit logs from a passive security measure into an active tool for database management, security enhancement, and business intelligence.

Comparing solutions

The first solution uses EventBridge to periodically trigger a Lambda function. This function creates a CloudWatch Logs export task that sends a batch of log data to Amazon S3 at regular intervals. This method is well suited for scenarios where you prefer to process logs in batches to optimize costs and resources.

The second solution uses Data Firehose to create a real-time audit log processing pipeline. This approach streams logs directly from CloudWatch to an S3 bucket, providing near real-time access to your audit data. In this context, “real-time” means that log data is processed and delivered synchronously as it is generated, rather than being sent in a pre-defined interval. This solution is ideal for scenarios requiring immediate access to log data or for high-volume logging environments.

Whether you choose the near real-time streaming approach or the scheduled export method, you will be well-equipped to manage your Aurora MySQL and RDS for MySQL audit logs effectively.

Prerequisites for both solutions

Before getting started, complete the following prerequisites:

  1. Create or have an existing RDS for MySQL instance or Aurora MySQL cluster.
  2. Enable audit logging:
    1. For Amazon RDS, add the MariaDB Audit Plugin within your option group.
    2. For Aurora, enable Advanced Auditing within your parameter group.

Note: By default, audit logging records all users, which can potentially be costly.

  3. Publish the MySQL audit logs to CloudWatch Logs.
  4. Make sure you have a terminal with the AWS Command Line Interface (AWS CLI) installed, or use AWS CloudShell within your console.
  5. Create an S3 bucket to store the MySQL audit logs using the following AWS CLI command:

aws s3api create-bucket --bucket <bucket_name>

After the command completes, the AWS CLI returns output confirming the new bucket’s location.

Note: Each solution has specific service components which are discussed in their respective sections.

Solution 1: Perform audit log batch processing with EventBridge and Lambda

In this solution, we create a Lambda function to export your audit log to Amazon S3 based on the schedule you set using EventBridge Scheduler. This solution offers a cost-efficient way to transfer audit log files within an S3 bucket in a scheduled manner.

Create IAM role for EventBridge Scheduler

The first step is to create an AWS Identity and Access Management (IAM) role responsible for allowing EventBridge Scheduler to invoke the Lambda function we will create later. Complete the following steps to create this role:

  1. Connect to a terminal with the AWS CLI or CloudShell.
  2. Create a file named TrustPolicyForEventBridgeScheduler.json using your preferred text editor:

nano TrustPolicyForEventBridgeScheduler.json

  3. Insert the following trust policy into the JSON file:

Note: Make sure to amend SourceAccount before saving the file. The condition prevents unauthorized access from other AWS accounts.

  4. Create a file named PermissionsForEventBridgeScheduler.json using your preferred text editor:

nano PermissionsForEventBridgeScheduler.json

  5. Insert the following permissions into the JSON file:

Note: Replace <LambdaFunctionName> with the name of the function you’ll create later.

  6. Use the following AWS CLI command to create the IAM role for EventBridge Scheduler to invoke the Lambda function:
  7. Create the IAM policy and attach it to the previously created IAM role:

In this section, we created an IAM role with appropriate trust and permissions policies that allow EventBridge Scheduler to securely invoke Lambda functions from your AWS account. Next, we’ll create another IAM role that defines the permissions that your Lambda function needs to execute its tasks.

Create IAM role for Lambda

The next step is to create an IAM role responsible for allowing Lambda to put records from CloudWatch into your S3 bucket. Complete the following steps to create this role:

  1. Connect to a terminal with the AWS CLI or CloudShell.
  2. Create and write to a JSON file for the IAM trust policy using your preferred text editor:

nano TrustPolicyForLambda.json

  3. Insert the following trust policy into the JSON file:
  4. Use the following AWS CLI command to create the IAM role for Lambda to insert records from CloudWatch into Amazon S3:
  5. Create a file named PermissionsForLambda.json using your preferred text editor:

nano PermissionsForLambda.json

  6. Insert the following permissions into the JSON file:
  7. Create the IAM policy and attach it to the previously created IAM role:

Create ZIP file for the Python Lambda function

To create a file with the code the Lambda function will invoke, complete the following steps:

  1. Create and write to a file named lambda_function.py using your preferred text editor:

nano lambda_function.py

  2. Within the file, insert the following code:

import boto3
import os
import datetime
import logging
import time
from botocore.exceptions import ClientError, NoCredentialsError, BotoCoreError

logger = logging.getLogger()
logger.setLevel(logging.INFO)
def check_active_export_tasks(client):
    """Check for any active export tasks"""
    try:
        response = client.describe_export_tasks()
        active_tasks = [
            task for task in response.get('exportTasks', [])
            if task.get('status', {}).get('code') in ['RUNNING', 'PENDING']
        ]
        return active_tasks
    except ClientError as e:
        logger.error(f"Error checking active export tasks: {e}")
        return []
def wait_for_export_task_completion(client, max_wait_minutes=15, check_interval=60):
    """Wait for any active export tasks to complete"""
    max_wait_seconds = max_wait_minutes * 60
    waited_seconds = 0
    
    while waited_seconds < max_wait_seconds:
        active_tasks = check_active_export_tasks(client)
        
        if not active_tasks:
            logger.info("No active export tasks found, proceeding...")
            return True
        
        logger.info(f"Found {len(active_tasks)} active export task(s). Waiting {check_interval} seconds...")
        for task in active_tasks:
            task_id = task.get('taskId', 'Unknown')
            status = task.get('status', {}).get('code', 'Unknown')
            logger.info(f"Active task ID: {task_id}, Status: {status}")
        
        time.sleep(check_interval)
        waited_seconds += check_interval
    
    logger.warning(f"Timed out waiting for export tasks to complete after {max_wait_minutes} minutes")
    return False
def lambda_handler(event, context):
    try:
        
        required_env_vars = ['GROUP_NAME', 'DESTINATION_BUCKET', 'PREFIX', 'NDAYS']
        missing_vars = [var for var in required_env_vars if not os.environ.get(var)]
        
        if missing_vars:
            error_msg = f"Missing required environment variables: {', '.join(missing_vars)}"
            logger.error(error_msg)
            return {
                'statusCode': 400,
                'body': {'error': error_msg}
            }
        
        
        GROUP_NAME = os.environ['GROUP_NAME'].strip()
        DESTINATION_BUCKET = os.environ['DESTINATION_BUCKET'].strip()
        PREFIX = os.environ['PREFIX'].strip()
        NDAYS = os.environ['NDAYS'].strip()
        
        
        MAX_WAIT_MINUTES = int(os.environ.get('MAX_WAIT_MINUTES', '30'))
        CHECK_INTERVAL = int(os.environ.get('CHECK_INTERVAL', '60'))
        RETRY_ON_CONCURRENT = os.environ.get('RETRY_ON_CONCURRENT', 'true').lower() == 'true'
        
        
        if not all([GROUP_NAME, DESTINATION_BUCKET, PREFIX, NDAYS]):
            error_msg = "Environment variables cannot be empty"
            logger.error(error_msg)
            return {
                'statusCode': 400,
                'body': {'error': error_msg}
            }
        
        
        try:
            nDays = int(NDAYS)
            if nDays <= 0:
                raise ValueError("NDAYS must be a positive integer")
        except ValueError as e:
            error_msg = f"Invalid NDAYS value '{NDAYS}': {str(e)}"
            logger.error(error_msg)
            return {
                'statusCode': 400,
                'body': {'error': error_msg}
            }
        
        
        try:
            currentTime = datetime.datetime.now()
            StartDate = currentTime - datetime.timedelta(days=nDays)
            EndDate = currentTime - datetime.timedelta(days=nDays - 1)
            
            fromDate = int(StartDate.timestamp() * 1000)
            toDate = int(EndDate.timestamp() * 1000)
            
            
            if fromDate >= toDate:
                raise ValueError("Invalid date range: fromDate must be less than toDate")
                
        except (ValueError, OverflowError) as e:
            error_msg = f"Date calculation error: {str(e)}"
            logger.error(error_msg)
            return {
                'statusCode': 400,
                'body': {'error': error_msg}
            }
        
        
        try:
            BUCKET_PREFIX = os.path.join(PREFIX, StartDate.strftime('%Y{0}%m{0}%d').format(os.path.sep))
        except Exception as e:
            error_msg = f"Error creating bucket prefix: {str(e)}"
            logger.error(error_msg)
            return {
                'statusCode': 500,
                'body': {'error': error_msg}
            }
        
        
        logger.info(f"Starting export task for log group: {GROUP_NAME}")
        logger.info(f"Date range: {StartDate.strftime('%Y-%m-%d')} to {EndDate.strftime('%Y-%m-%d')}")
        logger.info(f"Destination: s3://{DESTINATION_BUCKET}/{BUCKET_PREFIX}")
        
        
        try:
            client = boto3.client('logs')
        except NoCredentialsError:
            error_msg = "AWS credentials not found"
            logger.error(error_msg)
            return {
                'statusCode': 500,
                'body': {'error': error_msg}
            }
        except Exception as e:
            error_msg = f"Error creating boto3 client: {str(e)}"
            logger.error(error_msg)
            return {
                'statusCode': 500,
                'body': {'error': error_msg}
            }
        
        
        if RETRY_ON_CONCURRENT:
            logger.info("Checking for active export tasks...")
            active_tasks = check_active_export_tasks(client)
            
            if active_tasks:
                logger.info(f"Found {len(active_tasks)} active export task(s). Waiting for completion...")
                if not wait_for_export_task_completion(client, MAX_WAIT_MINUTES, CHECK_INTERVAL):
                    return {
                        'statusCode': 409,
                        'body': {
                            'error': f'Active export task(s) still running after {MAX_WAIT_MINUTES} minutes',
                            'activeTaskCount': len(active_tasks)
                        }
                    }
        
        
        try:
            response = client.create_export_task(
                logGroupName=GROUP_NAME,
                fromTime=fromDate,
                to=toDate,
                destination=DESTINATION_BUCKET,
                destinationPrefix=BUCKET_PREFIX
            )
            
            task_id = response.get('taskId', 'Unknown')
            logger.info(f"Export task created successfully with ID: {task_id}")
            
            return {
                'statusCode': 200,
                'body': {
                    'message': 'Export task created successfully',
                    'taskId': task_id,
                    'logGroup': GROUP_NAME,
                    'fromDate': StartDate.isoformat(),
                    'toDate': EndDate.isoformat(),
                    'destination': f"s3://{DESTINATION_BUCKET}/{BUCKET_PREFIX}"
                }
            }
            
        except ClientError as e:
            error_code = e.response['Error']['Code']
            error_msg = e.response['Error']['Message']
            
            
            if error_code == 'ResourceNotFoundException':
                logger.error(f"Log group '{GROUP_NAME}' not found")
                return {
                    'statusCode': 404,
                    'body': {'error': f"Log group '{GROUP_NAME}' not found"}
                }
            elif error_code == 'LimitExceededException':
                
                logger.error(f"Export task limit exceeded (concurrent task running): {error_msg}")
                
                
                active_tasks = check_active_export_tasks(client)
                
                return {
                    'statusCode': 409,
                    'body': {
                        'error': 'Cannot create export task: Another export task is already running',
                        'details': error_msg,
                        'activeTaskCount': len(active_tasks),
                        'suggestion': 'Only one export task can run at a time. Please wait for the current task to complete or set RETRY_ON_CONCURRENT=true to auto-retry.'
                    }
                }
            elif error_code == 'InvalidParameterException':
                logger.error(f"Invalid parameter: {error_msg}")
                return {
                    'statusCode': 400,
                    'body': {'error': f"Invalid parameter: {error_msg}"}
                }
            elif error_code == 'AccessDeniedException':
                logger.error(f"Access denied: {error_msg}")
                return {
                    'statusCode': 403,
                    'body': {'error': f"Access denied: {error_msg}"}
                }
            else:
                logger.error(f"AWS ClientError ({error_code}): {error_msg}")
                return {
                    'statusCode': 500,
                    'body': {'error': f"AWS error: {error_msg}"}
                }
                
        except BotoCoreError as e:
            error_msg = f"BotoCore error: {str(e)}"
            logger.error(error_msg)
            return {
                'statusCode': 500,
                'body': {'error': error_msg}
            }
            
        except Exception as e:
            error_msg = f"Unexpected error creating export task: {str(e)}"
            logger.error(error_msg)
            return {
                'statusCode': 500,
                'body': {'error': error_msg}
            }
    
    except Exception as e:
        
        error_msg = f"Unexpected error in lambda_handler: {str(e)}"
        logger.error(error_msg, exc_info=True)
        return {
            'statusCode': 500,
            'body': {'error': 'Internal server error'}
        }

  3. Zip the file using the following command:

zip function.zip lambda_function.py

Create Lambda function

Complete the following steps to create a Lambda function:

  1. Connect to a terminal with the AWS CLI or CloudShell.
  2. Run the following command, which references the zip file previously created:

The NDAYS variable in the preceding command will determine the dates of audit logs exported per invocation of the Lambda function. For example, if you plan on exporting logs one time per day to Amazon S3, set NDAYS=1, as shown in the preceding command.

  3. Add concurrency limits to keep executions under control:

Note: Reserved concurrency in Lambda sets a fixed limit on how many instances of your function can run simultaneously, like having a specific number of workers for a task. In this database export scenario, we’re limiting it to 2 concurrent executions to prevent overwhelming the database, avoid API throttling, and ensure smooth, controlled exports. This limitation helps maintain system stability, prevents resource contention, and keeps costs in check.

In this section, we created a Lambda function that will handle the CloudWatch log exports, configured its essential parameters including environment variables, and set a concurrency limit to ensure controlled execution. Next, we’ll create an EventBridge schedule that will automatically trigger this Lambda function at specified intervals to perform the log exports.

Create EventBridge schedule

Complete the following steps to create an EventBridge schedule to invoke the Lambda function at an interval of your choosing:

  1. Connect to a terminal with the AWS CLI or CloudShell.
  2. Run the following command:

The schedule-expression parameter in the preceding command must match the NDAYS environment variable set on the previously created Lambda function.

This solution provides an efficient, scheduled approach to exporting RDS audit logs to Amazon S3 using AWS Lambda and EventBridge Scheduler. By leveraging these serverless components, we’ve created a cost-effective, automated system that periodically transfers audit logs to S3 for long-term storage and analysis. This method is particularly useful for organizations that need regular, batch-style exports of their database audit logs, allowing for easier compliance reporting and historical data analysis.

While the first solution offers a scheduled, batch-processing approach, some scenarios require a more real-time solution for audit log processing. In our next solution, we’ll explore how to create a near real-time audit log processing system using Amazon Data Firehose. This approach allows for continuous streaming of audit logs from Amazon RDS to Amazon S3, providing almost immediate access to log data.

Solution 2: Create near real-time audit log processing with Amazon Data Firehose

In this section, we review how to create a near real-time audit log export to Amazon S3 using the power of Data Firehose. With this solution, you can directly load the latest audit log files to an S3 bucket for quick analysis, manipulation, or other purposes.

Create IAM role for CloudWatch Logs

The first step is to create an IAM role responsible for allowing CloudWatch Logs to put records into the Firehose delivery stream (CWLtoDataFirehoseRole). Complete the following steps to create this role:

  1. Connect to a terminal with the AWS CLI or CloudShell.
  2. Create and write to a JSON file for the IAM trust policy using your preferred text editor:

nano TrustPolicyForCWL.json

  3. Insert the following trust policy into the JSON file:
  4. Create and write to a new JSON file for the IAM permissions policy using your preferred text editor:

nano PermissionsForCWL.json

  5. Insert the following permissions into the JSON file:
  6. Use the following AWS CLI command to create the IAM role for CloudWatch Logs to insert records into the Firehose delivery stream:
  7. Create the IAM policy and attach it to the previously created IAM role:

Create IAM role for Firehose delivery stream

The next step is to create an IAM role (DataFirehosetoS3Role) responsible for allowing the Firehose delivery stream to insert the audit logs into an S3 bucket. Complete the following steps to create this role:

  1. Connect to a terminal with the AWS CLI or CloudShell.
  2. Create and write to a JSON file for the IAM trust policy using your preferred text editor:

nano TrustPolicyForFirehose.json

  3. Insert the following trust policy into the JSON file:
  4. Create and write to a new JSON file for the IAM permissions using your preferred text editor:

nano PermissionsForFirehose.json

  5. Insert the following permissions into the JSON file:
  6. Use the following AWS CLI command to create the IAM role for Data Firehose to perform operations on the S3 bucket:
  7. Create the IAM policy and attach it to the previously created IAM role:

Create the Firehose delivery stream

Now you create the Firehose delivery stream to allow near real-time transfer of MySQL audit logs from CloudWatch Logs to your S3 bucket. Complete the following steps:

  1. Create the Firehose delivery stream with the following AWS CLI command. Setting the buffer interval and size determines how long your data is buffered before being delivered to the S3 bucket. For more information, refer to the AWS documentation. In this example, we use the default values:
  2. Wait until the Firehose delivery stream becomes active (this might take a few minutes). You can use the Firehose describe-delivery-stream CLI command to check the status of the delivery stream. Note the DeliveryStreamDescription.DeliveryStreamARN value to use in a later step:

aws firehose describe-delivery-stream --delivery-stream-name <delivery-stream-name>

  3. After the Firehose delivery stream is in an active state, create a CloudWatch Logs subscription filter. This subscription filter immediately starts the flow of near real-time log data from the chosen log group to your Firehose delivery stream. Make sure to provide the log group name that you want to push to Amazon S3 and properly copy the destination-arn of your Firehose delivery stream:

Your near real-time MySQL audit log solution is now properly configured and will begin delivering MySQL audit logs to your S3 bucket through the Firehose delivery stream.

Clean up

To clean up your resources, complete the following steps (depending on which solution you used):

  1. Delete the RDS instance or Aurora cluster.
  2. Delete the Lambda functions.
  3. Delete the EventBridge rule.
  4. Delete the S3 bucket.
  5. Delete the Firehose delivery stream.

Conclusion

In this post, we’ve presented two solutions for managing Aurora MySQL or RDS for MySQL audit logs, each offering unique benefits for different business use cases.

We encourage you to implement these solutions in your own environment and share your experiences, challenges, and success stories in the comments section. Your feedback and real-world implementations can help fellow AWS users choose and adapt these solutions to best fit their specific audit logging needs.


About the authors

Mahek Shah

Mahek is a Cloud Support Engineer I who has worked within the AWS database team for almost 2 years. Mahek is an Amazon Aurora MySQL and RDS MySQL subject matter expert with deep expertise in helping customers implement robust, high-performing, and secure database solutions within the AWS Cloud.

Ryan Moore

Ryan is a Technical Account Manager at AWS with three years of experience, having launched his career on the AWS database team. He is an Aurora MySQL and RDS MySQL subject matter expert that specializes in enabling customers to build performant, scalable, and secure architectures within the AWS Cloud.

Nirupam Datta

Nirupam is a Sr. Technical Account Manager at AWS. He has been with AWS for over 6 years. With over 14 years of experience in database engineering and infra-architecture, Nirupam is also a subject matter expert in the Amazon RDS core systems and Amazon RDS for SQL Server. He provides technical assistance to customers, guiding them to migrate, optimize, and navigate their journey in the AWS Cloud.

Planet for the MySQL Community

Laravel Toaster Magic v2.0 – The Theme Revolution

https://opengraph.githubassets.com/31857b29a46a84e2c6bbc712f5ec663b85c7a1b32aa7df4cc5c9e371828cb100/devrabiul/laravel-toaster-magic/releases/tag/v2.0

🌟 One Package, Infinite Possibilities

Laravel Toaster Magic is designed to be the only toaster package you’ll need for any type of Laravel project.
Whether you are building a corporate dashboard, a modern SaaS, a gaming platform, or a simple blog, I have crafted a theme that fits perfectly.

"One Package, Many Themes." — No need to switch libraries just to change the look.

This major release brings 7 stunning new themes, full Livewire v3/v4 support, and modern UI enhancements.


🚀 What’s New?

1. 🎨 7 Beautiful New Themes

I have completely redesigned the visual experience. You can now switch between 7 distinct themes by simply updating your config.

| Theme         | Config Value    | Description                                                                               |
| ------------- | --------------- | ----------------------------------------------------------------------------------------- |
| Default       | 'default'       | Clean, professional, and perfect for corporate apps.                                       |
| Material      | 'material'      | Google Material Design inspired. Flat and bold.                                            |
| iOS           | 'ios'           | (Fan Favorite) Apple-style notifications with backdrop blur and smooth bounce animations.  |
| Glassmorphism | 'glassmorphism' | Trendy frosted glass effect with vibrant borders and semi-transparent backgrounds.         |
| Neon          | 'neon'          | (Dark Mode Best) Cyberpunk-inspired with glowing neon borders and dark gradients.          |
| Minimal       | 'minimal'       | Ultra-clean, distraction-free design with simple left-border accents.                      |
| Neumorphism   | 'neumorphism'   | Soft UI design with 3D embossed/debossed plastic-like shadows.                             |

👉 How to use:

// config/laravel-toaster-magic.php
'theme' => 'neon', 

2. ⚡ Full Livewire v3 & v4 Support

I’ve rewritten the JavaScript core to support Livewire v3 & v4 natively.

  • No more manually wired custom event listeners.
  • Uses Livewire.on (v3) or standard event dispatching.
  • Works seamlessly with SPA mode and wire:navigate.

// Dispatch from component
$this->dispatch('toastMagic', 
    status: 'success', 
    message: 'User Saved!', 
    title: 'Great Job'
);

3. 🌈 Gradient Mode

Want your toasts to pop without changing the entire theme? Enable Gradient Mode to add a subtle "glow-from-within" gradient based on the toast type (Success, Error, etc.).

// config/laravel-toaster-magic.php
'gradient_enable' => true

Works best with Default, Material, Neon, and Glassmorphism themes.


4. 🎨 Color Mode

Don’t want themes? Just want solid colors? Color Mode forces the background of the toast to match its type (Green for Success, Red for Error, etc.), overriding theme backgrounds for high-visibility alerts.

// config/laravel-toaster-magic.php
'color_mode' => true

5. 🛠 Refactored CSS Architecture

I have completely modularized the CSS.

  • CSS Variables: All colors and values are now CSS variables, making runtime customization instant.
  • Scoped Styles: Themes are namespaced (.theme-neon, .theme-ios) to prevent conflicts.
  • Dark Mode: Native dark mode support via body[theme="dark"].

📋 Upgrade Guide

Upgrading from v1.x to v2.0?

  1. Update Composer:

    composer require devrabiul/laravel-toaster-magic "^2.0"
  2. Republish Assets (Critical for new CSS/JS):

    php artisan vendor:publish --tag=toast-magic-assets --force
  3. Check Config:
    If you have a published config file, add the new options:

    'options' => [
        'theme' => 'default',
        'gradient_enable' => false,
        'color_mode' => false,
    ],
    'livewire_version' => 'v3',

🏁 Conclusion

v2.0 transforms Laravel Toaster Magic from a simple notification library into a UI-first experience. Whether you’re building a sleek SaaS (use iOS), a gaming platform (use Neon), or an admin dashboard (use Material), there is likely a theme for you.

Enjoy the magic! 🍞✨


Laravel News Links

KeyPort Versa58 Swiss Army Upgrade System

https://theawesomer.com/photos/2026/01/keyport_versa58_swiss_army_accessories_t.jpg

KeyPort’s latest creation is a modular upgrade system for standard 58mm Swiss Army Knives. At the heart of the Versa58 are its magnetic mounting plates, which let you easily snap tools on and off. The first modules include a mini flashlight, a retractable pen, a USB-C flash drive, a pocket clip, and a multi-purpose holder for a toothpick, tweezers, or ferro rod.

The Awesomer

MySQL 8.4 disables AHI – Why and What you need to know

MySQL 8.4 changed the InnoDB adaptive hash index (innodb_adaptive_hash_index) default from ON to OFF, a major shift after years of it being enabled by default. Note that the MySQL adaptive hash index (AHI) feature remains fully available and configurable.

This blog is me going down the rabbit hole so you don’t have to, presenting what you actually need to know. I’m sure you’re a great MySQLer who knows it all and might want to skip this, but DON’T; at least take part in the bonus task towards the end.

Note that MariaDB already made this change in 10.5.4 (see MDEV-20487), so MySQL is doing nothing new! But why? Let me start with What(?) first!

What is Adaptive Hash Index in MySQL (AHI)

This has been discussed so many times, I’ll keep it short.

We know InnoDB uses B-trees for all indexes. A typical lookup requires traversing 3 – 4 levels: root > internal nodes > leaf page. For millions of rows, this is efficient but not instant.

AHI is an in-memory hash table that sits on top of your B-tree indexes. It monitors access patterns in real-time, and when it detects frequent lookups with the same search keys, it builds hash entries that map those keys directly to buffer pool pages.

So when next time the same search key is hit, instead of a multi-level B-tree traversal, you get a single hash lookup from the AHI memory section and direct jump to the buffer pool page giving you immediate data access.

FYI, the AHI is part of the InnoDB buffer pool.

What is “adaptive” in the “hash index”

InnoDB watches your workload and decides what to cache adaptively, based on access patterns and lookup frequency. You don’t configure which indexes or keys to hash; InnoDB figures it out automatically. High-frequency lookups? AHI builds entries. Access patterns change? AHI rebuilds the hash. It’s a self-tuning optimization that adjusts to your actual runtime behavior and query patterns. That’s the adaptive-ness.

Sounds perfect, right? What’s the problem then?

The Problem(s) with AHI

– Overhead of AHI

AHI is optimal for frequently accessed pages, but what about infrequently accessed ones? The lookup path for such a query is:

– Check the AHI
– Check the buffer pool
– Read from disk

For infrequent or random access patterns the AHI lookup isn’t useful; the query falls through to the regular B-tree path anyway, after burning memory searches, comparisons, and CPU cycles.

– There is a latch on the AHI door

AHI is a shared data structure. Although it is partitioned (innodb_adaptive_hash_index_parts), it is protected by latches for controlled access, so as concurrency increases, AHI can cause threads to block each other.

– The unpredictability of AHI

This appears to be the main reason for disabling the Adaptive Hash Index in MySQL 8.4. The optimizer needs to predict costs BEFORE the query runs. It has to decide: “Should I use index A or index B?” AHI is built dynamically and depends on how frequently keys are accessed, so the optimizer cannot predict a consistent cost for the lookup path.

The comments in this IndexLookupCost function section of cost_model.h explains it better, and I quote:

“With AHI enabled the cost of random lookups does not appear to be predictable using standard explanatory variables such as index height or the logarithm of the number of rows in the index.”

I encourage you to admire the explanation in the comments here: https://dev.mysql.com/doc/dev/mysql-server/latest/cost__model_8h_source.html

Why AHI Disabled in MySQL 8.4

I’d word it like this… the default change of the InnoDB Adaptive Hash Index in MySQL 8.4 was driven by,
One: the realization that “favoring predictability” is more important than potential gains in specific scenarios, and
Two: the fact that end users still have the feature available and can enable it if they know (or think) it would help them.

In my production experience, AHI frequently becomes a contention bottleneck under certain workloads: write-heavy, highly concurrent, or when the active dataset is larger than the buffer pool. Disabling AHI ensures consistent response times and eliminates a common source of performance unpredictability.

That brings us to the next segment: what is it that YOU need to do? And, importantly, HOW?

The bottom line: MySQL 8.4 defaults to innodb_adaptive_hash_index=OFF. Before upgrading, verify whether AHI is actually helping your workload or quietly hurting it.

How to track MySQL AHI usage

Using the MySQL CLI

Run the SHOW ENGINE INNODB STATUS command and look for the section that says “INSERT BUFFER AND ADAPTIVE HASH INDEX”:

SHOW ENGINE INNODB STATUS\G
8582.85 hash searches/s, 8518.85 non-hash searches/s

Here:
hash searches: Lookups served by AHI
non-hash searches: Regular B-tree lookups (after AHI search fails)

If your hash search rate is significantly higher, AHI is actively helping.
If the numbers for AHI are similar or lower, AHI isn’t providing much benefit.

Is AHI causing contention in MySQL?

In SHOW ENGINE INNODB STATUS, look for wait events in the SEMAPHORES section:

-Thread X has waited at btr0sea.ic line … seconds the semaphore:
S-lock on RW-latch at … created in file btr0sea.cc line …

If SHOW ENGINE INNODB STATUS shows many threads waiting on RW-latches created in btr0sea.cc, that is a sign of Adaptive Hash Index locking contention, and a signal to consider disabling it.
Refer: https://dev.mysql.com/doc/dev/mysql-server/latest/btr0sea_8cc.html

Monitoring AHI for MySQL

How about watching a chart that shows AHI efficiency? Percona Monitoring and Management (PMM) makes it easy to visualize AHI behavior and decide whether it is a win for your current workload; its “Innodb Adaptive Hash Index” panels are worth a thousand words.

Bonus Task

Think you’ve got MySQL AHI figured out? Let’s do this task:

  1. Open pmmdemo.percona.com
  2. Go to Dashboards > MySQL > MySQL InnoDB Details
  3. Scroll down to “Innodb Adaptive Hash Index” section
  4. Answer this question in comments section: Which MySQL instances are better off without AHI?

Conclusion

AHI is a great idea, and it works until it doesn’t. You’ve gotta do the homework: track usage, measure impact, then decide. Make sure you’re ready for your upgrade.
If your monitoring shows consistently high hash search rates with minimal contention, you’re in the sweet spot and AHI should remain enabled. If not, innodb_adaptive_hash_index is fine left OFF.
I recall a recent song verse that suits MySQL AHI well: “I’m a king but I’m far from a saint”, “It’s a blessing and a curse” (IUKUK)

Have you seen AHI help or hurt in your systems? What’s your plan for MySQL 8.4? I’d love to hear real-world experiences… the database community learns best when we share our war stories.

PS

Open source is beautiful; you can actually read the code (and comments) and understand the “why” behind decisions.

Planet for the MySQL Community

150+ SQL Commands Explained With Examples (2026 Update)

https://codeforgeek.com/wp-content/uploads/2026/01/150-SQL-Commands-Explained.png

In this guide, we explain 150+ SQL commands in simple words, covering everything from basic queries to advanced functions for 2026. We cover almost every SQL command that exists in one single place, so you never have to go search for anything anywhere else. If you master these 150 commands, you will become an SQL […]

Planet MySQL